
“I loved that AI, thank you for that.”
Judge Todd Lang’s words echoed through the Maricopa County Superior Court on May 1, 2025, after watching a deepfake video of a murder victim deliver his own impact statement. That endorsement, delivered from the bench of an Arizona courtroom, revealed more about how technology is outpacing prudence than any futurist manifesto could.
Christopher Pelkey, a 37-year-old Army veteran killed in a 2021 road rage incident in Chandler, Arizona, appeared digitally resurrected on screen during the 2025 sentencing of Gabriel Horcasitas, the man convicted of killing him. Pelkey's family had used voice recordings, Facebook videos, and photographs to create a deepfake video in which an AI-generated avatar of Pelkey delivered a victim impact statement from beyond the grave.
“I am a version of Chris Pelkey recreated through AI that uses my picture and my voice profile,” the avatar introduced itself, directly addressing the defendant. In the AI-crafted statement, written by Pelkey’s sister based on what she believed her brother would say, the avatar expressed unexpected compassion: “To Gabriel Horcasitas, the man who shot me, it is a shame we encountered each other that day in those circumstances.” The digital Pelkey even suggested that under different conditions, “we probably could have been friends,” and emphasized forgiveness: “I believe in forgiveness, and a God who forgives. I always have and still do.”
When the lights came back on, Judge Lang handed down the statutory maximum: ten and a half years. Despite praising the video's message of forgiveness, the judge ultimately imposed the harshest possible sentence, noting that while he "heard the forgiveness" and felt it was "genuine," he also acknowledged the family's anger and their request for the maximum penalty. The apparent tension between appreciating the victim's AI-delivered forgiveness and still imposing the maximum sentence (the video, by presenting Pelkey as so graceful and kind, made the heartbroken family's desire for a strong punishment seem understandable rather than vindictive) highlights the complex emotional and legal terrain courts must navigate when synthetic testimony enters the courtroom.
Half the commentators who watched the clip called it moving, half called it ghoulish, and nearly all of them missed the legal point. The real trouble isn't that the Pelkey deepfake tugged at a judge's heartstrings. The trouble is that it crossed a line we never authorized, and the judicial system applauded instead of flinching. Somewhere between the moment Pelkey's sister pressed upload and the moment the judge said he loved it, we decided, without debate, that synthetic people belong in court. That decision will echo long after the appellate briefs in State v. Horcasitas are forgotten.
Admissibility, Ethics, and the Slippery Slope of Synthetic Testimony
Once a courtroom admits a fake, every future litigant gains leverage to demand their own. And because fakes improve every few months while human fact-finders stay the same, we are fast approaching the day when judges and juries face two polished, contradictory deepfakes and must choose which phantom feels truer.
The controversy exploded immediately. Veteran Arizona columnist E.J. Montini put it bluntly: "what has to be taken for testimony in a court of law is something that someone actually said." No matter how close the family's approximation, we can never know what the victim would have actually said under the circumstances. In Montini's view, the avatar's statement was inherently inauthentic: a fiction that "really shouldn't be in a court of law."
Defense attorney Jason Lamm strongly objected along similar lines. "Human beings have thoughts, feelings and emotions. It doesn't matter how much we try to simulate that with AI. It's simply inauthentic," Lamm said. He likened the deepfake to Geppetto manipulating Pinocchio, with the puppet "saying" whatever the puppeteer wishes. Indeed, Lamm noted that the forgiving words spoken by the AI avatar stood in "stark contrast from the reality" of Pelkey's final moments. At trial, testimony indicated that Pelkey's last actions in the confrontation were aggressive ("violently getting out of his car… waving his arms"), not the serene picture of forgiveness presented by the AI video.
Evidence law was built for grainy Polaroids, not generative adversarial networks. Rule 901 of the Federal Rules of Evidence asks only for "evidence sufficient to support a finding" that an item is what its proponent claims. A lay witness can swear a blurry clip shows the defendant and, barring obvious tampering, the judge sends it to the jury. That low bar works when manipulation is expensive. It cracks when anybody with a rented GPU can order a face-swap that fools retinal scans.
Beyond authenticity concerns, the defense fears that the emotional impact of such synthetic testimony will prejudice outcomes. In this case, the judge's effusive praise of the video and decision to impose the maximum sentence have already become fodder for appeal. Within hours of the hearing, Lamm filed a notice of appeal, signaling that a higher court will likely decide whether the sentencing judge improperly relied on the AI-generated video when determining punishment.
Legal ethics experts worry that admitting AI-generated statements in court blurs the line between truth and simulation in a forum that depends on truth. “It’s definitely a disturbing trend,” warns Professor Cynthia Godsoe of Brooklyn Law School, noting that as technology pushes the envelope, courts will face novel challenges: “Does this AI photograph really match the witness’s testimony? Does this video exaggerate the suspect’s height, weight, or skin color?” She cautions that we could “veer even more into fake evidence that maybe people don’t figure out is false.”
At present, rules of evidence are generally lenient during sentencing hearings, which often allow victim impact materials that might not be admissible at trial. This helped clear the way for Pelkey's family to show the deepfake to Judge Lang: there was no jury present, and the usual evidentiary strictures were relaxed. The AI video was permitted precisely because it was offered for a judge's eyes only at sentencing, not to prove guilt before a jury.
That leniency is generous, and it leaves the system ripe for manipulation. A grief-stricken family, convinced it knows a victim's mind, can script sainthood (Pelkey certainly came off as the sort of person who "turns the other cheek"). A prosecutor hungry for deterrence can commission an animation that turns a hazy homicide into a first-degree ambush. A defense team with hedge-fund backing can hire technologists to show the jury a parallel universe where the decedent brandished a phantom pistol. Poor litigants, without a deepfake lab of their own, will stare at the screen and lose.
AI Avatars and the “Clarence Darrow” Problem
Even before AI "resurrected" a victim in Arizona, litigants had begun experimenting with AI-generated avatars in court proceedings. One recent episode in New York captured just how controversial this can be. In March 2025, a litigant named Jerome Dewald attempted to use an AI avatar as his attorney during oral arguments in an appellate case. Dewald, who had no lawyer, had pre-recorded a video of a deepfake "lawyer" avatar presenting his argument.
When his case was called, a "smiling, youthful-looking man" in a suit appeared on the courtroom monitor and began, "May it please the court…" Within seconds the judges grew suspicious. "Hold on, is that counsel for the case?" the presiding judge interjected. Dewald confessed, "I generated that. That's not a real person."
The courtroom was stunned: the "attorney" speaking so eloquently didn't exist. The judges' reaction was swift and scathing. Justice Sallie Manzanet-Daniels immediately ordered the video shut off. "I don't appreciate being misled," she admonished, then allowed Mr. Dewald to continue in his own voice. The appellate panel "chewed him up pretty good," Dewald later admitted.
This case points to what some legal commentators have dubbed “the Clarence Darrow problem.” What if one could create an AI avatar with the skills and persona of the greatest attorneys in history and deploy it in court? If a person with no legal training can simply download an AI advocate โ one that might replicate the eloquence of Darrow or the incisive logic of Louis Brandeis โ would that upend the adversarial system?
Such an AI could, in theory, write and deliver arguments more persuasively than many human attorneys. This raises concerns about fairness and the very nature of legal representation. Would using an AI avatar-lawyer constitute the unauthorized practice of law, or even fraud upon the court? Or might it be seen as a technological aid, not fundamentally different from an earpiece feeding suggestions to a self-represented litigant?
Courts so far have signaled extreme skepticism. In Dewald's case, the judges emphasized that he failed to disclose that his submission was computer-generated, thus breaching the duty of candor to the tribunal. There are also practical concerns: an AI with no legal authority or accountability cannot be held to the professional standards required of real lawyers; it can't be cross-examined, penalized for contempt, or subjected to disciplinary action if it makes false statements. This fundamentally challenges the court's control over proceedings.
Despite these concerns, the allure of AI-assisted advocacy persists. In early 2023, a startup famously tried to have the “world’s first robot lawyer” participate in a U.S. court. The CEO of DoNotPay, a tech company, planned to equip a traffic defendant with smart glasses and an earpiece, through which an AI chatbot would feed the individual lines to challenge a speeding ticket. But when word got out, the experiment was shut down under threats of prosecution: state bar officials warned that if the AI “practiced law” in the courtroom, those involved could be charged and even jailed. The FTC later fined DoNotPay $193,000 for deceptive claims about its AI capabilities.
Beyond legality, there’s a philosophical question: Do we want robot advocates, even if they were allowed? Part of a lawyer’s role is using human judgment to advise and persuade, and part of a court’s role is evaluating the credibility and integrity of those officers of the court. An AI avatar, no matter how logically adept, lacks the personal accountability and ethical intuition of a human advocate. It will say whatever it’s programmed or prompted to say, without conscience.
Deepfakes, Digital Evidence, and the Authentication Crisis
The Pelkey deepfake foreshadows a looming challenge for courts everywhere: How do we authenticate evidence when AI can convincingly manipulate any digital content?
Scholars have begged the advisory committee for a decade to tighten the gate; the committee keeps taking polite notes and postponing action. Soon it won’t matter whether a video is genuine, because every party will assert the right to call it fake, or to call it partly fake, or to call an almost-identical replacement more accurate. At that point, authentication ceases to be a predicate question and becomes the whole trial.
With sophisticated generative AI, a video can depict a person saying or doing something they never did, in a manner virtually indistinguishable from a genuine recording. Unacknowledged AI-generated evidence, fake media presented as real, poses a serious threat to the fact-finding process. Courts fear scenarios where one side introduces a damning video of the opponent that's actually a fabrication. Conversely, even authentic footage might be falsely challenged as a "deepfake" by the opposing side, sowing doubt and confusion about reality. This has been termed the "liar's dividend" of deepfakes: the mere existence of the technology lets liars plausibly claim that real evidence is fake.
Legal experts and rulemakers are actively grappling with this. The issue of deepfake evidence has reached the U.S. Judicial Conference’s Advisory Committee on Evidence Rules. In 2024, the committee considered several proposals from scholars and jurists to amend the Federal Rules to better address AI-generated evidence.
One proposal, by Professor Rebecca Delfino, suggested updating Rule 901 to require a more structured foundation for digital audiovisual evidence, possibly including expert analysis or technological verification of authenticity. Another, by Judge Paul Grimm and Professor Maura Grossman, urged a new framework that would remove the authenticity question from the jury in cases of suspected deepfake and require judges to make a determination in a pretrial hearing.
Outside of rule changes, practical tools and guidelines are emerging. The National Center for State Courts, together with the Thomson Reuters Institute, recently published bench cards for judges on handling AI-generated evidence. These guides help judges by providing structured questions to ask when a piece of evidence might be AI-created: What is the source of this media? Who had custody? Are there detectable signs of manipulation?
There's also a push to develop technology-driven authentication solutions. Researchers and companies are racing to create deepfake detection tools: AI that spots the subtle artifacts of falsified media. But detection lags behind generation. "We aren't at the place right now where we can count on the reliability of automated tools," warns Grossman, noting that computer scientists consider this a "tricky problem." Of course it is: deepfake creators will stay a step ahead of detectors, much as sophisticated steroid-using athletes stay a step ahead of the drug testers.
AI Judges and the Abdication of Human Judgment
A separate but related danger lurks in the temptation to replace or outsource the judge herself. Estonia has reportedly experimented with an AI to resolve small-claims disputes. China boasts of internet courts where algorithms churn out verdicts in seconds. American jurisdictions haven't yet given a machine a gavel, but they flirt with the idea whenever they lean on opaque risk scores at bail or sentencing.
Consider the realm of sentencing and bail. In the United States, many jurisdictions have for years used algorithmic risk assessment tools to inform decisions on pretrial release, sentencing, and parole. These aren’t humanoid robots but rather statistical models (like the COMPAS algorithm) that predict the likelihood of reoffending. Judges consult these scores to gauge a defendant’s risk.
This practice has been hotly debated since 2016, when investigative reporting by ProPublica suggested the COMPAS algorithm was biased against Black defendants. That investigation highlighted a key issue: AI-driven decisions can reflect and magnify societal biases embedded in data.
In the Wisconsin Supreme Court’s State v. Loomis (2016) decision, the court allowed use of COMPAS but cautioned that it should not be the determinative factor in sentencing, partly because its inner workings were opaque and proprietary. This points to a general principle: transparency and accountability are required if we’re to accept algorithmic aids in judicial decisions.
Automation bias makes the bench deferential; once a black-box score pegs a defendant “high risk,” a human judge needs rare courage to overrule the math. Yet the vendor’s code is proprietary, the training data is hidden, and the litigant’s right to confront the true source of the accusation evaporates. We inch toward a synthetic judiciary clothed in trade secrets.
Even in more conservative legal systems, judges have begun leaning on AI for assistance. A striking example occurred in early 2023 in Colombia: Judge Juan Manuel Padilla openly stated that he used ChatGPT to help draft part of his judgment in a case about an autistic child’s medical coverage. In his written ruling, Judge Padilla included the dialogue he had with the AI chatbot, in which he asked questions about the relevant law and received answers that aligned with his decision.
Similarly, in India, Judge Anoop Chitkara of the Punjab & Haryana High Court made headlines in 2023 by asking ChatGPT for input on a bail decision during proceedings. The AI provided a general answer about bail jurisprudence in cases of violent crime, which the judge noted and then proceeded to deny bail, emphasizing that the AI was not deciding the case but offering a “broader picture” on the issue.
These developments raise a fundamental concern: if AI heavily assists or controls judicial decisions, do we risk losing the human conscience and judgment central to justice? The European Union actively warns against full automation of judging. Recital 61 of the AI Act explicitly states “the use of AI tools can support the decision-making power of judges or judicial independence, but should not replace it: the final decision-making must remain a human-driven activity.”
This principle recognizes that judging isn’t just data-processing. It involves empathy, moral reasoning, and value judgments that we expect a human, with lived experience and accountability, to make. Imagine an AI judge determining a prison sentence by aggregating thousands of past cases and optimizing for consistency. It might yield a statistically “average” sentence for a crime, but miss the unique mitigating or aggravating human factors that a human judge would see in that particular case.
Inequality and the Digital Divide
The resource disparity isn't hypothetical. Large firms already license multimillion-dollar language models to aid in discovery, shape voir dire, and crank out closing arguments tuned to a juror's social media footprint. Public defenders share passwords to dated Lexis accounts over coffee. Professor Drew Simshaw calls it the coming two-tier bar: the well-resourced "cyborg lawyer" for those who can pay, the chatbot paralegal for those who cannot.
In the Pelkey case, it appears the victim’s family had the savvy or the means to commission an AI video. If a similar crime happened to a family without such knowledge or means, they likely wouldn’t be able to present an AI avatar of their loved one. Even the opportunity to leverage AI in court becomes a privilege.
Experts warn of an emerging unequal system of justice with regard to AI. Simshaw describes the danger of "an inequitable two-tiered system of legal services" where the wealthy benefit from AI and the poor are stuck with either inferior or no AI assistance. One scenario he outlines is "Superior Human Lawyers vs. Inferior Machines": poor individuals might only be able to afford an AI chatbot or automated counsel, while richer parties hire real attorneys.
Another scenario is "Well-Resourced 'Cyborg' Lawyers vs. Inferior Humans": top law firms augment their attorneys with powerful AI tools (for research, strategy, evidence analytics), essentially creating super-lawyers, whereas under-resourced lawyers (like public defenders or solo practitioners for the indigent) lack access to these tools and thus are outmatched.
Both scenarios widen an already wide gap in outcomes: the side with AI muscle can prepare better, argue better, perhaps even create more persuasive exhibits (like high-quality animations or deepfake reconstructions), while the other side struggles.
We can already see glimmers of this. Large corporate law firms invest in AI-driven legal research platforms and document review algorithms that drastically cut the time needed to find supporting case law or sift through evidence. These tools, often costing tens or hundreds of thousands of dollars in licensing, give an edge in complex litigation, an edge not available to a small-town lawyer or a public legal aid clinic with a tight budget.
Proponents of access-to-justice tech point to self-help portals and eviction-defense chatbots, and they're right to say the floor can rise. But if the ceiling rises faster, if corporate counsel harness sentiment analysis, emotion recognition, and bespoke deepfakes while the indigent get canned ChatGPT motions, the gap widens even as the average quality improves. The Sixth Amendment wasn't meant to promise that everyone receives some kind of assistance; it promises adequate assistance. When the baseline is an algorithm that never forgets a case citation and never sleeps, the constitutional floor for adequacy rises. We will either fund defense accordingly or admit, out loud, that our adversarial model is now pay-to-win.
In the courtroom itself, imagine a criminal trial where the prosecution (backed by government funds) uses an AI system to analyze hours of CCTV footage, identifying the defendant’s face in a crowd with facial recognition โ but the public defender lacks any comparable tech to challenge the identification or to comb through the same footage for exonerating images. Or consider a civil trial where a wealthy plaintiff presents a virtual reality reenactment of an accident, created with the help of AI and forensic experts, giving a visceral demonstration to the jury. The impecunious defendant cannot afford a rebuttal exhibit of similar quality.
Kavyasri Nagumotu, writing in the University of New Hampshire Law Review, has noted that deepfake videos in court could advantage parties with more resources over parties without them. This encapsulates the digital divide issue. When high-tech evidence or methods are introduced, wealthier litigants will be the early adopters, and those without means may not even know such options exist, let alone be able to counter them.
There’s also the risk that if courts start expecting AI-enhanced presentations, then not using them might implicitly signal lesser effort. For instance, if AI could have been used to verify the authenticity of a video but a defendant can’t afford an expert to do so, will the court hold that against them when they challenge the video’s authenticity?
Necessary Guardrails Before the Crash
Guardrails exist, but only if we install them before the crash. I'd put five in place right away.
First, no synthetic exhibit (image, audio, or avatar) should enter evidence without full disclosure of its provenance, method of creation, and chain of custody. That disclosure needs to be sworn, not summarized.
This means a detailed affidavit under penalty of perjury that specifies: which AI tools were used (GPT-4, Midjourney, ElevenLabs), what source materials fed the algorithm (photos, voice samples, text), who operated the software, when the generation occurred, and what prompts or parameters guided the output. Think of it as an enhanced version of the authentication requirements for surveillance footage under Rule 901(b)(9), but with teeth. The disclosure should include any post-generation editing, even minor touch-ups.
We already require similar documentation for DNA evidence chains of custody. Why should digital resurrection get a pass? In Melendez-Diaz v. Massachusetts, the Supreme Court held that forensic analysts must testify about their methods. The same principle should apply here: if you want to summon the digital dead, you testify under oath about your séance.
Second, any party that introduces AI-generated media must pay for an independent forensic expert whom the court appoints and the opposing side can question. If you want the emotional punch, you bankroll the neutral referee.
This cost-shifting mechanism already exists in other contexts. Under Rule 706, courts can appoint neutral experts in complex cases. Several states require the requesting party to pay for independent medical examinations in personal injury cases. Here, the principle is simple: AI-generated evidence is inherently suspect, so the party seeking its admission bears the burden, both financial and evidentiary, of proving its reliability.
The expert wouldn't just verify authenticity; they'd analyze potential biases in the training data, assess whether the output fairly represents the source material, and identify any artifacts or anomalies invisible to lay observers. Think of it as mandatory insurance against manipulation. Professor Delfino has proposed a similar framework where deepfake authentication becomes a question for the court under Rule 104(a), removed from jury consideration entirely.
Third, authenticity challenges should be decided in a pre-trial or pre-hearing forum that does not expose the jury (or the judge in a bench trial) to the contested media until its legitimacy is settled.
This builds on the Daubert standard's gatekeeping function while also going further. Just as judges hold hearings on the admissibility of scientific evidence before trial, they should conduct "digital authenticity hearings" for any AI-generated content. The burden would be on the proponent to establish authenticity by clear and convincing evidence, a higher standard than the current "sufficient to support a finding" under Rule 901.
These hearings would function like reverse motions in limine: instead of fighting to keep evidence out, the proponent would have to win the hearing to get the evidence in. No more ambush deepfakes dropped during closing arguments. The Pelkey video, for instance, would have faced scrutiny about whether the sister's script truly reflected her brother's views, whether the AI voice clone captured his actual speech patterns, and whether the video's emotional impact unfairly prejudiced the proceeding.
Fourth, undisclosed AI representation must be treated as an ethical violation on par with practicing law without a license. A litigant who tries to pass a deepfake lawyer off as real counsel should face sanctions.
Model Rule 8.4 already prohibits conduct involving "dishonesty, fraud, deceit or misrepresentation." Presenting an AI avatar as human counsel violates this rule and more. But we need specific amendments that address AI deception. The sanctions should be severe: monetary penalties starting at $10,000, potential contempt charges, and adverse inference instructions to the jury.
Jerome Dewald's case in New York provides the template. When he tried to pass off "Jim" the AI lawyer as real counsel, the judges should have done more than scold him; they should have referred him for criminal prosecution for fraud upon the court. Several states already criminalize the unauthorized practice of law with penalties including jail time. AI avatars practicing law should trigger the same response.
Fifth, ownership of the decision must stay human. An algorithm may suggest a sentence, never impose one. An algorithm may draft findings, never sign them.
The Sixth Amendment guarantees the right to be sentenced by a judge, not a machine. Yet we're already sliding down this slope. The Wisconsin Supreme Court in Loomis allowed COMPAS risk scores at sentencing but required warnings about their limitations. That's not enough.
We need bright-line rules: AI tools can be used for research or administrative tasks, but every substantive legal decision must be made by a human judge who can articulate their reasoning without reference to algorithmic recommendations. The EU's AI Act gets this right in Recital 61: "final decision-making must remain a human-driven activity."
When Judge Padilla in Colombia included ChatGPT responses in his ruling, he crossed this line. The proper approach is demonstrated by courts that use AI for case management and scheduling but reserve all merits decisions for human judgment. All judges suffer from a degree of "automation bias." They defer to algorithmic recommendations even when they shouldn't. The only cure is prohibition, not disclosure.
None of these rules is a silver bullet. All are better than the vacuum we have now. Critics will call them heavy-handed, expensive, innovation-stifling. They should remember that the point of a trial is not to dazzle spectators with the newest toy but to reach a reliable verdict, then impose a just sentence. Novelty that clouds truth isn't innovation; it's malpractice.
These proposals would require coordinated action. The Judicial Conference should amend the Federal Rules of Evidence by year's end, seemingly an easy lift given that the advisory committee has been studying the issue since 2022. State courts should adopt parallel rules through their own rulemaking processes. The ABA should amend the Model Rules of Professional Conduct to specifically address AI deception.
Congress is already moving. The Deepfakes Accountability Act, reintroduced in 2024, would criminalize certain malicious deepfakes with penalties up to 10 years in prison. The REAL Political Ads Act would require disclosure of AI use in political advertising. These bills should be expanded to cover courtroom use, with mandatory minimum sentences for presenting false AI evidence.
Professional bodies are beginning to formulate responses, too: the National Association of Criminal Defense Lawyers has proposed model jury instructions for AI evidence, the Federal Judicial Center is developing training programs for judges, and at least twelve states have formed AI task forces. Arizona's newly formed steering committee on AI's role in the courts is drafting proposed rules that could serve as a national model.
The cost concerns are real but manageable. Courts already fund expert witnesses in criminal cases through the Criminal Justice Act. The same funding mechanism could cover AI authentication experts. For civil cases, the cost-shifting rule would incentivize parties to think twice before deploying synthetic evidence. If you can't afford to authenticate your deepfake, you can't afford to use it.
Various jurisdictions are already experimenting with solutions. The Northern District of California requires disclosure of any AI use in drafted documents. The Eastern District of Texas has proposed local rules governing AI-generated exhibits. International frameworks like the EU's AI Act classify judicial AI systems as "high-risk," requiring extensive documentation and human oversight.
The Pelkey case should be a wake-up call that future-proofing the justice system is an urgent task. It means training judges to recognize AI manipulations (the Federal Judicial Center could mandate annual training), equipping defense lawyers with resources to challenge AI evidence (public defender offices need funding for digital forensics experts), updating rules like Federal Rules of Evidence 901 and 902 (the advisory committee should fast-track amendments), and setting ethical boundaries (state bars should issue advisory opinions now, not after the damage is done).
It also means grappling with philosophical questions about what justice means in an AI age. Is a sentence influenced by a deepfake victim statement as just as one influenced by a genuine one? How do we value the "voice" of a victim or an accused when that voice can be artificially manufactured? These aren't abstract concerns. They're live issues in courtrooms today, demanding immediate answers.
Truth and Dignity in the Age of Synthetic Justice
The family that loved Pelkey no doubt acted in sincere grief. The judge who praised their ingenuity no doubt meant to offer comfort. But good intentions don’t redeem a defective procedure. We allow hearsay exceptions because we can cross-examine the living witness who lays the foundation. We allow victim photos because a photo, even a heart-wrenching one, depicts what once existed. A deepfake does neither. It edits history and then dares the adversary to prove the negative: prove the dead man wouldn’t have said those words in that tone. That burden is impossible by design.
High-resolution deceit is no longer a national-security worry reserved for summits and spycraft; it has arrived at ground level, wearing a suit, holding forth in criminal court, and receiving judicial applause. The Pelkey deepfake won’t be the last. Prosecutors somewhere have already taken notes on its emotional resonance. Defense strategists have flagged it as precedent. Tech vendors smell a market.
If legislatures don't act quickly, if evidence committees keep tabling the hard questions, we'll wake up in five years to find that every serious trial is a contest of competing deepfakes, that jury instructions include primers on generative adversarial networks, and that the side with the larger compute budget enjoys a presumption of credibility. Justice will become a branch of the entertainment industry, and truth will be a subscription feature.
Better to draw the line while the paint is still wet. Require transparency, fund parity, preserve human judgment, punish deceit, and above all remember that a courtroom, unlike a social media feed, is supposed to be the one place where reality cannot be scrolled away. That principle is older than the Republic, older than common law, older than the first village moot or hundred court where one neighbor accused another of torching a barn.
The technology challenging it is brand new, but the remedy is the same as it ever was: skepticism, cross-examination, and a shared commitment to let the dead rest in peace instead of drafting them into our digital pageantry.