
Introduction
Artificial intelligence (AI) promises to leave no corner of American society unaltered, and the legal system is no exception. Dr. Bateman believes AI will birth an inequitable and disorderly justice system. On the contrary, we believe AI is positioned to be an ultimate equalizer of justice.
AI is cheap and fast intelligence; intelligence facilitates truth-seeking, and truth-seeking is the primary function of our courts. Bateman fears that AI adoption poses an ultimatum to the rule of law, an epistemic weapon of mass destruction. We believe it is functionally identical to previous technologies adopted by courts to improve the truth-seeking process.
AI is a broad term for a broad set of technologies. U.S. courts are similarly decentralized in structure and diverse in function. Through his piece, Bateman conflates distinct parts of the justice system (impact statements, evidence rules, representation) and distinct AI technologies (chatbots, deepfakes, audio cloning) with one another. In our response, we clarify the different forms of courtroom AI and consider the unique operating procedures and rules of different parts of the legal system.
In explaining how AI could be used and in what contexts, we challenge his assertions that it will weaken the foundations of our justice system. Bateman envisages a world where legal criteria and judicial precedent evaporate, while agents of the court cease to operate rationally. Through the examples he offers, Bateman fundamentally misinterprets the court's present frustrations, e.g., a victim impact statement made with prejudice, an unlicensed attorney practicing law, the introduction of falsified evidence, &c., as problems endemic to AI. They are not.
We confront his conclusion that AI is outpacing prudence and reach the conclusion that it is luddism that is imprudent. AI is nothing more than a tool. An auditable, increasingly interpretable, unprecedentedly powerful tool for ascertaining and evaluating the truth. Judges, juries, public defenders, court clerks, self-representing defendants, expert witnesses, and mediators all stand to benefit from AI.
How AI is Being Used in Courtrooms (by the People)
We begin with Bateman's hook, an AI video of murder victim Christopher Pelkey making a victim impact statement (VIS) in May 2025. The video, made by editing an image of Chris, using an AI tool to animate his upper body, and cloning his voice to give him the appearance of speaking, is known as a deepfake. The video itself is intercut with real clips of Chris while he was alive, his likeness addressing and thanking the court and judge, and the commonly replayed two sentences where his likeness indirectly addresses his convicted killer.
We ourselves find this video unsettling, but there is nothing to suggest that an admissible, fully disclosed, and judge-approved impact statement is illegal because it uses AI. Bateman admits as much, saying "the AI…was allowed precisely because it was for a judge's eyes only at sentencing, not to prove guilt before a jury." There are established standards for what can be said during impact statements; there is no rule against statements spoken in the first person, as is done in Pelkey's impact statement. Payne v. Tennessee (1991) explains: VIS is constitutionally permissible so long as it isn't "so unduly prejudicial that it renders the trial fundamentally unfair."
In the status quo, a mother may read a statement on her slain son's behalf to elicit empathy from the judge. Audio-visual statements made with multi-modal AI are not essentially different from other VIS; they're only different insofar as they may be more evocative. Judges routinely sustain objections against non-AI VIS, and they can do so in the case of AI VIS.
Therefore, we reject Bateman's claim that "we are fast approaching the day when judges and juries face two polished, contradictory deepfakes and must choose which phantom feels truer" in the context of impact statements. Bateman uses this example as a springboard to consider the influx of cheap, malicious, AI-generated evidence into the court system. We consider this fear later on.
Bateman next considers the case of Jerome Dewald, an appellant from New York who, in April 2025, used a prerecorded video of an AI avatar to plead his appeal case. The video depicts a "smiling, youthful-looking man" presenting Dewald's statement on his behalf. Similar to the Pelkey impact statement, Dewald's "AI lawyer" likely uses text-to-video technology that turns a text input into a video depicting a person reading the prompt. These personas are not "intelligent" in the way a chatbot (like ChatGPT) listens and responds to a user's voice or text input. For all intents and purposes, they are prerecorded videos.
Within seconds of starting this video, Justice Manzanet-Daniels becomes suspicious and asks Dewald whether the man in the video was his counsel. (Dewald was pro se in this hearing: representing himself.) Dewald replies that the man in the video is not a real person. Manzanet-Daniels then scolds Dewald for the "misleading" video and orders it shut off.
Bateman regards this incident as evidence of AIโs existential risk. We do not.
Perhaps the appellant demonstrated a lack of foresight by not disclosing that the person in the video was AI-generated. Regardless, Dewald did not falsely represent AI as his counsel but introduced the video as his pro se argument. The video even begins with the AI-generated avatar saying "I come here today a humble pro se before a panel of five distinguished justices." The presiding judge then interrupted the video to ask, "Hold on—is this counsel for the case?" (Oral arguments are almost always delivered by admitted counsel or by the self-representing litigant.) It's possible that the judges assumed Dewald was using the AI to act as his counsel, hence the "misleading" admonishment. Therefore, Bateman's claim that Dewald "attempted to use an AI avatar as his attorney" is misleading at best. The AI was merely used to utter his words; not to falsify evidence and introduce it into the court record, and certainly not to act as counsel. In his apology, Dewald said he felt the avatar would be able to present better than he could.
The takeaway is that this court was upset with Dewald's perceived lack of candor, not his use of AI per se. Dewald might as well have submitted a video of himself using a sock puppet and speaking in a falsetto voice. This would not be essentially different, and would be similarly admonished.
There is no evidence this court was signaling "extreme skepticism" about AI avatars per se. Furthermore, Bateman's claim that AI "can't be cross-examined, penalized for contempt, or subjected to disciplinary action if it makes false statements" is similarly irrelevant here, as it conflates true AI legal representation with Dewald's naive video recording, which did not, and could not, provide legal counsel. Bateman's severe pronouncement that the court "should have referred [Dewald] for criminal prosecution for fraud upon the court" evinces a fundamental misunderstanding of how the litigant used AI in court and what upset the judge.
The more consequential use of AI in courtrooms is actual legal representation. Legal representation is a sacrosanct right that demands a non-negotiable standard. The public defender system struggles to live up to this standard. Unlike Bateman, we see AI representation as a panacea for chronically overworked attorneys and inadequately represented clients.
In January 2023, DoNotPay's CEO Joshua Browder made his own attempt, making headlines after tweeting plans to give a defendant in a parking ticket case a "robot lawyer": The defendant would wear smart glasses at their hearing; DNP's AI would listen in on the proceedings and give advice to the defendant on what to say. DNP's "AI system" was likely composed of three parts: transcription via a speech-to-text model, to know what the defendant was hearing; text processing via a large language model, to generate advice for the defendant; and a text-to-speech model, to read aloud the advice the system generated.
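For readers curious how such a loop fits together, here is a minimal sketch in Python. DNP never disclosed its actual stack, so the three function names (`speech_to_text`, `advise`, `text_to_speech`) are hypothetical stand-ins for whatever models the real system used; the bodies are illustrative stubs, not working AI.

```python
# Illustrative sketch of the three-part "robot lawyer" loop: hear -> reason -> speak.
# All three model calls below are hypothetical stubs; DNP's real stack is unknown.

def speech_to_text(audio: bytes) -> str:
    """Stand-in for a speech-to-text model transcribing courtroom audio."""
    return audio.decode("utf-8")  # stub: pretend the audio is already text

def advise(transcript: str) -> str:
    """Stand-in for a large language model that drafts a suggested reply."""
    return f"Suggested reply to: {transcript!r}"

def text_to_speech(advice: str) -> bytes:
    """Stand-in for a text-to-speech model that voices the advice."""
    return advice.encode("utf-8")  # stub: pretend the text is audio

def robot_lawyer_turn(courtroom_audio: bytes) -> bytes:
    # One turn of the loop: transcribe what was just said in court,
    # generate advice, and synthesize it for the defendant's earpiece.
    transcript = speech_to_text(courtroom_audio)
    advice = advise(transcript)
    return text_to_speech(advice)

if __name__ == "__main__":
    print(robot_lawyer_turn(b"How do you plead?").decode("utf-8"))
```

The point of the sketch is architectural: each stage is an off-the-shelf model category, which is why such a system could be assembled cheaply and why, as discussed below, the legal objections concerned the recording and the advice, not the engineering.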
DNP was in the good graces of the U.S. judicial system until this incident: The American Bar Association bestowed on DNP its prestigious Louis M. Brown Award for Legal Access in January 2020. That didn't stop state bar associations from reaching out to Browder once they caught wind of this "experiment," threatening him with district attorney referrals and jail time.
There were two reasons why states weren't excited about this experiment. First, the "robot lawyer" would be transmitting the contents of a court proceeding, in direct violation of California's statewide law on photographing, recording, and broadcasting in legal proceedings. Because DNP's AI would need to receive the recording in real time as it occurred, it would violate the guideline that personal audio recordings "may be allowed only for private notes" and would place the defendant in contempt of court. This law exists irrespective of AI in the loop.
Second, by giving the defendant advice in real time, the robot lawyer would be engaging in the unauthorized practice of law (UPL). States treat UPL as a serious public-protection issue, as their core concern is shielding the public from incompetent or unqualified legal representation. The FTC eventually issued a judgment against DNP in 2024, arguing, among other things, that the product wasn't properly tested and didn't work as advertised.
Courts are increasingly aware of AIโs proliferation, and are rightfully alert to all unauthorized practices of law. Consumers, meanwhile, are increasingly interested in using these services, as they recognize the potential (if not yet actual) value of this type of representation. Consumer protection agencies are aware of companies building products at the intersection of AI and law, and are taking action to protect consumers from low-quality products.
Courts have begun to implicitly address Bateman's philosophical question as to whether we want robot advocates in the courtroom. In the cases of Pelkey and Dewald, the court's posture is a clear pushback on inappropriate use, irrespective of the technology; in the case of DoNotPay, the FTC's and various state bars' admonishments were concerned with the low-quality, unregulated, misleading use of AI, but did not reject the technology per se. None of these incidents demonstrates that the use of AI in the legal context is ipso facto objectionable.
How AI is Being Used in the Courtroom (by the Courts)
The people want AI, and so do the courts. AI, in the form of "classical" machine learning, has existed in our courts for well over a decade. But since the launch of ChatGPT in November 2022, what many consider the start of "AI" as we know it today, judiciaries, both foreign and domestic, have displayed an eagerness to integrate AI into their courtrooms.
Bateman starts with examples of Estonia and China, citing each country's intent to use "AI" to resolve civil and small-claims cases in their courts. Estonia is an odd case. The introduction of an AI judge in Estonia was originally reported by Wired in 2019. Yet, three years later, Estonia's Ministry of Justice issued a press release claiming the Wired report was misleading, saying "there hasn't been that kind of project or even an ambition in Estonian public sector."
China, on the other hand, has rolled out the technology. China's Supreme People's Court (SPC) launched the Mobile Court Program back in March 2019. The program is scoped to low-stakes civil disputes, where both parties voluntarily opt in to a pre-trial hearing run by AI. The "AI" in question is an animated female judge who sits behind a bench and speaks to the litigants. The judge asks pre-recorded, pre-determined questions, the litigants answer, and the judge processes the answers and continues until the questioning ends. At the end, the AI drafts a filing for a human judge, who considers the facts of the case and makes a final decision.
The SPC today operates these courts in limited areas (Hangzhou, Beijing, Guangzhou, Chengdu) and claims an impressive 75 percent reduction in judges' pre-trial document review and a 70 percent success rate for AI judges in generating adjudicatory documents. By now you've likely concluded that this "judge" is not actually adjudicating, i.e., considering competing claims and evidence and ruling on them. Instead, the AI judge acts as a paralegal by collecting case information, which meaningfully speeds up a human judge's ability to rule on the case.
As we argue, much of AI's value in U.S. courtrooms will come from this type of civil case handling: mostly rote, procedural, bureaucratic adjudication. China's model (opt-in participation, low-stakes cases, and humans ultimately making the rulings) supplies the three elements needed to implement this stateside.
In the U.S., the National Center for State Courts' "Landscape of Civil Litigation" study estimates that civil cases where the judge never needs to decide facts (dismissals, settlements, default judgments) made up over 61 percent of 925,000 state court cases. Pew Research estimates that in debt-collection suits, the single largest category of state civil filings, courts issue default judgments 70 percent of the time. AI in this context means document parsing, information extraction, data lookups, and, at most, a friendly AI avatar "judge" that asks litigants questions from a script. This is hardly "giv[ing] a machine a gavel," as Bateman fears.
Other examples from Bateman include a Colombian judge drafting part of his judgment with ChatGPT, and an Indian judge using AI to summarize case law. The takeaways are the same as in China: Judges use AI to acquire information (research) to aid their decision-making process and to automate rote legal paperwork (text processing, form filling, document drafting), without outsourcing their judgments. Since Judge Padilla's use of AI in 2023, Colombia has arguably led the Western Hemisphere in institutionalizing AI in its courts, including: an administrative resolution establishing binding rules for generative AI use in courtrooms; a constitutional ruling affirming lower-court AI use with specific guardrails; and a partnership with UNESCO's AI-In-Justice framework. In India, Judge Chitkara set a meaningful precedent by using AI not to reinforce his own biases, but to check them, as Bateman admits: "However, he [Judge Chitkara] wondered if he was relying too heavily on his own 'consistent view' that allegations involving an unusually high level of cruelty should count against granting bail."
In the U.S., the technology is proliferating in courts at all levels. Since the earliest days of the technology, there has been a steady stream of pro se defendants, and companies enabling pro se defendants, using AI to research their cases and prepare for hearings. Attorneys use products like Clearbrief to win cases, which at ~$200 per month is generally accessible compared to the Harvey AIs of the world. At the state level, Virginia Governor Glenn Youngkin is using AI to scan state regulations to identify redundancies and contradictions. California adopted new rules this July that "embrace policies governing the use of generative artificial intelligence by judges and court employees." Meanwhile, the Orange County Superior Court is using AI language translation to improve courtroom accessibility. In Arizona, the same state where Christopher Pelkey's family presented their AI victim impact statement, the state Supreme Court just adopted AI-generated avatars that communicate information to the public.
By claiming that "once a black-box score pegs a defendant 'high risk,' a human judge needs rare courage to overrule the math" or that using these tools necessarily means "AI heavily assists or controls judicial decisions," Bateman unfairly reduces judges to passengers instead of pilots. The above examples demonstrate that courts that allow and adopt AI are thoughtful and intentional about its use, as they were with all previous technologies before it. Bateman unfairly maligns AI by claiming it is fundamentally deleterious to the judicial process.
There is nothing inherently unique about AI as an information delivery tool being abused as a convenient "single source of truth." All information sources should be scrutinized for accuracy and bias, regardless of AI's involvement. (Hence grade school teachers' two-decade injunction against citing Wikipedia pages as sources rather than reviewing the primary sources cited therein.) It is worth noting, however, that because accurate and unbiased information is AI's core business proposition, there has been no greater area of investment by AI companies over the last three years than hallucination reduction, model alignment, bias and risk mitigation, web search, citation usage, and other truth-maximizing features.
AI in the Future
Bateman's concerns about AI misuse in the judicial process are fourfold. Presentation: the delivery of information; representation: AI attorneys and AI-augmented legal assistance; adjudication: the use of AI and algorithms for levying judgments; and evidence: the fraudulent synthesis thereof. In each bucket we acknowledge the risks and benefits, and conclude that AI is strictly neutral or positive in each of these contexts.
Presentation
Using AI to augment how members of the courtroom deliver information is a use case of marginal utility. We create this bucket because of Bateman's insistence on two major scenarios: Pelkey's impact statement and Dewald's "AI attorney," which are almost entirely harmless applications of presenting information with the help of AI.
If an AI VIS meets the current standards established by our courts, there is no reason to disallow this type of statement. Victims have rights to address the convicted in court proceedings. Inexpensive, realistic AI depictions are an expression of this right: they can give a voice to those without one, such as disabled individuals, or to those having trouble finding their voice, like an emotional victim confronting their perpetrator. It's also possible for a victim to use an AI tool like ChatGPT to draft and edit their impact statement, or for an image-generation tool like Midjourney to prepare a cleaned-up photo of a victim to show during the statement.
Allowing individuals to better articulate themselves with the help of AI is almost certainly net-positive for courts, whether in opening statements, closing arguments, allocution, the factual basis for a plea, parole or appellate hearings, or any other opportunity for an individual to speak on their own behalf. Violations of decorum or court procedure would be enforced just as they are in the status quo. And although speech in court enjoys narrower First Amendment protection, there's no reason to believe AI implicitly triggers prohibited uses that would keep it from the courtroom. For these types of pre-written statements, AI has a place in court.
Representation
We turn to what Bateman coined the "Clarence Darrow" Problem to argue that AI is a solution to unequal representation in legal proceedings. Bateman argues that the creation and use of an "AI avatar with the skills and persona of the greatest attorneys in history"—enter Clarence Darrow—"raises concerns about fairness and the very nature of legal representation." Bateman asks, rhetorically, if we should want robot advocates even if they were allowed. If we are seeking the most fair and impartial justice system, the answer is a resounding "yes!"
In our new world, the disenfranchised will have a "Robo-Darrow" (or RD for short) representing them against the counsel of the most well-off. RD presents a remedy for actual, extant unfairness. At present, well-endowed litigants can purchase the best-trained and most persuasive litigators to help them get away with murder (literally). Meanwhile, if you're an "impecunious" defendant, you must rely on litigators who are overworked, underpaid, and, despite their best efforts, often less persuasive. This is the not-so-adversarial system Bateman is so concerned with upending.
Before now, no one has been represented by AI in court. In this way, an AI attorney certainly raises questions about the nature of legal representation. Bateman expresses concern that RD "cannot be held to the professional standards required of [human] lawyers—it can't be cross-examined, penalized for contempt, or subjected to disciplinary action if it makes false statements." These concerns are certainly valid but ultimately remediable.
The first claim, that RD could not be held to professional standards, is demonstrably false. As evidenced by the technology's gradual adoption in low-stakes situations, U.S. courts are already holding it, and those who use it, to high standards. Moreover, AI has come a long way in the last few years: models can now autonomously complete 83 percent of human-level legal tasks in recent law benchmarks. By its very nature, AI can be cross-examined (and not just in legal matters) in a way that illustrates its full set of capabilities and biases. There is no reason to believe an AI agent trained on all available legal corpora couldn't develop a novel legal argument, or that it would fall apart under cross-examination. As to the second claim, concerns about contempt and false statements are instantly remedied by the presence of human counsel who assumes responsibility for the statements made by their AI co-counsel.
At maturity, Robo-Darrow isn't just qualified but practically free. AI is already cheap enough for use by pro se litigants and under-resourced public defenders. Across drafting (documents, statements, briefs), reviewing (documents, filings), and note-taking (interviews, cross-examination), a consumer AI like ChatGPT, at $200 per month, could trivially support an individual on these tasks. AI has gotten orders of magnitude smarter and cheaper over the last three years, meaning our future robot lawyer will be not only highly qualified but absolutely affordable.
Bateman paints two pictures in his consideration of Robo-Darrow: Cyborg Lawyers vs. Inferior Humans, and Superior Human Lawyers vs. Inferior Machines. The discerning reader will notice immediately that this presentation of possibilities is a false dichotomy. Two other obvious combinations of cyborg and human lawyers exist: Superior Human Lawyers vs. Superior Human Lawyers, and Cyborg Lawyers vs. Cyborg Lawyers. The former is cost-prohibitive; the top 100 law firms billed $961 per lawyer-hour on average in 2023. Robo-Darrow, on the other hand, is indefatigable, more knowledgeable, and, if not already, will be more creative and persuasive than his human counterparts.
The real question is: Do we want to embrace a future in which two Robo-Darrows go at it while a human audience watches on and decides how to rule based on hearing the best possible arguments for both sides of a dispute? We answer unequivocally in the affirmative.
Bateman says that the Sixth Amendment doesn't merely promise that everyone receives some kind of assistance but adequate assistance. America has long failed—despite the dogged efforts of under-resourced defense attorneys—to deliver on this rather modest promise. AI will help us realize the Sixth Amendment's aspirational guarantee of "the Assistance of Counsel for his defence" by democratizing excellent legal representation.
Adjudication
Now we address the concerns about AI qua adjudicator instead of qua litigator. We have seen Minority Report and militate against the imagined world in which supercomputers issue arrest warrants based on near-certain predictions of the future.
We wholeheartedly agree that human judges must be responsible for meting out punishment, and must exercise their own reason to do so. Pace Bateman, we contend that this human faculty is aided by access to more information, which AI can help furnish. Judges already rely on their own and their clerks' interpretations of statutory and case law. Relying on AI to point to relevant precedents and offer analysis thereof is not fundamentally different.
Consulting an AI chatbot trained on the corpus of legal precedent does not make a judge inferior but superior by facilitating dialectical truth-seeking, which dates back to Socrates' elenchus and finds modern expression in the scientific method.
Bateman raises the concern that "judging isn't just data-process [but] involves empathy, moral reasoning, and value judgments that we expect a human, with lived experience and accountability, to make." We have already stipulated that AI would be used to complement the judge's reasoning process, not supplant his decision-making ability and responsibility. Still, it's worth homing in on Bateman's point about "lived experience": the fashionable way to describe knowledge induced from observation. AI is better at this kind of learning than any person, given its access to data, i.e., innumerable observations. (What really distinguishes human cognition from machine learning is our ability to deduce necessary conclusions from a set of premises without having or needing to witness the imagined syllogism carried to its logical conclusion.) This is precisely what makes AI an invaluable aid to a human adjudicator.
Evidence
Finally, Bateman calls for updating rules like Federal Rules of Evidence 901 and 902 and for state bars to set ethical boundaries preemptively. President Donald Trump's AI Action Plan calls on the Justice Department to "issue guidance to agencies that engage in adjudications to explore adopting a deepfake standard similar to the proposed Federal Rules of Evidence Rule 901(c)." This kind of anticipatory rulemaking characterizes an a priori regulatory system; the common law tradition, by contrast, is all about the emergence of precedent, standards, and rules from decided cases. We should not allow AI to undermine the a posteriori approach of this ancient legal tradition.
In the worst-case scenario, falsified AI evidence, indistinguishable from actual evidence, is created and admitted into courtrooms with impunity. (This, of course, is perjury: a felony in most jurisdictions.) In this doomsday scenario, anyone (even the poorest defendants) could "make up" any evidence they chose, bending reality to their whim. It goes without saying this would severely cripple juries' ability to discern fact from fiction and properly assign blame.
Bateman's belief in the "liar's dividend" betrays his confidence in such a scenario. He endorses Professor Rebecca Delfino's proposal to update Rule 901 to "require a more structured foundation for digital audiovisual evidence…including expert analysis or technological verification of authenticity." If such verification is possible, then AI is ipso facto unable to generate multimodal content that is indistinguishable from reality. If it were really indistinguishable, then no number of experts, with any tool, could distinguish it! In that case, we return to a post-truth society, where evidence can no longer be verified.
This conceit accepts that AI really is disanalogous and its offensive (deceptive) capabilities will somehow outstrip its defensive (verifying) capabilities—that deepfakes enjoy a "liar's dividend."
Even so, we would not be entering new jurisprudential territory. Such a post-truth situation would return us to the legal system that operated since time immemorial: one without forensic evidence, photographs, or video and audio records. The system of the past century or so (a Goldilocks zone between no technology and technology so advanced that simulation is indistinguishable from reality) is hardly a flawless Golden Mean: it remains replete with conflicting expert analysis of DNA, video, audio, and other technologically sophisticated forms of evidence. We would not be turning away from a system that had eliminated Type I and Type II errors in judicial proceedings; we would merely pass from one non-omniscient era to another.
Still, people break the law, which is why we have jurisprudence in the first place. Certainly, as the expected benefit of breaking the law—in this case, using AI to lie at trial—increases, and as the anticipated cost of doing so decreases, people will become more likely to do so. In this case, Bateman claims those defendants "with hedge-fund backing can hire technologists to show the jury a parallel universe where the decedent brandished a phantom pistol [while] poor litigants, without a deepfake lab of their own, will stare at the screen and lose."
We submit to you a more likely scenario, based on the hyper-competitive nature of the AI market, the increasing quality of the technology, and its decreasing price: a deep-pocketed and unethical defense team presents "evidence" that appears true, while the poorer and equally unethical plaintiff presents "evidence" that appears just as veridical. Both pay OpenAI to generate videos that display opposite recordings of the scene of the crime. We have, as is so often the case in legal proceedings, an evidentiary stalemate.
Bateman himself acknowledges that there is "a push for developing technology-driven authentication solutions," but reaches the unsubstantiated conclusion that "deepfake creators are going to stay a step ahead of detectors." This conclusion assumes too much; there already exist companies and services dedicated to detecting and flagging AI-generated content.
Alongside generic content-detection offerings from companies like Google and Microsoft, there is a crop of startups, like Reality Defender, dedicated to text, image, video, and audio detection, especially in sensitive environments like the courtroom. Pindrop and DeepMedia, two other AI detection startups, already work extensively with government organizations on sensitive content-detection use cases. The claim that AI is always one step ahead of these detectors is context-dependent and generally incorrect: while detecting AI-generated text is notably difficult, leading companies' detection rates for AI-generated voice and images exceed 90 percent. The rule of thumb is that, once a model is released, these companies must accumulate samples in order to train their own adversarial models, a lag of weeks or even months. Nevertheless, it is not a foregone conclusion that AI-generated content is perpetually unidentifiable, nor that companies will fail to detect it. As one layer in the stack of mitigations against fraudulent AI evidence in courtrooms, detectors help flag the most obvious fakes.
Conclusion
We must consider what the actual use cases of AI are in the legal environment. We posit that the most likely legal implementations of AI are the following: legal analysis, automated representation, and synthetic victim impact statements. For the foregoing reasons, we believe that these uses will be a democratizing force that strengthens the adversarial legal system. Ne'er-do-wells who would seek to introduce fake, AI-generated "evidence" do so at their peril, as they will be scrutinized by white-hat, truth-seeking, lie-detecting AI. If AI technology reaches a point where falsified content is literally impossible to detect, then no change to evidentiary rules can protect us from such an epistemic nightmare. But we have not reached that point, and are unlikely to.