Artificial intelligence (AI) is rapidly transforming the legal landscape, offering tools for research, drafting, document review, jury selection, and even case outcome prediction. Used correctly and with robust safeguards, AI can help attorneys and litigants streamline litigation projects. When misused, however, or deployed without adequate quality control measures in place, AI can pose serious risks and create dangerous pitfalls. Recently, for instance, dozens of attorneys have faced sanctions in federal court for filing briefs containing fake (“hallucinated”) case citations or incorrect statements of the law generated by AI. And the Government Accountability Office (GAO) has admonished protesters for using AI to draft bid protests for similar reasons. Outside the parameters of legal research and drafting, litigants are now using AI for more novel and potentially even more troubling purposes. One such example is a recent Arizona criminal case in which the family of a crime victim offered an AI-generated victim impact statement at sentencing. The use of AI to provide a witness statement raises serious concerns about the accuracy of the information provided and the fairness of the proceedings, prompting the question: Does the use of AI in litigation represent true artificial intelligence, or is it artificial interference preventing a just outcome?

Recent Applications

Today, lawyers and litigants commonly use AI to streamline otherwise time-consuming legal tasks like legal research, brief writing, and synthesizing or summarizing voluminous document productions. In addition, AI tools are now being integrated into client-facing interfaces such as chatbots for legal intake. In theory, these tools can help assess the merits of potential cases and streamline client onboarding. As noted above, however, AI remains a flawed practice companion. Beyond hallucinated case citations and incorrect legal analysis, the use of AI introduces data privacy concerns and risks misadvising individuals through overly generalized conclusions. Ensuring human oversight in these interactions remains critical to maintaining legal integrity.

Earlier this year, an Arizona criminal case may have shifted the legal-AI landscape dramatically. In 2021, in Chandler, Arizona, Christopher Pelkey was shot by Gabriel Paul Horcasitas during a road rage incident. Horcasitas was eventually convicted of the killing. At sentencing, crime victims (or their families, as may be appropriate) generally are entitled to give victim impact statements, i.e., written or oral statements describing how the crime affected their lives, which the judge considers in imposing a sentence. For Horcasitas’s sentencing in May 2025, Pelkey’s sister prepared and played for the sentencing judge an AI-generated video depicting her deceased brother speaking to the camera as if he were offering his own words. To create the video, she used AI programs to combine photographs, videos, and audio clips. She altered portions of his image, such as removing his sunglasses and trimming his beard, and she recreated his laugh. The resulting likeness of her brother recited a script that she wrote. Experts believe the case represents the first instance in which an AI-generated video of a victim was used as a victim impact statement.

The judge expressed his appreciation for the video, then sentenced Horcasitas to 10.5 years in prison for manslaughter. Although the defense attorney does not appear to have objected to the use of the video at the sentencing hearing (possibly dooming any appeal), questions remain as to whether the video was an appropriate victim impact statement and fair to the defendant. As noted, the AI video was not the victim himself; it was an approximation, bearing an altered image and reciting a statement written by someone else. Would the victim actually have given the statement attributed to him? Would he have been as credible, likable, and admirable as the video made him out to be?

Victim impact statements are not formal evidence, and they are submitted to a judge, not a jury. The risk that the ultimate decision-maker will give undue weight to a statement manufactured through AI is therefore somewhat lessened. That said, if AI can be used for victim impact statements—to create or approximate facts, to manipulate emotion, or to drive outcomes—it opens the door to undue influence and unfairness.

Potential Future Applications

If an AI-generated video can be used for a victim impact statement, it is no great leap to expect that attorneys will attempt to use AI in similar contexts, if courts allow it. For instance, a litigant could offer an AI-generated video of a witness’s deposition testimony. Under existing rules of evidence, most states allow deposition transcripts of opposing parties to be read into the record without that party testifying live. In some circumstances, third-party witness testimony can be read into the record when that witness is unavailable to testify. But AI-generated video or audio, complete with synthesized voice, tone, and body language, adds a new layer of complexity and risk. Jurors and judges often assess credibility based not just on words but on a witness’s demeanor and delivery. An AI-generated version might convey emotion or nuance that the real witness never expressed, thereby changing the perceived truthfulness or weight of the testimony. That could tip the scales in close cases, threatening the overall fairness of proceedings.

Litigants also may attempt to use AI-enhanced or AI-generated versions of evidence to paint a clearer picture of their version of the facts. In a Seattle-based trial, for instance, a criminal defendant attempted to offer an AI-enhanced version of a smartphone video as evidence, claiming the original video was low resolution and blurry, whereas the AI-enhanced video offered a “more attractive product for a user.” The court denied admission of the video because AI enhancement is not considered sufficiently reliable in the relevant scientific community. Over time, however, that may change. Inevitably, AI technology will improve to the point where industry experts generally consider it reliable. When that happens, AI enhancement will be susceptible to the same risks as AI-generated witness testimony. Are the facts actually as the video depicts them? Or are they manipulated and colored by a litigant’s self-serving narrative? Therein lies the risk of allowing AI-generated witness testimony or AI-enhanced evidence in litigation. Using AI to manipulate information to enhance a litigant’s storytelling, or to create evidence that does not actually exist, crosses the line from artificial intelligence to artificial interference with the opponent’s right to a fair trial.

Key Takeaways for Litigants 

  • Use AI at your own risk. AI remains a very new technology. While AI tools may, in some circumstances, streamline time-consuming research, writing, or discovery projects or help individuals organize their thoughts coherently, many AI tools are unreliable and cannot be trusted to provide accurate information, case citations, points of law, or legal analysis. Lawyers using AI to research or draft submissions to clients, courts, arbitrators, or other tribunals must double-check all AI-generated work product to ensure accuracy and compliance with ethical requirements. Unrepresented litigants, whether in state or federal courts, in arbitration, or before tribunals such as GAO, should be extremely wary of AI as well; they are not immune from monetary sanctions or the dismissal of their cases for the improper use of AI tools. Ultimately, losing a case due to the improper use of AI could cost a litigant far more than the attorneys’ fees saved by using AI as a shortcut. Represented litigants, meanwhile, should ask their attorneys to disclose any use of AI tools in the litigation. The improper use of AI can lead to significant penalties, and litigants should know those risks when engaging counsel.
  • Be ready for AI “evidence” in litigation. As in most industries, the use of AI is increasing, and the scope of its use is expanding rapidly. The legal industry is no different, with AI-driven research and drafting programs, discovery synthesis tools, and similar products beginning to flood the legal market. Even if litigants do not use AI tools themselves, they should expect that their opponents will. When AI-generated or -enhanced evidence is offered in litigation, the opposing party must be prepared to vet it, including by inquiring into the method and manner of its creation, who requested or participated in its creation, whether any other outputs were generated, what changes were made to prompts to reach the final result, and whether the evidence was reviewed and approved by a qualified third-party expert. Only when a litigant is fully apprised of the source and content of all aspects of an opposing party’s case, including all AI-generated and -enhanced evidence, can that litigant present their best position.
  • Challenge the use of AI evidence. Litigants also must be prepared to timely object to, or move to exclude, AI-generated or -enhanced testimony or evidence, particularly if that testimony or evidence may be presented to a judge or jury. Litigants are well positioned to argue that AI-generated testimony is not sufficiently probative of the facts as they occurred and that AI-generated evidence is prejudicial and likely to confuse a judge or jury, justifying its exclusion. For AI-enhanced evidence, litigants should move to exclude it on the ground that such enhancement is presently considered unreliable in the scientific community. The failure to timely object or move to exclude could lead to an unjust result and forfeit the issue on appeal.

If you have questions about the use of AI in litigation, please contact Matt Feinberg, Kaavya Ramesh, or another member of PilieroMazza’s Litigation & Dispute Resolution Group.

____________________

If you’re seeking practical insights to gain a competitive edge by understanding the government’s compliance requirements, tune into PilieroMazza’s podcasts: GovCon Live!, Clocking in with PilieroMazza, and Ex Rel. Radio.