Stacey Wales gripped the lectern, choking back tears as she asked the judge to give the man who shot and killed her brother the maximum possible sentence for manslaughter.
What appeared next stunned those in the Phoenix courtroom last week: An AI-generated video with a likeness of her brother, Christopher Pelkey, told the shooter he was forgiven.
The judge said he loved and appreciated the video, then sentenced the shooter to 10.5 years in prison, the maximum sentence and more than prosecutors had sought. Within hours of the May 1 hearing, the defendant's lawyer filed a notice of appeal.
Defense attorney Jason Lamm won't be handling the appeal but said a higher court will likely be asked to weigh in on whether the judge improperly relied on the AI-generated video in sentencing his client.
Courts across the country have been grappling with how best to handle the growing presence of artificial intelligence in the courtroom. Even before Pelkey's family used AI to give him a voice during the victim impact portion of the hearing, believed to be a first in U.S. courts, the Arizona Supreme Court had created a committee to research best practices for AI.
In Florida, a judge recently donned a virtual reality headset meant to show the point of view of a defendant who said he was acting in self-defense when he waved a loaded gun at wedding guests. The judge rejected his claim.
And in New York, a man without a lawyer used an AI-generated avatar to argue his case in a lawsuit via video. It took only seconds for the judges to realize that the man addressing them from the video screen wasn't real.
Experts say using AI in courtrooms raises legal and ethical concerns, especially when it proves effective at swaying a judge or jury. And they argue it could have a disproportionate impact on marginalized communities facing prosecution.