In what appears to be an issue of first impression, a Washington state superior court judge recently rejected the admission of video exhibits “enhanced by artificial intelligence” for use in a jury trial. In State of Washington v. Puloka, the state charged defendant Joshua Puloka with three counts of murder stemming from a 2021 shooting. The shooting was captured on a bystander’s smartphone, and the unaltered 10-second source video of the shooting had been entered into evidence.
The defense, however, also sought to admit an AI-enhanced version of the video. The defendant’s expert argued that the source video was low resolution, had substantial motion blur, and contained fuzzy images with “blocky” edge patterns. To remedy these issues, the defense expert stated that he had added clarity to the source video through the use of an AI video-editing tool in the Topaz Labs AI program and had further processed the video using an Adobe program. The defense’s expert stated that the Topaz Labs AI program used technology to “intelligently scale up the video to increase resolution,” as well as to add sharpness, definition, and smoother edges to objects in the video.
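For illustration, any upscaler – classical or AI-based – fills the enlarged frame with pixel values the camera never recorded. The sketch below is a minimal example using ordinary bicubic interpolation from the Pillow imaging library, with a hypothetical file name; it is not Topaz’s proprietary machine-learning method, which the opinion describes as opaque, but it shows the basic point that upscaled pixels are computed rather than captured.

```python
from PIL import Image

# Hypothetical file name standing in for one frame pulled from the source video.
frame = Image.open("source_frame.png")

# Classical 4x bicubic upscaling: every output pixel value is *computed*
# from neighboring source pixels rather than recorded by the camera sensor.
# An ML upscaler goes further, predicting detail from patterns learned on
# unrelated training footage.
upscaled = frame.resize(
    (frame.width * 4, frame.height * 4),  # enlarge fourfold along each axis
    resample=Image.Resampling.BICUBIC,
)
upscaled.save("upscaled_frame.png")
```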
The state challenged the proffered AI-enhanced video, asserting that it failed to meet the admissibility standard set forth in Frye v. United States – a standard requiring that evidence based on novel scientific theories or principles must have achieved general acceptance in the relevant scientific community. According to the state’s expert – a certified forensic video analyst – the AI tools the defense used made accepted forensic analysis of the video impossible. The state’s expert identified a litany of issues with the AI-enhanced video, testifying that the enhancement process:
- added 16 times the number of pixels that existed in the original video – a fourfold scale-up along each axis, as the arithmetic sketch after this list illustrates – using an algorithm and enhancement method unknown to and unreviewed by any forensic video expert,
- added information that was not in the original files,
- removed artifacts on individual images, and
- altered shapes and colors in the video.
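The “16 times” figure in the first item works out to a fourfold scale-up along each axis. A quick back-of-the-envelope check, assuming a hypothetical 480 × 270 source frame (the opinion does not state the actual resolution):

```python
# Hypothetical source resolution; the opinion does not give the real one.
src_w, src_h = 480, 270
scale = 4                                        # 4x along each axis

src_pixels = src_w * src_h                       # 129,600 recorded pixels
out_pixels = (src_w * scale) * (src_h * scale)   # 2,073,600 output pixels

print(out_pixels // src_pixels)                  # 16 == scale ** 2
# Every pixel beyond the original 129,600 must be synthesized by the
# enhancement algorithm -- the "added information" the expert described.
```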
The state’s expert also testified that the Scientific Working Group on Digital Evidence, whose members represent state, local, and federal law enforcement agencies engaged in forensic video examinations, had issued warnings regarding the use of AI enhancement tools in the courtroom.
After hearing oral argument from each side, the court rejected the defendant’s proffer of the AI-enhanced video, finding that the proposed evidence failed to meet the Frye standard. The court first noted that because using AI tools to enhance video introduced in a criminal trial was a novel technique, the defendant bore the burden of showing that the method was generally accepted in the relevant scientific community. Finding that the relevant scientific community was the “forensic video analysis community,” the court held that the defense had failed to meet its burden.
Specifically, the court held that the Topaz Video AI enhancement tools, which use machine-learning algorithms, have not been peer-reviewed by the forensic video analysis community, are not reproducible by that community, and are not accepted generally in that community. The court further noted that the defense had not offered any state or federal appellate decisions that had examined or approved of AI-enhanced videos in a criminal or civil trial. Nor, the court noted, had the defense offered any articles, publications, or secondary legal sources approving the introduction of AI-enhanced video evidence in a criminal or civil trial. The court also noted that the defendant’s expert himself admitted that he did not know what videos the AI model was trained on, did not know whether such models employed generative AI in their algorithms, and agreed that such algorithms were “opaque and proprietary.”
The court further found that the AI-enhanced video failed to satisfy Washington Rule of Evidence (ER) 702, under which expert evidence is admissible only if it is reliable and will assist the trier of fact, as well as ER 403, under which evidence may be excluded if its probative value is substantially outweighed by the danger of unfair prejudice. The court found that the AI-enhanced video did not show with integrity what actually happened, but instead used opaque methods to represent what the AI model thought should be shown, and that there existed a significant risk of a time-consuming trial within a trial about the non-peer-reviewed process the AI model used.
The court’s ruling in Puloka is another example of the judiciary’s skepticism toward allowing AI-enhanced or AI-generated evidence to help determine the outcome of a case. In Puloka, the court repeatedly noted the opacity surrounding the AI model’s inputs. Unless courts are provided with more information about those inputs, they will likely continue to err on the side of exclusion.