In 2019, a couple was engaged in a custody fight in a British courtroom. The soon-to-be-ex-husband stood accused of making threats, and his soon-to-be-ex-wife produced an audio clip to prove her accusations. The recording, however, had been faked: the woman had used widely available software and online tutorials to doctor the audio so that it sounded as though the husband had made threats. While this story took place in Britain, the issue of deepfakes has global reach. Lawyers in New York are just as likely to come across fake audio and video evidence in their cases.
The husband’s lawyer discovered the deception and was able to discredit the recording. In this case, the lawyer had a team of experts study the metadata on the audio file, which revealed evidence that it had been manipulated. This particular fake was considered a “cheapfake” rather than a “deepfake”. Cheapfakes are made with conventional, inexpensive editing tools, while deepfakes use machine learning to generate convincing video and audio simulations of people.
These technologies are becoming more sophisticated and harder to detect. Hollywood has used them for years to transform actors or bring dead stars back to life. In political battles, deepfakes have been used to make public figures look incompetent or impaired.
While these fakes can be hard to detect, many modern law firms retain teams of technical experts with the specialized training and tools to find signs of faked or altered audio and video. A custody battle is just one example of deepfakes appearing in court; they could make their way into criminal defense cases just as easily. The widespread availability of sophisticated software for creating fake video and audio clips makes it relatively easy for someone to fabricate evidence in a criminal case.