Evidence in court traditionally consists of paper documents and the oral evidence of witnesses. But with the rise of portable technology, almost everyone can now take a picture, shoot a video or record a voice clip. These contemporaneous records of events are increasingly being taken into court and used as key pieces of evidence. But is seeing really believing? Litigants and legal advisors need to be aware that things are not always as they seem.

What are deepfakes?

Deepfakes are fabricated videos or voice clips realistic enough to fool even the savviest viewer. They are created using AI trained on real, existing images or recordings of a person speaking. The end product can be highly convincing 'evidence' of something that never actually happened.

This technology has already been used to create entertaining viral videos, such as one of Barack Obama apparently going off-brand in a rant about Donald Trump and recommending Jordan Peele films. However, the increasing availability of the tools needed to create deepfakes means an increasing danger of them slipping into evidence.

A family lawyer, Byron James, has recently drawn attention to the use of deepfakes in litigation. A voice clip of a threatening message apparently left by his client was lodged with the court. Despite having the same accent, tone and use of language as the client, the recording was ultimately proven to be a deepfake; the client had never left the message.

Another risk comes from the potential for ultra-realistic masks to fool witnesses - even from close range. An article on The Conversation examining the use of such masks points to research indicating that witnesses are poor at spotting when a mask is being worn, whether in photographs or in real life. The article also highlights the example of a man arrested in the US after being identified in CCTV footage by his own mother. It turned out that the real culprit had been wearing an ultra-realistic mask and the arrested man was not involved in the crime.

Spotting the fakes

While a growing number of AI firms are working on 'deepfake detectors', and online security systems offer some protection, there does not yet seem to be a one-size-fits-all way to tackle the rise of deepfakes.

So, in the absence of a technological solution, what might be the tell-tale signs?

  • Does the person on the phone or video say something a little strange? Do they use a turn of phrase you wouldn't expect? In a recent French case, a fraudster using a hyper-realistic mask was only uncovered after a minor linguistic slip-up: the use of "vous" rather than "tu".
  • Do they promise something that never arrives? A UK executive was recently duped by a deepfake of his CEO's voice, instructing him over the phone to send $250,000 to a fraudulent account. The fraud was only discovered when the executive realised that the reimbursement the 'CEO' had promised never appeared.
  • Do the facts add up? In the above case, the executive's suspicions were first raised when he realised he had been called from an Austrian number; his CEO was based in Germany.

The use of deepfakes in court actions is likely to remain rare, but ever-improving technology means it is something that litigants and their legal advisors will need to look out for in appropriate cases.
