We have blogged previously on different alternative dispute resolution (ADR) methods and the growing use of online dispute resolution methods. Now we look at how the evolution of artificial intelligence (AI) may impact ADR.

Although this evolution is promising and could make ADR more cost-effective and efficient for businesses, it also presents a number of challenges.

AI Assistants v AI Agents

AI Assistants are designed to assist and support the capabilities of humans. They can carry out repetitive tasks, such as document management or categorisation, and can produce useful summaries of long or complicated documents. While AI Assistants can theoretically make these processes more efficient, some oversight is required to ensure that the risks involved in using AI are managed effectively.

AI Agents go one step further and, in theory, can be instructed by a human to carry out tasks such as conducting legal research, analysing documents and predicting case outcomes. Although these systems could improve the efficiency of ADR, the reliability of the answers or advice they give should be approached with caution: AI models still lack the ability to reason and interpret human emotion, and have no understanding of reality or of the underlying meaning behind the information being processed. This can result in inaccurate or biased output, particularly if there are issues with the source material the AI model was trained on.

AI in ADR

It has been suggested that there are three categories of AI use cases in ADR, being "administrative, procedural and practice-related".

Administrative

AI may be useful in helping to review documents, summarise statements of case or perform administrative tasks more effectively.

However, one of the concerns associated with AI technology is that, on occasion, it has been known to produce inaccurate or misleading information, presented in a manner which makes it seem highly accurate and credible to lay users. For example, in Harber v Commissioners for His Majesty's Revenue and Customs [2023], a self-represented appellant used an AI system to obtain case reports in support of her case. However, the AI system produced a series of cases which did not exist. In essence, they were "hallucinations", fabricated by the AI model. Although it was decided that the appellant had not intentionally misled the Tribunal, her case materials were disregarded and, ultimately, she lost her appeal.

Further, you may recall the headline last year regarding the two New York lawyers who relied on six case citations in their legal brief which had been hallucinated by the AI chatbot ChatGPT. It was reported that the judge found the lawyers had acted in bad faith and imposed a fine of $5,000.

Therefore, AI can be a useful assistive tool, but it still requires the oversight of legally trained professionals, who have the knowledge to determine whether the information produced is accurate.

Procedural

We are likely to see ADR institutions updating their policies or guidelines to cover the acceptable use of AI. For example, guidelines have been published on the Use of Artificial Intelligence in Arbitration (by the Silicon Valley Arbitration and Mediation Center), covering issues such as safeguarding confidentiality, disclosure and non-delegation of decision-making responsibilities to AI tools.

Practice-related

Could AI eventually be used to make decisions about cases? For example, would individuals or businesses be willing to sign up to AI-led mediations with no human mediators involved? Such mediations may be more cost-effective and could reduce the scope for human bias (including unconscious bias).

AI can also be used by participants to test their potential arguments and line of defence. The AI can simulate the role of the other side and challenge a party's argument or approach, highlighting any areas of weakness or where rebuttals may be required.

However, the lack of empathy offered by a machine could be an issue here. A key part of the mediation process is engaging in discussions and offering to compromise in the interest of resolution. Successfully reaching an agreement through mediation is often heavily dependent upon the attitude of the parties and their willingness to compromise. The parties' body language, tone, facial expressions and choice of language inevitably play a large part in bringing parties closer to reaching an agreed position.

Although AI can imitate human emotions and sentiments, it does not have the judgment of a human or have any conception of truth or emotion. Interestingly, different LLMs are weighted differently, with some being more aggressive in a negotiation and others more collaborative. This poses a further question in terms of the use of AI in mediation – who would get the ultimate say in which model is to be used and what its role or stance in the mediation process should be?

Conclusion

As AI technologies advance, their integration into ADR processes will likely become more pronounced. AI is also a useful tool for saving time and expense by allowing lawyers to test out their defences or positions and to identify potential responses or positions from the other side. Emerging regulations and standards are shaping how AI is used, with a focus on maintaining fairness and transparency. However, at the moment, AI does not replace human involvement in ADR processes.

If you would like to explore the role of AI in international arbitration in more detail, please check out our podcast on this issue. Ken MacDonald of Brodies LLP and David Parratt KC discuss the potential benefits of harnessing the technology to make enforceable determinations as well as the risks of doing so.

Finally, if you wish to discuss the use of AI in ADR, or dispute resolution generally, please contact a member of the IP, Technology & Data team or your usual contact at Brodies.

Contributors

Nakita Kaur

Trainee

Hannah Clark

Senior Solicitor

Monica Connolly

Legal Director

Damien Behan

Innovation & Technology Director