There can be no doubt that 2023 was the year artificial intelligence (AI) exploded into mainstream public consciousness. In the wake of OpenAI’s ChatGPT, similar offerings from Google, Amazon, Meta, Microsoft and many more arrived. Every product was now “powered by AI” or had AI added to it — even if sometimes it wasn’t immediately clear why.
Waves of technology hype are nothing new (blockchain and metaverse being recent examples), and tech companies and investors are always keen to encourage a buzz about the latest hot new thing.
We’ve seen tech bubbles come and go, and those of us of a certain age would be forgiven for thinking that AI feels very much like the dotcom boom and bust cycle. It’s worth remembering, though, that the bursting of the dotcom bubble didn’t mean the end of internet businesses; it thinned the herd, and those with the strongest ideas and use cases went on to dominate and change the landscape.
Law firms have actually been using AI in different forms for many years now, but generative AI (genAI) and the large language models (LLMs) it is based on seem to have the potential to change work, sectors and businesses in fundamental ways. For the legal sector, this may be the first major wave of technology since the introduction of email or document management to significantly transform how legal work is managed.
GenAI works brilliantly with language (courtesy of its training on billions of pages of text via the underlying LLM) and is good at replicating content, style and format. It’s highly proficient at summarising, rephrasing or correcting existing text, and it’s great for brainstorming to get beyond the blank page. It has the potential to increase efficiency and productivity by automating routine text production and tasks.
The role of human judgement in responsible AI
However, there are also obvious (and well publicised) drawbacks: it may produce a different answer each time the same question is asked, it can confabulate or hallucinate facts or data, and copyright issues with LLM training sets are emerging (and being litigated).
But firms adopting AI need to consider not only accuracy but also transparency and accountability: for example, letting clients know when AI has been used and ensuring that humans validate what it produces. Microsoft has called the AI it has woven into its products ‘Copilot’ for a reason. It talks about the ‘human in the loop’ and stresses that while the system may suggest, the human needs to decide.
Two phrases I have heard that neatly encapsulate this are that AI is ‘power steering for the mind’ and that while ‘machines learn, humans discern’. That is, these are tools that will help people rather than outright replace them.
After the initial rush of (justified) excitement and examples of what LLMs could do — and the inevitable high-profile cases where things have gone wrong — we have seen more thoughtful responses from organisations looking at practical use cases.
Is your data infrastructure truly ready?
More interestingly, we’ve also seen discussions shift towards another buzzword from a decade ago, where the hype perhaps didn’t quite deliver in the legal sector: big data.
Law firms sit on a wealth of unstructured data, and AI promises to do wonderful things with it — after all, that’s how LLMs are trained. But before we start unleashing AI on our data, we need to consider the implications.
A firm’s files include not only the clean final document that’s sent to the client, but also the numerous drafts that led to that final version, legacy versions of similar documents that are now out of date, and lots of matter-specific details that may be unique to a particular set of circumstances.
Putting all of that in the blender and seeing what comes out may not be advisable. Indeed, Accenture’s CEO recently said that most companies won’t be able to deploy AI at scale because they don’t have appropriate data infrastructure and controls in place.
Consider the environmental footprint of AI
With large volumes of digital data, and the GPUs (high-powered computer chips) required to train and run the technology, something of a quandary emerges.
LLMs need vast amounts of electricity to power AI computing resources, and water to cool them. Firms that are focused on sustainability are faced with the prospect of AI helping to evolve their business, but at the cost of increasing their negative impact on the environment.
OpenAI CEO Sam Altman recently addressed this issue at Davos, saying that new energy sources, such as nuclear fusion, would be required to feed energy-hungry AIs. On a more positive note, research has shown that narrowly trained models built for specific purposes and trained primarily on text, such as those likely to be used in the legal field, are far more efficient than general-purpose models such as ChatGPT.
Humans tend to overestimate the short-term impact of technology and underestimate the long-term impact. Regardless of the hype, there is inevitably a lot of work required to identify and extract value from technology.
In conclusion…
AI is not a magic bullet that will solve all the challenges and inefficiencies in the legal sector. It is a powerful tool that — if used wisely and responsibly — can enhance the quality and speed of legal services, as well as create new opportunities and value for clients and lawyers. We are now entering the pragmatic phase that always follows the peak of inflated expectations, where we start to identify real value and do the hard work to extract it.
AI requires careful consideration of the ethical, social and environmental implications that come with its adoption. Law firms need to be proactive and strategic in deciding how and when to deploy AI, and how to balance the trade-offs between innovation and risk. They also need to invest in the skills and infrastructure that will enable them to leverage AI effectively and sustainably, while maintaining their core values and professional standards.
By doing so, firms can position themselves as leaders in the digital transformation of the legal world, and remain trusted advisers to clients in the age of AI.