Artificial Intelligence ("AI") is set to have an impact on almost every aspect of the UK's economy, and on our day-to-day lives as we increasingly make use of forms of machine learning and generative AI such as ChatGPT. AI also brings with it wide-ranging risks and challenges, which governments need to address to guard against breaches of privacy, misinformation and violations of human rights.
What is the UK's "pro-innovation" approach?
The UK Government has recently published a White Paper on its "pro-innovation" approach to regulating AI. The UK Government is proposing a decentralised approach, which would mean that existing regulators, including the UK Information Commissioner's Office, the Competition and Markets Authority, the Financial Conduct Authority and Ofcom, would be responsible for enforcing proposed new rules on AI. To address what it describes as the risk of inconsistent enforcement, the Government is proposing that it should have centralised oversight powers to ensure consistency of implementation.
The UK's approach is said to be aimed at fostering further competition within the sector and is based on five core principles. The five principles are:
- Safety, security and robustness;
- Appropriate transparency and explainability;
- Fairness;
- Accountability and governance; and
- Contestability and redress.
The principles are intended to guide and inform the responsible development of AI. Providers of AI will not, as a result of these proposals, be under any new legal obligations in relation to their AI technologies; however, the intention is that they should have regard to the five principles when making any decisions relating to AI. Further guidance and risk templates will be issued in the next 12 months to allow providers of AI to understand how these five principles should guide their development of AI.
The regulatory bodies affected will be expected to determine compliance according to these five principles. While the UK's proposed approach allows for flexibility, the regulatory landscape remains unclear, particularly in comparison to that of the European Union. The current intention is not to impose statutory obligations on businesses when developing or procuring AI, although the UK Government has not ruled out introducing legislation in the future to regulate AI.
Definition of AI
The UK Government's White Paper does not propose an overarching definition of AI. Instead, AI is defined by reference to two characteristics: "adaptive" and "autonomous". This is in contrast to the EU's approach, which has defined AI far more comprehensively and specified typical outputs of AI, such as "content (generative AI systems), predictions, recommendations or decisions, and / or influencing the environments with which the AI system interacts". The UK's alternative approach is defended as recognising that the AI sector is fast-changing, and a strict definition has the potential to become outdated very quickly.
How does this contrast to the EU approach?
The EU is developing a "horizontal" and "risk based" legislative framework, which will impose obligations on providers of AI according to the deemed level of risk of the AI. The proposed Regulation will establish four levels of risk: unacceptable risk, high risk, limited risk, and minimal risk. Different rules are to apply depending on the level of risk a system poses to fundamental rights, and it is proposed that there will be a complete ban on the use of real-time automated facial-recognition systems in publicly accessible places for the purpose of law enforcement.
The EU's proposed framework will include strong enforcement powers to ensure compliance, including fines of up to 30 million euros or six percent of global annual corporate turnover, whichever is higher. Unlike in the UK, a new centralised "AI Council" will be established to advise the Commission.
The potential for regulatory divergence between the UK and EU markets could be challenging, and businesses operating across multiple jurisdictions will need to plan ahead and be aware of the changes to ensure compliance with any new AI rules being introduced across the globe.
Next Steps
While the UK approach is undoubtedly more flexible and acknowledges the innovative nature of this type of technology, it creates some legal uncertainty. The UK's policy could be difficult to navigate without clear guidance as to what amounts to an AI provider and what obligations providers have to ensure the safe use of the technology.
The period for responding to the consultation has now closed and the Government is expected to publish its roadmap for the regulation of AI.
The EU rules should come into force by the beginning of 2024, and businesses should begin preparing for the increased regulation within EU markets by identifying early which category of risk their AI models will fall into. The Regulation will affect not only providers in the EU but also those outside the Union where "the output produced by the system is used within the EU".
If you would like to discuss anything raised in this blog, please contact Christine O'Neill or your usual Brodies contact.