1. Introduction

Like the industrial revolution and the dawn of the internet age before it, we find ourselves in the midst of the latest era-defining technological breakthrough, as artificial intelligence ("AI") continues to permeate every walk of life. AI currently contributes £3.7 billion to the UK economy, and it is predicted that the AI market in the UK alone could be worth over $1 trillion (USD) by 2035. However, just as the rapid advancement of AI technologies has sparked excitement about the possibilities the technology offers, it has also raised concerns about its ethical implications and potential legal and safety risks.

As a result, policymakers and regulatory bodies are now navigating the complex issue of how, and to what extent, AI should be regulated. How can lawmakers strike the delicate balance between fostering innovation and ensuring that those who develop and use AI technologies are held accountable, so that individuals' fundamental rights are protected and public trust is maintained?

The UK has actively participated in global discussions on AI ethics and regulation, notably through its hosting of the AI Safety Summit in November 2023. However, in its February 2024 response to its consultation "A pro-innovation approach to AI regulation", the UK Government stated that although it believes AI technologies "will ultimately require legislative action in every country", this will only be "once understanding of risk has matured", meaning the UK has no intention of specifically regulating AI technologies in the near future.

This is directly at odds with the approach being taken by the EU. On 13 March 2024, the European Parliament approved the EU Artificial Intelligence Act (the "AI Act"), the world's first comprehensive law regulating AI.

In this blog we explore the approaches taken by the UK and the EU, and ask whether the introduction of the AI Act will result in regulatory divergence as the UK attempts to flex its post-Brexit muscles, or whether the practical application of the AI Act will in fact result in regulatory harmony.

2. UK approach

The UK does not, at present, intend to introduce new legislation specifically governing the use of AI. As mentioned in our previous blog, in its White Paper the UK Government proposed a pro-innovation approach to the regulation of AI, underpinned and informed by the following five principles:

  1. Safety, security, and robustness
  2. Appropriate transparency and explainability
  3. Fairness
  4. Accountability and governance
  5. Contestability and redress

In its response to the White Paper, the UK Government stated that this approach has been taken to avoid "unnecessary blanket rules that will apply to all AI technologies, regardless of how they are used", allowing regulation to remain flexible and agile as technologies develop over time.

  • No centralised regulator

    There are also no plans for a centralised AI regulator in the UK. Instead, the principles will be interpreted and implemented by existing expert regulators (the ICO, CMA, FCA etc.) using existing legislation. The principles will not be legally binding on those regulators (although it is anticipated that the UK Government will place a legal duty on regulators to give due consideration to them). The UK Government noted that this approach was adopted so that any regulation introduced would be both proportionate and context-based. Regulators will be supported by the AI and Digital Hub, which is currently being piloted, and have been asked to publish updates outlining their strategic approach to AI by 30 April 2024.

    The Department for Science, Innovation & Technology ("DSIT") has issued initial guidance, 'Implementing the UK's AI Regulatory Principles', to inform regulators how to implement the principles and to offer examples of best practice. It will be interesting to see whether the application of the principles results in a degree of cohesion or discord between the different regulators.

    Despite the lack of a centralised AI regulator, the UK Government has proposed the introduction of a new central function within DSIT, in an attempt to drive coherence by conducting cross-sector risk assessments, addressing regulatory gaps, and promoting knowledge exchange.

  • No formal definition of "AI"

    The UK Government has not adopted a formal definition of AI. Instead, it has identified two defining characteristics, "adaptivity" and "autonomy", which will guide regulatory bodies in forming their own sector-specific definitions. In theory, therefore, there may be multiple definitions of AI across different regulatory bodies. This may allow contextually more accurate definitions to be established; however, it will no doubt make it more complex and difficult for businesses and individuals to understand what is meant by AI when they are subject to regulation by different bodies.

  • Private Members' Bill

    In November 2023, Lord Holmes of Richmond introduced a private member's bill, the Artificial Intelligence (Regulation) Bill, which, notably, provides a definition of AI. The Bill calls for the creation of a central AI authority to carry out a gap analysis of regulatory responsibilities and to co-ordinate and ensure that current regulators adhere to their regulatory obligations. The Bill also proposes the creation of 'AI responsible officers' within businesses, responsible for ensuring the safe, ethical and non-discriminatory deployment of AI.

    On 22 March 2024, the Bill received its second reading in the House of Lords with cross-party support; however, there appears to be much less support for the Bill amongst MPs, and it seems unlikely that it will reach the statute book.

3. EU approach

In contrast to the UK, the EU's approval of the landmark AI Act puts it ahead of the pack in the regulation of AI.

The AI Act applies to providers, deployers and users of AI systems, and defines an "AI system" very broadly as "a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments".

Risk-based approach

The EU has taken a risk-based approach to the application and scope of the AI Act, allocating each kind of AI system to a specific risk category. The Act defines four levels of risk for AI systems: unacceptable, high, limited, and minimal or no risk.

Certain critics of the AI Act have argued that dividing AI models between different risk categories may lead to a "decoupling of the market", in which the value of AI technologies differs according to their risk category. It may also be difficult to categorise certain generative-AI systems which can be utilised for several purposes and functions.

  • Limited & Minimal Risk: Systems deemed to have limited risk, such as chatbots, will be subject to specific transparency obligations, while those regarded as minimal risk, such as AI-enabled spam filters, are to remain largely unregulated (although compliance with voluntary codes of conduct is encouraged).

  • High Risk: AI systems that negatively impact safety or individuals' fundamental rights will be deemed high risk and will require further assessment before being made available on the market. These include systems falling under the EU's product safety legislation, and AI systems that must be registered in an EU database, comprising AI systems deployed in the management or operation of critical infrastructure, education, employment, essential private and public services, law enforcement, legal interpretation, and border control management.

    It has been reported that there is confusion amongst AI developers as they attempt to determine whether their AI systems qualify as high risk, as the obligations placed on providers of high risk systems are broad and extensive. Users of high risk systems are also required to ensure they use those systems in accordance with their instructions for use. Businesses are therefore already incurring the costs of carrying out compliance and qualification assessments in order to comply with the AI Act. On the other hand, the lack of certainty around exactly which AI systems will be deemed high risk may lead to developers and/or users being unaware of, or failing to comply with, the relevant requirements.

  • Unacceptable Risk: AI systems that manipulate human behaviour are deemed to pose an unacceptable risk and are to be banned. This includes voice-activated toys that encourage dangerous behaviour in children, AI systems that classify people based on their behaviour or socio-economic status, and AI systems that use real-time or remote biometric identification, such as facial recognition (other than in a limited number of cases and with judicial approval).

  • When does the AI Act come into force?

    The AI Act will become applicable to Member States under a gradual, phased approach, becoming fully applicable 24 months after it enters into force. However, certain provisions follow their own timetable:

    • bans on prohibited practices will apply 6 months after the AI Act comes into force;
    • codes of practice will apply after 9 months;
    • rules on general-purpose AI will apply after 12 months (or 24 months for systems already on the market); and
    • high risk systems will benefit from a 36-month grace period before the relevant requirements become applicable.

  • What about generative-AI?

    The AI Act also places requirements on providers of general-purpose or generative AI systems and models, such as OpenAI's GPT-4, including the requirement to:

    • put in place a policy to respect EU copyright law;
    • provide information and documentation to providers of AI systems who intend to integrate general-purpose AI models into their own AI systems; and
    • make publicly available detailed summaries about content used for training the AI system in the form of the template provided by the AI Office (as discussed below).

  • New EU AI Office & Board

    In another divergence from the UK's approach, the European Commission established the EU AI Office on 24 January 2024; the AI Office will be responsible for the implementation of the AI Act.

    In addition, the AI Act formally establishes an AI Board, which is to ensure the consistent application of the AI Act throughout the EU and will issue ancillary recommendations, opinions, codes of conduct / practice, and technical standards as required.

    The EU AI Office will be responsible for enforcing the AI Act in relation to general-purpose AI models, but Member States will be responsible for enforcement in relation to other AI systems. Critics have questioned the effectiveness of this split, noting that consistent enforcement amongst all Member States will be challenging, as previously seen with the GDPR. The powers of the AI Office have also not yet been clearly established, and it remains to be seen how each Member State will enforce the AI Act, and whether they will establish their own AI authorities or empower existing authorities to do so.

  • Fines for non-compliance

    Just as AI systems are categorised, penalties are also tiered under the AI Act, as follows:

    • Tier 1 - fines of up to €35,000,000 or 7% of a business's annual worldwide turnover (whichever is higher) may be issued for non-compliance with the prohibitions under the AI Act.
    • Tier 2 - fines of up to €15,000,000 or 3% of annual worldwide turnover (whichever is higher) may be issued for non-compliance with other obligations under the AI Act.
    • Tier 3 - fines of up to €7,500,000 or 1% of annual worldwide turnover (whichever is higher) may be issued for the supply of incorrect, incomplete, or misleading information to notified bodies and national competent authorities.
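
    To put the arithmetic in concrete terms, the following is a minimal sketch in Python (our own illustration, not part of the Act; the turnover figure is hypothetical) of how the ceiling on a fine would be calculated, on the basis that the higher of the fixed cap and the turnover percentage applies:

    ```python
    # Minimal sketch of the AI Act's tiered fine ceilings, as summarised above.
    # Assumes the higher of the fixed cap and the turnover percentage applies
    # (for SMEs the Act instead caps fines at the lower of the two figures).

    TIERS = {
        1: (35_000_000, 0.07),  # breaches of the prohibitions
        2: (15_000_000, 0.03),  # breaches of other obligations
        3: (7_500_000, 0.01),   # incorrect / misleading information
    }

    def max_fine(tier: int, annual_worldwide_turnover_eur: float) -> float:
        """Return the ceiling on a fine for the given tier: the higher of the
        fixed cap and the percentage of annual worldwide turnover."""
        fixed_cap, percentage = TIERS[tier]
        return max(fixed_cap, percentage * annual_worldwide_turnover_eur)

    # Hypothetical example: a business with €2bn annual worldwide turnover that
    # breaches a prohibition faces a ceiling of €140m (7% of turnover), well
    # above the €35m fixed cap.
    print(f"€{max_fine(1, 2_000_000_000):,.0f}")  # €140,000,000
    ```

    As the example shows, for large businesses the turnover-based figure, rather than the fixed cap, will usually be the operative ceiling.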

4. The right approach?

In its recent research paper "Public attitudes to data and AI: Tracker survey (Wave 3)", DSIT reported that, as the UK public's understanding and experience of AI has grown, so have feelings of pessimism towards AI. Those surveyed considered that AI used in the healthcare, banking and finance, policing and military sectors should be regulated in order to avoid negative outcomes. Although the UK Government has established various initiatives and guidance on the use of AI, its decision not to legislate while AI is still (relatively) in its infancy, relying instead on non-binding principles, existing legislation and current regulators, has been criticised by many.

However, it must be noted that thousands of amendments had to be made to the EU AI Act in the period between its first publication as a draft and its passing. These amendments were required in order to keep pace with the advancement of AI occurring in real time. Critics of the EU approach (and those who prefer the UK approach) would therefore likely argue that, given how quickly AI technology is advancing, putting rigid legislative provisions in place at this stage is both counter-intuitive and impractical.

Extra-territorial scope of the AI Act

A final point worth considering is that, although the UK has taken a divergent approach to the EU in exercise of its post-Brexit powers, the AI Act has extra-territorial scope: it applies to providers inside or outside the EU that place AI systems on the EU market, and to providers and users of AI systems inside or outside the EU where the output is used in the EU. UK businesses which develop and provide AI systems to the EU, and/or use AI systems to produce output for use within the EU, will therefore still be required to adhere to the requirements of the AI Act, even in a post-Brexit world.
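
As a rough illustration of that scoping question, here is a minimal sketch in Python (our own simplification of the rules described above, not legal advice; the function and parameter names are hypothetical):

```python
# A deliberately simplified sketch of the AI Act's extra-territorial reach as
# described above; the real analysis turns on the Act's detailed definitions.

def ai_act_likely_applies(places_system_on_eu_market: bool,
                          output_used_in_eu: bool) -> bool:
    """A provider or user established anywhere (including the UK) is likely
    caught if it places an AI system on the EU market, or if the system's
    output is used in the EU."""
    return places_system_on_eu_market or output_used_in_eu

# A UK business selling only outside the EU, but whose system's output is
# used in the EU, would still be within scope:
print(ai_act_likely_applies(places_system_on_eu_market=False,
                            output_used_in_eu=True))  # True
```

In other words, a UK provider with no EU sales may still be caught where its customers use the system's output in the EU.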

Although critics of the UK approach may fear that the 'pro-innovation' stance will turn the UK into a Wild West for the development and use of AI, the likelihood is that many AI developers, and those that deploy and use AI, will have a wider international outlook and may therefore strive to adhere to the highest standard of regulation as a matter of best business practice and in order to ease compliance costs.

So, does this mean that the UK's flagship pro-innovation stance will not, in fact, deliver the benefits which it promises to UK developers after all? Time will tell.

If you would like to discuss any of the points raised in this blog in more detail, or the legal considerations around AI generally, please get in touch with Alison Bryce, Ally Burr, Amelia Wilson, or your usual Brodies contact.

Contributors

Ally Burr, Associate
Alison Bryce, Partner
Martin Sloan, Partner
Amelia Wilson, Solicitor