The UK Information Commissioner's Office (the "ICO") has warned businesses to be wary of breaching data protection laws when using artificial intelligence ("AI") models. In his keynote speech at the TechUK Digital Ethics Summit in December, the Information Commissioner, John Edwards, warned companies of the risks of not complying with data protection law when using generative AI technology.
While studies have shown that over 80% of businesses already employ some form of AI, the way in which it is used and its interaction with personal data are constantly evolving. Whether AI is used to improve the customer service experience or to optimise internal processes, it is vital that businesses understand the personal data risks it can bring and ensure compliance with the UK GDPR.
The ICO's warning to businesses
While acknowledging the benefits that AI can bring to both businesses and consumers, Mr Edwards warned that the ICO could and would impose fines on organisations that fail to comply with data protection law. He further noted that public confidence in AI would diminish if businesses were found to be breaching data protection rules.
The speech follows the ICO's updated guidance, published in March 2023, on data protection and generative AI. The guidance outlines the steps that businesses should take to ensure that they comply with data protection law. While the guidance forms part of a global movement to regulate AI, it remains to be seen when and how the ICO will investigate and enforce AI-related breaches of the UK GDPR by businesses.
The ICO previously fined Clearview AI more than £7.5 million for collecting data from the web to create an online facial recognition database, although the fine was ultimately overturned on the basis that Clearview AI's processing was not subject to the UK GDPR.
In October 2023, the ICO also issued a preliminary enforcement notice against Snap (the operator of Snapchat) over potential failures to properly assess the privacy risks to children posed by Snap's AI chatbot, "My AI".
Relevant data protection rules
Any company that processes personal data must ensure that it does so in accordance with the data protection principles under the UK GDPR. This extends to processing carried out via AI technology, and businesses should ensure that any personal data processed using AI is handled fairly and responsibly.
Organisations should therefore:
- ensure they have a lawful basis for processing
- know whether they are acting as a controller, processor or joint controller
- carry out a data protection impact assessment
- ensure that they comply with their obligations in relation to transparency, security and purpose limitation
- if they are using AI for automated decision-making, consider how they will comply with their obligations under Article 22 of the UK GDPR (including the right to meaningful information about how decisions are made)
If you are using a third-party tool, think about what data you are sharing and how the platform will use that data (for example, for training purposes). Does your privacy notice enable you to share personal data with the platform? Are you able to carry out due diligence on how the platform works?
Takeaways
Privacy rights are central to using AI tools responsibly and lawfully. AI models rely on large datasets to be accurate, and the more data they process, the more useful their output is likely to be. A clear tension therefore exists between the data required for AI tools to be effective and the data protection rules that place limits on the processing of personal data. As governments around the world begin to grapple with the regulation of AI, the UK and EU are keen to ensure adherence to the data protection principles while encouraging further use of AI.
Businesses must consider a number of factors if they wish to implement AI tools within their organisation. An AI playbook, as part of a wider strategy, can help them manage the legal risks of AI.
As part of the playbook, businesses should identify the key risks to their organisation and review their systems to determine the extent to which AI captures and processes personal data. They should also develop a risk assessment process for evaluating new use cases and technologies, including due diligence on the supplier of the AI technology.
Finally, businesses should prepare an AI policy for employees and provide training so that employees understand their employer's approach to AI and when and how they can use it in their day-to-day work. This includes guidance for staff on approved uses of AI and what information can be uploaded to an AI tool.
If you would like advice or support relating to any of the issues raised in this blog, please contact Martin Sloan, Rachel Lawson or your usual Brodies contact.