Rarely a day seems to go by at the moment without a story about AI. Following last week’s order by the Italian data protection authority temporarily suspending the processing of personal data by ChatGPT, the UK’s Information Commissioner’s Office (ICO) has published a blogpost reminding developers and users of large language models (LLMs) and generative AI of their duties under data protection law.

The blogpost, from the ICO's Executive Director of Risk, provides an overview of the key things to bear in mind when using or developing generative AI systems.

None of these issues is new; they reflect good practice for any new activity involving the processing of personal data. The blogpost nevertheless provides a useful reminder of the points to bear in mind.

While the ICO has not issued an order temporarily suspending use of ChatGPT, it is notable that the issues in the ICO's guidance largely align with the key concerns raised by the Italian DPA in respect of the data collected and used by OpenAI in developing the ChatGPT platform.

The ICO's guidance on using LLMs and generative AI systems

As the ICO says, if personal data is being processed, then data protection compliance is not optional. Organisations should bear in mind the principle of data protection by design and by default.

Organisations should therefore consider the following points:

  • ensure you have a lawful basis for processing
  • know whether you are acting as a controller, processor or joint controller
  • carry out a Data Protection Impact Assessment (DPIA)
  • ensure that you comply with your obligations in relation to transparency, security and purpose limitation
  • if you are using AI for automated decision-making, you also need to comply with your obligations under Article 22 of the UK GDPR (including the right to meaningful information about how decisions are made)

If you are using a third-party tool, then you’ll want to think about what data you are sharing and understand how the platform will use that data (for example, for training purposes).

For example, does your privacy notice enable you to share personal data with the platform? Are you able to carry out diligence on how the platform works? 

It is notable that ChatGPT's FAQs currently state that users cannot delete specific prompts from their chat history, and ask users not to "share any sensitive information in your conversations."

Developing an AI playbook and guidance

Data protection compliance is just one issue to bear in mind when considering deploying or utilising ChatGPT or other AI systems in your organisation. Other issues include creation and ownership of IP, and liability for and accuracy of content. We have also seen concerns raised in relation to information security. 

In addition to understanding these risks, organisations need to think about how they provide staff with training and clear guidance on the use of LLMs and generative AI. Staff should know whether and how they can use these tools, and whether there are restrictions on certain types of use or on sharing certain information (for example, personal data or confidential information).

As part of any risk assessment, organisations should consider the risks associated with each use case. We are working with clients to help them develop AI playbooks and guidance. If you would like to discuss, please get in touch.

Contributor

Martin Sloan

Partner