The Information Commissioner's Office (ICO) has published a report on AI tools in recruitment following an audit it carried out between August 2023 and May 2024. While the ICO acknowledges that use of AI can improve the efficiency of the recruitment process, the report found a number of privacy risks. The ICO's report includes recommendations on how organisations can proactively address and manage these risks to ensure their recruitment processes are compliant with data protection law.

In this blog, we look at the potential challenges when using AI tools for recruitment, in particular the risk that unchecked use of AI tools could lead organisations to discriminate against potential candidates, over-collect personal data, and fall short of their transparency obligations.

AI in recruitment

AI can be used in a number of ways to automate and simplify the recruitment process. For example, AI can be used in relation to applicant screening and shortlisting, personality profiling, aptitude tests using pre-programmed algorithms and criteria, and automated background checking.

The ICO audit covered a range of AI use cases but did not include tools which:

  • process biometric data, such as emotion detection in video interviews; or
  • use generative AI, such as chatbots and AI-generated adverts and job descriptions.

The ICO's report found that AI tools can compromise candidate privacy and fairness during the recruitment process if they are not properly evaluated and managed.

The ICO's findings

The main challenges noted in the report for organisations using such tools for recruitment relate to discrimination or bias against candidates, inaccurate assumptions or inferences, excessive collection of personal data beyond what is necessary, and failure to meet transparency obligations.

  • Discrimination and bias - the report found that discrimination and bias could arise because some AI recruitment tools lacked regular accuracy testing and filtered out candidates with certain protected characteristics, increasing the risk of inaccurate and prejudiced candidate evaluations. This can, in turn, perpetuate the very human bias the tool is intended to avoid (a simple illustration of the kind of testing the report describes appears after this list).
  • Accuracy - some AI tools were also reported to make inaccurate inferences about candidates, such as inferring their gender or ethnicity from their name or other information.
  • Excessive collection of personal data - the report also found that some AI tools struggled to regulate the volume of data they collected and consequently gathered more than was necessary for recruitment purposes (the individual's name, contact information, career experience, skills and qualifications are usually all that is required). Some AI tools were even found to scrape data and photographs from online sources such as social media websites without the candidate's knowledge, while others retained the information indefinitely in order to build large databases of potential candidates or repurpose it to train, test and maintain their own AI tools.
  • Transparency and status of the provider - the report raises concerns that, given the level of control AI providers have over how their technology works and what data is collected, providers often incorrectly label themselves as processors of personal data rather than controllers or joint controllers. Contractual agreements between AI providers and the organisations procuring their tools for recruitment are also often vague and unclear, which can result in organisations not being fully aware of their obligations in respect of the data they are handling.
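By way of illustration, regular accuracy and bias testing of the kind the report found lacking might include a periodic comparison of shortlisting rates across groups of candidates. The following is a minimal, hypothetical sketch in Python: the field names, the sample data and the four-fifths threshold are illustrative assumptions and do not come from the ICO's report.

```python
# Hypothetical sketch of a periodic bias check for a shortlisting tool:
# compare selection rates across candidate groups and flag any group whose
# rate falls below four-fifths of the highest rate (a common rule of thumb,
# not a statutory test). All names and data here are illustrative.
from collections import defaultdict

def selection_rates(outcomes):
    """outcomes: iterable of (group, shortlisted) pairs."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, shortlisted in outcomes:
        totals[group] += 1
        if shortlisted:
            selected[group] += 1
    return {group: selected[group] / totals[group] for group in totals}

def flag_disparities(rates, threshold=0.8):
    """Return groups whose selection rate is below threshold * best rate."""
    best = max(rates.values())
    return [group for group, rate in rates.items() if rate < threshold * best]

# Illustrative outcomes: (group label, whether the tool shortlisted them)
sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]
rates = selection_rates(sample)
print(rates)                    # A: ~0.67, B: ~0.33
print(flag_disparities(rates))  # ['B'] - a disparity to investigate
```

A check like this does not by itself discharge any legal obligation, but routine monitoring along these lines is the sort of testing whose absence the report criticises.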

Consequences of failing to evaluate AI tools

Both providers of AI tools and employers that use them have obligations to comply with data protection law when developing, implementing and using such tools, particularly in the context of recruitment. Failure to do so can result in reputational damage, enforcement action and mistrust from potential candidates who lose faith in the integrity and security of an organisation's recruitment processes.

Organisations that procure AI tools for recruitment cannot assume that the tool they are using is automatically fit for purpose and compliant with data protection law. The expectation is that organisations will proactively take steps to ensure the tools they use for recruitment are fully appropriate and compliant.

Tips to ensure compliant use of AI tools

The ICO's report provides some recommendations for organisations intending to engage providers of AI tools for use in recruitment (ICO's key questions to ask before procuring an AI tool for recruitment). These include:

  • ensuring there is a lawful basis for processing, and documenting that basis in contracts and privacy notices;
  • conducting a Data Protection Impact Assessment early in the process and keeping it up to date as the processing and its impacts evolve (read our earlier blog on an employer's obligations in relation to DPIAs);
  • ensuring the AI tools process data fairly (for example by monitoring accuracy, bias and the use of special category data);
  • taking steps to ensure the AI provider has measures in place to mitigate bias and ensure the tool is being used transparently;
  • fully informing candidates that AI tools may be used to process their data (for example, in a privacy notice);
  • collecting only the minimum data required (an illustrative sketch of this appears after the list);
  • clearly defining the scope and nature of their relationship with the AI provider, with each party's role (as processor, controller or joint controller) and responsibilities clearly set out, and reviewing the contracts with AI providers periodically; and
  • giving AI providers explicit processing instructions, with details of the data being processed, how and why it is being processed and what the output will be, how the data will be stored, how long it will be retained for, who it will be shared with, and relevant safeguards.
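To illustrate the data minimisation point above, the sketch below (again hypothetical, with illustrative field names) reduces an incoming application record to a defined allow-list of fields, broadly the categories the report suggests are usually all that is required, before anything is stored or passed to an AI tool.

```python
# Hypothetical sketch of data minimisation at intake: keep only an
# allow-list of fields (broadly those the ICO's report says are usually
# sufficient) and discard everything else. Field names are illustrative.
ALLOWED_FIELDS = {"name", "contact_information", "career_experience",
                  "skills", "qualifications"}

def minimise(application: dict) -> dict:
    """Return a copy of the application containing only allowed fields."""
    return {key: value for key, value in application.items()
            if key in ALLOWED_FIELDS}

raw_application = {
    "name": "Jane Doe",
    "contact_information": "jane@example.com",
    "career_experience": "5 years in sales",
    "skills": ["negotiation"],
    "qualifications": ["BA (Hons)"],
    "social_media_profile": "https://example.com/janedoe",  # excessive
    "photograph": "scraped.jpg",                            # excessive
}
print(minimise(raw_application))  # the excessive fields are dropped
```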

While there is no legal requirement to comply with the guidance, adhering to the ICO's recommendations may help to minimise the risk of organisations breaching their obligations under UK data protection law.

It is also possible that an employer's compliance could be relevant in the event of a discrimination claim. It is therefore prudent for organisations to keep these recommendations in mind if they intend to use AI tools for recruitment purposes and ensure that they have proper processes in place to assess and manage risk.

Accountability

The ICO's recommendations highlight an important shift in how the procurement and use of AI tools should be approached, both generally and in the context of recruitment. Organisations that procure and use such tools should not simply rely on providers to ensure compliance or assume that use of the AI tool is lawful.

It is essential that organisations play an active role in understanding, addressing and managing the compliance risks associated with using AI tools so they do not fall short of their data protection obligations.

We have extensive experience advising clients on their data protection obligations when procuring technology for use in the workplace and recruitment. Should you wish to discuss anything raised in this blog, please contact Martin Sloan, Grant Campbell, or a member of Brodies Employment & Immigration team.

Contributors

Martin Sloan

Partner

Julie Keir

Practice Development Lawyer

Ussamah Nasar

Solicitor