Artificial Intelligence in Facilities Management – the Data Protection Perspective

As we reported previously, recent trends in the Facilities Management ("FM") sector have seen increased collaboration to develop smart technologies and new data monitoring and control software.

Data is the lifeblood of any business and its value in the context of FM is immense. Advances in technology, together with its proliferation through the deployment of 'smart' devices, mean that increasing amounts of data can be collected, reported on and analysed using integrated reporting, advanced analytics and artificial intelligence ("AI") to give an ever more detailed picture of how buildings and equipment work and how individual occupants and visitors interact with them. Using AI, FM service providers can harness that data effectively to offer products and services that optimise efficiencies and help customers meet their wider goals in areas such as sustainability.

Issues of 'ownership' and the right to use that data are likely to feature increasingly as hot topics in FM agreement negotiations between service providers and customers. Where the data relates to identifiable individuals, it will also be 'personal data' and its processing will therefore be a regulated activity under data protection legislation. It is important to be aware of the legal implications arising from the collation, analysis and processing of that kind of data.

The Information Commissioner's Office ("ICO") issued guidance on AI and data protection (the "Guidance") on 30 July 2020. For FM service providers developing, deploying or utilising AI in the management of buildings, plant, equipment and integrated services, the Guidance provides useful assistance in ensuring that their use of AI complies with evolving data protection obligations under the GDPR and the UK Data Protection Act 2018.

AI and personal data protection

The Guidance:

  1. emphasises the importance of evaluating any data protection risks and implications as early as possible, and maintaining a focus on them throughout the data processing lifecycle; and
  2. recommends best practices when processing personal data using AI, including creating an auditing framework for AI systems.

The Guidance consists of four parts, addressing fundamental data protection principles, namely: (1) accountability; (2) lawfulness, fairness and transparency; (3) security and data minimisation; and (4) individual rights of data subjects.

The Guidance can be found in full on the ICO's website.

Why is this important for FM?

From the outset and throughout the AI lifecycle, FM service providers are responsible for assessing the data protection implications of each processing operation using AI.

Procurement, Pre-deployment and AI Development

Any personal data processing via AI systems must be lawful, fair and transparent. During the design process or procurement of AI systems, FM service providers should identify the appropriate purposes (and lawful basis) for processing personal data at each stage of the AI lifecycle. The ICO also recommends documenting data input sources, outputs which are "statistically informed guesses" (rather than facts) and, especially where AI generates automated decisions, any outputs which are statistically inaccurate or discriminatory when making predictions (or which otherwise compromise the outputs).

Part 2 of the Guidance sets out which of the six lawful bases are likely to be appropriate, or not, in the AI context. Consent poses a particular problem in these circumstances because most AI-powered devices are passive or hidden, so the choice to opt in cannot be genuinely or freely given. Similarly, as many visitors to a building will not have a contract with the FM service provider, contractual performance is unlikely to be relevant in most cases. FM service providers will, therefore, frequently need to rely on "legitimate interests". For that basis to apply, however, the processing must be necessary and proportionate when balanced against individuals' rights and legitimate privacy expectations. This will always be fact-specific, so the reasoning behind the legitimate interests analysis should be clearly recorded.

AI Training, Predictions and Machine Learning – is the processing necessary, proportionate and relevant?

The principle of data minimisation means that personal data should only be processed where relevant and necessary for the purpose of the processing. There is a clear tension here between data minimisation and the need to harvest huge volumes of data to train meaningful statistical models and predictions for AI and machine learning.

Whilst the Guidance recognises that large volumes of personal data may be needed to train statistically accurate AI models, Part 3 of the Guidance recommends that excessive volumes of data should not be used. In other words, use only the minimum quantity of personal data required to ensure that the AI outputs are accurate. Striking a balance between the means used and the intended aim is key.
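To make this concrete, the sketch below shows one way a data minimisation step might look in practice. It is a minimal illustration in Python, assuming a hypothetical building access log (the column names, including badge_id, are invented for this example and do not come from the Guidance): only the fields the model genuinely needs are retained, and the direct identifier is discarded before any training takes place.

```python
import pandas as pd

# Hypothetical building access log. All column names and values are
# illustrative assumptions, not drawn from any specific FM system.
raw = pd.DataFrame({
    "badge_id":    ["A123", "B456", "C789"],   # direct identifier
    "entry_time":  ["08:55", "09:10", "09:30"],
    "zone":        ["lobby", "floor2", "floor2"],
    "temperature": [21.4, 22.1, 21.9],
})

# Data minimisation: keep only the fields the occupancy model
# actually needs, discarding the direct identifier entirely.
MODEL_FEATURES = ["entry_time", "zone", "temperature"]
training_data = raw[MODEL_FEATURES].copy()

print(training_data)
```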

The Guidance recommends several techniques for data minimisation where large volumes of personal data are involved. These include making data subjects less identifiable using "perturbation" (techniques which distort or modify data elements relating to the individual while maintaining the underlying aggregate relationships of the dataset); synthetic data (artificially manufactured data); and federated learning (collaborative learning from shared prediction models without any direct sharing of personal data). A sketch of one perturbation technique follows below.
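As an illustration of perturbation, the sketch below adds zero-mean Laplace noise to individual records. Laplace noise is one common perturbation mechanism (familiar from differential privacy), though the Guidance does not prescribe any particular technique, and the dwell-time figures here are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Hypothetical per-occupant dwell times (minutes) in a smart building.
dwell_times = np.array([34.0, 52.0, 41.0, 67.0, 29.0, 58.0])

# Perturbation: add zero-mean Laplace noise to each individual record.
# The noise scale trades individual privacy against accuracy; a larger
# scale distorts each record more.
scale = 5.0
perturbed = dwell_times + rng.laplace(loc=0.0, scale=scale, size=dwell_times.shape)

# Individual values are distorted, but the aggregate mean is roughly preserved.
print(f"true mean:      {dwell_times.mean():.1f}")
print(f"perturbed mean: {perturbed.mean():.1f}")
```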

AI Deployment and ongoing use – how secure is the personal data you process?

Where smart buildings gather data from various sources, the risk of loss or misuse of large amounts of personal data increases, so FM service providers must also consider what measures are appropriate to mitigate or remove this risk. As ever, effective data security in transit and at rest is a necessity.
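By way of illustration, the sketch below shows what encryption at rest might look like for a stored sensor record, using the Fernet recipe from the open-source Python cryptography package. It is a minimal sketch rather than a recommendation from the Guidance, and it deliberately leaves out key management, which in practice is the hard part.

```python
from cryptography.fernet import Fernet

# Encryption at rest: a minimal sketch using the third-party
# 'cryptography' package (pip install cryptography).
# In practice the key must live in a secrets manager or HSM,
# never alongside the data it protects.
key = Fernet.generate_key()
fernet = Fernet(key)

sensor_record = b'{"zone": "floor2", "occupant_count": 14}'

ciphertext = fernet.encrypt(sensor_record)   # store this, not the plaintext
plaintext = fernet.decrypt(ciphertext)       # requires the same key

assert plaintext == sensor_record
```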

Deployed AI can increase the potential vulnerabilities of the underlying software and systems. As AI-enabled offerings may involve many devices from a range of manufacturers, the AI supply chain often utilises third-party or open-source code, and / or requires third-party software to be installed to run the AI model and analyse outputs. A security defect in any one component could compromise personal data anywhere in the system.
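One possible mitigation, sketched below, is to verify the integrity of third-party artefacts (such as a supplied AI model file) before loading them. This is an illustrative technique rather than one named in the Guidance; the file name and digest are placeholders invented for the example, and in practice the expected digest would be published by the supplier through a trusted channel.

```python
import hashlib
from pathlib import Path

# Placeholder file name and digest, invented for illustration.
MODEL_PATH = Path("occupancy_model.onnx")
EXPECTED_SHA256 = "replace-with-the-supplier-published-digest"

def verify_artifact(path: Path, expected_digest: str) -> bool:
    """Return True only if the file's SHA-256 digest matches the
    digest published by the supplier via a trusted channel."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    return digest == expected_digest

if not verify_artifact(MODEL_PATH, EXPECTED_SHA256):
    raise RuntimeError("Model artifact failed integrity check; do not load it.")
```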

As outlined in Part 3 of the Guidance, AI systems that are developed and / or deployed must be kept up to date to ensure compliance with evolving industry standards. They should be audited and reviewed regularly for security weaknesses anywhere in the "pipeline". The Guidance further recommends isolating the AI development environment from the rest of the organisation's IT systems. Contractual arrangements with third-party suppliers should be reviewed to ensure FM service providers are promptly notified of security problems whenever they are discovered.

Practical Steps

In general, any FM service provider processing personal data using AI should conduct a data protection impact assessment to get a better sense of the impact and risk. Other voluntary assessments, such as algorithmic impact assessments, are also advisable.

Thereafter, audits and regular reviews of AI strategy, security measures and underlying contractual arrangements should be undertaken to ensure compliance with emerging trends and legal developments in relation to the use of AI.

If you require advice on any of the requirements discussed in this blog, please do not hesitate to contact our team.

Our data protection team deliver regular GDPR and Data Protection updates. The next online event takes place on 12 November 2020; click here to register.

Contributors

Andy Nolan

Partner

Jennifer Murphy

Senior Associate