As more artificial intelligence (AI) is introduced into our workplaces and our lives generally, greater regard must be paid to the fact that its use is not without risk. Whilst this blog does not envisage the issues seen in the latest instalment of the Terminator film franchise, it is inevitable that businesses will see an increase in AI-related disputes as time goes on.
Increased use of AI
AI-powered drones have been seen as a solution to a number of problems, such as the delivery of medical supplies to remote locations, the provision of traffic management solutions and the mapping of building sites in the construction industry. Indeed, last week Police Scotland unveiled AI recognition drones to help find missing persons.
However, if something goes wrong, for example if a drone crashes in the course of business, who is liable for the damage and loss caused?
Cause of the damage and loss
In the first instance, the cause of the crash would have to be ascertained, if possible. Questions to be answered may include: was the drone operating entirely autonomously when it crashed? Was there a design flaw? Was there an issue in the operating software or programming?
Once the cause is known, it is possible that one or a number of parties could be at fault, for example the operator of the drone, the employer, the manufacturer, the designer or the software provider.
Liability
In the UK, AI as a whole does not currently have its own body of regulation, although legislation has been passed in certain sectors. For example, the Automated and Electric Vehicles Act 2018 provides that where an accident is caused by an automated vehicle when driving itself, and the vehicle is insured, the insurer is directly liable for the resulting damage.
In other situations, established principles of law will require to be applied to demonstrate liability, most likely under the law of contract or tort (known as delict in Scotland). In the case of the former, the party suffering the damage and loss may be able to found upon a breach of an express or implied contractual term, such as a term relating to the safety of the product. In the case of the latter, they would have to establish that one of the possible defendants owed, and breached, a duty of care towards them.
The "black box" issue
However, if an AI system is fully autonomous and itself made the decision which resulted in the crash, through machine learning, it may not be easy, or even possible, to ascertain the cause. The question of liability also becomes more difficult: can it be established that the damage was foreseeable if the cause cannot be ascertained?
It remains to be seen how the judiciary will treat this area of law.
What businesses can do to minimise risk
In the meantime, in order to minimise the risk of future litigation, it is important for businesses utilising AI to ensure that:
- routine inspections and maintenance are carried out and logged to ensure the AI is operating correctly and that risks, such as algorithmic bias, are identified;
- developments in the law are reviewed and implemented if necessary;
- records are kept which document the AI's decision-making process (as far as is possible) in order to show that best practice is followed and that the AI is operating within industry standards; and
- contracts with manufacturers and programmers appropriately allocate liability through the use of warranties, indemnities and limitations of liability.