This is the third article in our four-part series on Artificial Intelligence (AI). In this instalment, we look at liability issues that may arise when developing or deploying AI solutions.
All businesses are concerned about liability when things don’t go according to plan. AI risk management strategies will vary significantly depending on how a business uses AI: businesses building AI models will have very different liability concerns from those incorporating third-party AI models into their products and services.
Who might be liable if things go wrong?
As we explored in our first article, AI tools are appealing to businesses because they can cut costs and drive efficiencies. However, businesses must be aware that AI, if not implemented carefully, can create liability resulting in reputational damage and third-party claims.
The question of liability is complex, especially as there are many parties involved in the development and use of AI, such as data providers, designers, programmers, and end users. This complicated matrix means that it is likely to be difficult to establish what harm has occurred and who bears responsibility.
The UK’s existing liability rules require a person with legal personality to whom responsibility for harm can be attributed, so responsibility for an AI system’s acts or omissions would lie with its creator, supplier, or user.
What liability can arise?
Examples of liability that may arise include strict liability under product liability law, fault-based liability for negligence, and liability arising under a contract. Further types of liability may arise for businesses that develop AI using publicly available data sets containing personal data or material protected by intellectual property rights.
To attribute liability, it is necessary to show that an act or omission led to the harm caused. AI will present challenges for existing rules on liability because its complexity and opacity mean that it may be harder for claimants to show how an act or omission associated with AI caused harm.
It may also be challenging for businesses to bring claims against AI providers that supply AI models or services on supplier-friendly contractual terms that disclaim liability. Businesses will need to understand the risks they are accepting in relation to matters such as system availability and updates, supplier data access, cyber-security protection, and possible intellectual property infringement. When integrating AI models or services, businesses should carefully check contracts to understand, and as far as possible negotiate, appropriate terms that address AI-specific risks in the allocation of rights, responsibilities, and liability.
It is important for businesses to identify how liability might arise through their use (or intended use) of AI. In making this assessment, businesses should carefully consider which international laws apply to their (and, if applicable, their customers’) use of AI in different jurisdictions.
What does the future hold from a regulatory perspective?
Regulatory changes are on the horizon. The EU has proposed the AI Liability Directive alongside the AI Act. The Directive aims to harmonise rules for non-contractual compensation claims and seeks to tackle the opacity of AI systems by introducing a presumption of causality where a causal link between the harm and the AI is reasonably likely.
The UK is likely to diverge from this EU approach, instead favouring a light-touch, ‘pro-innovation’ approach, as detailed in the Government’s AI white paper. We will continue to keep a close eye on these developing areas of regulation.
How can we help you?
If your business is developing AI, using it for internal business functions, or incorporating AI into products and services, speak to a member of our team who can advise on the legal risks that may arise.