Artificial Intelligence in focus – managing the legal risks presented by AI

In the fourth and final instalment of our AI article series, Joe Fitzgibbon and Thomas Mackie explore the measures organisations can take to mitigate the risks and benefit from the opportunities created by AI.

First published in: Insider.co.uk

14 December 2023

This is the fourth and final article in our series on Artificial Intelligence (AI). In this instalment, we explore how to manage the legal risks that arise when developing or deploying AI solutions.

In this series, we have highlighted some of the opportunities and legal risks that arise when businesses adopt AI solutions in their internal business functions and incorporate AI into their products and services.

Adopting clear governance structures

An obvious first step to identifying and managing legal risks is the adoption of clear governance, reporting, and oversight procedures. While this may seem overly formal, those responsible for the adoption of AI products in their organisation should proactively implement structures through which they can successfully integrate and safely manage AI.

Good governance procedures should reflect existing processes for significant technology adoption and draw on the input of key stakeholders such as technical engineers and members of the operations, finance, legal, privacy, and HR teams. Those stakeholders should clearly distinguish between how the business uses AI within its internal processes and how it embeds AI in its external products and services.

Contracts – business as usual?

In our third article, we highlighted the importance of reviewing contractual terms carefully to manage risks when using AI. Contracts are a core lever for ensuring that issues identified in risk assessments are managed effectively.

Agreements with suppliers of products and services incorporating AI should contain clear liability provisions that assign responsibility for issues relating to the AI capabilities of the products being offered, and (ideally) warranties that any AI models have not been trained on infringing material or in contravention of data protection rights. Suppliers should also be required to carry adequate insurance for AI-related harm.

Where businesses incorporate AI into their products or services, external-facing terms and conditions should include clear limitations of liability for AI-related harm (where legally possible), appropriate intellectual property provisions, and service level provisions.

These agreements should accurately reflect the legal risks associated with a business's actual or intended use of AI.

How do we manage the legal risks arising from AI?

As this series has explored, AI carries various risks, some of which are not yet fully understood. To manage the legal risks that may arise, businesses should think carefully about how they are using AI and consider taking some of the following steps:

  1. Embed clear governance structures and clearly identify those responsible for AI in your business.
  2. Take steps to manage the use of personal data, as outlined in article two.
  3. Identify the types of liability that your use of AI may attract, as outlined in article three.
  4. Identify the outputs AI is creating in your business (e.g. software code, designs, or written materials) and assess whether these attract intellectual property protection.
  5. Map how your organisation is using AI through third-party service providers, API integrations, web, and mobile devices.
  6. Train staff on the uses and limitations of AI where relevant to their roles.
  7. Review insurance policies to establish whether AI-specific risks are covered.

These are just a few of the steps you should take to manage legal risks when developing and implementing AI solutions.

This article was co-authored by Thomas Mackie, Trainee in our Corporate team.