What data protection laws apply?
AI is fuelled by the large volumes of data necessary for training accurate AI models. Where AI processes personal data, businesses will need to comply with laws such as the UK Data Protection Act 2018, the UK General Data Protection Regulation (UK GDPR), and any other international privacy laws that may apply to the personal data being processed.
How is your business using personal data and AI?
As discussed in our first article, AI is already used in many everyday business operations. Businesses must ask questions about how they are using (or intend to use) AI and personal data, including:
- Are we using AI to process personal data of our staff?
- Do our products and services use AI to process personal data of our customers?
- Are we using personal data to train AI models?
Ensuring compliance with data protection laws
Once businesses identify how they are using AI, they must take steps to comply with data protection laws. This includes identifying an appropriate legal basis for data processing, providing privacy notices, and adhering to privacy principles (such as transparency, fairness, purpose limitation, accuracy, and data minimisation).
The core characteristics of AI, including its ability to process vast volumes of data and its deep integration within products and services, can readily bring it into conflict with data protection laws.
We have already seen some examples of global data protection regulators limiting the use of AI – for example, in Italy, the regulatory authority temporarily banned ChatGPT due to concerns over its incompatibility with Italian data protection laws. This is a clear example of the need to ensure that the use of personal data within AI solutions meets the expectations of regulators.
If AI is implemented and monitored in a controlled and secure manner, there is no reason why it cannot comply with privacy laws. In the UK, the Information Commissioner's Office (ICO) continues to take a pragmatic and supportive approach to AI, demonstrating that privacy laws and highly functional AI can co-exist provided the right safeguards are put in place.
How do you ensure a compliant approach?
The ICO has issued guidance which provides businesses with a steer on how to implement AI whilst meeting the current data protection standards mandated by the UK GDPR. Whilst AI poses unique challenges to a number of the core data protection principles, these challenges can usually be overcome by adopting the following key steps:
- Embed privacy by design in your use of AI – align your internal structures, roles and responsibilities, training requirements, policies, and incentives with your overall AI governance and risk management strategies.
- Identify a legal basis for processing personal data in AI solutions. The appropriate basis may differ between the development and deployment phases of AI.
- Conduct a data protection impact assessment (DPIA) to identify risks in the development and deployment of AI solutions and how these affect the rights of individuals.
- Adopt a risk-based approach – assess the risks to the rights and freedoms of individuals that may arise when you use AI and implement appropriate and proportionate technical and organisational measures to mitigate those risks.
- Display clear and transparent privacy notices that explain how you are using AI to process personal data.
- Identify any automated decision-making processes. This type of decision-making is only permitted in specific circumstances and with the support of appropriate safeguards.
These are just a few of the steps you should take to ensure legal compliance when developing and implementing AI solutions that process personal data.
If you would like to know more about how to manage legal risks around implementing AI solutions in your business, please contact Joseph Fitzgibbon in Shepherd and Wedderburn’s media and technology team.
This article was co-authored by Alannah O’Hara, Trainee Solicitor in our media and technology team.