AI in the food and drink sector – opportunities and challenges

The use of AI is becoming more prevalent in the food and drink sector, presenting both innovative opportunities and questions around unintended consequences. To keep pace with these developments, businesses should consider whether they have the knowledge and resources to lawfully implement AI. 

11 September 2024


Artificial intelligence (AI) presents both opportunities and uncertainties for businesses across various sectors. Food and drink (F&D) businesses are no exception, and they are looking ahead to a decade of continuous technological growth. Industry leaders are taking advantage of AI to reduce cost and increase the scale, efficiency, and sustainability of their yield. 

Manufacturers and retailers are seeking to keep pace with the evolving market. To do this, they must be equipped with the knowledge and resources to embrace the new possibilities, while also tackling any challenges ahead. 

This article considers some of the ways in which F&D businesses can lawfully adopt AI within a variety of business functions, while also being mindful of potential unintended consequences and risks. 

Uses of AI in the F&D sector 

AI is present in numerous parts of the F&D industry. Some of the primary areas where businesses are using it to improve their services include sustainability, supply-chain resilience, quality control, customer experience, and compliance.

In the short term, examples of AI’s current uses include picking and sorting produce; managing warehouse staff and inventories; developing websites; and analysing sector data. Data analysis support can be game-changing in areas such as demand forecasting, sourcing, and regulatory review. 

In the long term, AI is enabling groundbreaking advancements. For example, AI is facilitating regenerative agriculture, developing alternative proteins, and even gene editing. 

Implementing AI technologies lawfully

There is no worldwide consensus on whether, and how, to regulate AI. In the UK, while the previous Prime Minister stated that he would not “rush to regulate” AI, the new Prime Minister, Sir Keir Starmer, has indicated his intention to regulate AI more stringently and to establish a Regulatory Innovation Office. 

On 17 July 2024, in The King’s Speech, it was said that the government “will seek to establish the appropriate legislation to place requirements on those working to develop the most powerful artificial intelligence models”. Businesses will be closely watching to see what impact this proposed legislation will have on their operations.

In the meantime, F&D businesses may wish to evaluate whether they are equipped to integrate AI. Considerations will depend on the business and AI tool in question, but some key points to deliberate are outlined below.

Engage with internal stakeholders

Many businesses will already have the foundational building blocks for integrating AI, such as a sophisticated IT department and internal systems for data protection, security, and risk. 

When contemplating implementation, it is important to engage with key stakeholders across the business at the earliest opportunity. This may include, for example, leads in IT, data protection, risk, communications, and marketing, as well as legal counsel. 

Businesses may also consider recruiting individuals or consulting external experts who are trained to work with the system or AI more generally. 

Consider impacts on customers

Depending on the technology, its use may have knock-on effects for customers. For example, customer data might be processed, or decisions affecting customers might be automated. Businesses should therefore evaluate whether compliance frameworks and privacy notices must be adapted to account for these impacts. 

Consider impacts on workers

Certain AI technologies could have consequences for the workforce. Businesses may need to recruit specialists, or re-skill some or all employees, to adjust to the adoption of AI. 

Another commonly referenced impact is that, in extreme cases, adopting certain technologies could lead to redundancies if particular roles are no longer required. The potential impacts on workers require careful consideration by stakeholders on a case-by-case basis.

Managing the risks associated with implementing AI technologies

Implementing AI tools could lead to unanticipated consequences. The best way for any business to manage risk is to be thoroughly aware of the technology it is engaging with and how that technology interacts with other regulated areas of the business. Some examples of ways to manage common risks are outlined below.

Vendor contracts

Manufacturers may have vendor contracts in place which were agreed at a time when AI use was not foreseen. To avoid exposing the business to unforeseen risks, it is advisable to review all vendor contracts and clarify whether and how vendors are using AI. Businesses might then consider updating those contracts accordingly. 

Intellectual property

At the core of managing AI risks relating to intellectual property (IP) is understanding what the business owns. This requires consideration of the IP entered into, and produced by, the AI tool, and of IP ownership throughout that process. This is particularly important for output-generating technologies such as chatbots. Businesses should also be mindful of inputting commercially sensitive or confidential information into such technologies. 

Patent protection is another important consideration when using AI to facilitate research and development of products. In the UK, for example, the Supreme Court has recently determined that a product or process is not patentable if it was “invented” by AI alone. 

If a business engages a third party to provide an AI system, it is also advisable to seek confirmation that the third party owns the IP in the system.

Corporate transactions

The National Security and Investment Act 2021 provides that certain corporate transactions and investments involving a target that carries out specified AI activities must be notified in advance to the Investment Security Unit. This can have implications for the timescale, and cost, of completing deals. Failure to comply with the legislation can result in criminal and financial penalties. 

Data protection

As mentioned, the use of customer data can be associated with the adoption of some AI tools. It is imperative that any data protection risk is managed, and that customers and employees can be confident their data is being used in compliance with data protection law. An effective way of managing this risk is to carry out a data protection impact assessment. 

Insurance

Another way for businesses to manage the risks of using AI is to ensure that they are insured and that any insurance in place covers AI risks, whether through a general or a specific policy. 

 

If you need advice about implementing AI technologies, and managing risks, please get in touch with Joe Fitzgibbon or Ruairidh Leishman.

This article was co-authored by Trainee Solicitor Catherine Templeton.