UK/US AI safety partnership

As the UK and US align their approaches to AI safety, more countries are moving to consider the competition implications of the technology's boom

UK and US forge an agreement on collaboration between AI Safety Institutes

On 1 April 2024, the UK and US Governments announced a Memorandum of Understanding (MOU) on collaboration and partnership in their work on AI safety. The MOU was signed by the US Department of Commerce and the UK Department for Science, Innovation and Technology on behalf of the countries' new AI Safety Institutes. The agreement is largely in line with commitments announced by both Governments at the UK AI Safety Summit held at Bletchley Park in November 2023. Its main thrust is a commitment to work collaboratively on the development of testing procedures for high-risk AI models and systems. Additionally, the institutes will perform a joint testing exercise on a publicly accessible AI model and explore the possibility of personnel exchanges. The UK has already committed to forming a similar partnership with the Government of Singapore, which has likewise taken a pro-innovation approach to facilitating the development of powerful AI models within voluntary safety parameters. Global political and industry leaders are also expected to gather at the AI Seoul Summit, co-hosted by the UK and South Korea, in May 2024 to provide updates on progress towards the commitments made at Bletchley Park.

Governments are keen to appear capable of handling AI, although regulating for safety remains unlikely

While the testing procedures developed by the UK and US AI Safety Institutes will be mandated only for the most powerful AI models, both Governments have been actively considering broader regulation on AI safety, though little progress has been made. Legislation has been proposed in the UK Parliament and the US Congress to create horizontal safety obligations for less powerful AI systems, akin to the EU's AI Act. However, passage is unlikely given the lack of support for those frameworks from government and political leadership. Instead, the two Governments have paid particular attention to the expertise and capacity of regulators and agencies to address issues arising from AI applications that may fall within the scope of existing regulation. Under the US Executive Order on AI, all executive agencies will be required to hire specialised staff to manage AI-related work, including the agency's own use of AI. Similarly, regulators in the UK have been charged with publishing an outline of AI-related risks in the sectors they oversee, as well as a report on their own capacity to address AI.

Regulators have been more active in considering AI competition concerns

While the EU remains in limited company in passing binding regulation on AI safety, a number of jurisdictions have forged ahead with regulatory efforts addressing competition concerns in AI-relevant markets. In the UK, the Competition and Markets Authority (CMA) has ongoing studies into both the market for foundation models and the cloud services market, which provides a key upstream input for the development and adoption of AI systems and is dominated by the same big tech firms investing heavily in AI. The Federal Trade Commission (FTC) in the US has also launched a wide-ranging probe into the partnerships formed between big tech firms, such as Google and Microsoft, and AI firms, including OpenAI and Anthropic. The European Commission and the Canadian Competition Bureau have taken more generalised approaches, consulting broadly on a range of competition matters related to AI. It remains to be seen whether these jurisdictions will ultimately take a more proactive approach to preventing the concentration of AI markets in the hands of the largest, and largely American, tech firms than they did during the development and eventual domination of other digital markets.