Regulating AI: How does the UK’s approach stack up?

The UK has proposed light-touch regulation for AI, designed to build trust and drive adoption. As other countries take a harder line, will its flexible approach strike the right balance between promoting innovation and protecting against the risk of harm?

  • The UK leads Europe in private investment in AI, raising £3.8bn in 2021. With a national strategy in place, the Government has now set out how it proposes to regulate AI, with rules designed to promote innovation, empower businesses and increase consumer trust.

  • Policymakers looking to increase oversight of AI have not all taken the same approach. While the EU is seeking to set the global standard for regulation via the AI Act, some countries in Asia Pacific (Australia, Japan, Singapore) have preferred voluntary rules and reliance on existing legislation.

  • The EU is looking to define and set rules for AI systems according to a ‘pyramid of risk’. The UK’s approach is also risk-based but has been pitched as more adaptable and pro-innovation, with decentralisation affording regulatory bodies the power to oversee the use of AI within their own sectors.

  • As it pursues flexibility and proportionality, the UK should keep responsibility and ethics top of mind. This means balancing the stimulation of innovation and investment with protection against potential harms, to ensure AI develops in a positive, societally beneficial way.

UK unveils its proposals for an AI rulebook

Artificial Intelligence (AI) is increasingly a focus area for policymakers who want to maximise its potential benefits while ensuring it is employed safely and responsibly. The UK Government has invested £2.3bn in the technology since 2014 and in September 2021 launched a 10-year strategy to position the country as a ‘world leader’ in AI. The plan has three main objectives: to increase the number of discoveries made and exploited in the UK; to ensure the UK benefits from the highest productivity growth from AI; and to establish the “most trusted and pro-innovation” system for AI governance in the world. To achieve this, the Government wants more people working in AI, greater access to data and the diffusion of AI across all sectors and regions – a nod to its own levelling up agenda.

Since publishing the strategy, the Government has been working to develop its approach to regulating AI. On 18 July 2022, it unveiled an AI policy paper containing proposed rules designed to promote innovation, empower businesses (by removing unnecessary barriers) and increase public trust in the technology. The draft rulebook also aims to address future risks and opportunities, and to provide a consistent framework that avoids regulatory overlaps, inconsistencies and blindspots. The Government believes this will offer clarity for businesses and investors, and enable consumers to be confident that AI systems are safe and robust. The UK’s intended approach is based on six key principles that regulators must apply, with the flexibility to implement them in the ways that best fit how AI is used in their sectors:

  1. Ensure that AI is used safely, with a proportionate approach to managing risk;

  2. Ensure that AI is technically secure and functions as designed, to instil confidence and support research and commercialisation;

  3. Make sure that AI is appropriately transparent and explainable to improve understanding around how the technology makes decisions;

  4. Embed considerations of fairness into AI, with high-impact outcomes expected to be justifiable and not arbitrary;

  5. Define the responsibility of legal persons, whether corporate or natural, for AI governance; and

  6. Clarify routes to redress or contestability, particularly in situations where the use of AI can have a material impact on people’s lives.

Other countries are formulating AI strategies and in some cases regulatory frameworks

The UK is not alone in developing a strategy and making investments to drive the application of AI across society and industry. For example, by 2030, China is expected to have invested $1tn in domestic AI companies and capabilities. South Korea unveiled its National Strategy for AI in December 2019, committing £110m to bolster the country’s AI industry. In addition, in November 2021, the French Government announced the second phase of its AI Strategy. With total expected funding of more than €2.1bn over the period to 2025, the strategy aims to foster digital skills, while nurturing SMEs and growing France’s share of the global AI market to 10-15%.

On regulation, Europe has led the charge to create rules governing how AI develops and is employed in future. Through the AI Act, the EU is looking to design a comprehensive framework for the technology. A draft report on the law released in April 2022 triggered over 3,000 amendments in the European Parliament’s internal market and civil liberties committees. The co-rapporteurs found general agreement on some aspects of the act ahead of the summer recess; however, several other issues – e.g. what is considered a ‘high-risk’ application – will require resolution and consensus before final texts can be voted on in the autumn. The telecoms industry has urged lawmakers to ensure the act does not hinder operators from deploying AI solutions in areas such as predictive maintenance, energy efficiency and customer experience. Introduced as a bill in June 2022, Canada’s AI and Data Act (AIDA) is also in draft form – and provides an example of proposed legislation influenced by the EU’s approach to tech policy.

Meanwhile, many countries are yet to establish legally binding rules for AI. In the US, a leader in AI investment and technological development, the majority of bills and resolutions introduced so far have originated from individual states or federal agencies. That said, the FTC has reported to Congress on the issue, while the proposed Digital Platform Commission Act would require tech firms’ algorithms to be “fair, transparent and without harmful, abusive, anticompetitive or deceptive bias”. In Australia, the Government released an AI Ethics Framework in 2019 and an Action Plan in 2021, which seeks to establish the country as a pioneer in developing and adopting trusted, secure and responsible AI. Despite this, Australia currently has no specific legislation for AI or automated decision making (ADM).

A comparison of regimes highlights both similarities and differences

Policymakers have not necessarily taken the same approach to governing the development and use of AI. For instance, certain countries in Asia Pacific appear to prefer voluntary rules and the use of existing legislation for this purpose. In 2017, Japan issued non-binding R&D guidelines to promote the benefits and reduce the risks of AI. More recently, the Ministry of Economy, Trade and Industry issued updated AI Governance Guidelines, rather than explicit regulation, to provide advice on system development and operation, risk analysis and more. Singapore has also avoided a prescriptive model, issuing guidelines and non-binding frameworks for users of AI to adopt as appropriate. Similarly, Australia’s Ethics Framework is voluntary, although the March 2022 issues paper on AI and ADM from the Digital Technology Taskforce may signal a change of tack.

The EU has opted for a relatively harder line, with its proposed legislation looking to define and set rules for AI according to four risk categories: unacceptable risk, high risk, limited risk, and minimal or no risk. Many of the obligations and safeguards set out in the AI Act apply only to forbidden practices, to high-risk applications (e.g. deep fakes and systems used for electoral processes or to count electronic votes) and to certain AI systems subject to transparency requirements. In addition, the European Commission would act as enforcer to avoid inconsistencies within the bloc, with the power to impose financial penalties. However, as the Parliament works to finalise its text, there are tensions at the member state level. While Germany has supported a privacy-focused set of rules (especially with respect to biometric identification), France has pushed for a less prescriptive regulatory environment. France’s former Secretary of State for Digital bemoaned the EU’s tendency for “regulation before innovation” and noted that its “risk-only approach” undermines the ability to create much-needed champions.

The UK is also aiming to be staunchly pro-innovation while remaining risk-based – albeit without an EU-style ‘pyramid of risk’. It considers that its approach will promote growth and avoid barriers, thereby aligning with the stable and supportive regulatory environment envisaged in July 2021’s ‘Plan for Digital Regulation’. The UK’s proposals would take a less centralised approach to AI oversight than the EU’s, with different regulators taking responsibility rather than a single governing body. The Government argues that this better reflects the growing use of AI across sectors and would create proportionate and adaptable regulation to support AI’s rapid adoption. By contrast, Canada’s draft act is a clear case of the ‘Brussels effect’, with the bill proposing a novel regulatory framework for so-called high-impact AI systems. These must be developed and deployed in a way that identifies, assesses and mitigates the risks of harm and bias, while some more concerning systems may be prohibited altogether.

The UK is in a strong position but there are risks to its approach

Negotiations are in their early stages in some countries and approaching a crunch point elsewhere. While Canada’s draft act targets high-impact systems and the prohibition of “material harm”, these and other key terms still need to be defined. In the EU, lawmakers are currently working through proposed amendments to the draft text. The Parliament wants to reach a common position by November, although negotiations could prove painstaking. At the same time, the bloc is under pressure to catch up with the US and China, which are leading the way in investment, research and attracting talent. If the EU passes the AI Act, it will create pressure for other countries (including Canada) to implement their own laws for a technology that has not historically been regulated. One such country could be Brazil, where the Senate has noted that it would look to the EU for legislative inspiration.

The UK Government will be hoping that its regulatory approach supports an already thriving AI sector and boosts levels of private investment, which reached £3.8bn ($4.65bn) in 2021 (behind only China and the US). Despite its strong position, the UK’s framework will encounter challenges, and a number of issues may prove contentious and require careful consideration. For example, the decision to build on the OECD’s AI principles appears sensible, but international collaboration will be required to ensure consistency in their application. Similarly, with the Government encouraging sectoral regulators to consider lighter-touch options (e.g. guidance, voluntary measures and sandboxes), there will need to be coordination between Ofcom, the FCA, the CMA and others to avoid a confusing and piecemeal application of the AI principles across markets.

If the Government implements the six principles on a non-statutory basis, they may not engender the trust they are designed to build, which would have implications for some use cases. As a bottom line, while simplicity, flexibility and proportionality are reasonable goals, the UK must ensure that responsibility, safety and ethics remain top of mind. Striking a balance between stimulating innovation and protecting against potential abuses or biases would help the UK continue to attract investment and startups, while ensuring AI develops in a positive, societally beneficial direction.