Loosening the regulatory grip on AI

Plans to regulate AI are increasingly being rolled back as policymakers look to avoid any strict guardrails that could hinder AI development

Policymakers around the world have begun to move away from strict approaches to regulating AI, while still prioritising online safety

Australia, the EU and the UK are all looking to roll back their initial legislative plans or rules for AI, signalling a wider trend towards a more relaxed regulatory environment for the technology. Both Australia and the UK had originally pledged significant regulatory packages akin to the EU’s AI Act, but the former abandoned these plans in favour of a national AI plan in December 2025, while the latter has slowly moved away from any such policymaking.

Even in the EU, where the AI Act has been a one-of-a-kind rulebook for the technology since its 2024 adoption, amendments are already underway: the EC’s AI Omnibus was agreed in the early hours of 7 May 2026 by the EU Parliament and Council following intense trilogue negotiations. The Omnibus will delay the enforcement of the AI Act’s rules for high-risk AI applications by over a year, to December 2027, and will ban AI systems that generate sexualised deepfakes, following concerns raised about X’s Grok AI tool. German Chancellor Friedrich Merz also successfully pushed for the exemption of industrial AI applications from the Act’s scope.

China remains one of the few jurisdictions maintaining a stricter approach to AI, having introduced a host of new rules over the past year, such as those governing online safety in interactive AI services. It has also proposed the establishment of the World Artificial Intelligence Cooperation Organisation (WAICO), which would work to set international governance standards for AI.

The Trump Administration has called for a national-level approach to a number of key AI policy areas

On 20 March 2026, the Trump Administration issued a national legislative framework for the “most pressing” AI policy topics in the US. The framework sets out six key objectives covering online safety, the cost of data centres to taxpayers, copyright issues, freedom of speech, deregulation to remove “barriers to innovation”, and AI skills. The protection of children in relation to AI is a clear priority of the framework, while other issues, such as AI’s impact on free speech and the need to upskill the US workforce in AI, receive less attention. On copyright, the framework stresses the importance of respecting creative works, but maintains that the text and data mining involved in AI training does not breach copyright law. The Administration also calls for assurances that taxpayers will not face increased electricity costs as a result of new data centre deployments. The framework’s calls to remove “barriers to innovation” align with the federal administration’s approach to the wider digital ecosystem, and are particularly reminiscent of FCC Chair Brendan Carr’s “Delete, delete, delete” agenda, under which the Federal Communications Commission (FCC) is looking to remove regulation considered to hinder innovation. In a similar vein, the framework focuses on the preemption of state AI laws throughout, proposing a national-level approach to avoid “burdensome” state rules.

Online safety is prioritised as the framework’s first pillar, with the Trump Administration calling on Congress to introduce age assurance requirements

The framework’s “Protecting Children and Empowering Parents” section calls on Congress to enact measures to strengthen online safety, such as establishing “commercially reasonable, privacy protective, age-assurance requirements” for any AI platforms likely to be accessed by minors (under-18s). The framework highlights parental age confirmation as a potential age assurance method, but leaves any further interpretation to Congress. Concerns have been raised more broadly about the effectiveness of existing age assurance methods, such as physical documentation, facial age estimation, data scraping or some form of digital ID, with none of these seemingly providing a perfect solution. Data privacy concerns are also prominent in almost all of these methods. The framework additionally issues broader calls to Congress to strengthen the protection of children online, particularly relating to data collection, targeted advertising and deepfakes.

State-level AI legislation will be preempted if it is deemed overly burdensome on the development or use of AI

The framework’s seventh section, which is not included as one of its key objectives, focuses on the preemption of “cumbersome” state-level AI laws. It calls on Congress to preempt state AI laws that impose “undue burdens” to avoid a fragmented approach across the country, and to ease the development of AI without overly burdensome rules. It also establishes three key areas for preemption:

  1. It argues states should not be permitted to regulate AI development, because it is an “inherently interstate phenomenon” with implications for foreign policy and national security;

  2. It says states should not unduly burden Americans’ use of AI; and

  3. It argues states should not be permitted to penalise AI developers for a third party’s unlawful conduct involving their AI models.

Some states have already begun to legislate on AI, with Utah creating its own AI Policy Office and setting up regulatory sandboxes for AI development in the healthcare and education sectors. While the use of these sandboxes seems to align with the framework’s objectives around avoiding burdensome AI rules, it is unclear whether such legislation would be preempted by any further federal legislation. Other states, such as Connecticut, Oklahoma and Texas, are reportedly pursuing similar state-level rules.