In line with the US’ more relaxed approach to AI regulation, the federal government has further restricted state policymakers’ ability to rein in the technology
President Donald Trump has signed an Executive Order condemning state AI regulation in favour of a federal-level approach
On 11 December 2025, President Donald Trump signed an Executive Order (EO) on ‘Ensuring a National Policy Framework for Artificial Intelligence’. The EO establishes that the US should adopt a minimally burdensome, national-level regulatory framework for AI and effectively demands the repeal of existing state-level AI regulation. It follows recent failed attempts in Congress to pass a preemption of state-level AI regulation into law. Broadly, the EO lays out that the national-level framework should prioritise the protection of children, the prevention of censorship, copyright protections and community safeguarding.

Sections three and four of the EO direct the creation of an AI Litigation Task Force, which will be responsible for reviewing and challenging in court state laws that conflict with national policy, including those that attempt to regulate commerce across state lines. Section eight of the EO sets out a brief plan for a federal-level legislative framework for AI that will replace state-level regulation moving forward, though it does not reference any of the pending bills in Congress already aimed at creating such a regime. The EO does allow state AI laws covering child safety protections, permitting for AI infrastructure, and state government procurement and use of AI to stand, unless other aspects of these laws are otherwise deemed unlawful. Most of these provisions are set to take effect within 90 days of the EO’s signing and will be carried out by various agencies and regulators under the jurisdiction of the Department of Commerce.
BEAD funding will be held back from states with AI laws that are considered too onerous
At a time when the Federal Communications Commission (FCC) is already considering preempting state AI laws that may act as a barrier to network deployment, the EO further instructs the National Telecommunications and Information Administration (NTIA) to deem any state with ‘onerous’ AI laws ineligible for non-deployment funds granted through the Broadband Equity, Access and Deployment (BEAD) programme. In addition to providing funding for network deployment in unserved rural communities, BEAD programme terms originally allowed states to commit funds to broadband adoption activities and other connectivity infrastructure priorities. The Trump Administration has already restructured the BEAD programme to significantly cut deployment costs and remove public policy objectives written into programme terms, including affordable tariff requirements. The addition of this AI law restriction will likely only raise the barriers to states receiving their congressionally allocated BEAD funding. The same section also calls on departments across the executive branch to assess their grant programmes and determine whether funding could be held back from states that have enacted onerous AI laws.
The US’ preference for less restrictive AI regulation mirrors newly announced approaches in Australia, the EU and the UK
This EO continues the US’ abandonment of attempts to regulate AI, which began with the Trump Administration’s January 2025 revocation of President Biden’s 2023 EO that proposed a more safety-focused approach to AI regulation. This shift has been seen internationally too, with governments in both Australia and the UK recently confirming that they no longer plan to develop horizontal AI regulation akin to the EU’s AI Act. Even in the EU’s case, the recent Digital Omnibus package has proposed the rollback of some key AI Act provisions. Across these jurisdictions, policymakers have attributed these changes to the need to encourage AI innovation unhampered by regulatory barriers. This change in approach has only emerged recently: all of these jurisdictions originally supported international declarations on ensuring AI safety, as recently as 2024 at the AI Seoul Summit in South Korea and in 2023 at the AI Safety Summit at Bletchley Park, UK. While these countries have relaxed their approaches to AI regulation, China has seemingly taken the opposite approach, having recently announced its plans to lead international AI regulatory standards through its proposal for a World AI Cooperation Organisation (WAICO). With the US making clear that it is focused on beating China in the AI race, it is counterintuitive to see China aiming to compete by further regulating AI rather than deregulating it.
