
US: Voluntary commitments on AI

While currently relying on a self-regulatory approach, the US plans to pursue legislation; with no clear timetable for doing so, however, it lags behind other markets such as the EU

Commitments from OpenAI, Google, Microsoft and others: On 21 July 2023, the US Government announced that it had secured eight voluntary commitments from seven leading AI companies to manage the risks posed by the fast-moving technology. The firms in question (Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI) have agreed to work immediately towards the safe, secure and transparent development of AI – three principles the White House sees as critical to ensuring the technology evolves in a responsible way. The Biden-Harris administration also considers that companies at the forefront of emerging technologies have a responsibility to ensure their products are safe. To make the most of AI’s potential, the Government is therefore encouraging industry to uphold the highest standards so that innovation does not infringe upon Americans’ fundamental rights. For now, however, it is unclear precisely how the signatories will be held to account.

Pledges aim to ensure safety and security, and build trust amongst the public: The firms have committed to:

  • Internal and external security testing of their AI systems before release, guarding against some major sources of risk;

  • Sharing information, including best practice, across industry and with governments, civil society and academia;

  • Investing in cybersecurity and insider threat safeguards to protect proprietary and unreleased model weights (the most essential part of an AI system);

  • Facilitating third-party discovery and reporting of vulnerabilities;

  • Developing robust technical mechanisms to ensure that users know when content is AI-generated, such as a watermarking system;

  • Public reporting of their AI systems’ capabilities, limitations and areas of appropriate and inappropriate use;

  • Prioritising research on the societal risks that AI can pose, including on avoiding harmful bias and discrimination, and protecting privacy; and

  • Deploying advanced AI systems to help address challenges such as cancer prevention and mitigating climate change.

The Government flags that there is “much more work underway”: The White House has already consulted on the voluntary commitments with a range of countries (including Canada, Germany and the UK) and will work with allies to establish a strong international framework to govern the development and use of AI. It has also promised – albeit without clarifying timelines – further decisive action at the national level, where the Biden-Harris administration is currently developing an executive order and will pursue bipartisan legislation to help the US “lead the way in responsible innovation”. In October 2022, the Government published a non-binding Blueprint for an AI Bill of Rights; however, Congress remains divided over introducing formal regulation. While the US claims to be moving with urgency to seize the promise and manage the potential challenges of AI, its progress from a policy standpoint lags behind markets such as the EU, and achieving political consensus on prospective new rules could make for a long road ahead.