Google calls for regulation of AI

The company’s CEO, Sundar Pichai, called for international alignment and agreement on “core values” in a piece for the Financial Times.

Background: Artificial intelligence has steadily moved up regulators’ agendas over the last two years. In particular, the previous European Commission adopted an AI initiative in 2018 and set up a High-Level Expert Group on Artificial Intelligence made up of industry experts and academics. This group issued ethics guidelines on AI, as well as policy and investment recommendations on ‘trustworthy AI’. The newly appointed Commission is set to follow the same path, and has hinted at regulating specific AI applications such as facial recognition. In the US, the government has adopted guidelines on AI that call for a flexible approach to avoid stifling innovation.

Google calls for regulation: In a piece written for the Financial Times on 20 January 2020, Google’s CEO Sundar Pichai recognises the need for AI to be regulated: “there is no question in my mind that artificial intelligence needs to be regulated. It is too important not to. The only question is how to approach it.” Pichai notes that existing rules such as the GDPR can provide a strong foundation, and that good regulatory frameworks will have to consider safety, explainability, fairness and accountability. The approach will have to be proportionate, balancing potential harms against social opportunities.

The need for international cooperation: Perhaps unsurprisingly, Pichai also calls for regulators to aim for international alignment so that global standards can work. For that to happen, there needs to be agreement on core values. More surprising is Pichai’s remark that, to make sure the technology is harnessed for good and made available to everyone, market forces should not be left to decide how it is used.