Google asks the EC to tread carefully in regulating AI

The company’s response to the EC’s consultation on Artificial Intelligence warns against a prescriptive approach for ‘high-risk’ AI.

Background: The European Commission is running a consultation on its proposals to regulate Artificial Intelligence. The consultation is open until 14 June 2020 and is based on a White Paper on AI in which the EC proposes to differentiate its approach between low-risk and high-risk AI, with the latter subject to safeguards such as transparency over the data used to train AI systems, human oversight, and traceability. Low-risk AI would be left to a largely self-regulatory approach, through a voluntary labelling scheme.

Google warns the EC against the idea: On 28 May 2020, Google submitted its response to the consultation. It warns the EC against creating what would amount to an impact assessment process for high-risk AI, which would risk duplicating review procedures that already govern many higher-risk products and add needless complexity. Instead, Google proposes that the EC evaluate AI applications by their outcomes, rather than by prescribing the processes behind them.

Sandboxes should be created for early R&D: If the EC opts for an ex-ante conformity assessment regime, Google suggests that confidential testing and piloting of an AI application be allowed prior to any conformity assessment, within the bounds set by existing sectoral regulation. If such pre-assessment testing is not permitted, organisations may take an exceedingly precautionary stance when considering investments in new products, which could hinder innovation. Google also calls for a grandfathering clause for products already on the market, and for clear guidance on when repeat assessment procedures are warranted, e.g. for products that receive significant updates.