Europe’s AI Act moves forward

The Parliament aims to reach a common position by November. Many unresolved issues could mean it will take longer

Parliament won’t reach a common position until November: The co-rapporteurs of the Artificial Intelligence Act for the European Parliament have finalised their draft report on the Act, which will form the basis for discussion over the coming months. In May, the report will begin to be debated in the rapporteurs’ respective committees (the Committee on the Internal Market and Consumer Protection, and the Committee on Civil Liberties, Justice and Home Affairs). There is still a long way to go before Parliament reaches a common position: the two committees aren’t expected to vote on their final texts until October, followed by a plenary vote in November.

Additional safeguards are added to the original proposal: The draft report highlights the areas on which MEPs have already found broad agreement. The starting point is the very definition of AI, which the rapporteurs seek to keep as broad as possible to avoid the risk that some AI use cases could slip through the cracks and remain unregulated. The report leaves the original definition broadly unchanged, but removes the specification that AI is linked to human-defined objectives. The scope of what constitutes high-risk AI (which as such would be subject to strict safeguards under the Act) is also broadened. The list of high-risk applications was extended to include predictive policing, systems designed to interact with children, medical triage, insurance, deepfakes, and algorithms that affect democratic processes (e.g. those used for electoral campaigns or to count electronic votes). Under this proposal, public authorities would face more substantial obligations and transparency requirements when using high-risk applications. Finally, the report envisages a two-layered approach to governance, where the European Commission would take over from national authorities in cases with wider societal impact.

Forget the record speed of the DMA, we’re in for painstaking negotiations: While the rapporteurs have agreed on some aspects, there are many others where a deal is yet to be reached. Examples include how responsibility is shared between providers and users of AI systems, and how conformity assessments for high-risk systems will be carried out. The original proposal relies on companies conducting self-assessments, which some MEPs consider insufficient, whereas others worry about imposing burdensome requirements on businesses. There are also disagreements over whether biometric recognition systems should be banned (the current proposal allows them in limited cases), and over the scope of regulatory sandboxes, which conservative MEPs would like to expand. All this makes for a legislative process likely to last far longer than that of the Digital Markets Act, which was approved in record time. Once Parliament finalises its text, it will have to negotiate with the Council, where similar tensions are at play: France is leading a large group of countries advocating a flexible, pro-innovation approach, whereas Germany is championing a privacy-focused set of rules.