The European Union’s AI Act, a risk-based framework for regulating artificial intelligence applications, has cleared what appears to be the final significant hurdle to adoption. Member State representatives in Coreper voted today to confirm the final text of the draft law, affirming the political agreement reached in December after lengthy negotiations and marking a crucial step in the process.
The proposed regulation prohibits uses of AI deemed to pose unacceptable risk, such as employing AI for social scoring. It sets governance rules for high-risk uses, where AI applications could threaten health, safety, fundamental rights, the environment, democracy, or the rule of law, and it applies transparency requirements to applications like AI chatbots. ‘Low-risk’ AI applications, however, fall outside the law’s scope.
The unanimous backing of all 27 EU Member State ambassadors brings relief to Brussels, especially given opposition led by France. France had sought to avoid legal constraints that could impede the rapid growth of domestic generative AI startups such as Mistral AI, which it hopes will become national champions capable of challenging the dominance of US AI giants.
With the vote secured, the regulation now returns to the European Parliament for a final vote on the compromise text. Since the major backlash came from a few Member States, including Germany and Italy, and the Council has now backed the text unanimously, the upcoming votes are expected to be a formality. The EU’s flagship AI Act is likely to be adopted into law in the coming months.
Upon adoption, the Act will enter into force 20 days after publication in the EU’s Official Journal. A tiered implementation period will follow before the new rules apply to in-scope apps and AI models: the bans on prohibited uses of AI take effect after a six-month grace period (likely around fall); the rules on foundational models (general-purpose AIs) will not apply until 2025; and the bulk of the remaining provisions won’t take effect until two years after the law’s publication.
The Commission has already begun setting up an AI Office to oversee compliance by providers of a subset of more powerful foundational models deemed to pose systemic risk. It has also announced measures to support homegrown AI developers, including retooling the bloc’s supercomputer network to facilitate generative AI model training.