The European Union has made significant efforts to regulate artificial intelligence (AI) to ensure it is trustworthy and ethically aligned. However, critics argue that the current regulatory framework still falls short of adequately safeguarding society. The European Economic and Social Committee (EESC) highlights several risks associated with AI and calls for a more comprehensive approach that prioritizes human oversight and includes input from social partners in AI deployment.
One of the main concerns raised by the EESC is the potential for AI to cause job losses and widen inequalities. Automation and algorithmic decision-making could undermine job security, reduce worker autonomy, and increase workplace stress. Moreover, if AI is used for workplace surveillance, it could harm mental health by imposing performance metrics that workers find difficult to challenge.
AI systems may also perpetuate discrimination, especially in areas like hiring, promotions, and layoffs. Biases in training data or algorithms can lead to unfair practices, a problem compounded by the opacity of AI systems, which makes it difficult for individuals to contest decisions that affect their careers. In addition, AI’s significant energy consumption raises environmental concerns, and its potential use in criminal activities underlines the need for strong safeguards to protect critical infrastructure.
The European Union’s AI Act, the first legal framework of its kind, categorizes AI applications into four risk levels and imposes strict requirements on high-risk systems to ensure safety and respect for fundamental rights. While regulations like the GDPR offer some protection, they do not fully address the specific challenges AI presents in the workplace. The AI Act does not adequately protect workers’ rights with respect to algorithmic management, and the Platform Work Directive, which targets gig workers, leaves gaps for other sectors of the workforce.
The EESC advocates a human-centric approach to AI that balances technological progress with the protection of citizens’ rights. This model encourages dialogue with civil society stakeholders and promotes training and upskilling initiatives to meet the challenges AI poses to the workforce. The EESC also stresses that AI in public services must support human oversight, transparency, and informed consent, backed by strong cybersecurity measures to protect personal data from breaches and attacks. Furthermore, the EESC calls for coordinated investment in AI development across the EU, emphasizing the need for secure infrastructure and resilience against digital threats, misinformation, and the misuse of social media.