The European Commission's Stakeholder Feedback on AI Definitions and Prohibited Practices
- Filippos Lamnidis
The European Commission's recent analysis of stakeholder feedback on AI definitions and prohibited practices sheds light on the ongoing efforts to create a well-balanced regulatory framework for artificial intelligence (AI) in Europe. This consultation, which engaged a wide range of stakeholders from tech developers to civil society groups, highlights both the promise and the challenges of regulating AI technologies in a way that protects individual rights without stifling innovation.
Defining AI Systems: Clarity is Key
A major concern raised by stakeholders was the need for a clearer definition of what qualifies as an AI system. The AI Act's current wording is seen as too broad, potentially capturing technologies that don't actually exhibit the characteristics of AI. Many traditional software systems, such as basic automation tools or rule-based decision-making programs, don't learn or adapt after deployment, yet they could still fall within the existing definition.
Respondents, particularly from the tech industry, argued that the current language could unintentionally regulate software that presents none of the risks the Act is designed to address. Technologies such as simple data-processing scripts or deterministic rule engines, in routine use for decades, could find themselves subject to heavy regulation for no corresponding benefit.
To address these concerns, stakeholders called for a more precise distinction between AI and traditional software. The suggestion was to anchor the definition in features like adaptiveness, autonomy, and the capacity to infer outputs from data, the hallmarks of genuine AI systems. Only systems capable of significant learning or behavioral change after deployment would then fall under AI-specific rules, while simpler technologies would be excluded.
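To make that distinction concrete, here is a minimal, hypothetical Python sketch, not drawn from the consultation or the Act itself; the loan-screening scenario, function names, and thresholds are all illustrative. It contrasts a fixed rule-based check, whose behavior never changes after deployment, with a toy system that revises its own decision boundary from feedback, the kind of post-deployment adaptiveness stakeholders identified as a hallmark of an AI system.

```python
def rule_based_loan_screen(income: float, debts: float) -> bool:
    """Deterministic, hand-written rule: behavior is fixed at deployment.

    Stakeholders argued that software like this should fall outside the
    AI Act's definition, since it neither learns nor adapts.
    """
    return (debts / income) < 0.4 if income > 0 else False


class AdaptiveLoanScreen:
    """Toy 'learning' system: its decision boundary is inferred from data
    and shifts as new outcomes arrive, the adaptiveness trait stakeholders
    identified as separating AI from traditional software.
    """

    def __init__(self) -> None:
        self.threshold = 0.4  # starting guess, revised by feedback below

    def update(self, debt_ratio: float, repaid: bool) -> None:
        # Nudge the learned threshold toward observed repayment outcomes.
        if repaid and debt_ratio > self.threshold:
            self.threshold += 0.01
        elif not repaid and debt_ratio < self.threshold:
            self.threshold -= 0.01

    def decide(self, income: float, debts: float) -> bool:
        return (debts / income) < self.threshold if income > 0 else False


if __name__ == "__main__":
    model = AdaptiveLoanScreen()
    model.update(debt_ratio=0.45, repaid=True)      # behavior changes post-deployment
    print(rule_based_loan_screen(50_000, 15_000))   # always the same answer
    print(model.decide(50_000, 22_500))             # depends on what it has 'seen'
```

Under the definition stakeholders favor, only the second system would attract AI-specific obligations: its outputs depend on inference from data rather than on rules fixed by its author.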
Prohibited Practices: Emotions, Biometrics, and Social Scoring
Another area of significant concern in the consultation was the set of practices that the AI Act prohibits outright under Article 5. These include the use of AI for emotion recognition in workplaces and educational settings, biometric categorization that infers sensitive attributes, and social scoring. Many respondents voiced strong objections to these applications, particularly in contexts like hiring and surveillance.
Emotion recognition technologies were flagged for their potential to invade privacy and be used manipulatively. Respondents worried that such systems could read people's emotional states in ways that exploit their vulnerabilities, whether in marketing or in the workplace. Emotion AI used to assess job candidates during interviews, for example, could reinforce bias or discrimination, especially against people with disabilities or anyone who does not express emotion in "typical" ways.
Similarly, biometric categorization, in which AI analyzes and sorts individuals based on biometric data such as facial images, was seen as a major ethical concern. While such technologies might be useful in narrow applications, they also risk enabling mass surveillance, with attendant privacy violations and discrimination. Biometric data used without proper safeguards could entrench systemic bias, especially in policing, immigration enforcement, or public spaces.
Another deeply concerning issue is social scoring, where AI systems rate people based on their behavior or inferred characteristics, from creditworthiness to social conduct. Such scores can produce unfair discrimination when used to decide employment eligibility or access to services. The fear is that decisions end up resting on narrow, potentially biased criteria, reinforcing existing inequalities in society.
Protecting Innovation While Ensuring Accountability
While there is broad agreement on the need for AI regulation, many stakeholders stressed the importance of not stifling innovation. Small and medium-sized enterprises (SMEs) in particular pointed to the complexity and cost of complying with the AI Act, warning that the regulatory burden could be too heavy for smaller organizations to bear. Overly strict rules, they argued, would disproportionately affect smaller innovators, keeping them out of the market or deterring them from developing new technologies.
At the same time, there was widespread support for the creation of clear, practical guidelines that would help organizations understand their obligations without excessive red tape. For example, many respondents called for the European Commission to provide examples of what constitutes "acceptable" versus "unacceptable" use of AI, particularly in high-risk areas like healthcare, law enforcement, and consumer protection. Real-world case studies and specific scenarios would help businesses and organizations navigate the complexities of the law and ensure they are operating within legal and ethical boundaries.
A Harmonized Approach to AI Regulation
Another key issue raised by stakeholders was the need for the AI Act to align with existing EU regulations, particularly the General Data Protection Regulation (GDPR) and other data protection frameworks. The feedback pointed to potential overlaps and contradictions between the AI Act and GDPR, especially when it comes to the use of biometric data, facial recognition, and emotion recognition technologies.
Respondents called for more clarity on how the AI Act and GDPR intersect, particularly in relation to consent, data protection, and privacy rights. This would help ensure that AI technologies are used responsibly, without violating individuals' fundamental rights to privacy and data protection. A harmonized approach would make compliance easier for organizations and avoid the potential for conflicting regulations.
Moving Forward: Striking the Right Balance
The consultation process has revealed that while there is significant support for robust AI regulations, there are also valid concerns about the potential impact on innovation, particularly for SMEs. Stakeholders have called for a more balanced approach that provides clear guidelines on what constitutes harmful or prohibited practices, while also fostering an environment where AI innovation can thrive.
The European Commission now faces the challenge of refining the AI Act to ensure that it provides legal certainty for both businesses and individuals while protecting fundamental rights. By taking into account the feedback received from the consultation, the Commission has an opportunity to create a regulatory framework that can guide the ethical development and deployment of AI technologies in Europe.
Conclusion
As AI technologies continue to evolve, so too must the regulations that govern them. The European Commission's consultation feedback has provided valuable insight into the complexities of defining and regulating AI systems. What is clear is the need for clarity, precision, and practical examples in the AI Act, so that it protects individuals' rights without overregulating technologies that do not pose the same risks. Moving forward, the Commission must continue to engage with stakeholders to ensure the regulation safeguards privacy and human rights while supporting innovation in AI.
