AlgorithmWatch: As of February 2025: Harmful AI applications prohibited in the EU

Source URL: https://algorithmwatch.org/en/ai-act-prohibitions-february-2025/
Source: AlgorithmWatch
Title: As of February 2025: Harmful AI applications prohibited in the EU

Feedly Summary: Bans under the EU AI Act become applicable now. Certain risky AI systems that have already been trialed or used in everyday life are now, at least partially, prohibited.

AI Summary and Description: Yes

**Summary:**
The text discusses the implications of flawed and biased AI systems in the context of the EU's AI Act, which prohibits AI practices deemed to pose an unacceptable risk to safety and fundamental rights. It highlights the dangers of discriminatory algorithms, lists the prohibited practices, and identifies regulatory gaps around national security exemptions. The text underscores the necessity of robust compliance and the potential consequences of violations within the EU framework.

**Detailed Description:**
The provided content outlines critical issues surrounding AI systems, notably their tendency to produce flawed, biased, and erroneous outcomes, which can lead to significant social harms. Here are the major points highlighted in the text:

– **Social Impact of Biased AI:**
– AI systems have led to wrongful arrests and imprisonments due to erroneous face recognition matches, exemplifying how discriminatory technology can have life-altering consequences for individuals.
– The Dutch childcare benefits scandal, in which a discriminatory risk-scoring algorithm contributed to thousands of families being wrongly accused of fraud, illustrates how biased algorithms can inflict severe financial and emotional harm on innocent people.

– **EU AI Act’s Introduction:**
– The EU’s AI Act represents a landmark effort to regulate AI technologies by outlining unacceptable practices that threaten public safety, health, and human rights.
– The Act applies not only to systems used within the EU but also to those developed elsewhere, provided they are placed on the EU market or their output is used within the Union.

– **Prohibited AI Practices (Article 5):**
– AI systems that use manipulative or deceptive techniques to distort people's behavior in ways that cause significant harm, such as voice-activated toys that encourage dangerous behavior in children.
– AI systems that exploit the vulnerabilities of people due to their age, disability, or social or economic situation.
– Social scoring that leads to detrimental treatment in contexts unrelated to the one in which the data was originally collected, or treatment disproportionate to the behavior in question.
– Building face recognition databases through untargeted mass scraping of facial images from the internet or CCTV footage.
– Real-time face recognition in publicly accessible spaces by law enforcement, except under narrowly defined conditions.

– **Partially Banned Systems:**
– Predictive policing that assesses the risk of a person committing a crime based solely on profiling or personality traits is banned; systems that support human assessments grounded in objective, verifiable facts remain permitted.
– Biometric categorization systems that infer sensitive traits such as ethnicity, political opinions, religious beliefs, or sexual orientation are prohibited, with exceptions for certain law enforcement uses.
– Emotion recognition in workplaces and educational institutions is banned, except where used for medical or safety reasons.

– **National Security Loopholes:**
– The text raises concerns over exemptions for national security use cases, which allow certain AI practices to evade the regulations intended to protect human rights and privacy.
– It also points out that, because the Act does not govern exports, systems considered incompatible with EU values can still be sold to countries outside the EU.

– **Remedies for Rights Violations:**
– Individuals who believe their rights have been violated can file complaints with the designated market surveillance authority, such as Germany's Bundesnetzagentur.
– The effectiveness of these authorities in enforcing compliance remains uncertain, underscoring the need for diligent oversight.

In summary, the text serves as a significant commentary on the intersection of AI technology, regulation, and human rights, particularly within the European Union framework. The implications for compliance professionals are clear: understanding the specifics of the AI Act is crucial for ensuring adherence to emerging legal and ethical standards in AI development and deployment. Security stakeholders must remain vigilant in navigating potential loopholes and advocating for responsible AI practices.