The European Union's AI Act has entered full enforcement as of March 2026, imposing the world's most comprehensive AI regulations. The rules apply extraterritorially: companies anywhere in the world must comply if they place AI systems on the EU market or if their systems' outputs are used in the EU.
Risk-Based Classification
- Unacceptable risk (banned): Social scoring, real-time remote biometric identification in publicly accessible spaces (with narrow law-enforcement exceptions), manipulative AI
- High risk: AI in hiring, credit scoring, healthcare, law enforcement — requires audits and transparency
- Limited risk: Chatbots and content generators — must disclose AI involvement
- Minimal risk: Spam filters, video games — no requirements
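The four tiers above can be sketched as a simple lookup. This is a hypothetical illustration only — the tier names follow the Act, but the use-case mapping and the `classify` helper are assumptions for demonstration; real classification depends on the Act's annexes and legal analysis.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned"
    HIGH = "audits and transparency required"
    LIMITED = "must disclose AI involvement"
    MINIMAL = "no requirements"

# Hypothetical mapping of example use cases (from the list above) to tiers.
USE_CASE_TIERS = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "hiring": RiskTier.HIGH,
    "credit scoring": RiskTier.HIGH,
    "chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

def classify(use_case: str):
    # Unknown use cases return None: they need legal review, not a default tier.
    return USE_CASE_TIERS.get(use_case)
```

A lookup like this only covers the examples listed; in practice, borderline systems (e.g. a chatbot used for hiring screening) can land in a higher tier than their surface category suggests.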
Key Requirements for High-Risk AI
Companies must maintain detailed documentation of training data, conduct regular bias audits, implement human oversight mechanisms, and register their systems in the EU AI database. Violations can draw fines of up to €35 million or 7% of global annual turnover, whichever is higher.
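The four headline duties could be tracked internally with a record like the following. This is a minimal sketch, assuming a company-internal compliance tracker; the class, its fields, and `outstanding_duties` are illustrative names, not anything defined by the Act.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class HighRiskComplianceRecord:
    # Hypothetical internal record covering the Act's four headline duties.
    system_name: str
    training_data_documented: bool = False
    last_bias_audit: Optional[str] = None          # ISO date of most recent audit
    human_oversight_mechanism: Optional[str] = None
    eu_database_registration_id: Optional[str] = None

    def outstanding_duties(self) -> list:
        """List which of the four duties are still unmet for this system."""
        gaps = []
        if not self.training_data_documented:
            gaps.append("document training data")
        if self.last_bias_audit is None:
            gaps.append("conduct bias audit")
        if self.human_oversight_mechanism is None:
            gaps.append("implement human oversight")
        if self.eu_database_registration_id is None:
            gaps.append("register in EU AI database")
        return gaps
```

A fresh record reports all four duties as outstanding; as audits, oversight, and registration are completed, the gap list shrinks, which makes it easy to surface non-compliant systems before a regulator does.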
U.S. tech companies are scrambling to comply. Many have created separate EU-compliant versions of their AI products — a practice critics call "regulatory arbitrage."