EU AI Act

The world's first comprehensive law on artificial intelligence, setting global standards for AI safety and accountability.

In force since: August 2024
High-risk requirements: August 2026
Global reach

Implementation Timeline: The AI Act is being phased in gradually. Critical provisions for prohibited AI practices are already in effect, while requirements for high-risk AI systems apply from August 2026.

The AI Act in Plain English

The EU AI Act is the world's first comprehensive law regulating artificial intelligence. Think of it as a safety framework that ensures AI systems are developed and used responsibly, protecting people from potential harm while fostering innovation.

The law works on a simple principle: the riskier the AI application, the stricter the rules. It bans AI practices that are considered unacceptable (like social scoring or manipulative AI), heavily regulates high-risk AI systems (like those used in hiring, healthcare, or law enforcement), and provides lighter rules for lower-risk applications.

How the AI Act Works: Risk-Based Categories

Prohibited AI

Completely banned AI practices:

  • Subliminal manipulation
  • Social scoring systems
  • Real-time biometric identification in public spaces
  • Emotion recognition in schools and workplaces

High-Risk AI

Strict requirements for:

  • Hiring and HR systems
  • Healthcare diagnostics
  • Credit scoring
  • Law enforcement tools
  • Critical infrastructure

Limited Risk AI

Transparency requirements:

  • Chatbots and virtual assistants
  • AI-generated content
  • Deepfakes

Users must be clearly told when they're interacting with AI or viewing AI-generated content.

Minimal Risk AI

Mostly unrestricted:

  • Video games
  • Spam filters
  • Basic recommendation systems
  • Simple automation tools

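As a rough illustration of the risk-based logic above, here is a minimal Python sketch that maps a plain-language use case to one of the four tiers. The keyword lists are simplified stand-ins we made up for this example; the Act's actual Annexes define the categories in far more legal detail, so treat this as a mental model, not a compliance tool.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Illustrative keyword sets drawn from the categories above.
PROHIBITED_USES = {"social scoring", "subliminal manipulation",
                   "real-time public biometric identification"}
HIGH_RISK_USES = {"hiring", "credit scoring", "healthcare diagnostics",
                  "law enforcement", "critical infrastructure"}
LIMITED_RISK_USES = {"chatbot", "deepfake", "ai-generated content"}

def classify(use_case: str) -> RiskTier:
    """Map a plain-language use case to an AI Act risk tier (first match wins)."""
    u = use_case.lower()
    if any(term in u for term in PROHIBITED_USES):
        return RiskTier.PROHIBITED
    if any(term in u for term in HIGH_RISK_USES):
        return RiskTier.HIGH
    if any(term in u for term in LIMITED_RISK_USES):
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

print(classify("chatbot for customer support"))  # RiskTier.LIMITED
print(classify("CV screening for hiring"))       # RiskTier.HIGH
```

Note that the tiers are ordered: a system is checked against the strictest category first, which mirrors how the Act's obligations stack up.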
What This Means for You

For Businesses

  • Assess your AI systems against the risk categories
  • Implement compliance measures for high-risk AI
  • Document AI decision-making processes
  • Train staff on AI governance
  • Consider AI impact assessments
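The "assess and document" steps above boil down to keeping an internal register of your AI systems. A minimal sketch of what one entry might look like; the field names and the example vendor are our own suggestion, not terminology the Act mandates:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AISystemRecord:
    """One row in an internal AI inventory."""
    name: str
    vendor: str
    purpose: str                  # what decisions the system supports
    risk_category: str            # prohibited / high / limited / minimal
    human_oversight: bool         # is a person in the loop for decisions?
    impact_assessment_done: bool
    last_reviewed: date
    notes: list[str] = field(default_factory=list)

register = [
    AISystemRecord(
        name="CV screening tool",
        vendor="ExampleHR Inc.",  # hypothetical vendor
        purpose="shortlisting job applicants",
        risk_category="high",
        human_oversight=True,
        impact_assessment_done=False,
        last_reviewed=date(2025, 6, 1),
        notes=["Impact assessment scheduled for Q3"],
    ),
]

# Flag high-risk entries still missing an impact assessment.
gaps = [r.name for r in register
        if r.risk_category == "high" and not r.impact_assessment_done]
print(gaps)  # ['CV screening tool']
```

Even a spreadsheet with these columns goes a long way: the point is having a single place that answers "what AI do we run, in which risk tier, and who checked it last?"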

For Developers

  • Design AI systems with compliance in mind
  • Implement robust testing and validation
  • Ensure transparency and explainability
  • Build in human oversight mechanisms
  • Maintain detailed development records

For Citizens

  • You have rights regarding AI decisions about you
  • You can request explanations of AI decisions
  • You can file complaints about AI systems
  • You'll know when you're interacting with AI
  • You're better protected from harmful AI practices

Implementation Timeline

August 2024 Active

The AI Act entered into force. The bans on prohibited AI practices and the first AI literacy obligations followed in February 2025 and are already in effect.

August 2025 Upcoming

Requirements for general-purpose AI models (the models behind tools like ChatGPT and Claude) take effect.

August 2026 Future

Full requirements for high-risk AI systems come into force. This is when most businesses will need to be fully compliant.

August 2027 Future

Extended deadline for certain AI systems already on the market before the Act's rules applied, including high-risk AI built into products covered by existing EU product legislation.

Penalties: What You Risk

Prohibited AI Violations

Up to €35M

or 7% of global annual turnover, whichever is higher

High-Risk AI Non-Compliance

Up to €15M

or 3% of global annual turnover, whichever is higher

Supplying Incorrect or Misleading Information

Up to €7.5M

or 1% of global annual turnover, whichever is higher
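In each tier, the cap is the fixed amount or the turnover percentage, whichever is higher (for SMEs and start-ups, the Act uses whichever is lower). A small sketch with the top two tiers to show the arithmetic:

```python
def max_fine(tier: str, annual_turnover_eur: float, sme: bool = False) -> float:
    """Upper bound of an AI Act fine: the fixed cap or the turnover
    percentage, whichever is higher (for SMEs, whichever is lower)."""
    caps = {
        "prohibited": (35_000_000, 0.07),  # prohibited-practice violations
        "high_risk":  (15_000_000, 0.03),  # high-risk non-compliance
    }
    fixed, pct = caps[tier]
    candidates = (fixed, pct * annual_turnover_eur)
    return min(candidates) if sme else max(candidates)

# A company with €1bn global turnover: 7% (€70M) exceeds the €35M fixed cap.
print(max_fine("prohibited", 1_000_000_000))  # 70000000.0

# A smaller firm with €100M turnover: the €15M fixed cap is the binding limit.
print(max_fine("high_risk", 100_000_000))  # 15000000
```

The takeaway: for large companies the percentage dominates, so exposure scales with revenue rather than stopping at the headline euro figure.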

Common AI Act Questions

Does the AI Act apply to my small business?

It depends on what AI you use, not your company size. If you use AI for hiring, customer credit assessment, or other high-risk applications, you're covered regardless of business size. However, small businesses often use lower-risk AI that faces minimal requirements.

What if I only use AI tools like ChatGPT or similar services?

If you're just using general AI tools for writing, research, or basic tasks, you typically face minimal obligations. The heavy compliance burden falls on the AI system providers (like OpenAI), not end users. However, if you use these tools for high-risk applications like hiring decisions, additional rules may apply.

How do I know if my AI system is "high-risk"?

The AI Act provides specific lists of high-risk applications, including: AI used in hiring and worker management, credit scoring and loan decisions, biometric identification, critical infrastructure management, healthcare diagnostics, law enforcement, and education/training evaluation.

Do I need to comply if I'm not based in the EU?

Yes, if your AI systems affect people in the EU. Like GDPR, the AI Act has extraterritorial reach. If you provide AI services to EU residents or your AI systems are used in the EU, you need to comply with relevant requirements.

Complete AI Act Compliance Suite

We're developing comprehensive tools and guides to make AI Act compliance straightforward and actionable for businesses of all sizes.

Step-by-Step Compliance Guide

Detailed implementation roadmaps for each AI risk category, with actionable checklists and timeline guidance.

AI Risk Assessment Tool

Interactive tool to quickly classify your AI systems and understand which requirements apply to your business.

Gap Analysis Framework

Comprehensive assessment framework to identify compliance gaps and prioritize remediation efforts.

Be First to Access Our AI Act Tools

Join our mailing list to get early access to comprehensive AI Act compliance tools, guides, and updates when they're released.

No spam, unsubscribe anytime. We'll only email you about AI Act compliance resources.

Expected Release Timeline

Q4 2025: Risk Assessment Tool
Q1 2026: Compliance Guides
Q2 2026: Gap Analysis Framework

🤝 Still Feeling Overwhelmed?

EU digital regulations like the AI Act can be complex. Our free tools and guides work well for most people, but if you're dealing with something particularly challenging or have tight deadlines, we're here to help.