What Is the EU AI Act?
The EU AI Act (Regulation 2024/1689) is the world's first comprehensive legal framework for artificial intelligence. It regulates AI systems sold or used within the European Union based on the level of risk they pose to health, safety, and fundamental rights.
For mid-market companies, the Act means that any AI system touching EU residents - whether you are based in Europe or not - must meet specific transparency, documentation, and governance requirements. Non-compliance carries penalties of up to €35 million or 7% of global annual turnover, whichever is higher (Article 99). That makes the EU AI Act one of the most consequential regulations since GDPR.
Unlike voluntary frameworks, the EU AI Act is binding law. It applies to providers (those who develop or place AI systems on the market), deployers (those who use AI systems in a professional capacity), and importers or distributors who bring AI products into the EU. If your company uses AI-powered tools for hiring, credit scoring, customer service, or internal operations - and any of those tools interact with EU data subjects - you are likely in scope.
EU AI Act Risk Classification: The Four Tiers
The EU AI Act organises AI systems into four risk categories. Your compliance obligations depend entirely on which tier your AI systems fall into. Understanding this classification is the first step toward building a compliant AI governance program.
Prohibited AI Practices (Chapter II, Article 5)
Certain AI applications are banned outright under the EU AI Act. These include:
- Social scoring - AI systems that evaluate or classify individuals based on social behaviour or personal characteristics, leading to detrimental treatment unrelated to the context in which the data was generated.
- Real-time biometric identification in publicly accessible spaces for law enforcement purposes, with narrow exceptions for specific serious crimes.
- Manipulative or deceptive AI - systems that deploy subliminal, manipulative, or deceptive techniques to distort behaviour and cause significant harm.
- Emotion recognition in workplaces and education - AI that infers emotions of employees or students, except for medical or safety reasons.
- Untargeted scraping of facial images from the internet or CCTV to build facial recognition databases.
Mid-market companies should audit all AI tools in use to confirm none fall into these prohibited categories. Even third-party vendor tools can create liability if they perform prohibited functions on your behalf.
High-Risk AI Systems (Chapter III, Articles 6-49)
High-risk AI systems carry the heaviest compliance burden. Under Annex III of the Act, these include AI used in:
- Employment and worker management - recruitment screening, CV filtering, interview evaluation, task allocation, performance monitoring, and termination decisions.
- Credit and insurance scoring - AI systems that assess creditworthiness or set insurance premiums for natural persons.
- Education and vocational training - systems that determine access to education, evaluate learning outcomes, or detect prohibited behaviour during examinations.
- Essential services - AI used in healthcare diagnostics, critical infrastructure management, law enforcement, border control, and judicial processes.
- Biometric identification and categorisation - remote biometric systems not covered by the prohibition tier.
For each high-risk system, Articles 9 to 15 require providers to implement a risk management system, maintain technical documentation, ensure data governance, provide transparency information to deployers, enable human oversight, and meet standards for accuracy, robustness, and cybersecurity. Deployers must monitor the system in operation and report serious incidents to authorities.
This is where mid-market companies face the greatest operational challenge. Many common business tools - AI-driven applicant tracking systems, automated customer risk profiling, AI-assisted performance reviews - fall squarely into the high-risk category.
Limited Risk AI Systems (Chapter IV, Article 50)
Limited risk systems are subject to transparency obligations. The primary requirement is that users must be informed they are interacting with an AI system. This tier covers:
- Chatbots and conversational AI - users must be told they are interacting with a machine, not a human.
- Deepfakes and AI-generated content - any synthetically generated or manipulated image, audio, or video must be labelled as such.
- Emotion recognition and biometric categorisation - systems not classified as high-risk must still disclose their operation to affected individuals.
For most mid-market companies, customer-facing chatbots and AI content generation tools will fall into this category. Compliance is lighter but still mandatory.
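The chatbot obligation is straightforward to operationalise. Below is a minimal Python sketch of one way to do it: the disclosure is shown at session start, and the fact that it was shown is recorded for later audit. The function and field names are illustrative assumptions, not taken from the regulation or any particular framework.

```python
from datetime import datetime, timezone

# Text shown to users before any AI-generated reply (Article 50 transparency).
AI_DISCLOSURE = (
    "You are chatting with an AI assistant, not a human. "
    "Responses are generated automatically."
)

def start_chat_session(user_id: str) -> dict:
    """Open a session and record that the disclosure was displayed."""
    return {
        "user_id": user_id,
        "disclosure_text": AI_DISCLOSURE,
        "disclosure_shown_at": datetime.now(timezone.utc).isoformat(),
        "messages": [],
    }

session = start_chat_session("user-123")
print(session["disclosure_text"])
```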
Minimal Risk AI Systems
AI systems that pose minimal or no risk - such as spam filters, AI-enabled video games, or basic recommendation engines - are largely unregulated under the Act. The EU encourages voluntary codes of conduct for these systems but does not impose binding obligations.
However, even minimal-risk systems benefit from governance documentation. If a regulator questions whether a system is truly minimal risk, having an internal assessment on record protects your organisation. The policy engine in Areebi can help you document these classifications systematically.
Key Obligations for Mid-Market Companies
Mid-market companies (typically 200-2,000 employees) face a particular challenge with the EU AI Act. They often deploy dozens of AI-powered tools across HR, finance, operations, and customer service - but lack the dedicated regulatory teams that large enterprises maintain. Here are the core obligations that matter most.
1. AI System Inventory and Classification
Article 6 sets out the rules for determining which systems count as high-risk, so every AI system in use must first be assessed against the risk classification framework. Before you can comply, you must know what AI you are running. This includes third-party SaaS tools with embedded AI features. A thorough AI risk assessment is the foundation of compliance.
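To make the inventory concrete, here is a minimal Python sketch of what a classification record might capture: the system, its business function, the assigned tier, and the documented rationale. All field names and the example vendor are illustrative assumptions, not prescribed by the Act.

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"   # Article 5
    HIGH = "high"               # Annex III
    LIMITED = "limited"         # Article 50 transparency
    MINIMAL = "minimal"         # no binding obligations

@dataclass
class AISystemRecord:
    name: str
    vendor: str                  # third-party SaaS tools count too
    business_function: str       # e.g. "recruitment screening"
    risk_tier: RiskTier
    rationale: str               # why this tier was assigned
    assessed_on: date
    owner: str                   # accountable person inside the organisation

inventory = [
    AISystemRecord(
        name="CV screening module",
        vendor="ExampleATS (hypothetical)",
        business_function="recruitment screening",
        risk_tier=RiskTier.HIGH,   # employment use cases sit in Annex III
        rationale="Filters candidates; Annex III employment category.",
        assessed_on=date(2026, 1, 15),
        owner="Head of People Ops",
    ),
]
```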
2. Technical Documentation (Article 11)
High-risk AI systems require detailed technical documentation covering the system's intended purpose, design specifications, training data characteristics, performance metrics, and known limitations. This documentation must be kept current and made available to national authorities upon request.
3. Data Governance (Article 10)
Training, validation, and testing datasets for high-risk systems must meet quality criteria. Companies must examine data for biases, ensure representativeness, and document data provenance. This intersects directly with GDPR data protection requirements - personal data used for AI training requires a lawful basis and must respect data subject rights.
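As one illustration of the bias examination Article 10 calls for, the sketch below compares observed group shares in a training set against a reference distribution and flags any gap beyond a tolerance. The field name, reference shares, and 5% tolerance are illustrative assumptions; a real bias analysis is considerably broader than this single check.

```python
from collections import Counter

def representation_report(records, field, reference, tolerance=0.05):
    """Compare observed group shares against reference population shares."""
    counts = Counter(r[field] for r in records)
    total = sum(counts.values())
    report = {}
    for group, expected_share in reference.items():
        observed_share = counts.get(group, 0) / total
        report[group] = {
            "observed": round(observed_share, 3),
            "expected": expected_share,
            "flagged": abs(observed_share - expected_share) > tolerance,
        }
    return report

training_rows = [{"gender": "female"}, {"gender": "male"}, {"gender": "male"}]
print(representation_report(training_rows, "gender",
                            {"female": 0.5, "male": 0.5}))
```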
4. Human Oversight (Article 14)
High-risk AI systems must be designed to allow effective human oversight. This means a qualified person must be able to understand the system's outputs, decide not to use the system in a particular situation, override or reverse outputs, and intervene in or halt operation. For mid-market companies, this creates a training and staffing requirement - someone in the organisation must be competent to oversee each high-risk system.
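One common pattern for meeting this requirement is a review gate: high-risk outputs are held until a designated person approves, overrides, or halts the system. The sketch below shows that pattern under illustrative names; the Act does not mandate any particular mechanism.

```python
from dataclasses import dataclass

@dataclass
class ReviewDecision:
    reviewer: str
    action: str              # "approve" | "override" | "halt"
    replacement: object = None
    justification: str = ""

def apply_oversight(model_output, decision: ReviewDecision):
    """Apply a human reviewer's decision to a held model output."""
    if decision.action == "approve":
        return model_output
    if decision.action == "override":
        return decision.replacement   # human substitutes their own result
    if decision.action == "halt":
        raise RuntimeError(
            f"System halted by {decision.reviewer}: {decision.justification}"
        )
    raise ValueError(f"Unknown action: {decision.action}")

result = apply_oversight(
    {"candidate": "A-1042", "score": 0.91},
    ReviewDecision(reviewer="hr-lead", action="approve"),
)
```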
5. Conformity Assessment and Registration
Certain high-risk systems listed in Annex III must undergo conformity assessment procedures before being placed on the market or put into service (Articles 43-46). Providers must register these systems in the EU database established under Article 71, and deployers that are public authorities - or act on their behalf - must also register their use (Article 49). Even if you only use, rather than develop, a high-risk system, check whether the registration duty applies to you.
6. Post-Market Monitoring and Incident Reporting
Providers must implement post-market monitoring systems (Article 72) and report serious incidents to the relevant national authority within 15 days (Article 73). Deployers who detect a serious incident must notify both the provider and the national authority. For mid-market companies, this means establishing clear internal escalation procedures.
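A simple way to keep the Article 73 clock visible internally is to compute the reporting deadline the moment an incident is logged, as in this sketch. The 15-day figure is the Act's general window; shorter windows apply to certain incident types, so treat the constant as illustrative of the general case.

```python
from datetime import date, timedelta

REPORTING_WINDOW_DAYS = 15  # Article 73 general deadline for serious incidents

def reporting_deadline(detected_on: date) -> date:
    """Latest date by which the national authority must be notified."""
    return detected_on + timedelta(days=REPORTING_WINDOW_DAYS)

def days_remaining(detected_on: date, today: date) -> int:
    return (reporting_deadline(detected_on) - today).days

detected = date(2026, 9, 1)
print(f"Report to the national authority by {reporting_deadline(detected)}")
print(f"Days remaining: {days_remaining(detected, date(2026, 9, 5))}")
```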
EU AI Act Compliance Timeline
The EU AI Act entered into force on 1 August 2024, with obligations phased in over a staggered timeline. Mid-market companies should plan against these key dates:
| Date | Milestone |
|---|---|
| 2 February 2025 | Prohibitions on banned AI practices take effect (Article 5). Companies must have ceased use of any prohibited AI systems. |
| 2 August 2025 | Obligations for general-purpose AI (GPAI) models take effect (Chapter V). Providers of GPAI models with systemic risk face additional transparency and risk assessment requirements. |
| 2 August 2026 | Full application of most provisions, including high-risk AI system requirements (Chapter III), transparency obligations (Article 50), governance rules, penalties, and enforcement mechanisms. This is the primary compliance deadline for mid-market companies. |
| 2 August 2027 | Extended deadline for high-risk AI systems that are safety components of products already subject to existing EU product legislation (Annex I), such as medical devices, machinery, and vehicles. |
The 2 August 2026 deadline is the critical date for most mid-market companies. With that date approaching, organisations should already be well into their compliance programmes. Waiting until mid-2026 to begin is a significant risk - building the required documentation, governance processes, and monitoring infrastructure takes months, not weeks.
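For planning purposes, the phase-in dates reduce to simple countdown arithmetic. This small sketch prints the remaining runway for each milestone in the table above.

```python
from datetime import date

MILESTONES = {
    date(2025, 2, 2): "Prohibitions (Article 5) apply",
    date(2025, 8, 2): "GPAI model obligations apply",
    date(2026, 8, 2): "Most provisions fully apply",
    date(2027, 8, 2): "Annex I safety-component systems",
}

today = date.today()
for deadline, label in sorted(MILESTONES.items()):
    delta = (deadline - today).days
    status = f"{delta} days remaining" if delta > 0 else "already in effect"
    print(f"{deadline}: {label} ({status})")
```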
Areebi's governance platform accelerates this timeline by providing pre-built policy templates, automated risk classification workflows, and audit-ready documentation tools designed specifically for mid-market deployments.
EU AI Act Compliance Checklist for Mid-Market
Use this checklist to assess your organisation's readiness against the EU AI Act's core requirements. Each item maps to a specific obligation in the regulation.
- Complete an AI system inventory - Catalogue every AI tool, model, and automated decision-making system in use across the organisation, including embedded AI features in third-party SaaS products. Start with a structured AI assessment.
- Classify each system by risk tier - Map each system against the four-tier framework (prohibited, high-risk, limited, minimal). Document the rationale for each classification decision.
- Eliminate prohibited practices - Verify that no system performs social scoring, unauthorised biometric identification, manipulative targeting, or workplace emotion recognition. This deadline has already passed (February 2025).
- Prepare technical documentation for high-risk systems - For each high-risk system, compile intended purpose, architecture details, training data documentation, performance benchmarks, known limitations, and instructions for use.
- Implement data governance procedures - Ensure training and operational data meets quality standards. Document data provenance, bias assessments, and alignment with GDPR requirements.
- Establish human oversight mechanisms - Designate qualified personnel for each high-risk system. Define escalation procedures, override capabilities, and intervention protocols.
- Register high-risk systems - Complete registration in the EU database for high-risk AI systems where required (Articles 49 and 71).
- Build incident reporting procedures - Create internal workflows to detect, assess, and report serious incidents within the 15-day window mandated by Article 73.
- Conduct conformity assessments - For systems requiring third-party assessment, engage a notified body. For self-assessment systems, complete internal conformity procedures and maintain records.
- Train staff - Article 4 requires that all personnel involved in the operation and use of AI systems have sufficient AI literacy, an obligation that has applied since 2 February 2025. Document training programmes and completion records.
- Align with existing frameworks - Map EU AI Act requirements against your existing SOC 2 and HIPAA controls where applicable. Many documentation and monitoring requirements overlap, creating efficiency opportunities.
- Establish ongoing monitoring - Put in place post-market monitoring processes that continuously assess AI system performance, drift, and compliance status (see the drift-detection sketch after this list).
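As one concrete example of a drift check, the sketch below computes a population stability index (PSI) between a baseline score sample and a recent one. PSI is a common drift heuristic, not something the Act prescribes; the bin count and the 0.2 rule of thumb are conventions, not regulatory thresholds.

```python
import math

def psi(expected, actual, bins=10):
    """Population stability index between a baseline and a recent sample."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0

    def shares(values):
        counts = [0] * bins
        for v in values:
            idx = min(int((v - lo) / width), bins - 1)
            counts[idx] += 1
        # Floor each share at a small epsilon to avoid log(0)
        return [max(c / len(values), 1e-6) for c in counts]

    e, a = shares(expected), shares(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8]
recent   = [0.5, 0.6, 0.6, 0.7, 0.8, 0.8, 0.9, 0.9]
value = psi(baseline, recent)
print(f"PSI = {value:.3f}  (a common rule of thumb flags drift above 0.2)")
```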
How Areebi Helps Mid-Market Companies Meet EU AI Act Requirements
The EU AI Act demands capabilities that most mid-market companies lack internally: centralised AI inventories, automated risk classification, policy enforcement, audit-ready documentation, and continuous monitoring. Building these systems from scratch is expensive and time-consuming.
Areebi is purpose-built to close this gap. The platform provides a secure AI governance layer that sits across your organisation's AI usage, giving you visibility and control without slowing down operations.
Automated Risk Classification
Areebi automatically maps your AI systems against the EU AI Act's four-tier risk framework. When employees adopt new AI tools or existing tools add AI features, Areebi flags the change and initiates classification workflows - ensuring nothing slips through without an assessment.
Policy Enforcement
Using Areebi's policy engine, you can define organisation-wide rules that mirror EU AI Act requirements: mandatory human review for high-risk outputs, data retention limits, transparency disclosures for chatbot interactions, and restrictions on prohibited use cases. Policies are enforced programmatically, not through memos that get ignored.
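To make "enforced programmatically" concrete, here is a deliberately simplified sketch of what declarative policy rules can look like. This is not Areebi's actual API; every rule name and field is invented for illustration only.

```python
# Hypothetical rule set; field names are illustrative, not a real API.
POLICY_RULES = [
    {
        "id": "human-review-high-risk",
        "applies_to": {"risk_tier": "high"},
        "action": "require_human_review",   # Article 14 oversight
    },
    {
        "id": "chatbot-disclosure",
        "applies_to": {"category": "chatbot"},
        "action": "inject_ai_disclosure",   # Article 50 transparency
    },
    {
        "id": "block-prohibited",
        "applies_to": {"risk_tier": "prohibited"},
        "action": "block",                  # Article 5 bans
    },
]

def evaluate(system: dict) -> list:
    """Return the actions that apply to a given AI system."""
    return [
        rule["action"]
        for rule in POLICY_RULES
        if all(system.get(k) == v for k, v in rule["applies_to"].items())
    ]

print(evaluate({"risk_tier": "high", "category": "ats"}))
```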
Audit-Ready Documentation
Every AI interaction, policy decision, classification assessment, and incident report is logged in an immutable audit trail. When regulators request evidence of compliance, you can generate comprehensive reports in minutes - not the weeks it takes to compile documentation manually.
Cross-Framework Alignment
Most mid-market companies do not face the EU AI Act in isolation. They are also managing GDPR, SOC 2, HIPAA, or sector-specific requirements. Areebi maps controls across frameworks, so a single governance process satisfies multiple regulatory obligations simultaneously.
Government and Regulated Sector Support
For companies operating in highly regulated industries or working with government clients, Areebi provides deployment options that meet the strictest data residency and security requirements, including on-premises and sovereign cloud configurations.
Mid-market companies need to move from ad-hoc AI usage to governed AI operations. The EU AI Act makes that transition mandatory. See Areebi's pricing for a compliance-ready platform that scales with your organisation.
Free Templates
Put this into practice with our expert-built templates
The CISO's AI Security Policy Checklist
A comprehensive 47-point checklist across 9 security domains to help CISOs build a board-ready AI governance policy. Covers acceptable use, data classification, shadow AI, vendor assessment, compliance mapping, incident response, and more.
Enterprise AI Acceptable Use Policy Template
A ready-to-customise 52-provision AI acceptable use policy template covering 8 policy domains. Built for CISOs and compliance teams who need a professional, board-ready policy document that employees actually understand and follow. Maps to HIPAA, SOC 2, GDPR, EU AI Act, ISO 42001, and NIST AI RMF.
Frequently Asked Questions
Does the EU AI Act apply to companies outside the European Union?
Yes. The EU AI Act applies extraterritorially, similar to GDPR. If your AI system's output is used within the EU, or if the AI system processes data of EU residents, your company is subject to the regulation regardless of where it is headquartered (Article 2). A US or Australian company that deploys an AI-powered hiring tool to screen candidates in Germany is fully in scope.
What are the penalties for non-compliance with the EU AI Act?
Penalties are tiered by severity. Using a prohibited AI practice carries fines of up to €35 million or 7% of global annual turnover, whichever is higher. Non-compliance with high-risk system requirements can result in fines of up to €15 million or 3% of turnover. Supplying incorrect or misleading information to authorities carries fines of up to €7.5 million or 1% of turnover (Article 99). For SMEs, the regulation allows proportionate caps, but the amounts remain substantial.
When do mid-market companies need to be fully compliant?
The primary compliance deadline is 2 August 2026, when obligations for high-risk AI systems, transparency requirements, and enforcement mechanisms fully apply. However, the prohibition on banned AI practices took effect on 2 February 2025, and general-purpose AI model requirements applied from 2 August 2025. Companies should treat compliance as an ongoing programme, not a single deadline.
How does the EU AI Act relate to GDPR?
The EU AI Act and GDPR are complementary regulations. GDPR governs how personal data is collected, processed, and stored. The EU AI Act governs how AI systems that may process that data are designed, deployed, and monitored. In practice, they overlap significantly: data governance requirements under the AI Act (Article 10) must be satisfied alongside GDPR's data protection principles, and privacy impact assessments often feed directly into AI risk assessments. Companies already compliant with GDPR have a head start but must address AI-specific obligations separately.
Do I need to comply if I only use AI tools from third-party vendors?
Yes. The EU AI Act distinguishes between providers (developers) and deployers (users in a professional context). As a deployer, you are required to use high-risk AI systems in accordance with their instructions for use, implement human oversight measures, monitor operation for risks, report serious incidents, and conduct fundamental rights impact assessments where required (Articles 26-27). You cannot outsource compliance responsibility to your vendor - both parties carry distinct obligations.