EU AI Act for SaaS: The 2026 Compliance Checklist
Compliance
The EU AI Act phased into effect across 2025 and 2026. Any SaaS that touches EU users now has to classify its AI features by risk tier. This is the checklist we work through with clients shipping AI to Europe.
By Arjun Raghavan, Security & Systems Lead, BIPI · January 25, 2026 · 9 min read
The EU AI Act came into force in phases through 2025 and 2026. Prohibited practices became enforceable in February 2025; general-purpose AI obligations followed in August 2025; the bulk of the high-risk requirements landed in 2026. Any SaaS that ships AI features to EU users now operates under it.
The Act is risk-tiered, which means most SaaS does not have to do the heaviest work. But classifying your AI features into the right tier is the first task, and the one that gets done badly most often. Get the tier wrong and you either over-engineer or ship a non-compliant product without realising it.
The four risk tiers
Tier 1: prohibited. Specific uses that the Act bans outright. Social scoring by public authorities, real-time biometric identification in public spaces (with narrow exceptions), emotion inference in workplaces and education, deceptive subliminal manipulation. If your product does any of this, redesign or pull from the EU.
Tier 2: high-risk. AI systems used in safety components of regulated products, or AI in specified domains: hiring, education access, critical infrastructure, law enforcement, migration, justice. Heavy obligations: risk management, data governance, technical documentation (Annex IV), logging, human oversight, accuracy and robustness testing, registration in the EU database.
Tier 3: limited risk (transparency obligations). AI that interacts with humans (chatbots), generates synthetic content (deepfakes), or recognises emotions or biometrics outside the high-risk domains. Obligation: tell the user they are interacting with AI or that the content is AI-generated.
Tier 4: minimal risk. Everything else. No specific obligations beyond GDPR and existing law.
General-purpose AI obligations (separate)
If you are building on top of a general-purpose AI model (GPT, Claude, Gemini, Llama), the model provider has obligations under the GPAI rules. You as a downstream deployer have lighter obligations: keep technical documentation, comply with copyright, and respect any system card or usage policy the provider published.
Models classified as 'GPAI with systemic risk' (models trained above a compute threshold, presumed at 10^25 FLOPs) carry stricter obligations. As a deployer this rarely lands on you, but it changes which models you can deploy to EU users.
High-risk AI: the engineering bar
If your AI feature falls into the high-risk tier, the engineering work is non-trivial. The Annex IV technical documentation alone is substantial: dataset description, training methodology, performance metrics, known limitations, foreseeable misuse, human oversight mechanisms.
Logging is a hard requirement. You must keep logs of the AI system's operation that allow regulators to reconstruct decisions. For an AI system that runs millions of times a day, the storage cost is real. Plan for it.
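As a sketch of what a reconstructable decision log might look like (the field names and schema here are our illustrative assumptions, not anything the Act prescribes):

```python
import json
import time
import uuid

def log_ai_decision(model_version, input_summary, output, operator_id=None):
    """Emit one append-only record per AI decision.

    Illustrative schema: the Act requires that logs let a regulator
    reconstruct how the system behaved, not any specific field set.
    """
    record = {
        "event_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "model_version": model_version,   # which model produced this output
        "input_summary": input_summary,   # enough input context to reconstruct
        "output": output,
        "operator_id": operator_id,       # the human in the loop, if any
    }
    return json.dumps(record)             # ship to append-only storage

# Rough storage planning: at ~1 KB per record, 5M decisions/day
# is about 5 GB/day, or roughly 1.8 TB/year before compression.
```

The back-of-envelope in the comment is the point: multiply your daily decision volume by record size before you commit to a retention period.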
Human oversight is the most mis-implemented requirement. The Act expects a human who can stop the system, override its outputs, or refuse to use the result. A 'human reviews flagged outputs' workflow does not satisfy this if the human cannot meaningfully intervene. Design the override path before the system ships.
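One way to make the override path concrete is to model every AI output as a proposal the human can approve, replace with their own result, or refuse to use at all, recording which happened. A minimal sketch (class and method names are ours, invented for illustration):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AIProposal:
    """An AI output held as a proposal until a human disposes of it.

    The point of the design: the human can substitute their own result
    or decline to use the AI's output entirely, not merely acknowledge it.
    """
    ai_output: str
    status: str = "pending"            # pending | approved | overridden | rejected
    final_output: Optional[str] = None

    def approve(self) -> None:
        self.status = "approved"
        self.final_output = self.ai_output

    def override(self, human_output: str) -> None:
        self.status = "overridden"
        self.final_output = human_output   # human result replaces the AI's

    def reject(self) -> None:
        self.status = "rejected"
        self.final_output = None           # the AI's result is not used
```

A 'human reviews flagged outputs' queue bolted on later tends to lack the `override` and `reject` branches; building them in from the start is what the requirement actually asks for.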
The 2026 checklist
- Inventory every AI feature in your product, including classifiers, recommenders, and generative features.
- Classify each into the four tiers. Document your reasoning per feature.
- For tier-3 features (chatbots, deepfakes), add UI affordances disclosing AI involvement. This is usually a 1-day engineering task.
- For tier-2 features, kick off the Annex IV documentation effort. Allow a quarter.
- Update DPIAs (Data Protection Impact Assessments) under GDPR to reference the AI Act analysis. The two regimes overlap.
- Update vendor contracts. Foundation-model providers should attest to GPAI compliance; you reference the attestation in your own documentation.
- Train customer-facing teams on what they can and cannot say about your AI features in regulated EU markets.
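The first two checklist items can live in something as simple as a reviewed data file: one entry per feature, with the tier and the reasoning your counsel can read. A sketch, with invented example entries:

```python
# Minimal AI-feature inventory: tier plus documented reasoning per feature.
# Feature names and reasoning below are invented examples.
AI_FEATURE_INVENTORY = [
    {
        "feature": "support-chatbot",
        "tier": 3,
        "reasoning": "Interacts with humans; needs AI disclosure in the UI.",
    },
    {
        "feature": "resume-screening",
        "tier": 2,
        "reasoning": "Hiring domain (Annex III); full high-risk obligations.",
    },
    {
        "feature": "spam-filter",
        "tier": 4,
        "reasoning": "No Annex III domain, no human-facing interaction to disclose.",
    },
]

def features_in_tier(tier: int) -> list[str]:
    """List the features classified into a given risk tier."""
    return [f["feature"] for f in AI_FEATURE_INVENTORY if f["tier"] == tier]
```

The format matters less than the habit: every feature appears exactly once, and every tier assignment carries its reasoning.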
Closing
The EU AI Act is real, enforceable, and being applied. Most SaaS will land in tier 3 or tier 4 with comparatively light obligations. A minority will land in tier 2 and need a quarter of disciplined engineering and documentation work. The teams getting it right are the ones who classified deliberately, documented their reasoning in a wiki their counsel can review, and built the human-oversight mechanism into the product before regulators asked.
Read more field notes, explore our services, or get in touch at info@bipi.in. Privacy Policy · Terms.