The EU AI Act Is Already in Force. Here's What Your Business Actually Needs to Do.

Cassidy Team, May 07, 2026

Since February 2025, every organization that provides or deploys AI systems in the EU has been legally required to ensure its staff have adequate AI literacy under Article 4 of the Act, with fines of up to €15 million or 3% of global annual turnover for non-compliance. Full enforcement for high-risk AI systems is scheduled for August 2026. The regulation isn't a future problem to plan for. For most businesses, it's an active obligation they're already behind on.

Ben Churchill, co-founder of ThoughtFox, an enterprise AI transformation consultancy that helps organizations implement AI responsibly across Europe, sees the gap firsthand. "Most companies we're talking to aren't even aware" of the AI literacy requirement, he says. "And some are starting to realize what the opportunities but also what the risks are."

That gap between awareness and action is exactly what this post is designed to close.

What the EU AI Act Actually Requires

The EU AI Act operates on a risk-based framework. Not all AI is treated equally. The higher the potential harm, the stricter the requirements. But a few obligations apply broadly, regardless of what your AI does.

The most immediate is AI literacy. Under Article 4 of the Act, any organization that provides or deploys AI systems must ensure the people working with those systems have sufficient knowledge to understand what they're doing, what the risks are, and how to use the tools responsibly. This isn't a vague aspiration. It's a legal requirement that has been in force since February 2025.

The Act also prohibits specific AI practices outright: systems that manipulate users through subliminal techniques, tools that exploit individual vulnerabilities, real-time biometric identification in public spaces (with narrow exceptions), and AI used for social scoring. These prohibitions also took effect in February 2025, with penalties of up to €35 million or 7% of global annual turnover for violations.

For high-risk AI, the full compliance framework — including technical documentation, human oversight requirements, and post-market monitoring — is scheduled to come into force in August 2026. If your organization uses AI in hiring, credit scoring, medical diagnostics, or other high-stakes decisions, that deadline is the one to focus on now.

Does This Apply to Companies Outside the EU?

Yes. The EU AI Act has extra-territorial reach. If your AI systems affect individuals in the EU — customers, employees, users — your organization falls within scope, regardless of where it's headquartered. US companies with European operations or customers are not exempt. This is the same logic that made GDPR a global compliance requirement, and the AI Act follows the same pattern.

What Companies Are Getting Wrong Right Now

Churchill is direct about what he sees in the market. The problem isn't malicious intent. It's that most organizations are excited about AI models and tools but haven't thought about governance yet. They're at what ThoughtFox describes as levels zero to two on a five-level AI maturity scale: a few Copilot licenses, minimal training, and employees who don't fully understand what they've been handed.

The result is what ThoughtFox calls the shadow AI problem. When organizations lock down official AI tools without providing alternatives or training, employees don't stop using AI. They use the free, public versions — inputting sensitive data into models with no enterprise data protection, no governance, and no audit trail. The organization thinks it's managing risk. It's actually creating it.

"The understanding that sometimes AI is getting it wrong — people were using it, when it got it wrong, they kind of threw it away," Churchill notes. "That was a mistake." The right response isn't avoidance. It's structured adoption with the guardrails to make it safe.

A related mistake is retrofitting governance onto AI that's already been deployed. Churchill's advice is consistent: governance and excitement need to happen at the same time. Build it safely from the start, or face the cost — in time, money, and potentially compliance risk — of tearing it out and starting over. 

Why Governance Is an Accelerator, Not a Barrier

The instinct for many business leaders is to see regulation as friction. Churchill and his team at ThoughtFox push back on that framing consistently. "Good governance allows you to unlock," he says. "Without doing that, you not only don't get the full power of the models, you also put yourself at massive risk."

The logic is straightforward. Enterprises — particularly those in financial services, insurance, healthcare, and other regulated industries — are nervous about AI precisely because they're not confident the guardrails are in place. Governance doesn't slow adoption. It's what makes adoption possible at scale. When leaders and legal teams trust that the AI is operating within defined boundaries, on secured data, with audit trails and access controls, the conversation shifts from "should we use this" to "where do we use this next."

Cassidy is built for exactly that environment. Enterprise deployments on Cassidy are SOC 2 Type II certified, GDPR compliant, HIPAA compliant, and CASA certified. Data is never used to train AI models. Knowledge Base permissions carry through to every Workflow, so AI only accesses information each user is already authorized to see. For organizations navigating the EU AI Act's requirements around data handling and human oversight, that architecture matters.

What to Have in Place Before You Scale

Based on ThoughtFox's work with enterprise clients across Europe, here's what Churchill recommends having in place before expanding AI use across your organization:

AI literacy training for every employee who touches an AI system. This is no longer optional under EU law. It doesn't have to be a lengthy program — it has to be meaningful. Staff need to understand what the tool does, where it can go wrong, and what they're responsible for when they use it.

A clear inventory of what AI you're actually using. Most organizations underestimate this. Shadow AI — the tools employees are using without IT's knowledge — is part of the picture too. You can't govern what you haven't mapped.

Defined boundaries for what the AI can and can't access. Role-based permissions, data access controls, and audit logging aren't just good practice. For high-risk AI use cases, they're becoming legal requirements.

A framework for measuring success before you deploy. Churchill is consistent on this point: AI projects fail when there's no clear problem being solved and no measure of what success looks like. Define the business outcome first. Then build toward it.

Governance built in from day one, not bolted on afterward. The cost of retrofitting is high. The cost of a compliance breach is higher. ThoughtFox's approach is to introduce governance and capability building simultaneously — so that neither gets treated as the afterthought.
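To make the inventory and access-boundary items above concrete, here is a minimal sketch in Python. Everything in it is illustrative: the `AISystem` record, the risk tiers (which loosely mirror the Act's categories), and the `can_access` check are hypothetical names invented for this example, not Cassidy's API or a compliance tool.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical record for an internal AI-system inventory.
# Risk tiers loosely mirror the EU AI Act's categories.
@dataclass
class AISystem:
    name: str
    vendor: str
    risk_tier: str                     # "minimal" | "limited" | "high" | "prohibited"
    allowed_roles: set = field(default_factory=set)

audit_log = []  # append-only trail of access decisions

def can_access(system: AISystem, user_role: str) -> bool:
    """Role-based check that records an audit entry for every decision."""
    allowed = (user_role in system.allowed_roles
               and system.risk_tier != "prohibited")
    audit_log.append({
        "ts": datetime.now(timezone.utc).isoformat(),
        "system": system.name,
        "role": user_role,
        "allowed": allowed,
    })
    return allowed

# A two-line inventory: you can't govern what you haven't mapped.
inventory = [
    AISystem("resume-screener", "ExampleVendor", "high", {"hr_manager"}),
    AISystem("meeting-summarizer", "ExampleVendor", "minimal", {"hr_manager", "analyst"}),
]

print(can_access(inventory[0], "analyst"))   # False: analyst lacks the hr_manager role
print(can_access(inventory[1], "analyst"))   # True
```

The point of the sketch is the shape, not the code: every AI system is mapped in one place, every access decision runs through a role check, and every decision leaves an audit entry, which is the minimum structure the checklist above asks for.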

The Opportunity in Getting This Right

The EU AI Act is the first legislation of its kind from any major economic bloc. It won't be the last. The UK is watching. Individual US states are legislating. A federal AI framework in the US remains uncertain but plausible. Organizations that build responsible AI practices now — not because they have to, but because they understand why it matters — will be better positioned as the regulatory landscape continues to evolve.

More practically: trust is a competitive advantage. The enterprises that can demonstrate to customers, partners, and employees that their AI operates within clear, auditable boundaries will win deals that less prepared competitors lose. Governance isn't a cost of compliance. It's proof that your AI is ready to be trusted with real work.

If your organization is operating in the EU and hasn't yet addressed AI literacy requirements or mapped your AI systems against the Act's risk classifications, the window for getting ahead of this is narrowing. Full enforcement for high-risk systems begins in August 2026.

ThoughtFox works with mid-to-large enterprises across Europe on AI transformation — from maturity assessments through to full deployment and adoption programs. To explore how Cassidy supports governed, enterprise-grade AI automation, book a demo and see how your highest-value workflows can run securely from day one.

---

This article is intended as editorial guidance based on ThoughtFox's experience working with organizations on EU AI Act readiness. It is not legal advice. If you're assessing your organization's specific compliance obligations, we recommend working closely with qualified legal counsel.
