EU AI Act: Will your company be the next to face a fine of up to €35 million for violating the regulation?

The European Union Artificial Intelligence Act includes specific provisions to ensure the cybersecurity of high-risk AI systems. We break down key elements of the regulation and offer guidance on how to prepare your organization.
Are you using ChatGPT or GitHub Copilot, or building applications on top of Azure OpenAI, Amazon Bedrock, or Google Gemini? Are you fine-tuning existing models or building your own? Do you serve customers or have employees in Europe? If so, this blog offers guidance to help you prepare for the European Union Artificial Intelligence Act.
A quick overview of the E.U. AI Act
The E.U. Artificial Intelligence Act (AI Act) sets stringent regulations for AI technologies to ensure safety and transparency and to protect fundamental rights.
With the regulation entering into force on August 1, 2024, the Act mandates robust cybersecurity measures and imposes significant penalties for non-compliance (E.U. AI Act – Article 113).
Cybersecurity requirements under the E.U. AI Act
The AI Act includes specific provisions to ensure the cybersecurity of high-risk AI systems.
Article 15 of the Act requires high-risk AI systems to be designed and developed to achieve appropriate levels of accuracy, robustness and cybersecurity, and to perform consistently in those respects throughout their lifecycle. This obligation shapes both how providers build AI-based products and how deployers implement them.
Key requirements include:
- Implement technical solutions to mitigate attacks and vulnerabilities against the AI model or application in use.
- Automatically log the operations and communications of the AI model or application to ensure the ability to trace system functions and identify potential breaches (a minimal logging sketch follows this list).
- Protect against unauthorized third-party alterations by addressing the latest vulnerabilities in the AI domain.
- Ensure resilience to faults and errors in the AI model or application.
- Maintain a risk management system to address potential cybersecurity threats, drawing on resources such as the OWASP Top 10 for LLM Applications and MITRE ATLAS.
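To make the logging requirement concrete, here is a minimal Python sketch that wraps calls to a hypothetical chat-completion client and writes one structured audit record per request. The `client.complete()` method, model name and log file are assumptions for the example; the Act mandates traceability, not this particular design.

```python
import json
import logging
import time
import uuid
from datetime import datetime, timezone

# Structured audit trail for AI operations: one JSON line per model call,
# so system functions can be traced and potential breaches investigated.
audit_log = logging.getLogger("ai_audit")
audit_log.setLevel(logging.INFO)
audit_log.addHandler(logging.FileHandler("ai_audit.jsonl"))

def logged_completion(client, model: str, prompt: str) -> str:
    """Call the model and log the operation. `client` is a hypothetical
    chat-completion client exposing complete(model=..., prompt=...)."""
    event_id = str(uuid.uuid4())
    started = time.monotonic()
    response = client.complete(model=model, prompt=prompt)
    audit_log.info(json.dumps({
        "event_id": event_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model,
        "prompt_chars": len(prompt),      # log sizes rather than raw content
        "response_chars": len(response),  # to avoid duplicating sensitive data
        "latency_ms": round((time.monotonic() - started) * 1000),
    }))
    return response
```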
These measures are not just best practices; they are legal requirements under the AI Act. You can read more about these requirements in Article 15 of the legislation.
Who is responsible?
The E.U. AI Act mandates compliance responsibilities for both providers (developers) and deployers (integrators) of AI systems and defines them as follows (E.U. AI Act – Article 3):
- AI deployers (integrators): People or companies that use AI chat, AI-embedded apps, copilots, or AI models (whether open-source or licensed) within their products or services.
- AI providers: Individuals or companies that fine-tune existing models or develop their own models and integrate them into their products or place them on the market.
The regulation applies to a wide range of companies (E.U. AI Act – Article 2), including:
- Companies registered in the European Union: All entities developing, marketing or using AI systems within the E.U.
- Companies with activity or branches in the European Union: Non-E.U. companies with operations or branches in the E.U.
- Companies with clients or users in the European Union: Any company whose AI systems are used by clients or users within the E.U., even without a physical presence.
If you are a CISO, CIO, DPO, CEO, CTO, leader of AI R&D, or a legal, compliance or risk officer at a company that incorporates AI chat, AI-embedded apps, copilots or AI models (whether open-source or licensed) in its products or services, or that develops self-built or fine-tuned AI models, either in Europe or for European customers, we advise you to get familiar with the AI Act to ensure your company stays compliant and secure.
The financial impact of non-compliance with the E.U. AI Act
Penalties for violating the E.U. AI Act vary based on the nature of the infringement:
- High-level penalties: Non-compliance with the Act's prohibited AI practices can trigger fines of up to €35 million or 7% of total worldwide annual turnover, whichever is higher (specified in E.U. AI Act Article 99(3)).
- Medium-level penalties: Non-compliance with other obligations, including the requirements applying to high-risk AI systems, can result in fines of up to €15 million or 3% of total worldwide annual turnover, whichever is higher (specified in E.U. AI Act Article 99(4)).
- Lower-level penalties: Supplying incorrect, incomplete or misleading information to authorities can incur fines of up to €7.5 million or 1% of total worldwide annual turnover, whichever is higher (specified in E.U. AI Act Article 99(5)).
For example, a company with a total worldwide annual turnover of €275 million could face a penalty of €15 million or more for a violation. The financial impact of non-compliance, particularly concerning cybersecurity requirements, is significant.
Here’s how it breaks down:
- High-level violation: 7% of €275 million equals €19.25 million (less than €35 million, hence a fine of up to €35 million).
- Medium-level violation: 3% of €275 million equals €8.25 million (less than €15 million, hence a fine of up to €15 million).
- Lower-level violation: 1% of €275 million equals €2.75 million (less than €7.5 million, hence a fine of up to €7.5 million).
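To make the "whichever is higher" rule concrete, the short Python sketch below reproduces the arithmetic of the example above. The figures mirror the Article 99 caps; the function is purely illustrative and not legal advice.

```python
def fine_cap(turnover_eur: float, flat_cap_eur: float, pct: float) -> float:
    """Maximum administrative fine: the higher of the flat cap or a
    percentage of total worldwide annual turnover (Article 99)."""
    return max(flat_cap_eur, turnover_eur * pct)

turnover = 275_000_000  # the example company's worldwide annual turnover

print(fine_cap(turnover, 35_000_000, 0.07))  # prohibited practices -> 35000000.0
print(fine_cap(turnover, 15_000_000, 0.03))  # other non-compliance -> 15000000.0
print(fine_cap(turnover, 7_500_000, 0.01))   # misleading information -> 7500000.0
```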
What can you do to secure AI in your organization?
Given the stringent requirements and hefty penalties outlined in the E.U. AI Act, it’s clear that companies must prioritize visibility, monitoring and policy definition for their AI-based products and models. The following guidelines can help you on your safe-AI journey:
If you are using AI chat, AI-embedded apps, copilots or AI models in your product:
- Deploy securely: Ensure AI systems are deployed with secure configurations and are regularly updated in line with the latest recommended practices to address emerging attack techniques. Ensure redaction of private and sensitive data is defined and applied (a minimal redaction sketch follows this list).
- Maintain full visibility into AI usage: Understand how AI is being used in your organization. Identify the main use cases for which AI is most frequently employed and examine whether any anomalous or sensitive use cases require supervision. Perform regular data privacy audits, focusing on how AI systems handle personal data, to ensure compliance with regulation.
- Monitor to detect threats and anomalies: Use monitoring tools to track any AI systems being used (third-party models and applications, both licensed and open-source) to ensure the ongoing safety of personal and sensitive data, detect cyber attacks, identify anomalies and maintain compliance. Ensure your detection plan is updated with any new vulnerabilities published in the domain to stay ahead of new attack groups and evolving techniques.
- Enforce security policy: Clear, well-defined policies provide a roadmap for maintaining cybersecurity standards. These policies should outline best practices, compliance requirements and response protocols. To ensure compliance and safety, enforce the policies automatically across the organization to proactively prevent violations.
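As an illustration of the redaction point in the first bullet, the sketch below masks a few common personal-data patterns before a prompt leaves your environment. The regular expressions and the `redact()` helper are assumptions made for this example; production systems should use a vetted PII-detection library rather than hand-rolled patterns.

```python
import re

# Illustrative patterns only; real deployments need far broader coverage.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "IBAN": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace each match with a typed placeholder before the text
    is sent to a third-party AI service."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED-{label}]", text)
    return text

print(redact("Contact Jane at jane.doe@example.com or +44 20 7946 0958."))
# Contact Jane at [REDACTED-EMAIL] or [REDACTED-PHONE].
```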
If you are an AI provider:
- Ensure transparency: Develop comprehensive technical documentation covering data sources, algorithms and decision-making processes. Document how data is collected, processed and stored.
- Monitor the model, data and artifacts: Implement automated tools to monitor AI system performance, collecting and analyzing operational data. Continuously monitor data handling processes to ensure compliance with data privacy regulations. Track artifacts and look for anomalies in both staging and production environments to detect attack attempts such as the MITRE ATLAS tactics Exfiltration, ML Attack Staging and ML Model Access (see the integrity-check sketch after this list).
- Enforce security policy: Develop internal policies that align with the regulation covering AI development and deployment. Implement policies ensuring data privacy compliance, detailing how personal data is protected throughout AI system lifecycles. Define a response policy for handling anomalies in real-time when they are identified.
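One simple control behind the artifact-monitoring point above is integrity checking: record a cryptographic hash of every model artifact at release time and verify it before the artifact is loaded in staging or production, flagging unauthorized alterations. The manifest path and format below are assumptions made for this sketch.

```python
import hashlib
import json
from pathlib import Path

MANIFEST = Path("model_manifest.json")  # hypothetical release manifest

def sha256_of(path: Path) -> str:
    """Stream the file through SHA-256 so large model weights never
    need to fit in memory at once."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def record(artifacts: list[Path]) -> None:
    """At release time: store a hash for every model artifact."""
    MANIFEST.write_text(json.dumps({str(p): sha256_of(p) for p in artifacts}))

def verify(path: Path) -> bool:
    """Before loading: a mismatch signals an unauthorized alteration."""
    expected = json.loads(MANIFEST.read_text()).get(str(path))
    return expected is not None and expected == sha256_of(path)
```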
Ensuring compliance with the E.U. AI Act is not just about avoiding penalties — it’s about safeguarding your AI systems, protecting your business and building trust with your customers. By prioritizing visibility, monitoring and policy definition, you can navigate the complex landscape of AI cybersecurity with confidence.
- AI
- Exposure Management