

How to be EU AI Act and GDPR compliant


AI is everywhere. According to Gartner, more than 60% of CIOs have incorporated AI into their plans. Organizations that collect, process, store, or handle personal data are already taking measures to secure the personally identifiable information (PII) of EU residents, as required under the GDPR. For many of them, the recently passed EU Artificial Intelligence Act (EU AI Act) adds further requirements, now or in the near future. In this blog post, we explore these two important pieces of legislation as they relate to data security.

Overview of GDPR and the EU AI Act

Both the GDPR and EU AI Act are designed to protect individuals’ data and privacy, though in different ways. The EU AI Act complements GDPR, but from a slightly different perspective.


GDPR: A Fundamental Rights Law

The GDPR, which came into effect in 2018, is designed to protect the personal data of individuals residing in the European Union. It requires transparency in the processing of personal data, ensuring accountability and control over sensitive PII. While the GDPR doesn’t mention AI systems directly, Article 22 specifically addresses automated decision-making. Altogether, the GDPR is primarily legislation about individuals’ rights.

EU AI Act: A Product Safety Law

The EU AI Act, adopted by the European Parliament in March 2024, entered into force in August 2024, with its obligations applying in phases. Its practical application is not yet fully developed, and its handling of specific data privacy issues can be unclear. Instead of creating individual rights, the EU AI Act is built on a tiered risk system that sets standards and obligations for developers, deployers, importers, distributors, and authorized representatives of AI systems.

Key Deadlines for EU AI Act Compliance

Here’s a rough timeline of when the EU AI Act’s requirements take effect, according to the Act’s official implementation timeline:

August 2024: The EU AI Act technically entered into force, but none of its requirements applied just yet

February 2025: Prohibited AI Systems

  • AI Systems Involved: AI systems considered to pose an unacceptable risk to fundamental rights. Examples include systems used for social scoring or predicting a person’s risk of committing a criminal offense.
  • Who Needs to Worry: Any organization utilizing these specific AI systems must cease their use.

August 2025: General-Purpose AI (GPAI) Models and Penalties

  • AI Systems Involved: General-purpose AI models, such as those provided by OpenAI or Anthropic.
  • Who Needs to Worry: Providers of GPAI models.

August 2026: High-Risk AI Systems

  • AI Systems Involved: “High-risk” AI systems, which include applications like resume screening software, emotion recognition software, and credit scoring.
  • Who Needs to Worry: Organizations operating high-risk AI systems. Operators of high-risk AI systems placed on the market before this date must comply if their systems undergo significant design changes.

August 2030: High-Risk AI Systems for Public Authorities

  • AI Systems Involved: High-risk AI systems that are intended for use by public authorities.
  • Who Needs to Worry: Providers and deployers of these systems.

December 2030: Large-Scale IT Systems

  • AI Systems Involved: Large-scale IT systems utilizing AI components. This includes AI systems that are components of large-scale IT systems listed in Annex X.
  • Who Needs to Worry: Companies operating these large-scale IT systems.

Complementary Frameworks: How GDPR and the AI Act Work Together

While the GDPR and EU AI Act do have different focuses, in some instances these directives overlap, with one serving as a safeguard for the other. In other instances, they are unrelated. The EU AI Act, which is designed to govern the safe development and deployment of AI technologies, relies on the GDPR to ensure that individual rights are protected, particularly within the data used by AI systems. In return, the transparency obligations of the AI Act can reveal GDPR data privacy violations and ensure that personal data is protected. The EU AI Act explicitly states that it is “without prejudice to existing Union legislation on data protection,” including the GDPR, meaning neither supersedes the other; they are complementary and mutually reinforcing.

Key Provisions and Obligations

Organizations already account for data security requirements under the GDPR. But how do those measures carry over to the EU AI Act, so that organizations remain compliant with both?

Automated Decision-Making (Article 22 GDPR)

As mentioned above, Article 22 of the GDPR applies to AI systems. Under this Article, individuals have the right not to be subject to decisions based solely on automated processing, including profiling, that produce legal or similarly significant effects. It applies directly to AI systems that, once trained, make autonomous decisions; in such cases, human oversight must be instituted to secure data privacy and individual rights. The GDPR protects data subjects from solely automated decision-making unless specific conditions are met, such as the decision being necessary for a contract, authorized by law, or based on explicit consent. It also guarantees the right to human intervention, to express one’s point of view, and to contest the decision.

High-Risk AI Systems (Article 14 EU AI Act)

The EU AI Act categorizes AI systems into risk tiers, with high-risk systems subject to the most stringent requirements. The Act mandates that high-risk AI systems be overseen by humans during their operation, a “human-oversight-by-design” approach that keeps individuals’ data privacy under watch throughout the system’s use. By requiring human oversight for all high-risk AI systems, regardless of whether their decisions are fully automated, the AI Act extends and reinforces the GDPR’s protections.

Organizations already have strategies to ensure compliance with GDPR regulations. Now, the primary focus is on meeting the new EU AI regulations to remain compliant. Let’s look at how the EU AI Act and GDPR may influence compliance.

Compliance Strategies for Organizations

Organizations using AI systems that include personal data from EU countries will need to consider both GDPR and the EU AI Act to remain compliant and prevent penalties. There are some important strategies to ensure compliance with both pieces of legislation:

  1. Records of Processing Activities (RoPAs):
    • GDPR requires detailed records of data processing activities that help in the discovery and cataloging processes and ensure transparency. We outlined the importance of data discovery for meeting GDPR compliance here.
  2. Privacy Impact Assessments (PIAs) / Fundamental Rights Impact Assessments (FRIAs):
    • Conduct regular PIAs to determine whether processing activities impact individuals’ rights; this is a shared requirement under both the GDPR and the AI Act.
    • Conduct FRIAs for high-risk AI systems, as required under the AI Act, which also mandates conformity assessments.
  3. Human Oversight:
    • Implement robust human oversight mechanisms for AI systems, especially those classified as high-risk. This reduces the risk associated with automated decision-making processes that could violate GDPR.
  4. Technical and Organizational Measures:
    • Data security transparency is needed to meet both the GDPR and the AI Act. It enables organizations to intervene with human oversight when individuals’ rights appear likely to be harmed.
  5. Data Governance and Management:
    • Implement a robust data governance and management process, ensuring training, validation, and test datasets are relevant, representative, free of bias, and subject to ongoing quality control.
  6. Technical Documentation & Record-Keeping:
    • Maintain comprehensive technical documentation that demonstrates compliance with all relevant requirements and keep meticulous records, including automatic logging of events.
  7. Transparency Requirements for Generative AI:
    • Generative AI models (like ChatGPT) that are not classified as high-risk must still comply with transparency requirements and EU copyright law, including disclosing that content was AI-generated, designing models to prevent illegal content, and publishing summaries of copyrighted data used for training.

Enforcement and Penalties

The enforcement mechanisms for the EU AI Act are designed to avoid the silo effect observed with the GDPR, where different national regulators started operating with little coordination. The EU AI Act introduces a stronger coherence mechanism across the EU, with the European AI Office, established in February 2024, coordinating efforts at the European level and supervising the most powerful General-Purpose AI (GPAI) models. National market surveillance authorities, which Member States are required to designate by August 2, 2025, will supervise and enforce rules for AI systems, including prohibitions and high-risk AI.

Non-compliance with the AI Act can result in significant fines, similar to GDPR penalties, with a progressive sanctioning scale based on the severity of violations:

  • Non-compliance with the prohibition of AI practices (unacceptable-risk AI) can result in administrative fines of up to €35,000,000 or 7% of the company’s total worldwide annual turnover for the preceding financial year, whichever is higher. (Some sources cite up to €40M.)
  • Non-compliance with most other AI Act obligations (e.g., related to data, transparency, or other operator/notified body provisions) can result in administrative fines of up to €15,000,000 or 3% of the company’s total worldwide annual turnover. (Some sources cite up to €20M or 4% for data/transparency related issues).
  • The supply of incorrect, incomplete, or misleading information to competent authorities can result in administrative fines of up to €7,500,000 or 1% of the company’s total worldwide annual turnover. 
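The “whichever is higher” rule means the percentage cap dominates once turnover is large enough. As a quick illustration of how the tiers above combine (the function and tier names are ours, not from the Act’s text):

```python
def max_fine(annual_turnover_eur: float, tier: str) -> float:
    """Maximum administrative fine under the EU AI Act's tiered scale:
    the fixed cap or the turnover percentage, whichever is higher."""
    tiers = {
        "prohibited_practices":   (35_000_000, 0.07),  # unacceptable-risk AI
        "other_obligations":      (15_000_000, 0.03),  # data, transparency, etc.
        "misleading_information": (7_500_000, 0.01),   # incorrect info to authorities
    }
    fixed_cap, pct = tiers[tier]
    return max(fixed_cap, pct * annual_turnover_eur)

# For a company with €2B worldwide annual turnover, the 7% cap dominates:
print(max_fine(2_000_000_000, "prohibited_practices"))  # 140000000.0
```

For smaller companies, the fixed amount is the binding cap; for large ones, the turnover percentage is.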

EU AI Act Compliance with Satori

Navigating the complexities of both GDPR and the EU AI Act requires a robust data security and governance strategy. Satori’s data security platform is uniquely positioned to help organizations achieve and maintain compliance, especially when dealing with the data critical to AI systems. Here’s how:

1. Comprehensive Data Discovery & Classification for AI Datasets

The EU AI Act, particularly for high-risk AI, mandates high-quality datasets that are relevant, representative, and free from errors and bias. This starts with knowing exactly what data you have and where it resides.

Satori continuously discovers and classifies sensitive data across your entire data environment, from data lakes and warehouses, to production databases, to LLMs and AI applications. This includes identifying PII, PHI, financial data, and other sensitive categories that might be used to train, validate, or test AI models. This visibility is crucial for assessing potential biases and ensuring data quality as required by the AI Act (Article 10: Data and Data Governance).
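To make the idea of pattern-based classification concrete, here is a deliberately simplified sketch; the patterns and function are hypothetical illustrations only, and a production engine such as Satori’s uses far more sophisticated detection than a couple of regexes:

```python
import re

# Hypothetical, minimal PII patterns for illustration.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "iban":  re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
}

def classify_columns(rows: list[dict]) -> dict[str, set[str]]:
    """Tag each column with the PII categories its sample values match."""
    tags: dict[str, set[str]] = {}
    for row in rows:
        for column, value in row.items():
            for label, pattern in PII_PATTERNS.items():
                if pattern.search(str(value)):
                    tags.setdefault(column, set()).add(label)
    return tags

sample = [{"contact": "ana@example.com", "note": "renewal due"}]
print(classify_columns(sample))  # {'contact': {'email'}}
```

The output of a step like this — a map from data locations to sensitivity tags — is what downstream access and masking policies are keyed on.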

The Satori dashboard aggregates all the information about your connected data stores: their connection status, data classifications, risk scores, and alerts.

2. Granular Data Access Control for AI Development and Deployment

The EU AI Act emphasizes “human oversight-by-design” for high-risk AI systems and generally requires appropriate access controls to protect fundamental rights. Ensuring that only authorized personnel and AI systems can access specific data is paramount.

Satori provides centralized, granular data access control that works across diverse cloud data platforms. You can define policies based on user roles, attributes, data sensitivity, and even time-bound access, ensuring that data scientists, ML engineers, and AI applications only access the data they need, when they need it (Principle of Least Privilege). This directly supports the human oversight requirements and helps prevent unauthorized data usage that could lead to AI risks.

Set access policies on users based on role, geographic location, or other attributes. Decide which data stores are involved and which masking policies are set.
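To sketch what attribute-based access decisions look like in general, here is a hypothetical example; the field names and policy structure are illustrative assumptions, not Satori’s actual configuration or API:

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    role: str
    location: str
    data_sensitivity: str  # e.g. "public" or "pii"

# Illustrative policies: role, allowed locations, highest sensitivity readable.
POLICIES = [
    {"role": "data_scientist", "locations": {"EU"}, "max_sensitivity": "pii"},
    {"role": "analyst", "locations": {"EU", "US"}, "max_sensitivity": "public"},
]

SENSITIVITY_RANK = {"public": 0, "pii": 1}

def is_allowed(req: AccessRequest) -> bool:
    """Grant access only if some policy matches role, location, and sensitivity."""
    return any(
        req.role == p["role"]
        and req.location in p["locations"]
        and SENSITIVITY_RANK[req.data_sensitivity] <= SENSITIVITY_RANK[p["max_sensitivity"]]
        for p in POLICIES
    )

print(is_allowed(AccessRequest("analyst", "US", "pii")))  # False: least privilege
```

The deny-by-default shape (access only when a policy explicitly matches) is what puts the principle of least privilege into practice.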

3. Real-Time Data Masking & Anonymization for Bias Mitigation

To comply with data quality requirements and mitigate bias, particularly for high-risk AI systems, ensuring that training data is appropriately anonymized or pseudonymized is critical.

Satori offers dynamic data masking capabilities, allowing you to redact, tokenize, or anonymize sensitive data in real-time as it’s accessed. This means AI models can be trained on datasets that maintain statistical properties while protecting individual privacy, reducing the risk of discriminatory outcomes as outlined in the AI Act’s data governance requirements (Article 10).

In Satori, you can apply masking profiles to data detected and tagged by Satori’s data classification engine.
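As a rough sketch of what redaction, tokenization, and partial masking do to a value (illustrative functions only; a platform like Satori applies these transparently at query time rather than in application code):

```python
import hashlib

def redact(value: str) -> str:
    """Full redaction: the value is hidden entirely."""
    return "***"

def tokenize(value: str) -> str:
    """Deterministic token: equal inputs yield equal tokens, so joins
    and aggregate statistics still work without exposing the raw value."""
    return hashlib.sha256(value.encode()).hexdigest()[:12]

def mask_email(value: str) -> str:
    """Partial masking: keep the first character and the domain."""
    local, _, domain = value.partition("@")
    return local[0] + "***@" + domain

print(mask_email("ana.lopez@example.com"))  # a***@example.com
print(tokenize("ana.lopez@example.com") == tokenize("ana.lopez@example.com"))  # True
```

Deterministic tokenization is what lets a model train on masked data while the dataset keeps its statistical structure, which is exactly the balance the AI Act’s data governance rules ask for.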

4. Comprehensive Audit Trails and Activity Monitoring for Transparency

The EU AI Act requires logging capabilities to ensure traceability of results for high-risk AI systems. This includes understanding who accessed what data and when, especially for auditing and incident response.

Satori provides centralized, automatically enriched audit logs of all data access activities across your data stores. This robust monitoring ensures complete visibility into data usage by AI systems and the individuals interacting with them. These detailed logs are invaluable for demonstrating compliance during audits, conducting Fundamental Rights Impact Assessments (FRIAs), and investigating any potential data privacy or AI-related incidents.
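As an illustration of what an enriched access-log record might contain (the record shape and field names are assumptions for the example, not Satori’s actual log format):

```python
import json
from datetime import datetime, timezone

def audit_record(user: str, datastore: str, query: str, tags: list[str]) -> str:
    """Build one JSON audit entry; 'sensitive_tags_accessed' is the
    enrichment layered on top of the raw access event by classification."""
    return json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "datastore": datastore,
        "query": query,
        "sensitive_tags_accessed": tags,
    })

entry = audit_record("ml-pipeline", "warehouse", "SELECT email FROM users", ["email"])
print(json.loads(entry)["sensitive_tags_accessed"])  # ['email']
```

Records like this answer the traceability questions the AI Act and an FRIA both ask: who touched which sensitive data, when, and through what query.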

5. Streamlined Compliance Reporting

Demonstrating compliance with multiple regulations can be a daunting task.

Satori automates much of the reporting process, providing out-of-the-box reports and customizable dashboards that align with GDPR and EU AI Act requirements. This simplifies the process of proving adherence to data governance, access controls, and data quality standards, allowing your teams to focus on innovation rather than manual compliance efforts.

The reports view contains predefined reports that let you easily identify data export attempts, malicious access to individual user records, and new PII locations. You can also easily create custom reports.

Conclusion

The EU AI Act regulates AI systems to ensure that individuals’ rights and freedoms are protected, while the GDPR protects the personal data behind them; together they form a complementary approach to data privacy. Meeting these requirements is necessary as organizations race to become AI-ready while remaining compliant.

To learn more about how Satori can help your organization become AI-ready while navigating the complexities of both GDPR and the EU AI Act, book a demo with one of our experts. 

About the author
Marketing Specialist

Idan is a marketing specialist at Satori, with a focus on social media and digital marketing. Since relocating from Silicon Valley to Tel Aviv in 2021, Idan has honed her marketing skills in various Israeli cybersecurity startups.
