

Understanding Some of the Potential Challenges of the EU Artificial Intelligence (AI) Act


AI is the new shiny black box: everyone uses it, but there is still a lot of uncertainty about what AI and Gen AI mean for data security. AI systems require large amounts of data for training, but how will organizations protect potentially sensitive (PII) data?

The EU has taken the first steps towards regulating the development and deployment of AI systems in order to protect individuals' information. This creates a lot of uncertainty for organizations that are starting to implement AI and Gen AI analytics:

  1. How will they maintain compliance with the EU AI Act?
  2. How costly is this compliance likely to be? 
  3. Will it slow them down in their race to develop AI technology?

In this blog post we explore the challenges of meeting the EU AI Act's requirements, and how organizations adopting AI and Gen AI can protect PII and remain compliant without significantly consuming their data teams' time.

The EU AI Act

The EU has introduced the first piece of legislation to govern AI: the EU AI Act. The purpose of this legislation is to ensure “better conditions” for the development and use of AI technology; in essence, it is a product safety regulation. The EU Parliament adopted the AI Act in March 2024. It becomes fully applicable 24 months after entering into force, but some parts apply sooner.

Some terminology is necessary to understand this regulation. 

  • AI system: a machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers from the input it receives how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.
  • Providers (developers): those who develop an AI system or model and place it on the market. The majority of the obligations fall on providers, including third-country providers when the system's output is used in the EU.
  • Users (deployers): this group also has obligations, but fewer than those of providers.


The EU AI Act follows the EU's risk-based regulatory framework, which assigns AI systems to different risk categories, each with its own level of obligation.


The levels of risk:

1. Unacceptable risk (Chapter II, Article 5): AI systems that directly threaten individuals are banned. This covers AI that clearly threatens people's safety, livelihoods, or rights, for example social scoring systems and manipulative AI; the full list appears in Article 5. The use of AI systems for any of these purposes is ‘Prohibited.’ The only narrow exceptions concern certain law-enforcement uses of biometric identification.

The prohibitions on unacceptable-risk systems will be enforced six months after the AI Act enters into force.

2. High risk (Chapter III): this category carries the majority of the obligations. High-risk systems are those that can negatively affect safety or fundamental rights. The category includes all Annex III use cases (e.g. non-banned biometrics, education and vocational training, and many more).

An AI system is considered high risk if:

a. It is used as a safety component of a product, or is itself a product, covered by the Union harmonization legislation (Annex I)

AND

b. It is required to undergo a third-party conformity assessment.

Requirements for providers of high-risk AI systems (Articles 8 – 22)

  • A risk management system.
  • Data governance: ensuring that training, validation, and testing datasets are relevant, sufficiently representative, free of errors, and complete according to the intended purpose.
  • Technical documentation to demonstrate compliance and provide authorities with the information needed to assess that compliance.
  • Record-keeping: automatic recording of events relevant for identifying national-level risks and substantial modifications throughout the system's lifecycle (a sketch follows this list).
  • Instructions for use for downstream deployers (for their compliance).
  • Human oversight, implemented by deployers.
  • “Appropriate levels” of accuracy, robustness, and cybersecurity.
  • A quality management system to ensure compliance.
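To make the record-keeping requirement more concrete, here is a minimal sketch in Python, assuming a simple append-only JSONL audit log; the file path, event names, and fields are illustrative assumptions, not a format prescribed by the Act.

```python
# Minimal sketch of automatic record-keeping for an AI system.
import json
from datetime import datetime, timezone

AUDIT_LOG = "ai_system_audit.jsonl"  # hypothetical append-only log file

def record_event(system_id: str, event_type: str, detail: dict) -> None:
    """Append a timestamped, structured event to the audit log."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "event_type": event_type,  # e.g. "training_run", "substantial_modification"
        "detail": detail,
    }
    with open(AUDIT_LOG, "a") as log:
        log.write(json.dumps(entry) + "\n")

# Example: record a substantial modification in the system's lifecycle.
record_event(
    "credit-scoring-v2",
    "substantial_modification",
    {"change": "retrained on 2024-Q2 data", "approved_by": "risk-team"},
)
```

An append-only, structured log like this makes it straightforward to hand auditors a complete, time-ordered history of changes to a system.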


High-risk systems have 36 months after entry into force to meet these requirements.

3. Limited risk: transparency requirements apply to AI systems that interact with humans or generate content; Gen AI typically fits within this category. Such a system needs to comply with the following transparency requirements within 12 months after entry into force:

  • disclosing that content was generated by AI;
  • preventing the system from generating illegal content;
  • publishing summaries of the copyrighted data used for training;
  • labeling AI-generated text, audio, and video content (a sketch follows this list).
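As an illustration of the disclosure and labeling requirements, the sketch below wraps model output with machine-readable provenance metadata before it is published; the field names and model identifier are hypothetical, not a format mandated by the Act.

```python
# Minimal sketch: wrap generated content with provenance metadata so that
# downstream consumers can disclose it as AI-generated.
from dataclasses import dataclass, asdict

@dataclass
class LabeledContent:
    body: str            # the generated text itself
    ai_generated: bool   # disclosure flag for downstream display
    model: str           # which model produced the content (hypothetical name)

def label_output(text: str, model: str) -> dict:
    """Return the generated text wrapped with an AI-generated label."""
    return asdict(LabeledContent(body=text, ai_generated=True, model=model))

print(label_output("Summary of Q2 results...", model="acme-llm-1"))
# {'body': 'Summary of Q2 results...', 'ai_generated': True, 'model': 'acme-llm-1'}
```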


4. Minimal risk: no obligations for AI systems posing low risks.

Potential Struggles with Compliance

Complexity and uncertainty in the regulations can increase the costs data teams incur to maintain compliance.

Complex Regulations

The regulations are complex and will continue to evolve as AI systems evolve. They can be difficult to understand, and the controls needed to ensure compliance can be difficult to implement.

To remain compliant, organizations need to implement a number of policies and controls, including security policies, access requirements, dynamic masking, and other measures to ensure PII is secure. The sketch below shows what dynamic masking can look like in practice.
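This is a minimal sketch of dynamic masking, assuming a simple row-based pipeline: PII columns are redacted on the fly before records reach an AI training job. The column names and masking rules are hypothetical; a data security platform would typically enforce such policies centrally at query time.

```python
# Minimal sketch of dynamic masking: redact PII columns on the fly.
import re

# Hypothetical schema: map PII column names to masking functions.
MASKING_RULES = {
    "email": lambda v: re.sub(r"(^.).*(@.*$)", r"\1***\2", v),  # j***@example.com
    "ssn": lambda v: "***-**-" + v[-4:],                        # keep last 4 digits
    "full_name": lambda v: (v[0] + ".") if v else v,            # initial only
}

def mask_row(row: dict) -> dict:
    """Return a copy of the row with known PII columns masked."""
    return {
        col: MASKING_RULES[col](val) if col in MASKING_RULES else val
        for col, val in row.items()
    }

record = {"email": "jane.doe@example.com", "ssn": "123-45-6789", "plan": "pro"}
print(mask_row(record))
# {'email': 'j***@example.com', 'ssn': '***-**-6789', 'plan': 'pro'}
```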

Data Teams' Time

The wide range of controls necessary to remain compliant can significantly hamper data teams. Data teams that need to spend a significant portion of their time ensuring sensitive data is secured are pulled away from other productive projects.

Expected Penalties

Penalties will depend on the risk level involved and how extensive the violation is. Estimated fines range from €15 million to €35 million (the latter for violations such as using AI-enabled techniques or biometric data to infer private information). These penalties not only drain funds but can also damage the organization's brand and erode consumer trust.

Getting Started with the EU AI Act

The exact technical standards are still under development, but organizations can benefit from positioning themselves now so that they are ready to comply once these standards become clearer.

  1. A good starting point is to inventory all of the organization's AI applications (a minimal inventory sketch follows this list). This should be followed by a risk assessment that includes a gap analysis, to understand how AI is used within the organization and to map the existing governance structures, policies, processes, risk categories, and so on.
  2. Train employees to identify potential ethical violations by AI systems, and develop the policies and escalation procedures to support them.
  3. Track the categories of prohibited and high-risk AI to ensure that teams assess them correctly across markets and geographies.
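As a starting point for step 1, here is a minimal sketch of such an inventory; the system names, markets, and risk assignments are illustrative assumptions.

```python
# Minimal sketch of an AI application inventory keyed to the EU AI Act's
# risk categories. System names, markets, and risk levels are illustrative.
from enum import Enum

class RiskLevel(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

inventory = [
    {"system": "resume-screening", "risk": RiskLevel.HIGH, "markets": ["EU"]},
    {"system": "support-chatbot", "risk": RiskLevel.LIMITED, "markets": ["EU", "US"]},
    {"system": "log-anomaly-detector", "risk": RiskLevel.MINIMAL, "markets": ["US"]},
]

# Flag systems that trigger high-risk obligations in EU markets,
# feeding the gap analysis described above.
for app in inventory:
    if app["risk"] is RiskLevel.HIGH and "EU" in app["markets"]:
        print(f"{app['system']}: run a gap analysis against Articles 8-22")
```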

Organizations that start this process now will be better positioned to achieve compliance with the new regulations and to reduce the time their data teams spend on compliance.

Conclusion

The EU AI Act is primarily a product safety law that provides for the safe use of AI systems. It is designed to work in tandem with the GDPR to ensure that individual rights are protected. Satori's data security platform can help implement the necessary safeguards, including just-in-time access to data, continuous discovery and classification of sensitive data, dynamic masking, and auditing and monitoring across diverse databases, data warehouses, and data lakes.

The ability to quickly and easily comply with the EU AI Act reduces the burden on data teams' time and improves the likelihood of passing an audit. To learn more about how Satori can help you meet the EU AI Act's requirements, book a demo with one of our experts.

About the author
Lisa | Content Specialist

Lisa is a content specialist with an academic background, blending strong analytical and communication skills to develop engaging instructional content. She has held positions in higher education and in public policy and environmental think tanks.
