
EU AI Act FAQ

On March 13, 2024, the EU AI Act was formally passed into law. Broadly, it aims to introduce a clear and ethical framework for how organizations work with artificial intelligence. The legislation will go into full effect by 2026, but organizations will be required to comply with parts of it as soon as Q4 2024.

Below is a recap of some of the most commonly asked questions about the EU AI Act today. If you’d like to learn more about how Prove AI is partnering with organizations to develop AI governance frameworks, please contact us.

General FAQs

Yes. Thanks to the proliferation of AI-focused legislation around the world, including the EU AI Act and the California AI Accountability Act, organizations face new urgency to define and implement clear governance strategies that keep their AI data management practices in line with emerging regulations.

Blockchain has emerged as an ideal database for managing AI data. Its distributed ledger makes it prohibitively difficult for third parties to tamper with data once it has been recorded to the blockchain. There is no single point of failure, meaning organizations can reliably audit highly serialized datasets and readily confirm whether data has been altered – and if it has, determine when and by whom. This feature is crucial for maintaining the integrity of data used by AI, particularly in industries where high-risk AI applications are a regular occurrence.

This enhanced visibility can enable more efficient internal workflows, as well as help demonstrate regulatory compliance. Storing AI algorithms and models on the blockchain also enables third parties to audit and verify the AI’s fairness, safety, and non-discrimination, which are essential aspects of compliance with the AI Act.
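To make the tamper-evidence idea above concrete, here is a minimal sketch of a hash-chained audit log in Python. It is purely illustrative (a real deployment would use an actual distributed ledger rather than an in-memory list, and the function names are invented for the example): each entry embeds the hash of the previous one, so altering any earlier record breaks the chain and is immediately detectable.

```python
import hashlib
import json
from datetime import datetime, timezone

def record_hash(record: dict) -> str:
    """Deterministic SHA-256 hash of a record's JSON representation."""
    payload = json.dumps(record, sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()

def append_entry(ledger: list, actor: str, action: str, data_ref: str) -> None:
    """Append an audit entry that chains to the previous entry's hash."""
    prev_hash = ledger[-1]["hash"] if ledger else "0" * 64
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "actor": actor,
        "action": action,
        "data_ref": data_ref,
        "prev_hash": prev_hash,
    }
    entry["hash"] = record_hash(entry)
    ledger.append(entry)

def verify(ledger: list) -> bool:
    """Return True only if no entry has been altered and the chain is intact."""
    prev_hash = "0" * 64
    for entry in ledger:
        expected = {k: v for k, v in entry.items() if k != "hash"}
        if entry["prev_hash"] != prev_hash or entry["hash"] != record_hash(expected):
            return False
        prev_hash = entry["hash"]
    return True

ledger: list = []
append_entry(ledger, actor="data-team", action="ingest", data_ref="dataset-v1")
append_entry(ledger, actor="ml-team", action="train", data_ref="model-v1")
print(verify(ledger))  # True; editing any field of an earlier entry makes this False
```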

These benefits don’t just apply to AI systems that are higher up on the risk scale. Blockchain-powered smart contracts can be useful for those with “limited risk,” too. Self-executing contracts can automate compliance processes within AI models, enforcing data protection and privacy standards specified by the AI Act, including the requirement for explicit consent for data processing and the right to an explanation for automated decisions.
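Smart contracts themselves are typically written in on-chain languages, but the self-executing consent check they would encode can be sketched in Python for illustration. The names below (ConsentRecord, may_process) are hypothetical, not part of any specific platform:

```python
from dataclasses import dataclass

@dataclass
class ConsentRecord:
    subject_id: str
    purpose: str
    granted: bool

def may_process(consents: list[ConsentRecord], subject_id: str, purpose: str) -> bool:
    """Allow processing only if the subject explicitly consented to this purpose."""
    return any(
        c.granted and c.subject_id == subject_id and c.purpose == purpose
        for c in consents
    )

consents = [ConsentRecord("user-42", "model-training", True)]
print(may_process(consents, "user-42", "model-training"))  # True
print(may_process(consents, "user-42", "profiling"))        # False: processing blocked
```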

Learn how Prove AI can help your business keep up with new regulations.

Any AI expert will tell you that the technology’s transformative benefits come with significant risks. From rising data privacy concerns, to the proliferation of deepfakes and misinformation, to recurring hallucinations among chatbots, businesses and consumers alike are growing increasingly aware of where AI systems today fall short. 

Because of this, policymakers in the European Union (EU) saw the need to create comprehensive guardrails in this evolving market. On March 13, 2024, the EU Parliament passed the landmark AI Act. It was the first time an extensive set of regulations was approved for the purpose of governing AI-human interactions.

At its core, the EU AI Act is a document that outlines the levels of risk for AI applications across industry use cases, and establishes standardized measures to monitor and report AI-related threats to human wellbeing and fundamental rights. Now, companies are able to evaluate their own AI products against specific criteria to ensure a safer, more ethical user experience.

According to policymakers, the enforcement of specific provisions will be staggered over the span of approximately two years. The AI Act will likely be enforced in its entirety by mid-2026. For applications that pose “unacceptable risk” to society, however, bans will take full effect within six months. 

Countries in the EU are now in the midst of putting together their own independent AI watchdog organizations so that citizens can file reports and request investigations into potential violations. An office will soon be established in Brussels, Belgium, to oversee and enforce rules pertaining specifically to what the legislation calls general purpose AI (GPAI) models. 

Unacceptable risk: Applications are banned.

This category encompasses AI applications that are considered “threats” and are prohibited for use. “Manipulative AI,” for instance, refers to applications that use purposefully deceptive or subliminal techniques to incite behavior changes in vulnerable individuals. The most frequently cited example of this ban so far is the use of voice assistants in educational games or toys that may inadvertently encourage dangerous behavior or negative thoughts in children. In fact, any AI system that exploits individuals based on their age, disabilities, or socioeconomic circumstances is specifically prohibited under the Act. Other examples include the use of remote biometric data to categorize individuals based on their race or political views, as well as compiling facial recognition databases from footage taken in public spaces.

High-risk: Applications are strictly regulated.

These AI applications present inherent risks to human safety and wellbeing. AI systems used in foundational industries like healthcare, education, law enforcement, and critical infrastructure management fall under this category. Self-driving vehicles, autonomous medical devices, and the maintenance of electrical grids and water supplies are just a few examples. Developers and users of these models must adhere to regulations that require rigorous testing, comprehensive documentation of data quality and integrity, as well as an accountability framework that involves human oversight. Evaluation is mandatory both before market introduction and throughout the model’s operational lifespan.

Limited risk: Applications require disclosure of AI involvement.

AI systems in this category necessitate transparency to notify users that they are interacting with AI. Examples include applications involved in generating or altering images or videos (e.g., deepfakes). Exceptions are made for free, open-source models with publicly accessible parameters—they fall under a separate category known as general purpose AI (GPAI) systems, and are subject to different transparency requirements for ensuring fair use and data privacy.

Minimal risk: Applications are largely unregulated.

This category includes AI systems for activities like video gaming or spam filtering, with the majority of AI applications falling into this classification. While they aren’t heavily regulated, adherence to a voluntary code of conduct is recommended.

A different category of AI systems, called general purpose AI (GPAI), was added to the EU AI Act in 2023. This encompasses the foundation models behind tools such as ChatGPT. Under the AI Act, GPAI systems will now be subject to certain transparency requirements, including adherence to EU copyright policies and disclosure of content used for model training. GPAI models that can access and process more data than others will also have to undergo more specific evaluations and risk assessments. OpenAI’s GPT-4 and Google’s Gemini are examples of models that will likely receive extra scrutiny.

Industries that are already heavily regulated, including healthcare and financial services, will likely see more regulations around AI use in the near term. In time, that list is likely to grow and include nearly every major industry. It’s thus critical for security, risk and compliance leaders to be thinking ahead and identifying AI-powered applications and services that pose the most risk within their current systems. 

Historically, the EU has had strong global influence when it comes to technological regulation. See: GDPR. While we don't know how much influence the EU AI Act will ultimately have globally, it is clear that the legislation is a significant indication that we can no longer overlook responsible AI standards. The groundwork for governance at a global scale is laid out in the EU AI Act. In the same way that there’s legislation requiring compliance to make a car safe for driving on public roads, the EU AI Act ensures that businesses comply with regulations to make AI safe for use. 

The Practical Applications

It all depends on the category of risk that the product falls under. A customer service chatbot used by retailers isn’t considered “high-risk” by the AI Act, for example, but its data usage should be tracked closely to ensure that sensitive customer interactions are kept private. 

On the other hand, the financial sector often deals with evaluations pertaining to an individual’s creditworthiness. AI-enabled “social scoring” mechanisms are banned outright by the AI Act, as are practices such as emotion recognition in schools and the workplace and predictive policing based on the profiling of personal attributes. While using AI for financial credit appraisals isn’t the same as “social scoring,” it will still likely be labeled a “high-risk” use case.

Companies should spend time understanding where exactly their AI-enabled tools, products, or services land on the AI Act’s risk scale, and determine the appropriate approach to testing and monitoring for each of their models.
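One lightweight way to start that exercise is an internal register that maps each AI use case to its presumed risk tier and the obligations that follow. The sketch below is a hypothetical illustration in Python; the classifications shown are examples only, and a real assessment requires legal review:

```python
# Hypothetical internal register of AI use cases and presumed EU AI Act risk tiers.
RISK_REGISTER = {
    "customer-service-chatbot": ("limited", "Disclose AI involvement; protect customer data"),
    "credit-scoring-model": ("high", "Rigorous testing, documentation, human oversight"),
    "spam-filter": ("minimal", "Voluntary code of conduct"),
}

def obligations_for(use_case: str) -> str:
    """Look up a use case's presumed tier and the follow-up actions it implies."""
    tier, actions = RISK_REGISTER.get(use_case, ("unclassified", "Assess before deployment"))
    return f"{use_case}: {tier} risk -> {actions}"

for case in RISK_REGISTER:
    print(obligations_for(case))
```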

Repercussions for noncompliance come in the form of fines. On the lower end: AI operators who provide incorrect, incomplete, or misleading information could be fined up to €7,500,000 or 1% of total worldwide annual turnover for the preceding financial year, whichever is higher. The most substantial fines are given for using systems—or placing systems on the market—that fall under the “unacceptable risk” category. These instances are subject to fines of up to €35,000,000 or up to 7% of annual worldwide turnover for companies, whichever is higher.
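To make the “whichever is higher” rule concrete, here is a small worked example in Python. The fine caps and percentages are the figures cited above; the turnover figure is invented for illustration:

```python
def max_fine(fixed_cap_eur: float, turnover_pct: float, annual_turnover_eur: float) -> float:
    """Fine ceiling under a 'whichever is higher' rule."""
    return max(fixed_cap_eur, turnover_pct * annual_turnover_eur)

# Misleading-information tier: up to €7.5M or 1% of worldwide annual turnover
print(max_fine(7_500_000, 0.01, 2_000_000_000))   # 20,000,000.0 (1% of turnover wins)

# Unacceptable-risk tier: up to €35M or 7% of worldwide annual turnover
print(max_fine(35_000_000, 0.07, 2_000_000_000))  # 140,000,000.0 (7% of turnover wins)
```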

If your business operates in the EU (and uses EU citizens’ data), you are subject to this legislation.

If your business doesn’t operate in the EU, it’s important to focus on what’s ahead. Many governments are drafting legislation that will create more regulation around AI, from President Biden’s Executive Order, to Canada’s AI and Data Act, to a push for AI regulation in Brazil.

The Solutions

The EU AI Act Compliance Checker suggests the following steps:

1. Create and keep documentation for providers integrating AI models, balancing transparency and protection of IP.
2. Put in place a policy to respect Union copyright law.
3. Publish a publicly available summary of AI model training data according to a template provided by the AI Office.
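As an illustration of step 3, a training-data summary could be assembled as a simple structured document. The field names below are assumptions made for the sake of example, not the AI Office’s actual template:

```python
# Hypothetical sketch of assembling a public training-data summary (step 3).
import json

training_data_summary = {
    "model_name": "example-model-v1",
    "data_sources": [
        {"name": "internal-support-tickets", "type": "proprietary", "license": "n/a"},
        {"name": "public-web-corpus", "type": "public", "license": "mixed"},
    ],
    "collection_period": "2022-01 to 2024-06",
    "copyright_policy": "Opt-out requests honored per Union copyright law",
    "contact": "compliance@example.com",
}

# Write the summary to a file that could be published alongside the model.
with open("training_data_summary.json", "w") as f:
    json.dump(training_data_summary, f, indent=2)
```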