JAAG EXPLAINER N° 7

What is the EU AI Act?

How will it work?

Is it fit for purpose, from a rights perspective?

What the AI Act will do

The European Union’s AI Act is the world’s first-ever comprehensive legal framework on AI.

Its aim is to foster trustworthy AI, by ensuring that AI systems respect fundamental rights, safety, and ethical principles, and by dealing with the risks of very powerful AI models.

The AI Act (Regulation (EU) 2024/1689) provides AI developers and deployers with clear requirements and obligations regarding specific uses of AI. It is part of a wider package of policy measures to support the development of trustworthy AI. Together, they aim to guarantee the safety and fundamental rights of people and businesses and to strengthen uptake, investment and innovation in AI.

The EU AI Act works by obliging organisations that develop, use, distribute or import AI systems in the EU to obey certain rules; they will face high fines if they do not comply (up to €35 million or 7% of global annual turnover).

The Act entered into force on 1 August 2024. Most of its provisions will apply from 2 August 2026, though certain provisions apply earlier: the prohibitions from February 2025 and the GPAI rules from August 2025.

Risk

The AI Act is based on the concept of risk. While most AI systems pose little or no risk and can help solve many societal challenges, certain AI systems create risks and can lead to undesirable outcomes.

The approach is based on the level of risk involved in the activity in question: the greater the potential risk, the stricter the rules that have to be followed. So,

  • AI systems for certain uses will be completely prohibited.

  • Certain AI systems will be designated as high-risk AI systems (HRAIS) and subject to extensive obligations, especially for providers.

  • There will be specific provisions governing general purpose AI (GPAI) models. These models are regulated regardless of use case.

  • Other AI systems are considered low risk. These AI systems will be subject only to limited transparency obligations where they interact with individuals.

The Act defines an AI system as: “a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments”.

Prohibited AI systems

AI systems that are prohibited include:

  • Certain AI systems for biometric categorisation and identification, including those that create facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage.

  • AI systems that deploy subliminal techniques, exploit vulnerabilities or otherwise manipulate human behaviour in ways that cause or are likely to cause significant physical or psychological harm.

  • AI systems for emotion recognition in the workplace and in education institutions (except for medical or safety reasons).

  • AI systems for social scoring, i.e. the evaluation or classification of natural persons or groups thereof over a period of time based on their social behaviour or personal characteristics.

High-risk AI systems

High-risk AI systems (‘HRAIS’) are AI systems in areas covered by EU product safety legislation and those intended for certain purposes, particularly in the following domains:

  • AI systems used as safety components in the management and operation of essential public infrastructure e.g. water, gas and electricity supplies.

  • AI systems used to determine access to education institutions or in assessing students e.g. AI systems used to grade exams.

  • AI systems used in recruitment and employment e.g. for placing job advertisements, scoring candidates, reviewing job applications, making promotion or termination decisions, or evaluating work.

  • AI systems used in migration, asylum and border control management or in various other law enforcement and judicial contexts.

  • AI systems used for influencing the outcome of democratic processes or the voting behaviour of voters.

  • AI systems used in the insurance and banking sectors e.g. for assessing creditworthiness or for risk assessment and pricing in life and health insurance.

The list of high-risk AI systems may grow as further high-risk uses emerge.

HRAIS providers will be subject to extensive obligations, including:

  • Risk management system: implementing process(es) for the entire lifecycle of the HRAIS to identify, analyse and mitigate risks.

  • Data and data governance measures: training and testing of HRAIS must be undertaken in accordance with strict data governance measures.

  • Technical documentation: drafting a comprehensive “manual” for HRAIS which contains specific minimum information.

  • Record-keeping: HRAIS must be designed to ensure automatic logging of events (e.g. the period of each use and the input data), and these logs must be kept by providers for defined periods.

  • Transparency: HRAIS must be accompanied by instructions for use which include detailed information regarding their characteristics, capabilities and limitations.

  • Human oversight: HRAIS must be designed so they can be overseen by humans, who should meet various requirements e.g. being able to understand the HRAIS (‘AI literacy’) and to stop its use.

  • Accuracy, robustness and cybersecurity: HRAIS must be accurate (with accuracy metrics included in instructions for use), resilient to errors or inconsistencies (e.g. through fail-safe plans) and resilient to cyber-attacks.

  • Quality management system: HRAIS providers must put in place a comprehensive quality management system.

  • Post-market monitoring: HRAIS providers must document a system to collect and analyse data provided by users on the performance of the HRAIS throughout its lifetime.

In addition, HRAIS providers will have these procedural obligations:

  • CE marking: Providers must ensure their HRAIS undergoes a conformity assessment procedure before the HRAIS is supplied and affix a CE mark to its documentation.

  • Registration in EU database: Providers and public bodies using HRAIS must register the HRAIS in an EU-wide database of AI systems.

  • Reporting obligations: HRAIS providers must report serious incidents or malfunctioning involving their HRAIS to a relevant authority within 15 days.

General Purpose AI

Beyond HRAIS, the most onerous requirements attach to general purpose AI (GPAI) models and focus on transparency. The obligations for all GPAI models include drawing up technical documentation, complying with EU copyright law and providing summaries of the content used for training.

GPAI models that are trained on extensive data sets and exhibit superior performance have the potential to pose systemic risks; these models are subject to additional requirements, expected to include:

  • Stringent model evaluations, including adversarial testing/red-teaming.

  • Assessing and mitigating possible systemic risks from use of the GPAI.

  • Greater reporting obligations to regulators, particularly where serious incidents occur.

  • Ensuring adequate cybersecurity for the GPAI with systemic risk.

  • Reporting on the energy efficiency of the GPAI.

Other AI systems

The only binding requirement for other AI systems is transparency (the Act does not apply to systems used solely for military or defence purposes, or purely for research and innovation): providers must ensure that AI systems intended to interact with individuals are designed and developed in such a way that individual users are aware that they are interacting with an AI system.

There is also a general obligation on all providers and deployers of AI systems to ensure that staff who deal with the operation and use of AI systems have sufficient AI literacy.

 

But will the AI Act protect individuals’ rights?

European civil society organisations have identified shortcomings in the Act. Overall, they find that it:

“fails to effectively protect the rule of law and civic space, instead prioritising industry interests, security services and law enforcement bodies” ... “measures intended to protect fundamental rights, including key civic rights and freedoms, are insufficient to prevent abuses [and] riddled with far-reaching exceptions, lowering protection standards, especially in the area of law enforcement and migration”.

The European Civic Forum has said:

“We are disappointed with the deal reached by the European institutions, which will undermine the safeguards that civil society has for long advocated for, and allow them to be abused. The AI Act as proposed by the European Commission was flawed from the beginning, driven by market concerns rather than peoples’ rights”.

Access Now commented:

“EU officials boast about being global trendsetters when it comes to regulating AI, but with this law they’ve set the lowest bar possible. The new AI Act is littered with concessions to industry lobbying, exemptions for the most dangerous uses of AI by law enforcement and migration authorities, and prohibitions so full of loopholes that they don’t actually ban some of the most dangerous uses of AI”.

Much of the detail of the implementation of the Act will need to be worked out through guidance and interpretation. Civil society bodies have asked to be actively involved in this process.

Key shortcomings that have been identified include:

Gaps and loopholes.

  • Although the Act prohibits certain AI applications deemed unacceptable with regard to fundamental rights, key exceptions call the effectiveness of these prohibitions into question. For example, the Act allows police to use real-time facial recognition in several cases, so critics believe that the Act “does not constitute an accurate safety net against harmful application of biometric surveillance”.

AI companies can self-assess risks.

  • The Act gives companies and authorities the power to decide unilaterally that their AI system does not pose a significant risk to people's health, safety, or rights, even if it falls into one of the high-risk categories. If a provider does this, all obligations for deployers of such systems will no longer apply.

Standards for fundamental rights impact assessments are weak.

  • The Act requires deployers of high-risk AI systems to list potential impacts on fundamental rights, but there is no clear obligation to assess whether these impacts are acceptable or to prevent them where possible. There is no requirement to consult external stakeholders, including civil society and people affected by AI, in the assessment of impact. Deployers of high-risk AI systems will have to publish a summary of the results of the impact assessment, but this will not apply to law enforcement and migration authorities.

The use of AI for ‘national security’ purposes will be a rights-free zone.

  • The Act automatically exempts from scrutiny AI systems developed or used solely for the purpose of national security, regardless of whether this is done by a public authority or a private company. Thus, governments could invoke national security to introduce otherwise prohibited systems, such as mass biometric surveillance.

Civic participation in implementation and enforcement is not guaranteed.

  • Public authorities or companies will not be required to engage with external stakeholders when assessing fundamental rights impacts of AI. Individuals whose rights have been violated will be able to file complaints, but civil society organisations will be able to represent them only when consumer rights are involved. So, for example, they would not be able to file a complaint on behalf of a group of people whose civic freedoms have been violated by the use of biometric surveillance in the streets. 

 

Sources: