Saidot Guide

An Introduction to EU AI Act

A Practical Guide to Governance,
Compliance, and Regulatory Guidelines

Requirements, Timelines, and Best Practices for Implementation in 2024 

Are you responsible for ensuring your organisation's AI systems comply with the EU AI Act? This flagship legislative framework, with its innovative risk-based approach to regulation, is setting the tone for global AI governance efforts. This guide offers practical insights to help you navigate the new AI Act’s requirements, risk classifications, and best practices for implementation.

Equip yourself with strategies to establish robust AI governance frameworks, conduct risk assessments, implement human oversight, and foster transparency – all while minimising legal risks and penalties. To prepare your organisation for the AI regulation that will shape the responsible development and use of these powerful technologies across Europe and beyond, read on.

Overview

What is the EU AI Act and who does it affect?

What is the EU AI Act?

The European Union (EU) AI Act is the world's most comprehensive regulation on artificial intelligence (AI). Critically, it doesn’t just apply to EU-based organisations. It applies to all organisations that bring AI systems to the EU market as well as those that put them into service within the EU.

What are the goals of the EU AI Act?

After developing and negotiating the technicalities of the Act, EU lawmakers have created a person-centred legal framework designed to safeguard individuals’ fundamental rights and protect citizens from the most dangerous risks posed by AI systems.

The purpose of the EU AI Act is to "improve the functioning of the internal market by laying down a uniform legal framework in particular for the development, the placing on the market, the putting into service and the use of artificial intelligence systems (AI systems) in the Union, in accordance with Union values, to promote the uptake of human centric and trustworthy artificial intelligence (AI) while ensuring a high level of protection of health, safety, fundamental rights as enshrined in the Charter of Fundamental Rights of the European Union (the ‘Charter’), including democracy, the rule of law and environmental protection, to protect against the harmful effects of AI systems in the Union, and to support innovation".

The Act was created to safeguard users and citizens and foster an environment where AI can thrive responsibly. Furthermore, the Act aims to proactively minimise risks associated with certain AI applications before they occur. This approach underscores a commitment to harnessing AI's benefits while putting human welfare and ethical considerations first.

The Act categorises AI systems based on their associated risks, with rules based on four distinct risk categories:

Risk levels

Unacceptable risk

These systems are banned entirely due to the high risk of violating fundamental rights (e.g. social scoring based on ethnicity).

High-risk

These systems require strict compliance measures because they could negatively impact rights or safety if not managed carefully (e.g. facial recognition).

Limited risk

These systems pose lower risks but still require some level of transparency (e.g. chatbots).

Minimal risk

The Act allows the free use of minimal-risk AI, like spam filters or AI-powered games.

The AI Act introduces specific transparency requirements for certain AI technologies, including deepfakes and emotion recognition systems. However, it exempts activities related to research, innovation, national security, military, or defence purposes. 

Interestingly, creators of free and open-source AI models largely escape the obligations normally imposed on AI system providers, with an exception made for those offering general-purpose AI (GPAI) models that pose significant systemic risks.

A notable aspect of the EU AI Act is its enforcement mechanism, which allows local market surveillance authorities to impose fines for non-compliance. This regulatory framework aims not only to govern the development and deployment of AI systems that will be put into service in the EU, but also to ensure these technologies are developed and used in a manner that is transparent, accountable, and respectful of fundamental rights.

Who will be affected by the EU AI Act?

Both organisations bringing AI systems to the EU market and those putting them into service in the EU will have new obligations and requirements under the Act. 

The EU AI Act outlines specific requirements for five groups, detailed in the Requirements section below: providers of high-risk AI systems, deployers of high-risk AI systems, providers and deployers of specific AI systems with transparency risk, providers of general-purpose AI (GPAI) models, and providers of GPAI models with systemic risk.

How is AI defined in the EU AI Act?

Non-technical stakeholders may be confused by the difference between the terms ‘neural networks’, ‘machine learning’, ‘deep learning’, and ‘artificial intelligence’, since these are related but distinct terms. Therefore, it’s important to understand the basis on which the EU AI Act defines the term ‘AI’.

The EU AI Act adopts the OECD's definition of artificial intelligence as “a machine-based system created to operate with different degrees of autonomy and that learns and adjusts its behaviour over time. Its purpose is to produce outputs, whether explicitly or implicitly intended, such as predictions, content, recommendations, or decisions that can have an impact on physical or virtual environments.” 

Notably, the Act acknowledges the varying degrees of autonomy and adaptiveness that AI systems may show following their deployment.

How does the EU define general-purpose AI (GPAI) models and GPAI systems?

The EU’s definition of general-purpose AI (GPAI) models is based on the key functional characteristics of a GPAI model, in particular, its generality and capability to competently perform a wide range of distinct tasks.

GPAI models are defined based on criteria that include the following:

  • These models are typically trained on large amounts of data through various methods, such as self-supervised, unsupervised or reinforcement learning.
  • GPAI models may be placed on the market in various ways, including through libraries, application programming interfaces (APIs), as direct download, or as a physical copy.
  • These models may be further modified or fine-tuned into new models.
  • Although AI models are essential components of AI systems, they do not constitute AI systems on their own.

Among other criteria, a model's parameter count can also indicate whether it is general-purpose. The Act indicates that models with at least a billion parameters, trained with a large amount of data using self-supervision at scale, should be considered as displaying significant generality.

A general-purpose AI system is created when a general-purpose AI model is integrated into or forms part of an AI system. Due to this integration, the system can serve a variety of purposes. A general-purpose AI system can be used directly or integrated into other AI systems.

For more information, refer to Recitals 97 and 98 of the EU AI Act.

How does the EU AI Act fit into global governance initiatives?

The EU AI Act is the world's first comprehensive legal framework on AI. It will enter into force on 1 August 2024, with additional obligations and penalties applying 6 to 36 months after that date.

Having built the EU AI Act on existing international standard-setting around AI ethics, the EU aims to facilitate a global convergence on best practices and well-defined risk categories. This regulation is intended as an example for other countries as they begin to align their regulatory approaches.

Requirements

What does the EU AI Act require of organisations?

01

Providers of high-risk AI systems

02

Deployers of high-risk AI systems

03

Providers and deployers of specific AI systems with transparency risk

04

Providers of general-purpose AI (GPAI)

05

Providers of general-purpose AI (GPAI) with systemic risk

01

Providers of high-risk AI systems

What does the EU AI Act require of providers of high-risk AI systems?

The EU AI Act requires providers of high-risk AI systems to establish a comprehensive risk management system, implement robust data governance measures, maintain technical documentation and meticulous records, and provide transparent information to deployers. Furthermore, the Act obliges providers to ensure human oversight, accuracy, robustness, and cybersecurity. The implementation of a quality management system and ongoing post-market monitoring are also important obligations outlined in the regulation.

Providers must undergo a rigorous conformity assessment process, obtain an EU declaration of conformity, and affix the CE marking to their systems. Registration in an EU or national database is compulsory, requiring the provision of identification and contact information, as well as a demonstration of conformity and adherence to accessibility standards. Diligent document retention and the generation of automated logs are also mandated.

The Act imposes a duty to:

  • Implement corrective actions
  • Report any serious incidents to authorities
  • Cooperate with investigations
  • Grant access to data and documents upon request

Collectively, these measures aim to foster accountability, transparency, and the responsible development of high-risk AI systems within the EU.

02

Deployers of high-risk AI systems

What does the EU AI Act require of deployers of high-risk systems?

Under the EU AI Act, deployers of high-risk AI systems also face clear obligations. They must implement and demonstrate measures to comply with the system provider’s instructions for use and assign human oversight to relevant individuals, ensuring their competence for the role.

Verifying the relevance and representativeness of input data is crucial, as is monitoring the system's operation and informing providers of any issues. Transparency and disclosure regarding AI decision-making processes are also required.

In addition to mandatory record-keeping, deployers must provide notifications for workplace deployments, and public authorities must register deployments in the EU Database. All organisations deploying high-risk systems must cooperate with authorities, including participation in investigations and audits. Conducting data protection impact assessments is a statutory requirement, and deployers must obtain judicial authorisation for exempted use of post-remote biometric identification. 

There are additional obligations for deployers who are:

  • Public bodies
  • Private entities providing public services
  • Banking and insurance service providers

These groups must also conduct fundamental rights impact assessments and notify the national authority of their results, ensuring the protection of fundamental rights and adherence to ethical principles.

Master AI governance with Saidot

Saidot is a SaaS platform for efficient AI governance, helping organisations unlock AI's promise responsibly.

It is an advanced AI lifecycle management tool for ensuring safe and ethical AI, founded on cutting-edge knowledge and leading best practices.

Learn more about Saidot

03

Providers and deployers of specific systems with transparency risk

What transparency obligations does the EU AI Act impose on providers and deployers of specific AI systems?

The EU AI Act also outlines several transparency obligations for providers and deployers of certain AI systems:

AI Interaction Transparency: Providers and deployers must ensure transparency when humans interact with an AI system. This includes making users aware that they are interacting with an AI rather than a human and providing clear information about the AI system's characteristics and capabilities.

Synthetic Content Disclosure and Marking Requirement: GPAI providers must disclose and mark any synthetic content, such as text, audio, images, or videos, generated by their systems. This marking must be clear and easily recognisable for the average person.

Emotion Recognition and Biometric Categorisation AI System Operation and Data Processing Transparency: For AI systems that perform emotion recognition or biometric categorisation, providers and deployers must ensure transparency about the system's operation and data processing activities. This includes providing information on the data used, the reasons for deploying the system, and the logic involved in the AI decision-making process.

Disclosure of AI-Generated Deepfake Content: Providers and deployers must disclose when content – including image, audio, and video formats – has been artificially created or manipulated by an AI system, commonly known as "deepfakes." This disclosure must be clear and easily understandable to the average person.

AI-Generated Public Interest Text Disclosure Obligation: For AI systems that generate text on matters of public interest, such as news articles or social media posts, providers and deployers must disclose that the content was generated by an AI system. This obligation aims to promote transparency and prevent the spread of misinformation.

04

Providers of general-purpose AI (GPAI)

What does the EU AI Act require of providers of general-purpose AI (GPAI)?

The EU AI Act imposes specific requirements on providers of general-purpose AI (GPAI). These include disclosing and marking synthetic content generated by their systems; maintaining technical documentation detailing the training, testing, and evaluation processes and results; and providing information and documentation to providers integrating the GPAI model into their AI systems.

Compliance with copyright law is also required, as providers must ensure that their systems do not infringe upon intellectual property rights. Additionally, the Act obliges providers to publish a sufficiently detailed summary of the content used to train their models.

Cooperation with authorities is another fundamental obligation, requiring providers to participate in investigations, audits, and other regulatory activities as necessary.

05

Providers of general-purpose AI (GPAI) models with systemic risk

What does the EU AI Act require of providers of general-purpose AI (GPAI) models with systemic risk?

Beyond the fundamental obligations that the EU AI Act imposes on all providers of GPAI models, it places further requirements on providers of GPAI models with systemic risk.

What is a GPAI model with systemic risk?

Since systemic risks result from particularly high capabilities, a GPAI model is considered to present systemic risk if it has high-impact capabilities – evaluated using appropriate technical tools and methodologies – or has a significant impact on the EU market due to its reach.

For more information, refer to Article 51 of the EU AI Act.

Requirements

These providers must conduct standardised model evaluations and adversarial testing to assess the system's robustness and identify potential vulnerabilities or risks.

Comprehensive risk assessment and mitigation measures are mandatory, including strategies to address identified risks and minimise potential harm. Providers must also establish robust incident and corrective measure tracking, documentation, and reporting processes – ensuring transparency and accountability in the event of system failures or adverse incidents.

Cybersecurity protection is a critical requirement, which means providers must implement robust safeguards to mitigate cyber threats and protect the integrity, confidentiality, and availability of their GPAI systems and associated data.

How to craft a generative AI policy (+ free template)

To help you get started, we've put together a free template for crafting your GenAI policy, including questions on each of the 12 recommended themes.

Download free template

EU AI Act timeline

When will the EU AI Act come into force?

When will the EU AI Act take effect?

On 13 March 2024, the European Parliament voted overwhelmingly to approve the new AI Act, with 523 votes in favour, 46 against, and 49 abstentions. The implementation of the AI Act will now transition through the following stages:

  • At the national level, Member States will designate one or more national competent authorities, including the national supervisory authority, to supervise the application and implementation of the AI Act.
  • The AI Act enters into force 20 days after its publication in the Official Journal of the European Union.
  • The ban on prohibited AI practices applies 6 months after the AI Act enters into force (the sketch after this list shows how these offsets translate into concrete dates).
  • Obligations concerning general-purpose AI models apply 12 months after the AI Act enters into force.
  • Requirements and obligations concerning providers of standalone high-risk AI systems apply 24 months after the AI Act enters into force.
  • Transparency obligations apply 24 months after the AI Act enters into force for providers of AI systems that interact with people or create synthetic content, for deployers of emotion recognition, biometric categorisation, or deepfake systems, and for deployers of certain AI systems that generate or manipulate text.
  • Deployers of high-risk AI systems developed by third-party providers must comply 24 to 36 months after the AI Act enters into force.
  • Providers of high-risk AI systems subject to Union harmonisation legislation must comply 36 months after the AI Act enters into force (for more information, refer to Annex I of the EU AI Act).
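As a rough illustration, assuming entry into force on 1 August 2024 (see the latest update below), these offsets can be turned into approximate application dates. This is a minimal sketch for orientation only; the exact application dates are fixed by the Act itself.

```python
from datetime import date
from dateutil.relativedelta import relativedelta  # third-party: python-dateutil

ENTRY_INTO_FORCE = date(2024, 8, 1)  # assumed entry-into-force date

# Offsets in months, taken from the timeline above.
OFFSETS = {
    "Prohibited AI practices": 6,
    "General-purpose AI model obligations": 12,
    "Most high-risk and transparency obligations": 24,
    "High-risk systems under Annex I harmonisation legislation": 36,
}

for obligation, months in OFFSETS.items():
    applies_from = ENTRY_INTO_FORCE + relativedelta(months=months)
    print(f"{obligation}: applies from roughly {applies_from.isoformat()}")
```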


In the long term, the European Union will work with the G7, the OECD, the Council of Europe, the G20, and the United Nations to achieve global alignment on AI governance.

Key milestones

  • March 13, 2024: The European Parliament voted to approve the new AI Act.
  • February 21, 2024: Launch of the European Artificial Intelligence Office.
  • February 13, 2024: Approval of the outcome of negotiations with member states on the AI Act.
  • January 2024: the European Commission issued its decision to establish the European AI Office.
  • December 9, 2023: Provisional agreement on the AI Act reached by the European Parliament and Council.
  • June 14, 2023: Adoption of the European Parliament's negotiating position on the AI Act.
  • April 21, 2021: The Commission published a proposal to regulate artificial intelligence in the European Union.

What are the latest updates?

Latest update: On 12 July 2024, the EU AI Act was published in the Official Journal of the European Union, meaning the Act will enter into force on 1 August 2024.

NEW RESEARCH BY SAIDOT

What businesses think of the new EU AI Act and its impact on them

In our new research, the EU AI Act is seen as a positive move by European business decision-makers, but concerns remain about how to comply.

Read our research

Penalties and enforcement

What penalties and enforcement will the EU AI Act introduce?

What are the penalties of the EU AI Act?

The EU AI Act aims to ensure the safe, ethical, and trustworthy development and use of artificial intelligence technologies. Therefore, the Act contains strong enforcement mechanisms if companies or organisations fail to comply with its obligations. 

The Act empowers market surveillance authorities in EU countries to issue fines for non-compliance. There are three tiers of fines based on the severity of the infraction, ranging from supplying incorrect information to outright violations of the Act's prohibitions.

The penalties for non-compliance are: 

  • €35 million or 7% of global annual turnover, whichever is higher, for violating prohibitions
  • €15 million or 3% of global annual turnover, whichever is higher, for violations of the AI Act’s obligations
  • €7.5 million or 1.5% of global annual turnover, whichever is higher, for the supply of incorrect information

When will these penalties come into force?

The EU AI Act enforces compliance through fines, but these penalties will be phased in over time. Here's a simple breakdown:

Most penalties:

  • Apply 24 months after the date the EU AI Act comes into force.
  • Given that the Act was published in the Official Journal on 12 July 2024 and enters into force on 1 August 2024, expect most penalties to apply from August 2026.

Exceptions:

  • Prohibitions: Violations of particularly strict prohibitions, like using social scoring, will be penalised after just 6 months.
  • General-Purpose AI (GPAI): Breaches of obligations concerning GPAI models will incur fines after 12 months.

Summary:

  • Expect full enforcement with the highest penalties to begin in 2026.
  • Some especially strict prohibitions may be enforced as early as February 2025.
  • This staggered approach gives organisations time to adjust their practices before facing the highest penalties.

High-risk AI systems

Which AI systems and use cases are classified as high-risk under the EU AI Act?

Which AI use cases will be prohibited by the EU AI Act?

Here is a breakdown of some prohibited use cases under the EU AI Act:

  • Subliminal or manipulative systems, or other systems which exploit vulnerabilities.
  • Biometric categorisation using sensitive characteristics (such as race, ethnicity, gender, political or religious beliefs).
  • Social scoring systems that evaluate the trustworthiness of individuals based on their behaviour or personality characteristics.
  • Real-time remote biometric identification in publicly accessible spaces for law enforcement purposes, with narrow exceptions for specific uses.
  • Individual predictive policing that assesses the risk of individuals committing crimes based solely on their personal characteristics or past record.
  • Creation or expansion of facial recognition databases through the untargeted scraping of facial images from the internet (such as social media) or CCTV footage.
  • Emotion recognition in workplaces and educational institutions, which infers individuals’ moods or emotions from observable cues.

Which AI systems are high-risk under the EU AI Act?

The EU AI Act classifies certain AI systems as high-risk due to their potential impact on people's lives, rights, and safety. These high-risk systems require stricter compliance measures to ensure their responsible development and use. 

Here's a breakdown of some key high-risk categories:

  • AI Systems Covered by Sectoral Product Safety Regulations: This includes systems governed by existing product safety regulations, such as those for medical devices, vehicles, and toys. AI used in these sectors must meet high safety standards to minimise risks.
  • Biometric Technologies: AI systems involved in biometric identification, categorisation (classification based on biometric data), and emotion recognition are considered high-risk due to privacy concerns and potential for bias.
  • Critical Infrastructure Management: AI systems used to manage critical infrastructure, like energy grids or transportation systems, are high-risk because failures could have significant societal consequences.
  • Education and Training: AI systems used in education and vocational training are high-risk if they have a significant influence on a person's educational or career path. Fairness and transparency are crucial in these applications.
  • Employment and Worker Management: AI used in recruitment, performance evaluation, or other aspects of worker management is high-risk if it can unfairly disadvantage individuals. 
  • Access to Essential Services: AI used in areas like credit scoring, insurance pricing, or emergency call classification is high-risk because it can impact access to essential services. 
  • AI in Public Governance: This includes AI used in influencing elections and voter behaviour, law enforcement, migration and border control, and the administration of justice. These applications require careful safeguards to protect fundamental rights and prevent misuse.

It's important to note that within these categories, exemptions exist for narrow, low-risk applications of AI systems. These exemptions generally apply to situations where the AI has minimal influence on a decision's outcome.

For more details on high-risk categories, refer to Annex III of the EU AI Act.

How to classify risk levels of AI systems

Here's a breakdown of the key factors to consider when assessing your AI system's risk level:

1. Potential Impact on People and Society:

  • Health and Safety: Can your AI system cause physical or psychological harm to individuals or the environment?
  • Fundamental Rights: Could your AI system and its use raise concerns about privacy, non-discrimination, or fairness?
  • Social and Economic Impact: Could your AI system lead to job displacement, social unrest, or manipulation of public opinion?

2. Intended Purpose and Sector:

  • The specific application: How will your AI system be used? Is it involved in high-stakes decisions like medical diagnosis, hiring, or law enforcement?
  • The sector: Certain sectors, like education, finance, or critical infrastructure, carry inherent risks when using AI due to their potential impact on individuals or society.

By considering these factors, you can start to examine how your AI system might fall into one of the following risk levels:

  • Unacceptable Risk: These systems are banned entirely due to the high risk of violating fundamental rights (e.g., social scoring based on ethnicity).
  • High-Risk: These systems require strict compliance measures because they could negatively impact rights or safety if not managed carefully (e.g., facial recognition).
  • Limited Risk: These systems pose lower risks but still require some level of transparency (e.g., chatbots).
  • Minimal Risk: The Act allows the free use of minimal-risk AI, like spam filters or AI-powered games.

Remember: Transparency is crucial across all risk levels.  Even for lower-risk systems, users should be aware they are interacting with AI.
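As a rough illustration of how these factors combine, here is a minimal, non-authoritative triage sketch in Python. The category lists are hypothetical simplifications and do not reproduce the Act's annexes; a proper legal assessment is always required.

```python
# Illustrative first-pass screening only; not a legal classification under the Act.
PROHIBITED_PRACTICES = {"social scoring", "subliminal manipulation", "untargeted facial scraping"}
HIGH_RISK_AREAS = {"biometrics", "critical infrastructure", "education", "employment",
                   "essential services", "law enforcement", "migration", "justice"}

def screen_risk_level(practice: str, sector: str, interacts_with_people: bool) -> str:
    """Return a first-pass risk label based on the factors discussed above."""
    if practice in PROHIBITED_PRACTICES:
        return "unacceptable (prohibited)"
    if sector in HIGH_RISK_AREAS:
        return "high-risk (strict compliance measures)"
    if interacts_with_people:
        return "limited risk (transparency obligations)"
    return "minimal risk"

# Example: an AI tool used to rank job applicants.
print(screen_risk_level("candidate ranking", "employment", interacts_with_people=False))
# -> high-risk (strict compliance measures)
```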

Compliance

How can organisations comply with the EU AI Act?

Benefits of compliance with the EU AI Act

Complying with the EU AI Act brings several advantages for companies and organisations. It can:

  • Enhance your reputation: Developing a reputation for responsible AI will help you build trust with consumers, investors, and regulators.
  • Increase your competitiveness: Following the rules allows you to operate smoothly within the EU market.
  • Improve your AI systems: The compliance process can help you identify areas for improvement in your AI's efficiency, effectiveness, and overall quality.
  • Minimise legal risks: By complying, you reduce the chances of facing penalties or legal action.

What do I need to do to comply with the EU AI Act?

While your obligations may vary depending on whether you are a deployer or provider of a high-risk AI system, or a provider of a GPAI system, there is a general framework you can follow to help you ensure compliance at every stage of the product lifecycle.

Here are a series of steps you can use to prepare to meet your obligations under the EU AI Act:

  • Understand Your Obligations: Review your specific requirements, whether you are a provider or deployer of a high-risk AI system, a provider or deployer of an AI system with transparency risk, or a provider of a general-purpose AI model (with or without systemic risk).
  • Review Your Transparency Obligations: The EU AI Act requires specific transparency measures from providers and deployers of specific AI systems with transparency risk.
  • Define and Document Use Cases: Clearly define the context and purpose of your AI applications (use cases). Next, categorise your AI systems based on the relevant policies that apply to your organisation.
  • Conduct Risk Assessments: Perform a thorough risk assessment for all your AI systems and their associated use cases. For example, if an AI system is used for loan approval decisions, this would include an assessment of how this could lead to unfair bias or discrimination against certain demographic groups due to issues like historical bias in the training data.
  • Implement Risk Management: Develop a robust risk management system that addresses the potential risks identified in your assessments. The key is to have concrete mitigation strategies outlined for each identified risk surrounding your AI system's use.
    • For example, if an AI system for recruiting software shows a risk of gender bias against female applicants, the risk management plan could include techniques like using debiasing algorithms to remove sensitive attributes from the training data, instituting human oversight to review and approve hiring recommendations, conducting regular model audits to monitor for bias, and establishing clear processes to receive and address user complaints of discrimination.
  • Ensure Data Protection Compliance: Follow the practices laid down by existing data protection regulations, such as the General Data Protection Regulation (GDPR), including user consent and data security. For many organisations, this may include expanding and using existing data privacy teams, deploying their in-house experience to prepare for the AI Act’s obligations.
  • Maintain Detailed Records: Keep comprehensive records and documentation of your AI systems' technical specifications and performance data. This documentation will be crucial for audits, evaluation, and ongoing monitoring (a minimal sketch of such a record follows this list).
  • Provide User Transparency: Provide users with clear and easy-to-understand information about how your AI systems work. This includes explaining the functionalities and any potential risks associated with their use.
  • Implement Human Oversight: For high-risk AI systems, establish mechanisms that guarantee human oversight and accountability. This means having humans involved in key decisions and being able to explain how the AI reached its conclusions.
  • Continuous Monitoring: Once your AI systems are deployed, continuously monitor their performance and make adjustments as needed. This ongoing evaluation helps ensure your AI systems function as intended and meet user expectations.
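To make steps such as defining use cases, assessing risks, and maintaining records more concrete, here is a minimal sketch of how an internal AI use-case register entry might look. The field names and labels are illustrative assumptions, not terms defined by the Act.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class RiskItem:
    description: str                  # e.g. "historical bias in training data"
    likelihood: str                   # e.g. "low" / "medium" / "high"
    severity: str                     # e.g. "low" / "moderate" / "critical"
    mitigations: list[str] = field(default_factory=list)

@dataclass
class AIUseCase:
    name: str                         # e.g. "loan approval scoring"
    role: str                         # "provider" or "deployer"
    risk_level: str                   # "unacceptable" / "high" / "limited" / "minimal"
    intended_purpose: str
    human_oversight: str              # who can review and override decisions
    risks: list[RiskItem] = field(default_factory=list)
    last_reviewed: date = field(default_factory=date.today)

register = [
    AIUseCase(
        name="loan approval scoring",
        role="deployer",
        risk_level="high",
        intended_purpose="support credit decisions for consumer loans",
        human_oversight="a credit officer approves every automated recommendation",
        risks=[RiskItem("unfair bias against certain demographic groups", "medium", "critical",
                        ["debiasing of training data", "regular model audits", "complaint process"])],
    )
]
```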

What are the EU guidelines for training data and datasets?

The EU AI Act establishes specific guidelines for the training data and datasets used in AI systems, particularly those classified as high-risk. Here's a breakdown of the key requirements:

High-Quality Data: Training, validation, and testing datasets must possess three key qualities:

  • Relevance: The data must be directly relevant to the intended purpose of the AI system. 
  • Representativeness: The data should sufficiently represent the real-world situations in which the AI system will be used. 
  • Accuracy and Completeness: Datasets should be as free from errors and missing information as possible.
     

Appropriate Statistical Properties: Datasets used in high-risk AI systems must have appropriate statistical properties, including, where applicable, those that apply to the persons or groups of persons in relation to whom the high-risk AI system is intended to be used. These characteristics should be visible in each individual dataset or in a combination of different datasets. 

Data Transparency: Organisations developing high-risk AI systems should be able to demonstrate that their training data meets the aforementioned quality and fairness standards. This may involve documentation or traceability measures.

Contextual Relevance: To the extent required by their intended purpose, high-risk AI systems' training datasets should reflect the specific environment where the AI will be deployed. This means considering factors like geographical location, cultural context, user behaviour, and the system's intended function. For example, an AI used for legal text analysis in Europe should be trained on data that includes European legal documents.
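As an illustration of the completeness and representativeness checks described above, the sketch below computes a few basic statistics with pandas. The column names, data, and threshold are hypothetical; real checks should be designed with domain and legal experts.

```python
import pandas as pd

# Hypothetical training data with a demographic column and a label column.
df = pd.DataFrame({
    "age_group": ["18-25", "26-40", "26-40", "41-65", "41-65", "65+"],
    "label":     [1, 0, 1, 0, 1, None],
})

# Completeness: share of missing values per column.
print("Missing value share per column:")
print(df.isna().mean())

# Representativeness: distribution of records across the groups the system will be used on.
group_share = df["age_group"].value_counts(normalize=True)
print("Share of records per age group:")
print(group_share)

# Flag groups that are heavily under-represented (illustrative 20% threshold).
for group, share in group_share.items():
    if share < 0.20:
        print(f"Warning: group '{group}' makes up only {share:.0%} of the data")
```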

Compliance Checklist

01

Understand your primary obligations

02

Review your transparency obligations

03

Define and document use cases

04

Conduct risk assessments

05

Implement risk management

06

Ensure data protection compliance

07

Maintain detailed records

08

Implement human oversight

09

Continuous monitoring

Limited-time offer: Comply with AI Act's first rules by Feb. 1st, 2025.

By the deadline, you need to ensure your organisation has no prohibited AI systems in use and has AI literacy training scheduled.

If you have prohibited AI systems in use, your company may be fined €35 million or 7% of global annual turnover, whichever is higher.

Are you short on time but want to avoid the fines? Saidot can help you comply quickly.

So, here's our limited-time offer:

When you subscribe to Saidot's AI Governance Platform now, you'll also get free AI literacy training for your teams.

Saidot AI Governance helps you:

1. Further establish AI literacy with a constantly-curated library of AI models, evaluations, risks, and regulations.
2. Create an AI inventory to ensure your organisation has no prohibited AI systems.

*To redeem the offer, we expect you to commit to Saidot for at least 6 months.

Book a call

Compliance

How can organisations comply with the EU AI Act?

Self-assessing risk level: who qualifies for exemption under the EU AI Act?

If you are a provider and you have determined that your AI system is not high-risk, you should document this assessment before you put it on the market or into service. You should also provide this documentation to national competent authorities upon request and register the system in the EU database. 

In addition, market surveillance authorities can carry out evaluations on your AI system to determine whether it is actually high-risk. Providers who classify their AI system as limited or minimal risk when it is actually high-risk will be subject to penalties.

You can determine whether an AI system qualifies for an exemption from high-risk categorisation based on one or more of the following criteria:

  • The AI system is intended to perform a narrow procedural task.
  • The AI system is intended to improve the result of a previously completed human activity.
  • The AI system is intended to detect decision-making patterns or deviations from prior decision-making patterns and is not meant to replace or influence a previously completed human assessment without proper human review.
  • The AI system is intended to perform a preparatory task for an assessment relevant to the purpose of the use cases listed in Annex III.

For more information, refer to Article 6(3) of the EU AI Act.

Registering high-risk AI systems in the EU or national database

What is the EU database?

The European Commission, in collaboration with the Member States, will set up and maintain an EU database containing information about high-risk AI systems. 

How do you register? 

Before placing a high-risk AI system on the market or putting it into service, the provider or their authorised representative registers the system and the required information in the EU database. For specific high-risk systems related to law enforcement and border control, registration occurs in a secure, non-public section of the EU database, accessible only to the Commission and national authorities. The information you submit should be easily navigable and machine-readable.

For more information, refer to Article 49 of the EU AI Act.

What information should you register, and who’s responsible? 

The provider or their authorised representative must enter the following information into the EU database for each AI system:

  1. Name, address, and contact details of the provider.
  2. If someone else is submitting the information on the provider's behalf, that person's name, address, and contact details.
  3. Name, address, and contact details of the authorised representative, if applicable.
  4. The AI system's trade name and any additional references for identifying and tracking it.
  5. A description of the AI system's intended purpose, components, and supported functions.
  6. A basic, concise description of the data/inputs used by the system and how it operates logically.
  7. The status of the AI system (on the market, in service, no longer available, recalled).
  8. For high-risk systems, the type, number, expiry date of certificate from the notified body, and that body's name/ID number.
  9. A scanned copy of the certificate mentioned above, if applicable.
  10. The EU member states where the AI system is or was placed on the market, put into service, or made available.
  11. A copy of the EU declaration of conformity.
  12. Electronic instructions for use (not required for high-risk law enforcement, migration, and border control AI systems).

For more information, refer to Annex VIII, Section A of the EU AI Act.
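Because database entries must be machine-readable, it can help to keep the Annex VIII, Section A information in a structured format internally. The snippet below is a hypothetical JSON-style representation of a few of the fields listed above; it is not an official schema of the EU database.

```python
import json

# Hypothetical internal record mirroring part of an Annex VIII, Section A entry.
registration_entry = {
    "provider": {
        "name": "Example AI Oy",
        "address": "Example Street 1, Helsinki, Finland",
        "contact": "compliance@example.com",
    },
    "system": {
        "trade_name": "ExampleScore",
        "intended_purpose": "credit scoring support for consumer loans",
        "status": "on the market",            # or: in service / no longer available / recalled
        "member_states": ["FI", "DE"],
    },
    "conformity": {
        "eu_declaration_of_conformity": "internal-doc-ref-123",
        "notified_body_certificate": None,    # only where a notified body is involved
    },
}

print(json.dumps(registration_entry, indent=2))
```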

For AI systems deployed by public authorities, agencies, or bodies, the deployer or their representative must enter:

  1. The deployer's name, address, and contact details.
  2. The name, address, and contact details of the person submitting information on behalf of the deployer.
  3. A summary of the fundamental rights impact assessment findings per Article 27.
  4. The URL of the AI system's entry in the EU database by its provider.
  5. A summary of the data protection impact assessment per EU regulations, if applicable.

For more information, refer to Annex VIII, Section B of the EU AI Act.

Who sees this information? 

The information in the EU database will be publicly accessible and presented in a user-friendly, machine-readable format to allow easy navigation by the general public.

However, there is an exception for information related to the testing of high-risk AI systems in real-world conditions outside of AI regulatory sandboxes. This information will only be accessible to market surveillance authorities and the European Commission, unless the prospective provider or deployer also provides explicit consent for it to be made publicly available.

For more information about this exception, refer to Article 60 of the EU AI Act.

For all other entries in the database, the details provided by providers, authorised representatives, and deploying public bodies will be openly accessible to the public. Providers should ensure the submitted information is clear and easily understandable.

Governance

How can organisations implement AI governance frameworks for the EU AI Act?

Creating a framework for AI governance

AI governance should be a systematic, iterative process that organisations can continuously improve upon. To achieve this, companies need to establish a robust, flexible framework designed to meet their compliance needs and responsible AI goals. This framework should include:

  • Well-defined roles and responsibilities for employees to foster effective oversight and accountability.
  • An AI ethics code grounded in widely accepted ethical principles to guide ethical decision-making and deployment of AI systems.
  • Rigorous processes to monitor, evaluate, and report on AI systems throughout their lifecycle, whether the organisation is the provider or deployer.
  • Human oversight mechanisms woven into the design and deployment phases of AI systems, facilitating timely intervention and course correction.

As regulations and technologies evolve, organisations must proactively update their governance frameworks to maintain compliance and alignment with best practices.

What is the European AI Board (EAIB)?

The European AI Board (EAIB) is an independent advisory body that provides technical expertise and advice on implementing the EU AI Act. It comprises high-level representatives from national supervisory authorities, the European Data Protection Supervisor, and the European Commission.

The AI Board’s role is to facilitate a smooth, effective, and harmonised implementation of the EU AI Act. Additionally, the EAIB will review and advise on commitments made under the EU AI Pact.

How to do an AI risk assessment

Comprehensive AI risk assessments are a critical component of responsible AI governance. They allow stakeholders to proactively identify potential risks posed by their AI systems across domains such as health and safety, fundamental rights, and environmental impact. While there is no single method or ‘silver bullet’ for conducting a risk assessment, the National Institute of Standards and Technology (NIST)’s AI Risk Management Framework offers a good model for identification, analysis, and mitigation.

Another good starting point could be to use an organisation’s pre-existing processes to assess general risk and then assign scores to each potential risk posed by an AI system, depending on the impact and likelihood of each risk. 

Typically, the risk assessment process involves first identifying and ranking AI risks as unacceptable (prohibited), high, limited, or minimal based on applicable laws and industry guidelines. For example, AI systems used for social scoring, exploiting vulnerabilities, or biometric categorisation based on protected characteristics would likely be considered unacceptable under the EU AI Act.

Once unacceptable risks are eliminated, the organisation should then analyse whether remaining use cases are high-risk, such as in critical infrastructure, employment, essential services, law enforcement, or processing of sensitive personal data. Lower risk tiers may include AI assistants like chatbots that require user disclosure.

The next step is to evaluate the likelihood and potential severity of each identified risk. One helpful tool is a risk matrix, which helps organisations characterise probability and impact as critical, moderate, or low to guide further action. A financial institution deploying AI credit scoring models, for instance, should assess risks like perpetuating demographic biases or unfair lending practices.

Thorough documentation of risks, mitigation strategies, and human oversight measures is essential for demonstrating accountability. The EU AI Act will also require fundamental rights impact assessments to analyse affected groups and human rights implications for high-risk AI use cases.

Importantly, risk assessments aren't a one-time exercise. They require continuous monitoring and updates as AI capabilities evolve and new risks emerge over the system's lifecycle. Proactive, holistic assessments enable the deployment of AI systems that operate within defined, acceptable risk tolerances.
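As a simple illustration of the likelihood-and-severity step, the sketch below scores risks with a small matrix. The scales and thresholds are assumptions chosen for the example, not values prescribed by the Act or by NIST.

```python
# Illustrative 3x3 risk matrix: likelihood x severity -> coarse rating.
LIKELIHOOD = {"unlikely": 1, "possible": 2, "likely": 3}
SEVERITY = {"low": 1, "moderate": 2, "critical": 3}

def rate_risk(likelihood: str, severity: str) -> str:
    """Combine likelihood and severity into a rating used to prioritise mitigation."""
    score = LIKELIHOOD[likelihood] * SEVERITY[severity]
    if score >= 6:
        return "critical"
    if score >= 3:
        return "moderate"
    return "low"

# Example: a credit-scoring model that could perpetuate demographic bias.
print(rate_risk("possible", "critical"))  # -> critical
```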

Accountability and leadership structures

Establishing robust accountability and leadership structures is crucial for effective AI governance. Clear roles and responsibilities should be designated for key aspects like data management, risk assessments, compliance monitoring, and ethical oversight. These structures must transparently show decision-making authorities and reporting lines, enabling timely interventions when needed. They should also be regularly reviewed and updated to align with evolving regulations, best practices, and the organisation's evolving AI capabilities.

To establish these accountability structures, organisations should thoroughly assess their AI initiatives, identify associated risks and stakeholders, and consult relevant subject matter experts. Those appointed to AI governance roles must receive proper training and resources, fostering a culture of shared responsibility and continuous improvement around responsible AI practices.

The guidance aligns with recommendations from the Center for Information Policy Leadership (CIPL), which strongly advises organisations to create robust leadership and oversight structures as the foundation for accountable AI governance. This begins with the senior executive team and board visibly demonstrating ethical commitment through formal AI principles, codes of conduct, and clear communications company-wide.

This is just the start. The CIPL also recommends appointing dedicated AI ethics oversight bodies, whether internal cross-functional committees or external advisory councils with diverse expertise. The key is ensuring these bodies have real authority to scrutinise high-risk AI use cases, resolve challenges, and ultimately shape governance policies and practices.

These leadership structures could include designated "responsible AI" leads or officers who have the authority to drive the governance program holistically across business units. Meanwhile, a centralised governance framework gives organisations strategic direction while allowing individual teams the flexibility to adapt practices to their context.

However, this recommendation will not be right for everyone, so it’s key to examine your organisation’s individual needs and capabilities. Another way to enhance your AI compliance efforts is to take advantage of and expand any existing privacy teams and roles to include AI ethics, data responsibility, and digital trust. Ultimately, successful frameworks will require multidisciplinary teams that combine legal, ethical, technical, and business perspectives.

Monitoring and incident reporting

Organisations should establish a comprehensive system for reporting AI-related incidents, including data breaches, biases, or unintended consequences. This system should also include mechanisms for prompt identification, documentation, and escalation of incidents, enabling timely and transparent responses. When incidents occur, acting promptly and transparently is essential, as is taking the necessary steps to address and resolve the issues, mitigate potential impacts, and prevent future occurrences.

For example, if we imagine an organisation operating AI-powered autonomous vehicles, its monitoring and incident reporting mechanisms might include deploying real-time monitoring systems to scrutinise the decisions made in navigation, instituting agile incident reporting mechanisms such as anomaly detection alerts, and enacting protocols for immediate intervention in response to safety concerns or system malfunctions, ensuring operational integrity and passenger safety.
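A minimal sketch of what an internal incident log and escalation rule might look like in practice; the fields, severity labels, and escalation rule are assumptions for illustration, not requirements copied from the Act.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class AIIncident:
    system_name: str
    description: str                  # e.g. "repeated anomaly alerts during lane changes"
    severity: str                     # "low" / "moderate" / "serious"
    detected_at: datetime
    escalated: bool = False

def log_and_escalate(incident: AIIncident, log: list) -> None:
    """Record every incident and flag serious ones for immediate review and reporting."""
    log.append(incident)
    if incident.severity == "serious":
        incident.escalated = True
        print(f"Escalating serious incident on {incident.system_name}: {incident.description}")

incident_log: list = []
log_and_escalate(
    AIIncident("autonomous-vehicle-navigation", "repeated anomaly alerts during lane changes",
               "serious", datetime.now()),
    incident_log,
)
```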

Implementing human oversight on AI systems

Organisations should develop clear procedures to address issues or errors identified through human oversight mechanisms, ensuring appropriate interventions and course corrections can be made in a timely manner. It is essential to ensure that there is always a 'human in the loop' when it comes to decision-making and actions taken by AI systems. This means human judgment has the final say in high-impact situations.

For example, a hypothetical healthcare provider using AI-assisted diagnostic tools may have human medical professionals review and validate the AI's recommendations, ensuring that any potential errors or biases are identified and corrected before treatment decisions are made.
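One simple way to picture the 'human in the loop' requirement is a gate that holds back high-impact AI recommendations until a qualified person approves them. The sketch below is illustrative only; which decisions count as high-impact, and who may approve them, is for each organisation to define.

```python
def apply_recommendation(action: str, high_impact: bool, reviewer_approval: bool | None) -> str:
    """Act on low-impact recommendations automatically; gate high-impact ones on human approval."""
    if not high_impact:
        return f"applied automatically: {action}"
    if reviewer_approval is None:
        return "pending human review"
    if reviewer_approval:
        return f"applied after human approval: {action}"
    return "rejected by human reviewer"

# Example: an AI-assisted diagnostic suggestion awaiting review by a clinician.
print(apply_recommendation("recommend further imaging", high_impact=True, reviewer_approval=None))
# -> pending human review
```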

Steps to carry out a fundamental rights impact assessment

In order to protect fundamental rights, organisations deploying high-risk AI systems must carry out a fundamental rights impact assessment before use. This applies to public bodies, private operators providing public services, and certain high-risk sectors like banking or insurance.

The aim is to identify specific risks to individuals or groups likely to be affected by the AI system. Deployers must outline measures to mitigate these risks if they materialise.

The impact assessment should cover the intended purpose, time period, frequency of use, and categories of people potentially impacted. It must identify potential harms to fundamental rights.

Deployers should use relevant information from the AI provider's instructions to properly assess impact. Based on identified risks, they must determine governance measures, such as human oversight and complaint procedures.

After the assessment, the deployer must notify the relevant market surveillance authority. Stakeholder input from affected groups, experts and civil society may be valuable. The AI Office will develop a questionnaire template to facilitate compliance and reduce the administrative burden for deployers.
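To keep the elements listed above together, a deployer might maintain one internal record per assessment. The structure below is a hypothetical sketch and does not reproduce the AI Office's forthcoming questionnaire template.

```python
from dataclasses import dataclass, field

@dataclass
class FundamentalRightsImpactAssessment:
    system_name: str
    intended_purpose: str
    period_and_frequency_of_use: str
    affected_groups: list[str] = field(default_factory=list)
    potential_harms: list[str] = field(default_factory=list)
    mitigation_and_oversight: list[str] = field(default_factory=list)
    authority_notified: bool = False

fria = FundamentalRightsImpactAssessment(
    system_name="benefit-eligibility-screening",
    intended_purpose="pre-screen applications for a public benefit scheme",
    period_and_frequency_of_use="continuous use, reviewed quarterly",
    affected_groups=["benefit applicants", "caseworkers"],
    potential_harms=["wrongful denial of benefits", "indirect discrimination"],
    mitigation_and_oversight=["caseworker reviews all negative decisions", "complaint procedure"],
)
```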

“By integrating Azure AI with Saidot’s innovative AI governance platform, we’re empowering our customers to achieve greater cross-functional collaboration and enabling them to align their AI solutions with their own principles and regulatory requirements.”

Sarah Bird, CPO, Responsible AI at Microsoft

Read full blog
Saidot

Are you ready for the EU AI Act?

How can Saidot help your organisation with AI governance and compliance?

For many organisations, AI governance and compliance can be an overwhelming process. If you need to keep up with the ever-evolving landscape of AI regulations, including the EU AI Act, Saidot Library can provide the information you need as a curated, continuously updated knowledge base on AI governance. It includes:

  • AI policies and regulations.
  • Key information on foundation models.
  • Risks and best-practice mitigation methods.
  • Evaluation techniques to assess the safety and performance of AI models.

If you require more holistic advice and support on your AI governance journey, our experts can help you develop and implement AI governance frameworks tailored to your organisation’s specific needs and industry. We offer governance-as-a-service so you can get started with high-quality governance and learn from our experts while setting up your first AI governance cases.

Who is Saidot for?

Product & data science teams

Apply systematic governance at every step of your system lifecycle. Understand regulatory requirements and implement your system in alignment with them. Identify and mitigate risks, and monitor the performance of your AI systems.

Legal & compliance teams

Access a wide range of AI regulations and evaluate whether your intended use cases are allowed under the EU AI Act, for example. Operationalise legal requirements via templates and collaborate with AI teams to ensure effortless compliance.

Get started

Go to our Help page for more information.

Learn more about Saidot's methodologies, get customer support and step-by-step instructions on using the Saidot platform, and manage your subscription.
Go to Help