
Overview of the EU AI Act

  • Writer: David J. Kinsella
  • Feb 2
  • 4 min read


Artificial intelligence (AI) is impacting many aspects of daily life and business. As AI systems become more powerful and widespread, governments face the challenge of ensuring these technologies are safe, ethical, and respect fundamental rights. The European Union (EU) has taken a significant step by adopting the EU AI Act, a comprehensive legal framework designed to regulate AI across member states. Here, we will break down the key elements of the EU AI Act and explain what the legislation means for developers, businesses, and users.


What is the EU AI Act?


The EU AI Act is a regulation, originally proposed by the European Commission, that creates rules for the development, deployment, and use of AI systems within the EU. It aims to balance innovation with safety and respect for fundamental rights. The EU AI Act classifies AI applications based on risk levels, with corresponding requirements for each level. The underlying objective behind the legislation is to prevent harm while encouraging trustworthy AI. The EU AI Act came into force on 1 August 2024 and has a phased application schedule, with bans on prohibited AI practices effective on 2 February 2025 and most provisions becoming applicable on 2 August 2026.


Risk-Based Classification of AI Technologies


One of the core features of the EU AI Act is its risk-based approach. AI technologies are categorized into four groups:


  1. Unacceptable risk: AI technologies that pose a clear threat to safety, livelihoods, or rights are banned. Examples include social scoring by governments or systems that manipulate human behavior in harmful ways.

  2. High risk: AI technologies associated with high risk require strict compliance with the regulations. Examples include AI used in critical infrastructure, education, employment, law enforcement, and biometric identification.

  3. Limited risk: AI technologies subject to specific transparency obligations. For example, chatbots must inform users that they are interacting with AI.

  4. Minimal or no risk: Most AI applications fall within this category and therefore face no additional legal requirements.


The above classification helps to focus regulatory efforts where they matter most, ensuring safety without stifling innovation.
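To make the tiered approach concrete, the four categories above can be sketched as a simple lookup. This is purely illustrative: the example use cases and the `classify_risk` helper are hypothetical simplifications, not a legal classification under the Act.

```python
# Illustrative mapping of example AI use cases to the Act's four risk tiers.
# The entries below are simplified examples, not a legal determination.
RISK_TIERS = {
    "social_scoring_by_government": "unacceptable",  # banned outright
    "biometric_identification": "high",              # strict compliance
    "recruitment_screening": "high",
    "chatbot": "limited",                            # transparency duties
    "spam_filter": "minimal",                        # no extra obligations
}

def classify_risk(use_case: str) -> str:
    """Return the illustrative risk tier for a use case, defaulting to
    'minimal' since most AI applications fall into that category."""
    return RISK_TIERS.get(use_case, "minimal")

print(classify_risk("chatbot"))       # limited
print(classify_risk("weather_app"))   # minimal
```

In practice, classification depends on the specific context of deployment and the Act's annexes, so any real assessment requires case-by-case legal analysis rather than a static lookup like this.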


Requirements for High-Risk AI Technologies


High-risk AI technologies face the most stringent rules under the EU AI Act, including:


  • Risk management: Developers must identify and mitigate risks throughout the AI technology's lifecycle.

  • Data quality: Training data must be relevant, representative, and free from bias to avoid discriminatory outcomes.

  • Documentation and transparency: Providers must keep detailed technical documentation and provide clear information to users.

  • Human oversight: Systems must allow human intervention to prevent or minimize risks.

  • Robustness and accuracy: AI must perform reliably under normal and unexpected conditions.


For example, AI technology used to assess job applicants must ensure fairness by avoiding bias against certain groups, provide clear explanations of decisions, and allow human review.


Transparency and User Information


The EU AI Act requires transparency for certain AI technologies, especially those interacting directly with people. Users must be informed when they are engaging with AI rather than a human. This rule applies to chatbots, deepfakes, and other AI-generated content. Transparency helps users make informed decisions and builds trust in AI technologies.


Obligations for Providers and Users


The EU AI Act outlines responsibilities for both AI providers and users:


  • Providers must ensure compliance with the Act’s requirements before introducing their AI technologies to the market. They must conduct conformity assessments, register high-risk AI technologies in an EU database, and subsequently monitor their technologies for continued compliance.

  • Users must use AI technologies according to instructions and are required to report any serious incidents or malfunctions.


This shared responsibility model encourages accountability throughout the AI lifecycle.


Enforcement and Penalties


To ensure compliance, the EU AI Act establishes enforcement mechanisms, including market surveillance and penalties. Non-compliance can lead to fines of up to 7% of a company’s global annual turnover or 35 million euros, whichever is higher. These penalties emphasize the EU’s commitment to safe and ethical AI.
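The "whichever is higher" ceiling for the most serious infringements can be expressed as a simple maximum. A minimal sketch (the function name and the assumption that turnover is supplied in euros are ours, and real fines are set case by case, up to this cap):

```python
def max_fine_ceiling(global_annual_turnover_eur: float) -> float:
    """Illustrative ceiling for the most serious infringements under the
    EU AI Act: up to EUR 35 million or 7% of worldwide annual turnover,
    whichever is higher."""
    return max(35_000_000.0, 0.07 * global_annual_turnover_eur)

# A company with EUR 1 billion turnover: 7% (EUR 70 million) exceeds
# the EUR 35 million floor, so the turnover-based figure applies.
print(max_fine_ceiling(1_000_000_000))  # 70000000.0

# A smaller company with EUR 100 million turnover: 7% is only
# EUR 7 million, so the EUR 35 million figure is the ceiling.
print(max_fine_ceiling(100_000_000))    # 35000000.0
```

Note that the Act sets lower ceilings for less serious infringements; the figures above apply to violations of the prohibited-practices rules.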


Impact on Innovation and Industry


The EU AI Act supports innovation by providing legal clarity. It encourages the creation and deployment of trustworthy AI that respects European values, such as privacy, non-discrimination, and human dignity. As a result, companies can develop AI technologies with confidence, in line with transparent regulatory requirements.


For start-ups and small businesses, the EU AI Act includes provisions aimed at reducing administrative burdens, including simplified conformity assessments for lower-risk AI technologies.


Examples of AI Technologies


  • Healthcare: AI tools diagnosing diseases or recommending treatments are considered high-risk and must therefore meet strict standards.

  • Transportation: Autonomous vehicles also fall under high-risk AI and must adhere to safety and transparency requirements.

  • Recruitment: AI screening job candidates must avoid bias and provide explanations of the decision-making process.

  • Law enforcement: Facial recognition systems require careful review due to privacy concerns.


Implications of the EU AI Act


In line with the EU AI Act, businesses developing or implementing AI technologies within the EU should:


  • Conduct risk assessments of their AI technologies;

  • Review data sets for bias and quality;

  • Implement relevant documentation and transparency measures;

  • Establish processes to allow for human review;

  • Monitor regulatory updates and guidance.


Businesses involved in the development and deployment of AI-based technologies are advised to adopt a considered compliance strategy under the EU AI Act, positioning themselves as leaders in responsible AI.


Disclaimer: Content is not intended to, and does not constitute, legal advice, and no attorney-client relationship is formed.
