In an era where regulatory landscapes are rapidly evolving, companies with a footprint in the European Union must stay vigilant and adaptable. The EU has recently unveiled a comprehensive regulatory framework that imposes fresh obligations on both EU-based and non-EU companies operating within its borders. This client alert is the first in a series designed to decode the complexities of the new EU rules and provide actionable insights for businesses to ensure full compliance.1 Stay tuned as we unravel the details of these pivotal changes and guide you through the steps your business needs to take to align with the EU's heightened regulatory standards.

Scope of the AI Act

The AI Act casts a wide net, encompassing companies that design, develop, or deploy AI systems within the EU. This covers EU-based entities as well as non-EU companies that place AI systems on the EU market or whose systems produce output used in the EU.

Prohibited AI Practices

The Act identifies practices that are off-limits, aiming to prevent any potential misuse of AI that could harm individuals or society:

  • No manipulation or deception: AI systems may not push people into decisions they would not otherwise make.
  • Protection of vulnerable groups: exploiting vulnerabilities linked to, for example, age or disability is prohibited.
  • Ban on social scoring: scoring practices that could result in discrimination or unjust treatment are outlawed.
  • Profiling restrictions: AI may not be used to assess a person's likelihood of engaging in criminal activity based solely on profiling.
  • No indiscriminate scraping: compiling facial recognition databases through untargeted scraping of facial images is strictly forbidden.
  • Contextual limits on emotion recognition: inferring emotions is prohibited in workplaces and educational institutions, subject to narrow exceptions.
  • Restricted biometric categorization: using biometric data to deduce or infer sensitive attributes such as race or religion is largely prohibited.
  • Real-time biometric identification: real-time remote biometric ID in publicly accessible spaces is generally banned, with narrow exceptions.

Mandatory Obligations for High-Risk AI Systems

For AI systems identified as high-risk, the AI Act prescribes a series of stringent requirements aimed at ensuring these technologies are safe and transparent:

  • Risk Management System: Providers must implement robust systems to identify, assess, and mitigate risks throughout an AI system's life cycle.
  • Data Governance: The quality, representativeness, and security of data used in AI systems must be maintained, for example to avoid biases.
  • Human Oversight: There must be mechanisms in place allowing human intervention in AI decision-making, ensuring that the technology remains under control and accountable (see the illustrative sketch after this list).
  • Technical Documentation: Detailed documentation is required to demonstrate compliance with the Act.
  • Transparency and Instructions for Use: Deployers should be provided with clear, accessible information about how the AI system works and its limitations, enhancing understanding and trust.
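
To make the human-oversight obligation more concrete, the sketch below shows one way a deployer might route low-confidence AI decisions to a human reviewer while keeping an audit trail. It is a minimal, hypothetical illustration: the threshold, names, and logging scheme are our assumptions, not requirements prescribed by the Act.

    from dataclasses import dataclass

    @dataclass
    class Decision:
        outcome: str       # the AI system's proposed outcome
        confidence: float  # model confidence score in [0, 1]

    # Illustrative threshold; the AI Act does not prescribe a value.
    REVIEW_THRESHOLD = 0.90

    def apply_with_oversight(decision: Decision, human_review) -> str:
        """Route low-confidence decisions to a human reviewer and log
        every outcome, supporting the documentation obligation."""
        if decision.confidence < REVIEW_THRESHOLD:
            outcome = human_review(decision)  # human confirms or overrides
        else:
            outcome = decision.outcome
        print(f"audit: outcome={outcome} confidence={decision.confidence:.2f}")
        return outcome

    # Example: a borderline decision is escalated to a human.
    result = apply_with_oversight(
        Decision(outcome="approve", confidence=0.72),
        human_review=lambda d: "referred_to_human_review",
    )

The pattern is deliberately simple: the point is that the system records what was decided, by whom, and on what basis, so that oversight and documentation reinforce each other.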

Finally, the AI Act mandates strict transparency requirements for General-Purpose AI (GPAI) models. These entail complying with EU copyright law and publishing clear summaries of the content used for training, to ensure the ethical use of data. For GPAI models posing potential systemic risks, additional safeguards apply, including comprehensive performance evaluations, systemic risk assessments, and incident reporting to proactively manage and mitigate risks.

Furthermore, the Act addresses concerns around "deepfakes" by requiring that all artificially generated or manipulated multimedia content be explicitly labeled. This initiative aims to foster an environment where users can readily distinguish between authentic and altered content, reinforcing accountability and trust in the digital ecosystem.
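
The Act does not prescribe a specific labeling mechanism. As a purely illustrative sketch, the snippet below embeds a machine-readable disclosure in a PNG image's text metadata using the Pillow library; the metadata keys are hypothetical, not a mandated format.

    from PIL import Image
    from PIL.PngImagePlugin import PngInfo

    def save_with_ai_disclosure(image: Image.Image, path: str) -> None:
        """Embed a machine-readable 'AI-generated' disclosure in PNG metadata."""
        meta = PngInfo()
        meta.add_text("ai_generated", "true")        # hypothetical key
        meta.add_text("generator", "example-model")  # illustrative provenance field
        image.save(path, pnginfo=meta)

    # Example: label a synthetic image and read the disclosure back.
    img = Image.new("RGB", (64, 64), "gray")
    save_with_ai_disclosure(img, "synthetic.png")
    print(Image.open("synthetic.png").text)  # {'ai_generated': 'true', ...}

In practice, providers may also look to emerging provenance standards and visible disclosures, depending on the medium and context in which the content is shared.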

Penalties

The stakes are high. Violations of the AI Act can result in significant penalties. For severe violations related to prohibited AI practices, fines can reach €35 million or 7% of annual global turnover, whichever is higher. Companies cannot afford to take compliance lightly.
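
To illustrate the arithmetic, the sketch below computes the upper bound of the fine for prohibited-practice violations. The function name and the sample turnover figure are ours; only the €35 million / 7% cap comes from the Act.

    def max_fine_prohibited_practices(annual_global_turnover_eur: float) -> float:
        """Upper bound of the fine: EUR 35 million or 7% of worldwide
        annual turnover, whichever is higher."""
        return max(35_000_000.0, 0.07 * annual_global_turnover_eur)

    # Example: a company with EUR 1 billion in turnover faces a cap of
    # EUR 70 million, since 7% of 1 billion exceeds the 35 million floor.
    print(max_fine_prohibited_practices(1_000_000_000))  # 70000000.0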

Timeline for AI Act Enforcement

The AI Act is in its final review stages and is expected to be adopted before the current legislative session ends. After formal approval by the Council and publication in the Official Journal, it will enter into force twenty days later. Its application is staggered: the prohibitions will apply after six months, codes of practice after nine, the general-purpose AI rules after twelve, and the obligations for high-risk systems after 36 months. This phased approach gives stakeholders ample time to understand, prepare for, and comply with the new regulatory framework, ensuring a smooth transition into this new era of AI governance.

Footnote

1. Disclaimer: the AI system promised a series of alerts to which we (humans with limited time) cannot commit.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.