EU Begins Landmark AI Law Enforcement as Initial Restrictions Take Effect
The European Union has officially commenced enforcement of its Artificial Intelligence Act (AI Act), a pioneering regulatory framework designed to govern AI technology. As of Sunday, companies must comply with new restrictions or face steep penalties.
Strict AI Regulations Now in Force
First enacted in August 2024, the AI Act aims to mitigate risks associated with AI applications. The initial enforcement phase brings a ban on AI systems deemed an “unacceptable risk.” These include:
- Social scoring systems similar to those used in China.
- Real-time facial recognition and biometric classification based on race, sexual orientation, or other sensitive attributes.
- Manipulative AI tools designed to exploit individuals’ vulnerabilities.
Companies that fail to comply now risk fines of up to €35 million ($35.8 million) or 7% of global annual revenue — whichever is higher. These penalties surpass those outlined in the General Data Protection Regulation (GDPR), which imposes a maximum fine of €20 million or 4% of global turnover.
Balancing Compliance and Innovation
While the AI Act marks a historic move in AI governance, experts acknowledge that full compliance remains a work in progress. Tasos Stampelos, head of EU public policy and government relations at Mozilla, previously stated that while the law is “not perfect,” it is “very much needed.”
“It’s quite important to recognize that the AI Act is predominantly a product safety legislation,” Stampelos noted in a panel discussion. “Right now, compliance will depend on how standards, guidelines, and secondary legislation develop following the act.”
The newly established EU AI Office is playing a key role in refining these guidelines. In December, the office introduced a second-draft code of practice for general-purpose AI models, such as OpenAI’s GPT series. The draft included exemptions for certain open-source models while mandating strict risk assessments for developers of high-impact AI systems.
Shaping AI Policy on a Global Scale?
Despite concerns over regulatory burdens, some industry leaders believe the AI Act could position Europe as a leader in ethical AI development.
“While the U.S. and China compete to build the biggest AI models, Europe is showing leadership in building the most trustworthy ones,” said Diyan Bogdanov, director of engineering intelligence at fintech firm Payhawk.
Bogdanov emphasized that requirements such as bias detection, risk assessments, and human oversight are not barriers to innovation but instead “define what good looks like.”
However, critics worry the stringent rules may stifle AI advancements. Prince Constantijn of the Netherlands expressed concern over Europe’s heavy regulatory focus, stating, “Our ambition seems to be limited to being good regulators.”
What’s Next?
While this marks only the first enforcement phase, full application of the AI Act is still on the horizon. Additional rules, compliance measures, and risk-assessment requirements will continue to take effect in stages as the EU refines its approach.
For AI developers and businesses operating in Europe, the message is clear: compliance is no longer optional, and failure to adhere to these new regulations could come at a steep cost.