
European Union lawmakers have given final approval to the bloc’s artificial intelligence law, which is expected to become effective later this year

European Union lawmakers have given final approval to the 27-nation bloc’s artificial intelligence law. The move Wednesday put the world-leading set of rules on track to take effect later this year.

By KELVIN CHAN (AP Business Writer)

The European Parliament voted in favor of the Artificial Intelligence Act, which had been proposed five years ago. The AI Act is likely to influence other governments' regulations on the technology.

The AI Act has been approved by European Parliament lawmakers, indicating a significant shift in AI regulation towards being human-centric and promoting technological advances for the benefit of society and individuals.

Dragos Tudorache, a Romanian lawmaker, said the AI Act will steer AI towards being human-controlled while promoting innovation and societal progress.

Major tech companies have generally supported the need for AI regulation, although they have lobbied for rules that benefit them. OpenAI CEO Sam Altman caused a stir by suggesting the company might leave Europe if it couldn't comply with the AI Act, but later clarified there were no plans to do so.

Here is an overview of the world’s first comprehensive set of AI rules:

Like many EU regulations, the AI Act was initially aimed at consumer safety, taking a “risk-based approach” to products or services using artificial intelligence.

The riskier an AI application, the more scrutiny it faces. Most AI systems are expected to be low risk, such as content recommendation systems or spam filters. Companies can choose to follow voluntary requirements and codes of conduct.

High-risk AI uses, such as in medical devices or critical infrastructure like water or electrical networks, face tougher requirements, such as using high-quality data and providing clear information to users.

Some AI uses are banned due to posing an unacceptable risk, such as social scoring systems, certain predictive policing, and emotion recognition systems in schools and workplaces.

Other banned uses include police using AI-powered remote “biometric identification” systems to scan faces in public, except for serious crimes like kidnapping or terrorism.

Initially, the law focused on AI systems performing limited tasks, but the rise of general purpose AI models prompted EU policymakers to update the regulation.

Provisions were added for generative AI models, the technology behind AI chatbot systems that produce novel, lifelike responses, images and more.

Developers of general purpose AI models will need to provide a detailed summary of the data used to train the systems and comply with EU copyright law.

AI-created deepfake images, video or audio of real people, places or events must be marked as artificially manipulated.

The largest and most powerful AI models that present “systemic risks,” including OpenAI’s GPT-4 (its most advanced system) and Google’s Gemini, face additional scrutiny.

The EU is concerned that these powerful AI systems could cause serious accidents or be misused for widespread cyberattacks, and that generative AI could spread “harmful biases” across many applications, affecting numerous people.

Firms providing these systems will need to evaluate and reduce the risks; report any serious incidents, such as malfunctions leading to someone’s death or severe harm to health or property; establish cybersecurity measures; and disclose the energy consumption of their models.

Brussels first proposed AI regulations in 2019, taking on its familiar global role of ratcheting up scrutiny of emerging industries as other governments scramble to keep pace.

In the U.S., President Joe Biden signed a comprehensive executive order on AI in October that is anticipated to be supported by legislation and global agreements. In the meantime, lawmakers in at least seven U.S. states are developing their own AI legislation.

Chinese President Xi Jinping has put forward his Global AI Governance Initiative for fair and safe use of AI, and authorities have issued “interim measures” for managing generative AI, which apply to text, images, audio, video and other content generated for people within China.

Other countries, from Brazil to Japan, as well as global groups such as the United Nations and Group of Seven industrialized nations, are moving to establish AI guardrails.

The AI Act is expected to officially become law by May or June, after a few final formalities, including approval from EU member countries. Provisions will start taking effect in stages, with countries required to ban prohibited AI systems six months after the rules become law.

Regulations for general purpose AI systems like chatbots will start applying a year after the law takes effect. By mid-2026, the complete set of regulations, including requirements for high-risk systems, will be in force.

When it comes to enforcement, each EU country will establish its own AI watchdog, where citizens can file a complaint if they believe they have been the victim of a violation of the rules. Meanwhile, Brussels will create an AI Office tasked with enforcing and supervising the law for general purpose AI systems.

Violations of the AI Act could result in fines of up to 35 million euros ($38 million), or 7% of a company’s global revenue.

This isn’t Brussels’ final word on AI rules, said Italian lawmaker Brando Benifei, co-leader of Parliament’s work on the law. More AI-related legislation may be on the horizon after summer elections, including in areas like AI in the workplace that the new law partly covers, he said.
