Top World News

European lawmakers approve world’s first major act to regulate AI

The European Union Parliament on Wednesday endorsed the world’s first major set of regulatory ground rules to govern artificial intelligence, the much-hyped technology at the forefront of tech investment.

The EU reached a provisional political agreement in early December, and the deal was endorsed in the Parliament’s Wednesday session, with 523 votes in favor, 46 against and 49 abstentions.

“Europe is NOW a global standard-setter in AI,” Thierry Breton, the European Commissioner for internal market, wrote on X.

First proposed in 2021, the EU AI Act divides the technology into categories of risk, ranging from “unacceptable” — which would see the technology banned — to high, medium and low risk.

Some EU countries have previously advocated self-regulation over government-led curbs, concerned that heavy-handed regulation could hamper Europe’s ability to compete with Chinese and American companies in the tech sector. Detractors have included Germany and France, which are home to some of Europe’s most promising AI startups.

The EU has been scrambling to keep pace with the consumer impact of tech developments and the market supremacy of key players.

Last week, the bloc brought into force landmark competition legislation set to rein in U.S. giants. Under the Digital Markets Act, the EU can crack down on anti-competitive practices from major tech companies and force them to open up their services in sectors where their dominant position has stifled smaller players and choked freedom of choice for users. Six firms — U.S. titans Alphabet, Amazon, Apple, Meta, Microsoft and China’s ByteDance — have been put on notice as so-called “gatekeepers.”

Concerns have been mounting over the potential for abuse of artificial intelligence, even as heavyweight players like Microsoft, Amazon, Google and chipmaker Nvidia beat the drum for AI investment.


Governments fear the possibility of deepfakes — AI-generated fabrications of events, including fake photos and videos — being deployed in the lead-up to a swathe of key global elections this year.

Some AI players are already self-regulating to curb disinformation. On Tuesday, Google announced it will limit the type of election-related queries that can be asked of its Gemini chatbot, saying it has already implemented the changes in the U.S. and India.

“The AI Act has pushed the development of AI in a direction where humans are in control of the technology, and where the technology will help us leverage new discoveries for economic growth, societal progress, and to unlock human potential,” Dragos Tudorache, a lawmaker who oversaw EU negotiations of the agreement, said on social media on March 12. 

“The AI Act is not the end of the journey, but, rather, the starting point for a new model of governance built around technology. We must now focus our political energy on turning it from the law in the books to the reality on the ground,” he added.
