On March 13, the European Parliament overwhelmingly passed the European Union (EU) Artificial Intelligence Act, or AI Act, the world's first binding law governing artificial intelligence.
The regulation passed with 523 votes in favor and 46 against. By banning certain applications of AI outright and requiring safety protocols for others deemed high-risk, the new law creates a comprehensive framework for addressing the rapidly advancing problems that AI presents.
Still, the legislation could carry unintended consequences.
“The EU now has more AI regulations than meaningful AI companies,” wrote Anand Sanwal, CEO of CB Insights.
Critics of the AI Act argue that the law will stifle innovation and limit competition.
“Allowing competition to flourish in the AI market will be more beneficial than additional regulation prematurely being imposed,” said Aleksandra Zuchowska, the Competition Policy Manager of the Computer and Communications Industry Association.
French President Emmanuel Macron voiced a similar concern shortly after the legislation was approved late last year.
“We are regulating things that we have not yet produced or invented. It is not a good idea,” Macron said.
At the same time, the AI Act is the first comprehensive, binding measure anywhere aimed at mitigating the risks of AI, such as cyberattacks and misinformation. It takes a human-centric approach, banning applications it deems dangerous, including emotion recognition, social scoring, and manipulation.
The AI Act is expected to become law by May or June, once a few remaining formalities are complete.
Across the world, others have also been eyeing AI. U.S. President Joe Biden signed an executive order on AI in October, and Chinese authorities have adopted interim measures to manage AI in China. Bodies such as the United Nations and the Group of Seven industrialized nations, along with countries including Brazil and Japan, are all moving to regulate AI.