By Kuksung Nam, The Readable
Mar. 14, 2024 8:45PM GMT+9
European Union lawmakers officially approved legislation governing artificial intelligence technologies on Wednesday, establishing the world’s first comprehensive set of rules for the AI sector within the EU market.
The European Parliament announced that the AI Act received overwhelming support, with 523 members voting in favor of the new regulations, 46 opposing, and 49 abstaining. The vote followed a provisional agreement reached last December among the EU’s three main governing bodies—the European Parliament, the Council of the European Union, and the European Commission—to address the potential risks associated with rapidly advancing AI technology.
The legislation now awaits its final formalities before being progressively implemented over the coming years. The AI Act will begin to take effect more than six months after receiving formal endorsement from the Council of the European Union, starting with the prohibition of certain AI technologies deemed to pose an unacceptable risk to society. The new regulation categorizes AI technologies according to their level of risk, from unacceptable to minimal, and bans the use of AI systems designed to manipulate human behavior and circumvent users’ free will.
Brando Benifei, a member of the Parliament co-leading the work on the new AI regulations, expressed his views during the plenary debate on March 12, stating, “We finally have the world’s first binding law on artificial intelligence, aimed at reducing risks, creating opportunities, combatting discrimination, and enhancing transparency. Unacceptable AI practices will be banned in Europe, ensuring the protection of workers’ and citizens’ rights. The establishment of the AI Office will support companies in starting to comply with these rules before they become enforceable.”
While European lawmakers assert that the new AI Act is designed to safeguard fundamental human rights and spur innovation at the forefront of technological advancement, experts have raised concerns about its potential effects on European citizens and companies.
Amnesty International, a human rights organization, expressed concerns regarding the use of biometric identification systems by law enforcement. While the new regulations prohibit the deployment of such technologies in principle, they also allow their use in narrowly defined circumstances, such as searching for a missing individual or preventing a terrorist attack. Amnesty International’s concern underscores a delicate balance: these exceptions could pave the way for overreach, posing a subtle but real threat to personal freedoms and privacy.
Mher Hakobyan, Amnesty International’s Advocacy Advisor on Artificial Intelligence, remarked on March 13, “While adopting the world’s first regulations on the development and deployment of AI technologies is a significant milestone, it’s disheartening that the EU and its 27 member states have opted to prioritize the interests of the industry and law enforcement agencies over the protection of individuals and their human rights.”
Furthermore, there are significant concerns that the EU’s comprehensive regulations might impede AI innovation within the 27-member bloc due to stringent restrictions on development. The new law aims to facilitate the development and training of innovative technologies through regulatory sandboxes and real-world testing. Nevertheless, it also imposes transparency requirements on general-purpose AI systems, such as ChatGPT, which must comply with obligations including adherence to the EU’s copyright law.
Mikołaj Barczentewicz, a senior scholar at the International Center for Law & Economics (ICLE), expressed mixed feelings about the AI Act’s impact, stating, “The AI Act is primarily focused on restricting AI, with minimal emphasis on supporting EU developers. It’s challenging to predict whether the AI Act will ultimately have a significant positive or negative impact.” He added, “Developers are at risk of facing privacy and copyright laws applied in disproportionate ways by shortsighted enforcers who fail to recognize the importance of technological and economic growth.”
nam@thereadable.co
The cover image of this article was designed by Areum Hwang. This article was copyedited by Arthur Gregory Willers.
Kuksung Nam is a journalist for The Readable. She has extensively traversed the globe to cover the latest stories on the cyber threat landscape and has been producing in-depth stories on security and privacy by engaging with industry giants, foreign government officials and experts. Before joining The Readable, Kuksung reported on politics for one of South Korea’s top-five local newspapers, The Kyeongin Ilbo. Her journalistic skills and reportage earned her the coveted Journalists Association of Korea award in 2021 for her essay detailing exclusive stories about the misconduct of a former government official. She holds a Bachelor’s degree in French from Hankuk University of Foreign Studies, a testament to her linguistic capabilities.