EU in Final Stages of Work on Regulatory AI Act
Once enacted, the AI Act, the world’s first comprehensive AI law, will regulate the use of artificial intelligence in the EU.
The European Union (EU) is making strides toward enacting the EU AI Act into law by the end of the year. Dragoș Tudorache, chair of the Parliament’s Special Committee on AI and a spokesperson for the EU Parliament on the AI Act, says the negotiations are in their final stages.
“We are in the final stages of the negotiations between the Parliament and the Council, the two co-legislators that work in Europe on putting forward legislation, and we are very close to the finish line. We have two more months in which we have planned political negotiations and my estimate, which is shared by all the parties involved in these negotiations, is that by November, we can close this process, and then we have the final vote in Parliament and Council. That means that by the end of the year, the AI Act will become law.” - Dragoș Tudorache, Chair of the Special Committee on AI
Tudorache is also meeting with U.S. government officials to share information and collaborate in hopes of aligning efforts. Once the AI Act becomes law, the work begins to put the processes and governance in place to oversee it.
Here is a snapshot of what the EU AI Act intends to address:
“Parliament’s priority is to make sure that AI systems used in the EU are safe, transparent, traceable, non-discriminatory and environmentally friendly. AI systems should be overseen by people rather than by automation to prevent harmful outcomes. Parliament also wants to establish a technology-neutral, uniform definition for AI that could be applied to future AI systems.”
Risk-based Approach
The EU AI Act takes a risk-based approach, classifying each application into three categories: unacceptable risk, high risk, and limited, minimal, or no risk.
Unacceptable Risk
AI systems considered a threat to people fall into this category and will be banned. They include:
Cognitive behavioral manipulation of people or specific vulnerable groups; for example, voice-activated toys that encourage dangerous behavior in children.
Social scoring: classifying people based on behavior, socio-economic status, or personal characteristics.
Real-time and remote biometric identification systems, such as facial recognition.
High Risk
AI systems that negatively affect safety or fundamental rights will be considered high risk. All high-risk AI systems will be assessed before being put on the market and throughout their lifecycle.
Limited Risk
Limited-risk AI systems should comply with minimal transparency requirements that allow users to make informed decisions. Users should be made aware when they are interacting with AI; after interacting with an application, they can then decide whether they want to continue using it. This includes AI systems that generate or manipulate image, audio, or video content, for example, deepfakes.
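To make the tiering concrete, here is a minimal, purely illustrative sketch in Python of the three categories and the obligations attached to each. The tier names mirror the Act’s categories as described above; the example applications, the mapping, and every identifier are assumptions made for illustration, not an official classification under the Act.

    # Illustrative sketch of the AI Act's risk tiers; not an official
    # classification. Example applications and all names are assumptions.
    from enum import Enum


    class RiskTier(Enum):
        UNACCEPTABLE = "banned outright"
        HIGH = "assessed before market entry and throughout the lifecycle"
        LIMITED = "subject to minimal transparency requirements"


    # Hypothetical mapping of example applications (drawn from the
    # descriptions above) to a plausible tier.
    EXAMPLE_CLASSIFICATION = {
        "social scoring system": RiskTier.UNACCEPTABLE,
        "real-time facial recognition": RiskTier.UNACCEPTABLE,
        "AI affecting safety or fundamental rights": RiskTier.HIGH,
        "deepfake image generator": RiskTier.LIMITED,
    }


    def describe(application: str) -> str:
        """Return a one-line summary of an application's tier and obligation."""
        tier = EXAMPLE_CLASSIFICATION.get(application)
        if tier is None:
            return f"{application}: not covered in this sketch"
        return f"{application}: {tier.name.lower()} risk ({tier.value})"


    if __name__ == "__main__":
        for app in EXAMPLE_CLASSIFICATION:
            print(describe(app))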
Generative AI
Under the AI Act, generative AI models such as ChatGPT need to adhere to transparency requirements: disclosing that content is generated by AI, designing models so they do not generate illegal content, and publishing summaries of the copyrighted data used for training.
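As a rough illustration of the disclosure requirement, the hypothetical Python helper below labels generated text as AI-produced before it is published. The class, the label wording, and the model name are assumptions made for illustration; the Act does not prescribe this or any other specific mechanism here.

    # Hypothetical illustration of the disclosure idea: label generated
    # content as AI-produced before publishing it. The label wording and
    # this helper are assumptions, not a prescribed compliance mechanism.
    from dataclasses import dataclass


    @dataclass
    class GeneratedContent:
        text: str
        model_name: str

        def with_disclosure(self) -> str:
            """Prepend a plain-language notice that the content is AI-generated."""
            notice = (
                f"[Disclosure: this content was generated by an AI system "
                f"({self.model_name}).]"
            )
            return f"{notice}\n{self.text}"


    if __name__ == "__main__":
        draft = GeneratedContent(
            text="Summary of this week's EU policy developments...",
            model_name="example-llm",
        )
        print(draft.with_disclosure())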
Reactions
Many of the EU’s largest companies wrote a letter warning the European Commission that the draft legislation “would jeopardize Europe’s competitiveness and technological sovereignty without effectively tackling the challenges we are and will be facing.” More than 150 executives from companies including Renault, Heineken, Siemens, and Airbus signed the letter.
The U.S. Chamber of Commerce submitted a letter to the Biden Administration's interagency team focused on the European Union AI Act, outlining key concerns from the U.S. business community. The letter said:
If adopted as drafted, the EU’s regulatory regime could undermine efforts to establish responsible standards for AI and market interoperability. The U.S. Chamber of Commerce has identified several critical concerns that would significantly impact U.S. scientific and technological interests and undermine our industrial superiority.
The Chamber’s letter listed, among its concerns, burdensome targeted requirements for general-purpose AI systems, far-reaching prohibitions limiting AI’s transformative potential, the imposition of unilateral export restrictions, and extensive EU regulator access to companies’ source code.
Leave a Comment
What are your thoughts about the EU’s efforts to regulate AI? Do you see government intervention as necessary (or a necessary evil)?