The European Union (EU) has given final approval to the world's first comprehensive law regulating artificial intelligence (AI).
The Transport, Telecommunications and Energy Council, made up of the 27 EU member states, announced that it had given final approval to the 'AI Act' at a meeting held in Brussels, Belgium, on the 21st (local time).
With all legislative procedures completed, the law will enter into force next month, after publication in the EU's Official Journal.
Regulations on banned AI practices will apply first, six months after the law enters into force, and rules for general-purpose AI (GPAI) models will apply 12 months after that date. Full implementation will begin after mid-2026.
The law's defining feature is its tiered approach: AI uses are divided into four risk levels, including a high-risk tier, and regulated accordingly. The greater an application's potential for negative impact on society, the stricter the rules that apply.
The high-risk tier covers AI used in public services such as healthcare and education, as well as in elections, critical infrastructure, and autonomous driving. AI in these fields must operate under human oversight, and providers must establish a risk management system.
The use of some AI technologies is prohibited within the EU.
Banned practices include social scoring, in which AI assigns individuals scores based on data about their characteristics and behavior, and the untargeted scraping of facial images from the internet or CCTV footage to build databases.
Practices that manipulate human behavior or exploit vulnerabilities are also prohibited.
The use of real-time remote biometric identification systems by law enforcement agencies is, in principle, also restricted. As an exception, such systems may be used to prevent serious crimes such as rape or terrorism, or to search for suspects, but even then prior approval, or in urgent cases subsequent approval, from judicial authorities must be obtained.
A basic 'transparency obligation' applies to companies developing general-purpose AI. This includes, for example, the obligation to comply with EU copyright law and to disclose the content used to train their models.
General-purpose AI models classified as posing systemic risk face additional obligations, such as systematic risk assessment and mitigation and incident reporting.
The EU plans to establish an 'AI Office' within the European Commission to oversee enforcement of the law; violations can draw fines of up to 7% of a company's global annual sales.
After the draft of the AI Act was proposed in 2021, the pace of legislation accelerated as concerns about side effects spread following the emergence of generative AI such as ChatGPT the next year.
As the first comprehensive AI regulation enacted anywhere in the world, the law is expected by some observers to influence how other countries build their own AI regulatory models.
Lee Seong-yeop, director of Korea University's Technology Law Policy Center, said, "This bill contains strong regulations, so in terms of industry growth it would be difficult to apply it directly to Korea, which is in the early stages of its AI industry." He predicted, "Korea will first need to create a basic AI law, and then move forward with more detailed regulations specifying the necessary provisions for each industry sector."