South Korea Passes ‘World’s First’ Law to Label AI-Generated Content
South Korea has taken the bull by the horns, implementing a landmark law that requires companies to clearly label AI-generated content.
The 43-article law, known as the Act on the Development of Artificial Intelligence and Establishment of Trust (AI Basic Act), is billed as the “world’s first” AI law to be enforced at a national level.
The AI Basic Act stipulates that text, images, sound, and video produced with generative AI must include invisible digital watermarks, and that realistic deepfakes must carry visible labels. The act applies to Korean companies of all sizes.
According to South Korean officials, the goal is to improve trust and safety in AI while furthering the country’s ambition to rank alongside the US and China as one of the top three global AI powers.
Pushback from startups
Under the legislation, operators of so-called “high-impact AI” systems, such as tools used in medical diagnosis, hiring decisions, and loan approvals, must now conduct risk assessments and document how their systems make decisions.
Critics say that forcing companies to determine whether their systems qualify as high-impact AI is a lengthy, uncertain process.
South Korean tech startups have criticized the AI Basic Act, maintaining that the disclosure and labeling requirements go too far and could slow innovation, according to The Guardian.
Government officials have disputed claims that the law is overly restrictive, saying that 80% to 90% of the legislation aims to promote the AI industry rather than limit it.
Companies that fail to comply with the labeling requirements could face fines of up to about $20,400 (30 million won). However, to allow companies time to comply, the government has set a grace period of at least one year before penalties are enforced.
AI regulation in other countries
Other countries are taking different approaches to regulating AI. The European Union (EU) has begun implementing its AI Act, though its main provisions, including a mandate that generative AI companies disclose copyrighted material used in training datasets, will be phased in through 2027.
In the US, the White House issued an executive order in December 2025 targeting what it describes as “cumbersome” and “excessive” state laws that it maintains are holding back AI development.
The Trump Administration denies that AI poses an existential risk and has argued that state-level regulations are too restrictive.
Meanwhile, New York’s Responsible AI Safety and Education (RAISE) Act has been signed into law, requiring the largest AI developers to follow basic safety rules.