South Korea has launched an effort to regulate AI, introducing one of the most comprehensive laws anywhere in the world that could serve as a model for other countries, but the new law has already faced opposition.
The law, which forces companies to label AI-generated content, has been criticized by local tech startups, who say it goes too far, and by civil society groups, who say it doesn’t go far enough.
The AI Basic Act, which took effect on Thursday last week, comes amid growing global unease over artificially created media and automated decision-making, as governments struggle to keep pace with rapidly advancing technologies.
The act obliges companies providing AI services to:
- Add invisible digital watermarks to clearly artificial output such as cartoons or artwork, and visible labels to realistic deepfakes.
- Conduct risk assessments and document how decisions are made for “high-impact AI”, including systems used for medical diagnosis, hiring and loan approval. Systems may fall outside the scope if a human makes the final decision.
- Submit safety reports for extremely powerful AI models, although the bar has been set so high that government officials admit no models worldwide currently meet it.
Companies that violate the rules face fines of up to 30 million won (£15,000), but the government has promised a grace period of at least a year before fines are imposed.
The law is believed to be the “world’s first” of its kind to be fully implemented by a country, and is central to South Korea’s ambition to become one of the world’s three leading AI powers alongside the US and China. Government officials say the law is 80-90% focused on promoting the industry rather than restricting it.
Alice Oh, a professor of computer science at Korea Advanced Institute of Science and Technology (KAIST), said that although the law was not perfect, it was designed to develop the industry without stifling innovation.
However, a December survey by the Startup Alliance found that 98% of AI startups were not ready to comply. Its co-chief, Lim Jung-wook, said disappointment was widespread. “There’s a little bit of resentment,” he said. “Why do we have to be the first to do this?”
Companies must determine for themselves whether their systems qualify as high-impact AI, a process critics say is lengthy and creates uncertainty.
They also warn of a competitive imbalance: all Korean companies face the regulation regardless of size, while only foreign companies that meet certain thresholds – such as Google and OpenAI – must comply.
The push for regulation has unfolded against a uniquely charged domestic backdrop that has left civil society groups concerned that the law does not go far enough.
According to a 2023 report from US-based identity protection firm Security Hero, 53% of all global deepfake pornography victims are in South Korea. In August 2024, an investigation uncovered a massive network of Telegram chatrooms that created and distributed AI-generated sexual imagery of women and girls, foreshadowing the scandal that would later erupt around Elon Musk’s Grok chatbot.
However, the origins of the law predate this crisis: the first AI-related bill was presented in parliament in July 2020, but it repeatedly stalled over provisions that critics said prioritized industry interests over public safety.
Civil society groups say the new law offers limited protection to people harmed by AI systems.
Four organizations, including Minbyun, a group of human rights lawyers, issued a joint statement the day after its implementation, arguing that the law has almost no provisions to protect citizens from AI risks.
The groups noted that while the law sets out protections for “users”, those users are the hospitals, financial companies and public institutions that deploy AI systems, not the people affected by them. They argued that the law designates no prohibited AI systems, and that the exemption for “human participation” creates significant loopholes.
The country’s human rights commission has criticized the enforcement decree for failing to clearly define high-impact AI, noting that those most likely to suffer rights violations fall into regulatory blind spots.
In a statement, the Ministry of Science and ICT said it hoped the law would “remove legal uncertainty” and create “a healthy and safe domestic AI ecosystem”, adding that it would continue to clarify the rules through revised guidelines.
Experts said South Korea deliberately chose a different path from other jurisdictions.
Unlike the EU’s strict risk-based regulatory model, the US and Britain’s largely sector-specific, market-driven approach, or China’s combination of state-led industrial policy and detailed service-specific regulation, South Korea has opted for a more flexible, principles-based framework, said Melissa Hyesun Yoon, a law professor at Hanyang University who specializes in AI governance.
This approach focuses on what Yoon describes as “trust-based promotion and regulation”.
“Korea’s framework will serve as a useful reference point in global AI governance discussions,” she said.
