South Korea’s “AI Basic Act” (formally, the Act on the Development of Artificial Intelligence and Establishment of a Foundation of Trust) is scheduled to take effect on January 22, 2026. It is widely viewed as one of the earliest attempts by any country to operationalize a comprehensive, binding AI governance framework at national scale. (National Law Information Center)
1) Starting Motivation
The Act emerged from a practical policy concern: AI was moving faster than society’s ability to define responsibility, safety, and transparency. The rapid diffusion of generative AI heightened public and governmental anxiety around deepfakes, synthetic misinformation, and the use of AI in high-stakes decisions (e.g., education, employment, finance, and public services).
At the same time, global regulatory momentum—especially Europe’s AI governance direction—signaled that “voluntary AI ethics” would not be sufficient. Korea’s strategic intent became clear: build trust mechanisms early so AI adoption can scale without turning into a governance crisis.
2) Legislative Path and Key Milestones
- 2025-01-21: The Act was enacted as Law No. 20676, with a delayed enforcement schedule.
- 2025-11-12: The government published a draft Enforcement Decree (legislative pre-announcement, 입법예고), specifying delegated details such as procedures and operational standards. (Lawmaking public notice)
- 2026-01-22: The Act is set to take effect. (Law.go.kr)
This sequence matters because it separates the “law on paper” from the “law in motion.” The Act sets the framework; the Enforcement Decree and guidelines determine what compliance will actually look like.
3) Current Situation: Law Finalized, Operational Details Still Maturing
Today, the macro picture is stable: the statute is enacted and the enforcement date is fixed. The micro picture remains in flux: implementation details—especially around classification, labeling, and compliance procedures—are being refined through the Enforcement Decree and anticipated guidelines. (Draft Enforcement Decree notice)
In practice, the biggest near-term risk is not regulation itself but interpretive ambiguity: companies need clarity on what counts as high-impact AI, what “reasonable” transparency looks like, and how labeling requirements should be met in real products and services.
4) The Act’s Most Important Requirements (What to Know)
4.1 Risk-based governance with “High-Impact AI” at the center
The Act’s most influential concept is “high-impact AI”—AI systems that may significantly affect life, bodily safety, or fundamental rights. This is a context-driven idea: classification depends heavily on where and how AI is used, not only on model type. (Overview and definition reference: summary guide; statutory basis: law.go.kr)
4.2 Labeling obligations for generative AI outputs (including watermark options)
Draft implementation materials indicate that providers of generative AI products and services may need to label outputs to show they were created using generative AI. Legal commentary on the draft decree notes that labeling may take a human- or machine-readable form, explicitly including invisible watermarking as a possible method. (Kim & Chang insight; Lee & Ko newsletter)
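The commentary above mentions invisible watermarking as one machine-readable labeling option. As a purely illustrative sketch, not anything the Act or decree prescribes, the idea can be shown with a toy least-significant-bit (LSB) scheme over raw pixel bytes; the function names and the "AI-generated" label string here are assumptions for illustration only:

```python
# Toy invisible watermark: hide a UTF-8 label in the least significant
# bit of each pixel byte. Illustrative only; not a production scheme.

def embed_label(pixels: bytes, label: str) -> bytes:
    """Embed a null-terminated UTF-8 label into the LSBs of pixel bytes."""
    payload = label.encode("utf-8") + b"\x00"  # null byte marks end of label
    bits = []
    for byte in payload:
        bits.extend((byte >> i) & 1 for i in range(7, -1, -1))  # MSB first
    if len(bits) > len(pixels):
        raise ValueError("image too small for label")
    out = bytearray(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # overwrite only the lowest bit
    return bytes(out)

def extract_label(pixels: bytes) -> str:
    """Read LSBs eight at a time until the null terminator is found."""
    data = bytearray()
    for i in range(0, len(pixels) - 7, 8):
        byte = 0
        for j in range(8):
            byte = (byte << 1) | (pixels[i + j] & 1)
        if byte == 0:
            break
        data.append(byte)
    return data.decode("utf-8")

img = bytes(range(256)) * 4            # stand-in for raw pixel data
tagged = embed_label(img, "AI-generated")
print(extract_label(tagged))           # -> AI-generated
```

Note that LSB embedding is trivially destroyed by re-encoding or cropping, which illustrates the fragility concerns raised in the ITIF analysis; real deployments would more likely rely on standardized provenance metadata (e.g., C2PA-style manifests) or more robust watermarking schemes.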
4.3 Government-level AI governance infrastructure
The draft Enforcement Decree specifies delegated details such as procedures for the national AI basic plan and the scope of R&D support, signaling that the Act is designed as both a “trust law” and an “industrial policy law.” (Draft decree notice)
5) Meaning and Strategic Implications
5.1 Korea becomes a real-world “test case” for comprehensive AI governance
Many jurisdictions discuss AI principles; fewer attempt to implement a nationwide framework that is enforceable and operational. Korea’s timeline places it among the earliest movers for “real enforcement conditions,” not just policy intent. (Law.go.kr)
5.2 Implementation quality will decide whether the Act accelerates or slows innovation
The Act’s success will depend less on its abstract goals and more on execution: clear definitions, practical compliance pathways for startups, and a balance between transparency requirements and technical feasibility—especially for watermarking and labeling. A policy analysis from ITIF highlights the technical fragility and cross-jurisdiction inconsistency of watermarking/labels, warning against over-reliance on them. (ITIF analysis)
5.3 Spillover effects: adjacent “AI labeling” regulation is also accelerating
Separately from the AI Basic Act, Korea has also announced moves to require labeling for AI-generated advertisements starting in early 2026, reflecting a broader policy direction toward AI-origin disclosure in consumer-facing contexts. (AP News report)
References
- National Law Information Center (law.go.kr) — Act on the Development of Artificial Intelligence and Establishment of a Foundation of Trust (Effective 2026-01-22)
- Korea Lawmaking Center — Draft Enforcement Decree public notice (Nov 12, 2025)
- Kim & Chang — Insight on the Draft Enforcement Decree (labeling; machine/human readable; watermark option)
- Lee & Ko — Newsletter on draft decree (labeling obligations; transparency guidance timing)
- ITIF — Analysis on Korea’s AI policy and watermark/label limitations (Sep 29, 2025)
- AP News — South Korea to require advertisers to label AI-generated ads (Dec 2025)