The Framework Act on Artificial Intelligence: Legislative Genesis

On 22 January 2026, South Korea's Framework Act on Artificial Intelligence (AI Basic Act) entered into force, making the Republic of Korea the second jurisdiction in the world, after the European Union, to implement a comprehensive statutory framework governing artificial intelligence development, deployment, and oversight. The legislation's enactment was the culmination of a legislative process that had occupied the National Assembly for over three years, during which no fewer than 19 separate AI-related bills were proposed by legislators across both ruling and opposition parties. The final Act consolidated these competing proposals into a unified governance framework, a legislative achievement that reflected both the urgency of the AI policy challenge and the political consensus that Korea's regulatory posture could not remain fragmented while the nation committed tens of trillions of won to AI-led industrial transformation.

The passage of the Framework Act in January 2025, with its one-year implementation period, preceded the announcement of the K-Moonshot initiative by approximately 14 months. The sequencing was deliberate: Deputy Prime Minister and Minister of Science and ICT Bae Kyung-hoon and senior officials at MSIT recognised that a national AI investment programme of K-Moonshot's scale required a governance foundation that would provide regulatory certainty to participating corporations, protect public interests, and signal international credibility to foreign partners and investors. Without comprehensive AI legislation, Korea's 10.1 trillion KRW AI budget risked being deployed into a regulatory vacuum, creating compliance risks for the 161 companies in the K-Moonshot Corporate Partnership and undermining international confidence in Korea's AI governance maturity.

Core Provisions of the Framework Act

The Framework Act establishes several foundational governance mechanisms that directly shape the operating environment for K-Moonshot participants.

Risk-Based Classification System

The Act introduces a risk-based classification system for AI applications, distinguishing between general-purpose AI systems and high-impact AI systems that require enhanced oversight. High-impact AI is defined as systems that make or materially influence decisions affecting individuals' rights, safety, or legal status. This includes AI deployed in healthcare diagnostics, criminal justice, employment screening, financial credit assessment, and critical infrastructure management. For K-Moonshot's 12 national missions, several application domains fall squarely within the high-impact category, most notably Mission 1 (Drug Development Acceleration), where AI-driven clinical trial systems and diagnostic tools interact directly with patient welfare, and Mission 2 (Brain Implant Commercialization), where neurotechnology interfaces raise profound safety and autonomy concerns.

The classification system draws partially on the EU AI Act's risk-based approach but reflects distinctly Korean regulatory traditions. Rather than prescribing detailed technical compliance requirements at the statutory level, the Framework Act delegates specificity to subordinate regulations and ministerial guidelines, providing flexibility for the technology to evolve without requiring frequent legislative amendments. This approach aligns with Korea's broader regulatory philosophy of establishing principle-based frameworks at the statutory level while permitting administrative agility through ministerial directives.

National AI Committee

The Act establishes a National AI Committee (NAIC) under the President's Office, making it the highest-level inter-ministerial coordination body dedicated to AI governance in any OECD nation. The Committee is chaired by a presidential appointee and comprises the heads of relevant ministries, including MSIT, the Ministry of Trade, Industry and Energy (MOTIE), the Ministry of Health and Welfare, the Ministry of Justice, and the Ministry of National Defence, along with designated experts from academia and the private sector.

The NAIC's mandate encompasses four primary functions: first, formulating the National AI Master Plan, a comprehensive five-year strategy document that sets priorities, budgets, and performance targets for national AI development; second, coordinating inter-ministerial policy to prevent regulatory fragmentation across different ministries' AI initiatives; third, reviewing and approving the designation of high-impact AI application categories; and fourth, overseeing the establishment and operation of the AI Safety Research Institute. The Committee's placement under the President's Office, rather than under any single ministry, reflects the cross-cutting nature of AI governance and ensures that AI policy receives attention at the highest level of government.

GOVERNANCE ARCHITECTURE
NATIONAL AI COMMITTEE UNDER PRESIDENT'S OFFICE

Korea's National AI Committee is the highest-level AI governance body in any OECD nation, coordinating across all relevant ministries and directly advising presidential leadership on AI strategy, safety, and international cooperation.

For K-Moonshot execution, the NAIC provides a critical coordination mechanism. The initiative's 12 missions span the jurisdictional boundaries of multiple ministries: MSIT leads AI and ICT missions, MOTIE oversees semiconductor and energy missions, the Ministry of Health and Welfare has a stake in the biomedical missions, and the Ministry of Oceans and Fisheries has jurisdiction over SMR vessel development. Without the NAIC's inter-ministerial authority, K-Moonshot's implementation risks being slowed by bureaucratic turf conflicts and regulatory inconsistencies across ministerial boundaries.

AI Safety Research Institute

The Framework Act mandates the establishment of a dedicated AI Safety Research Institute (AISRI), modelled in part on the United Kingdom's AI Safety Institute and the United States' National Institute of Standards and Technology (NIST) AI Safety programme. The Korean AISRI is tasked with conducting technical safety evaluations of AI systems, developing safety testing methodologies and benchmarks, publishing safety assessment guidelines for high-impact AI, and advising the National AI Committee on emerging risk categories.

The Institute's establishment represents a significant investment in AI safety infrastructure. Korea's approach positions the AISRI as a technically focused body distinct from the policy-making functions of MSIT and the NAIC. This separation of technical assessment from political decision-making mirrors the institutional architecture of established safety regulators such as the Nuclear Safety and Security Commission (NSSC), reflecting a deliberate design choice to insulate safety evaluations from political pressures.

For K-Moonshot participants, the AISRI will serve as the primary technical arbiter of AI safety compliance. Companies developing high-impact AI systems under the 12 missions will be required to submit their systems for safety assessment, and the Institute's evaluation reports will inform regulatory decisions by sectoral authorities. The pharmaceutical AI platforms being developed under Mission 1, the neurotechnology systems under Mission 2, and the physical AI models under Mission 7 are all likely to require AISRI review before deployment.

The 19-Bill Consolidation: Legislative History

The Framework Act's legislative journey reveals the complexity of building political consensus on AI governance. Between 2021 and 2024, lawmakers in the National Assembly introduced 19 separate AI-related bills, reflecting competing visions of how Korea should govern artificial intelligence. Some proposals, primarily from ruling party legislators aligned with the pro-business agenda, emphasised innovation promotion and light-touch regulation. Others, particularly from opposition lawmakers and civil society-aligned legislators, prioritised algorithmic accountability, transparency requirements, and individual rights protections.

The key policy tensions that had to be resolved during the consolidation process included the scope of mandatory impact assessments for AI systems, the extent of transparency obligations for algorithmic decision-making, the liability framework for AI-caused harms, the balance between promoting AI innovation and protecting individual rights, and the governance of foundation models and generative AI. The final Act represents a carefully calibrated compromise: it establishes a clear governance framework with enforceable obligations for high-impact AI while maintaining a comparatively permissive environment for AI research and development, particularly in the context of national strategic programmes such as K-Moonshot.

The consolidation process was accelerated by two external factors. The first was the EU AI Act's finalisation in 2024, which created competitive pressure on Korea to demonstrate equivalent regulatory maturity, particularly given Korea's ambitions for a Korea-EU Digital Partnership. The second was the rapid proliferation of generative AI systems following OpenAI's release of GPT-4 and subsequent models, which made the absence of a comprehensive AI governance framework increasingly untenable as Korean companies, including Naver and Kakao, deployed their own large language models to millions of users.

Public-Private Collaboration Architecture

A distinctive feature of Korea's AI governance approach, and one that directly enables K-Moonshot, is the emphasis on structured public-private collaboration rather than purely top-down regulation. The Framework Act formalises several collaboration mechanisms that are already being operationalised through K-Moonshot.

AI Development Master Plans

The Act requires the government to publish a National AI Development Master Plan every five years, developed through a structured consultation process involving industry representatives, academic researchers, civil society organisations, and international experts. The first Master Plan under the new Act is expected to be published in the second half of 2026 and will effectively serve as the strategic complement to K-Moonshot's operational programme. Where K-Moonshot defines specific missions and corporate partnerships, the Master Plan will establish the broader policy environment, including workforce development targets, international cooperation priorities, and regulatory evolution pathways.

Corporate Partnership Framework

The K-Moonshot Corporate Partnership, which brings together 161 companies (including 88 AI and infrastructure firms), operates within the governance framework established by the Act. Participating companies benefit from streamlined regulatory pathways for K-Moonshot-aligned AI development while accepting enhanced reporting obligations on safety, ethics, and societal impact. This reciprocal arrangement, regulatory facilitation in exchange for governance compliance, represents a model that several other nations are studying as they develop their own national AI programmes.

Sectoral AI Guidelines

The Framework Act authorises MSIT to issue sectoral AI guidelines in coordination with relevant line ministries. These guidelines provide detailed, sector-specific compliance requirements that operationalise the Act's general provisions for particular industries. For K-Moonshot, this means that each of the eight key sectors will eventually have tailored AI governance guidelines addressing the unique risks and opportunities in areas such as advanced biotechnology, future energy, semiconductors, and physical AI.

International Comparative Assessment

Korea's Framework Act occupies a distinctive position in the global landscape of AI governance, sharing some features with the EU AI Act while diverging significantly in approach and emphasis.

Compared with the EU AI Act

The EU AI Act, which entered into force in August 2024 with a phased compliance timeline extending to 2027, establishes the most prescriptive AI regulatory framework in the world. It defines prohibited AI practices (such as social scoring and certain biometric surveillance applications), mandates detailed conformity assessments for high-risk AI systems, and imposes substantial fines for non-compliance (up to 7 percent of global annual turnover). Korea's Framework Act shares the EU's risk-based classification approach but is significantly less prescriptive in its compliance requirements. The Korean approach favours ministerial guidelines and industry self-regulation over detailed statutory mandates, reflecting a pragmatic recognition that Korea, as a nation seeking to build AI capabilities rapidly through K-Moonshot, cannot afford the compliance costs and innovation-dampening effects of an EU-style prescriptive regime.

Compared with the United States

The United States has adopted a primarily executive-action-based approach to AI governance, relying on Executive Orders (notably Executive Order 14110 of October 2023 on safe, secure, and trustworthy AI), NIST frameworks, and sector-specific regulation rather than comprehensive legislation. Korea's legislative approach provides greater regulatory certainty and institutional permanence than the US model, which can shift significantly with changes in presidential administration. For K-Moonshot's corporate partners, the Framework Act's statutory basis provides a more stable planning horizon than would a governance framework dependent on executive discretion.

Compared with China

China has adopted a rapid-iteration regulatory model, issuing a series of targeted regulations on specific AI applications (algorithmic recommendation systems, deep synthesis technology, generative AI services) rather than a single comprehensive law. This approach allows China to regulate quickly but creates a fragmented regulatory landscape. Korea's unified Framework Act, by contrast, provides a coherent governance architecture that reduces compliance complexity for companies operating across multiple AI application domains, a particularly important consideration for K-Moonshot participants whose activities span multiple missions and sectors.

GLOBAL AI GOVERNANCE
KOREA: 2ND COMPREHENSIVE AI LAW

With the Framework Act, Korea joins the EU as one of only two jurisdictions with comprehensive AI legislation, positioning itself as a credible governance leader in the Indo-Pacific and a preferred regulatory partner for international AI cooperation.

Implications for K-Moonshot Execution

The Framework Act's provisions have direct operational implications for every stage of K-Moonshot execution.

Mission Director Appointments

The Act's governance requirements influence the selection and mandate of mission directors for each of the 12 national missions. Mission directors must ensure that AI systems developed under their missions comply with the Act's provisions, particularly for high-impact applications. This adds a governance dimension to what might otherwise be purely technical leadership roles, requiring mission directors with both domain expertise and regulatory awareness.

Data Access and Utilisation

The Act intersects with Korea's data governance framework to define the conditions under which K-Moonshot participants can access and utilise training data. For missions that rely on large-scale data resources, such as Mission 1's use of national health insurance data and Mission 10's requirements for educational and research data, the Framework Act's provisions on data access for AI development interact with the protections of the Personal Information Protection Act (PIPA) to create a nuanced compliance environment that must be navigated carefully.

International Collaboration

The Framework Act's alignment with international AI governance norms, particularly the OECD AI Principles, facilitates Korea-US technology cooperation and Korea-EU digital partnership arrangements. K-Moonshot missions with significant international collaboration components, such as Mission 4 (Fusion Demonstration Reactor) and its ties to ITER, benefit from the Act's international alignment provisions, which establish mutual recognition frameworks for AI safety assessments conducted by partner nations.

Sovereign AI Infrastructure

The Act includes provisions supporting the development of sovereign AI computing infrastructure, directly enabling Mission 7 (General-Purpose Physical AI Models) and the broader AI sovereignty agenda. These provisions authorise the government to designate AI computing resources as critical national infrastructure, enable preferential access for nationally strategic AI workloads, and establish security requirements for AI infrastructure operated within Korean jurisdiction.

Critical Assessment and Outstanding Questions

While the Framework Act represents a significant governance achievement, several questions remain unresolved as implementation proceeds in parallel with K-Moonshot execution.

Enforcement capacity is the most immediate concern. The Act creates new regulatory obligations, but the institutional capacity to monitor and enforce compliance across a rapidly expanding AI ecosystem remains constrained. The AI Safety Research Institute is still in its early operational phase, and its ability to conduct meaningful technical safety evaluations at scale is unproven. If enforcement lags behind deployment, the Act's governance provisions risk becoming nominal rather than substantive.

Regulatory coherence across ministries presents an ongoing challenge. Despite the National AI Committee's coordination mandate, Korea's ministerial structure creates inherent centrifugal pressures. Different ministries may interpret the Act's provisions differently in their sectoral guidelines, creating inconsistencies that K-Moonshot participants must navigate. The regulatory sandbox programme provides a partial safety valve by allowing innovation to proceed under temporary regulatory exemptions, but this mechanism was designed for discrete products and services rather than for the sprawling, interconnected research programmes that characterise K-Moonshot.

Foundation model governance remains an area of active policy development. The Framework Act's provisions were drafted during a period of rapid evolution in generative AI capabilities, and some provisions may require updating as foundation models become more capable and more deeply integrated into K-Moonshot mission workstreams. The governance of open-source AI models, model evaluation standards, and compute governance are all areas where the Act's current provisions are relatively general and will require elaboration through subordinate regulations.

Liability allocation for AI-caused harms is addressed in the Act at a general level but will require detailed judicial and regulatory interpretation as specific cases arise. For K-Moonshot missions developing AI systems in safety-critical domains, such as autonomous robot control in Mission 6 (Humanoid Robots) and clinical decision support in Mission 1, the liability framework has direct implications for corporate risk assessment, insurance requirements, and ultimately the pace of deployment.

Strategic Significance

The Framework Act on Artificial Intelligence is more than a regulatory instrument; it is a strategic asset for Korea's national AI ambitions. By establishing credible, internationally aligned AI governance before committing 10.1 trillion KRW to AI development through K-Moonshot, Korea has created a governance foundation that enhances the programme's legitimacy, reduces regulatory risk for participating companies, and positions Korea as a responsible AI power capable of leading in both innovation and governance.

The Act's success will ultimately be measured not by its statutory provisions alone but by the quality of its implementation. The coming years will test whether Korea's governance institutions can keep pace with the ambition and scale of K-Moonshot, ensuring that one of the world's largest national AI investment programmes delivers its promised benefits while managing the risks inherent in deploying artificial intelligence across every major sector of the economy. For investors, analysts, and policymakers, the Framework Act provides a governance lens through which every K-Moonshot milestone should be evaluated.