Introduction

The governance of artificial intelligence (AI) is fragmenting across multiple overlapping institutions: national laws, international initiatives, technical standards, and corporate self-governance frameworks. Rather than a single unified regulatory structure, AI oversight increasingly resembles what international relations scholars describe as a regime complex (Raustiala & Victor, 2004). This paper examines how the current global AI governance landscape reflects growing institutional fragmentation, creating regulatory gaps and governance asymmetries.


The Fragmented Governance Architecture

AI governance today unfolds across multiple domains:

National Regulations

The European Union’s AI Act, adopted in 2024, is the most comprehensive binding AI legislation to date. It takes a risk-based approach, classifying systems along a spectrum from prohibited practices to high-risk applications, with separate obligations for general-purpose models (European Union, 2024). In the United States, President Biden’s Executive Order 14110 (2023) directs federal agencies to develop safety standards and establishes reporting requirements for developers of the most capable models, but comprehensive federal legislation remains pending. China has issued regulatory measures focused on generative AI, data security, and content control, reflecting its state-centric governance model (Cyberspace Administration of China, 2023).

International Organizations

The 2019 AI Principles of the Organisation for Economic Co-operation and Development (OECD) have been adopted by over 40 countries and emphasize trustworthy, human-centered AI development (OECD, 2019). The United Nations Educational, Scientific and Cultural Organization (UNESCO) adopted its Recommendation on the Ethics of Artificial Intelligence in 2021, which provides high-level ethical guidance but no enforcement mechanism (UNESCO, 2021).

Technical Standard-Setting Bodies

Organizations such as the Institute of Electrical and Electronics Engineers (IEEE), the International Organization for Standardization (ISO), and the International Electrotechnical Commission (IEC) have developed technical standards addressing transparency, safety, and interoperability in AI systems. However, these standards are often heavily influenced by dominant industrial actors, and their adoption varies across jurisdictions (ISO/IEC JTC 1/SC 42, 2023; IEEE, 2021).

Industry Self-Governance

Major AI labs such as OpenAI, Anthropic, Google DeepMind, and Microsoft have developed voluntary safety frameworks and alignment protocols. These include measures like red teaming, responsible scaling policies, and model evaluations, but remain self-enforced and lack independent oversight (Anthropic, 2023; OpenAI, 2023).

Civil Society and Academic Input

Nonprofit organizations, independent researchers, and academic centers such as the Center for AI Safety, the Partnership on AI, and the Future of Life Institute contribute critical expertise but often face limited access to model internals and corporate safety evaluations.


Regime Complexity and Institutional Gaps

The growing complexity of AI governance creates both flexibility and risk. While multiple overlapping regimes allow for regulatory experimentation, they also introduce:

  • Regulatory gaps: Certain high-risk capabilities may fall outside the jurisdiction of existing national laws.

  • Norm collisions: Diverging definitions of safety, alignment, and fairness complicate interoperability across borders.

  • Governance asymmetry: Private actors retain significant control over safety data, risk evaluation, and disclosure, creating accountability deficits.

As Raustiala and Victor (2004) argue, such regime complexes generate coordination challenges, particularly when no central authority exists to resolve conflicts across overlapping institutions.


Conclusion

AI governance is rapidly evolving into a fragmented, multipolar system of overlapping regimes. While this structure allows for normative diversity, it also risks embedding asymmetries that favor well-resourced states and private actors. The future stability of AI governance will depend on whether enforceable international institutions emerge that can balance innovation with accountability.


References

Anthropic. (2023). Responsible scaling policy. Retrieved from https://www.anthropic.com/policies/responsible-scaling

Cyberspace Administration of China. (2023). Interim measures for the management of generative artificial intelligence services. Beijing: CAC.

European Union. (2024). Regulation (EU) 2024/… on artificial intelligence (AI Act). Brussels: European Commission.

IEEE. (2021). IEEE 7000-2021: Model process for addressing ethical concerns during system design. Piscataway, NJ: Institute of Electrical and Electronics Engineers.

ISO/IEC JTC 1/SC 42. (2023). Artificial intelligence standards portfolio. Geneva: International Organization for Standardization.

OECD. (2019). OECD principles on artificial intelligence. Paris: OECD Publishing.

OpenAI. (2023). Preparedness framework. Retrieved from https://openai.com/preparedness

Raustiala, K., & Victor, D. G. (2004). The regime complex for plant genetic resources. International Organization, 58(2), 277–309.

UNESCO. (2021). Recommendation on the ethics of artificial intelligence. Paris: United Nations Educational, Scientific and Cultural Organization.
