AI governance does not operate in a legal vacuum; it operates in a geopolitical one. The accelerating race to dominate frontier AI has fractured global coordination efforts and created new layers of geopolitical asymmetry.

Unlike domains such as nuclear nonproliferation or global trade, where binding multilateral agreements exist, AI is evolving under what can be described as competitive multipolar governance (Kallina et al., 2024). Several power centers are emerging—each pursuing divergent regulatory models shaped by national interests, economic priorities, and ideological commitments.

Three dominant governance poles have emerged:

  1. The United States:
    The U.S. favors flexible regulatory frameworks emphasizing innovation, private sector leadership, and voluntary safety commitments (Strauss et al., 2025; Executive Order 14110, 2023). While federal agencies are slowly building regulatory capacity, much oversight remains informal, self-regulatory, and heavily influenced by industry stakeholders (Deeks, 2021).
  2. The European Union:
    The EU has adopted the most comprehensive binding legislation to date with the AI Act, prioritizing human rights, risk-based regulation, and algorithmic accountability (OECD, 2024). However, enforcement capacity and extraterritorial effectiveness remain open questions (Roberts & Ziosi, 2024).
  3. China:
    China employs a highly centralized regulatory approach combining state control, content governance, and industrial policy. Its AI regulations tightly integrate national security, social stability, and global competitiveness objectives (Kallina et al., 2024).

Beyond these poles, many Global South nations remain norm-takers rather than norm-setters. They face the double asymmetry of technological dependency and limited participation in international standard-setting forums (Bengio et al., 2024). This deepens global digital inequalities and raises concerns of technological colonialism (Roberts & Ziosi, 2024).

Three structural risks emerge from this geopolitical fragmentation:

  • Normative Divergence:
    Competing governance models risk producing incompatible regulatory systems that complicate cross-border deployment, compliance, and enforcement.
  • Power Concentration:
    Regulatory fragmentation advantages countries and firms with dominant compute resources, data access, and capital, amplifying global power asymmetries.
  • Governance Vacuums:
    Key transnational risks—such as model proliferation, misuse by non-state actors, and systemic safety failures—remain unaddressed in the absence of binding multilateral coordination.

The current AI governance trajectory resembles the early stages of other regime complex formations, where multipolar competition outpaces institutional coherence (Raustiala & Victor, 2004). Without deliberate institutional design, global AI governance risks hardening into a fragmented, unstable system driven by national interest calculations rather than collective safety or global equity.


References

Bengio, Y., et al. (2024). AI safety and alignment: Research priorities for frontier models. arXiv preprint.

Deeks, A. S. (2021). Artificial intelligence and the international legal order. American Journal of International Law, 115(3), 443–481.

Executive Order 14110, 3 C.F.R. (2023). Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.

OECD. (2024). AI Governance in Practice: Global Trends and Institutional Gaps. OECD Publishing.

Raustiala, K., & Victor, D. G. (2004). The regime complex for plant genetic resources. International Organization, 58(2), 277–309.

Roberts, H., & Ziosi, M. (2024). Standardizing the Frontier of AI: International Institutions and Technical Norms. Oxford Internet Institute.

Strauss, I., et al. (2025). Real-World Gaps in AI Governance Research. AI and Society, forthcoming.
