Introduction

As governments attempt to develop AI governance frameworks, they face a growing risk familiar from other regulatory domains: regulatory capture. In the AI context, capture occurs when private actors with concentrated technical expertise and economic power gain disproportionate influence over the design and enforcement of rules that govern their own activities (Stigler, 1971; Carpenter & Moss, 2014).

This article analyzes how structural conditions in frontier AI development amplify the risk of regulatory capture, particularly as public institutions struggle to keep pace with rapid technical advancement and to match the resources of the firms they oversee.


The Mechanisms of Capture in AI Governance

Expertise Asymmetry

Developing and evaluating frontier AI systems requires highly specialized knowledge. Most public regulatory agencies lack the in-house technical expertise to assess safety claims independently, leaving them reliant on disclosures from the very firms being regulated (Brundage et al., 2020; Crootof & Bowers, 2022).

Information Control

Private AI labs tightly control access to model weights, training data, and system evaluations. External regulators typically do not have legal authority to compel access to these proprietary assets, limiting their capacity to conduct independent assessments (Fjeld et al., 2020).

Policy Agenda Setting

Many leading AI firms actively participate in advisory bodies, standards organizations, and regulatory consultations. While participation provides valuable input, it also allows dominant firms to shape the regulatory agenda, define acceptable risk frameworks, and influence the pace of rule-making (Gleckman, 2018).

Institutional Resource Imbalance

The financial, legal, and lobbying resources of major AI firms often surpass those of public regulators, especially in jurisdictions whose regulatory institutions are still being built. This imbalance risks embedding industry preferences directly into governance frameworks (Carpenter & Moss, 2014).


Why AI Governance Is Structurally Vulnerable

Compared to mature regulatory domains like pharmaceuticals or financial markets, AI governance suffers from several aggravating factors:

  • Speed of Innovation: Technical capabilities evolve far faster than regulatory adaptation cycles.
  • Opaque Risk Profiles: Many AI safety risks emerge only under real-world deployment conditions.
  • Lack of Precedent: No longstanding institutional framework exists for auditing large-scale cognitive systems.

These conditions amplify the potential for private actors to dominate both knowledge production and risk framing, creating a closed loop of self-referential safety claims.


Mitigating Regulatory Capture in AI

To prevent regulatory capture, governance frameworks must adopt institutional counterweights:

  • Independent research funding to support external safety assessments.
  • Mandatory disclosure requirements to ensure regulators have direct access to safety data.
  • Third-party audit mandates with investigatory authority.
  • Public accountability mechanisms to ensure transparency in standard-setting processes.
  • Separation of rule-making from industry advisory influence to prevent conflicts of interest.

As Carpenter and Moss (2014) emphasize, preventing capture requires not simply including more expertise but building independent institutional capacity capable of challenging dominant private narratives.


Conclusion

AI governance is unfolding under structural conditions that magnify the risk of regulatory capture. Without deliberate institutional design, safety regulation may ultimately serve to ratify private risk assessments rather than independently verify them. To preserve public trust, AI safety governance must be structurally resistant to capture through enforceable institutional safeguards.


References

Brundage, M., Avin, S., Clark, J., Toner, H., Eckersley, P., Garfinkel, B., … & Anderson, H. (2020). Toward trustworthy AI development: Mechanisms for supporting verifiable claims. arXiv preprint arXiv:2004.07213.

Carpenter, D., & Moss, D. A. (Eds.). (2014). Preventing regulatory capture: Special interest influence and how to limit it. Cambridge University Press.

Crootof, R., & Bowers, M. (2022). Regulating AI transparency. Yale Journal on Regulation Bulletin, 39, 46–59.

Fjeld, J., Achten, N., Hilligoss, H., Nagy, A., & Srikumar, M. (2020). Principled artificial intelligence: Mapping consensus in ethical and rights-based approaches to principles for AI. Berkman Klein Center for Internet & Society.

Gleckman, H. (2018). Multistakeholder governance and democracy: A global challenge. Routledge.

Stigler, G. J. (1971). The theory of economic regulation. The Bell Journal of Economics and Management Science, 2(1), 3–21.
