As global demand for artificial intelligence (AI) regulation increases, regulatory strategies across regions have begun to diverge. The contrast between the European Union and the United States highlights both the challenges and opportunities of transatlantic cooperation.
The EU has largely adopted a centralized, comprehensive regulatory approach, while the U.S. favors a more decentralized, risk-management-oriented strategy. This divergence reflects deeper philosophical differences in how technological governance is conceived, and it raises fundamental questions about the future of global AI deployment—and the structure of the regulatory market that will govern it.
The Limits of Centralized, Supra-Organizational Regulation
Centralized regulatory models—often compared to nuclear-energy-style oversight by powerful supra-organizations—have faced increasing criticism. Such models tend to be rigid, slow to adapt, and detached from local contexts.
They often fail to account for regional differences in culture, economic structure, and technological maturity. As a result, regulations may be ill-suited to many real-world conditions. Furthermore, large centralized bodies can struggle to keep pace with rapid technological change, inadvertently stifling innovation. Excessive concentration of regulatory power also increases the risks of inefficiency and corruption.
These limitations suggest the need for more distributed and adaptive regulatory frameworks—ones capable of responding to diverse local needs while remaining responsive to technological evolution.
Lessons from Medical Device Regulation
Past global coordination efforts in medical device regulation offer a cautionary parallel. Despite decades of attempts, a unified global regulatory regime never materialized.
Differences in national legal systems, cultural expectations, and economic priorities made consensus on safety, efficacy, and usage conditions nearly impossible. The result was regulatory fragmentation, forcing manufacturers to comply with multiple overlapping regimes—dramatically increasing cost and complexity.
AI regulation faces a similar structural challenge. Expecting a single centralized authority to govern a globally deployed, rapidly evolving technology is neither realistic nor desirable.
Why a Regulatory Market Model Matters
AI evolves faster than governments can reasonably track. Public regulators often lack both the technical expertise and the financial resources required for effective oversight. Resource constraints may also bias regulators toward particular interests, undermining impartial enforcement.
A regulatory market offers an alternative.
In this model, multiple private regulatory institutions emerge, each specializing in particular regulatory objectives, technologies, or regions. Governments define regulatory goals and authorize one or more private regulators whose services regulated entities must adopt.
Manufacturers, in turn, comply with the rules of their chosen regulator. If those regulators are recognized across jurisdictions, compliance enables market access in multiple regions. National sovereignty is preserved, while regulatory burden is reduced.
Origins: Hadfield’s Regulatory Market Theory
The concept of regulatory markets has been advanced most prominently by Gillian K. Hadfield, Professor of Law and Economics at the University of Toronto. From 2018 to 2023, she also served as Senior Policy Advisor at OpenAI.
The framework builds on the regulator-intermediary-target (RIT) model developed by Kenneth Abbott and colleagues, which identifies three core actors:
- Regulated entities (e.g., AI developers or deployers)
- Private regulatory intermediaries, competing to provide compliance and assurance services
- Governments, which define public objectives and oversee the intermediaries
How the Model Works in Practice
Consider a company developing AI products. Governments may be concerned with safety, privacy, or societal impact, but lack the capacity to regulate directly.
Private regulatory institutions fill this gap by developing domain-specific regulatory services. Governments require companies to purchase and comply with these services as a condition of market participation.
Crucially, governments do not abdicate responsibility. They pre-approve regulators and subject them to continuous audits, ensuring alignment with public goals.
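The flow just described can be made concrete with a short sketch. The Python below models the three actors and the two checkpoints, pre-approval of regulators and market access for firms; the class names, objectives, and rules are illustrative assumptions, not features of any real regulatory regime.

```python
from dataclasses import dataclass, field

# A minimal sketch of the three-actor flow described above. All names,
# objectives, and rules here are illustrative assumptions.

@dataclass
class PrivateRegulator:
    name: str
    objectives_covered: set[str]          # e.g. {"safety", "privacy"}

@dataclass
class Government:
    required_objectives: set[str]         # goals the government defines
    accredited: list[PrivateRegulator] = field(default_factory=list)

    def accredit(self, regulator: PrivateRegulator) -> bool:
        # Pre-approval: a regulator is accredited only if its services
        # cover every objective the government has set.
        if self.required_objectives <= regulator.objectives_covered:
            self.accredited.append(regulator)
            return True
        return False

@dataclass
class Firm:
    name: str
    regulator: PrivateRegulator | None = None

    def may_enter_market(self, gov: Government) -> bool:
        # Market access requires buying services from an accredited regulator.
        return self.regulator in gov.accredited

gov = Government(required_objectives={"safety", "privacy"})
reg = PrivateRegulator("AssureAI", {"safety", "privacy", "bias"})
gov.accredit(reg)

firm = Firm("Acme Models", regulator=reg)
print(firm.may_enter_market(gov))  # True: compliance grants access
```

Note that the government never inspects the firm directly in this sketch; it only decides which regulators are acceptable, which is the division of labor the model proposes.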
Risks and Historical Warnings
This model is not without risk.
Failures of regulatory intermediaries played a role in the 2008 financial crisis and in the Boeing 737 MAX disasters. In both cases, inadequate oversight of intermediaries contributed to catastrophic outcomes.
Facial recognition governance offers a contemporary illustration of how this could operate: multiple private regulators could offer competing frameworks, with governments selecting among them according to policy priorities. While such diversity fosters innovation, it also demands strong meta-regulation.
The Cambridge Analytica scandal further demonstrated how AI and data analytics can be weaponized to manipulate social behavior, reinforcing the need for multiple independent regulators focused on different risk dimensions.
Structure of a Regulatory Market
A regulatory market consists of three primary participants:
- Regulated firms
- Private regulatory institutions (for-profit or non-profit)
- Governments, acting as meta-regulators
Governments shift their focus from direct enforcement to goal-setting and oversight. Execution is delegated to accredited private regulators. Effectiveness is verified through ex ante assessments and ongoing audits, using outcome-based metrics.
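A brief sketch of that audit loop under assumed parameters: the incident-rate threshold, the data shape, and the regulator names below are invented for illustration; real outcome metrics would be defined by each government.

```python
# The meta-regulation loop: the government verifies regulator effectiveness
# against an outcome-based metric rather than prescribing methods. The 1%
# incident threshold is an assumption made up for illustration.

def audit_regulators(incidents_by_regulator: dict[str, tuple[int, int]],
                     max_incident_rate: float = 0.01) -> dict[str, bool]:
    """Return, per regulator, whether it keeps its accreditation.

    incidents_by_regulator maps a regulator name to
    (incidents among its licensed firms, number of firms it licenses).
    """
    results = {}
    for name, (incidents, licensed_firms) in incidents_by_regulator.items():
        rate = incidents / licensed_firms if licensed_firms else 0.0
        results[name] = rate <= max_incident_rate
    return results

# Two hypothetical regulators; one exceeds the outcome threshold.
print(audit_regulators({"AssureAI": (1, 200), "LaxCert": (9, 150)}))
# {'AssureAI': True, 'LaxCert': False}
```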
Economic and Global Implications
Vendors benefit from regulatory coordination: compliance with one or a small number of regimes grants access to multiple jurisdictions. Meanwhile, each jurisdiction retains control over its regulatory objectives and choice of regulators.
Global scale introduces additional benefits. Competition among regulators incentivizes investment in regulatory innovation and reduces monopolistic risk. Smaller or less-resourced countries may benefit from spillovers generated by regulatory innovation in wealthier markets.
The Core Risk: Regulatory Capture
The greatest threat to regulatory markets is regulatory capture.
Close relationships between regulators and industry actors can lead to collusion or collective blind spots. Historical examples include the credit rating agencies before the 2008 crisis and the U.S. Federal Aviation Administration's delegation of certification work to Boeing in the 737 MAX case.
Avoiding capture requires sustained investment in oversight and a viable funding model.
Financing Vigilant Regulation
Paolo Bova and colleagues have argued that bounty-based incentives, common in cybersecurity, are insufficient to sustain effective regulatory markets. Instead, governments should treat regulatory incentives as preventive investments.
Funding should continue even when no failures are detected, with clawbacks applied if a risk the regulator failed to identify later materializes. This structure incentivizes continuous vigilance, even among high-performing institutions.
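A toy payoff model makes the incentive visible. The flat fee, the clawback fraction, and the outcome labels below are assumptions chosen for illustration, not parameters from Bova et al. (2023).

```python
# Funding continues every period regardless of whether failures are
# detected, but a missed risk claws back part of everything earned so far.

def regulator_payoff(periods: list[str],
                     fee: float = 100.0,
                     clawback_fraction: float = 0.5) -> float:
    """Cumulative payoff over a sequence of period outcomes.

    Each period is one of:
      "quiet"  - no incident occurred
      "caught" - the regulator flagged a risk before harm
      "missed" - a risk the regulator failed to identify materialized
    """
    payoff = 0.0
    for outcome in periods:
        payoff += fee  # funding continues even with no detected failures
        if outcome == "missed":
            payoff -= clawback_fraction * payoff  # clawback on past fees
    return payoff

# Vigilance pays: a single missed risk halves everything earned so far.
print(regulator_payoff(["quiet", "caught", "quiet"]))            # 300.0
print(regulator_payoff(["quiet", "caught", "quiet", "missed"]))  # 200.0
```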
Given fiscal constraints, alternative funding mechanisms—such as public–private partnerships or targeted technology levies—may be necessary to sustain regulatory markets over time.
References
Jack Clark & Gillian K. Hadfield (2019), Regulatory Markets for AI Safety
https://arxiv.org/abs/2001.00078
Gillian K. Hadfield & Jack Clark (2023), Regulatory Markets: The Future of AI Governance
https://arxiv.org/abs/2304.04914
Alex Engler (2023), The EU and U.S. Diverge on AI Regulation
https://www.brookings.edu/articles/the-eu-and-us-diverge-on-ai-regulation-a-transatlantic-comparison-and-steps-to-alignment/
Jovana Karanovic (2020), What Are Regulatory Markets?
https://srinstitute.utoronto.ca/news/regulatory-markets-for-ai
Gillian K. Hadfield (2020), An AI Regulation Strategy That Could Really Work
https://venturebeat.com/ai/an-ai-regulation-strategy-that-could-really-work/
Kenneth W. Abbott et al. (2017), Theorizing Regulatory Intermediaries: The RIT Model
https://journals.sagepub.com/doi/10.1177/0002716216688272
Paolo Bova et al. (2023), Both Eyes Open: Vigilant Incentives Help Regulatory Markets Improve AI Safety
https://arxiv.org/abs/2303.03174