
Policy Implications of Transformative AI

Paper Session

Friday, Jan. 3, 2025 2:30 PM - 4:30 PM (PST)

Hilton San Francisco Union Square, Yosemite B
Hosted By: American Economic Association
  • Chair: Basil Halperin, Stanford University

Robust Technology Regulation

Andrew Joshua Koh, Massachusetts Institute of Technology
Sivakorn Sanguanmoo, Massachusetts Institute of Technology

Abstract

We analyze how uncertain technologies should be robustly regulated. An agent develops a new technology and, while privately learning about its harms and benefits, continually chooses whether to continue development. A principal, uncertain about what the agent might learn, chooses among dynamic mechanisms (e.g., paths of taxes or subsidies) to influence the agent’s choices in different states. We show that learning-robust mechanisms, those which deliver the highest payoff guarantee across all learning processes, are simple and resemble ‘regulatory sandboxes’: a zero marginal tax on R&D keeps the agent maximally sensitive to new information up to a hard deadline, after which the agent becomes maximally insensitive. Robustness is important: we characterize the worst-case learning process under non-robust mechanisms and show that it induces growing but weak optimism, which can deliver unboundedly poor principal payoffs; hard deadlines safeguard against this. If the regulator also learns, adaptive hard deadlines are robustly optimal, which highlights the importance of expertise in regulation.
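
As a rough illustration (a minimal sketch, not the authors' mechanism), the Python fragment below shows the shape of a 'regulatory sandbox': a zero marginal tax on development up to a hard deadline and a prohibitive one after it. The deadline, the belief path, and the drift rate are hypothetical choices made only for illustration.

    # Minimal sketch of a "regulatory sandbox" mechanism: zero marginal tax
    # on development until a hard deadline, prohibitive afterwards. An
    # illustration, not the paper's model; all parameters are hypothetical.

    def sandbox_tax(t: float, deadline: float) -> float:
        """Marginal tax on continued development at time t."""
        return 0.0 if t < deadline else float("inf")

    def agent_continues(perceived_net_benefit: float, t: float, deadline: float) -> bool:
        """The agent develops iff perceived net benefit exceeds the marginal tax."""
        return perceived_net_benefit > sandbox_tax(t, deadline)

    # A worst-case-style belief path ("growing but weak optimism"): the agent
    # drifts upward and never stops on its own, but the deadline cuts it off.
    belief, deadline = 0.05, 10
    for t in range(15):
        print(t, agent_continues(belief, t, deadline))
        belief += 0.01  # slow upward drift in perceived net benefit

In this toy run the agent develops in every period before the deadline regardless of how weak its optimism is, and stops in every period after it, which is the "maximally sensitive, then maximally insensitive" shape the abstract describes.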

The Rise of Artificially Intelligent Agents

Anton Korinek, University of Virginia

Abstract

Rapid advances in AI imply that machines increasingly behave like artificially intelligent agents (AIAs). This raises fundamental questions about what an economy with AIAs will look like, questions that stretch from the allocation of resources between humans and non-humans to the potential for an existential race. We develop a general equilibrium framework that describes humans and AIAs symmetrically as goal-oriented entities that each absorb scarce resources, contribute to the economy, exhibit defined behavior, and are subject to laws of motion. We describe scenarios in which competition over scarce factors eventually reduces human absorption. In the limit case of an AIA-only economy, AIAs both produce and absorb large quantities of output without any role for humans. Finally, we discuss a number of recent macroeconomic trends that we interpret as harbingers of the rise of AIAs, and we analyze policy interventions to preserve positive human absorption.
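
A minimal sketch of the competition-over-scarce-factors dynamic the abstract describes (not the paper's general equilibrium framework; the allocation rule, growth rates, and initial levels are hypothetical): when a fixed resource is split in proportion to productivity and AIA productivity grows faster, the human share of absorption shrinks toward zero.

    # Toy illustration, not the paper's model: humans and AIAs compete for a
    # fixed resource, allocated in proportion to productivity. Growth rates
    # and initial productivity levels are hypothetical.

    def shares(human_prod: float, aia_prod: float) -> tuple[float, float]:
        """Split a fixed endowment in proportion to productivity."""
        total = human_prod + aia_prod
        return human_prod / total, aia_prod / total

    human_prod, aia_prod = 1.0, 0.5
    for year in range(10):
        h, a = shares(human_prod, aia_prod)
        print(f"year {year}: human absorption {h:.2f}, AIA absorption {a:.2f}")
        aia_prod *= 1.5  # AIAs improve faster than humans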

Existential Risk and Growth

Phil Trammell, Digital Economy Lab, Stanford University
Leopold Aschenbrenner, Situational Awareness LP

Abstract

Technology increases consumption but can create or mitigate "existential risk" to human civilization. In a model of endogenous technology regulation, the willingness to sacrifice consumption for safety grows as the value of life rises and the marginal utility of consumption falls. As a result, in a broad class of cases, when technology is optimally regulated, existential risk follows a Kuznets-style inverted U-shape. This suggests an economic foundation for the prominent view that we are living through a once-in-history "time of perils". Though accelerating technological development during such a period may initially increase risk, it typically decreases cumulative risk in the long run. When technology is regulated optimally, therefore, there is typically no tradeoff between technological progress and the probability of existential catastrophe.
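
The inverted-U logic can be sketched in a few lines of Python (a hypothetical parameterization, not the paper's model): risk scales with the technology level, the chosen safety share of output is zero while society is poor and rises as income (and with it the value of life) grows, so the hazard first rises and then falls.

    # Toy inverted-U hazard, not the paper's model. Risk scales with the
    # technology level; the safety share is zero while society is poor and
    # rises toward one as income grows. All functional forms and constants
    # are illustrative assumptions.

    def safety_share(income: float, c: float = 4.0) -> float:
        """Zero for poor societies, rising toward 1 as income grows."""
        return max(0.0, 1.0 - c / income**2)

    def hazard(tech: float, share: float, scale: float = 0.01) -> float:
        """Existential risk per period: grows with tech, damped by safety."""
        return scale * tech * (1.0 - share) ** 2

    tech = income = 1.0
    for t in range(25):
        s = safety_share(income)
        print(f"t={t:2d} income={income:6.2f} hazard={hazard(tech, s):.4f}")
        tech *= 1.1    # technology (and hence raw risk) grows
        income *= 1.1  # income grows alongside it

In this toy run the hazard peaks around the period where the safety share first turns positive and declines thereafter, mirroring the "time of perils" shape.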

Copyright Policy Options for Generative Artificial Intelligence

Joshua Gans, University of Toronto

Abstract

New generative artificial intelligence (AI) models, including large language models and image generators, have created new challenges for copyright policy, as such models may be trained on data that includes copyright-protected content. This paper examines the issue from an economics perspective and analyzes how different copyright regimes for generative AI affect both the quality of generated content and the quality of AI training. A key factor is whether generative AI models are small (with content providers capable of negotiating with AI providers) or large (where negotiations are prohibitive). For small AI models, it is found that giving original content providers copyright protection leads to superior social welfare outcomes compared to having no copyright protection. For large AI models, this comparison is ambiguous and depends on the level of potential harm to original content providers and the importance of content for AI training quality. However, it is demonstrated that an ex-post ‘fair use’-type mechanism can lead to higher expected social welfare than traditional copyright regimes.
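
For intuition on the large-model case (where bargaining is prohibitive), a stylized comparison is sketched below, with made-up payoffs rather than anything from the paper: no copyright realizes training value minus harm, full copyright forecloses training entirely, and an ex-post fair-use test permits training only when value exceeds harm.

    # Stylized regime comparison for a "large" AI model where negotiation
    # with content providers is prohibitive. An illustration only: payoffs,
    # harms, and the decision rule are hypothetical, not from the paper.

    def welfare(training_value: float, provider_harm: float, regime: str) -> float:
        if regime == "no_copyright":
            return training_value - provider_harm   # free training; providers bear the harm
        if regime == "full_copyright":
            return 0.0                              # no deal is possible, so no training
        if regime == "ex_post_fair_use":
            # A court allows the use only when its value exceeds the harm.
            return max(training_value - provider_harm, 0.0)
        raise ValueError(regime)

    for v, h in [(3.0, 1.0), (1.0, 3.0)]:  # (training value, harm) scenarios
        for regime in ("no_copyright", "full_copyright", "ex_post_fair_use"):
            print(f"v={v} h={h} {regime}: {welfare(v, h, regime):+.1f}")

In this toy the ex-post rule weakly dominates both fixed regimes across scenarios, which is the flavor of the abstract's final result.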

Discussant(s)
  • Seth Benzell, Chapman University
  • Daniel Rock, University of Pennsylvania
  • Gabriel Unger, Stanford University
  • Zoë Hitzig, Harvard University
JEL Classifications
  • O3 - Innovation; Research and Development; Technological Change; Intellectual Property Rights
  • L5 - Regulation and Industrial Policy