Policy Implications of Transformative AI
Paper Session
Friday, Jan. 3, 2025 2:30 PM - 4:30 PM (PST)
- Chair: Basil Halperin, Stanford University
The Rise of Artificially Intelligent Agents
Abstract
Rapid advances in AI imply that machines increasingly behave like artificially intelligent agents (AIAs). This raises fundamental questions about what an economy with AIAs will look like – questions that stretch from the allocation of resources between humans and non-humans to the potential for an existential race. We develop a general equilibrium framework that describes humans and AIAs symmetrically as goal-oriented entities that each absorb scarce resources, contribute to the economy, exhibit defined behavior, and are subject to laws of motion. We describe scenarios in which competition over scarce factors eventually reduces human absorption. In the limit case of an AIA-only economy, AIAs both produce and absorb large quantities of output without any role for humans. Finally, we discuss a number of recent macroeconomic trends that we interpret as harbingers of the rise of AIAs, and we analyze policy interventions to preserve positive human absorption.
Existential Risk and Growth
Abstract
Technology increases consumption but can create or mitigate "existential risk" to human civilization. In a model of endogenous technology regulation, the willingness to sacrifice consumption for safety grows as the value of life rises and the marginal utility of consumption falls. As a result, in a broad class of cases, when technology is optimally regulated, existential risk follows a Kuznets-style inverted U-shape. This suggests an economic foundation for the prominent view that we are living through a once-in-history "time of perils". Though accelerating technological development during such a period may initially increase risk, it typically decreases cumulative risk in the long run. When technology is regulated optimally, therefore, there is typically no tradeoff between technological progress and the probability of existential catastrophe.
Copyright Policy Options for Generative Artificial Intelligence
Abstract
New generative artificial intelligence (AI) models, including large language models and image generators, have created new challenges for copyright policy, as such models may be trained on data that includes copy-protected content. This paper examines this issue from an economics perspective and analyses how different copyright regimes for generative AI will impact the quality of content generated as well as the quality of AI training. A key factor is whether generative AI models are small (with content providers capable of negotiating with AI providers) or large (where negotiations are prohibitive). For small AI models, it is found that giving original content providers copyright protection leads to superior social welfare outcomes compared to having no copyright protection. For large AI models, this comparison is ambiguous and depends on the level of potential harm to original content providers and the importance of content for AI training quality. However, it is demonstrated that an ex-post 'fair use' type mechanism can lead to higher expected social welfare than traditional copyright regimes.
Discussant(s)
Seth Benzell, Chapman University
Daniel Rock, University of Pennsylvania
Gabriel Unger, Stanford University
Zoë Hitzig, Harvard University
JEL Classifications
- O3 - Innovation; Research and Development; Technological Change; Intellectual Property Rights
- L5 - Regulation and Industrial Policy