Political Economy and Public Policy in the Emerging Age of AI

Paper Session

Friday, Jan. 7, 2022 3:45 PM - 5:45 PM (EST)

Hosted By: Association for Comparative Economic Studies
  • Chair: Noam Yuchtman, London School of Economics

A Model of Behavioral Manipulation

Daron Acemoglu, Massachusetts Institute of Technology
Ali Makhdoumi, Duke University
Azarakhsh Malekian, University of Toronto
Asu Ozdaglar, Massachusetts Institute of Technology

Abstract

The default position among economists and AI researchers is that the vast amounts of data collected by online platforms ultimately benefit users by providing them with more informative advertising, better-targeted products, and more personalized services. This paper raises and explores the possibility that this informational advantage may also enable platforms to engage in behavioral manipulation, which we define as the ability of platforms to modify the behavior of users in a way that is beneficial for the platform and costly for users. Our approach is motivated by the possibility that users cannot fully understand the ways in which big data and AI tools give platforms new capabilities to engage in behavioral manipulation. In our model, platforms dynamically offer one of N products and an associated price to a user, who is uncertain about the quality of the products but can slowly learn about the quality of the goods she consumes. Formally, the user receives a Brownian motion signal about the quality of the goods she consumes. Crucially, the signal received by the user also depends on extraneous factors, which may for a while generate higher signals (the appearance of the good, or various behavioral biases that temporarily make consumers overestimate the quality of some types of goods). Big data and AI enable platforms not only to better estimate the quality of a good but also to learn from the experiences of other similar users which goods will tend to generate higher signals and when. This superior information enables the platform to engage in behavioral manipulation.
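The signal structure described in the abstract can be sketched in a few lines. The snippet below is an illustrative simulation, not the authors' model: it assumes a cumulative signal of the form dX_t = (quality + bias_t) dt + sigma dW_t, where bias_t stands in for the extraneous factors, and a naive user who estimates quality from the signal's average drift. The function names, parameter values, and the drift-estimation rule are all hypothetical choices for illustration.

```python
import random

def simulate_signal(quality, bias_schedule, sigma=1.0, dt=0.01, seed=0):
    """Simulate a cumulative Brownian-motion quality signal:
    dX_t = (quality + bias_t) dt + sigma dW_t,
    where bias_t models extraneous factors (appearance, behavioral biases)
    that the platform, but not the user, can anticipate."""
    rng = random.Random(seed)
    x = 0.0
    path = []
    for bias in bias_schedule:
        x += (quality + bias) * dt + sigma * (dt ** 0.5) * rng.gauss(0, 1)
        path.append(x)
    return path

def naive_quality_estimate(path, dt=0.01):
    """A user's drift estimate X_T / T, which conflates true quality
    with the extraneous bias -- the opening the platform exploits."""
    T = len(path) * dt
    return path[-1] / T
```

With the same noise realization, a good pushed during a period of positive extraneous bias looks strictly better to the naive user than an identical-quality good without that bias, which is the informational wedge the paper builds on.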

Safe Spaces: Shelters or Tribes?

Jean Tirole, Toulouse School of Economics

Abstract

By making our lives more transparent than ever, technology exposes our behavior to an audience that is less like-minded than that in our private sphere. In reaction, either we change our behavior or we incur costs to join safe spaces: reduced use of public spaces and forgone diversity and opportunities when selecting our social graph. This paper provides a framework for thinking about the endogeneity of our private sphere in environments in which issues are divisive (politics, religion, sexuality, antagonistic social views…). It studies the emergence of safe spaces of like-minded individuals and their societal consequences.

AI-tocracy: A Symbiosis of Autocrats and Innovators

Noam Yuchtman, London School of Economics
Martin Beraja, Massachusetts Institute of Technology
Andrew Kao, Harvard University
David Y. Yang, Harvard University

Abstract

Can frontier innovation be promoted and sustained under autocracy? We argue that a symbiotic relationship between autocracy and innovation can arise when two conditions hold: (i) innovative output increases the autocrats' probability of maintaining power; and (ii) autocrats' spending on the innovative output to maintain power generates commercial spillovers and further innovation. We evaluate these two conditions in China's facial recognition AI sector. We gather comprehensive data on firms and government procurement contracts in this sector, as well as on social unrest across China during the last decade. We show that, first, autocrats benefit from AI: local unrest leads to greater government procurement of facial recognition AI, and increased AI procurement suppresses subsequent unrest. Second, the AI sector benefits from autocrats' suppression of unrest: the contracted AI firms innovate more for both government and commercial markets. Taken together, these results challenge the conventional wisdom that there exists a fundamental misalignment between autocracy and technological innovation; they suggest the emergence of a symbiosis, in particular an "AI-tocracy" equilibrium in which AI innovation entrenches autocrats, and autocrats' entrenchment stimulates AI innovation.
JEL Classifications
  • O3 - Innovation; Research and Development; Technological Change; Intellectual Property Rights
  • P0 - General