Advances in artificial intelligence (A.I.) are a double-edged sword. On
the one hand, they may increase economic growth as A.I. augments our
ability to innovate. On the other hand, many experts worry that these
advances entail existential risk: creating a superintelligence misaligned
with human values could lead to catastrophic outcomes, possibly even
human extinction. This paper considers the optimal use of A.I. technology
in the presence of these opportunities and risks. Under what
conditions should we continue the rapid progress of A.I., and under what
conditions should we stop?