Isaiah Andrews, Clark Medalist 2021


American Economic Association Honors and Awards Committee

April 2021

Isaiah Andrews’ contributions to econometric theory and empirical practice have improved the quality, credibility, and communication of quantitative research in economics. He is playing a key role in the recent turn of econometrics back toward the study of the most important problems faced in empirical research. Andrews’ contributions fall in three main areas. The first provides ways to make more transparent the sensitivity of parameter estimates to the features of the data used to estimate them. The second concerns publication bias and the related problem of drawing inferences from estimates that have been selected. The third concerns estimation and inference in the presence of weak identification.

Tools to characterize the sensitivity of parameter estimates to estimation moments and model assumptions

In “Measuring the Sensitivity of Parameter Estimates to Estimation Moments” (with Matthew Gentzkow and Jesse M. Shapiro, QJE, 2017), Andrews shows how to quantify the sensitivity of parameter estimates to the assumptions that determine the relationship between the estimator and features of the data. The first ingredient is the “sensitivity matrix”, which determines how the parameter of interest changes as the true model deviates from the assumed model. The second ingredient is the size of the departure. In the case of least squares regression, these are the ingredients of the omitted variables bias formula. Andrews and his co-authors generalize the formula to a broad class of models and demonstrate its usefulness with compelling empirical examples. The methods are becoming a standard part of the toolkit of applied researchers.
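
To fix ideas, here is a stylized rendering in generic minimum-distance/GMM notation (ours, not necessarily the paper’s). If the estimator solves

\[
\hat{\theta} \;=\; \arg\min_{\theta}\; \hat{g}(\theta)'\, W\, \hat{g}(\theta),
\]

with Jacobian \(G = \partial \mathbb{E}[\hat{g}(\theta)]/\partial \theta'\) at the true parameter, then the sensitivity matrix is \(\Lambda = -(G' W G)^{-1} G' W\), and a small violation of the assumptions that shifts the moments by \(\delta\) moves the estimate, to first order, by \(\Lambda \delta\). In the least squares case, applying \(\Lambda\) to the shift in the moment condition \(\mathbb{E}[x u] = 0\) induced by an omitted variable reproduces the familiar omitted variables bias formula.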

Equally important is “On the Informativeness of Descriptive Statistics for Structural Estimates” (with Matthew Gentzkow and Jesse M. Shapiro, Econometrica, 2020). Researchers informally discuss which statistics drive the sampling distribution of their estimator, but whether those claims are correct is often unclear. Researchers also attempt to validate structural models by comparing model-based predictions with descriptive statistics (means, regression coefficients, etc.). However, it is often hard to know whether the comparisons researchers make are informative about the suitability of the model for counterfactual predictions.

Andrews and his co-authors provide an intuitive and theoretically grounded way to quantify the degree to which particular statistics discipline model estimates of a particular object of interest, such as the effect of a counterfactual policy. Their approach can be applied to a broad class of models, and the empirical examples in the paper show how.
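
Roughly speaking, and in notation that may differ from the paper’s: if \(\hat{c}\) denotes the model-based estimate of the counterfactual of interest and \(\hat{\gamma}\) the vector of descriptive statistics, with joint asymptotic variance

\[
\begin{pmatrix} \Sigma_{cc} & \Sigma_{c\gamma} \\ \Sigma_{\gamma c} & \Sigma_{\gamma\gamma} \end{pmatrix},
\]

then the informativeness of \(\hat{\gamma}\) for \(\hat{c}\) can be summarized by the asymptotic R-squared \(\Delta = \Sigma_{c\gamma} \Sigma_{\gamma\gamma}^{-1} \Sigma_{\gamma c} / \Sigma_{cc}\), the share of the variability in the estimated counterfactual accounted for by the descriptive statistics. A value near one indicates that the reported statistics largely pin down the estimate; a value near zero indicates that other, undocumented features of the data are doing the work.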

The two papers hold the promise of fundamentally changing the way economists assess and communicate their results.

Publication Bias and Inference after Selection

“Identification of and Correction for Publication Bias” (with Maximilian Kasy, AER, 2019) is a major contribution to the growing literature on publication bias. The paper proposes models of selective reporting behavior and selection into publication. It shows that replication studies with the same design as the published studies can be used to recover the distribution of estimates that would be observed absent selection. This distribution, along with the distribution of the estimates in the published studies, can be used to identify the selection process and correct for publication bias. The same idea can be used in a meta-study that includes both published and unpublished studies. The paper has important implications for empirical practice and for rules about publishing studies.
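
In stylized form, suppose a normalized estimate \(Z\) is drawn from a latent density \(f\) and is published with probability \(p(Z)\). The density of published estimates is then

\[
f_{\text{pub}}(z) \;=\; \frac{p(z)\, f(z)}{\int p(\tilde{z})\, f(\tilde{z})\, d\tilde{z}},
\]

so publication reweights the latent distribution by the selection function \(p\). Replication estimates of the same studies are not reweighted in this way, which is why comparing the two distributions identifies \(p\) (up to scale) and permits bias-corrected estimates and confidence intervals. (This is a simplified sketch in our notation; the paper’s model also allows for heterogeneous true effects across studies.)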

Andrews’ recent working paper, “Inference on Winners” (with Toru Kitagawa and Adam McCloskey, 2020), shows how to conduct inference about a parameter that is chosen as the “best” out of a set of candidates, where the choice must be based on sample estimates. For example, one might run an experiment with multiple treatments with the aim of identifying the treatment to recommend to a policy maker. Choosing the best treatment based on the treatment effect estimates, which contain sampling error, leads to bias from a “winner’s curse” problem. Andrews and his co-authors provide estimators that correct for this bias.
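
The mechanical source of the bias is easy to see in a small simulation (a stylized illustration of the problem, not the paper’s corrected procedures):

    import numpy as np

    rng = np.random.default_rng(0)
    K = 10             # number of candidate treatments
    sigma = 1.0        # standard error of each treatment effect estimate
    true_effects = np.zeros(K)   # every treatment is truly ineffective
    n_sim = 100_000

    selection_gap = np.empty(n_sim)
    for s in range(n_sim):
        estimates = true_effects + sigma * rng.standard_normal(K)
        winner = np.argmax(estimates)            # recommend the apparent best
        selection_gap[s] = estimates[winner] - true_effects[winner]

    print(f"average bias of the selected estimate: {selection_gap.mean():.2f}")
    # With ten equally ineffective treatments, the estimate attached to the
    # "winner" overstates its true effect by about 1.5 standard errors on
    # average, even though each individual estimate is unbiased.

The procedures in the paper are designed to deliver estimates and confidence intervals that remain valid after exactly this kind of data-driven selection.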

Inference with Weak Identification

In a series of papers, Andrews has provided better ways to perform statistical inference in a broad class of nonlinear models when weak identification is a possibility. He has also provided procedures that can be applied without knowing in advance whether identification is weak and that perform well when identification turns out to be strong.

For example, “Conditional Inference with a Functional Nuisance Parameter” (with Anna Mikusheva, Econometrica, 2016) considers statistical inference for a broad class of moment condition models when the moment conditions used may not be sufficient to identify the parameter of interest. In contrast to the linear IV model, the GMM model is essentially a semi-parametric model. One can think of the distribution of the moment equations as involving a nuisance functional parameter that arises from the nonparametric part of the model. The key idea of the paper is to condition the distribution of a test statistic on a sufficient statistic for the nuisance functional parameter. This provides the basis for a test with correct size and good power properties. The paper is influencing both empirical practice and how econometricians study GMM models.
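
A stylized sketch of the construction, in our notation: write the normalized sample moment process as \(g_n(\theta)\). Under the null hypothesis \(\theta = \theta_0\), \(g_n(\theta_0)\) is asymptotically mean zero with estimable covariance, while the mean of the process at other parameter values is an unknown function, the functional nuisance parameter. The test conditions on the residual process

\[
h_n(\theta) \;=\; g_n(\theta) \;-\; \hat{\Sigma}(\theta, \theta_0)\, \hat{\Sigma}(\theta_0, \theta_0)^{-1}\, g_n(\theta_0),
\]

which is asymptotically independent of \(g_n(\theta_0)\) and sufficient for the nuisance function, and critical values are computed from the conditional distribution of the test statistic given \(h_n\).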

Many empirical papers in economics estimate structural parameters by choosing them to minimize the distance between model predictions about reduced form parameters and sample estimates of those parameters. Statistical inference based on standard asymptotic theory is justified only if the sampling distribution of the reduced form parameters is tight enough relative to the degree of nonlinearity in the model predictions as a function of the structural parameters. In “A Geometric Approach to Weakly Identified Econometric Models” (with Anna Mikusheva, Econometrica, 2016), Andrews and Mikusheva use differential geometry to derive uniformly asymptotically valid minimum distance tests. The tests are applicable to a wide range of data generating processes and structural models. This creative paper offers deep insights into the nature of weak identification and provides a better tool for situations in which standard asymptotic inference may not apply.
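
In stylized notation, such estimators solve

\[
\hat{\theta} \;=\; \arg\min_{\theta}\; \big(\hat{\mu} - m(\theta)\big)'\, W\, \big(\hat{\mu} - m(\theta)\big),
\]

where \(\hat{\mu}\) collects the estimated reduced form parameters and \(m(\theta)\) the model’s predictions for them. Standard asymptotics implicitly treat the manifold \(\{m(\theta)\}\) as locally flat at the scale of the sampling noise in \(\hat{\mu}\); the geometric approach instead accounts for the curvature of this manifold relative to that noise, which is what allows the resulting tests to remain valid whether identification is weak or strong.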