(Quasi-)Experimental Identification and Estimation
Paper Session
Friday, Jan. 6, 2023 10:15 AM - 12:15 PM (CST)
- Chair: Alberto Abadie, Massachusetts Institute of Technology
Negative Controls for Instrumental Variables
Abstract
Instrumental variable designs rely on exclusion and independence assumptions that are not directly testable. A commonly used indirect falsification ("placebo") test for these assumptions examines whether variables that should be independent of a valid instrument are indeed independent of the instrument being used (e.g., testing that an instrument is balanced on predetermined characteristics). Adapting terminology from other disciplines, we refer to these variables as negative controls. We develop a theoretical framework for negative-control tests for instrumental variables and derive the assumptions underlying such tests. The theory defines negative controls as proxies for alternative path variables (unobserved threats to instrument validity). While many studies use only one negative control, our theory can be used to detect multiple negative controls in existing data sets. Our theory also shows that negative-control tests map to testing the (conditional) independence of the set of negative controls and the instrument. We discuss various conditional independence tests that can be used as negative-control tests, including tests that can also detect a violation of the functional form assumption when 2SLS is used. We demonstrate that these negative-control tests can find and diagnose potential violations of the identification assumptions that commonly used tests would miss.
Assessing Omitted Variable Bias when the Controls are Endogenous
Abstract
Omitted variables are one of the most important threats to the identification of causal effects. Several widely used methods, including Oster (2019), have been developed to assess the impact of omitted variables on empirical conclusions. These methods all require an exogenous controls assumption: the omitted variables must be uncorrelated with the included controls. This is often considered a strong and implausible assumption. We provide a new approach to sensitivity analysis that allows for endogenous controls, while still letting researchers calibrate sensitivity parameters by comparing the magnitude of selection on observables with the magnitude of selection on unobservables. We illustrate our results in an empirical study of the effect of historical American frontier life on modern cultural beliefs. Finally, we implement these methods in the companion Stata module regsensitivity for easy use in practice.
Design-Based Uncertainty for Quasi-Experiments
Abstract
This paper develops a design-based theory of uncertainty that is suitable for analyzing quasi-experimental settings, such as difference-in-differences (DiD). A key feature of our framework is that each unit has an idiosyncratic treatment probability that is unknown to the researcher and may be related to the potential outcomes. We derive formulas for the bias of common estimators (including DiD), and provide conditions under which they are unbiased for an interpretable causal estimand (e.g., analogs to the ATE or ATT). We further show that when the finite population is large, conventional standard errors are valid but typically conservative estimates of the variance of the estimator over the randomization distribution. An interesting feature of our framework is that conventional standard errors tend to become more conservative when treatment probabilities vary across units. This conservativeness helps to (partially) mitigate the undercoverage of confidence intervals when the estimator is biased. Thus, for example, confidence intervals for the DiD estimator can have correct coverage for the average treatment effect on the treated even if the parallel trends assumption does not hold exactly. We show that these dynamics can be important in simulations calibrated to real labor-market data. Our results also have implications for the appropriate level to cluster standard errors, and for the analysis of linear covariate adjustment and instrumental variables.
Discussant(s)
Jann Spiess, Stanford University
Ashesh Rambachan, Harvard University
Evan Kershaw Rose, University of Chicago
Jeffrey M. Wooldridge, Michigan State University
JEL Classifications
- C1 - Econometric and Statistical Methods and Methodology: General
- C2 - Single Equation Models; Single Variables