Scaling Field Experiments
Paper Session
Sunday, Jan. 7, 2024 1:00 PM - 3:00 PM (CST)
- Chair: Robert Metcalfe, University of Southern California
Enhancing Human Capital in Children: A Case Study on Scaling
Abstract
This paper provides new insights into the science of scaling. We study an educational mentoring program with a home visit component implemented at scale in Mexico, under different modalities (original and enhanced training for mentors) and different situations (field experiment and policy implementation). While the program was ineffective when implemented by the government in its original modality, the enhanced modality boosts children's outcomes, both in the field experiment and during the government implementation. Higher-quality home visits encourage parent/child and parent/community interactions, which in turn are found to promote the scalability of the program. Our work provides new knowledge on the socially determined nature of scaling educational programs.
Improving Public Sector Management at Scale: Experimental Evidence on School Governance in India
Abstract
We present results from a large-scale experimental evaluation of an ambitious attempt to improve management quality in Indian schools (implemented in 1,774 randomly selected
schools). The intervention featured several global “best practices” including comprehensive
assessments, detailed school ratings, and customized school improvement plans. It did
not, however, change accountability or incentives. We find that the assessments were
near-universally completed, and that the ratings were informative, but the intervention
had no impact on either school functioning or student outcomes. Yet, the program was
scaled up to cover over 600,000 schools nationally. Using a matched-pair design, we find
that the scaled-up program continued to be ineffective at improving student learning in the
state we study. We also conduct detailed qualitative interviews with frontline officials and
find that the main impact of the program on the ground was to increase required reporting
and paperwork. Our results illustrate how ostensibly well-designed programs that appear
effective based on administrative measures of compliance may be ineffective in practice.
Bottlenecks for Evidence Adoption
Abstract
Governments increasingly use RCTs to test innovations, yet we know little about how they incorporate results into policy-making. We study 30 U.S. cities that ran 73 RCTs with a national Nudge Unit. Cities adopt a nudge treatment into their communications in 27% of cases. We find that the strength of the evidence and key city features do not strongly predict adoption; instead, the largest predictor is whether the RCT was implemented using pre-existing communication, as opposed to new communication. We identify organizational inertia as a leading explanation: changes to pre-existing infrastructure are more naturally folded into subsequent processes.
A Simple Rational Expectations Model of the Voltage Effect
Abstract
The “voltage effect” is defined as the tendency for a program’s efficacy to change when it is scaled up, which in most cases means the absolute size of a program’s treatment effects diminishes at scale. Understanding the scaling problem and taking steps to reduce voltage drops are important because, if left unaddressed, the scaling problem can weaken the public’s faith in science and lead to a misallocation of public resources. A growing literature illustrates the prevalence of the scaling problem, explains its causes, and proposes countermeasures. This paper adds to that literature by providing a simple model of the scaling problem that is consistent with rational expectations on the part of the key stakeholders. Our model highlights asymmetric information as a key contributor to the voltage effect.
JEL Classifications
- C9 - Design of Experiments
- D2 - Production and Organizations