Research Highlights Article
February 22, 2021
Beyond the lab
Do experiments with student participants produce valid measures of behavior within the general population?
Leeat Yariv likes lab experiments.
The Princeton University economist uses them in her research on matching markets and political economy, and appreciates them for the control they allow. But this research approach is not without its critics, who argue that experiments, largely performed with college students in an artificial setting, are unlikely to yield insights applicable to the real world.
Yariv’s coauthor, Erik Snowberg of the University of British Columbia, sympathizes with those criticisms but hasn’t dismissed lab experiments entirely. He wanted to know what the data showed.
“I was bothered by the self-righteous tone of people who might have a prior like mine but would make grand statements on little or no data,” Snowberg said in an interview with the AEA. “Even if I was more sympathetic to those complaints, I felt like trashing a bunch of people's research with no data to back it up wasn’t terribly useful.”
So, Yariv and Snowberg set out to provide a more complete accounting of the data that could validate or counter those concerns.
In their paper in the February issue of the American Economic Review, Snowberg and Yariv offer some positive news for researchers who rely on lab experiments for their work. After comparing survey responses from students with those from other populations, they conclude that the campus lab is generally a reliable place to gain general insights into human behavior.
Lab experiments have some key advantages, allowing researchers to collect large amounts of data in a setting they entirely control. But critics have argued that people in their late teens and early 20s likely make different choices from the general population. There are also questions about whether participants behave differently in the lab than in other environments, and whether students who sign up for these studies are a self-selected group.
To answer these questions, Snowberg and Yariv collected a sample of university students via the Caltech Cohort Study (CCS), an online survey covering 90 percent of Caltech’s roughly 900 undergraduate students. The incentivized survey (students were paid an average of $29 to complete it) measures a range of behaviors and attitudes, including risk aversion, lying, altruism, competitiveness, and implicit attitudes about race and gender, among other things.
They then surveyed other populations. One was a more representative sample of the United States made up of 1,000 people recruited through the platform Survey Sampling International. Another was recruited through the crowdsourcing service Amazon Mechanical Turk, which is widely used by economists because it allows them to collect large volumes of data quickly and inexpensively. They also surveyed a couple hundred University of British Columbia students to round out the student sample in case Caltech students were cognitively or behaviorally too extreme.
While there were some differences, overall the responses were quite close. Correlations between behaviors were similar across the student, representative, and MTurk samples.
The setting in which the survey was administered also didn’t seem to matter. Yariv and Snowberg invited nearly 100 Caltech students to take the survey again, only this time in a lab. Again, their responses were virtually identical to those provided outside of the lab.
And selection bias among the college students who opted to participate in the experiment was limited and, the authors say, easy to control for.
The college students (both Caltech and University of British Columbia) were closer to the ideal of rational actors than the other samples and showed greater cognitive sophistication. But that doesn’t make lab experiments with students invalid. They might be useful for establishing upper or lower bounds for a general population or for mimicking certain economic environments. It all depends on what the researcher is looking for.
“If the policy rests on the fact that people discount the future, then the fact that students discount it less than a normal person in a laboratory environment doesn't matter,” Snowberg said. “But if you really care about getting what that discount rate is, then yes, it's going to matter.”
Yariv said she took comfort in the fact that this paper validates lab experiments and hoped that it would cause doubters to think twice before dismissing them. That said, she doesn’t want or expect the critics to disappear.
“It’s useful for the profession to inspect how we do things,” Yariv said. “I think it's actually healthy for the experimental enterprise to be exposed to criticism. But at the same time, the way to address these concerns is not through polemics but through data.”
♦
“Testing the Waters: Behavior across Participant Pools” appears in the February issue of the American Economic Review.