Berkeley, California - A course on critical thinking at UC Berkeley, co-taught for the past three years by a public policy expert and a Nobel Prize-winning physicist, has generated a new proposal to remove sources of bias in research and improve confidence in published studies.
Social science research got a black eye recently when the authors of several studies were shown to have manipulated data. But the more prevalent issue in the social sciences today is not actual fraud, but subtle and usually inadvertent bias that skews the conclusions of studies and often makes them unrepeatable.
In a commentary in the Oct. 8 issue of Nature, Robert MacCoun, a former UC Berkeley professor of law and public policy who is now at Stanford University, and Saul Perlmutter, a Berkeley professor of physics who shared the 2011 Nobel Prize in Physics for the discovery of the accelerating expansion of the universe, propose that empirical scientists in the fields of biology, psychology and the social sciences adopt some of the blind analysis techniques now common in some fields of physics.
“There is increasing evidence that a large fraction of the published results in many fields, including medicine, don’t hold up under attempts at replication, and that the proportion of ‘statistically significant’ published results is too good to be true, given existing sample sizes,” MacCoun said. “What’s causing this? Many factors, but much of it has to do with confirmation biases that stack the deck in favor of a preferred hypothesis.”
Perlmutter said that particle physics and cosmology – fields that employ particle accelerators and telescopes to ask some of the same fundamental physical questions – adopted blind analysis more than a decade ago to prevent the expectations of experimenters from affecting their conclusions. This happened after the late physicist Richard Feynman and others noticed that new measurements tended to agree with previously published values more often than statistics would predict, suggesting a confirmation bias in published studies.
“There is clear evidence in the literature that people tend to look for the errors in their analysis only when they get a surprising result or effect,” Perlmutter said. “This leads to people re-examining their analyses, and since there are often alternative approaches and/or subtle hidden bugs, the final conclusions typically end up more in line with previous results.”
“In addition, there’s also a bias toward interpreting random fluctuations in the data as meaningful if they back up some theory you have,” he said. “In blind analysis, you don’t let anybody who is working on the analysis see any of the relevant science results until they have debugged all their analysis techniques and they have checked all the calculations and analytic decisions they want to check.”
Hiding data from researchers
Blind analysis is not the same as a double-blind study in medicine, in which neither the experimenters nor the patients know which drug each patient received, so that this knowledge can neither influence the experimenters’ observations nor trigger a placebo effect in patients.
Instead, in a blind analysis, a computer program or a colleague hides the identity of the data or shifts its values by a hidden amount while the analysis is performed, debugged and finalized. As a result, the researchers cannot know how any of their decisions about the analysis, its checks and its debugging will affect the outcome, so they cannot steer the result, consciously or not, toward a preferred answer. Only at the end are the researchers told the real identity of the data.
“It forces people to think about any sources of errors, whether or not they get a surprising result,” Perlmutter said. “The blinding could be as low-tech as asking your colleague down the hall to randomize the labels on the different groups in your experiment.”
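To make the idea concrete, here is a minimal sketch in Python of the two low-tech blinding steps described above: randomizing the group labels and shifting the data by a hidden amount. The function names, the two-group setup and the offset scheme are illustrative assumptions, not the specific protocol from the Nature commentary.

```python
import numpy as np

# In practice, this random generator (or its outputs) would be held by a
# colleague or a script, not by the analyst doing the work.
rng = np.random.default_rng()

def blind_labels(groups, rng):
    """Remap group labels (e.g. 'treatment'/'control') to opaque codes.
    Only the keeper of the returned key knows which code is which."""
    unique = sorted(set(groups))
    codes = [f"group_{i}" for i in range(len(unique))]
    rng.shuffle(codes)
    key = dict(zip(unique, codes))
    return [key[g] for g in groups], key

def blind_values(values, rng, max_shift=10.0):
    """Shift every measurement by one hidden constant, so the analysis
    pipeline can be debugged end to end without revealing the true
    measured value."""
    offset = rng.uniform(-max_shift, max_shift)
    return np.asarray(values, dtype=float) + offset, offset

# The analyst works only with the blinded labels and shifted values ...
labels, key = blind_labels(["treatment", "control", "treatment", "control"], rng)
values, offset = blind_values([4.2, 3.9, 5.1, 4.0], rng)

# ... and only after every analytic decision is frozen are `key` and
# `offset` revealed to unblind the result.
```

Note the division of labor in this sketch: the constant offset preserves differences and variances within the data, which is what keeps debugging and consistency checks meaningful while concealing the absolute value being measured, and the label shuffle is what hides any between-group effect from the analyst.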
Because particle physicists, and later some cosmologists, began using blind analysis well before most scientists in other fields, Perlmutter said, he was intrigued to learn that social scientists weren’t already using the technique.
“Big Ideas” course
This realization came while he and MacCoun were co-teaching a course, “Sense and Sensibility and Science,” with philosophy professor John Campbell. Debuting in 2013 and taught each spring, this “Big Ideas” course focuses on critical thinking, the sources of authority of science in a democratic society, and the ways in which we fool ourselves when solving problems.
Once MacCoun and Perlmutter realized that the blind analysis used in physics could help remove many sources of experimenter bias from social science research, the two wrote a book chapter and were later invited to co-author a commentary in Nature. In the commentary, they address arguments against blind analysis – that it is too much trouble, or that it would endanger patients during clinical trials – and urge dissemination of best practices to make blinding broadly accessible.
“Blind analysis is particularly valuable for highly politicized research topics and for empirical questions that emerge in litigation,” MacCoun said, noting that some forensic laboratories are already beginning to adopt simple blind analysis methods. “And in some cases, expert witnesses could apply their preferred analytic methods to blinded data, which would greatly increase the credibility of their conclusions.”
The two also propose that funding agencies offer supplemental grants to encourage researchers to incorporate and test blinding methods in their funded research projects, and that statistical software vendors consider incorporating blinding algorithms in their data analysis packages.
Perlmutter will again co-teach “Sense and Sensibility and Science” next spring, with Campbell and a new faculty member rotating into MacCoun’s slot: Tania Lombrozo, an associate professor of psychology and frequent contributor to the NPR blog 13.7: Cosmos & Culture.