A common concern about publications in scientific journals, even those with very high standards, is called “the file drawer problem,” or “positivity bias.” The problem is that studies that find something interesting are more likely to be selected for publication than studies that find “nothing going on.” For example, a study that finds that depression lessens after psychotherapy is more likely to be published than one that finds psychotherapy has no effect. Knowing this, researchers are less likely to submit an article for publication in the first place if they “didn’t find anything.”
The problem, then, is that studies that find a hypothesis is not supported are less likely to be published than studies that find the same hypothesis is supported, even when the hypothesized association does not really exist. Over time, studies that mistakenly find an association between two or more variables are more likely to be published than those that correctly find no association. The body of published studies will therefore give a misleading, biased picture of how the world really operates.
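To see how this plays out mechanically, consider the minimal simulation sketch below. It is not from the original text: the sample sizes and the 0.5 “interesting result” cutoff are arbitrary assumptions standing in for a statistical-significance filter.

```python
import random
import statistics

random.seed(42)

# Simulate many small studies of a treatment with NO real effect.
# Control and treatment samples come from the same distribution,
# so any observed difference is pure chance.
N_STUDIES = 1000
SAMPLE_SIZE = 20

def run_study():
    control = [random.gauss(0, 1) for _ in range(SAMPLE_SIZE)]
    treated = [random.gauss(0, 1) for _ in range(SAMPLE_SIZE)]  # true effect = 0
    return statistics.mean(treated) - statistics.mean(control)

effects = [run_study() for _ in range(N_STUDIES)]

# "File drawer" rule (an assumed, arbitrary cutoff): only studies whose
# observed effect looks interestingly positive get published; the rest
# stay in the drawer.
published = [e for e in effects if e > 0.5]

print(f"Mean effect across ALL studies:   {statistics.mean(effects):+.3f}")
print(f"Mean effect in PUBLISHED studies: {statistics.mean(published):+.3f}")
print(f"Studies published: {len(published)} of {N_STUDIES}")
```

Every simulated study is honest, yet the published subset alone reports a sizable average benefit where none exists; the inflation comes entirely from which studies make it out of the file drawer.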
This is exactly what was found in an important study of studies (a meta-analysis) of the effect of psychotherapy on depression. Ellen Driessen and colleagues in Amsterdam examined results from all 55 studies testing talk therapy for depression between 1972 and 2008. They also examined results from 13 unpublished tests of the same association. Their analysis found that the benefit of psychotherapy had been overstated by about 10 percentage points because of positivity bias: a benefit closer to 20% rather than the 30% suggested by the published studies alone.
Read more about this important research at: http://www.nytimes.com/2015/10/01/health/study-finds-psychotherapys-effectiveness-for-depression-overstated.html.
How would you explain the file drawer problem (positivity bias)?
What steps do you think journals could take to reduce this problem?