Reproducibility? Not So Much.

When social scientists use exemplary methods and report their findings accurately, we like to think that they have found out something about the social world.  Furthermore, it then seems that if another social scientist conducted the same study again, with the same methods, their findings would be pretty much the same.  In fact, “reproducibility” is one of the goals for social science research.

But in spite of widespread acceptance of the standard of reproducibility, few studies are carefully replicated.  Perhaps that’s mostly because it’s more exciting to try to discover something new, rather than simply to confirm what someone else has already reported.

In a project that began in 2011, University of Virginia psychologist Brian Nosek and a large team decided to put reproducibility to a stringent test.  They designed a research project to repeat 100 studies that had been published in leading psychology journals.  Using the same methods, the team was able to reproduce the original results in only 39% of the studies.

These very disappointing results remind us both of the complexity of the social world we study and of the challenges of social research.  The original studies seemed to have been carried out with rigorous methods.

What accounts for this low level of reproducibility?  Do scientists set aside many of their findings and report only those that are “interesting” to journal editors?  If so, could many reported findings just be due to chance?  Or could social scientists shape their methods in subtle ways that make it more likely they can reach their favored conclusions?
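The “due to chance” possibility can be illustrated with a small simulation (entirely hypothetical, not data from the Nosek project): every study below tests an effect that is truly zero, yet selectively reporting only the “significant” results still produces a stack of publishable findings, and those findings then fail to replicate at anything better than chance rates.

```python
import random
import math

random.seed(42)

def run_experiment(n_per_group=30):
    """Two groups drawn from the SAME distribution: any 'effect' is pure chance."""
    g1 = [random.gauss(0, 1) for _ in range(n_per_group)]
    g2 = [random.gauss(0, 1) for _ in range(n_per_group)]
    diff = sum(g1) / n_per_group - sum(g2) / n_per_group
    z = diff / math.sqrt(2 / n_per_group)   # known sigma = 1, so a z-test applies
    return abs(z) > 1.96                    # "significant" at the 5% level

n_studies = 1000
# Selective reporting: only the 'significant' results get written up.
published = [i for i in range(n_studies) if run_experiment()]

# Replication attempts of the published findings, using the same method.
replicated = sum(run_experiment() for _ in published)

print(f"{len(published)} of {n_studies} null studies were 'significant' (~5% expected)")
print(f"{replicated} of {len(published)} published findings replicated (~5% expected)")
```

About 5% of truly null studies come out “significant,” so a literature built from selective reporting can contain many findings with no real effect behind them, and replication exposes them.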

http://www.nytimes.com/2015/08/28/science/many-social-science-findings-not-as-strong-as-claimed-study-says.html

What do you think explains the low rate of reproducibility of the psychology studies?

Do you think the rate of reproducibility is likely to be higher in sociological research that involves large representative samples from clearly defined populations?

Posted in Chapter 1, Chapter 16, Chapter 2, Chapter 3, Chapter 7

Trash the Focus of Anthropological Research in New York

New York University anthropologist Robin Nagle has found “a gold mine for garbage pickers.” New York’s Department of Sanitation collects almost 3.5 million tons of trash each year. The contents range from discarded photos of a divorced spouse and bottles from those with a drinking problem to diapers from new babies and clothing discarded as no longer in fashion. Professor Nagle realized that she could improve understanding of our modern “throwaway culture” by systematically studying this trash. She became the “anthropologist-in-residence” for the sanitation department, has worked as a regular, salaried sanitation worker, has helped to publicize the value of “the most important workers that we have in this city,” and has published books on her findings.

For Further Thought?

  1. What can social researchers understand about the social world by investigating this most unobtrusive of indicators?
  2. What problems do you suppose Professor Nagle has to overcome in this research approach?

http://www.bostonglobe.com/news/nation/2015/08/30/trash-treatise-nyc-professor-sees-meaning-garbage/ApbxLP4zY27ButOKuJRhqI/story.html

Posted in Chapter 10, Chapter 12, Chapter 13, Chapter 3, Chapter 5

Can Big Data be a Bad Thing?

Have you ever found yourself changing your behavior just to “score points” with your Fitbit bracelet, or something similar?  How much do we really learn from postings on Facebook?  Is it just what people want us to see?  What are we missing?

http://www.nytimes.com/2015/05/03/opinion/sunday/how-not-to-drown-in-numbers.html

Do we need to keep our connection to small data also?

Posted in Chapter 14

Place matters for poverty

Children who move out of high-poverty neighborhoods to low-poverty neighborhoods with more resources do better on multiple outcomes, and the younger they are when their families move, the better. These conclusions come from a study of the long-term outcomes of the Moving to Opportunity experiment.

Read some of the details and examine some of the results at:

http://www.equality-of-opportunity.org/

How effective are these data displays in conveying information?

What might be sources of invalidity in such a field experiment?

Posted in Chapter 16, Chapter 6, Chapter 7, Chapter 9

Do social scientists do better than pollsters?

One of the concerns that emerged from the recent scandal about apparently fictitious data in a published poll about support for same-sex marriage was whether public pollsters are less transparent in their methods than social scientists.  Polling organizations are often not fully transparent about their methods and may not release survey data until months after survey findings are publicized.

Some are also concerned that polling firms “play it safe” by trying to ensure that their own results don’t differ too much from the findings of other firms, a practice that’s been called “herding.”
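A stylized sketch of how herding could work (hypothetical numbers, not any firm’s actual procedure): each new pollster shades its raw estimate halfway toward the average of previously released polls. The published numbers then agree with one another more closely than honest sampling error alone would produce.

```python
import random
import statistics

random.seed(1)

TRUE_SUPPORT = 0.52   # hypothetical true level of support
N = 800               # respondents per poll

def raw_poll():
    """An honest poll: the only error is random sampling error."""
    return sum(random.random() < TRUE_SUPPORT for _ in range(N)) / N

honest, herded = [], []
for _ in range(40):
    est = raw_poll()
    honest.append(est)                     # what the firm actually measured
    if herded:
        consensus = statistics.mean(herded)
        est = 0.5 * est + 0.5 * consensus  # shade halfway toward the pack
    herded.append(est)                     # what the firm releases

print(f"spread of honest polls: {statistics.stdev(honest):.4f}")
print(f"spread of herded polls: {statistics.stdev(herded):.4f}")
```

The herded polls cluster more tightly than the honest ones, which is exactly the suspicious pattern analysts look for: published polls that agree with each other *too* well given their sample sizes.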

http://www.nytimes.com/2015/05/28/upshot/pollings-secrecy-problem.html?abt=0002&abg=1

How well do you think the academic peer review process works?  Is it sufficient to guard against fraudulent reports?

What standards would you recommend for polling firms?

Posted in Chapter 16, Chapter 3, Chapter 7, Chapter 8

Increasing retractions, increasing fraud?

There has been a 20-25 percent increase in retractions across some 10,000 medical and science journals in the past five years (it’s now up to 500-600 retractions per year).  Data have been distorted or faked, and the methods used to produce the data have been misreported.  Is the problem greater dishonesty by scientists, poorer-quality peer reviews, or new tools for detecting plagiarism and other forms of dishonesty?  Greater pressure to win attention for new “discoveries”?  Or all of those and more?

Of course, this problem is being researched systematically, and tips are being submitted to sites like Retraction Watch and MedPage Today.

You can read more about the problem at:

http://www.nytimes.com/2015/06/16/science/retractions-coming-out-from-under-science-rug.html

Do you think the same problem occurs in social science journals?

Could you suggest some adaptations of the peer review process to lessen this problem?

Posted in Chapter 16, Chapter 3

Where are our survey methods when we most need them?

Problems with sampling and response rates in phone surveys, due to cell phones and answering machines, continue to bedevil survey researchers. As the 2016 presidential election approaches, the reliability of election polling is increasingly a focus of concern.  Predictions in some recent elections have been wrong, as with Israeli support for Prime Minister Benjamin Netanyahu and Conservative strength in Britain.  Moreover, as the response rate in some major U.S. phone surveys has dropped to 8%, there is no clear solution.
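To see why an 8% response rate is so worrisome, here is a back-of-the-envelope calculation with hypothetical numbers: whenever the people who respond differ from the people who don’t, nearly all of that difference passes straight into the survey estimate as bias.

```python
def nonresponse_bias(response_rate, p_responders, p_nonresponders):
    """Gap between the true population value and what a survey of responders reports."""
    true_p = response_rate * p_responders + (1 - response_rate) * p_nonresponders
    observed_p = p_responders            # the survey only sees responders
    return observed_p - true_p           # = (1 - rate) * (p_resp - p_nonresp)

# Hypothetical numbers: 8% respond; 60% of responders support a candidate
# but only 50% of nonresponders do.
bias = nonresponse_bias(0.08, 0.60, 0.50)
print(f"bias: {bias:.3f}")   # 0.92 * 0.10 = 0.092, over nine points
```

With full response the bias disappears entirely; at an 8% response rate, 92% of any responder/nonresponder difference survives as error, no matter how large the sample of responders is.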

The background you need for understanding these problems is in the sampling and survey research chapters of Investigating the Social World.  You can read a recent analysis at:

http://www.nytimes.com/2015/06/21/opinion/sunday/whats-the-matter-with-polling.html?_r=0

Have you or your friends responded to surveys sent to your cell phone?  Could this be a solution or is it an invasion of your privacy?

What do you see as the advantages and disadvantages of conducting surveys through online websites?

Posted in Chapter 1, Chapter 3, Chapter 5, Chapter 8

Research Findings Too Good to be True

The level of popular acceptance of same-sex marriage has increased dramatically in recent years, but remains low in many areas. What if same-sex marriage proponents sent gay canvassers into neighborhoods to persuade opponents of gay marriage to change their potential votes on the subject? Would this be more effective than having straight people act as the persuaders? The Los Angeles LGBT Center decided to sponsor an experiment to find out.

The experimental design was simple enough: Gay or straight canvassers were chosen randomly to visit voters in their homes and try to persuade them. The findings favored the use of gay canvassers and were so compelling that after peer review they were published in the prestigious journal Science.

But now it appears the findings were fraudulent and the article will be retracted. The young researcher who collected the surveys measuring changes in attitudes seems to have been so determined to come up with the positive findings that he made them up, at least in part. He said he paid participants to increase the response rate, but he hadn’t. He was asked for the original data but said he had erased it. He claimed a 12% response rate, but other researchers could not achieve anything like that. He said he had worked with a survey company, but they denied any awareness of the project.

You can read more about this troubling story at http://www.nytimes.com/2015/05/26/science/maligned-study-on-gay-marriage-is-shaking-trust.html?_r=0.

1. How did this failure of honesty and openness happen?

2. Why didn’t the peer review process identify the problems before the paper was published?

3. Are you reassured that another team of researchers tried to replicate the study and ultimately found so many differences in how it was working out that they started to investigate the original study? Or are you troubled that it took this much effort to uncover an apparent fraud? Is science really self-correcting?

4. What recommendations would you make to an IRB to minimize the likelihood of another such failure of oversight?

Posted in Chapter 12, Chapter 16, Chapter 3, Chapter 5, Chapter 7, Chapter 8, Uncategorized

Affective Realism?

Is seeing believing?  It’s natural to feel that when we observe events, or conduct lengthy interviews to learn what people saw or heard, we’re learning about the social world as it “really is.”  But recent experiments by psychologists demonstrate a direct impact of feelings on perceptions:  People experiencing unpleasant feelings in turn perceive others as less likable, less competent, more likely to commit a crime, and so on.  In other words, what we “see” or otherwise perceive is shaped in part by our predictions about the world around us.

Drs. Lisa Feldman Barrett and Jolie Wormwood connect these research findings to troubling questions about police shootings.  Read more at: http://www.nytimes.com/2015/04/19/opinion/sunday/when-a-gun-is-not-a-gun.html

Do these findings make you less trusting of qualitative methods?

Can you imagine ways of designing qualitative research to lessen these problems?

Do you think the impact of “affective realism” would be greater in qualitative or in quantitative research projects?

Does use of scientific methods lessen the problem of affective realism, or just obscure it?

Posted in Chapter 1, Chapter 10, Chapter 12, Chapter 15, Chapter 3, Chapter 4

Learn (and Teach) by Doing

Learning by “tinkering” has caught on at San Francisco’s Tinkering School. The idea is to enhance education by having children learn by carrying out projects. For example, have students form a construction crew to create a small cardboard city. Or prepare wagons for a cross-country trip during the settling of the West. They can then learn about architecture, physics, chemistry, social relations, …. You get the idea. And I’ll bet you realize that what children learn during such projects will be understood and retained better than what they hear in a lecture.

Read more about it at http://www.nytimes.com/2015/04/04/opinion/learning-through-tinkering.html

What projects have helped you to learn? What projects could you suggest for your methods class?

Posted in Chapter 1, Chapter 2, Teaching Tips, Uncategorized