One concern that emerged from the recent scandal over apparently fictitious data in a published poll on support for same-sex marriage was whether public pollsters are less transparent about their methods than social scientists. Polling organizations often do not fully disclose their methods and may not release survey data until months after the findings are publicized.
Some also worry that polling firms “play it safe” by ensuring their results don’t differ too much from those of other firms, a practice known as “herding.”
How well do you think the academic peer review process works? Is it sufficient to guard against fraudulent research?
What standards would you recommend for polling firms?