The Myth of Margin of Error

Margin of error as popularly understood overstates the reliability of research results in at least three key ways. First, those interpreting margin of error forget an important caveat: the results are estimates, and they typically vary within a narrow range around the actual value that would be calculated by completing a census of everyone in a […]
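To make that caveat concrete, here is a minimal sketch of the textbook calculation: the approximate 95% margin of error for a sample proportion, assuming a simple random sample from a large population. The sample size and poll result below are purely illustrative and not taken from the post.

```python
import math

def margin_of_error(p_hat: float, n: int, z: float = 1.96) -> float:
    """Approximate margin of error for a sample proportion,
    assuming a simple random sample from a large population
    (z = 1.96 corresponds to a 95% confidence level)."""
    return z * math.sqrt(p_hat * (1 - p_hat) / n)

# Illustrative example: a poll of 1,000 respondents where 52% choose option A.
p_hat, n = 0.52, 1000
moe = margin_of_error(p_hat, n)
print(f"Estimate: {p_hat:.0%} +/- {moe:.1%}")  # roughly 52% +/- 3.1%
```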

Respondent Engagement: Boosting Survey Satisfaction and Participation

Numerous industry groups have reported that respondent cooperation and response rates have been dropping over the past 20 years. Phone surveys have an average answer rate of less than 8%, and of those who answer, less than 4% agree to participate. In the early 2000s, Web-based panels produced average response rates of around 48%, […]

The importance of the screener

As I think back to the topics we have covered in recent months regarding research quality, the list reads much as you might expect. Mobile design considerations. Panel partnership and sourcing. Automating in-survey checks. Reviewing and coding open ends. Bayesian techniques for identifying outliers. But what I haven’t seen enough about, or perhaps anything at all about, is how […]

It’s the sampling, stupid! (Part 3)

In two previous posts I commented on the difficulties pollsters face in getting representative samples, regardless of the methodology they choose. Those difficulties vary depending on the broad approach to sampling (probability vs. nonprobability), but in all cases it takes deep knowledge of the target population, a science-based approach, and a little luck to […]

It’s the sampling, stupid! (Part 1)

The folks over at Pew have put up a blog post of sorts that starts what will no doubt be a long, torturous, and, if recent history is any guide, ultimately forgettable series of investigations aimed at determining why, once again, the polls were wrong. The Pew post lays out three potential […]