It’s the sampling, stupid! (Part 4)

This is the last in a series of posts arguing that the fundamental problem with the recent US electoral polls was the failure to achieve representative samples of the population that actually showed up to vote. I hope I have been clear that achieving such a sample with current methods is no mean feat, given the challenges of high non-response, increased reliance on convenience sampling, and coalitions that shift from one election to the next, making it extremely difficult to predict who will turn out to vote.

There are many other voices in this debate, including those within market research arguing that the key issue is not sampling error at all, but measurement error. Their proposed remedies range from better question design to applications of behavioral economics, big data, and text analytics, to name a few. Improvements in measurement are always welcome, but if the sample you are working with is not representative of the population whose attitudes and behaviors you are trying to understand, you will miss the mark more often than not.

The reality is that the tools we have available to help us understand what people will do and why they will do it are blunt instruments. Yet we crave a precision that is mostly unattainable, at least without spending unrealistic amounts of money over an unacceptable period of time. In a post on this blog back in February about the British polling problems I argued that relying solely on surveys is not enough. We live in a data-rich world, and we need to bring multiple sources and methods to bear (surveys among them) if we are to do a better job of getting to something that resembles the truth.

Even then, reporting point estimates as absolutes seems like false precision.
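To see why, consider the textbook margin-of-error calculation that underpins most reported poll numbers. The sketch below (a hypothetical poll, not one from this series) computes the classical 95% interval for a proportion; note that the formula itself assumes a simple random sample, the very condition these posts argue modern polls rarely satisfy, so the real uncertainty is larger than the number suggests.

```python
import math

def margin_of_error(p, n, z=1.96):
    """Classical 95% margin of error for an estimated proportion p
    from a sample of size n. Valid only under simple random sampling,
    an assumption that high non-response and convenience sampling
    routinely violate in practice."""
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical poll: candidate at 48% support, 1,000 respondents
moe = margin_of_error(0.48, 1000)
print(f"48% +/- {moe * 100:.1f} points")  # roughly +/- 3.1 points
```

With a two-point lead sitting inside a roughly three-point interval, reporting "48 to 46" as a firm result is exactly the false precision at issue, and that is before accounting for any non-sampling error.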

Back in October, I did a webinar for NewMR in which I quoted Philip Tetlock (by way of Nate Silver) and argued that market researchers need to learn to be more multidisciplinary, adaptable, self-critical, complexity tolerant, cautious, and empirical. Afterwards I got an email from David Smith in which he wrote, “Should we be working harder as an industry to develop a new conceptual language to explain what we do? That is away from margin of error and so on – towards talking about the weight, power and safety of evidence?”

I think David is right, but we should not underestimate the difficulty of the task or the resistance we are likely to encounter.

Reg Baker is Executive Director of MRII.

