Fixing Data Quality at the Source: A Researcher Who Became a Respondent

Automation and programmatic technologies have led to efficiencies that have in turn commoditized sample and therefore commoditized people. The systems that route panelists through surveys are designed to optimize distribution with little consideration for the user experience. Having just embarked on the panelist journey myself, I can relate first-hand to the frustrations our respondents face every day.

A few weeks ago, I saw an ad on Twitter for paid surveys. As an experienced market researcher, I jumped on the opportunity to put myself in the respondent’s shoes and eagerly signed up. What happened next has been an eye-opening experience.

The second I signed up, I got email invitations for eight different panels. I enrolled in a couple of panels and started taking surveys. Quickly, I found myself entangled in a loop of pre-screeners that would determine which surveys I could potentially qualify for.

By the time I got to a live survey, I was already exhausted from being bounced around like a ball in a pinball machine.

I was also annoyed and in a bad mood because I had to answer the same screener questions over and over (gender, age, ZIP code, and so on). Despite my goodwill and genuine love for surveys, I hardly had any time, energy, or patience left for the survey I finally qualified for.

Putting my researcher hat back on, I wondered what this meant for our own surveys. On the one hand, programmatic sampling technologies are needed to meet the industry’s strong demand for sample, so going back to traditional proprietary panels seems unlikely. On the other hand, it is clear that our research practices have to evolve and adapt to this new sampling model. While there are many ways to improve data quality from design to analysis, I wanted to focus this article specifically on how we can mitigate some of the issues caused by the sampling process itself.


1. Short screeners can prevent survey fatigue.

Before this experiment, I had underestimated the survey fatigue caused by repeated pre-screening before panelists even get to our survey. One has to wonder how many times a normal person can go through this funnel before dropping out, and what type of respondent has enough stamina to stick around. I am more mindful than ever of questionnaire length, and especially of keeping the screener short, as ours is one of many that panelists will go through before they receive a reward for their time.


2. Fresh respondents are more attentive.

While this is arguably an assumption, it’s hard to deny that fatigue degrades cognitive performance, so it is helpful to know just how tired a respondent is before they start our survey. A self-reported measure can help us identify respondents who have already spent too much time on surveys to be fully engaged. We can then decide whether to let them proceed with the survey or remove them during post-fieldwork cleaning once their answers have been reviewed. For example, we can ask, “Before you got to this survey today, how long (in minutes) had you been filling out other surveys?” or, more generally, “How are you feeling right now (from energized to tired)?”
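As a rough illustration, the post-fieldwork cleaning step described above could be sketched as a simple filter over respondent records. The column names (`respondent_id`, `minutes_on_other_surveys`) and the 30-minute cutoff are hypothetical assumptions, not a prescribed standard; any real threshold should be validated against your own data.

```python
# Sketch of a post-fieldwork cleaning step: separate respondents who
# self-reported spending a long time on other surveys before ours.
# Field names and the 30-minute threshold are illustrative only.

FATIGUE_THRESHOLD_MINUTES = 30

def flag_fatigued(respondents):
    """Split respondents into (fresh, fatigued) lists based on the
    self-reported minutes spent on other surveys before starting ours."""
    fresh, fatigued = [], []
    for r in respondents:
        if r.get("minutes_on_other_surveys", 0) > FATIGUE_THRESHOLD_MINUTES:
            fatigued.append(r)
        else:
            fresh.append(r)
    return fresh, fatigued

# Example records a panel export might contain.
sample = [
    {"respondent_id": 1, "minutes_on_other_surveys": 5},
    {"respondent_id": 2, "minutes_on_other_surveys": 45},
]
fresh, fatigued = flag_fatigued(sample)
```

Whether flagged respondents are screened out up front or excluded during cleaning is a design choice; keeping both lists, as above, lets you compare their answers before deciding.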


3. Sample should not be a black box.

Not all panels are created equal. By understanding the various recruiting models, how the panels are managed, how the panelists are incentivized, and how much of the supply they use is not their own, we can make informed decisions about which panels are likely to yield higher quality data.


4. Smaller sample sizes on some projects may make more sense.

While there may be valid statistical reasons a large sample size might be needed, that is not always the case, and large sample sizes may compromise the quality of our data. When we’re scrambling to fill quotas or looking for a needle in a haystack, lower-quality sample sources that we would otherwise avoid may end up finding their way into our dataset.


5. Other methodologies may be better suited to your research needs.

Online research has become the de facto methodology for quant, and for good reason: It’s efficient and will give you the best “bang for your buck”. However, there are also scenarios where other methodologies, such as a custom recruit, face-to-face research, or qualitative methods, will lead to better insights (e.g. hard-to-reach audiences, countries where online is not representative, and so forth).

There are many ways to improve data quality, but as any good chef will tell you about a great dish, it starts with quality ingredients: it starts at the source.
