Survey-Experience Satisfaction Research

This is the ninth in a series of articles on the topic of Participant Engagement and Survey Experience Satisfaction.

As we have discussed in previous posts, shorter, more entertaining surveys tend to be associated with greater participant satisfaction and willingness to respond to future survey invitations.  A common barrier to shorter surveys, cited by many research agencies, is that client sponsors insist on loading surveys with everything they can in order to maximize the information gained per dollar of budget spent.

A number of years ago, an industry group, in cooperation with the CMOR (Council for Marketing & Opinion Research) National Committee, designed a research program to explore the reasons for survey avoidance and to measure respondents’ satisfaction with the surveys they complete.  The output was to be used as a tool for giving clients feedback on their survey projects.

The objective was to demonstrate that when we torture survey takers, both data quality and respondents’ opinion of the sponsor deteriorate.

The specific objectives of this research program were to:

  • Determine respondent satisfaction on key survey metrics;
  • Gauge the drivers of a satisfactory survey experience;
  • Measure overall satisfaction with the survey experience;
  • Obtain respondent feedback to enhance the design of future research studies;
  • Use the findings to demonstrate “areas for survey improvement” to client organizations.

Permission was obtained from sponsors before appending the survey satisfaction question to surveys.

The methodology targeted participants who had completed various online research surveys over a three-month period.  These people automatically continued on to the Survey Satisfaction questionnaire at the end of the primary survey.  Note: since only people who completed the primary survey were redirected, we did not capture input from those who terminated mid-survey, which leads us to believe that these satisfaction results are probably inflated to some degree.

  • Participation in the survey satisfaction program was voluntary; respondents had the option to close their browsers and exit the survey at any time.
  • While the number of survey satisfaction completes varied with each associated survey, data indicate that approximately 93% of those who completed the primary survey also completed the survey satisfaction questions.

The approach also captured ancillary variables on each survey’s characteristics in order to control for external influences on satisfaction (a modeling sketch follows the list).  Some examples included:

  • Sample source (Panel, Non-Panel)
  • Invitation type (Interstitial Pop-up, Email invitation)
  • Sponsor identification (Yes, No)
  • Length of survey in seconds
  • Number of questions
  • Incentive offered (Yes, No)
  • Expected value of incentive if one is offered
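To illustrate how such controls might work, here is a minimal sketch, in Python, of regressing satisfaction on survey characteristics.  The column names and data are hypothetical placeholders, not the study’s actual variables; the point is only that a model like this lets you estimate the effect of one characteristic while holding the others constant.

    # A minimal sketch, assuming hypothetical column names and invented data:
    # an OLS model of satisfaction on survey characteristics, so the effect
    # of length can be estimated while holding other influences constant.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.DataFrame({
        "satisfaction":   [4, 5, 3, 2, 4, 5, 3, 4],          # 1-5 agreement scale
        "length_seconds": [480, 300, 900, 1500, 600, 350, 1100, 700],
        "n_questions":    [20, 12, 40, 65, 25, 15, 45, 30],
        "panel":          [1, 1, 0, 0, 1, 1, 0, 1],          # sample source flag
        "sponsor_known":  [0, 1, 0, 1, 1, 0, 0, 1],          # sponsor identified?
        "incentive":      [1, 1, 0, 0, 1, 1, 0, 1],          # incentive offered?
    })

    model = smf.ols(
        "satisfaction ~ length_seconds + n_questions + panel"
        " + sponsor_known + incentive",
        data=df,
    ).fit()
    print(model.params)  # one coefficient per survey characteristic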

Data collected from 7,475 respondents across 17 different research studies were included in this analysis.

Overall satisfaction across all surveys was 70% Top Two Box (those strongly or somewhat agreeing that the experience was a good one).  Most respondents agreed that the surveys were well organized (81%) and well written (78%), while fewer agreed that the topics were important to them personally (53%) or that they were being adequately compensated for their time (54%).

 

Chart 1: Survey Satisfaction Ratings
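For readers unfamiliar with the metric, here is a minimal sketch of a Top Two Box calculation on a five-point agreement scale; the response data are invented for illustration.

    # Top Two Box (TTB): the share of respondents choosing the top two points
    # (4 = somewhat agree, 5 = strongly agree) on a 5-point scale.
    # The ratings below are invented for illustration.
    import pandas as pd

    ratings = pd.Series([5, 4, 3, 5, 2, 4, 4, 1, 5, 4])

    ttb = (ratings >= 4).mean()       # fraction of responses in the top two boxes
    print(f"Top Two Box: {ttb:.0%}")  # -> "Top Two Box: 70%"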

As predicted, the length of the survey was highly correlated with overall satisfaction.  Satisfaction rates were significantly higher, at more than 75% Top Two Box, for surveys lasting less than 13 minutes.  Conversely, significantly more people reported negative reactions on surveys lasting more than 18 minutes.
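A hedged sketch of this kind of length analysis appears below: bin respondents by survey length and compare Top Two Box rates across bins.  The break points echo the 13- and 18-minute thresholds above, but the data and column names are illustrative only.

    # Illustrative only: group respondents into length bins and compute the
    # Top Two Box satisfaction rate within each bin.
    import pandas as pd

    df = pd.DataFrame({
        "length_min":   [8, 11, 12, 14, 16, 19, 22, 25, 9, 30],
        "satisfaction": [5, 5, 4, 4, 3, 3, 2, 2, 4, 1],
    })

    bins = pd.cut(df["length_min"], bins=[0, 13, 18, float("inf")],
                  labels=["<13 min", "13-18 min", ">18 min"])
    ttb_by_length = df.groupby(bins, observed=True)["satisfaction"].apply(
        lambda s: (s >= 4).mean()
    )
    print(ttb_by_length)  # TTB falls as survey length increases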

The intended use for the output from the study was to give clients feedback and advice on how to increase satisfaction (and continued responsiveness) with the survey activity.  The goal for the project was to produce a simple, one-page snapshot of the effect of the survey’s characteristics (time and stress) on the participant’s experience.  The client’s survey is compared to national normative data, and diagnostic information is provided on survey design features and possible corrective actions.

 

Exhibit 1: One-Page Output
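As a rough sketch of the diagnostic logic behind such a snapshot, the snippet below compares a client survey’s Top Two Box scores against normative benchmarks and flags metrics that fall short.  The norm values echo the study-wide figures reported above; the client scores are hypothetical.

    # Sketch of the one-page diagnostic idea: flag metrics where the client
    # survey scores below the normative benchmark. Client values are invented;
    # norms echo the study-wide TTB figures reported earlier in this article.
    norms = {"well organized": 0.81, "well written": 0.78,
             "topic important": 0.53, "fair compensation": 0.54}
    client = {"well organized": 0.74, "well written": 0.80,
              "topic important": 0.45, "fair compensation": 0.50}

    for metric, norm in norms.items():
        gap = client[metric] - norm
        status = "below norm -> review design" if gap < 0 else "at or above norm"
        print(f"{metric:18s} client {client[metric]:.0%} vs. norm {norm:.0%} ({status})")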

Clients with whom this feedback was shared have generally welcomed the information.  In some instances, this type of data is being used by internal corporate research departments to negotiate with stakeholders to reduce the burden of surveys, particularly on client-identified surveys.

Lower satisfaction scores have been used successfully to suggest changes in the level of incentive, in survey length (both duration and number of questions), and in the approach to future invitations.  Participants who have been asked to rate the “survey experience” have been enthusiastic in giving hard-hitting feedback and have made valuable suggestions.

Possible next steps to enhance the usefulness of this tool include:

  • Verifying initial data suggesting that high survey satisfaction scores give a “brand lift” to identified survey sponsors;
  • Using satisfaction scores as a negotiating tool for difficult surveys (if the satisfaction level falls below 50% TTB, the cost of the survey will increase to cover panelist burnout);
  • Adapting industry best practices for survey characteristics that depress satisfaction;
  • Statistically linking engagement and satisfaction to higher data quality.

As always, we appreciate your comments and shared experiences on methods for measuring survey satisfaction and engaging forms of research technology.

Bill MacElroy is Chairman, Socratic Technologies, Inc. www.sotech.com

 

