In a previous post we discussed how survey length can indirectly drive up participant recruitment costs. Another often-ignored consequence of long surveys is poor-quality data that may or may not be easy to identify. Even conscientious participants lose the desire to cooperate as a survey drags on. By the sixth page, or the tenth, or the fifteenth of an online or paper survey, even the participants who've stuck with the task that far start itching to just be done with it. When that happens, people stop reading as carefully as they were before, or they start skipping questions, or worse, they start responding randomly without reading the questions or response options at all.
Can you tell when this has happened? Sometimes. If you find that you have a large number of incomplete surveys (a good reason to track survey starts as well as finishes), odds are good that some of the complete surveys you did get have garbage at the end. If a participant gave the same response to 20 consecutive questions at the end of the survey, those are probably not thoughtful responses. But how do you flag a participant who varied their responses at the end yet wasn't actually reading the questions? Recording the time to complete each question, as well as the time to complete the whole survey, gives you an easy check for speeders. Similarly, if two similar questions drew wildly different responses from the same person, you may be looking at a bored participant (this is also a good way to flag other sorts of cheaters; see our post about this here).
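To make those checks concrete, here is a minimal sketch in Python of the two screens described above: flagging straightliners (a long run of identical consecutive answers) and speeders (a median per-question time far below the overall median). The function name, data layout, and thresholds (a run of 10, 30% of the median time) are illustrative assumptions, not standards; tune them to your own survey.

```python
from statistics import median

def longest_run(answers):
    """Length of the longest run of identical consecutive answers."""
    best = run = 1
    for prev, cur in zip(answers, answers[1:]):
        run = run + 1 if cur == prev else 1
        best = max(best, run)
    return best

def flag_respondents(responses, run_threshold=10, speed_ratio=0.3):
    """Return the set of respondent ids worth a closer look.

    responses: dict mapping respondent id -> {"answers": [...], "seconds": [...]}
    Flags straightliners (a run of identical answers >= run_threshold) and
    speeders (median per-question time below speed_ratio * overall median).
    """
    all_times = [t for r in responses.values() for t in r["seconds"]]
    overall = median(all_times)
    flagged = set()
    for rid, r in responses.items():
        if longest_run(r["answers"]) >= run_threshold:
            flagged.add(rid)  # straightliner
        if median(r["seconds"]) < speed_ratio * overall:
            flagged.add(rid)  # speeder
    return flagged

# Toy data: p2 straightlines, p3 races through every question.
data = {
    "p1": {"answers": [3, 4, 2, 5, 3, 4, 2, 5, 3, 4],
           "seconds": [8, 7, 9, 6, 8, 7, 9, 8, 7, 8]},
    "p2": {"answers": [4] * 10,
           "seconds": [8, 7, 9, 6, 8, 7, 9, 8, 7, 8]},
    "p3": {"answers": [3, 4, 2, 5, 1, 4, 2, 5, 3, 4],
           "seconds": [2, 1, 2, 1, 2, 1, 2, 1, 2, 1]},
}
print(flag_respondents(data))  # -> {'p2', 'p3'}
```

A set of flags like this won't prove a response is garbage, but it tells you which surveys to inspect by hand before they quietly skew your results.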
The best way to deal with this situation is to prevent it from happening in the first place. Keep your surveys as brief as you can. That means setting concrete goals for the survey and making sure everyone involved in the project is on the same page. Surveys written by committee, where each member brings their own agenda, often grow long because they're trying to serve multiple goals at once. Figure out what your organization actually needs from the survey data, and leave open the possibility that two separate (shorter) surveys may serve those needs better than one long one. Make sure you know what the results of each question will mean for your organization: what action will you take if 40% of respondents strongly agree with a given statement?
In the end, most organizations are better off asking fewer questions and knowing that participants stayed focused and responded thoughtfully all the way through, rather than squeezing in more questions and having to wonder whether the data reflects little more than the random responses of tired people.