On many of our research projects, the sample size (i.e., the number of people surveyed) directly relates to research cost. Costs typically increase as we print and mail more surveys or call more people. Normally, the increase in sample size is worth the extra cost because the results are more likely to accurately reflect the population of interest. But is a big sample size always necessary?
Here is an example of when the sample size does not need to be very large in order to draw valuable insights. Let’s say you are the communications manager for an organization that wants to improve public health by increasing broccoli consumption. For the past year, you have diligently been publicizing the message that broccoli is good for your digestive system because it is high in fiber. Lately, your director has wondered if another message may be more effective at persuading people to eat broccoli—maybe a message that touts broccoli’s ample amount of vitamin C, which can help fight off the common cold. Switching your communication campaign’s key message would be expensive, but probably worth it if the new message were dramatically more effective at changing behavior. However, if the vitamin C message were only marginally more effective, it might not be worth spending the money to switch. Your director tasks you with conducting research to compare the effectiveness of the fiber message to the vitamin C message.
If you have undertaken message research in the past, you may have heard that you need a large, randomly drawn sample in order to draw reliable insights. For example, you might survey your population and collect 400 responses from a group who saw the original fiber message and 400 responses from those who saw the new vitamin C message. While collecting 800 responses might be valuable for some types of analysis, it is probably more than you need to answer the research question described above. Indeed, you might only need to collect about 130 responses (65 from each group) to answer the question “Which message is more effective?” Why so few?
Sixty-five responses from each group should reveal a statistically significant difference if the effect size is moderate or greater. (In social science research, we use the term effect size as a way to measure effectiveness—for example, how much more or less effective a new message is than an old one. A small effect requires careful analysis and a large sample to detect, while a large effect is obvious and easy to detect.)
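The arithmetic behind figures like these is a standard power calculation for comparing two group means. Below is a minimal sketch using only Python’s standard library; it assumes the common conventions of a two-sided test at a 5 percent significance level with 80 percent power (those thresholds are our assumptions here, not stated in the text above). The normal approximation lands at about 63 per group for a moderate effect (Cohen’s d = 0.5); the small correction for using a t-test nudges that to roughly 65.

```python
from math import ceil
from statistics import NormalDist

def n_per_group(effect_size: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate per-group sample size for a two-sided comparison of two
    group means, via the standard normal approximation:
        n = 2 * ((z_{1-alpha/2} + z_{power}) / d) ** 2
    where d is the effect size in standard-deviation units (Cohen's d).
    """
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # about 1.96 for alpha = 0.05
    z_power = z.inv_cdf(power)          # about 0.84 for 80% power
    return ceil(2 * ((z_alpha + z_power) / effect_size) ** 2)

print(n_per_group(0.5))  # moderate effect -> 63 per group
print(n_per_group(0.2))  # small effect -> 393 per group
```

Note how the small-effect case lands near the familiar “400 responses per group” figure: large samples are what you pay for when you need to detect small effects.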
So what does moderate mean? A helpful way (although not technically precise) to understand effect size is to think of it as a lack of agreement between two groups (e.g., those who saw the fiber message and those who saw the vitamin C message). With 65 responses from each group, a statistically significant result would mean there was no more than about 66 percent agreement between the groups (technically, less than 66 percent distribution overlap). For most communication managers, that is a substantial effect. If the result pointed to the new vitamin C message being more effective, it’s probably worthwhile to spend the money to switch messaging! If the analysis did not find a statistically significant difference between the messages, then it’s not advisable to switch, because the increased effectiveness (if any) of the new message would be marginal at best.
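For readers who want to see where an overlap figure like that comes from, one common way to translate an effect size into an overlap percentage is Cohen’s U1 statistic (our choice of measure here, not named in the text above). Assuming two normal distributions with equal spread, a moderate effect (d = 0.5) works out to roughly 67 percent overlap under this measure, consistent with the ballpark figure cited above:

```python
from statistics import NormalDist

def overlap_u1(d: float) -> float:
    """Overlap between two equal-variance normal distributions whose means
    differ by d standard deviations, using Cohen's U1: U1 is the share of
    combined area NOT shared, so overlap = 1 - U1."""
    phi = NormalDist().cdf(abs(d) / 2)
    non_overlap = (2 * phi - 1) / phi  # Cohen's U1
    return 1 - non_overlap

print(round(overlap_u1(0.5), 2))  # moderate effect -> 0.67 (about 67% overlap)
print(round(overlap_u1(0.8), 2))  # large effect -> 0.53 (about 53% overlap)
```

The larger the effect, the less the two groups’ response distributions overlap, which is why large effects are easy to spot even in small samples.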
If cost is no factor, then a bigger sample size is usually better, but I have not yet met a client who said cost didn’t matter. Rather, our clients are typically looking for insights that will help them produce meaningful and substantial impacts. They look for good value and timely results. By understanding the intricacies of selecting an appropriate sample size, we stretch our clients’ research dollars. Give us a call if you would like to discuss how we could stretch your research dollars.