RADIANCE BLOG

Category: Quantitative Research

There is more to a quality survey than margin of error

One of the areas in which Corona excels is helping our clients who aren’t “research experts” to understand how to do research in a way that will yield high-quality, reliable results.  One of the questions we are frequently asked is how many completed surveys are necessary to ensure a “good survey.”  While the number of surveys definitely has an impact on data quality, the real answer is that there are many things beyond sample size that you have to keep in mind in order to ensure your results are reliable.  Here is an overview of four common types of errors you can make in survey research.

Sampling Error

Sampling error is the one type of error that can be easily summarized with a number.  Because of this, many tend to think of it as the main way of reporting a survey’s quality.  Sampling error refers to the “margin of error” in a survey’s results that exists because you surveyed only a sample of the population rather than everyone in it.  The “error” occurs when you draw a conclusion based on a survey result that might have been different if you’d conducted a larger survey and gathered a wider variety of opinions.  As an example, imagine that you wanted to conduct a survey of people at a concert about their reasons for attending.  In an extreme case, you could collect 10 random responses to your question and draw conclusions about the population from that, but chances are the next 10 people you hear from might have very different opinions.  As you collect more and more surveys, this becomes less of an issue.

It’s important to keep in mind, however, that the calculations for sampling error assume that 1) your sample was random (see coverage error below) and 2) everyone you chose for the survey ended up responding to it (see non-response error below) – neither of which is true a lot of the time.  Because of that, it is important to realize that any margin of error calculation is likely telling you only part of the story about a survey’s quality.

Still, it is certainly true that, when obtained properly, larger sample sizes tend to produce more reliable results.  Generally speaking, a sample of 100 responses will give you a general feel for your population’s opinions (with a margin of error around ±10 percent), 400 responses will give you reasonably reliable results (with a margin of error around ±5 percent), and larger samples will allow you to examine the opinions of smaller segments of your audience.  Obtaining an adequate sample size will therefore always be a key consideration in survey research.
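For those curious where those ±10 percent and ±5 percent figures come from, here is a minimal Python sketch of the standard margin-of-error approximation.  It assumes a simple random sample, a 95 percent confidence level, and the conservative case where opinions are split 50/50; real surveys with weighting or coverage problems will do somewhat worse.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a simple random sample of size n.

    Assumes the conservative case p = 0.5 and ignores finite population
    corrections and any design effects from weighting.
    """
    return z * math.sqrt(p * (1 - p) / n)

for n in (100, 400, 1000):
    print(n, f"±{margin_of_error(n):.1%}")
# 100 -> ±9.8%, 400 -> ±4.9%, 1000 -> ±3.1%
```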

Measurement Error

Measurement error is one of the most difficult types of errors to identify and is probably the most common mistake made by amateur survey researchers.  Measurement error occurs when respondents don’t understand how to properly respond to a question due to the way it’s worded or the answer options you’ve provided.  For example, if you were to ask concert goers how long it took them to get to the venue that night, you might get a set of answers that look reasonable, but what if some picked up their friends on the way?  What if some went to dinner before the concert?  Similarly, if you asked whether they came to the concert because of the band or because they like the venue, you might conclude that a majority came for the band when the real reason was that many just wanted to hang out with their friends.

So how do you protect against this?  Whenever possible, it’s important to test your survey – even if it’s just with a friend or coworker who is not involved in the research.  Have them take the survey, then have them talk you through how they interpreted each question and how they decided which answer fit best.  Then, if necessary, make changes to your survey in any areas that were unclear.

Coverage Error

Once you have developed a well-designed set of survey questions, the next step is to determine how you are going to get people to take your survey.  It might be tempting to just put a link on the concert venue’s Facebook and Twitter pages to ask people their opinions, but the results of such a survey likely wouldn’t reflect the opinions of all concert goers because it’s unlikely that all of them use social media.  If you were to do a survey in this fashion, you might find that attendees appeared to be very tech savvy and open to new ideas and approaches, when in reality those just happen to be characteristics of people who use social media.  The results of your survey might be skewed because you didn’t “cover” everyone in the audience (not to mention the possible issue of people taking the survey who didn’t actually attend the concert).

In order to ensure your survey is as high quality as possible, look for ways to ensure that everyone you are trying to represent is included in your sampling frame (even if you randomly choose a subset to actually receive an invitation).  If it’s truly not possible to do so, be sure to at least keep the potential bias of your respondent pool in mind as you interpret the results.

Non-response Error

As the final type of common error, non-response error is caused by the fact that, no matter how well you have designed and implemented your survey, there are a lot of people out there who simply aren’t going to respond.  Similar to coverage error discussed previously, non-response can cause you to draw conclusions from your survey’s results that may not be reflective of the entire population you are studying.  For example, many concert goers wouldn’t want to be bothered to take a survey, so the results you get would likely only be representative of a type of person who either 1) didn’t value their time as highly or 2) was particularly interested in the survey topic.

Unfortunately, non-response error is extremely difficult to eliminate entirely and will be a concern with just about any survey.  The most common approach is to try to boost your response rate as much as possible through a combination of frequent reminders, incentives, and messaging that appeals to respondents’ desire to be helpful, but even the best surveys typically achieve response rates of only 30-40 percent.  If budget is no issue, perhaps the best solution is to conduct follow-up research with those who didn’t originally respond, but even then, there will always be some who simply refuse to participate.

~

When it comes down to it, there is no such thing as a perfect survey, so any study will necessarily need to balance data quality with your timeline and budget.  Many of Corona’s internal debates involve discussing ways to reduce these errors as effectively as possible for our clients, and we are always happy to discuss various tradeoffs in approaches and how they will impact data quality.  Regardless, we hope that the next time you see a headline about a survey with a margin of error used to represent its quality, you will keep in mind that there is a lot more to determining the quality of a survey than that one number.


Turning Passion into Actionable Data

Nonprofits are among my favorite clients to work with here at Corona for a variety of reasons, but one of the things that I love most is the passion that surrounds nonprofits.  That passion shines through the most in our work when we do research with a nonprofit’s internal stakeholders, which could include donors, board members, volunteers, staff, and program participants.  These groups of people, who are already invested in the organization, are passionate about helping to improve it, which is good news when conducting research, as it often makes them more likely to participate and thus increases response rates.

Prior to joining the Corona team, I worked in the volunteer department of a local animal shelter.  As a data nerd even then, I wanted to know more about who our volunteers were, and how they felt about the volunteer program.  I put together an informal survey, and while I still dream about what nuggets could have been uncovered if we had gone through a more formal Corona-style process, the data we uncovered was still valuable in helping us determine what we were doing well and what we needed to improve on.

That’s just one example, but the possibilities are endless.  Maybe you want to understand what motivated your donors to contribute to your cause, how likely they are to continue donating in the future, and what would motivate them to donate more.  Perhaps you want to evaluate the effectiveness of your programs.  Or, maybe you want to know how satisfied employees are with working at your organization and brainstorm ideas on how to decrease stress and create a better workplace.

While you want to be careful about being too internally focused and ignoring the environment in which your nonprofit operates, there is huge value in leveraging passion by looking internally at your stakeholders to help move your organization forward.

 


Informal Research for Nonprofit Organizations

While Corona serves all three sectors (private, public, and nonprofit) in our work, we have always had a soft spot for our nonprofit clients.  No other type of organization is asked to do more with less, so we love working with nonprofits to help them refine their strategies to be both more effective at fulfilling their missions and more financially stable at the same time.

However, while we are thrilled for the opportunities to work with dozens of nonprofits every year, we know that there are hundreds of other organizations that we don’t work with, many of which simply don’t have the resources to devote to a formal marketing research effort. I’m a huge fan of the Discovery Channel show MythBusters, so I’ll share one of my favorite quotes:

[Image from Tested, courtesy of DCL: Adam Savage’s maxim that the only difference between screwing around and science is writing it down. Source: http://www.tested.com/art/makers/557288-origin-only-difference-between-screwing-around-and-science-writing-it-down/]

While few would argue that the results found in an episode of MythBusters would qualify as academically rigorous research, I think most would agree that trying a few things out and seeing what happens is at least better than just trusting your gut instinct alone.  Likewise, here are a few ideas for ways that nonprofits can gather at least some basic information to help guide their strategies through informal “market research.”

Informal Interviews

One-on-one interviews are one of the easiest ways to gather feedback from a wide variety of individuals.  Formal interview research involves a third party randomly recruiting participants from the entire universe of people you are trying to understand, but simply talking to people one-on-one about the issues or strategies you are considering can be very insightful.  Here are a few pointers on getting the most out of informal interviews:

  • Dedicate time for the interview. It may seem easy to just chat with someone informally at dinner or at an event, but the multitude of distractions will reduce the value you get out of the conversation.  Find a time that both parties can really focus on the discussion, and you’ll get much better results.
  • Write your questions down in advance. It’s easy to go down a rabbit hole when having a conversation about something you are passionate about, so be sure to think through the questions you need to answer so that you can keep the conversation on track.
  • Record the conversation (or at least take notes). Take the MythBusters’ advice, and document the conversation.  If you’ve talked to a dozen people about your idea, it will be impossible for you to remember it all.  By having documentation of the conversations, you can look back later and have a better understanding of what your interviewees said.

Informal focus groups

Similar to interviews, in an ideal world focus groups should be conducted by a neutral third party with an experienced moderator who can effectively guide the group discussion to meet your goals.  However, as with interviews, you can still get a lot of value out of just sitting down with a group and talking through the issues.  In particular, if you have an event or conference where many people are together already, grabbing a few of them to talk through your ideas can be very informative.  Our suggestions for this type of “research” are similar to those for informal interviews, with slight differences in their implications:

  • Dedicate time for the discussion. As mentioned before, it may be tempting to just say “We’ll talk about this over dinner” or “Maybe if we have time at the end of the day we can get together.”  You’ll get far better results if everyone can plan for the conversation in advance and participate without distractions.
  • Write your questions down in advance. Even more so than for interviews, having a formal plan about what questions you want to ask is imperative.  Group discussions have a tendency of taking on a life of their own, so having a plan can help you to guide the discussion back on topic.
  • Document the results. Again, you may think you can remember everything that was said during a conversation, but a few months down the road, you will be very thankful that you took the time to either record the conversation or take notes about what was said.

Informal Surveys

Surveys are, perhaps, the most difficult of these ideas to implement on an informal basis, but they can nevertheless be very useful.  If you just need some guidance on how members of an organization feel about a topic, asking for a show of hands at a conference is a perfectly viable way of at least getting a general idea of how members feel.  Similarly, if you have a list of email addresses for your constituents, you could simply pose your question in an email and ask people to respond with their “vote.”

The trickiest part is making sure that you understand what the results actually represent.  If your conference is only attended by one type of member, don’t assume that their opinions are the same as other member types.  Likewise, if you only have email addresses for 10 percent of your constituents, be careful with assuming that their opinions reflect those of the other 90 percent.  Even so, these informal types of polling can help you to at least get an idea of how groups feel on the whole.

~

Hopefully these ideas can give your nonprofit organization a place to start when trying to understand reactions to your ideas or strategies.  While these informal ways of gathering data will never be as valuable as going through a formal research process, they can provide at least some guidance as you move forward that you wouldn’t have had otherwise.

And if your issues are complex enough that having true, formal research is necessary to ensure that you are making the best possible decisions for your organization, we’ve got a pretty good recommendation on who you can call…


Who’s Excited for 2016?

Oh man, where did 2015 even go? Sometimes the end of the year makes me anxious because I start thinking about all the things that need to be done between now and December 31st. And then I start thinking about things that I need to do in the upcoming year, like figuring out how to be smarter than robots so that they don’t steal my job and learning a programming language since I’ll probably need it to talk to the robots that I work with in the future. Ugh.

Feeling anxious and feeling excited share many of the same physical features (e.g., sweaty palms, racing heart, etc.),  and research has shown that it is possible to shift feelings of anxiety to feelings of excitement even by doing something as simple as telling yourself you are excited. So, let me put these clammy hands to use and share some of the things that I am excited about for 2016:

  • Technological advancements for data collection. Changes in phone survey sampling are improving the cell phone component of a survey. Also, we have been looking at so many new, cool ways of collecting data, especially qualitative data. Cell phones, which are super annoying for phone surveys, are simultaneously super exciting for qualitative research. I’m excited to try some of these new techniques in 2016.
  • Improvements in the technology that allows us to more easily connect with both clients and people who work remotely. We use this more and more in our office. I’m not sure if in 2016 we will finally have robots with iPads for heads that allow people to Skype their faces into the office, but I can dream.
  • Work trips! I realize that work trips might be the stuff of nightmares at other jobs. But Coronerds understand the importance of finding humor, delicious food, and sometimes a cocktail during a work trip.
  • New research for clients old and new. This year I’ve learned all sorts of interesting facts about deck contractors, the future of museums, teenage relationships, people’s health behaviors, motorcyclists, business patterns in certain states, how arts can transform a city, and many more! I can’t wait to see what projects we work on next year.
  • Retreat. For people who really love data and planning, there is nothing as soothing as getting together as a firm to pore over a year’s worth of data about our own company and draw insights and plans from it.

Alright, I feel a lot better about 2016. Now I’m off to remind myself that these clammy hands also mean that I’m very excited about holiday travel, last minute shopping, and holiday political discussions with the extended family…


How to Choose your own Adventure when it comes to Research

One of the things we’ve been doing at Corona this year that I’ve really enjoyed is resurrecting our book club. I enjoy it because it’s one way to think about the things we are doing from a bigger picture point of view, which is a welcome contrast to the project-specific thinking we are normally doing. One topic that’s come up repeatedly during our book club meetings is the pros and cons of different types of research methodology.

Knowing what kind of research you need to answer a question can be difficult if you have little experience with research. Understanding the different strengths and weaknesses of different methodologies can make the process a little easier and help ensure that you’re getting the most out of your research. Below I discuss some of the key differences between qualitative and quantitative research.

Qualitative Research

Qualitative research usually consists of interviews or focus groups, although other methodologies exist. The main benefit of qualitative research is that it is so open. Instead of constraining people in their responses, qualitative research generally allows for free-flowing, more natural responses. Focus group moderators and interviewers can respond in the moment to what participants are saying to draw out even deeper thinking about a topic. Qualitative research is great for brainstorming or finding key themes and language.

Qualitative data tend to be very rich, and you can explore many different themes within the data.  One nice feature of qualitative research is that you can ask about topics that you have very little information about.  For example, you might have a question in a survey that asks, “Which of the following best describes this organization? X, Y, Z, or none of the above.”  This quantitative question assumes that X, Y, and Z are the three ways that people describe this organization, which requires at least some knowledge.  A qualitative research question for this topic would ask, “How would you describe this organization?”  This is one of the reasons why qualitative research is great for exploratory research.

The primary weakness of qualitative research is that you can’t generate a valid population statistic from it. For example, although you could calculate what percent of focus group participants said that Y was a barrier to working with your organization, you couldn’t generalize that estimate to the larger population. Even if 30% of focus group participants reported that barrier, we don’t know what percent of people overall would report the same barrier; we can only say that it is a potential barrier. However, if you just want to identify the main barriers, that is something qualitative research can do. It’s important to think carefully about whether or not this limitation matters for your research project.

Quantitative Research

The main goals of quantitative research are to estimate population quantities (e.g., 61% of your donors are in Colorado) and test for statistical difference between groups (e.g., donors in Colorado gave more money than those in other states). With quantitative research, you’re often sacrificing depth of understanding for precision.

One of the benefits to quantitative research, aside from being able to estimate population values, is that you can do a lot of interesting statistical analyses. Unlike a small sample of 30 people from focus groups, a large sample of 500 survey respondents allows for all sorts of analyses. You can look for statistical differences between groups, identify key clusters of respondents based on their responses, see if you can predict people’s responses from certain variables, etc.
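As a purely hypothetical illustration (the numbers below are made up, not drawn from an actual project), a test for a difference in average gift size between Colorado donors and out-of-state donors might look something like this in Python:

```python
import numpy as np
from scipy import stats

# Hypothetical gift amounts (in dollars) for two groups of survey respondents
rng = np.random.default_rng(42)
colorado_gifts = rng.normal(loc=120, scale=40, size=300)
out_of_state_gifts = rng.normal(loc=105, scale=40, size=200)

# Welch's t-test for a difference in mean gift size between the two groups
t_stat, p_value = stats.ttest_ind(colorado_gifts, out_of_state_gifts, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # a small p-value suggests a real difference
```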

There usually is not one single best way to answer a question with data, so thinking through your options and the benefits afforded by those options is important. And as always, we’re here to help you make these decisions if the project is complicated.


Does Prison Make People Find Religion?

We recently pondered prison and religious beliefs here at Corona, so we went poking around for data on the subject.  We found a Pew Forum survey of prison chaplains where they estimated the religious affiliation of prison inmates here:  http://www.pewforum.org/2012/03/22/prison-chaplains-perspectives/.  We then compared those proportions to the proportions of religions in the general population, also estimated by the Pew Forum and found here:  http://www.pewforum.org/2015/05/12/americas-changing-religious-landscape/.

What we found was interesting.  On the surface, a look at religious affiliations shows that some religions are strongly overrepresented in prison while others are at least modestly underrepresented.

[Chart: religious affiliation of prison inmates vs. the general population, based on the two Pew Forum studies above]

Some of the figures probably aren’t directly comparable since the two studies used somewhat different classifications, so we recognize, for instance, that “other non-Christian” religions may be overreported in prisons since that study included more subcategories.  However, the results still show some broad patterns of interest.  In particular, we see that Muslims are strongly overrepresented in prison populations compared to their presence in the general population, as are “other non-Christian” populations.  This second disparity is due in large part to significant numbers of pagans and Native American spiritualists in prison.

In contrast, Catholics appear to be pretty adept at staying out of trouble.

Perhaps the most interesting element, though, is the fact that non-religious people (or at least religiously apathetic people) are much more common outside prison than inside prison.  Are non-religious people more likely to stay out of trouble, or do people discover and embrace religion inside prison?

There are a number of potential explanations.  An obvious theory is that people enduring trials in their lives may embrace religion, particularly Islam or Protestantism or other religions that are overrepresented behind bars.  Or perhaps religion is embraced nominally due to benefits or accommodations that can be extracted from the prison system.  It could also be that the data collection method – surveys of chaplains – is biased because chaplains disproportionately see or remember religious inmates.

Regardless of the reason, it’s an interesting phenomenon to consider.  Why do we see more religion behind bars than outside them?


Weight on What Matters

In May, Kate and I went to AAPOR’s 70th Annual Conference in Hollywood, FL.  Kate did a more timely job of summarizing our learnings, but now that things have had some time to settle, I thought I’d discuss an issue that came up in several presentations, most memorably in Andy Peytchev’s presentation on Weighting Adjustments Using Substantive Survey Variables.  The issue is deciding which variables to use for weighting.  (And if I butcher the argument here, the errors are my own.)

Let’s take it from the top.  If your survey sample looks exactly like the population from which it was drawn, everything is peachy and there is no need for weighting.

Most of the time, however, survey samples don’t look exactly like the populations from which they were drawn.  A major reason for this is non-response bias – which just means that some types of people are less likely to take the survey than other types of people.  To correct for this, and make sure that the survey results reflect the attitudes and beliefs of the actual population and not just the responding sample, we weight the survey responses up or down according to whether they are from a group that is over- or under-represented among the respondents.

So, it seems like the way to choose weighting variables would be to look for variables where the survey sample differs from the population, right?  Not so fast.  First we have to think about what weighting “costs” in terms of the margin of error for your survey.  Weights, in this situation, measure the extent of bias in the sample, and the larger the weights, the more the margin of error expands: the precision of your estimates declines as your weighting effect increases.

What does that mean for selecting weighting variables?  It means you don’t want to do any unnecessary weighting.  Recall, the purpose of weighting is to ensure that survey results reflect the views of the population.  Let’s say the purpose of your survey is to measure preferences for dogs vs. cats in your population.  Before doing any weighting you look to see whether the proportion of dog lovers varies by age or gender or marital status or educational attainment (to keep it simple, let’s pretend you don’t have any complicated response biases, like all of the men in your survey are under 45).  If you find that marital status is correlated with preferences for dogs vs. cats, but age and gender and educational attainment aren’t, then you may want to weight your data by marital status, but not the other variables.

This makes sense, right?  If men and women don’t differ in their opinions on this topic, then it doesn’t matter whether you have a disproportionate number of women in your sample.  If you weight on gender when you don’t need to, you unnecessarily expand your margin of error for the survey without improving the accuracy of your results.  On the other hand, if married people have different preferences than single people, and your sample is skewed toward married people, by weighting on marital status you increase your margin of error, but compensate by improving the accuracy of your results.
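To make that tradeoff concrete, here is a minimal sketch, with made-up proportions, of weighting the dog-vs-cat sample by marital status and then estimating what the weighting “costs” using the Kish design effect:

```python
import numpy as np

# Made-up benchmarks: what the population looks like vs. what the respondents look like
population_share = {"married": 0.55, "single": 0.45}
sample_share     = {"married": 0.70, "single": 0.30}

# Post-stratification weight for each group = population share / sample share
weight = {g: population_share[g] / sample_share[g] for g in population_share}

# Per-respondent weights for a hypothetical sample of 700 married and 300 single respondents
w = np.array([weight["married"]] * 700 + [weight["single"]] * 300)

# Kish design effect: how much the weighting inflates variance (and thus the margin of error)
deff = len(w) * np.sum(w ** 2) / np.sum(w) ** 2
effective_n = len(w) / deff
print(f"Design effect: {deff:.2f}; effective sample size: {effective_n:.0f} of {len(w)}")
```

In this made-up case, weighting on marital status shrinks the effective sample from 1,000 to roughly 900 respondents, which is a modest price if marital status really does correlate with the dog-vs-cat preference and an unnecessary one if it doesn’t.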

The bottom line:  choose weighting variables that are correlated with your variables of interest as well as your non-response bias.

And that’s one to grow on!  (This blog felt reminiscent of an 80’s PSA, right?)


What your response rate says about engagement

When we think about tracking customer satisfaction via surveys, the analysis is almost always on the survey responses themselves: how many said they were satisfied, what is driving satisfaction, and so on. (See a related post on 4 ways to report customer satisfaction.)

That’s not shocking (and of course we should look at the results of the questions we ask), but there is another layer of data that can be analyzed: the data about the survey itself.

First and foremost is response rate.  (Quick review: response rate is the proportion of people who respond to an invitation to take a survey; read more here.) Response rate itself is important to reduce non-response bias (i.e., to reduce our concern that the people who do not respond are potentially very different from those who do respond), but it’s also a proxy for engagement. The more engaged your customers are with your organization, the more likely they will be to participate in your research. Therefore, tracking response rate as a separate metric as part of your overall customer dashboard can provide more depth in understanding your current relationship with customers (or citizens or members or…).

So, you’re probably now asking, “What response rate correlates to high engagement?” Short answer – it depends. Industry, topic matter, type of respondent, sampling, etc. can all make an impact on response rates. So while I’ll offer some general rules of thumb, take them with a grain of salt:

  • Less than 10%: Low engagement
  • 10-20%: Average engagement
  • 20-50%: High engagement
  • 50%+: Rock-star territory

Yes, we’ve had over 50% response to our clients’ surveys.
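If you want to fold response rate into a customer dashboard, the arithmetic is simple enough to script. Here is a minimal sketch with hypothetical counts, applying the rules of thumb above:

```python
def engagement_band(response_rate):
    """Map a response rate (0-1) to the rough engagement bands described above."""
    if response_rate >= 0.50:
        return "Rock-star territory"
    if response_rate >= 0.20:
        return "High engagement"
    if response_rate >= 0.10:
        return "Average engagement"
    return "Low engagement"

completes, invitations = 430, 2000         # hypothetical survey counts
rate = completes / invitations             # response rate = completes / invitations
print(f"{rate:.1%} -> {engagement_band(rate)}")   # 21.5% -> High engagement
```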

The important caveat here is to be wary of survey fatigue. If you are over-surveying your customers, then response rates will decrease over time as people tire of taking surveys (shocking, right?). What counts as surveying too much will vary depending on the length of the survey and the subject matter, but surveying monthly (or more frequently) will almost certainly cause fatigue, while surveying yearly (or less frequently) will probably not. One to 12 months? It’s a gray area. (Feel free to contact us for an opinion on your specific case.)

Another potential source of survey metadata that you could use to assess engagement is the depth of response to open-ended questions. The easiest way to measure this is to use word count as a proxy – the more respondents write, the more they presumably care about telling you something.

For example, we did a large voter study for a state agency, and when respondents were asked about their priorities on the given service topic, we received paragraph-length responses back. This, combined with other results, showed just how engaged they were with the topic (though not necessarily the agency in this case) and how much thought they had given it. Useful, as well as somewhat surprising, information for our client.
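If your open-ended responses are sitting in a spreadsheet export, the word-count proxy described above takes only a few lines to compute. A minimal sketch with made-up responses:

```python
# Made-up open-ended responses; in practice these would come from your survey export
responses = [
    "Great venue, parking was easy.",
    "I mostly came because my friends had an extra ticket, but the band was fantastic.",
    "",
]

word_counts = [len(r.split()) for r in responses]
average_words = sum(word_counts) / len(word_counts)
print(f"Average words per open-ended response: {average_words:.1f}")
```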

The next time you’re looking at survey data, be sure to look at more than just the responses themselves.

 


Graphs: An effective tool, but use them carefully

Ahh…the graph.  Where would the business world be without them?  While some of us are just as content looking through a giant spreadsheet full of numbers, graphs can help to illustrate the story more effectively for number geeks and math haters alike.  However, while graphs can be a great tool, there are certainly times when they can make your data even more difficult to interpret or (intentionally or not) even misleading.  Here are a few things to think about when creating graphs for your data.

Line charts should represent something linear!

Line charts are a very common way of representing data.  However, in most cases, line charts should only be used if there is some sort of linear relationship in the categories displayed on the horizontal axis of your chart.  For example, if you want to see how responses vary by age of respondents, the year the data was collected, or even satisfaction on a numeric scale, a line chart can be a great way of representing this data.

DO: Customer Satisfaction by Year

However, if you are instead dealing with categorical data, using a line chart suggests a relationship between the categories that may not be true.   In the example below, a line chart implies that Colorado is related to Nebraska in the same way that Nebraska is related to Wyoming.  Clearly this isn’t true, so in this case, a bar chart would likely be a more effective way of presenting the data.

DON'T: Customer Satisfaction by State

Pie charts (and split-bar charts) should add to 100%!

Everyone loves pie charts.  Not only are they the best type of graph to use if you want to represent your favorite food (or video game character), they are an excellent way of presenting the distribution of data in which every data point belongs to one category.

DO: Gender

However, pie charts can cause all sorts of problems in interpretation if the categories are not mutually exclusive (that is, if a single data point can belong to multiple categories).  In the example below, the pie chart implies that the chart represents the total population in terms of pet ownership, but some people may have multiple types of animals.  Again, in cases such as these, a bar chart would be a much clearer way of presenting this data.

DON'T: Pets at Home

Be careful with the scales you use!

While bar charts can be a good option for a wide variety of data, the scale you use for your charts can cause confusion in interpretation if you aren’t careful.  In particular, if you let your graphing software choose your scale for you, you may end up with results that tell a very different story than reality!  For example, let’s say you wanted to compare customer satisfaction across customer segments.  If you pulled out trusty old Excel and graphed this data with no modifications, here’s the output you would get:

DON'T: Satisfaction with Automatic Scaling

Even with the percentages listed on the graph, it looks like there is a HUGE difference in the satisfaction of Segments A and B compared to the others.  However, the difference in satisfaction between Segments A and E is really only 13 percentage points.  Here’s the same data, but using a fixed scale from 0 to 100 percent:

DO: Satisfaction with Fixed Scaling

By ensuring that the scale represents the entire range of possible responses, we can more accurately convey the true differences between segments.
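If you build charts in code rather than in Excel, the same principle applies: set the axis limits yourself rather than trusting the defaults. Here is a minimal matplotlib sketch using made-up satisfaction figures similar to the example above:

```python
import matplotlib.pyplot as plt

# Made-up satisfaction percentages by customer segment
segments = ["A", "B", "C", "D", "E"]
satisfaction = [95, 93, 86, 84, 82]

fig, ax = plt.subplots()
ax.bar(segments, satisfaction)
ax.set_ylim(0, 100)                  # fix the scale to the full 0-100% range
ax.set_ylabel("Percent satisfied")
ax.set_title("Satisfaction by customer segment")
plt.show()
```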

Other things to consider

These are, of course, just a few things to consider when presenting your data.  We haven’t even touched on other topics like overall charting philosophies, how newer visualization techniques can result in pretty, but dysfunctional graphs, or the nuances of more advanced types of visualizations, such as cartography.  However, by keeping in mind what your data is meant to represent and ensuring that your approach avoids some of these pitfalls, you’ll be on your way to more meaningful and accurate graphs.


Asking the “right” people is half the challenge

We’ve been blogging a lot lately about potential problem areas for research, evaluation, and strategy. In thinking about research specifically, making sure you can trust results often boils down to these three points:

  1. Ask the right questions;
  2. Of the right people; and
  3. Analyze the data correctly

As Kevin pointed out in a blog nearly a year ago, #2 is often the crux.  When I say “of the right people,” I mean making sure that the people you include in your research represent the population you want to study.  Deceptively simple, but there are many examples of research gone awry due to poor sampling.

So, how do you find the right people?

Ideally, you have access to a source of contacts (e.g., all mailing addresses for a geography of interest, email addresses for all members, etc.) and can then randomly sample from that source (the “random” part being crucial, as it is what allows you later to interpret the results for the overall, larger population).  However, those sources don’t always exist, and a purely random sample isn’t always possible.  Regardless, here are three steps you can take to ensure a good quality sample:

  1. Don’t let just anyone participate in the research.  As tempting as it is to just email out a link or post a survey on Facebook, you can’t be sure who is actually taking the survey (or how many times they took it).  While these open forms of outreach can provide some useful feedback, they cannot be used to say “my audience overall thinks X.”  The fix: Limit access through custom links, personalized invites, and/or passwords.
  2. Respondents should represent your audience. This may sound obvious, but having your respondents truly match your overall audience (e.g., customers, members, etc.) can get tricky.  For example, some groups may be more likely to respond to a survey (e.g., females and older persons are often more likely to take a survey, leaving young males underrepresented). Similarly, very satisfied or dissatisfied customers may be more likely to voice an opinion than those who are indifferent or at least more passive. The fix: Use proper incentives up front to motivate all potential respondents, screen respondents to make sure they are who you think they are, and statistically weight the results on the back end to help overcome response bias.
  3. Ensure you have enough coverage.  Coverage refers to the proportion of everyone in your population or audience that you can reach.  For example, if you have contact information for 50% of your customers, then your coverage would only be 50%.  This may or may not be a big deal – it will depend on whether those you can reach are different from those you cannot.  A very real-world example of this is telephone surveys.  The coverage of the general population via landline phones is declining rapidly and is now nearing only half; more importantly, the type of person you reach via a landline vs. a cell phone survey is very different.  The fix: The higher the coverage, the better.  When you can only reach a small proportion via one mode of research, consider using multiple modes (e.g., online and mail) or look for a better source of contacts.  One general rule we often use is that if we have at least 80% coverage of a population, we’re probably ok, but always ask yourself, “Who would I be missing?”  (A quick sketch of a random draw and coverage check follows this list.)
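As a quick sketch of the random sampling and coverage ideas above (with a hypothetical contact list and made-up numbers), drawing a random subset of invitees and checking coverage might look like this:

```python
import random

population_size = 12_000                 # everyone you want your survey to represent
# Hypothetical contact list: the portion of the population you can actually reach
contact_list = [f"customer_{i}@example.com" for i in range(10_000)]

coverage = len(contact_list) / population_size
print(f"Coverage: {coverage:.0%}")       # 83%, which clears the ~80% rule of thumb above

random.seed(7)                           # reproducible draw
invitees = random.sample(contact_list, k=1_500)   # only this random subset gets an invitation
```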

Sometimes tradeoffs have to be made, and that can be ok when the alternative isn’t feasible.  However, at least being aware of tradeoffs is helpful and can be informative when interpreting results later.  Books have been written on survey sampling, but these initial steps will have you headed down the correct path.

Have questions? Please contact us.  We would be happy to help you reach the “right” people for your research.