RADIANCE BLOG

Category: Surveying Surveys

Research on Research: Boosting Online Survey Response Rates

David Kennedy and Matt Herndon, both Principals here at Corona, will be presenting a webinar for the Market Research Association (MRA) on August 24th.

The topic is how to boost response rates with online surveys. Specifically, they will be presenting research Corona has done to learn how minor changes to such things as survey invites can make an impact on response rates. For instance, who the survey is “from”, the format, and salutation can all make a difference.

Click here to register. You do need to be a member to view the webinar. (We hope to post it, or at least a summary, here on our blog afterwards.)

Even if you can’t make it, rest assured that, if you’re a client, these lessons are already being applied to your research!


Do you have kids? Wait – let me restate that.

Karla Raines and I had dinner last week with another couple who share our background and interest in social research.  We talked about the challenge of understanding other people’s decisions when you don’t understand their background, and about the biases we carry without even realizing it.

It brought me back to the topic of how we design and ask questions on surveys, and my favorite example of unintentional background bias on the part of the designer.

A common question, both in research and in social conversations, is the ubiquitous, “Do you have kids?”  It’s an easy question to answer, right?  If you ask Ward and June Cleaver, they’ll immediately answer, “We have two, Wally and Beaver”.  (June might go with the more formal ‘Theodore’, but you get the point.)

When we ask the question in a research context, we’re generally asking it for a specific reason.  Children often have a major impact on how people behave, and we’re usually wondering if there’s a correlation on a particular issue.

But ‘do you have kids’ is a question that may capture much more than the classic Wally and Beaver household.  If we ask that question, the Cleaver family will answer ‘yes’, but so will a 75-year-old whose two kids are 50 years old and grandparents themselves.  So ‘do you have kids’ isn’t the question we want to ask in most contexts.

What if we expanded the question to ‘do you have children under 18’?  It gets a bit tricky here if we put ourselves in the minds of respondents, and this is where our unintentional background bias may come into play.  Ward and June will still answer yes, but what about a divorced parent who doesn’t have custody?  He or she may accurately answer yes, but there’s not a child living in their home.  Are we capturing the information that we think we’re capturing?

And what about a person who’s living with a boyfriend and the boyfriend’s two children?  Or the person who has taken a foster child into the home?  Or the grandparent who is raising a grandchild while the parents are serving overseas?  Or the couple whose adult child is temporarily back home with her own kids in tow?

If we’re really trying to figure out how children impact decisions, we need to observe and recognize the incredible diversity of family situations in the modern world, and how that fits into our research goal.  Are we concerned about whether the survey respondent has given birth to a child?  If they’re a formal guardian of a child?  If they’re living in a household that contains children, regardless of the relationship?

The proper question wording will depend on the research goals, of course.  We are often assessing the impact of children within a household when we ask these questions, so we find ourselves simply asking, “How many children under the age of 18 are living in your home?”, perhaps with a follow-up about the relationship where necessary.  But it’s easy to be blinded by our own life experiences when designing research, and the results can lead to error in our conclusions.

So the next time you’re mingling at a party, we suggest not asking “Do you have kids”, and offer that you should instead ask, “How many children under the age of 18 are living in your home?”  It’s a great conversation starter and will get you much better data about the person you’re chatting with.


There is more to a quality survey than margin of error

One of the areas in which Corona excels is helping our clients who aren’t “research experts” to understand how to do research in a way that will yield high-quality, reliable results.  One of the questions we are frequently asked is how many completed surveys are necessary to ensure a “good survey.”  While the number of surveys definitely has an impact on data quality, the real answer is that there are many things beyond sample size that you have to keep in mind in order to ensure your results are reliable.  Here is an overview of four common types of errors you can make in survey research.

Sampling Error

Sampling error is the one type of error that can be easily summarized with a number.  Because of this, many tend to think of it as the main way of reporting a survey’s quality.  Sampling error refers to the “margin of error” of a survey’s results, caused by the fact that you didn’t survey everyone in the population of interest – only a sample.  The “error” occurs when you draw a conclusion based on a survey result that might have been different if you’d conducted a larger survey to gather a wider variety of opinions.  As an example, imagine that you wanted to survey people at a concert about their reasons for attending.  In an extreme case, you could collect 10 random responses to your question and draw conclusions about the whole audience from that, but chances are the next 10 people you hear from might have very different opinions.  As you collect more and more surveys, this becomes less of an issue.

It’s important to keep in mind, however, that the calculations for sampling error assume that 1) your sample was random (see coverage error below) and 2) everyone you chose for the survey ended up responding to the survey (see non-response error below) – neither of which is true a lot of the time.  Because of that, any margin of error calculation is likely telling you only part of the story about a survey’s quality.

Still, it is certainly true that (when obtained properly) larger sample sizes tend to produce more reliable results.  Generally speaking, a sample of 100 responses will give you a general feel for your population’s opinions (with margins of error around ±10 percent), 400 responses will give you reasonably reliable results (with margins of error around ±5 percent), and larger samples will allow you to examine the opinions of smaller segments of your audience.  Obtaining an adequate sample size will therefore always be a key consideration in survey research.
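To make those numbers concrete, here is a minimal sketch of the standard margin-of-error calculation at 95 percent confidence, assuming a simple random sample and a worst-case 50/50 split of opinion (the function name and sample sizes below are just for illustration):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a proportion from a simple random sample."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (100, 400, 1000):
    print(f"n = {n:>4}:  +/- {margin_of_error(n) * 100:.1f} percentage points")

# n =  100:  +/- 9.8 percentage points  (a "general feel")
# n =  400:  +/- 4.9 percentage points  (reasonably reliable)
# n = 1000:  +/- 3.1 percentage points
```

Keep in mind that this simple formula captures sampling error only; it assumes away the coverage and non-response issues discussed below.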

Measurement Error

Measurement error is one of the most difficult types of errors to identify and is probably the most common mistake made by amateur survey researchers.  Measurement error occurs when respondents don’t understand how to properly respond to a question due to the way it’s worded or the answer options you’ve provided.  For example, if you were to ask concert goers how long it took them to get to the venue that night, you might get a set of answers that look reasonable, but what if some picked up their friends on the way?  What if some went to dinner before the concert?  Similarly, if you asked whether they came to the concert because of the band or because they like the venue, you might conclude that a majority came for the band when the real reason was that many just wanted to hang out with their friends.

So how do you protect against this?  Whenever possible, it’s important to test your survey – even if it’s just with a friend or coworker who is not involved in the research.  Have them take the survey, then have them talk you through how they interpreted each question and how they decided which answer fit best.  Then, if necessary, make changes to your survey in any areas that were unclear.

Coverage Error

Once you have a well-designed set of survey questions, the next step is to determine how you are going to get people to take your survey.  It might be tempting to just put a link on the concert venue’s Facebook and Twitter pages to ask people their opinions, but the results of such a survey likely wouldn’t reflect the opinions of all concert goers because it’s unlikely that all of them use social media.  If you were to do a survey in this fashion, you might find that attendees appeared to be very tech savvy and open to new ideas and approaches, when in reality those just happen to be characteristics of people who use social media.  The results of your survey might be skewed because you didn’t “cover” everyone in the audience (not to mention the possibility of some people taking the survey who didn’t actually attend the concert).

In order to ensure your survey is as high quality as possible, look for ways to ensure that everyone you are trying to represent is included in your sampling frame (even if you randomly choose a subset to actually receive an invitation).  If it’s truly not possible to do so, be sure to at least keep the potential bias of your respondent pool in mind as you interpret the results.

Non-response Error

As the final type of common error, non-response error is caused by the fact that, no matter how well you have designed and implemented your survey, there are a lot of people out there who simply aren’t going to respond.  Similar to coverage error discussed previously, non-response can cause you to draw conclusions from your survey’s results that may not be reflective of the entire population you are studying.  For example, many concert goers wouldn’t want to be bothered to take a survey, so the results you get would likely only be representative of a type of person who either 1) didn’t value their time as highly or 2) was particularly interested in the survey topic.

Unfortunately, non-response error is extremely difficult to eliminate entirely and will be a concern with just about any survey.  The most common approach is to try to boost your response rate as much as possible through a combination of frequent reminders, incentives, and messaging that appeals to respondents’ desire to be helpful, but even the best surveys typically achieve response rates of only 30-40 percent.  If budget is no issue, perhaps the best solution is to conduct follow-up research with those who didn’t originally respond, but even then, there will always be some who simply refuse to participate.

~

When it comes down to it, there is no such thing as a perfect survey, so any study will necessarily need to balance data quality with your timeline and budget.  Many of Corona’s internal debates involve discussing ways to reduce these errors as effectively as possible for our clients, and we are always happy to discuss various tradeoffs in approaches and how they will impact data quality.  Regardless, we hope that the next time you see a headline about a survey with a margin of error used to represent its quality, you will keep in mind that there is a lot more to determining the quality of a survey than that one number.


Happy or not

A key challenge for the research industry – and any company seeking feedback from its customers – is gaining participation.

There have been books written on how to reduce non-response (i.e. increase participation), and tactics often include providing incentives, additional touch points, and finely crafted messaging. All good and necessary.

But one trend we’re seeing more of is the minimalist “survey.” One question, maybe two, to gather point-in-time feedback. You see this in rating services (e.g., Square receipts, your Uber driver rating, checkout kiosks), in simple email surveys where you respond by clicking a link in the email, and in text-message feedback, to name a few.

Recently, travelling through Iceland and Norway, I came across this stand asking “happy or not” at various points in the airport (e.g., check-in, bathrooms, and luggage claim). Incredibly simple and easy to do. You don’t even have to stop – just hit the button as you walk by.

A great idea, sure, but did people use it? In short, yes. While I don’t know the actual numbers, based on observation alone (what else does a market researcher do while hanging out in airports?), I did see several people interact with it as they went by. Maybe it’s the novelty of it and in time people will come to ignore it, but it’s a great example of collecting in-the-moment feedback, quickly and effortlessly.

Now, asking one question and getting a checkbox response will not tell you as much as a 10-minute survey will, but if it gets people to actually participate, it is a solid step in the right direction. Nothing says this has to be your only data, either. I would assume that, in addition to the score received, there is also a time and date stamp associated with each response (and excessive button pushes at one time should probably be “cleaned” from the data). Taken together, these make it easier to investigate problems (maybe people are only unsatisfied during the morning rush at the airport?), and if necessary, additional research can always be conducted to dig further into issues.
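As a purely hypothetical sketch of what that cleaning step might look like, suppose the kiosk logs one row per button press with a terminal ID, a timestamp, and the rating pressed (all column names and values below are invented for illustration):

```python
import pandas as pd

# Hypothetical kiosk log: one row per button press
presses = pd.DataFrame({
    "terminal": ["gate_a", "gate_a", "gate_a", "bathroom_1"],
    "timestamp": pd.to_datetime([
        "2016-06-01 08:01:05", "2016-06-01 08:01:06",
        "2016-06-01 08:01:07", "2016-06-01 08:15:30",
    ]),
    "rating": [1, 1, 1, 4],  # e.g., 1 = very unhappy ... 4 = very happy (assumed scale)
})

presses = presses.sort_values(["terminal", "timestamp"])

# Flag presses that follow another press at the same terminal within 2 seconds;
# a burst like that is probably one person mashing the button.
gap = presses.groupby("terminal")["timestamp"].diff()
presses["suspect_burst"] = gap.notna() & (gap < pd.Timedelta(seconds=2))

cleaned = presses[~presses["suspect_burst"]]
print(cleaned)
```

From a cleaned log like this, it is a short step to summaries by hour or by location – exactly the “morning rush” question raised above.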

What other examples, successful or not, are you seeing organizations use to collect feedback?


Turning Passion into Actionable Data

Nonprofits are among my favorite clients that we work with here at Corona for a variety of reasons, but one of the things that I love most is the passion that surrounds them.  That passion shines through most in our work when we do research with a nonprofit’s internal stakeholders, which could include donors, board members, volunteers, staff, and program participants.  These groups of people, who are already invested in the organization, are passionate about helping to improve it, which is good news when conducting research: it often makes them more likely to participate and increases response rates.

Prior to joining the Corona team, I worked in the volunteer department of a local animal shelter.  As a data nerd even then, I wanted to know more about who our volunteers were, and how they felt about the volunteer program.  I put together an informal survey, and while I still dream about what nuggets could have been uncovered if we had gone through a more formal Corona-style process, the data we uncovered was still valuable in helping us determine what we were doing well and what we needed to improve on.

That’s just one example, but the possibilities are endless.  Maybe you want to understand what motivated your donors to contribute to your cause, how likely they are to continue donating in the future, and what would motivate them to donate more.  Perhaps you want to evaluate the effectiveness of your programs.  Or, maybe you want to know how satisfied employees are with working at your organization and brainstorm ideas on how to decrease stress and create a better workplace.

While you want to be careful about being too internally focused and ignoring the environment in which your nonprofit operates, there is huge value in leveraging passion by looking internally at your stakeholders to help move your organization forward.



Informal Research for Nonprofit Organizations

While Corona serves all three sectors (private, public, and nonprofit) in our work, we have always had a soft spot for our nonprofit clients.  No other type of organization is asked to do more with less, so we love working with nonprofits to help them refine their strategies to be both more effective at fulfilling their missions and more financially stable at the same time.

However, while we are thrilled for the opportunities to work with dozens of nonprofits every year, we know that there are hundreds of other organizations that we don’t work with, many of which simply don’t have the resources to devote to a formal marketing research effort. I’m a huge fan of the Discovery Channel show MythBusters, so I’ll share one of my favorite quotes:

“The only difference between screwing around and science is writing it down.”
(Image from Tested courtesy of DCL: http://www.tested.com/art/makers/557288-origin-only-difference-between-screwing-around-and-science-writing-it-down/)

While few would argue that the results found in an episode of MythBusters would qualify as academically rigorous research, I think most would agree that trying a few things out and seeing what happens is at least better than just trusting your gut instinct alone.  Likewise, here are a few ideas for ways that nonprofits can gather at least some basic information to help guide their strategies through informal “market research.”

Informal Interviews

One-on-one interviews are one of the easiest ways to gather feedback from a wide variety of individuals.  Formal interview research involves a third party randomly recruiting participants from the entire universe of people you are trying to understand, but simply talking to people one-on-one about the issues or strategies you are considering can be very insightful.  Here are a few pointers on getting the most out of informal interviews:

  • Dedicate time for the interview. It may seem easy to just chat with someone informally at dinner or at an event, but the multitude of distractions will reduce the value you get out of the conversation.  Find a time that both parties can really focus on the discussion, and you’ll get much better results.
  • Write your questions down in advance. It’s easy to go down a rabbit hole when having a conversation about something you are passionate about, so be sure to think through the questions you need to answer so that you can keep the conversation on track.
  • Record the conversation (or at least take notes). Take the MythBusters’ advice, and document the conversation.  If you’ve talked to a dozen people about your idea, it will be impossible for you to remember it all.  By having documentation of the conversations, you can look back later and have a better understanding of what your interviewees said.

Informal focus groups

Similar to interviews, in an ideal world focus groups would be conducted by a neutral third party with an experienced moderator who can effectively guide the group discussion to meet your goals.  However, as with interviews, you can still get a lot of value out of just sitting down with a group and talking through the issues.  In particular, if you have an event or conference where many people are together already, grabbing a few of them to talk through your ideas can be very informative.  Our suggestions for this type of “research” are similar to those for informal interviews, with slight differences in their implications:

  • Dedicate time for the discussion. As mentioned before, it may be tempting to just say “We’ll talk about this over dinner” or “Maybe if we have time at the end of the day we can get together.”  You’ll get far better results if everyone can plan for the conversation in advance and participate without distractions.
  • Write your questions down in advance. Even more so than for interviews, having a formal plan about what questions you want to ask is imperative.  Group discussions have a tendency of taking on a life of their own, so having a plan can help you to guide the discussion back on topic.
  • Document the results. Again, you may think you can remember everything that was said during a conversation, but a few months down the road, you will be very thankful that you took the time to either record the conversation or take notes about what was said.

Informal Surveys

Surveys are, perhaps, the most difficult of these ideas to implement on an informal basis, but they can nevertheless be very useful.  If you just need some guidance on how members of an organization feel about a topic, asking for a show of hands at a conference is a perfectly viable way of getting at least a general idea of how members feel.  Similarly, if you have a list of email addresses for your constituents, you could simply pose your question in an email and ask people to respond with their “vote.”

The trickiest part is making sure that you understand what the results actually represent.  If your conference is only attended by one type of member, don’t assume that their opinions are the same as other member types.  Likewise, if you only have email addresses for 10 percent of your constituents, be careful with assuming that their opinions reflect those of the other 90 percent.  Even so, these informal types of polling can help you to at least get an idea of how groups feel on the whole.

~

Hopefully these ideas can give your nonprofit organization a place to start when trying to understand reactions to your ideas or strategies.  While these informal ways of gathering data will never be as valuable as going through a formal research process, they can provide at least some guidance as you move forward that you wouldn’t have had otherwise.

And if your issues are complex enough that having true, formal research is necessary to ensure that you are making the best possible decisions for your organization, we’ve got a pretty good recommendation on who you can call…


Who’s Excited for 2016?

Oh man, where did 2015 even go? Sometimes the end of the year makes me anxious because I start thinking about all the things that need to be done between now and December 31st. And then I start thinking about things that I need to do in the upcoming year, like figuring out how to be smarter than robots so that they don’t steal my job and learning a programming language since I’ll probably need it to talk to the robots that I work with in the future. Ugh.

Feeling anxious and feeling excited share many of the same physical features (e.g., sweaty palms, racing heart, etc.),  and research has shown that it is possible to shift feelings of anxiety to feelings of excitement even by doing something as simple as telling yourself you are excited. So, let me put these clammy hands to use and share some of the things that I am excited about for 2016:

  • Technological advancements for data collection. Changes in phone survey sampling are improving the cell phone component of a survey. Also, we have been looking at so many new, cool ways of collecting data, especially qualitative data. Cell phones, which are super annoying for phone surveys, are simultaneously super exciting for qualitative research. I’m excited to try some of these new techniques in 2016.
  • Improvements in the technology that allows us to more easily connect with both clients and people who work remotely. We use this more and more in our office. I’m not sure if in 2016 we will finally have robots with iPads for heads that allow people to Skype their faces into the office, but I can dream.
  • Work trips! I realize that work trips might be the stuff of nightmares at other jobs. But Coronerds understand the importance of finding humor, delicious food, and sometimes a cocktail during a work trip.
  • New research for clients old and new. This year I’ve learned all sorts of interesting facts about deck contractors, the future of museums, teenage relationships, people’s health behaviors, motorcyclists, business patterns in certain states, how arts can transform a city, and many more! I can’t wait to see what projects we work on next year.
  • Retreat. For people who really love data and planning, there is nothing as soothing as getting together as a firm to pore over a year’s worth of data about our own company and draw insights and plans from it.

Alright, I feel a lot better about 2016. Now I’m off to remind myself that these clammy hands also mean that I’m very excited about holiday travel, last minute shopping, and holiday political discussions with the extended family…


Predicting The Future

In one form or another, much of market research is aimed at predicting the future.  Whether you are considering opening a new line of business, tweaking your advertisements, or just trying to serve your constituents better, the key purpose is almost always some form of “If we do X, then what will happen?”  However, when crafting research questions, it is important to keep in mind that not all questions are created equal when it comes to predicting future behavior.  Respondents tend to answer surveys rationally, and anyone who has been involved in marketing for long will agree that consumers are anything but rational.

[Image: Henry Ford quote]

The key issue to consider when designing research questions to predict future behavior is simply whether human beings are able to accurately answer your question.  Here are a few scenarios to consider:

Scenario 1: Media Choice

Let’s say you wanted to know what types of media would be most effective at reaching your target audience.  It might seem intuitive to simply ask a question such as:

[Image: advertising question]

There’s nothing necessarily wrong with that question, and in fact we at Corona use similar questions here and there when we want to at least get a feel for where consumers might look for information.  However, people are notoriously awful at predicting how they will react to something in the future.  Many will likely name the usual media suspects – TV and radio – without thinking through what they actually pay close attention to, and they may not consider more unique advertising media such as social media, outdoor advertising, direct mail, and many others.  Instead, it might be more reliable to ask what they can recall from the past and assume that their past behavior will likely reflect their future behavior:

[Image: advertising question 2]

In either case, any time you can triangulate your survey findings with other data (e.g., past ad performance, media consumption studies, etc.), your conclusions will be stronger.

Scenario 2: Likelihood of Purchase

Let’s instead say that you are launching a new product and are trying to forecast how many people will purchase your product.  The most straightforward way of asking that question might simply be:

[Image: purchasing question]

The challenge with that question is that it simplifies an extremely complex purchasing decision into an expanded “yes or no” response.  They may find the product attractive, but what will it cost?  Where will it be sold?  What will the economy be like once the product is available?  Are people already familiar with the product, or will they need to learn more about it to make a decision?  Will other competitive products be available at the same time?  Add to those issues the fact that people almost always overstate how likely they are to purchase something, and you get very tenuous results – almost always a best-case scenario.  A respondent may make an objective evaluation of their likelihood to purchase when taking a survey, but the final decision is an emotional one that can be influenced by all of these factors and more.

Instead, surveys are more effective at helping you understand the product attributes that will help drive purchase.  For example, you could inform messaging about the product based on reactions to a series of statements about how valuable its attributes seem to consumers.  If it is imperative to forecast future purchase behavior, a different approach, such as A/B testing to compare how options perform in the real world, test markets before your full launch, or other advanced analytical techniques, may be more effective.

Scenario 3: Optimal Price Point

As a final scenario to consider, let’s say you are launching a new product and want to know how to price it so that you can both drive sales and maximize your revenue.  You may initially think that simply asking the question outright will be most effective:

[Image: optimal price point question]

This question (and a variety of other similar questions you could use) again simplifies a very complex purchasing decision into a straightforward answer.  What will the respondent’s financial situation be at the time of purchase?  Are there sales on competing products?  Will they be attracted by the packaging?  Will the product be sold in a small, boutique shop or a large superstore?  All of these can have significant impacts on what people are willing to pay that would not be reflected in a survey response.

There’s nothing necessarily wrong with addressing price in a survey, but a typical survey will be limited in its ability to give you accurate information about optimal prices.  You can use a straightforward survey’s findings to get a feel for reactions to prices, but a final decision on pricing should be based on more data points than the survey results alone.

That said, there is a type of survey (called a conjoint survey) that is specifically designed to help determine optimal prices by asking consumers to make choices among a variety of combinations of product or service attributes.  It’s a considerably more complex process, but it is by far the most reliable way of understanding the value that consumers place on various attributes and can help to accurately inform your pricing strategy.  As discussed for Scenario 2, test markets can also be a valuable option for understanding how consumers will react to various prices.

~

Despite these challenges with predicting future behavior, surveys remain one of the most valuable tools for informing product/service development and marketing.  Surveys are highly effective at understanding current behaviors, measuring awareness, understanding pain points that a product or service could address, understanding attitudes and perceptions of a product or service, and much more.  However, keeping in mind the types of information that respondents are able to accurately provide will ensure that the survey’s results are as accurate and actionable as possible when developing your future strategy.



How to Choose your own Adventure when it comes to Research

One of the things we’ve been doing at Corona this year that I’ve really enjoyed is resurrecting our book club. I enjoy it because it’s one way to think about the things we are doing from a bigger picture point of view, which is a welcome contrast to the project-specific thinking we are normally doing. One topic that’s come up repeatedly during our book club meetings is the pros and cons of different types of research methodology.

Knowing what kind of research you need to answer a question can be difficult if you have little experience with research. Understanding the different strengths and weaknesses of different methodologies can make the process a little easier and help ensure that you’re getting the most out of your research. Below I discuss some of the key differences between qualitative and quantitative research.

Qualitative Research

Qualitative research usually consists of interviews or focus groups, although other methodologies exist. The main benefit of qualitative research is that it is so open. Instead of constraining people in their responses, qualitative research generally allows for free-flowing, more natural responses. Focus group moderators and interviewers can respond in the moment to what participants are saying to draw out even deeper thinking about a topic. Qualitative research is great for brainstorming or finding key themes and language.

Qualitative data tend to be very rich, and you can explore many different themes within the data.  One nice feature of qualitative research is that you can ask about topics that you have very little information about.  For example, you might have a question in a survey that asks, “Which of the following best describes this organization? X, Y, Z, or none of the above.”  This quantitative question assumes that X, Y, and Z are the three ways that people describe this organization, which requires at least some prior knowledge.  A qualitative research question on the same topic would simply ask, “How would you describe this organization?”  This is one of the reasons why qualitative research is great for exploratory research.

The primary weakness of qualitative research is that you can’t generate a valid population statistic from it.  For example, although you could calculate what percent of focus group participants said that Y was a barrier to working with your organization, you couldn’t generalize that estimate to the larger population.  Even if 30% of focus group participants reported that barrier, we don’t know what percent of people overall would report the same barrier; we can only say that it is a potential barrier.  If your goal is simply to identify the main barriers, though, qualitative research can do that.  It’s important to think carefully about whether or not this would be a weakness for your research project.

Quantitative Research

The main goals of quantitative research are to estimate population quantities (e.g., 61% of your donors are in Colorado) and test for statistical difference between groups (e.g., donors in Colorado gave more money than those in other states). With quantitative research, you’re often sacrificing depth of understanding for precision.

One of the benefits to quantitative research, aside from being able to estimate population values, is that you can do a lot of interesting statistical analyses. Unlike a small sample of 30 people from focus groups, a large sample of 500 survey respondents allows for all sorts of analyses. You can look for statistical differences between groups, identify key clusters of respondents based on their responses, see if you can predict people’s responses from certain variables, etc.
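For instance, here is a minimal sketch of the kind of group comparison mentioned above, using simulated (entirely made-up) donation amounts to test whether Colorado donors give more than donors elsewhere:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical donation amounts (in dollars) from a survey of 500 donors
colorado = rng.gamma(shape=2.0, scale=60.0, size=300)   # 300 Colorado donors
elsewhere = rng.gamma(shape=2.0, scale=50.0, size=200)  # 200 donors in other states

# Welch's t-test: are mean gifts different between the two groups?
t_stat, p_value = stats.ttest_ind(colorado, elsewhere, equal_var=False)
print(f"Mean gift (CO): ${colorado.mean():.2f}")
print(f"Mean gift (other states): ${elsewhere.mean():.2f}")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

The same survey dataset could feed cluster analysis, regression, and other techniques that simply aren’t meaningful with a handful of focus group participants.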

There usually is not one single best way to answer a question with data, so thinking through your options and the benefits afforded by those options is important. And as always, we’re here to help you make these decisions if the project is complicated.


Weight on What Matters

In May, Kate and I went to AAPOR’s 70th Annual Conference in Hollywood, FL.  Kate did a more timely job of summarizing our learnings, but now that things have had some time to settle, I thought I’d discuss an issue that came up in several presentations, most memorably in Andy Peytchev’s presentation on Weighting Adjustments Using Substantive Survey Variables.  The issue is deciding which variables to use for weighting.  (And if I butcher the argument here, the errors are my own.)

Let’s take it from the top.  If your survey sample looks exactly like the population from which it was drawn, everything is peachy and there is no need for weighting.

Most of the time, however, survey samples don’t look exactly like the populations from which they were drawn.  A major reason for this is non-response bias – which just means that some types of people are less likely to take the survey than other types of people.  To correct for this, and make sure that the survey results reflect the attitudes and beliefs of the actual population and not just the responding sample, we weight the survey responses up or down according to whether they are from a group that is over- or under-represented among the respondents.

So, it seems like the way to choose weighting variables would be to look for variables where the survey sample differs from the population, right?  Not so fast.  First we have to think about what weighting “costs” in terms of the margin of error for your survey.  Weights, in this situation, measure the extent of bias in the sample, and the size of the weights “costs” a proportional expansion of the survey’s margin of error.  In other words, the precision of your estimates declines as your weighting effect increases.
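One common way to quantify that cost is Kish’s approximate design effect, under which the margin of error grows by roughly the square root of (1 + the squared coefficient of variation of the weights).  A minimal sketch, with purely illustrative weights:

```python
import numpy as np

def weighting_penalty(weights):
    """Approximate how much weighting inflates the margin of error.

    Uses Kish's approximate design effect: deff = 1 + CV^2 of the weights,
    where CV is the coefficient of variation. The margin of error grows by
    roughly sqrt(deff) relative to an unweighted (equal-weight) sample.
    """
    w = np.asarray(weights, dtype=float)
    deff = 1.0 + w.var() / w.mean() ** 2
    return deff, np.sqrt(deff)

# Illustrative example: one half of the sample weighted down, the other up
equal = np.ones(400)
skewed = np.concatenate([np.full(200, 0.5), np.full(200, 1.5)])

print(weighting_penalty(equal))   # approx (1.0, 1.0) -- no penalty
print(weighting_penalty(skewed))  # approx (1.25, 1.12) -- MOE ~12% wider
```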

What does that mean for selecting weighting variables?  It means you don’t want to do any unnecessary weighting.  Recall, the purpose of weighting is to ensure that survey results reflect the views of the population.  Let’s say the purpose of your survey is to measure preferences for dogs vs. cats in your population.  Before doing any weighting you look to see whether the proportion of dog lovers varies by age or gender or marital status or educational attainment (to keep it simple, let’s pretend you don’t have any complicated response biases, like all of the men in your survey are under 45).  If you find that marital status is correlated with preferences for dogs vs. cats, but age and gender and educational attainment aren’t, then you may want to weight your data by marital status, but not the other variables.

This makes sense, right?  If men and women don’t differ in their opinions on this topic, then it doesn’t matter whether you have a disproportionate number of women in your sample.  If you weight on gender when you don’t need to, you unnecessarily expand your margin of error for the survey without improving the accuracy of your results.  On the other hand, if married people have different preferences than single people, and your sample is skewed toward married people, by weighting on marital status you increase your margin of error, but compensate by improving the accuracy of your results.
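Here is a hedged sketch of that check, using fabricated data in which the dog-vs-cat preference varies by marital status but not by gender (every value below is invented for illustration):

```python
import pandas as pd
from scipy.stats import chi2_contingency

# Fabricated respondents: preference differs by marital status, not by gender
rows = (
    [("dog", "M", "married")] * 30 + [("dog", "F", "married")] * 30 +
    [("cat", "M", "married")] * 10 + [("cat", "F", "married")] * 10 +
    [("dog", "M", "single")] * 10 + [("dog", "F", "single")] * 10 +
    [("cat", "M", "single")] * 30 + [("cat", "F", "single")] * 30
)
df = pd.DataFrame(rows, columns=["prefers", "gender", "marital_status"])

# Test each candidate weighting variable against the survey's key outcome
for var in ["gender", "marital_status"]:
    table = pd.crosstab(df[var], df["prefers"])
    chi2, p, dof, _ = chi2_contingency(table)
    print(f"{var}: chi-square = {chi2:.1f}, p = {p:.3f}")

# In this fabricated data, gender shows no relationship (p near 1), so weighting
# on it would add cost without benefit; marital_status does, so it is a candidate.
```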

The bottom line:  choose weighting variables that are correlated with your variables of interest as well as your non-response bias.

And that’s one to grow on!  (This blog felt reminiscent of an 80’s PSA, right?)