RADIANCE BLOG

Category: Surveying Surveys

Subpopulations in Research

As I’m sure you know, we do a lot of survey research here at Corona. When we provide the results, we try to build the most complete picture for our clients, and that means looking at the data from every angle we can. One of the most effective ways to do this is by looking at subpopulations.

What is a subpopulation?

A subpopulation is simply a portion of the overall population you are surveying, and it can be defined in many ways. For example, some of the most common subpopulations to examine in research are gender (e.g., male and female), age (e.g., <35, 35-54, 55+), race/ethnicity, and location. You can define a subpopulation using whatever criteria you like; for instance, you could define subpopulations based on dessert preference: those who like cake and those (heathens) who don't.

What does it mean to have subpopulations?

When you examine survey results by subpopulations, respondents are simply split into the subpopulations, or groups (commonly called breakouts), you defined. Once respondents are split into these groups, the survey results are compiled for each group separately. For example, take the following survey question:

  1. About how many hours a week do you watch sports?
    a. 1 hour or less
    b. 2 to 4 hours
    c. 5 to 7 hours
    d. 8 hours or more

The results would typically have two components: top-level results (results compiled for all respondents to the survey) and breakouts (results by group for any subpopulations that have been defined). For the above example question, the results might look something like this:

[Table: percentage of respondents selecting each answer, shown for all respondents and broken out by male and female respondents]

In this completely made-up example, you can see the benefit of having subpopulations. While 21 percent of respondents overall watched five to seven hours of sports a week, male respondents accounted for a hefty chunk of that group: 26 percent of males watched that much sports, compared to only 16 percent of females. Breaking out questions by subpopulations allows you to examine the data more closely and helps you find those gems of information.
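If you're curious how breakouts are actually computed, here is a minimal sketch in Python using pandas (our assumed tooling for illustration only; the responses below are made up) showing how the same raw survey data yields both top-level results and breakouts:

```python
import pandas as pd

# Made-up responses: one row per respondent, with the answer to the sports
# question and the gender variable used to define the breakout groups.
responses = pd.DataFrame({
    "gender": ["Male", "Female", "Male", "Female", "Male", "Female", "Male", "Female"],
    "hours": ["2 to 4 hours", "1 hour or less", "5 to 7 hours", "2 to 4 hours",
              "8 hours or more", "1 hour or less", "5 to 7 hours", "2 to 4 hours"],
})

# Top-level results: percent of all respondents choosing each answer.
top_level = responses["hours"].value_counts(normalize=True).mul(100).round(1)
print(top_level)

# Breakouts: the same percentages, computed separately within each group.
breakouts = pd.crosstab(responses["hours"], responses["gender"],
                        normalize="columns").mul(100).round(1)
print(breakouts)
```

The breakout table is just the top-level tabulation repeated within each group, which is why adding a breakout variable costs nothing at analysis time as long as the question was asked.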

Getting the most out of your survey

Being prepared to utilize subpopulations in your survey analysis means putting your best foot forward and maximizing your investment. Many subpopulations are constructed from questions commonly asked in surveys (gender, age, etc.), but some questions might not be asked at all without the foresight of planning to break respondents into subpopulations. For example, a nonprofit building a questionnaire to survey its patrons about its messaging can, simply by asking whether a respondent has donated to the organization, examine the survey results of donors separately from those of all patrons. The survey can now not only better inform messaging for the organization overall, but also allow the organization to better target and communicate with donors specifically.

Conducting a survey can be a challenging experience, so the more you can get out of a single survey, the better. The next time you are designing a survey, ask around your workplace to see if a few questions can be added to better utilize the information you’re collecting. Now you’re one step closer to conducting the perfect survey!


Does This Survey Make Sense?

It’s pretty common for Corona to combine qualitative and quantitative research in a lot of our projects.  We will often use qualitative work to inform what we need to ask about in quantitative phases of the research, or use qualitative research to better understand the nuances of what we learned in the quantitative phase.  But did you know that we can also use qualitative research to help design quantitative research instruments through something called cognitive testing?

The process of cognitive testing is actually pretty simple, and we treat it a lot like a one-on-one interview.  To start, we recruit a random sample of participants who would fit the target demographic for the survey.  Then, we meet with the participants one-on-one and have them go through the process of taking the survey.  We then walk through the survey with them and ask specific follow-up questions to learn how they are interpreting the questions and find out if there is anything confusing or unclear about the questions.

In a nutshell, the purpose of cognitive testing is to understand how respondents interpret survey questions and to ultimately write better survey questions.  Cognitive testing can be an effective tool for any survey, but it is particularly important for surveys on topics that are complicated or controversial, or when the survey is distributed to a wide and diverse audience.  For example, you may learn through cognitive testing that the terminology you use internally to describe your services is not widely used or understood by the community.  In that case, we will need to simplify the language that we are using in the survey.  Or, you may find that the questions you are asking are too specific for most people to know how to answer, in which case the survey may need to ask higher-level questions or include a “Don’t Know” response option on many questions.  It’s also always good to make sure that the survey questions don’t seem leading or biased in any way, particularly when asking about sensitive or controversial topics.

Not only does cognitive testing allow us to write better survey questions, but it can also help with analysis.  If we have an idea of how people are interpreting our questions, we have a deeper level of understanding of what the survey results mean.  Of course, our goal is to always provide our clients with the most meaningful insights possible, and cognitive testing is just one of the many ways we work to deliver on that promise.


Online research is becoming more feasible in smaller locales (and that includes Denver)

Door-to-door, intercept, mail, telephone, online – surveys have evolved with the technology and needs of the times. Online surveys have increased speed and often lowered the cost of conducting research. For some populations, they have even made conducting surveys more feasible.

However, online surveys haven’t always been feasible in a city such as Denver or even statewide in Colorado.

(I should note here that we’re talking about the general public or other populations where we do not have a list. For instance, if we were surveying your customers and you had a database of customers with email addresses, conducting the survey online is almost certainly the way to go.)

Why it’s been tough until now

So why has it been tough until now to conduct public opinion research online in Denver, the Front Range, or even all of Colorado?

Unlike with mail, where huge databases of addresses exist, or telephone, where lists or random-digit-dial (RDD) samples can be generated, there is no master repository of email addresses and no requirement that residents have one official email address. (Many of us probably have multiple emails – I personally have four outside of work.)

The market research industry’s answer to this has been to create databases, but unlike with mail and telephone where lists can be generated via public sources, email addresses generally have to be collected via individuals voluntarily sharing their information. In the industry, companies have specialized in doing just that – recruiting a lot of potential respondents to their online panel in exchange for incentives provided when they complete a survey. In addition to email, these companies generally collect some basic demographic information as well to make targeting more effective.

Now, let’s say a panel had one million U.S. members in its database. Sounds big, doesn’t it? Well, given that Colorado makes up less than 2% of the nation’s population, that means there might be 20,000 Coloradans in the database. If you wanted Denver Metro only (about half the state’s population), that takes our maximum potential to 10,000. If only 10% respond to any given survey invite, the most respondents you may be able to get is 1,000, and that’s before any additional screening (e.g., you’re only looking for commuters). That is a simplified summary, but as you can see, it largely becomes a numbers game – you need a very large panel to drill down to a smaller geography or subset of the population.
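For the numerically inclined, the same back-of-the-envelope math can be written out in a few lines of Python (the figures are the illustrative ones above, not actual panel counts):

```python
# Back-of-the-envelope feasibility check for an online panel study.
panel_size = 1_000_000       # total U.S. panel members (illustrative)
colorado_share = 0.02        # Colorado is a bit under 2% of the U.S. population
denver_metro_share = 0.50    # Denver metro is roughly half the state
response_rate = 0.10         # share of invitees who complete any given survey

colorado_members = panel_size * colorado_share           # ~20,000
denver_members = colorado_members * denver_metro_share   # ~10,000
max_completes = denver_members * response_rate           # ~1,000, before screening

print(f"Potential Denver metro completes: {max_completes:,.0f}")
```

Swap in your own geography's share of the population and a realistic response rate, and you can quickly see whether a panel is big enough for the study you have in mind.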

What has changed

These panels are nothing new. Corona has been using them for a decade, but what has changed recently in our home market (and most smaller geographies around the country, for that matter) is that the panels have grown large enough to supply the respondents we need for our studies. A few years ago, we could only do online studies nationwide, in regions (e.g., the west, south, etc.), or maybe in very large metropolitan areas. As panels and recruitment continued to grow, we were able to do general population studies (i.e., pretty much everyone qualifies, as we don’t have additional criteria for screening), but not studies of smaller segments of the population. Now, while we can still run into difficulty with really niche groups, we can conduct studies with parents, visitors to a certain attraction, and many other groups, all within the Denver metro or Front Range.

Still, a note of caution

So, problem solved, right? Unfortunately, online panels come with some caveats. First, unlike with a mail or telephone survey, where the sample is randomly generated, the results from a panel are not considered statistically representative because a panel sample is not a random probability sample. (There are some probability-based online panels, but most are still only large enough for nationwide studies.) Panels are typically designed to reflect the overall population in terms of demographics, but due to their recruiting method, they can’t be considered “random”.

Other concerns need to be taken into account such as how quickly the panel turns over respondents, avoiding respondents who try to game the system just for incentives, and other quality control measures.

For these reasons, Corona still regularly recommends other survey modes, such as mail and telephone (yes, we still do mail!), when we feel they will provide better answers for our clients. Oftentimes, however, online may be the only feasible option given the challenges with telephone (e.g., cell phones) and mail (e.g., slower, static content). Sometimes we’ll propose both to our clients and then discuss the relative tradeoffs with them.

In summary, online is a growing option for Denver and Colorado, as well as other smaller cities, but be sure to pick the mode that is best for your research – not just the one that is easiest.

 


Research on Research: Boosting Online Survey Response Rates

David Kennedy and Matt Herndon, both Principals here at Corona, will be presenting a webinar for the Market Research Association (MRA) on August 24th.

The topic is how to boost response rates with online surveys. Specifically, they will be presenting research Corona has done to learn how minor changes to such things as survey invites can make an impact on response rates. For instance, who the survey is “from”, the format, and salutation can all make a difference.

Click here to register. You do need to be a member to view the webinar. (We hope to post it, or at least a summary, here on our blog afterwards.)

Even if you can’t make it, rest assured that if you’re a client at least, these lessons are already being applied to your research!


Do you have kids? Wait – let me restate that.

Karla Raines and I had dinner last week with another couple that shares our background and interest in social research.  We were talking about the challenges of understanding the decisions of other people if you don’t understand their background, and how we can have biases that we don’t even realize.

It brought me back to the topic of how we design and ask questions on surveys, and my favorite example of unintentional background bias on the part of the designer.

A common question, both in research and in social conversations, is the ubiquitous, “Do you have kids?”  It’s an easy question to answer, right?  If you ask Ward and June Cleaver, they’ll immediately answer, “We have two, Wally and Beaver”.  (June might go with the more formal ‘Theodore’, but you get the point.)

When we ask the question in a research context, we’re generally asking it for a specific reason.  Children often have a major impact on how people behave, and we’re usually wondering if there’s a correlation on a particular issue.

But ‘do you have kids’ is a question that may capture much more than the classic Wally and Beaver household.  If we ask that question, the Cleaver family will answer ‘yes’, but so will a 75-year-old who has two kids, even if those kids are 50 years old and grandparents themselves.  So ‘do you have kids’ isn’t the question we want to ask in most contexts.

What if we expanded the question to ‘do you have children under 18’?  It gets a bit tricky here if we put ourselves in the minds of respondents, and this is where our unintentional background bias may come into play.  Ward and June will still answer yes, but what about a divorced parent who doesn’t have custody?  He or she may accurately answer yes, but there’s not a child living in their home.  Are we capturing the information that we think we’re capturing?

And what about a person who’s living with a boyfriend and the boyfriend’s two children?  Or the person who has taken a foster child into the home?  Or the grandparent who is raising a grandchild while the parents are serving overseas?  Or the couple whose adult child is temporarily back home with her own kids in tow?

If we’re really trying to figure out how children impact decisions, we need to observe and recognize the incredible diversity of family situations in the modern world, and how that fits into our research goal.  Are we concerned about whether the survey respondent has given birth to a child?  If they’re a formal guardian of a child?  If they’re living in a household that contains children, regardless of the relationship?

The proper question wording will depend on the research goals, of course.  We often are assessing the impact of children within a household when we ask these questions, so we find ourselves simply asking, “How many children under the age of 18 are living in your home?”, perhaps with a follow-up about the relationship where necessary.  But it’s easy to be blinded by our own life experiences when designing research, and the results can lead to errors in our conclusions.

So the next time you’re mingling at a party, we suggest not asking “Do you have kids”, and offer that you should instead ask, “How many children under the age of 18 are living in your home?”  It’s a great conversation starter and will get you much better data about the person you’re chatting with.


There is more to a quality survey than margin of error

One of the areas in which Corona excels is helping our clients who aren’t “research experts” to understand how to do research in a way that will yield high-quality, reliable results.  One of the questions we are frequently asked is how many completed surveys are necessary to ensure a “good survey.”  While the number of surveys definitely has an impact on data quality, the real answer is that there are many things beyond sample size that you have to keep in mind in order to ensure your results are reliable.  Here is an overview of four common types of errors you can make in survey research.

Sampling Error

Sampling error is the one type of error that can be easily summarized with a number.  Because of this, many tend to think of it as the main way of reporting a survey’s quality.  Sampling error refers to the “margin of error” of the results of a survey caused by the fact that you didn’t survey everyone in the population you are surveying – only a sample.  The “error” occurs when you draw a conclusion based on a survey result that may have been different if you’d conducted a larger survey to gather a wider variety of opinions.  As an example, imagine that you wanted to conduct a survey of people at a concert about their reasons for attending.  In an extreme case, you could collect 10 random responses to your question and draw conclusions about the population from that, but chances are the next 10 people you hear from might have very different opinions.  As you collect more and more surveys, this becomes less of an issue.

It’s important to keep in mind, however, that the calculations for sampling error assume that 1) your sample was random (see coverage error below) and 2) everyone you chose for the survey ended up responding to the survey (see non-response error below) – neither of which is true a lot of the time.  Because of that, it is important to realize that any margin of error calculation is likely only telling you part of the story about a survey’s quality.

Still, it is certainly true that, when obtained properly, larger sample sizes tend to produce more reliable results.  Generally speaking, a sample of 100 responses will give you a general feel for your population’s opinions (with margins of error around ±10 percent), 400 responses will give you reasonably reliable results (with margins of error around ±5 percent), and larger samples will allow you to examine the opinions of smaller segments of your audience, so obtaining a large sample size will always be a key consideration in survey research.
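For reference, those ±10 percent and ±5 percent figures come from the standard margin-of-error formula for a proportion at a 95 percent confidence level, assuming a simple random sample and the worst-case 50/50 split. A quick sketch in Python:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a proportion from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (100, 400, 1000):
    print(f"n = {n:>4}: about ±{margin_of_error(n) * 100:.1f} percentage points")
# n =  100: about ±9.8 percentage points (the "±10 percent" rule of thumb)
# n =  400: about ±4.9 percentage points (the "±5 percent" rule of thumb)
```

Notice that quadrupling the sample size only halves the margin of error, which is why chasing ever-larger samples eventually stops being the best use of your budget.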

Measurement Error

Measurement error is one of the most difficult types of errors to identify and is probably the most common mistake made by amateur survey researchers.  Measurement error occurs when respondents don’t understand how to properly respond to a question due to the way it’s worded or the answer options you’ve provided.  For example, if you were to ask concert goers how long it took them to get to the venue that night, you might get a set of answers that look reasonable, but what if some picked up their friends on the way?  What if some went to dinner before the concert?  Similarly, if you asked whether they came to the concert because of the band or because they like the venue, you might conclude that a majority came for the band when the real reason was that many just wanted to hang out with their friends.

So how do you protect against this?  Whenever possible, it’s important to test your survey – even if it’s just with a friend or coworker who is not involved in the research.  Have them take the survey, then have them talk you through how they interpreted each question and how they decided which answer fit best.  Then, if necessary, make changes to your survey in any areas that were unclear.

Coverage Error

Once you have a well-designed set of survey questions developed, the next step is to determine how you are going to get people to take your survey.  It might be tempting to just put a link on the concert venue’s Facebook and Twitter pages to ask people their opinions, but the results of such a survey likely wouldn’t reflect the opinions of all concert goers because it’s unlikely that all of them use social media.  If you were to do a survey in this fashion, you might find that attendees tended to be very tech savvy and open to new ideas and approaches, when the reality was that those just happen to be characteristics of people who use social media. The results of your survey might be skewed because you didn’t “cover” everyone in the audience (not to mention the possible issue of some taking the survey that didn’t actually attend the concert).

In order to ensure your survey is as high quality as possible, look for ways to ensure that everyone you are trying to represent is included in your sampling frame (even if you randomly choose a subset to actually receive an invitation).  If it’s truly not possible to do so, be sure to at least keep the potential bias of your respondent pool in mind as you interpret the results.

Non-response Error

As the final type of common error, non-response error is caused by the fact that, no matter how well you have designed and implemented your survey, there are a lot of people out there who simply aren’t going to respond.  Similar to coverage error discussed previously, non-response can cause you to draw conclusions from your survey’s results that may not be reflective of the entire population you are studying.  For example, many concert goers wouldn’t want to be bothered to take a survey, so the results you get would likely only be representative of a type of person who either 1) didn’t value their time as highly or 2) was particularly interested in the survey topic.

Unfortunately, non-response error is extremely difficult to eliminate entirely and will be a concern with just about any survey.  The most common approach is to try and boost your response rate as much as possible through a combination of frequent reminders, incentives, and appealing to respondents’ desire to be helpful in your messaging about the study, but even the best surveys typically only achieve response rates of 30-40 percent.  If budget is no issue, perhaps the best solution is to conduct follow-up research with those that didn’t originally respond, but even then, there will always be some who simply refuse to participate.

~

When it comes down to it, there is no such thing as a perfect survey, so any study will necessarily need to balance data quality with your timeline and budget.  Many of Corona’s internal debates involve discussing ways to reduce these errors as effectively as possible for our clients, and we are always happy to discuss various tradeoffs in approaches and how they will impact data quality.  Regardless, we hope that the next time you see a headline about a survey with a margin of error used to represent its quality, you will keep in mind that there is a lot more to determining the quality of a survey than that one number.


Happy or not

A key challenge for the research industry – and any company seeking feedback from its customers – is gaining participation.

There have been books written on how to reduce non-response (i.e. increase participation), and tactics often include providing incentives, additional touch points, and finely crafted messaging. All good and necessary.

But one trend we’re seeing more of is the minimalist “survey.” One question, maybe two, to gather point-in-time feedback. You see this on rating services (e.g., Square receipts, your Uber driver, at the checkout, etc.), simple email surveys where you respond by clicking a link in the email, and texting feedback, to name a few.

Recently, travelling through Iceland and Norway, I came across this stand asking “happy or not” at various points in the airport (e.g., check-in, bathrooms, and luggage claim). Incredibly simple and easy to do. You don’t even have to stop – just hit the button as you walk by.

A great idea, sure, but did people use it? In short, yes. While I don’t know the actual numbers, based on observation alone (what else does a market researcher do while hanging out in airports?), I did see several people interact with it as they went by. Maybe it’s the novelty of it and in time people will come to ignore it, but it’s a great example of collecting in-the-moment feedback, quickly and effortlessly.

Now, asking one question and getting a checkbox response will not tell you as much as a 10-minute survey will, but if it gets people to actually participate, it is a solid step in the right direction. Nothing says this has to be your only data, either. I would assume that, in addition to the score received, there is also a time and date stamp associated with each response (and excessive button pushes at one time should probably be “cleaned” from the data). Taken together, investigating problems becomes easier (maybe people are only unsatisfied during the morning rush at the airport?), and if necessary, additional research can always be conducted to further investigate issues.
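As a rough illustration of what that might look like (assuming the terminal exports a timestamp and score for every press, which is my assumption rather than something I know about these devices), a few lines of Python could flag button-mashing bursts and summarize satisfaction by hour:

```python
import pandas as pd

# Hypothetical export from a feedback terminal: one row per button press.
presses = pd.DataFrame({
    "timestamp": pd.to_datetime([
        "2017-06-01 06:05", "2017-06-01 06:06", "2017-06-01 06:06",
        "2017-06-01 06:06", "2017-06-01 06:06", "2017-06-01 14:30",
        "2017-06-01 14:45",
    ]),
    "score": [1, 1, 1, 1, 1, 4, 3],  # 1 = very unhappy ... 4 = very happy
})

# Crude cleaning rule: drop any minute with an implausible burst of presses,
# which usually means one person mashing the button.
per_minute = presses.groupby(presses["timestamp"].dt.floor("min"))
cleaned = per_minute.filter(lambda g: len(g) <= 3)

# In-the-moment diagnostics: average score by hour of the day.
by_hour = cleaned.groupby(cleaned["timestamp"].dt.hour)["score"].mean()
print(by_hour)
```

Even a crude cut like this can point you to when and where the problems are, which is exactly the kind of follow-up the one-button format invites.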

What other examples, successful or not, are you seeing organizations use to collect feedback?


Turning Passion into Actionable Data

Nonprofits are among my favorite clients to work with here at Corona for a variety of reasons, but one of the things I love most is the passion that surrounds them.  That passion shines through the most in our work when we do research with a nonprofit’s internal stakeholders, which could include donors, board members, volunteers, staff, and program participants.  These groups of people, who are already invested in the organization, are passionate about helping to improve it, which is good news when conducting research, as it often makes them more likely to participate and increases response rates.

Prior to joining the Corona team, I worked in the volunteer department of a local animal shelter.  As a data nerd even then, I wanted to know more about who our volunteers were, and how they felt about the volunteer program.  I put together an informal survey, and while I still dream about what nuggets could have been uncovered if we had gone through a more formal Corona-style process, the data we uncovered was still valuable in helping us determine what we were doing well and what we needed to improve on.

That’s just one example, but the possibilities are endless.  Maybe you want to understand what motivated your donors to contribute to your cause, how likely they are to continue donating in the future, and what would motivate them to donate more.  Perhaps you want to evaluate the effectiveness of your programs.  Or, maybe you want to know how satisfied employees are with working at your organization and brainstorm ideas on how to decrease stress and create a better workplace.

While you want to be careful about being too internally focused and ignoring the environment in which your nonprofit operates, there is huge value in leveraging passion by looking internally at your stakeholders to help move your organization forward.

 


Informal Research for Nonprofit Organizations

While Corona serves all three sectors (private, public, and nonprofit) in our work, we have always had a soft spot for our nonprofit clients.  No other type of organization is asked to do more with less, so we love working with nonprofits to help them refine their strategies to be both more effective at fulfilling their missions and more financially stable at the same time.

However, while we are thrilled for the opportunities to work with dozens of nonprofits every year, we know that there are hundreds of other organizations that we don’t work with, many of which simply don’t have the resources to devote to a formal marketing research effort. I’m a huge fan of the Discovery Channel show MythBusters, so I’ll share one of my favorite quotes:

Adam Savage: “Remember, kids, the only difference between screwing around and science is writing it down.”
(Source: http://www.tested.com/art/makers/557288-origin-only-difference-between-screwing-around-and-science-writing-it-down/)

While few would argue that the results found in an episode of MythBusters would qualify as academically rigorous research, I think most would agree that trying a few things out and seeing what happens is at least better than just trusting your gut instinct alone.  Likewise, here are a few ideas for ways that nonprofits can gather at least some basic information to help guide their strategies through informal “market research.”

Informal Interviews

One-on-one interviews are one of the easiest ways to gather feedback from a wide variety of individuals.  Formal interview research involves a third party randomly recruiting participants from the entire universe of people you are trying to understand, but simply talking to people one-on-one about the issues or strategies you are considering can be very insightful.  Here are a few pointers on getting the most out of informal interviews:

  • Dedicate time for the interview. It may seem easy to just chat with someone informally at dinner or at an event, but the multitude of distractions will reduce the value you get out of the conversation.  Find a time that both parties can really focus on the discussion, and you’ll get much better results.
  • Write your questions down in advance. It’s easy to go down a rabbit hole when having a conversation about something you are passionate about, so be sure to think through the questions you need to answer so that you can keep the conversation on track.
  • Record the conversation (or at least take notes). Take the MythBusters’ advice, and document the conversation.  If you’ve talked to a dozen people about your idea, it will be impossible for you to remember it all.  By having documentation of the conversations, you can look back later and have a better understanding of what your interviewees said.

Informal focus groups

Similar to interviews, in an ideal world focus groups should be conducted by a neutral third party with an experienced moderator who can effectively guide the group discussion to meet your goals.  However, as with interviews, you can still get a lot of value out of just sitting down with a group and talking through the issues.  In particular, if you have an event or conference where many people are together already, grabbing a few of them to talk through your ideas can be very informative.  Our suggestions for this type of “research” are similar to those for informal interviews, with slight differences in their implications:

  • Dedicate time for the discussion. As mentioned before, it may be tempting to just say “We’ll talk about this over dinner” or “Maybe if we have time at the end of the day we can get together.”  You’ll get far better results if everyone can plan for the conversation in advance and participate without distractions.
  • Write your questions down in advance. Even more so than for interviews, having a formal plan about what questions you want to ask is imperative.  Group discussions have a tendency of taking on a life of their own, so having a plan can help you to guide the discussion back on topic.
  • Document the results. Again, you may think you can remember everything that was said during a conversation, but a few months down the road, you will be very thankful that you took the time to either record the conversation or take notes about what was said.

Informal Surveys

Surveys are perhaps the most difficult of these ideas to implement on an informal basis, but they can nevertheless be very useful.  If you just need some guidance on how members of an organization feel about a topic, asking for a show of hands at a conference is a perfectly viable way of getting at least a general idea of how members feel.  Similarly, if you have a list of email addresses for your constituents, you could simply pose your question in an email and ask people to respond with their “vote.”

The trickiest part is making sure that you understand what the results actually represent.  If your conference is only attended by one type of member, don’t assume that their opinions are the same as other member types.  Likewise, if you only have email addresses for 10 percent of your constituents, be careful with assuming that their opinions reflect those of the other 90 percent.  Even so, these informal types of polling can help you to at least get an idea of how groups feel on the whole.

~

Hopefully these ideas can give your nonprofit organization a place to start when trying to understand reactions to your ideas or strategies.  While these informal ways of gathering data will never be as valuable as going through a formal research process, they can provide at least some guidance as you move forward that you wouldn’t have had otherwise.

And if your issues are complex enough that having true, formal research is necessary to ensure that you are making the best possible decisions for your organization, we’ve got a pretty good recommendation on who you can call…


Who’s Excited for 2016?

Oh man, where did 2015 even go? Sometimes the end of the year makes me anxious because I start thinking about all the things that need to be done between now and December 31st. And then I start thinking about things that I need to do in the upcoming year, like figuring out how to be smarter than robots so that they don’t steal my job and learning a programming language since I’ll probably need it to talk to the robots that I work with in the future. Ugh.

Feeling anxious and feeling excited share many of the same physical features (e.g., sweaty palms, racing heart, etc.), and research has shown that it is possible to shift feelings of anxiety to feelings of excitement by doing something as simple as telling yourself you are excited. So, let me put these clammy hands to use and share some of the things that I am excited about for 2016:

  • Technological advancements for data collection. Changes in phone survey sampling are improving the cell phone component of a survey. Also, we have been looking at so many new, cool ways of collecting data, especially qualitative data. Cell phones, which are super annoying for phone surveys, are simultaneously super exciting for qualitative research. I’m excited to try some of these new techniques in 2016.
  • Improvements in the technology that allows us to more easily connect with both clients and people who work remotely. We use this more and more in our office. I’m not sure if in 2016 we will finally have robots with iPads for heads that allow people to Skype their faces into the office, but I can dream.
  • Work trips! I realize that work trips might be the stuff of nightmares at other jobs. But Coronerds understand the importance of finding humor, delicious food, and sometimes a cocktail during a work trip.
  • New research for clients old and new. This year I’ve learned all sorts of interesting facts about deck contractors, the future of museums, teenage relationships, people’s health behaviors, motorcyclists, business patterns in certain states, how arts can transform a city, and many more! I can’t wait to see what projects we work on next year.
  • Retreat. For people who really love data and planning, there is nothing as soothing as getting together as a firm to pore over a year’s worth of data about our own company and draw insights and plans from it.

Alright, I feel a lot better about 2016. Now I’m off to remind myself that these clammy hands also mean that I’m very excited about holiday travel, last minute shopping, and holiday political discussions with the extended family…