RADIANCE BLOG

Category: Surveying Surveys

Thinking strategically about benchmarks

When our clients are thinking about the data they would like to collect to answer a question, we are sometimes asked about external benchmarking data. When you benchmark your data, you are essentially asking how you compare to other organizations or competitors. While external benchmarks can be useful, there are a few points to consider when deciding whether benchmarking will actually serve your needs:

  1. Context is key. Comparing yourself to other organizations or competitors can encourage some big-picture thinking about your organization. But it is important to remember the context of the benchmark data. Are the benchmark organizations similar to you? Are they serving similar populations? How do they compare in size and budget? Additionally, external benchmark data may only be available in aggregated form. For example, nonprofit and government organizations may be grouped together. Sometimes these differences are not important, but other times they are an important lens through which you should examine the data.
  2. Benchmark data is inherently past-focused. When you compare your data to that of other organizations, you are comparing yourself to the past. There is a time lag in any data collection, and the data reflect the impacts of changes or policies that have already been implemented. While this can be useful, if your organization is trying to adapt to changes that you see on the horizon, comparing yourself to the past may be less informative.
  3. Benchmark data is generally more useful as part of a larger research project. For example, if your organization differs significantly from other external benchmarks, it can be helpful to have data that suggest why that is.
  4. What you can benchmark on may not be the most useful. Often, you are limited in the types of data available about other organizations. These may be certain financial data or visitor data. Sometimes the exact same set of questions is administered to many organizations, and you are limited to those questions for benchmarking.

Like most research, external benchmarking can be useful—it is just a matter of thinking carefully about how and when to best use it.


Getting the most out of your customer survey

There are a multitude of tools available these days that allow organizations to easily ask questions of their customers.  When Corona begins an engagement, it is not uncommon for the client to have already made internal attempts at conducting surveys.  In some cases, these studies have been relatively sophisticated and have yielded great results. In others, however, the survey’s results were met with a resounding “Why does this matter?”

The challenge is that conducting a good survey requires a much more strategic view than most realize.  This starts with designing the survey questions themselves.  We always begin our engagements by asking our clients to think through the decisions that will be made, the opportunities to improve, and the possible challenges to be addressed based on the results.  By keeping the answers to these questions in mind as you design your survey questions, you can minimize the amount of “trivia” questions in your survey that might be interesting to know, but won’t really have any influence on your future decisions.

Even after having questions designed, you have to consider how you will get people to participate in the survey.  If you have a database of 100,000 customers, it may be tempting to just send invitations to all of them.  But what if you plan to send out a plea for donations in the next few weeks?  Consider the impact of asking for 15 minutes of time from people who might be asked to support you very soon.  Being careful to time the survey appropriately, and perhaps sending it to only a small segment of customers, might help to minimize fatigue that could negatively impact your overall business strategy in the near future.

Finally, once you’ve collected the results, simple tabulations will only tell a small part of the story.  Every result should be examined through the lens of its actual strategic impact.  A good question to ask throughout the analysis of your results is, “So what?”  If you keep the focus on the implications of the results rather than the results themselves, your final report of what you learned will have a much better chance of making a meaningful impact on your organization moving forward.

Obviously, we at Corona are here to help walk you through this process in order to ensure the highest-quality result possible, but even if you choose to go it alone, keeping a strategic view of what you need to learn and how it will influence your decisions will help to avoid a lot of wasted effort.


Subpopulations in Research

As I’m sure you know, we do a lot of survey research here at Corona. When we provide the results, we try to build the most complete picture for our clients, and that means looking at the data from every angle possible. One of the most effective ways to do this is by looking at subpopulations.

What is a subpopulation?

A subpopulation is essentially a subset of the overall population you are surveying. A subpopulation can be defined many ways. For example, some of the most common subpopulations to examine in research are gender (e.g. male and female), age (e.g. <35, 35-54, 55+), race/ethnicity, location, etc.  You can define a subpopulation using whatever criteria you like; for instance, you can have a subpopulation based on what type of dessert is preferred – those who like cake and the heathens who don’t.

What does it mean to have subpopulations?

When you examine survey results by subpopulations, at a basic level respondents are simply split into the subpopulations or groups (commonly called breakouts) you defined. Once respondents are broken into these groups, the survey results are compiled for each group separately. For example, take the following survey question:

  1. About how many hours a week do you watch sports?
    1. 1 hour or less
    2. 2 to 4 hours
    3. 5 to 7 hours
    4. 8 hours or more

The results would typically have two components: top-level results (results compiled for all respondents to the survey) and breakouts (results by group for any subpopulations that have been defined). For the above example question, the results might look something like the made-up example described below.

In this completely made-up example, you can see the benefit of having subpopulations. While 21 percent of overall respondents watch five to seven hours of sports a week, that overall figure masks a real difference between groups: 26 percent of males watch that much sports, compared to only 16 percent of females. Breaking out questions by subpopulations allows you to examine the data more closely and helps you find those gems of information.
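For readers who like to see the mechanics, here is a minimal sketch of how a breakout like the one above can be computed from raw survey responses using pandas in Python. The file name and column names ("gender", "hours_watched") are hypothetical placeholders for illustration, not details of an actual dataset.

```python
# Minimal sketch: top-level results plus a gender breakout with pandas.
# "responses.csv", "gender", and "hours_watched" are hypothetical names.
import pandas as pd

responses = pd.read_csv("responses.csv")  # one row per respondent

# Top-level results: percent of all respondents in each answer category
top_level = responses["hours_watched"].value_counts(normalize=True) * 100

# Breakout: the same distribution computed separately for each gender;
# normalize="columns" makes each group's column sum to 100 percent
breakout = pd.crosstab(
    responses["hours_watched"],
    responses["gender"],
    normalize="columns",
) * 100

print(top_level.round(1))
print(breakout.round(1))
```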

Getting the most out of your survey

Being prepared to utilize subpopulations in your survey analysis means putting your best foot forward and maximizing your investment. Many subpopulations are constructed using questions commonly asked in surveys (gender, age, etc.), but some questions might not otherwise be asked without the foresight of planning to break respondents into subpopulations. For example, a nonprofit might be building a questionnaire to survey its patrons about its messaging; by simply asking whether a respondent has donated to the organization, it can examine the survey results of donors separately from those of all patrons. The survey can now not only better inform messaging for the organization overall, but also help the organization better target and communicate with donors specifically.

Conducting a survey can be a challenging experience, so the more you can get out of a single survey, the better. The next time you are designing a survey, ask around your workplace to see if a few questions can be added to better utilize the information you’re collecting. Now you’re one step closer to conducting the perfect survey!


Does This Survey Make Sense?

It’s pretty common for Corona to combine qualitative and quantitative research in many of our projects.  We will often use qualitative work to inform what we need to ask about in quantitative phases of the research, or use qualitative research to better understand the nuances of what we learned in the quantitative phase.  But did you know that we can also use qualitative research to help design quantitative research instruments through something called cognitive testing?

The process of cognitive testing is actually pretty simple, and we treat it a lot like a one-on-one interview.  To start, we recruit a random sample of participants who would fit the target demographic for the survey.  Then, we meet with the participants one-on-one and have them go through the process of taking the survey.  We then walk through the survey with them and ask specific follow-up questions to learn how they are interpreting the questions and find out if there is anything confusing or unclear about the questions.

In a nutshell, the purpose of cognitive testing is to understand how respondents interpret survey questions and, ultimately, to write better survey questions.  Cognitive testing can be an effective tool for any survey, but it is particularly important for surveys on topics that are complicated or controversial, or when the survey is distributed to a wide and diverse audience.  For example, you may learn through cognitive testing that the terminology you use internally to describe your services is not widely used or understood by the community.  In that case, we will need to simplify the language that we are using in the survey.  Or, you may find that the questions you are asking are too specific for most people to know how to answer, in which case the survey may need to ask higher-level questions or include a “Don’t Know” response option on many questions.  It’s also always good to make sure that the survey questions don’t seem leading or biased in any way, particularly when asking about sensitive or controversial topics.

Not only does cognitive testing allow us to write better survey questions, but it can also help with analysis.  If we have an idea of how people are interpreting our questions, we have a deeper level of understanding of what the survey results mean.  Of course, our goal is to always provide our clients with the most meaningful insights possible, and cognitive testing is just one of the many ways we work to deliver on that promise.


Online research is becoming more feasible in smaller locales (and that includes Denver)

Door-to-door, intercept, mail, telephone, online – surveys have evolved with the technology and needs of the times. Online has increased speed and often lowered cost of conducting surveys. For some populations, it has even made conducting surveys more feasible.

However, online surveys haven’t always been feasible in a city such as Denver or even statewide in Colorado.

(I should note here that we’re talking about the general public or other populations where we do not have a list. For instance, if we were surveying your customers and you had a database of customers with email addresses, conducting the survey online is almost certainly the way to go.)

Why it’s been tough until now

So why has it been tough until now to conduct public opinion research online in Denver, the Front Range, or even all of Colorado?

Unlike with mail, where huge databases of addresses exist, or telephone, where lists or RDD sample can likewise be generated, there is no master repository of email addresses and no requirement that residents have one official email address. (Many of us probably have multiple email addresses – I personally have four outside of work.)

The market research industry’s answer to this has been to create databases, but unlike with mail and telephone where lists can be generated via public sources, email addresses generally have to be collected via individuals voluntarily sharing their information. In the industry, companies have specialized in doing just that – recruiting a lot of potential respondents to their online panel in exchange for incentives provided when they complete a survey. In addition to email, these companies generally collect some basic demographic information as well to make targeting more effective.

Now, let’s say a panel had one million U.S. members in its database. Sounds big, doesn’t it? Well, given that Colorado makes up less than 2% of the nation’s population, that means there might be roughly 20,000 Coloradans in the database. If you wanted Denver Metro only (about half the state’s population), that takes our maximum potential to 10,000. If only 10% respond to any given survey invite, the most respondents you could expect is about 1,000, and that’s before any additional screening (e.g., you’re only looking for commuters). That is a simplified summary, but as you can see, it largely becomes a numbers game – you need a very large panel to drill down to a smaller geography or subset of the population.
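To make the numbers game concrete, here is a quick back-of-the-envelope version of that funnel in Python. The figures are the illustrative ones from the paragraph above (a hypothetical one-million-member panel and a 10% response rate), not real panel statistics.

```python
# Back-of-the-envelope panel funnel using the illustrative figures above.
panel_size = 1_000_000        # hypothetical U.S. panel members
colorado_share = 0.02         # Colorado is a bit under 2% of the U.S. population
denver_metro_share = 0.50     # Denver Metro is roughly half the state's population
response_rate = 0.10          # assume 10% respond to a given invite

colorado_panelists = panel_size * colorado_share            # ~20,000
denver_panelists = colorado_panelists * denver_metro_share  # ~10,000
max_respondents = denver_panelists * response_rate          # ~1,000 before screening

print(f"Maximum potential Denver Metro respondents: {max_respondents:,.0f}")
```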

What has changed

These panels are nothing new. Corona has been using them for a decade, but what has changed recently in our home market (and in most smaller geographies around the country, for that matter) is that the panels have grown large enough to supply enough respondents for our studies. A few years ago, we could only do online studies nationwide, in regions (e.g., the West, the South, etc.), or maybe in very large metropolitan areas. As panels and recruitment continued to grow, we were able to do general population studies (i.e., pretty much everyone qualifies because there are no additional screening criteria), but not studies of smaller segments of the population. Now, while we can still run into difficulty with really niche groups, we can conduct studies with parents, visitors to a certain attraction, and many other groups, all within the Denver metro area or the Front Range.

Still, a note of caution

So, problem solved, right? Unfortunately, online panels come with some caveats. First, unlike a mail or telephone survey, where the sample is randomly generated, panel results are not considered statistically representative because the sample is not a random probability sample. (There are some probability-based online panels, but for the most part they’re still only big enough for nationwide studies.) Panels are typically designed to reflect the overall population in terms of demographics, but, due to their recruiting method, they can’t be considered “random”.

Other concerns need to be taken into account such as how quickly the panel turns over respondents, avoiding respondents who try to game the system just for incentives, and other quality control measures.

For these reasons, Corona still regularly recommends other survey modes, such as mail and telephone (yes, we still do mail!), when we feel they will provide better answers for our clients. Oftentimes, however, online may be the only feasible option given the challenges with telephone (e.g., cell phones) and mail (e.g., slower, static content). Sometimes we’ll propose both to our clients and then discuss the relative tradeoffs with them.

In summary, online is a growing option for Denver and Colorado, as well as other smaller cities, but be sure to pick the mode that is best for your research – not just the one that is easiest.



Research on Research: Boosting Online Survey Response Rates

David Kennedy and Matt Herndon, both Principals here at Corona, will be presenting a webinar for the Market Research Association (MRA) on August 24th.

The topic is how to boost response rates with online surveys. Specifically, they will be presenting research Corona has done to learn how minor changes to such things as survey invites can make an impact on response rates. For instance, who the survey is “from”, the format, and salutation can all make a difference.

Click here to register. You do need to be a member to view the webinar. (We hope to post it, or at least a summary, here on our blog afterwards.)

Even if you can’t make it, rest assured that, at least if you’re a client, these lessons are already being applied to your research!


Do you have kids? Wait – let me restate that.

Karla Raines and I had dinner last week with another couple who share our background and interest in social research.  We were talking about the challenges of understanding other people’s decisions when you don’t understand their background, and how we can hold biases that we don’t even realize we have.

It brought me back to the topic of how we design and ask questions on surveys, and my favorite example of unintentional background bias on the part of the designer.

A common question, both in research and in social conversations, is the ubiquitous, “Do you have kids?”  It’s an easy question to answer, right?  If you ask Ward and June Cleaver, they’ll immediately answer, “We have two, Wally and Beaver”.  (June might go with the more formal ‘Theodore’, but you get the point.)

When we ask the question in a research context, we’re generally asking it for a specific reason.  Children often have a major impact on how people behave, and we’re usually wondering if there’s a correlation on a particular issue.

But ‘do you have kids’ is a question that may capture much more than the classic Wally and Beaver household.  If we ask that question, the Cleaver family will answer ‘yes’, but so will a 75-year-old who has two kids, even if those kids are 50 years old and grandparents themselves.  So ‘do you have kids’ isn’t the question we want to ask in most contexts.

What if we expanded the question to ‘do you have children under 18’?  It gets a bit tricky here if we put ourselves in the minds of respondents, and this is where our unintentional background bias may come into play.  Ward and June will still answer yes, but what about a divorced parent who doesn’t have custody?  He or she may accurately answer yes, but there’s not a child living in their home.  Are we capturing the information that we think we’re capturing?

And what about a person who’s living with a boyfriend and the boyfriend’s two children?  Or the person who has taken a foster child into the home?  Or the grandparent who is raising a grandchild while the parents are serving overseas?  Or the couple whose adult child is temporarily back home with her own kids in tow?

If we’re really trying to figure out how children impact decisions, we need to observe and recognize the incredible diversity of family situations in the modern world, and how that fits into our research goal.  Are we concerned about whether the survey respondent has given birth to a child?  If they’re a formal guardian of a child?  If they’re living in a household that contains children, regardless of the relationship?

The proper question wording will depend on the research goals, of course.  We often are assessing the impact of children within a household when we ask these questions, so we find ourselves simply asking, “How many children under the age of 18 are living in your home?”, perhaps with a follow-up about the relationship where necessary.  But it’s easy to be blinded by our own life experiences when designing research, and the results can lead to errors in our conclusions.

So the next time you’re mingling at a party, we suggest not asking “Do you have kids?” and offer that you should instead ask, “How many children under the age of 18 are living in your home?”  It’s a great conversation starter and will get you much better data about the person you’re chatting with.


There is more to a quality survey than margin of error

One of the areas in which Corona excels is helping our clients who aren’t “research experts” to understand how to do research in a way that will yield high-quality, reliable results.  One of the questions we are frequently asked is how many completed surveys are necessary to ensure a “good survey.”  While the number of surveys definitely has an impact on data quality, the real answer is that there are many things beyond sample size that you have to keep in mind in order to ensure your results are reliable.  Here is an overview of four common types of errors you can make in survey research.

Sampling Error

Sampling error is the one type of error that can be easily summarized with a number.  Because of this, many tend to think of it as the main way of reporting a survey’s quality.  Sampling error refers to the “margin of error” of the results of a survey caused by the fact that you didn’t survey everyone in the population you are surveying – only a sample.  The “error” occurs when you draw a conclusion based on a survey result that may have been different if you’d conducted a larger survey to gather a wider variety of opinions.  As an example, imagine that you wanted to conduct a survey of people at a concert about their reasons for attending.  In an extreme case, you could collect 10 random responses to your question and draw conclusions about the population from that, but chances are the next 10 people you hear from might have very different opinions.  As you collect more and more surveys, this becomes less of an issue.

It’s important to keep in mind, however, that the calculations for sampling error assume that 1) your sample was random (see coverage error below) and 2) everyone you chose for the survey ended up responding to the survey (see non-response error below) – neither of which is true a lot of the time.  Because of that, it is important to realize that any margin of error calculation is likely only telling you part of the story about a survey’s quality.

Still, it is certainly true that, when obtained properly, larger sample sizes tend to produce more reliable results.  Generally speaking, a sample of 100 responses will give you a general feel for your population’s opinions (with margins of error around ±10 percent), 400 responses will give you reasonably reliable results (with margins of error around ±5 percent), and larger samples will allow you to examine the opinions of smaller segments of your audience.  Obtaining a sufficiently large sample size will therefore always be a key consideration in survey research.
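For the curious, those rough figures come from the standard margin of error formula for a proportion, z * sqrt(p(1-p)/n). The short sketch below assumes a 95 percent confidence level and the conservative p = 0.5, and (like any margin of error calculation) it ignores the coverage and non-response issues discussed next.

```python
# Approximate margin of error for a proportion at 95% confidence,
# using the conservative assumption p = 0.5 (the worst case).
from math import sqrt

def margin_of_error(n, p=0.5, z=1.96):
    """Margin of error (as a proportion) for a simple random sample of size n."""
    return z * sqrt(p * (1 - p) / n)

for n in (100, 400, 1000):
    print(f"n = {n:>4}: ±{margin_of_error(n) * 100:.1f} percentage points")
# n = 100 -> about ±9.8, n = 400 -> about ±4.9, n = 1000 -> about ±3.1
```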

Measurement Error

Measurement error is one of the most difficult types of errors to identify and is probably the most common mistake made by amateur survey researchers.  Measurement error occurs when respondents don’t understand how to properly respond to a question due to the way it’s worded or the answer options you’ve provided.  For example, if you were to ask concert goers how long it took them to get to the venue that night, you might get a set of answers that look reasonable, but what if some picked up their friends on the way?  What if some went to dinner before the concert?  Similarly, if you asked whether they came to the concert because of the band or because they like the venue, you might conclude that a majority came for the band when the real reason was that many just wanted to hang out with their friends.

So how do you protect against this?  Whenever possible, it’s important to test your survey – even if it’s just with a friend or coworker who is not involved in the research.  Have them take the survey, then have them talk you through how they interpreted each question and how they decided which answer fit best.  Then, if necessary, make changes to your survey in any areas that were unclear.

Coverage Error

Once you have a well-designed set of survey questions developed, the next step is to determine how you are going to get people to take your survey.  It might be tempting to just put a link on the concert venue’s Facebook and Twitter pages to ask people their opinions, but the results of such a survey likely wouldn’t reflect the opinions of all concert goers because it’s unlikely that all of them use social media.  If you were to do a survey in this fashion, you might find that attendees tended to be very tech savvy and open to new ideas and approaches, when the reality was that those just happen to be characteristics of people who use social media. The results of your survey might be skewed because you didn’t “cover” everyone in the audience (not to mention the possible issue of some taking the survey that didn’t actually attend the concert).

In order to ensure your survey is as high quality as possible, look for ways to ensure that everyone you are trying to represent is included in your sampling frame (even if you randomly choose a subset to actually receive an invitation).  If it’s truly not possible to do so, be sure to at least keep the potential bias of your respondent pool in mind as you interpret the results.

Non-response Error

As the final type of common error, non-response error is caused by the fact that, no matter how well you have designed and implemented your survey, there are a lot of people out there who simply aren’t going to respond.  Similar to coverage error discussed previously, non-response can cause you to draw conclusions from your survey’s results that may not be reflective of the entire population you are studying.  For example, many concert goers wouldn’t want to be bothered to take a survey, so the results you get would likely only be representative of a type of person who either 1) didn’t value their time as highly or 2) was particularly interested in the survey topic.

Unfortunately, non-response error is extremely difficult to eliminate entirely and will be a concern with just about any survey.  The most common approach is to try to boost your response rate as much as possible through a combination of frequent reminders, incentives, and appeals to respondents’ desire to be helpful in your messaging about the study, but even the best surveys typically only achieve response rates of 30-40 percent.  If budget is no issue, perhaps the best solution is to conduct follow-up research with those who didn’t originally respond, but even then, there will always be some who simply refuse to participate.

~

When it comes down to it, there is no such thing as a perfect survey, so any study will necessarily need to balance data quality with your timeline and budget.  Many of Corona’s internal debates involve discussing ways to reduce these errors as effectively as possible for our clients, and we are always happy to discuss various tradeoffs in approaches and how they will impact data quality.  Regardless, we hope that the next time you see a headline about a survey with a margin of error used to represent its quality, you will keep in mind that there is a lot more to determining the quality of a survey than that one number.


Happy or not

A key challenge for the research industry – and any company seeking feedback from its customers – is gaining participation.

There have been books written on how to reduce non-response (i.e. increase participation), and tactics often include providing incentives, additional touch points, and finely crafted messaging. All good and necessary.

But one trend we’re seeing more of is the minimalist “survey.” One question, maybe two, to gather point-in-time feedback. You see this on rating services (e.g., Square receipts, your Uber driver, at the checkout, etc.), simple email surveys where you respond by clicking a link in the email, and texting feedback, to name a few.

Recently, travelling through Iceland and Norway, I came across this stand asking “happy or not” at various points in the airport (e.g., check-in, bathrooms, and luggage claim). Incredibly simple and easy to do. You don’t even have to stop – just hit the button as you walk by.

A great idea, sure, but did people use it? In short, yes. While I don’t know the actual numbers, based on observation alone (what else does a market researcher do while hanging out in airports?), I did see several people interact with it as they went by. Maybe it’s the novelty of it and in time people will come to ignore it, but it’s a great example of collecting in-the-moment feedback, quickly and effortlessly.

Now, asking one question and getting a checkbox response will not tell you as much as a 10-minute survey will, but if it gets people to actually participate, it is a solid step in the right direction. Nothing says this has to be your only data, either. I would assume that, in addition to the score received, there is also a time and date stamp associated with responses (and excessive button pushes at one time should probably be “cleaned” from the data). Taken together, investigating problems becomes easier (maybe people are only dissatisfied during the morning rush at the airport?), and if necessary, additional research can always be conducted to further investigate issues.
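As an illustration of that kind of cleaning, here is a hedged sketch of how one might collapse rapid-fire button presses into a single response. The file name, column names, and three-second threshold are assumptions made for the example, not details of the actual kiosk’s data.

```python
# Sketch: drop rapid-fire presses (e.g., someone mashing the button repeatedly)
# so each burst counts once. File, columns, and threshold are hypothetical.
import pandas as pd

presses = pd.read_csv("kiosk_presses.csv", parse_dates=["timestamp"])
presses = presses.sort_values(["station_id", "timestamp"])

# Time elapsed since the previous press at the same station
gap = presses.groupby("station_id")["timestamp"].diff()

# Keep the first press at each station and any press more than 3 seconds
# after the one before it; drop the rest as likely duplicates
cleaned = presses[gap.isna() | (gap > pd.Timedelta(seconds=3))]

print(f"Kept {len(cleaned)} of {len(presses)} presses after cleaning")
```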

What other examples, successful or not, are you seeing organizations use to collect feedback?


Turning Passion into Actionable Data

Nonprofits are among my favorite clients that we work with here at Corona for a variety of reasons, but one of the things that I love most is the passion that surrounds nonprofits.  That passion shines through the most in our work when we do research with internal stakeholders for the nonprofit.  These could include donors, board members, volunteers, staff, and program participants.  These groups of people, who are already invested in the organization, are passionate about helping to improve it, which is good news when conducting research, as it often makes them more likely to participate and increases response rates.

Prior to joining the Corona team, I worked in the volunteer department of a local animal shelter.  As a data nerd even then, I wanted to know more about who our volunteers were, and how they felt about the volunteer program.  I put together an informal survey, and while I still dream about what nuggets could have been uncovered if we had gone through a more formal Corona-style process, the data we uncovered was still valuable in helping us determine what we were doing well and what we needed to improve on.

That’s just one example, but the possibilities are endless.  Maybe you want to understand what motivated your donors to contribute to your cause, how likely they are to continue donating in the future, and what would motivate them to donate more.  Perhaps you want to evaluate the effectiveness of your programs.  Or, maybe you want to know how satisfied employees are with working at your organization and brainstorm ideas on how to decrease stress and create a better workplace.

While you want to be careful about being too internally focused and ignoring the environment in which your nonprofit operates, there is huge value in leveraging passion by looking internally at your stakeholders to help move your organization forward.