RADIANCE BLOG

Category: Quantitative Research

Thinking strategically about benchmarks

When our clients are thinking about the data they would like to collect to answer a question, we are sometimes asked about external benchmarking data. When you benchmark your data, you are essentially asking how you compare to other organizations or competitors. While external benchmarks can be useful, there are a few points to consider when deciding whether benchmarking your data will actually be useful:

  1. Context is key. Comparing yourself to other organizations or competitors can encourage some big-picture thinking about your organization. But it is important to remember the context of the benchmark data. Are the benchmark organizations similar to you? Are they serving similar populations? How do they compare in size and budget? Additionally, external benchmark data may only be available in aggregated form; for example, nonprofit and government organizations may be grouped together. Sometimes these differences are not important, but other times they are an important lens through which you should examine the data.
  2. Benchmark data is inherently past-focused. When you compare your data to that of other organizations, you are comparing yourself to the past. There is a time lag in any data collection, and the data reflect the impact of changes or policies that have already been implemented. While this can be useful, if your organization is trying to adapt to changes you see on the horizon, comparing yourself to the past may be less helpful.
  3. Benchmark data is generally more useful as part of a larger research project. For example, if your organization differs significantly from other external benchmarks, it can be helpful to have data that suggest why that is.
  4. What you can benchmark on may not be what is most useful. Often, you are limited in the types of data available about other organizations, such as certain financial or visitor data. Sometimes the exact same set of questions is administered to many organizations, and you are limited to those questions for benchmarking.

Like most research, external benchmarking can be useful—it is just a matter of thinking carefully about how and when to best use it.


Does This Survey Make Sense?

It’s pretty common for Corona to combine qualitative and quantitative research in our projects. We will often use qualitative work to inform what we need to ask about in the quantitative phases of the research, or use qualitative research to better understand the nuances of what we learned in the quantitative phase. But did you know that we can also use qualitative research to help design quantitative research instruments through something called cognitive testing?

The process of cognitive testing is actually pretty simple, and we treat it a lot like a one-on-one interview.  To start, we recruit a random sample of participants who would fit the target demographic for the survey.  Then, we meet with the participants one-on-one and have them go through the process of taking the survey.  We then walk through the survey with them and ask specific follow-up questions to learn how they are interpreting the questions and find out if there is anything confusing or unclear about the questions.

In a nutshell, the purpose of cognitive testing is to understand how respondents interpret survey questions and, ultimately, to write better questions. Cognitive testing can be an effective tool for any survey, but it is particularly important for surveys on topics that are complicated or controversial, or when the survey is distributed to a wide and diverse audience. For example, you may learn through cognitive testing that the terminology you use internally to describe your services is not widely used or understood by the community. In that case, we will need to simplify the language used in the survey. Or, you may find that the questions you are asking are too specific for most people to know how to answer, in which case the survey may need to ask higher-level questions or include a “Don’t Know” response option on many questions. It’s also always good to make sure that the survey questions don’t seem leading or biased in any way, particularly when asking about sensitive or controversial topics.

Not only does cognitive testing allow us to write better survey questions, but it can also help with analysis.  If we have an idea of how people are interpreting our questions, we have a deeper level of understanding of what the survey results mean.  Of course, our goal is to always provide our clients with the most meaningful insights possible, and cognitive testing is just one of the many ways we work to deliver on that promise.


Ensuring your graphs are honest

For our firm, the very idea of fake news goes against our mission to:

Provide accurate and unbiased information and counsel to decision makers.

The realm of fake news spans the spectrum of misleading to outright lying. It is the former that got us thinking about how graphs are sometimes twisted to mislead, while not necessarily being wrong.

Below are four recommendations to prevent misinterpretation when making your own graphs (or things to look for when interpreting those seen in the news).

1. Use the same scales across graphs being compared.

Showing similar data for different groups or from different times? Make the graphs the same scale to aid easy, accurate comparisons.

Take the examples below. Maybe you have two graphs, perhaps on separate pages, used to illustrate the differences between Groups 1 and 2. If someone were to look between them to see differences over time, the visual wouldn’t convey that 2016 saw a doubling of the proportion who “agreed.” The bar is slightly longer, but not twice as long.

[Figure: scales-across-graphs example]

Sure, including axis and data labels helps, but the benefit of a graph is that you can quickly see the result with little extra interpretation. Poorly designed graphs, no matter the labeling, can still mislead.
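If you build your graphs in code, pinning both charts to the same axis range is a one-line safeguard. Below is a minimal sketch using Python and matplotlib; the group names and percentages are invented for illustration.

    # Plot two groups' "agree" results on a shared 0-100 percent scale so that
    # a doubling from 2015 to 2016 actually looks like a doubling.
    import matplotlib.pyplot as plt

    years = ["2015", "2016"]
    group1_agree = [20, 40]   # percent who "agreed" (invented values)
    group2_agree = [35, 45]

    fig, axes = plt.subplots(1, 2, figsize=(8, 3))
    for ax, label, values in zip(axes, ["Group 1", "Group 2"], [group1_agree, group2_agree]):
        ax.barh(years, values)
        ax.set_xlim(0, 100)            # identical scale on both charts
        ax.set_xlabel("Percent who agree")
        ax.set_title(label)

    plt.tight_layout()
    plt.show()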

2. Start the graph origin at zero.

Similar to the point above, not starting a graph at zero can make differences look far larger than they really are.

In the below examples, both graphs show exactly the same data but start from different points, making the differences in the first graph look proportionately larger than they are.

[Figure: zero-point example]
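Here is a comparable sketch in matplotlib, again with made-up numbers, showing the same two values plotted with a truncated axis and with a zero baseline.

    # The same data charted two ways: a truncated axis exaggerates the gap,
    # while a zero baseline keeps it in proportion.
    import matplotlib.pyplot as plt

    labels = ["Group 1", "Group 2"]
    values = [72, 78]                  # invented values

    fig, (ax_truncated, ax_zero) = plt.subplots(1, 2, figsize=(8, 3))

    ax_truncated.bar(labels, values)
    ax_truncated.set_ylim(70, 80)      # misleading: axis starts at 70
    ax_truncated.set_title("Truncated axis")

    ax_zero.bar(labels, values)
    ax_zero.set_ylim(0, 100)           # honest: axis starts at zero
    ax_zero.set_title("Zero baseline")

    plt.tight_layout()
    plt.show()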

3. Convey the correct magnitude.

Sometimes, a seemingly small amount may have significant meaning (think tenths of a degree in global temperatures), while sometimes a large amount may not (think a million dollars within the Federal budget).

Choosing the proper graph type, design, and what to actually graph all make a difference here.

For example, when graphing global temperatures, plotting the differences from a baseline may convey the magnitude better than plotting the absolute temperatures, where the relatively small-looking changes fail to communicate the finding.
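As a rough sketch of that idea (the temperature series and baseline below are invented for illustration), plotting the anomaly rather than the absolute value makes the trend visible:

    # Plot a temperature series two ways: absolute values on a wide scale,
    # and differences from a baseline, where the trend is actually visible.
    import matplotlib.pyplot as plt

    years = list(range(2000, 2010))
    temps = [14.3, 14.4, 14.4, 14.5, 14.5, 14.6, 14.6, 14.7, 14.8, 14.8]  # °C (invented)
    baseline = 14.0                                                        # assumed reference mean
    anomalies = [t - baseline for t in temps]

    fig, (ax_abs, ax_anom) = plt.subplots(1, 2, figsize=(9, 3))

    ax_abs.plot(years, temps)
    ax_abs.set_ylim(0, 20)             # on this scale the change looks negligible
    ax_abs.set_title("Absolute temperature (°C)")

    ax_anom.plot(years, anomalies)
    ax_anom.set_title("Difference from baseline (°C)")

    plt.tight_layout()
    plt.show()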

4. Make it clear who is represented by the data.

Does this data represent the entire population? Only voters? Only likely voters? Only those who responded “yes” to a previous question? Only those home on a Thursday night with a landline? (If it’s the latter, save your time and just ignore it completely.)

Usually, the safest bet is to show results for the whole population, even if the question was only asked of a subset of people due to a skip pattern. This is easiest for readers to mentally process and prevents a subgroup’s proportion from being mistaken for the whole population’s.

For instance, if 50% of people who were aware of Brand A had seen an ad for the brand, but only 10% of the population were aware of Brand A in the first place (and, therefore, were asked the follow-up about ads), then only about 5% of the total population has seen the ad. To a casual reader, that subtle difference in who the results represent could be significant.
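The re-basing itself is just multiplication. A minimal sketch using the Brand A numbers from the example above:

    # Re-base a skip-pattern result to the whole sample.
    aware_of_brand = 0.10        # 10% of all respondents were aware of Brand A
    saw_ad_given_aware = 0.50    # 50% of those aware had seen an ad

    saw_ad_overall = aware_of_brand * saw_ad_given_aware
    print(f"Share of all respondents who saw the ad: {saw_ad_overall:.0%}")  # 5%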


This, of course, isn’t our first time writing about graph standards. Check out some of our other blogs on the subject here:

Graphs: An effective tool, but use them carefully

Visualizing data: 5 Best practices


Research on Research: Boosting Online Survey Response Rates

David Kennedy and Matt Herndon, both Principals here at Corona, will be presenting a webinar for the Market Research Association (MRA) on August 24th.

The topic is how to boost response rates for online surveys. Specifically, they will be presenting research Corona has done on how minor changes to things like survey invitations can affect response rates. For instance, who the survey is “from,” the format, and the salutation can all make a difference.

Click here to register. You do need to be a member to view the webinar. (We hope to post it, or at least a summary, here on our blog afterwards.)

Even if you can’t make it, rest assured that, if you’re a client, these lessons are already being applied to your research!


Do you have kids? Wait – let me restate that.

Karla Raines and I had dinner last week with another couple who share our background and interest in social research. We were talking about the challenges of understanding other people’s decisions when you don’t understand their background, and about how we can hold biases we don’t even realize.

It brought me back to the topic of how we design and ask questions on surveys, and my favorite example of unintentional background bias on the part of the designer.

A common question, both in research and in social conversations, is the ubiquitous, “Do you have kids?”  It’s an easy question to answer, right?  If you ask Ward and June Cleaver, they’ll immediately answer, “We have two, Wally and Beaver”.  (June might go with the more formal ‘Theodore’, but you get the point.)

When we ask the question in a research context, we’re generally asking it for a specific reason.  Children often have a major impact on how people behave, and we’re usually wondering if there’s a correlation on a particular issue.

But ‘do you have kids’ is a question that may capture much more than the classic Wally and Beaver household. If we ask that question, the Cleaver family will answer ‘yes’, but so will a 75-year-old whose two kids are 50 years old and grandparents themselves. So ‘do you have kids’ isn’t the question we want to ask in most contexts.

What if we expanded the question to ‘do you have children under 18’?  It gets a bit tricky here if we put ourselves in the minds of respondents, and this is where our unintentional background bias may come into play.  Ward and June will still answer yes, but what about a divorced parent who doesn’t have custody?  He or she may accurately answer yes, but there’s not a child living in their home.  Are we capturing the information that we think we’re capturing?

And what about a person who’s living with a boyfriend and the boyfriend’s two children?  Or the person who has taken a foster child into the home?  Or the grandparent who is raising a grandchild while the parents are serving overseas?  Or the couple whose adult child is temporarily back home with her own kids in tow?

If we’re really trying to figure out how children impact decisions, we need to observe and recognize the incredible diversity of family situations in the modern world, and how that fits into our research goal.  Are we concerned about whether the survey respondent has given birth to a child?  If they’re a formal guardian of a child?  If they’re living in a household that contains children, regardless of the relationship?

The proper question wording will depend on the research goals, of course. We often are assessing the impact of children within a household when we ask these questions, so we find ourselves simply asking, “How many children under the age of 18 are living in your home?”, perhaps with a follow-up about the relationship where necessary. But it’s easy to be blinded by our own life experiences when designing research, and the results can lead to errors in our conclusions.

So the next time you’re mingling at a party, we suggest not asking “Do you have kids”, and offer that you should instead ask, “How many children under the age of 18 are living in your home?”  It’s a great conversation starter and will get you much better data about the person you’re chatting with.


There is more to a quality survey than margin of error

One of the areas in which Corona excels is helping our clients who aren’t “research experts” to understand how to do research in a way that will yield high-quality, reliable results.  One of the questions we are frequently asked is how many completed surveys are necessary to ensure a “good survey.”  While the number of surveys definitely has an impact on data quality, the real answer is that there are many things beyond sample size that you have to keep in mind in order to ensure your results are reliable.  Here is an overview of four common types of errors you can make in survey research.

Sampling Error

Sampling error is the one type of error that can be easily summarized with a number. Because of this, many tend to think of it as the main way of reporting a survey’s quality. Sampling error refers to the “margin of error” in the results of a survey caused by the fact that you didn’t survey everyone in the population of interest – only a sample. The “error” occurs when you draw a conclusion based on a survey result that might have been different had you conducted a larger survey and gathered a wider variety of opinions. As an example, imagine that you wanted to survey people at a concert about their reasons for attending. In an extreme case, you could collect 10 random responses and draw conclusions about the population from those, but chances are the next 10 people you hear from might have very different opinions. As you collect more and more surveys, this becomes less of an issue.

It’s important to keep in mind, however, that the calculations for sampling error assume that 1) your sample was random (see coverage error below) and 2) everyone you chose for the survey actually responded (see non-response error below) – neither of which is true much of the time. Because of that, any margin of error calculation is likely telling you only part of the story about a survey’s quality.

Still, it is certainly true that (when obtained properly) larger sample sizes tend to produce more reliable results. Generally speaking, a sample of 100 responses will give you a general feel for your population’s opinions (with a margin of error around ±10 percent), 400 responses will give you reasonably reliable results (with a margin of error around ±5 percent), and larger samples will allow you to examine the opinions of smaller segments of your audience. Obtaining an adequate sample size will therefore always be a key consideration in survey research.
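Those rules of thumb come from the standard margin-of-error formula for a proportion. Here is a minimal sketch of the arithmetic in Python, assuming a simple random sample, full response, a 95 percent confidence level, and the worst-case proportion of 50 percent:

    # Approximate 95% margin of error for a proportion from a simple random sample.
    import math

    def margin_of_error(n, p=0.5, z=1.96):
        return z * math.sqrt(p * (1 - p) / n)

    for n in (100, 400, 1000):
        print(f"n = {n:>4}: ±{margin_of_error(n):.1%}")
    # n =  100: ±9.8%   (roughly ±10 percent)
    # n =  400: ±4.9%   (roughly ±5 percent)
    # n = 1000: ±3.1%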

Measurement Error

Measurement error is one of the most difficult types of errors to identify and is probably the most common mistake made by amateur survey researchers.  Measurement error occurs when respondents don’t understand how to properly respond to a question due to the way it’s worded or the answer options you’ve provided.  For example, if you were to ask concert goers how long it took them to get to the venue that night, you might get a set of answers that look reasonable, but what if some picked up their friends on the way?  What if some went to dinner before the concert?  Similarly, if you asked whether they came to the concert because of the band or because they like the venue, you might conclude that a majority came for the band when the real reason was that many just wanted to hang out with their friends.

So how do you protect against this?  Whenever possible, it’s important to test your survey – even if it’s just with a friend or coworker who is not involved in the research.  Have them take the survey, then have them talk you through how they interpreted each question and how they decided which answer fit best.  Then, if necessary, make changes to your survey in any areas that were unclear.

Coverage Error

Once you have a well-designed set of survey questions, the next step is to determine how you are going to get people to take your survey. It might be tempting to just put a link on the concert venue’s Facebook and Twitter pages to ask people their opinions, but the results of such a survey likely wouldn’t reflect the opinions of all concert goers because it’s unlikely that all of them use social media. If you were to do a survey in this fashion, you might conclude that attendees tended to be very tech-savvy and open to new ideas and approaches, when in reality those just happen to be characteristics of people who use social media. The results of your survey might be skewed because you didn’t “cover” everyone in the audience (not to mention the possibility of some respondents taking the survey who didn’t actually attend the concert).

In order to ensure your survey is as high quality as possible, look for ways to ensure that everyone you are trying to represent is included in your sampling frame (even if you randomly choose a subset to actually receive an invitation).  If it’s truly not possible to do so, be sure to at least keep the potential bias of your respondent pool in mind as you interpret the results.

Non-response Error

As the final type of common error, non-response error is caused by the fact that, no matter how well you have designed and implemented your survey, there are a lot of people out there who simply aren’t going to respond.  Similar to coverage error discussed previously, non-response can cause you to draw conclusions from your survey’s results that may not be reflective of the entire population you are studying.  For example, many concert goers wouldn’t want to be bothered to take a survey, so the results you get would likely only be representative of a type of person who either 1) didn’t value their time as highly or 2) was particularly interested in the survey topic.

Unfortunately, non-response error is extremely difficult to eliminate entirely and will be a concern with just about any survey. The most common approach is to boost your response rate as much as possible through a combination of frequent reminders, incentives, and messaging that appeals to respondents’ desire to be helpful, but even the best surveys typically achieve response rates of only 30-40 percent. If budget is no issue, perhaps the best solution is to conduct follow-up research with those who didn’t originally respond, but even then, there will always be some who simply refuse to participate.

~

When it comes down to it, there is no such thing as a perfect survey, so any study will necessarily need to balance data quality with your timeline and budget.  Many of Corona’s internal debates involve discussing ways to reduce these errors as effectively as possible for our clients, and we are always happy to discuss various tradeoffs in approaches and how they will impact data quality.  Regardless, we hope that the next time you see a headline about a survey with a margin of error used to represent its quality, you will keep in mind that there is a lot more to determining the quality of a survey than that one number.


Turning Passion into Actionable Data

Nonprofits are among my favorite clients to work with here at Corona for a variety of reasons, but one of the things I love most is the passion that surrounds them. That passion shines through most clearly when we do research with a nonprofit’s internal stakeholders, which could include donors, board members, volunteers, staff, and program participants. These groups, already invested in the organization, are passionate about helping to improve it, which is good news when conducting research: it often makes them more likely to participate and increases response rates.

Prior to joining the Corona team, I worked in the volunteer department of a local animal shelter. As a data nerd even then, I wanted to know more about who our volunteers were and how they felt about the volunteer program. I put together an informal survey, and while I still dream about what nuggets could have been uncovered if we had gone through a more formal, Corona-style process, the data were still valuable in helping us determine what we were doing well and what we needed to improve.

That’s just one example, but the possibilities are endless.  Maybe you want to understand what motivated your donors to contribute to your cause, how likely they are to continue donating in the future, and what would motivate them to donate more.  Perhaps you want to evaluate the effectiveness of your programs.  Or, maybe you want to know how satisfied employees are with working at your organization and brainstorm ideas on how to decrease stress and create a better workplace.

While you want to be careful about being too internally focused and ignoring the environment in which your nonprofit operates, there is huge value in leveraging passion by looking internally at your stakeholders to help move your organization forward.

 


Informal Research for Nonprofit Organizations

While Corona serves all three sectors (private, public, and nonprofit) in our work, we have always had a soft spot for our nonprofit clients.  No other type of organization is asked to do more with less, so we love working with nonprofits to help them refine their strategies to be both more effective at fulfilling their missions and more financially stable at the same time.

However, while we are thrilled for the opportunities to work with dozens of nonprofits every year, we know that there are hundreds of other organizations that we don’t work with, many of which simply don’t have the resources to devote to a formal marketing research effort. I’m a huge fan of the Discovery Channel show MythBusters, so I’ll share one of my favorite quotes:

“The only difference between screwing around and science is writing it down.” (Image from Tested, courtesy of DCL: http://www.tested.com/art/makers/557288-origin-only-difference-between-screwing-around-and-science-writing-it-down/)

While few would argue that the results found in an episode of MythBusters would qualify as academically rigorous research, I think most would agree that trying a few things out and seeing what happens is at least better than just trusting your gut instinct alone.  Likewise, here are a few ideas for ways that nonprofits can gather at least some basic information to help guide their strategies through informal “market research.”

Informal Interviews

One-on-one interviews are one of the easiest ways to gather feedback from a wide variety of individuals. Formal interview research involves a third party randomly recruiting participants from the entire universe of people you are trying to understand, but simply talking to people one-on-one about the issues or strategies you are considering can be very insightful. Here are a few pointers on getting the most out of informal interviews:

  • Dedicate time for the interview. It may seem easy to just chat with someone informally at dinner or at an event, but the multitude of distractions will reduce the value you get out of the conversation.  Find a time that both parties can really focus on the discussion, and you’ll get much better results.
  • Write your questions down in advance. It’s easy to go down a rabbit hole when having a conversation about something you are passionate about, so be sure to think through the questions you need to answer so that you can keep the conversation on track.
  • Record the conversation (or at least take notes). Take the MythBusters’ advice, and document the conversation.  If you’ve talked to a dozen people about your idea, it will be impossible for you to remember it all.  By having documentation of the conversations, you can look back later and have a better understanding of what your interviewees said.

Informal focus groups

Similar to interviews, in an ideal world focus groups should be conducted by a neutral third party, with an experienced moderator who can effectively guide the group discussion to meet your goals. However, as with interviews, you can still get a lot of value out of just sitting down with a group and talking through the issues. In particular, if you have an event or conference where many people are already together, grabbing a few of them to talk through your ideas can be very informative. Our suggestions for this type of “research” are similar to those for informal interviews, with slight differences in their implications:

  • Dedicate time for the discussion. As mentioned before, it may be tempting to just say “We’ll talk about this over dinner” or “Maybe if we have time at the end of the day we can get together.”  You’ll get far better results if everyone can plan for the conversation in advance and participate without distractions.
  • Write your questions down in advance. Even more so than for interviews, having a formal plan about what questions you want to ask is imperative.  Group discussions have a tendency of taking on a life of their own, so having a plan can help you to guide the discussion back on topic.
  • Document the results. Again, you may think you can remember everything that was said during a conversation, but a few months down the road, you will be very thankful that you took the time to either record the conversation or take notes about what was said.

Informal Surveys

Surveys are perhaps the most difficult of these ideas to implement on an informal basis, but they can nevertheless be very useful. If you just need some guidance on how members of an organization feel about a topic, asking for a show of hands at a conference is a perfectly viable way of getting at least a general idea of where members stand. Similarly, if you have a list of email addresses for your constituents, you could simply pose your question in an email and ask people to respond with their “vote.”

The trickiest part is making sure that you understand what the results actually represent.  If your conference is only attended by one type of member, don’t assume that their opinions are the same as other member types.  Likewise, if you only have email addresses for 10 percent of your constituents, be careful with assuming that their opinions reflect those of the other 90 percent.  Even so, these informal types of polling can help you to at least get an idea of how groups feel on the whole.

~

Hopefully these ideas can give your nonprofit organization a place to start when trying to understand reactions to your ideas or strategies.  While these informal ways of gathering data will never be as valuable as going through a formal research process, they can provide at least some guidance as you move forward that you wouldn’t have had otherwise.

And if your issues are complex enough that having true, formal research is necessary to ensure that you are making the best possible decisions for your organization, we’ve got a pretty good recommendation on who you can call…


Who’s Excited for 2016?

Oh man, where did 2015 even go? Sometimes the end of the year makes me anxious because I start thinking about all the things that need to be done between now and December 31st. And then I start thinking about things that I need to do in the upcoming year, like figuring out how to be smarter than robots so that they don’t steal my job and learning a programming language since I’ll probably need it to talk to the robots that I work with in the future. Ugh.

Feeling anxious and feeling excited share many of the same physical features (e.g., sweaty palms, racing heart), and research has shown that it is possible to shift feelings of anxiety to feelings of excitement by doing something as simple as telling yourself you are excited. So, let me put these clammy hands to use and share some of the things that I am excited about for 2016:

  • Technological advancements for data collection. Changes in phone survey sampling are improving the cell phone component of a survey. Also, we have been looking at so many new, cool ways of collecting data, especially qualitative data. Cell phones, which are super annoying for phone surveys, are simultaneously super exciting for qualitative research. I’m excited to try some of these new techniques in 2016.
  • Improvements in the technology that allows us to more easily connect with both clients and people who work remotely. We use this more and more in our office. I’m not sure if in 2016 we will finally have robots with iPads for heads that allow people to Skype their faces into the office, but I can dream.
  • Work trips! I realize that work trips might be the stuff of nightmares at other jobs. But Coronerds understand the importance of finding humor, delicious food, and sometimes a cocktail during a work trip.
  • New research for clients old and new. This year I’ve learned all sorts of interesting facts about deck contractors, the future of museums, teenage relationships, people’s health behaviors, motorcyclists, business patterns in certain states, how arts can transform a city, and many more! I can’t wait to see what projects we work on next year.
  • Retreat. For people who really love data and planning, there is nothing as soothing as getting together as a firm to pore over a year’s worth of data about our own company and draw insights and plans from it.

Alright, I feel a lot better about 2016. Now I’m off to remind myself that these clammy hands also mean that I’m very excited about holiday travel, last minute shopping, and holiday political discussions with the extended family…


How to Choose your own Adventure when it comes to Research

One of the things we’ve been doing at Corona this year that I’ve really enjoyed is resurrecting our book club. I enjoy it because it’s one way to think about the things we are doing from a bigger picture point of view, which is a welcome contrast to the project-specific thinking we are normally doing. One topic that’s come up repeatedly during our book club meetings is the pros and cons of different types of research methodology.

Knowing what kind of research you need to answer a question can be difficult if you have little experience with research. Understanding the different strengths and weaknesses of different methodologies can make the process a little easier and help ensure that you’re getting the most out of your research. Below I discuss some of the key differences between qualitative and quantitative research.

Qualitative Research

Qualitative research usually consists of interviews or focus groups, although other methodologies exist. The main benefit of qualitative research is that it is so open. Instead of constraining people in their responses, qualitative research generally allows for free-flowing, more natural responses. Focus group moderators and interviewers can respond in the moment to what participants are saying to draw out even deeper thinking about a topic. Qualitative research is great for brainstorming or finding key themes and language.

Qualitative data tend to be very rich, and you can explore many different themes within the data. One nice feature of qualitative research is that you can ask about topics you have very little information about. For example, a survey might ask, “Which of the following best describes this organization? X, Y, Z, or none of the above.” This quantitative question assumes that X, Y, and Z are the main ways people describe the organization, which requires at least some prior knowledge. A qualitative question on the same topic would simply ask, “How would you describe this organization?” This is one of the reasons qualitative research is great for exploratory research.

The primary weakness of qualitative research is that you can’t generate a valid population statistic from it. For example, although you could calculate what percent of focus group participants said that Y was a barrier to working with your organization, you couldn’t generalize that estimate to the larger population. Even if 30% of participants reported that barrier, we don’t know what percent of people overall would report the same thing; we would only be able to say that it is a potential barrier. However, if you just want to identify the main barriers, qualitative research can do that. It’s important to think carefully about whether or not this limitation matters for your research project.

Quantitative Research

The main goals of quantitative research are to estimate population quantities (e.g., 61% of your donors are in Colorado) and test for statistical difference between groups (e.g., donors in Colorado gave more money than those in other states). With quantitative research, you’re often sacrificing depth of understanding for precision.

One of the benefits to quantitative research, aside from being able to estimate population values, is that you can do a lot of interesting statistical analyses. Unlike a small sample of 30 people from focus groups, a large sample of 500 survey respondents allows for all sorts of analyses. You can look for statistical differences between groups, identify key clusters of respondents based on their responses, see if you can predict people’s responses from certain variables, etc.
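As a hedged illustration of the kind of analysis a larger quantitative sample supports, here is a minimal Python sketch of a two-sample t-test comparing gift amounts from Colorado donors with donors elsewhere; the amounts are invented for illustration.

    # Test whether Colorado donors gave more, on average, than donors elsewhere.
    from scipy import stats

    colorado_gifts = [50, 100, 75, 250, 120, 80, 60, 200]   # invented gift amounts ($)
    other_gifts = [40, 60, 55, 90, 45, 100, 35, 70]

    t_stat, p_value = stats.ttest_ind(colorado_gifts, other_gifts, equal_var=False)
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
    # A small p-value (commonly < 0.05) suggests the difference is unlikely
    # to be due to sampling error alone.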

There usually is not one single best way to answer a question with data, so thinking through your options and the benefits afforded by those options is important. And as always, we’re here to help you make these decisions if the project is complicated.