RADIANCE BLOG

Category: Market Research

Thinking strategically about benchmarks

When our clients are thinking about the data they would like to collect to answer a question, we are sometimes asked about external benchmarking data. In essence, when you benchmark your data, you are asking how you compare to other organizations or competitors. While external benchmarks can be useful, there are a few points to consider when deciding whether benchmarking your data will be worthwhile:

  1. Context is key. Comparing yourself to other organizations or competitors can encourage some big-picture thinking about your organization. But it is important to remember the context of the benchmark data. Are the benchmark organizations similar to you? Are they serving similar populations? How do they compare in size and budget? Additionally, external benchmark data may only be available in aggregated form. For example, nonprofit and government organizations may be grouped together. Sometimes these differences are not important, but other times they are an important lens through which you should examine the data.
  2. Benchmark data is inherently past-focused. When you compare your data to that of other organizations, you are comparing yourself to the past. There is a time lag for any data collection, and the data reflect the impacts of changes or policies that have already been implemented. While this can be useful, if your organization is trying to adapt to changes that you see on the horizon, it may not be as useful to compare yourself to the past.
  3. Benchmark data is generally more useful as part of a larger research project. For example, if your organization differs significantly from other external benchmarks, it can be helpful to have data that suggest why that is.
  4. What you can benchmark on may not be the most useful. Often, you are limited in the types of data available about other organizations, such as certain financial or visitor data. Sometimes the exact same set of questions is administered to many organizations, and you are limited to those questions for benchmarking.

Like most research, external benchmarking can be useful—it is just a matter of thinking carefully about how and when to best use it.


Activating research

Research that just sits on the shelf (or these days, in a digital folder) is research that probably should not have been conducted. If it is not going to be used, then why do it?

Effective research takes many things, from the beginning through the end. We’ve blogged before about the need to start with the end in mind, but what happens when you get to the end? Then what?

Sharing results internally, with the right audiences and in an effective medium, is key. Here are several ideas for how to do that beyond the common report or PowerPoint deck.

Make it interactive. Can the data, in part or in whole, be made available for users to manipulate? This could be a fully interactive dashboard where the user selects which variables to look at, or simply a predefined analysis that users can pull up, filter, and review. For example, Corona often delivers open-ended verbatim responses with a series of filters built in so users can quickly drill down, rather than just reading hundreds or thousands of verbatim comments.
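
To make this concrete, here is a minimal sketch in Python of the kind of filterable verbatim delivery described above; the data, column names, and drill_down helper are hypothetical illustrations, not Corona’s actual deliverable format.

```python
import pandas as pd

# Hypothetical verbatim file: one open-ended comment per row, plus
# respondent attributes that can serve as filters.
comments = pd.DataFrame({
    "respondent_type": ["member", "visitor", "member", "visitor"],
    "region": ["North", "North", "South", "South"],
    "comment": [
        "Loved the new exhibit.",
        "Parking was hard to find.",
        "Staff were very helpful.",
        "Hours are too limited.",
    ],
})

def drill_down(df, **filters):
    """Return only the comments matching every supplied filter."""
    for column, value in filters.items():
        df = df[df[column] == value]
    return df["comment"]

# A user can pull up just the visitor comments from the North region
# instead of reading through every verbatim response.
print(drill_down(comments, respondent_type="visitor", region="North"))
```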

Video summaries. Can you tell the story through video for greater engagement? We have found that video works best as short clips that convey the primary findings, accompanied by more detailed reporting (if users need more). Longer videos can be harder to digest and cause people to disengage. Corona has created short videos to communicate general findings to larger groups of employees who may need to know the general gist of the research, but do not need to know as much detail as core decision makers.

Initial readouts and workshops. Can you involve the users in designing reporting, such as holding a workshop to help build their dashboard so it includes the metrics they want? This not only helps create a more effective dashboard for them, but also creates buy-in since they were involved in its creation. Similarly, sharing preliminary findings can help focus additional analysis and ensure their questions are being addressed in the final report.

Also, consider the following to make any of the above more effective:

  • Who needs what? Who in the organization needs what information? Share what is most important so critical points don’t get lost in the larger report.
  • How much? Consider the level of detail any one person or team needs. Executives may want top-level metrics with key points and recommendations; analysts may want every tabulation and verbatim response.
  • Who has questions? When people read a report or finding, they often think that’s it. Encourage questions and allow for follow-up to make sure everyone has what they need to move forward.

What challenges have you had making use of research? What have you done to try to overcome them? We’d love to hear below.


Getting the most out of your customer survey

There are a multitude of tools available these days that allow organizations to easily ask questions of their customers.  It is not uncommon, when Corona begins an engagement, for the client to have made internal attempts at conducting surveys in the past.  In some cases, these studies have been relatively sophisticated and have yielded great results. In others, however, the survey’s results were met with a resounding “Why does this matter?”

The challenge is that conducting a good survey requires a much more strategic view than most realize.  This starts with designing the survey questions themselves.  We always begin our engagements by asking our clients to think through the decisions that will be made, the opportunities to improve, and the possible challenges to be addressed based on the results.  By keeping the answers to these questions in mind as you design your survey questions, you can minimize the number of “trivia” questions in your survey, ones that might be interesting to know but won’t really have any influence on your future decisions.

Even after your questions are designed, you have to consider how you will get people to participate in the survey.  If you have a database of 100,000 customers, it may be tempting to just send invitations to all of them.  But what if you plan to send out a plea for donations in the next few weeks?  Consider the impact of asking for 15 minutes of time from people who might be asked to support you very soon.  Timing the survey appropriately, and perhaps sending it only to a small segment of customers, can help minimize fatigue that could negatively impact your overall business strategy in the near future.
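
As a rough illustration of the segmenting idea, here is a minimal sketch in Python; the 100,000-customer list comes from the example above, while the 5% segment size and address format are arbitrary assumptions.

```python
import random

# Hypothetical customer database of 100,000 email addresses.
customers = [f"customer_{i}@example.com" for i in range(100_000)]

# Invite a 5% random segment rather than the full database, preserving
# the rest of the list for upcoming asks (e.g., a donation appeal).
invitees = random.sample(customers, k=5_000)
print(len(invitees), "invitations to send")
```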

Finally, once you’ve collected the results, simple tabulations will only tell a small part of the story.  Every result should be examined through the lens of its actual strategic impact.  A good question to ask throughout the analysis is, “So what?”  Keep the focus on the implications of the results rather than the results themselves, and your final report of what you learned will have a much better chance of making a meaningful impact on your organization moving forward.

Obviously, we at Corona are here to help walk you through this process in order to ensure the highest-quality result possible, but even if you choose to go it alone, keeping a strategic view of what you need to learn and how it will influence your decisions will help to avoid a lot of wasted effort.


How data-driven insights can reveal strategic advantages

A client recently asked us for guidance in the middle of their communication campaign.  They had already created and deployed a series of vivid ads encouraging a specific behavior. (For confidentiality reasons, I can’t state the behavior, but let’s pretend they wanted dog owners to register their dogs with the local humane society). Their desired outcome was to increase the percentage of registered dogs from the baseline, which hadn’t changed for many years. Their strategy was to use mass media messages to motivate all dog owners to register their dogs.

They hired Corona to evaluate the campaign and provide recommendations for improvement.  We found that a small percentage of the population had strong intentions to not register their dog (intention to do something is a relatively good predictor of what people will do).  Based on other scientific research, we know that it is difficult to change people’s strong intentions, especially through mass media.  Thus, we suggested that the client stop trying to influence all dog owners, at least in this campaign.

A better strategy was to motivate dog owners who had weak intentions or were unsure what they would do.  Our research found that people with weak or unformed intentions had different barriers and reasons to register their dogs. Indeed, those with weak intentions often said they just “never got around to it.” This finding was the keystone of our research because it showed how a strategy shift aimed at influencing this sub-population – rather than all dog owners – would have the biggest impact on increasing overall registration!

Shifting strategy was not easy for this client, but the data and our recommendations compelled them to make the change.  We helped them see their issue from a new perspective, and our guidance made the transition of the communication strategy easier. Quality research and thoughtful analysis can reveal strategic advantages, and every strategic advantage can have a meaningful impact on success.


Market research can be more than just research

When most people think about conducting market research, they are thinking only of the core questions they want answered and the information they need to move something forward. They have a goal in mind, and market research is a way to reach it. What many don’t consider is how market research can, and often does, function simultaneously as a marketing tool.

Any research that involves surveying, interviewing, or moderating focus groups of your customers or constituents draws their attention to your organization. You are engaging your customers with your brand, and you’re doing it in a very positive way: by asking for their opinions. You signal that you value their input, that you want to provide something of value, and that you are determining what you can do better.

The degree to which you engage with your customers or constituents varies, of course, depending on the type of research. Interviews and focus groups create a large amount of bandwidth between you and the customer: they are personal and lend themselves to a deeper, less restrictive dive into your customers’ opinions, and as a result, your customers feel empowered. Surveying, while impersonal, allows you to give more people a voice, even if it is a smaller one.

In the end, you’re giving a voice to your customers. You’re creating another touchpoint that allows them to share their opinions and be heard. The next time you think of conducting market research, consider how you might maximize your investment by engaging your customers as fully as possible.


Listening isn’t enough

Recently, we’ve been having a few conversations at work about engagement processes, in part because we’ve seen a few requests for proposals that have some focus on engagement with a particular audience. Often, this engagement takes the form of listening in some way to the audience of interest. While hearing from a group of people that you are interested in engaging with is critical, I would argue that it’s just one part of an engagement process.

In fact, if you look at various types of research on engagement with different groups (employees, customers, etc.), there are a couple of similarities that stick out. Based on this research and some of our own experiences at Corona, I identified a few themes of successful engagement:

  1. Listening. Listening is a critical part of engagement. It is important to think carefully about which methods of listening will produce the type of information that is most useful. Are you making an attempt to hear from less engaged people? Are you interested in what kinds of ideas/concerns/problems/etc. people are having or are you interested in how common those are? Have you ensured that people feel comfortable being honest? Do people need additional information before giving input?
  2. Reflection. Often, it is easy to get so wrapped up in translating what you hear from the group into action that you forget to reflect what you heard back to the group. Telling people what you have heard from them is an important part of the engagement process. It makes sure that everyone is working with the same base of information and helps people understand why different decisions or changes are being made. Also, demonstrating that you understand what people were telling you can make later criticism less harsh.
  3. Expectations and Accountability. Finally, clarifying expectations and how accountability will be incorporated into the relationship is important. People generally like knowing what is expected of them and why. Initially, this can be as simple as explaining clearly the goals of an engagement process and why the group of interest is so vital to the process. Later in the process, this might be aligning expectations and goals with what you heard when listening to the group. Also, it’s important to think about how you will evaluate whether those expectations and goals are being met.

While there are definitely unique components to engagement processes with certain audiences (e.g., employees, stakeholders, community, etc.), the three components above stood out as common themes to all types of engagement.


4 Steps to Engaging Market Research

Market research can often occur within a silo – someone within an organization has a question, and research is conducted to answer it. While there is nothing wrong with that, it does miss an opportunity to use the research process itself as a means of customer engagement.

How often have you participated in research (e.g., taken a survey, etc.) and after completing it never heard another thing about it? What were the results? Did the company hear you? Were changes made as a result?

Below, we offer four steps to engaging your customers throughout the research process.

Considerations for Engagement in Research

  1. Communicate across functions internally. When research will be conducted in the company’s name, ensure that all parties are aware and, if appropriate, promote the effort. Does the communications/customer service department know about it in case customers call with questions? Does the sales team know their clients may be getting contacted? Is there another department preparing to launch their own research? Can you use these other touch points (e.g., customer service, sales, etc.) to encourage participation? Ensuring everyone is on the same page can prevent confusion internally and externally, and show that your research isn’t an afterthought.
  2. Show that you know them. To the degree that you already know your customer, show it. Ensure that the research is relevant to them. For example, are you asking them questions they cannot answer because their account was just opened? Are you asking basic questions about their account that you should already know and that could easily be linked to their response instead? For instance, a customer’s sales volume could be seeded into their survey, with questions then adapted based on their actual purchase history, rather than asking a respondent to accurately recall that information (see the sketch after this list).
  3. Tell them what you’ve found and how it has made an impact.  If the research is proprietary and results cannot be divulged for competitive reasons, this one may be hard, but closing the loop and showing that you not only received their response, but that you also heard them, can show your customers that their time was well spent. Maybe you can share a few top-line results in your newsletter, or maybe when a change is rolled out that was informed by the research findings you can point that out. Combining this with the above idea, you could reach out to those customers who most wanted that change to inform them of your decision and thank them for their input.
  4. Remember what they’ve already told you. If they already answered a similar, or even the same, question on a prior survey, do you need to ask it again? Rather, link the prior results to the new feedback. If you need to ask again, acknowledge that they’ve answered it before and you want to see if their responses have changed. And are they telling you information that could help you better serve them in the future? If so, can you track that data in your CRM system? Can you use it to place them in the appropriate customer segment? For example, if you know they won’t be in the market for a new product for at least 12 months, flag them so they don’t receive unneeded offers until it’s time. (Do be careful about confidentiality and privacy expectations here. If their responses may be linked to them later, they should be made aware of that upfront, and the survey shouldn’t be branded as “anonymous” or “confidential.”)
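
As a sketch of what item 2 might look like in practice, the Python below seeds known CRM data into a questionnaire and branches on it; the field names, thresholds, and question wording are all hypothetical and not tied to any particular survey platform.

```python
# Hypothetical CRM record for one customer, pulled before the survey invite.
crm_record = {
    "customer_id": "C-1042",
    "annual_sales_volume": 18_500,   # already known from purchase history
    "account_age_months": 2,
}

def build_question_list(record):
    """Assemble survey questions adapted to what we already know."""
    questions = []
    # Skip questions a two-month-old account cannot meaningfully answer.
    if record["account_age_months"] >= 6:
        questions.append("How has our service changed over the past year?")
    # Adapt wording to the customer's actual volume instead of asking
    # them to recall it.
    if record["annual_sales_volume"] > 10_000:
        questions.append("As a high-volume buyer, what would make reordering easier?")
    else:
        questions.append("What would encourage you to order more often?")
    return questions

print(build_question_list(crm_record))
```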

One last note. By making research itself more engaging, the need to offer financial incentives for participation will decrease. Knowing their time and feedback is valued, and will actually be used, can be incentive enough.

What research experiences have you seen or taken part in that you felt were engaging?

For more on survey response rate and engagement, see a previous blog I wrote here.


Co-creating Insights through Participatory Research

“We both know some things; neither of us knows everything. Working together we will both know more and we will both learn more about how to know”

~Patricia Maguire, in Doing Participatory Research

Do you need to hear from more than the usual suspects?  Do you want your research to engage and empower people, rather than just study them like lab rats? Are you willing to step out of your comfort zone to create transformational research that provokes action?

If you answered yes to these questions, you might be interested in embarking on participatory research…and Corona can help!

Participatory research is a collaborative research approach that generates shared knowledge.  The intention is to research with and for participants, rather than about them, and the process is as valuable as the results.

At its heart, participatory research involves engaging with a group of people, typically those who have experienced disenfranchisement, alienation, or oppression. Researchers are participants and participants are researchers; the research questions, methodologies, and analyses are co-created. Embedded in the process are cycles of new questions, reflections, negotiations, and research adjustments. In participatory research, knowledge and understanding are generated rather than discovered.

Language and context are keys to success. The language of participatory research can be informal, personal, and relative to the situation. Safe spaces are created so that participants and researchers can speak freely and honestly, allowing for greater authenticity and a truer reflection of reality. The contexts of the research, including the purpose, geography, and even funding source and sponsors, are made overt and are relevant to the interpretation.

Participatory research is not the most efficient process; it takes extra time to mutually align project goals and specify research questions.  Additionally, participatory research does not assume that the results are unbiased.  Indeed, it asserts that social research cannot avoid the bias that too often manifests unconsciously and goes unacknowledged. Instead, participatory researchers describe and accept their biases, drawing conclusions through this lens.

Why conduct participatory research?  One reason is that the risks are mutual and the results benefit the participants just as much as they benefit the researcher or sponsor. Results can also provoke changes such as increased equity, community empowerment, and social emancipation. When done appropriately, participatory research gives a strong and authentic voice to the participants, and hopefully, a greater awareness of their situation will lead to positive transformational changes.


State of Our Cities and Towns – 2017

For many years, Corona has partnered with the Colorado Municipal League to conduct the research that is the foundation of their annual State of Our Cities and Towns report. CML produced a short video, specifically for municipal officials, about the importance of investing in quality of life.

Learn more, view the full report, and watch additional videos on the State of Our Cities website.


Where to next? Election polling and predictions

The accuracy of election polling is still being heavily discussed, and one point worth pondering was made by Allan Lichtman in an NPR interview the day after the election.  What he said was this:

“Polls are not predictions.”

To some extent this is a semantic argument about how you define prediction, but his point, as I see it, is that polls are not a model defining what factors will drive people to choose one party, or candidate, over another.  Essentially, polls are not theory-driven – they are not a model of “why,” and they do not specify, a priori, what factors will matter.  So, polling estimates rise and fall with every news story and sound bite, but a prediction model would have to say something up front like “we think this type of news will affect behavior in the voting booth in this way.” Lichtman’s model, for example, identifies 13 variables that he predicts will affect whether the party in power continues to hold the White House, including whether there were significant policy changes in the current term, whether there was a big foreign policy triumph, whether the President’s party lost seats during the preceding mid-term election, and so on.
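
To see how such a model differs from a poll, here is a minimal sketch in Python of a keys-style decision rule: a handful of true/false factors fixed up front, plus a simple threshold. Only three of the thirteen keys are shown, and while the six-false-keys cutoff is the one commonly cited for Lichtman’s model, treat this as an illustration rather than his actual implementation.

```python
# Illustrative keys-style model: each key is a true/false judgment fixed
# up front, before any polling data comes in.
keys = {
    "no_midterm_seat_losses": False,  # party in power lost seats in midterms
    "major_policy_change": True,
    "foreign_policy_success": False,
    # ...the full model specifies 13 such keys; only a few are sketched here.
}

false_keys = sum(1 for is_true in keys.values() if not is_true)

# Commonly cited rule: six or more false keys predict an incumbent-party loss.
prediction = "incumbent party loses" if false_keys >= 6 else "incumbent party wins"
print(f"{false_keys} false keys -> {prediction}")
```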

Polls, in contrast, are something like a meta prediction model.  Kate made this point as we were discussing the failure of election polls: polls are essentially a sample of people trying to tell pollsters what they predict they will do on election day, and people are surprisingly bad at predicting their own behavior.  In other words, each unit (i.e., survey respondent) has its own, likely flawed, prediction model, and survey respondents are feeding the results of those models up to an aggregator (i.e., the poll).  In this sense, a poll, as a prediction, is sort of like relying on the “wisdom of the crowd” – but if you’ve ever seen what happens when someone uses the “ask the audience” lifeline on Who Wants to Be a Millionaire, you know that is not a foolproof strategy.

Whether a model or a poll is better in any given situation will depend on various things.  A model requires deep expertise in the topic area, and depending on knowledge and available data sources, it will only capture some portion of the variance in the predicted variable.  A model that fails to include an important predictor will not do a great job of predicting.  Polls are a complex effort to ask the right people the right questions to be able to make an accurate estimate of knowledge, beliefs, attitudes, or behaviors.  Polls have a variety of sources of error, including sampling error, nonresponse bias, and measurement error, and each of those sources contributes to the accuracy of the estimates coming out of the poll.
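
Of those sources, sampling error is the easiest to quantify. As a minimal worked example (covering sampling error only, not nonresponse or measurement error), the standard 95% margin of error for a proportion from a simple random sample looks like this in Python:

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for proportion p from a simple random sample of n."""
    return z * math.sqrt(p * (1 - p) / n)

# A poll of 1,000 respondents showing a candidate at 50% carries roughly
# a +/- 3.1 percentage point margin from sampling alone.
print(f"{margin_of_error(0.50, 1000):.3f}")  # ~0.031
```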

The election polling outcomes are a reminder of the importance of hearing from a representative sample of the population, and of designing questions with an understanding of psychology.  For example, it is important to understand what people can or can’t tell you in response to a direct question (e.g., when are people unlikely to have conscious access to their attitudes and motivations; when are knowledge or memory likely to be insufficient), and what people will or won’t tell you in response to a direct question (e.g., when is social desirability likely to affect whether people will tell you the truth).

This election year may have been unusual in the number of psychological factors at play in reporting voting intentions.  There was a lot of reluctant support on both sides, which suggests conflicts between voters’ values and their candidate’s values, and for some, likely conflicts between conscious and unconscious leanings.  Going forward, one interesting route would be for pollsters to capture various psychological factors that might affect accuracy of reporting and incorporate those into their models of election outcomes.

Hopefully in the future we’ll also see more reporting on prediction models in addition to polls.  Already there’s been a rash of data mining in an attempt to explain this year’s election results.  Some of those results might provide interesting ideas for prediction models of the future.  (I feel obliged to note: data mining is not prediction.  Bloomberg View explains.)

Elections are great for all of us in the research field because they provide feedback on accuracy that can help us improve our theories and methods in all types of surveys and models.  (We don’t do much traditional election polling at Corona, but a poll is essentially just a mini-survey – and a lot of the election “polls” are, in fact, surveys.  Confused?  We’ll try to come back to this in a future blog.) We optimistically look forward to seeing how the industry adapts.