RADIANCE BLOG

Category: Market Research

Activating research

Research that just sits on the shelf (or these days, in a digital folder) is research that probably should not have been conducted. If it is not going to be used, then why do it?

Effective research takes many things, from beginning to end. We’ve blogged before about the need to start with the end in mind, but what happens when you get to the end? Then what?

Sharing results internally, with the right audiences and in an effective medium, is key. Here are several ideas for how to do that beyond the common report or PowerPoint deck.

Make it interactive. Can the data (in part or whole) be made to allow for manipulation by users? This could be a fully interactive dashboard where the user gets to select variables to look at, or it could simply be a predefined analysis that users can pull up, filter, and review. For example, Corona often delivers open-ended verbatim responses with a series of filters built in so users can quickly drill down, rather than just reading hundreds or thousands of verbatim comments.
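As a rough sketch of that kind of filterable verbatim delivery (the attribute names and values here are hypothetical, not from an actual deliverable):

```python
# Hypothetical open-ended responses, each tagged with respondent attributes.
verbatims = [
    {"region": "Denver", "type": "donor", "comment": "Love the new exhibit hall."},
    {"region": "Boulder", "type": "patron", "comment": "Parking is difficult on weekends."},
    {"region": "Denver", "type": "donor", "comment": "Membership renewal was confusing."},
    {"region": "Fort Collins", "type": "patron", "comment": "Staff were friendly and helpful."},
]

def filter_verbatims(records, **criteria):
    """Drill down to comments whose attributes match every supplied filter."""
    return [r["comment"] for r in records
            if all(r.get(key) == value for key, value in criteria.items())]

# e.g., read only what Denver donors said, rather than every comment
print(filter_verbatims(verbatims, region="Denver", type="donor"))
```

A delivered dashboard would wrap this same drill-down logic in dropdown filters rather than function arguments, but the idea is identical: users pull up only the comments relevant to their question.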

Video summaries. Can you tell the story through video for greater engagement? We have found that video works best in short clips that convey the primary findings, accompanied by more detailed reporting if users need more. Longer videos can be harder to digest and cause people to disengage. Corona has created short videos to communicate general findings to larger groups of employees who may need to know the general gist of the research, but do not need as much detail as core decision makers.

Initial readouts and workshops. Can you involve the users in designing reporting, such as holding a workshop to help build their dashboard so it includes the metrics they want? This not only helps create a more effective dashboard for them, but also creates buy-in since they were involved in its creation. Similarly, sharing preliminary findings can help focus additional analysis and ensure their questions are being addressed in the final report.

Also, consider the following to make any of the above more effective:

  • Who needs what? Who in the organization needs what information? Share what is most important to each audience so critical points don’t get lost in the larger report.
  • How much? Consider the level of detail any one person or team needs. Executives may want top-level metrics with key points and recommendations; analysts may want every tabulation and verbatim response.
  • Who has questions? I think when people read a report or finding, they often think that’s it. Encourage questions and allow for follow-up to make sure everyone has what they need to move forward.

What challenges have you had making use of research? What have you done to try to overcome them? We’d love to hear below.


Getting the most out of your customer survey

There are a multitude of tools available these days that allow organizations to easily ask questions of their customers.  It is certainly not uncommon when Corona begins an engagement for the client to have made internal attempts at conducting surveys in the past.  In some cases, these studies have been relatively sophisticated and have yielded great results. In others, however, the survey’s results were met with a resounding “Why does this matter?”.

The challenge is that conducting a good survey requires a much more strategic view than most realize.  This starts with designing the survey questions themselves.  We always begin our engagements by asking our clients to think through the decisions that will be made, the opportunities to improve, and the possible challenges to be addressed based on the results.  By keeping the answers to these questions in mind as you design your survey questions, you can minimize the amount of “trivia” questions in your survey that might be interesting to know, but won’t really have any influence on your future decisions.

Even after having questions designed, you have to consider how you will get people to participate in the survey.  If you have a database of 100,000 customers, it may be tempting to just send invitations to all of them.  But what if you plan to send out a plea for donations in the next few weeks?  Consider the impact of asking for 15 minutes of time from people who might be asked to support you very soon.  Being careful to appropriately time the survey and perhaps only send it out to a small segment of customers might help to minimize fatigue that could negatively impact your overall business strategy in the near future.
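Drawing that smaller segment can be as simple as a random sample from the customer database rather than a blast to everyone. A minimal sketch, assuming customers are identified by sequential IDs (the numbers here are illustrative):

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

# Stand-in for a 100,000-record customer database.
customer_ids = list(range(1, 100001))

# Invite a 2% random segment and hold the rest in reserve,
# e.g., for the donation appeal going out in a few weeks.
segment = random.sample(customer_ids, k=2000)
print(len(segment))
```

Because `random.sample` draws without replacement, no customer is invited twice, and the untouched 98% can be approached later without survey fatigue.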

Finally, once you’ve collected the results, simple tabulations will only tell a small part of the story.  Every result should be examined through the lens of its actual strategic impact.  A good question to ask throughout the analysis of your results is, “So what?”  If you keep the focus on the implications of the results rather than the results themselves, your final report of what you learned will have a much better chance of making a meaningful impact on your organization moving forward.

Obviously, we at Corona are here to help walk you through this process in order to ensure the highest-quality result possible, but even if you choose to go it alone, keeping a strategic view of what you need to learn and how it will influence your decisions will help to avoid a lot of wasted effort.


Market research can be more than just research

When most think about conducting market research, they are just thinking of the core questions they want answered and the information they need to move something forward. They have a goal in mind, and market research is a way to reach it. What many don’t consider is how market research can, and often does, function simultaneously as a marketing tool.

Any research that involves surveying, interviewing, or moderating focus groups of your customers or constituents draws their attention to your organization. You are engaging your customers with your brand, and you’re doing it in a very positive way: by asking for their opinions. You signal that you value their input, ensure you’re providing something of value, and are determining what you can do better.

The degree to which you engage with your customers or constituents varies, of course, depending on the type of research. Interviews and focus groups create a great deal of bandwidth between you and the customer: they are personal and lend themselves to a deeper, less restrictive dive into your customers’ opinions – as a result, your customers feel empowered. Surveying, while impersonal, allows you to give more people a voice, even if it is a smaller one.

In the end, you’re giving a voice to your customers. You’re creating another touchpoint that allows them to share their opinions and be heard. The next time you think of conducting market research, consider how you might maximize your investment by better engaging your customers.


Listening isn’t enough

Recently, we’ve been having a few conversations at work about engagement processes, in part because we’ve seen a few requests for proposals that have some focus on engagement with a particular audience. Often, this engagement takes the form of listening in some way to the audience of interest. While hearing from a group of people that you are interested in engaging with is critical, I would argue that it’s just one part of an engagement process.

In fact, if you look at various types of research on engagement with different groups (employees, customers, etc.), a few similarities stick out. Based on this research and some of our own experiences at Corona, I identified three themes of successful engagement:

  1. Listening. Listening is a critical part of engagement. It is important to think carefully about which methods of listening will produce the type of information that is most useful. Are you making an attempt to hear from less engaged people? Are you interested in what kinds of ideas/concerns/problems/etc. people are having or are you interested in how common those are? Have you ensured that people feel comfortable being honest? Do people need additional information before giving input?
  2. Reflection. Often, it is easy to get so wrapped up in translating what you hear from the group into action that you forget to reflect what you heard back to the group. Telling people what you have heard from them is an important part of the engagement process. It makes sure that everyone is working with the same base of information and helps people understand why different decisions or changes are being made. Also, demonstrating that you understand what people were telling you can make later criticism less harsh.
  3. Expectations and Accountability. Finally, clarifying expectations and how accountability will be incorporated into the relationship is important. People generally like knowing what is expected of them and why. Initially, this can be as simple as explaining clearly the goals of an engagement process and why the group of interest is so vital to the process. Later in the process, this might be aligning expectations and goals with what you heard when listening to the group. Also, it’s important to think about how you will evaluate whether those expectations and goals are being met.

While there are definitely unique components to engagement processes with certain audiences (e.g., employees, stakeholders, community, etc.), the three components above stood out as common themes to all types of engagement.


4 Steps to Engaging Market Research

Market research can often occur within a silo – someone within an organization has a question, and research is conducted to answer it. While there is nothing wrong with that, it does miss an opportunity to use the research process itself as a means of customer engagement.

How often have you participated in research (e.g., taken a survey, etc.) and after completing it never heard another thing about it? What were the results? Did the company hear you? Were changes made as a result?

Below, we offer 4 Steps to engaging your customers throughout the research process.

Considerations for Engagement in Research

  1. Communicate across functions internally. When research will be conducted in the company’s name, ensure that all parties are aware and, if appropriate, promote the effort. Does the communications/customer service department know about it in case customers call with questions? Does the sales team know their clients may be getting contacted? Is there another department preparing to launch their own research? Can you use these other touch points (e.g., customer service, sales, etc.) to encourage participation? Ensuring everyone is on the same page can prevent confusion internally and externally, and show that your research isn’t an afterthought.
  2. Show that you know them. To the degree that you already know your customer, show it. Ensure that the research is relevant to them. For example, are you asking them questions they cannot answer because their account was just opened? Are you asking basic questions about their account that you should already know and that could easily be linked to their response instead? For instance, a customer’s sales volume could be seeded into their survey, with questions then adapted based on their actual purchase history, rather than asking a respondent to accurately recall that information.
  3. Tell them what you’ve found and how it has made an impact.  If the research is proprietary and results cannot be divulged for competitive reasons, this one may be hard, but closing the loop and showing that you not only received their response, but that you also heard them, can show your customers that their time was well spent. Maybe you can share a few top-line results in your newsletter, or maybe when a change is rolled out that was informed by the research findings you can point that out. Combining this with the above idea, you could reach out to those customers who most wanted that change to inform them of your decision and thank them for their input.
  4. Remember what they’ve already told you. If they already answered a similar, or even the same, question on a prior survey, do you need to ask it again? Rather, link the prior results to the new feedback. If you need to ask again, acknowledge that they’ve answered it before and you want to see if their responses have changed. And are they telling you information that could help you better serve them in the future? If so, can you track that data in your CRM system? Can you use it to place them in the appropriate customer segment? For example, if you know they won’t be in the market for a new product for at least 12 months, flag them so they don’t receive unneeded offers until it’s time. (Do be careful about confidentiality and privacy expectations here. If their responses may be linked to them later, they should be made aware of that upfront, and the survey shouldn’t be branded as “anonymous” or “confidential.”)

One last note. By making research itself more engaging, the need to offer financial incentives for participation will decrease. Knowing their time and feedback is valued, and will actually be used, can be incentive enough.

What research experiences have you seen or personally experienced that you felt were engaging?

For more on survey response rate and engagement, see a previous blog I wrote here.


Co-creating Insights through Participatory Research

“We both know some things; neither of us knows everything. Working together we will both know more and we will both learn more about how to know”

~Patricia Maguire, in Doing Participatory Research

Do you need to hear from more than the usual suspects?  Do you want your research to engage and empower people, rather than just study them like lab rats? Are you willing to step out of your comfort zone to create transformational research that provokes action?

If you answered yes to these questions, you might be interested in embarking on participatory research…and Corona can help!

Participatory research is a collaborative research approach that generates shared knowledge.  The intention is to research with and for participants, rather than about them, and the process is as valuable as the results.

At its heart, participatory research involves engaging with a group of people, typically those who have experienced disenfranchisement, alienation, or oppression. Researchers are participants and participants are researchers; the research questions, methodologies, and analyses are co-created. Embedded in the process are cycles of new questions, reflections, negotiations, and research adjustments. In participatory research, knowledge and understanding are generated rather than discovered.

Language and context are keys to success. The language of participatory research can be informal, personal, and relative to the situation. Safe-spaces are created so that participants and researchers can speak freely and honestly, allowing for greater authenticity and reflection of reality. The contexts of the research, including the purpose, geography, and even funding source and sponsors, are made overt and are relevant to the interpretation.

Participatory research is not the most efficient process; it takes extra time to mutually align project goals and specify research questions.  Additionally, participatory research does not assume that the results are unbiased.  Indeed, it asserts that social research cannot avoid the bias that too often manifests unconsciously and goes unacknowledged. Instead, participatory researchers describe and accept their biases, drawing conclusions through this lens.

Why conduct participatory research?  One reason is that the risks are mutual and the results benefit the participants just as much as they benefit the research conductor/sponsor. Results can also provoke changes such as increased equity, community empowerment, and social emancipation. When done appropriately, participatory research gives a strong and authentic voice to the participants, and hopefully, a greater awareness of their situation will lead to positive transformational changes.


State of Our Cities and Towns – 2017

For many years, Corona has partnered with the Colorado Municipal League to conduct the research that is the foundation of their annual State of Our Cities and Towns report. CML produced the following short video, specifically for municipal officials, about the importance of investing in quality of life:

Learn more, view the full report, and watch additional videos on the State of Our Cities website.


Where to next? Election polling and predictions

The accuracy of election polling is still being heavily discussed, and one point that is worth some pondering was made by Allan Lichtman in an NPR interview the day after the election.  What he said was this:

“Polls are not predictions.”

To some extent this is a semantic argument about how you define prediction, but his point, as I see it, is that polls are not a model defining what factors will drive people to choose one party, or candidate, over another.  Essentially, polls are not theory-driven – they are not a model of “why,” and they do not specify, a priori, what factors will matter.  So, polling estimates rise and fall with every news story and sound bite, but a prediction model would have to say something up front like “we think this type of news will affect behavior in the voting booth in this way.” Lichtman’s model, for example, identifies 13 variables that he predicts will affect whether the party in power continues to hold the White House, including whether there were significant policy changes in the current term, whether there was a big foreign policy triumph, whether the President’s party lost seats during the preceding mid-term election, and so on.

Polls, in contrast, are something like a meta prediction model.  Kate made this point as we were discussing the failure of election polls:  polls are essentially a sample of people trying to tell pollsters what they predict they will do on election day, and people are surprisingly bad at predicting their own behavior.  In other words, each unit (i.e., survey respondent) has its own, likely flawed, prediction model, and survey respondents are feeding the results of those models up to an aggregator (i.e., the poll).  In this sense, a poll as prediction is sort of like relying on the “wisdom of the crowd” – but if you’ve ever seen what happens when someone uses the “ask the audience” lifeline on Who Wants to Be a Millionaire, you know that is not a foolproof strategy.

Whether a model or a poll is better in any given situation will depend on various things.  A model requires deep expertise in the topic area, and depending on knowledge and available data sources, it will only capture some portion of the variance in the predicted variable.  A model that fails to include an important predictor will not do a great job of predicting.  Polls are a complex effort to ask the right people the right questions to be able to make an accurate estimate of knowledge, beliefs, attitudes, or behaviors.  Polls have a variety of sources of error, including sampling error, nonresponse bias, measurement error, and so on, and each of those sources contributes to the accuracy of estimates coming out of the poll.
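Of those error sources, sampling error is the one with a standard textbook formula. A quick sketch of the 95% margin of error for a proportion, assuming a simple random sample (note it says nothing about nonresponse or measurement error):

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a proportion p from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# A poll of 1,000 respondents showing a candidate at 50% support
moe = margin_of_error(0.5, 1000)
print(f"+/- {moe:.1%}")  # prints "+/- 3.1%"
```

Quadrupling the sample only halves the margin (the n sits under a square root), which is one reason pollsters focus as much on who responds as on how many.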

The election polling outcomes are a reminder of the importance of hearing from a representative sample of the population, and of designing questions with an understanding of psychology.  For example, it is important to understand what people can or can’t tell you in response to a direct question (e.g., when are people unlikely to have conscious access to their attitudes and motivations; when are knowledge or memory likely to be insufficient), and what people will or won’t tell you in response to a direct question (e.g., when is social desirability likely to affect whether people will tell you the truth).

This election year may have been unusual in the number of psychological factors at play in reporting voting intentions.  There was a lot of reluctant support on both sides, which suggests conflicts between voters’ values and their candidate’s values, and for some, likely conflicts between conscious and unconscious leanings.  Going forward, one interesting route would be for pollsters to capture various psychological factors that might affect accuracy of reporting and incorporate those into their models of election outcomes.

Hopefully in the future we’ll also see more reporting on prediction models in addition to polls.  Already there’s been a rash of data mining in an attempt to explain this year’s election results.  Some of those results might provide interesting ideas for prediction models of the future.  (I feel obliged to note: data mining is not prediction.  Bloomberg View explains.)

Elections are great for all of us in the research field because they provide feedback on accuracy that can help us improve our theories and methods in all types of surveys and models.  (We don’t do much traditional election polling at Corona, but a poll is essentially just a mini-survey – and a lot of the election “polls” are, in fact, surveys.  Confused?  We’ll try to come back to this in a future blog.) We optimistically look forward to seeing how the industry adapts.


Subpopulations in Research

As I’m sure you know, we do a lot of survey research here at Corona. When we provide the results, we try to build the most complete picture for our clients, and that means looking at the data from every which way possible. One of the most effective ways to do this is by looking at subpopulations.

What is a subpopulation?

A subpopulation is essentially a fraction or part of the overall pool of the population you are surveying. A subpopulation can be defined in many ways. For example, some of the most common subpopulations to examine in research are gender (e.g., male and female), age (e.g., <35, 35-54, 55+), race/ethnicity, location, etc.  You can effectively define a subpopulation using whatever criteria you like; for instance, you can have a subpopulation based on what type of dessert is preferred – those who like cake and those who don’t (heathens).

What does it mean to have subpopulations?

When you examine survey results by subpopulations, at a basic level respondents are simply split into the subpopulations or groups (commonly called breakouts) you defined. After being broken into these groups, the results for the survey are compiled for each individual group separately. For example, take the following survey question:

  1. About how many hours a week do you watch sports?
    1. 1 hour or less
    2. 2 to 4 hours
    3. 5 to 7 hours
    4. 8 hours or more

The results would typically have two components: top-level results (results compiled for all respondents to the survey) and breakouts (results by group for any subpopulations that have been defined). For the above example question, the results might look something like this:

[Table: percentage of respondents selecting each answer, overall and broken out by gender]

In this completely made-up example, you can see the benefit of having subpopulations. While 21 percent of overall respondents watched five to seven hours of sports a week, you can see that male respondents accounted for a hefty chunk, as 26 percent of males watch that much sports, compared to only 16 percent of females. Breaking out questions by subpopulations allows you to more closely examine data and assists in finding those gems of information.
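The breakout logic described here is essentially a cross-tabulation. A minimal sketch with pandas, using invented respondent-level data (not the numbers from the example above):

```python
import pandas as pd

# Invented respondent-level data: gender and weekly sports-watching bucket.
responses = pd.DataFrame({
    "gender": ["Male", "Female", "Male", "Female", "Male", "Female"],
    "hours": ["5 to 7 hours", "1 hour or less", "2 to 4 hours",
              "2 to 4 hours", "5 to 7 hours", "8 hours or more"],
})

# Top-level results: share of all respondents in each bucket.
top_level = responses["hours"].value_counts(normalize=True)

# Breakouts: the same shares computed within each gender subpopulation.
breakout = pd.crosstab(responses["hours"], responses["gender"], normalize="columns")
print(breakout)
```

Comparing a column of `breakout` against `top_level` is exactly the male-versus-female-versus-overall comparison made in the prose: a subpopulation stands out when its column share diverges from the top-level share.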

Getting the most out of your survey

Being prepared to utilize subpopulations in your survey analysis means putting your best foot forward and maximizing your investment. Many subpopulations are constructed using questions commonly asked in surveys (gender, age, etc.), but some questions might not otherwise be asked without the foresight of planning to break respondents into subpopulations. For example, a nonprofit might be building a questionnaire to survey their patrons on their messaging; by simply asking if a respondent has donated to the organization, they can examine survey results of donors separately from all patrons. The survey now not only better informs messaging for the organization overall, but also allows them to better target and communicate with donors specifically.

Conducting a survey can be a challenging experience, so the more you can get out of a single survey, the better. The next time you are designing a survey, ask around your workplace to see if a few questions can be added to better utilize the information you’re collecting. Now you’re one step closer to conducting the perfect survey!


Does This Survey Make Sense?

It’s pretty common for Corona to combine qualitative and quantitative research in our projects.  We will often use qualitative work to inform what we need to ask about in quantitative phases of the research, or use qualitative research to better understand the nuances of what we learned in the quantitative phase.  But did you know that we can also use qualitative research to help design quantitative research instruments through something called cognitive testing?

The process of cognitive testing is actually pretty simple, and we treat it a lot like a one-on-one interview.  To start, we recruit a random sample of participants who would fit the target demographic for the survey.  Then, we meet with the participants one-on-one and have them go through the process of taking the survey.  We then walk through the survey with them and ask specific follow-up questions to learn how they are interpreting the questions and find out if there is anything confusing or unclear about the questions.

In a nutshell, the purpose of cognitive testing is to understand how respondents interpret survey questions and to ultimately write better survey questions.  Cognitive testing can be an effective tool for any survey, but is particularly important for surveys on topics that are complicated or controversial, or when the survey is distributed to a wide and diverse audience.  For example, you may learn through cognitive testing that the terminology you use internally to describe your services is not widely used or understood by the community.  In that case, we will need to simplify the language that we are using in the survey.  Or, you may find that the questions you are asking are too specific for most people to know how to answer, in which case the survey may need to ask higher-level questions or include a “Don’t Know” response option on many questions.  It’s also always good to make sure that the survey questions don’t seem leading or biased in any way, particularly when asking about sensitive or controversial topics.

Not only does cognitive testing allow us to write better survey questions, but it can also help with analysis.  If we have an idea of how people are interpreting our questions, we have a deeper level of understanding of what the survey results mean.  Of course, our goal is to always provide our clients with the most meaningful insights possible, and cognitive testing is just one of the many ways we work to deliver on that promise.