RADIANCE BLOG

Category: Market Research

Listening isn’t enough

Recently, we’ve been having a few conversations at work about engagement processes, in part because we’ve seen a few requests for proposals that have some focus on engagement with a particular audience. Often, this engagement takes the form of listening in some way to the audience of interest. While hearing from a group of people that you are interested in engaging with is critical, I would argue that it’s just one part of an engagement process.

In fact, if you look at various types of research on engagement with different groups (employees, customers, etc.), a few similarities stick out. Based on this research and some of our own experiences at Corona, I identified three themes of successful engagement:

  1. Listening. Listening is a critical part of engagement. It is important to think carefully about which methods of listening will produce the type of information that is most useful. Are you making an attempt to hear from less engaged people? Are you interested in what kinds of ideas/concerns/problems/etc. people are having or are you interested in how common those are? Have you ensured that people feel comfortable being honest? Do people need additional information before giving input?
  2. Reflection. Often, it is easy to get so wrapped up in translating what you hear from the group into action that you forget to reflect what you heard back to the group. Telling people what you have heard from them is an important part of the engagement process. It makes sure that everyone is working with the same base of information and helps people understand why different decisions or changes are being made. Also, demonstrating that you understand what people were telling you can make later criticism less harsh.
  3. Expectations and Accountability. Finally, clarifying expectations and how accountability will be incorporated into the relationship is important. People generally like knowing what is expected of them and why. Initially, this can be as simple as explaining clearly the goals of an engagement process and why the group of interest is so vital to the process. Later in the process, this might be aligning expectations and goals with what you heard when listening to the group. Also, it’s important to think about how you will evaluate whether those expectations and goals are being met.

While there are definitely unique components to engagement processes with certain audiences (e.g., employees, stakeholders, community, etc.), the three components above stood out as common themes to all types of engagement.


4 Steps to Engaging Market Research

Market research can often occur within a silo – someone within an organization has a question, and research is conducted to answer it. While there is nothing wrong with that, it does miss an opportunity to use the research process itself as a means of customer engagement.

How often have you participated in research (e.g., taken a survey, etc.) and after completing it never heard another thing about it? What were the results? Did the company hear you? Were changes made as a result?

Below, we offer four steps to engaging your customers throughout the research process.

Considerations for Engagement in Research

  1. Communicate across functions internally. When research will be conducted in the company’s name, ensure that all parties are aware and, if appropriate, promote the effort. Does the communications/customer service department know about it in case customers call with questions? Does the sales team know their clients may be getting contacted? Is there another department preparing to launch their own research? Can you use these other touch points (e.g., customer service, sales, etc.) to encourage participation? Ensuring everyone is on the same page can prevent confusion internally and externally, and show that your research isn’t an afterthought.
  2. Show that you know them. To the degree that you already know your customer, show it. Ensure that the research is relevant to them. For example, are you asking them questions they cannot answer because their account was just opened? Are you asking basic questions about their account whose answers you should already know and that could easily be linked to their response instead? For instance, a customer’s sales volume could be seeded into their survey, with questions then adapted based on their actual purchase history, rather than asking a respondent to accurately recall that information. (A rough sketch of this idea, in code, appears after this list.)
  3. Tell them what you’ve found and how it has made an impact.  If the research is proprietary and results cannot be divulged for competitive reasons, this one may be hard, but closing the loop and showing that you not only received their response, but that you also heard them, can show your customers that their time was well spent. Maybe you can share a few top-line results in your newsletter, or maybe when a change is rolled out that was informed by the research findings you can point that out. Combining this with the above idea, you could reach out to those customers who most wanted that change to inform them of your decision and thank them for their input.
  4. Remember what they’ve already told you. If they already answered a similar, or even the same, question on a prior survey, do you need to ask it again? Rather, link the prior results to the new feedback. If you need to ask again, acknowledge that they’ve answered it before and you want to see if their responses have changed. And are they telling you information that could help you better serve them in the future? If so, can you track that data in your CRM system? Can you use it to place them in the appropriate customer segment? For example, if you know they won’t be in the market for a new product for at least 12 months, flag them so they don’t receive unneeded offers until it’s time. (Do be careful about confidentiality and privacy expectations here. If their responses may be linked to them later, they should be made aware of that upfront, and the survey shouldn’t be branded as “anonymous” or “confidential.”)
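To make ideas 2 and 4 concrete, here’s a minimal sketch, in Python, of how known CRM data might drive survey piping and skip logic so respondents aren’t asked what you already know. Every field name and record below is hypothetical, not a reference to any particular survey platform:

    # Minimal sketch: adapt survey content based on known CRM data.
    # All field names and values are hypothetical.
    crm_record = {
        "customer_id": "C-1042",
        "annual_sales_volume": 18500,   # known from purchase history
        "account_age_months": 2,        # account was just opened
        "months_until_in_market": 14,   # not buying again for a while
    }

    def build_survey(record):
        questions = []
        # Branch on the volume we already track instead of asking
        # the respondent to recall it accurately.
        if record["annual_sales_volume"] >= 10_000:
            questions.append("How well do our bulk options fit your needs?")
        else:
            questions.append("How well does our standard catalog fit your needs?")
        # Skip history questions a brand-new account can't answer.
        if record["account_age_months"] >= 6:
            questions.append("How has your experience changed over the past six months?")
        return questions

    for question in build_survey(crm_record):
        print(question)

    # Idea 4: flag customers who won't be in market soon so they
    # don't receive unneeded offers in the meantime.
    suppress_offers = crm_record["months_until_in_market"] > 12

Whether that logic lives in your survey platform or your CRM, the principle is the same: let what you already know shape what you ask.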

One last note: by making the research itself more engaging, you reduce the need to offer financial incentives for participation. Knowing their time and feedback are valued, and will actually be used, can be incentive enough.

What research experiences have you seen or personally experienced that you felt were engaging?

For more on survey response rate and engagement, see a previous blog I wrote here.


Co-creating Insights through Participatory Research

“We both know some things; neither of us knows everything. Working together we will both know more and we will both learn more about how to know.”

~Patricia Maguire, in Doing Participatory Research

Do you need to hear from more than the usual suspects?  Do you want your research to engage and empower people, rather than just study them like lab rats? Are you willing to step out of your comfort zone to create transformational research that provokes action?

If you answered yes to these questions, you might be interested in embarking on participatory research…and Corona can help!

Participatory research is a collaborative research approach that generates shared knowledge.  The intention is to research with and for participants, rather than about them, and the process is as valuable as the results.

At its heart, participatory research involves engaging with a group of people, typically those who have experienced disenfranchisement, alienation, or oppression. Researchers are participants and participants are researchers; the research questions, methodologies, and analyses are co-created. Embedded in the process are cycles of new questions, reflections, negotiations, and research adjustments. In participatory research, knowledge and understanding are generated rather than discovered.

Language and context are keys to success. The language of participatory research can be informal, personal, and relative to the situation. Safe spaces are created so that participants and researchers can speak freely and honestly, allowing for greater authenticity and a truer reflection of reality. The contexts of the research, including the purpose, geography, and even funding source and sponsors, are made overt and are relevant to the interpretation.

Participatory research is not the most efficient process; it takes extra time to mutually align project goals and specify research questions.  Additionally, participatory research does not assume that the results are unbiased.  Indeed, it asserts that social research cannot avoid the bias that too often manifests unconsciously and goes unacknowledged. Instead, participatory researchers describe and accept their biases, drawing conclusions through this lens.

Why conduct participatory research?  One reason is that the risks are mutual and the results benefit the participants just as much as they benefit the researcher and sponsor. Results can also provoke changes such as increased equity, community empowerment, and social emancipation. When done appropriately, participatory research gives a strong and authentic voice to the participants, and hopefully, a greater awareness of their situation will lead to positive transformational changes.


State of Our Cities and Towns – 2017

For many years, Corona has partnered with the Colorado Municipal League to conduct the research that is the foundation of their annual State of Our Cities and Towns report. CML produced a short video, specifically for municipal officials, about the importance of investing in quality of life.

Learn more, view the full report, and watch additional videos on the State of Our Cities website.


Where to next? Election polling and predictions

The accuracy of election polling is still being heavily discussed, and one point that is worth some pondering was made by Allan Lichtman in an NPR interview the day after the election.  What he said was this:

“Polls are not predictions.”

To some extent this is a semantic argument about how you define prediction, but his point, as I see it, is that polls are not a model defining what factors will drive people to choose one party, or candidate, over another.  Essentially, polls are not theory-driven – they are not a model of “why,” and they do not specify, a priori, what factors will matter.  So, polling estimates rise and fall with every news story and sound bite, but a prediction model would have to say something up front like “we think this type of news will affect behavior in the voting booth in this way.” Lichtman’s model, for example, identifies 13 variables that he predicts will affect whether the party in power continues to hold the White House, including whether there were significant policy changes in the current term, whether there was a big foreign policy triumph, whether the President’s party lost seats during the preceding mid-term election, and so on.

Polls, in contrast, are something like a meta prediction model.  Kate made this point as we were discussing the failure of election polls:  polls are essentially a sample of people trying to tell pollsters what they predict they will do on election day, and people are surprisingly bad at predicting their own behavior.  In other words, each unit (i.e., survey respondent) has its own, likely flawed, prediction model, and survey respondents are feeding the results of those models up to an aggregator (i.e., the poll).  In this sense, a poll as prediction, is sort of like relying on the “wisdom of the crowd” – but if you’ve ever seen what happens when someone uses the “ask the audience” lifeline on Who Wants to Be a Millionaire, you know that is not a foolproof strategy.

Whether a model or a poll is better in any given situation will depend on various things.  A model requires deep expertise in the topic area, and depending on knowledge and available data sources, it will only capture some portion of the variance in the predicted variable.  A model that fails to include an important predictor will not do a great job of predicting.  Polls are a complex effort to ask the right people the right questions to be able to make an accurate estimate of knowledge, beliefs, attitudes, or behaviors.  Polls have a variety of sources of error, including sampling error, nonresponse bias, measurement error, and so on, and each of those sources affects the accuracy of estimates coming out of the poll.
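To make just one of those error sources concrete, here is a minimal Python sketch of the conventional 95 percent margin of error for a poll proportion under simple random sampling. It captures sampling error only; nonresponse bias and measurement error come on top of it and are often larger:

    import math

    def margin_of_error(p, n, z=1.96):
        """95% margin of error for a simple random sample proportion."""
        return z * math.sqrt(p * (1 - p) / n)

    # A candidate polling at 48% in a sample of 800 likely voters:
    p, n = 0.48, 800
    print(f"{p:.0%} +/- {margin_of_error(p, n):.1%}")  # 48% +/- 3.5%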

The election polling outcomes are a reminder of the importance of hearing from a representative sample of the population, and of designing questions with an understanding of psychology.  For example, it is important to understand what people can or can’t tell you in response to a direct question (e.g., when are people unlikely to have conscious access to their attitudes and motivations; when are knowledge or memory likely to be insufficient), and what people will or won’t tell you in response to a direct question (e.g., when is social desirability likely to affect whether people will tell you the truth).

This election year may have been unusual in the number of psychological factors at play in reporting voting intentions.  There was a lot of reluctant support on both sides, which suggests conflicts between voters’ values and their candidate’s values, and for some, likely conflicts between conscious and unconscious leanings.  Going forward, one interesting route would be for pollsters to capture various psychological factors that might affect accuracy of reporting and incorporate those into their models of election outcomes.

Hopefully in the future we’ll also see more reporting on prediction models in addition to polls.  Already there’s been a rash of data mining in an attempt to explain this year’s election results.  Some of those results might provide interesting ideas for prediction models of the future.  (I feel obliged to note: data mining is not prediction.  Bloomberg View explains.)

Elections are great for all of us in the research field because they provide feedback on accuracy that can help us improve our theories and methods in all types of surveys and models.  (We don’t do much traditional election polling at Corona, but a poll is essentially just a mini-survey – and a lot of the election “polls” are, in fact, surveys.  Confused?  We’ll try to come back to this in a future blog.) We optimistically look forward to seeing how the industry adapts.


Subpopulations in Research

As I’m sure you know, we do a lot of survey research here at Corona. When we provide the results, we try to build the most complete picture for our clients, and that means looking at the data every which way possible. One of the most effective ways to do this is by looking at subpopulations.

What is a subpopulation?

A subpopulation is essentially a fraction or part of the overall pool of the population you are surveying. A subpopulation can be defined in many ways. For example, some of the most common subpopulations to examine in research are gender (e.g., male and female), age (e.g., <35, 35-54, 55+), race/ethnicity, location, etc.  You can effectively define a subpopulation using whatever criteria you like; for instance, you can have a subpopulation based on what type of dessert is preferred – those who like cake and those heathens who don’t.

What does it mean to have subpopulations?

When you examine survey results by subpopulations, at a basic level respondents are simply split into the subpopulations or groups (commonly called breakouts) you defined. After being broken into these groups, the results for the survey are compiled for each individual group separately. For example, take the following survey question:

  1. About how many hours a week do you watch sports?
    1. 1 hour or less
    2. 2 to 4 hours
    3. 5 to 7 hours
    4. 8 hours or more

The results would typically have two components: top-level results (results compiled for all respondents to the survey) and breakouts (results by group for any subpopulations that have been defined). For the above example question, the results might look something like this:

[Table: hours of sports watched per week, overall and broken out by gender]

In this completely made-up example, you can see the benefit of having subpopulations. While 21 percent of overall respondents watched five to seven hours of sports a week, male respondents accounted for a hefty chunk: 26 percent of males watch that much sports, compared to only 16 percent of females. Breaking out questions by subpopulations allows you to more closely examine data and assists in finding those gems of information.
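Mechanically, a breakout is just a grouped tabulation. Here is a minimal sketch in Python with pandas, using made-up respondent-level data in the spirit of the example above:

    import pandas as pd

    # Hypothetical data: one row per survey respondent.
    df = pd.DataFrame({
        "gender": ["Male", "Female", "Male", "Female", "Male", "Female"],
        "hours_watched": ["5 to 7 hours", "1 hour or less", "5 to 7 hours",
                          "2 to 4 hours", "8 hours or more", "2 to 4 hours"],
    })

    # Top-level results: all respondents compiled together.
    print(df["hours_watched"].value_counts(normalize=True))

    # Breakouts: the same tabulation within each subpopulation.
    print(pd.crosstab(df["hours_watched"], df["gender"], normalize="columns"))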

Getting the most out of your survey

Being prepared to utilize subpopulations in your survey analysis means putting your best foot forward and maximizing your investment. Many subpopulations are constructed using questions commonly asked in surveys (gender, age, etc.), but some questions might not otherwise be asked without the foresight of planning to break respondents into subpopulations. For example, a nonprofit might be building a questionnaire to survey their patrons on their messaging; by simply asking if a respondent has donated to the organization, they can examine survey results of donors separately from all patrons. The survey can now not only better inform messaging for the organization overall but also help them target and communicate with donors specifically.

Conducting a survey can be a challenging experience, so the more you can get out of a single survey, the better. The next time you are designing a survey, ask around your workplace to see if a few questions can be added to better utilize the information you’re collecting. Now you’re one step closer to conducting the perfect survey!


Does This Survey Make Sense?

It’s pretty common for Corona to combine qualitative and quantitative research in our projects.  We will often use qualitative work to inform what we need to ask about in quantitative phases of the research, or use qualitative research to better understand the nuances of what we learned in the quantitative phase.  But did you know that we can also use qualitative research to help design quantitative research instruments through something called cognitive testing?

The process of cognitive testing is actually pretty simple, and we treat it a lot like a one-on-one interview.  To start, we recruit a random sample of participants who would fit the target demographic for the survey.  Then, we meet with the participants one-on-one and have them go through the process of taking the survey.  We then walk through the survey with them and ask specific follow-up questions to learn how they are interpreting the questions and find out if there is anything confusing or unclear about the questions.

In a nutshell, the purpose of cognitive testing is to understand how respondents interpret survey questions and to ultimately write better survey questions.  Cognitive testing can be an effective tool for any survey, but is particularly important for surveys on topics that are complicated or controversial, or when the survey is distributed to a wide and diverse audience.  For example, you may learn through cognitive testing that the terminology you use internally to describe your services is not widely used or understood by the community.  In that case, we will need to simplify the language that we are using in the survey.  Or, you may find that the questions you are asking are too specific for most people to know how to answer, in which case the survey may need to ask higher-level questions or include a “Don’t Know” response option on many questions.  It’s also always good to make sure that the survey questions don’t seem leading or biased in any way, particularly when asking about sensitive or controversial topics.

Not only does cognitive testing allow us to write better survey questions, but it can also help with analysis.  If we have an idea of how people are interpreting our questions, we have a deeper level of understanding of what the survey results mean.  Of course, our goal is to always provide our clients with the most meaningful insights possible, and cognitive testing is just one of the many ways we work to deliver on that promise.


Writing an RFP

So you’ve finally reached a point where you feel like you need more information to move forward as an organization, and, even more importantly, you’ve been able to secure some amount of funding to do so. Suddenly you find yourself elbow deep in old requests for proposals (RFPs), both from your organization and others, trying to craft an RFP for your project. Where do you start?

We write a lot of proposals in response to RFPs at Corona, and based on what we’ve seen, here are a few suggestions for what to include in your next RFP:

  • Decision to be made or problem being faced. One of the most important pieces of information, and one that is often difficult to find in an RFP, if not missing entirely, is what decision an organization is trying to make or what problem it is trying to overcome. Instead, we often see RFPs asking for a specific methodology while not describing what the organization plans to do with the information. While specifying the methodology can sometimes be important (e.g., you want to replicate an online survey of donors, you need to perform an evaluation as part of a grant, etc.), sometimes specifying it might limit what bidders suggest in their proposals.

Part of the reason why you hire a consultant is to have them suggest the best way to gather the information that your organization needs. With that in mind, it might be most useful to describe the decision or problem that your organization is facing in layman’s terms and let bidders propose different ways to address it.

  • Other sources of data/contacts. Do you have data that might be relevant to the proposals? Did your organization conduct similar research in the past that you want to replicate or build upon? Do you have contact information for people who you might want to gather information from for this project? All these might be useful pieces of information to include in an RFP.
  • Important deadlines. If you have key deadlines that will shape this project, be sure to include them in the RFP. Timelines can impact proposals in many ways. For example, if a bidder wants to propose a survey, a timeline can determine whether to do a mail survey, which takes longer, or a phone survey, which is often more expensive but quicker.
  • Include a budget, even a rough one. Questions about the budget are the most common questions I see people ask about RFPs. While a budget might scare off a more expensive firm, it is more likely that including a budget in an RFP will help firms propose tasks that are financially feasible.

Requesting proposals can be a useful way to get a sense of what a project might cost, which is helpful if you are trying to figure out how much funding to secure. If so, it’s often helpful to simply state in your RFP that you’re considering different options and would like pricing for each recommended task, along with the arguments for why each might be useful.

  • Stakeholders. Who has a stake in the results of the project and who will be involved in decisions about the project?  Do you have a single internal person that the contractor will report to or perhaps a small team?  Are there others in the organization who will be using the results of the project?  Do you have external funders who have goals or reporting needs that you hope to be met by the project?  Clarifying who has a stake in the project and what role they will play in the project, whether providing input on goals, or approving questionnaire design, is very helpful. It is useful for the consultant to know who will need to be involved so they can plan to make sure everyone’s needs are addressed.

Writing RFPs can be daunting, but they can also be a good opportunity for thinking about and synthesizing an important decision or problem into words. Hopefully these suggestions can help with that process!


Ensuring your graphs are honest

For our firm, the very idea of fake news goes against our mission to:

Provide accurate and unbiased information and counsel to decision makers.

The realm of fake news spans the spectrum from misleading to outright lying. It is the former that got us thinking about how graphs are sometimes twisted to mislead, while not necessarily being wrong.

Below are four recommendations to prevent misinterpretation when making your own graphs (or things to look for when interpreting those seen in the news).

1. Use the same scales across graphs that will be compared.

Showing similar data for different groups or from different times? Make the graphs the same scale to aid easy, accurate comparisons.

Take the below examples. Maybe you have two graphs, even on separate pages, used to illustrate the differences between Groups 1 & 2. If someone were to look between them to see differences over time, the visual wouldn’t depict that 2016 saw a doubling of the proportion who “agreed.”  The bar is slightly longer, but not twice as long.

[Figure: the same data shown in two graphs with different axis scales]

Sure, including axis and data labels helps, but the benefit of a graph is that you can quickly see the result with little extra interpretation. Poorly designed graphs, no matter the labeling, can still mislead.
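For those who build their charts in code, here is a minimal matplotlib sketch of the fix, with made-up agreement data; sharing the y-axis forces both panels onto the same scale, so a bar twice as tall really does mean twice the share who agreed:

    import matplotlib.pyplot as plt

    # Hypothetical data: percent who "agreed" in each year.
    results = {"2015": 20, "2016": 40}  # a doubling

    # sharey=True (plus identical limits) keeps the panels comparable.
    fig, axes = plt.subplots(1, len(results), sharey=True)
    for ax, (year, pct) in zip(axes, results.items()):
        ax.bar(["Agreed"], [pct])
        ax.set_title(year)
        ax.set_ylim(0, 100)
    plt.show()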

2. Start the graph origin at zero.

Similar to above, not starting the graph at a zero point can cause differences to be taken out of scale.

In the below examples, both graphs show exactly the same data but start from different points, making the differences in the first graph look proportionately larger than they are.

[Figure: identical data graphed with and without a zero origin]
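The same contrast is easy to reproduce in matplotlib with made-up numbers: identical bars look dramatically different depending on where the axis starts.

    import matplotlib.pyplot as plt

    values = [72, 74]  # hypothetical, nearly identical results

    fig, (ax1, ax2) = plt.subplots(1, 2)
    ax1.bar(["A", "B"], values)
    ax1.set_ylim(70, 75)   # truncated axis: B appears to tower over A
    ax1.set_title("Misleading")
    ax2.bar(["A", "B"], values)
    ax2.set_ylim(0, 100)   # zero origin: the difference stays in proportion
    ax2.set_title("Honest")
    plt.show()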

3. Convey the correct magnitude.

Sometimes, a seemingly small amount may have significant meaning (think tenths of a degree in global temperatures), while sometimes a large amount may not (think a million dollars within the Federal budget).

Choosing the proper graph type, design, and what to actually graph all make a difference here.

For example, when graphing global temperatures, graphing the differences may best accentuate the magnitude rather than graphing the actual temperatures, where the relatively small-looking differences fail to communicate the finding.
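A quick sketch of that idea, using illustrative (not actual) temperature figures: plotting the raw values hides the trend that plotting differences from a baseline makes obvious.

    import matplotlib.pyplot as plt

    # Illustrative numbers only, not real climate data.
    years = list(range(2010, 2017))
    temps = [14.52, 14.55, 14.58, 14.61, 14.68, 14.75, 14.87]
    baseline = 14.50

    fig, (ax1, ax2) = plt.subplots(1, 2)
    ax1.plot(years, temps)
    ax1.set_ylim(0, 20)  # absolute scale: the warming looks flat
    ax1.set_title("Temperature")
    ax2.plot(years, [t - baseline for t in temps])
    ax2.set_title("Difference from baseline")  # same data, visible trend
    plt.show()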

4. Make it clear who is represented by the data.

Does this data represent the entire population? Only voters? Only likely voters? Only those who responded “yes” to a previous question? Only those home on a Thursday night with a landline? (If it’s the latter, save your time and just ignore it completely.)

Usually, the safest bet is to show results based on the whole population, even if the question was only asked of a subset of people due to a skip pattern. This is easiest for people to mentally process and prevents a subgroup result from accidentally being interpreted as if it represents the whole.

For instance, if 50% of people who were aware of Brand A had seen an ad for the brand, but only 10% of the population were aware of Brand A in the first place (and, therefore, were asked the follow-up about ads), then in reality, probably only 5% of the population has seen the ad. To the casual reader, that subtle difference in who the results represent could be significant.
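The rebasing arithmetic itself is one line; here is a tiny Python sketch with the numbers from the example:

    aware = 0.10             # 10% of the population is aware of Brand A
    seen_ad_if_aware = 0.50  # 50% of those aware have seen the ad

    # Rebase the follow-up result to the whole population.
    seen_ad_overall = aware * seen_ad_if_aware
    print(f"{seen_ad_overall:.0%} of the population has seen the ad")  # 5%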


This, of course, isn’t our first time writing about graph standards. Check out some of our other blogs on the subject here:

Graphs: An effective tool, but use them carefully

Visualizing data: 5 Best practices


Predictable Unknowns

Have you ever needed to know what the future will look like?

To create great strategic plans, our clients need to understand what their operating environment will look like in five, ten, or thirty years.  They want to know how the population, jobs, markets, homes, and infrastructure are expected to change. We help these clients by providing reliable projections, often through analysis of preexisting data.  Although we have no crystal ball that tells us exactly what the future holds, we can point clients in the right direction.  Here are a few ways we look at trends and projections to help solve our clients’ problems.

Patterns from the Past

We frequently commence research projects by reviewing the current population profile and looking for patterns from the past that show how we got here.  A common way we do this is by mining demographic data from the U.S. Census. We access tons of demographic estimates across a wide variety of geographies, such as zip codes, census tracts, towns, cities, counties, metro areas…you get the idea.  The amount of demographic information available is amazing.  While examining demographics is a cost-effective way to start to understand an area or population, there are critical limitations to demographics.  Data are often a year or two old by the time they are available to the public. More importantly, there is a problem with assuming the future will mirror the past. Demographics can get us started, but when we want to peer into the future, we move to other sources.
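As one concrete illustration, the Census Bureau publishes American Community Survey estimates through a public API. The sketch below, in Python, shows roughly how a pull of county populations for Colorado might look; the endpoint and variable code reflect our understanding of the ACS 5-year API and should be verified against the Census documentation before use:

    import requests

    # ACS 5-year estimates: total population (B01003_001E) for every
    # county in Colorado (state FIPS 08). Endpoint and variable code
    # should be double-checked against current Census documentation.
    url = "https://api.census.gov/data/2015/acs/acs5"
    params = {"get": "NAME,B01003_001E", "for": "county:*", "in": "state:08"}

    rows = requests.get(url, params=params).json()
    header, data = rows[0], rows[1:]   # first row is the column header
    for name, population, state_fips, county_fips in data:
        print(f"{name}: {int(population):,}")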

Forecasting the Future

Several data sources project key variables such as population, jobs, age profiles, homes, and transportation.  A good source for population projections in Colorado is the State Demography Office.  From this website, we can align previously collected population data with future projections to provide a nice continuation from past to current to future population trends.  Further, we can break apart the population trend with age profiles that show changes by generation.  We can create such analyses at the state or county level or for any region composed of counties.  For example, below is the population of the Denver Metropolitan Statistical Area (MSA), which is composed of ten counties (Adams, Arapahoe, Broomfield, Clear Creek, Denver, Douglas, Elbert, Gilpin, Jefferson, and Park).  You can see the rate of growth in the Denver metro area is projected to steadily slow, although remain positive, from 2015 to 2050.
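Behind a chart like that is a simple calculation: the compound annual growth rate between projection points. Here is a minimal sketch with hypothetical figures (not the State Demography Office’s actual numbers):

    # Hypothetical population projections, NOT actual SDO figures.
    projections = {2015: 2_900_000, 2025: 3_300_000,
                   2035: 3_650_000, 2050: 4_050_000}

    years = sorted(projections)
    for start, end in zip(years, years[1:]):
        # Compound annual growth rate between projection points.
        cagr = (projections[end] / projections[start]) ** (1 / (end - start)) - 1
        print(f"{start}-{end}: {cagr:.2%} per year")  # slowing but positive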

Sometimes our clients are more interested in understanding the future of job growth, including how many jobs are expected, what type of jobs, and where they will be located.  We use a few different sources to answer these questions.  If we are working in Colorado, we pull down job forecasts by county or region.  For example, here is the forecast for total jobs and job growth rate for Larimer County, Colorado.

Other times, our clients would like more detail than total jobs.  We pull occupation forecast data from the Colorado Department of Labor and Employment.  This website provides current and projected occupations for various geographies, including counties and metro areas.  For example, a law school marketing department might be interested in projections of the number of lawyers working in various areas in Colorado.  The following table shows that the growth rate of lawyers is expected to be slightly higher in the Denver-Aurora Metropolitan Area than in Boulder or Colorado Springs.

These are just a few examples of how we have helped our clients look at the past as well as understand what the future might look like. Of course, many clients have questions that are not so easily answered by secondary data that is already available.  In these cases, we build our own models to measure and predict all sorts of estimates, such as demand for child care, business relocation, and commuting patterns.

If you need to understand what the future might bring to your organization, give us a call and we will see how Corona can help solve your problem.