Sign up for the Corona Observer quarterly e-newsletter

Stay current with Corona Insights by signing up for our quarterly e-newsletter, The Corona Observer.  In it, we share with our readers insights into current topics in market research, evaluation, and strategy that are relevant regardless of your position or field.

View our most recent newsletter, from January 2017, here.


We hope you find it to be a valuable resource. Of course, if you wish to unsubscribe, you may do so at any time via the link at the bottom of the newsletter.


Co-creating Insights through Participatory Research

“We both know some things; neither of us knows everything. Working together we will both know more and we will both learn more about how to know”

~Patricia Maguire, in Doing Participatory Research

Do you need to hear from more than the usual suspects?  Do you want your research to engage and empower people, rather than just study them like lab rats? Are you willing to step out of your comfort zone to create transformational research that provokes action?

If you answered yes to these questions, you might be interested in embarking on participatory research…and Corona can help!

Participatory research is a collaborative research approach that generates shared knowledge.  The intention is to research with and for participants, rather than about them, and the process is as valuable as the results.

At its heart, participatory research involves engaging with a group of people, typically those who have experienced disenfranchisement, alienation, or oppression. Researchers are participants and participants are researchers; the research questions, methodologies, and analyses are co-created. Embedded in the process are cycles of new questions, reflections, negotiations, and research adjustments. In participatory research, knowledge and understanding are generated rather than discovered.

Language and context are keys to success. The language of participatory research can be informal, personal, and relative to the situation. Safe spaces are created so that participants and researchers can speak freely and honestly, allowing for greater authenticity and reflection of reality. The contexts of the research, including the purpose, geography, and even funding source and sponsors, are made overt and are relevant to the interpretation.

Participatory research is not the most efficient process; it takes extra time to mutually align project goals and specify research questions.  Additionally, participatory research does not assume that the results are unbiased.  Indeed, it asserts that social research cannot avoid the bias that too often manifests unconsciously and goes unacknowledged. Instead, participatory researchers describe and accept their biases, drawing conclusions through this lens.

Why conduct participatory research?  One reason is that the risks are mutual and the results benefit the participants just as much as they benefit the researcher or sponsor. Results can also provoke changes such as increased equity, community empowerment, and social emancipation. When done appropriately, participatory research gives a strong and authentic voice to the participants, and hopefully, a greater awareness of their situation will lead to positive transformational changes.

State of Our Cities and Towns – 2017

For many years, Corona has partnered with the Colorado Municipal League to conduct the research that is the foundation of their annual State of Our Cities and Towns report. CML produced the following short video, specifically for municipal officials, about the importance of investing in quality of life:

Learn more, view the full report, and watch additional videos on the State of Our Cities website.

Where to next? Election polling and predictions

The accuracy of election polling is still being heavily discussed, and one point that is worth some pondering was made by Allan Lichtman in an NPR interview the day after the election.  What he said was this:

“Polls are not predictions.”

To some extent this is a semantic argument about how you define prediction, but his point, as I see it, is that polls are not a model defining what factors will drive people to choose one party, or candidate, over another.  Essentially, polls are not theory-driven – they are not a model of “why,” and they do not specify, a priori, what factors will matter.  So, polling estimates rise and fall with every news story and sound bite, but a prediction model would have to say something up front like “we think this type of news will affect behavior in the voting booth in this way.” Lichtman’s model, for example, identifies 13 variables that he predicts will affect whether the party in power continues to hold the White House, including whether there were significant policy changes in the current term, whether there was a big foreign policy triumph, whether the President’s party lost seats during the preceding mid-term election, and so on.
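A keys-style model like Lichtman's can be sketched in a few lines of Python. The key names below are illustrative stand-ins rather than his exact criteria; the six-key threshold, however, is his published rule (the incumbent party is predicted to lose when six or more of the 13 keys turn false):

```python
def incumbent_party_wins(keys):
    """Keys-style prediction: the party holding the White House is
    predicted to keep it unless six or more keys turn false.
    (The six-key threshold is Lichtman's published rule; the key
    names passed in are illustrative, not his exact wording.)"""
    false_keys = sum(1 for value in keys.values() if not value)
    return false_keys < 6

# Hypothetical election year with five keys against the incumbent party:
keys = {
    "no_serious_primary_contest": True,
    "incumbent_is_running": False,
    "no_third_party_challenge": True,
    "strong_short_term_economy": False,
    "strong_long_term_economy": False,
    "major_policy_change": True,
    "no_social_unrest": True,
    "no_major_scandal": True,
    "no_foreign_policy_failure": False,
    "major_foreign_policy_success": False,
    "charismatic_incumbent_candidate": True,
    "uncharismatic_challenger": True,
    "midterm_gains": True,
}
print(incumbent_party_wins(keys))  # True: only five keys are false
```

Note what such a model does that a poll does not: it commits up front to which factors matter and how they combine, so its prediction doesn't drift with each news cycle.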

Polls, in contrast, are something like a meta prediction model.  Kate made this point as we were discussing the failure of election polls:  polls are essentially a sample of people trying to tell pollsters what they predict they will do on election day, and people are surprisingly bad at predicting their own behavior.  In other words, each unit (i.e., survey respondent) has its own, likely flawed, prediction model, and survey respondents are feeding the results of those models up to an aggregator (i.e., the poll).  In this sense, a poll-as-prediction is sort of like relying on the “wisdom of the crowd” – but if you’ve ever seen what happens when someone uses the “ask the audience” lifeline on Who Wants to Be a Millionaire, you know that is not a foolproof strategy.

Whether a model or a poll is better in any given situation will depend on several factors.  A model requires deep expertise in the topic area, and depending on knowledge and available data sources, it will only capture some portion of the variance in the predicted variable.  A model that fails to include an important predictor will not do a great job of predicting.  Polls are a complex effort to ask the right people the right questions to be able to make an accurate estimate of knowledge, beliefs, attitudes, or behaviors.  Polls have a variety of sources of error, including sampling error, nonresponse bias, measurement error, and so on, and each of those sources contributes to the accuracy of estimates coming out of the poll.

The election polling outcomes are a reminder of the importance of hearing from a representative sample of the population, and of designing questions with an understanding of psychology.  For example, it is important to understand what people can or can’t tell you in response to a direct question (e.g., when are people unlikely to have conscious access to their attitudes and motivations; when are knowledge or memory likely to be insufficient), and what people will or won’t tell you in response to a direct question (e.g., when is social desirability likely to affect whether people will tell you the truth).

This election year may have been unusual in the number of psychological factors at play in reporting voting intentions.  There was a lot of reluctant support on both sides, which suggests conflicts between voters’ values and their candidate’s values, and for some, likely conflicts between conscious and unconscious leanings.  Going forward, one interesting route would be for pollsters to capture various psychological factors that might affect accuracy of reporting and incorporate those into their models of election outcomes.

Hopefully in the future we’ll also see more reporting on prediction models in addition to polls.  Already there’s been a rash of data mining in an attempt to explain this year’s election results.  Some of those results might provide interesting ideas for prediction models of the future.  (I feel obliged to note: data mining is not prediction.  Bloomberg View explains.)

Elections are great for all of us in the research field because they provide feedback on accuracy that can help us improve our theories and methods in all types of surveys and models.  (We don’t do much traditional election polling at Corona, but a poll is essentially just a mini-survey – and a lot of the election “polls” are, in fact, surveys.  Confused?  We’ll try to come back to this in a future blog.) We optimistically look forward to seeing how the industry adapts.

Subpopulations in Research

As I’m sure you know, we do a lot of survey research here at Corona. When we provide the results, we try to build the most complete picture for our clients, and that means looking at the data from every possible angle. One of the most effective ways to do this is by looking at subpopulations.

What is a subpopulation?

A subpopulation is essentially a fraction or part of the overall pool of the population you are surveying. A subpopulation can be defined many ways. For example, some of the most common subpopulations to examine in research are gender (e.g., male and female), age (e.g., <35, 35-54, 55+), race/ethnicity, location, etc.  You can effectively define a subpopulation using whatever criteria you like; for instance, you can have a subpopulation based on what type of dessert is preferred – those who like cake and the heathens who don’t.

What does it mean to have subpopulations?

When you examine survey results by subpopulations, at a basic level respondents are simply split into the subpopulations or groups (commonly called breakouts) you defined. After being broken into these groups, the results for the survey are compiled for each individual group separately. For example, take the following survey question:

  1. About how many hours a week do you watch sports?
    • 1 hour or less
    • 2 to 4 hours
    • 5 to 7 hours
    • 8 hours or more

The results would typically have two components: top-level results (results compiled for all respondents to the survey) and breakouts (results by group for any subpopulations that have been defined). For the above example question, the results might look something like this:

In this completely made-up example, you can see the benefit of having subpopulations. While 21 percent of all respondents watch five to seven hours of sports a week, male respondents account for a hefty chunk of that group: 26 percent of males watch that much sports, compared to only 16 percent of females. Breaking out questions by subpopulations allows you to examine the data more closely and helps uncover those gems of information.
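The breakout mechanics described above are easy to sketch in code. This example uses a small, entirely hypothetical set of responses (the tallies here are not the percentages quoted in the text):

```python
from collections import Counter, defaultdict

# Hypothetical responses: (subpopulation, answer category)
responses = [
    ("Male", "5 to 7 hours"), ("Female", "1 hour or less"),
    ("Male", "2 to 4 hours"), ("Female", "5 to 7 hours"),
    ("Male", "5 to 7 hours"), ("Female", "2 to 4 hours"),
    ("Male", "8 hours or more"), ("Female", "1 hour or less"),
]

def breakout(responses):
    """Tally each answer overall and within each subpopulation,
    expressed as a percentage of that group's respondents."""
    totals = defaultdict(Counter)
    for group, answer in responses:
        totals["Overall"][answer] += 1   # top-level results
        totals[group][answer] += 1       # breakout results
    return {
        group: {answer: 100 * n / sum(counts.values())
                for answer, n in counts.items()}
        for group, counts in totals.items()
    }

results = breakout(responses)
print(results["Overall"]["5 to 7 hours"])  # share of all respondents
print(results["Male"]["5 to 7 hours"])     # share of male respondents
```

Each group's percentages are computed on that group's own base, which is exactly why a breakout can reveal a gap that the top-level number hides.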

Getting the most out of your survey

Being prepared to utilize subpopulations in your survey analysis means putting your best foot forward and maximizing your investment. Many subpopulations are constructed using questions commonly asked in surveys (gender, age, etc.), but some questions might not otherwise be asked without the foresight of planning to break respondents into subpopulations. For example, a nonprofit might be building a questionnaire to survey their patrons on their messaging; by simply asking whether a respondent has donated to the organization, they can examine the survey results of donors separately from all patrons. The survey can now not only better inform messaging for the organization overall, but also allow them to better target and communicate with donors specifically.

Conducting a survey can be a challenging experience, so the more you can get out of a single survey, the better. The next time you are designing a survey, ask around your workplace to see if a few questions can be added to better utilize the information you’re collecting. Now you’re one step closer to conducting the perfect survey!

Does This Survey Make Sense?

It’s pretty common for Corona to combine qualitative and quantitative research in our projects.  We will often use qualitative work to inform what we need to ask about in quantitative phases of the research, or use qualitative research to better understand the nuances of what we learned in the quantitative phase.  But did you know that we can also use qualitative research to help design quantitative research instruments through something called cognitive testing?

The process of cognitive testing is actually pretty simple, and we treat it a lot like a one-on-one interview.  To start, we recruit a random sample of participants who would fit the target demographic for the survey.  Then, we meet with the participants one-on-one and have them go through the process of taking the survey.  We then walk through the survey with them and ask specific follow-up questions to learn how they are interpreting the questions and find out if there is anything confusing or unclear about the questions.

In a nutshell, the purpose of cognitive testing is to understand how respondents interpret survey questions and to ultimately write better survey questions.  Cognitive testing can be an effective tool for any survey, but it is particularly important for surveys on topics that are complicated or controversial, or when the survey is distributed to a wide and diverse audience.  For example, you may learn through cognitive testing that the terminology you use internally to describe your services is not widely used or understood by the community.  In that case, we will need to simplify the language that we are using in the survey.  Or, you may find that the questions you are asking are too specific for most people to know how to answer, in which case the survey may need to ask higher-level questions or include a “Don’t Know” response option on many questions.  It’s also always good to make sure that the survey questions don’t seem leading or biased in any way, particularly when asking about sensitive or controversial topics.

Not only does cognitive testing allow us to write better survey questions, but it can also help with analysis.  If we have an idea of how people are interpreting our questions, we have a deeper level of understanding of what the survey results mean.  Of course, our goal is to always provide our clients with the most meaningful insights possible, and cognitive testing is just one of the many ways we work to deliver on that promise.

What do you do for a living?

‘Tis the season for holiday get-togethers and for the time-honored question of, “So, what exactly do you do for a living?” I won’t speak for all my fellow coworkers or those who loosely fall within our industry, but it’s a perpetual question made worse because our jobs don’t fall into what I call “the bucket of childhood career fair jobs.” When you say you’re a firefighter, nurse, airline pilot, and so on, people know instantly what you do (ok, they probably don’t really know but they think they know, and that’s what matters here). My wife is a veterinarian. I tell people that and it instantly clicks. (Note, I actually say she’s a small animal surgeon specializing in oncology cases, and I often get puzzled looks.)

So, what do I (we) do? In fact, if you pose that question around our office, you’re likely to get different answers, even setting aside the fact that our job titles, duties, and specializations vary a little. Ask our clients that question and whatever we did for them last will likely be their response.

For any given day, project, or client, we may be a market research firm, strategic thinkers, data analysts, consultants, evaluators, or social scientists, to name a few. Easy enough to explain, especially with a cocktail in hand at your aunt’s house, right?

Or, some may be tempted to say that we facilitate retreats or do surveys and focus groups. Technically not incorrect, but it’s like defining Colorado by the mountains. Not wrong, but it really misses a lot of the great aspects of the state.

I always instruct new hires at Corona to start broad and then home in on what is relevant to the person you’re talking to. Perhaps, “We’re a research and consulting firm specializing in the nonprofit and government sectors,” followed by, “for example, we’ve done [something more concrete that they may be able to grasp].” Even that probably isn’t perfect, but that’s why we have a holiday season every year to try again.

Writing an RFP

So you’ve finally reached a point where you feel like you need more information to move forward as an organization, and, even more importantly, you’ve been able to secure some amount of funding to do so. Suddenly you find yourself elbow deep in old requests for proposals (RFPs), both from your organization and others, trying to craft an RFP for your project. Where do you start?

We write a lot of proposals in response to RFPs at Corona, and based on what we’ve seen, here are a few suggestions for what to include in your next RFP:

  • Decision to be made or problem being faced. One of the most important pieces of information that is often difficult to find, if not missing from an RFP, is what decision an organization is trying to make or what problem an organization is trying to overcome. Instead, we often see RFPs asking for a specific methodology, while not describing what an organization is planning to do with the information. While specifying the methodology can sometimes be important (e.g., you want to replicate an online survey of donors, you need to perform an evaluation as part of a grant, etc.), sometimes specifying it might limit what bidders suggest in their proposals.

Part of the reason why you hire a consultant is to have them suggest the best way to gather the information that your organization needs. With that in mind, it might be most useful to describe the decision or problem that your organization is facing in layman’s terms and let bidders propose different ways to address it.

  • Other sources of data/contacts. Do you have data that might be relevant to the proposals? Did your organization conduct similar research in the past that you want to replicate or build upon? Do you have contact information for people who you might want to gather information from for this project? All these might be useful pieces of information to include in an RFP.
  • Important deadlines. If you have key deadlines that will shape this project, be sure to include them in the RFP. Timelines can impact proposals in many ways. For example, if a bidder wants to propose a survey, a timeline can determine whether to do a mail survey, which takes longer, or a phone survey, which is often more expensive but quicker.
  • Include a budget, even a rough one. Budget is the number one question I see people ask about an RFP. While stating a budget might scare off a more expensive firm, it is more likely that including one helps firms propose tasks that are financially feasible.

Requesting proposals can be a useful way to get a sense of what a project might cost, which might be useful if you are trying to figure out how much funding to secure. If so, it’s often helpful to simply state in your RFP that you’re considering different options and would like pricing for each recommended task, along with the arguments for why it might be useful.

  • Stakeholders. Who has a stake in the results of the project and who will be involved in decisions about the project?  Do you have a single internal person that the contractor will report to or perhaps a small team?  Are there others in the organization who will be using the results of the project?  Do you have external funders who have goals or reporting needs that you hope to be met by the project?  Clarifying who has a stake in the project and what role they will play in the project, whether providing input on goals, or approving questionnaire design, is very helpful. It is useful for the consultant to know who will need to be involved so they can plan to make sure everyone’s needs are addressed.

Writing RFPs can be daunting, but they can also be a good opportunity for thinking about and synthesizing an important decision or problem into words. Hopefully these suggestions can help with that process!

Ensuring your graphs are honest

For our firm, the very idea of fake news goes against our mission to:

Provide accurate and unbiased information and counsel to decision makers.

The realm of fake news spans the spectrum of misleading to outright lying. It is the former that got us thinking about how graphs are sometimes twisted to mislead, while not necessarily being wrong.

Below are four recommendations to prevent misinterpretation when making your own graphs (or things to look for when interpreting those seen in the news).

1. Use the same scale across graphs that will be compared.

Showing similar data for different groups or from different times? Make the graphs the same scale to aid easy, accurate comparisons.

Take the examples below. Maybe you have two graphs, even on separate pages, used to illustrate the differences between Groups 1 & 2. If someone were to look between them to see differences over time, the visual wouldn’t depict that 2016 saw a doubling of the proportion who “agreed.”  The bar is slightly longer, but not twice as long.


Sure, including axis and data labels helps, but the benefit of a graph is that you can quickly see the result with little extra interpretation. Poorly designed graphs, no matter the labeling, can still mislead.

2. Start the graph origin at zero.

Similar to above, not starting the graph at a zero point can cause differences to be taken out of scale.

In the below examples, both graphs show exactly the same data but start from different points, making the differences in the first graph look proportionately larger than they are.
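Both of these fixes are one-liners in most charting tools. Here's a minimal matplotlib sketch (the group names and data are invented for illustration): `sharey=True` forces both panels onto one common scale, and `set_ylim(0, 100)` pins the origin at zero.

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import matplotlib.pyplot as plt

# Hypothetical "percent who agree" data for two groups
group1 = {"2015": 22, "2016": 44}
group2 = {"2015": 30, "2016": 35}

# sharey=True gives every panel the same y-axis scale
fig, axes = plt.subplots(1, 2, sharey=True)
for ax, (name, data) in zip(axes, [("Group 1", group1), ("Group 2", group2)]):
    ax.bar(list(data.keys()), list(data.values()))
    ax.set_title(name)

axes[0].set_ylabel("Percent who agree")
axes[0].set_ylim(0, 100)  # start at zero; sharey propagates the limit
fig.savefig("comparison.png")
```

With a zero origin and a shared scale, a bar twice as tall actually represents a value twice as large, so Group 1's doubling is visible at a glance.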


3. Convey the correct magnitude.

Sometimes, a seemingly small amount may have significant meaning (think tenths of a degree in global temperatures), while sometimes a large amount may not (think a million dollars within the Federal budget).

Choosing the proper graph type, design, and what to actually graph all make a difference here.

For example, when graphing global temperatures, graphing the differences may best accentuate the magnitude rather than graphing the actual temperatures, where the relatively small-looking differences fail to communicate the finding.

4. Make it clear who is represented by the data.

Does this data represent the entire population? Only voters? Only likely voters? Only those who responded “yes” to a previous question? Only those home on a Thursday night with a landline? (If it’s the latter, save your time and just ignore it completely.)

Usually, the safest bet is to show results based on the whole population, even if the question was only asked of a subset of people due to a skip pattern. This is easiest for people to mentally process and prevents accidentally interpreting the proportion as the whole.

For instance, if 50% of people who were aware of Brand A had seen an ad for the brand, but only 10% of the population were aware of Brand A in the first place (and, therefore, were asked the follow-up about ads), then in reality only about 5% of the population (50% of 10%) has seen the ad. To the casual reader, that subtle difference in who the results represent could be significant.
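The rebasing arithmetic is just a multiplication, but it's worth making routine whenever a question sits behind a skip pattern. A tiny helper (hypothetical, for illustration):

```python
def rebase(pct_of_subgroup, subgroup_pct_of_population):
    """Convert a percentage reported among a subgroup (e.g., respondents
    who passed a skip pattern) into a percentage of the whole population."""
    return pct_of_subgroup * subgroup_pct_of_population / 100

# 50% of the 10% who were aware of Brand A had seen the ad:
print(rebase(50, 10))  # 5.0 -> five percent of the whole population
```

Labeling the graph with the rebased figure (or at minimum stating the base clearly) keeps a subgroup result from masquerading as a population result.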

This, of course, isn’t our first time writing about graph standards. Check out some of our other blogs on the subject here:

Graphs: An effective tool, but use them carefully

Visualizing data: 5 Best practices

Predictable Unknowns

Have you ever needed to know what the future will look like?

To create great strategic plans, our clients need to understand what their operating environment will look like in five, ten, or thirty years.  They want to know how the population, jobs, markets, homes, and infrastructure are expected to change. We help these clients by providing reliable projections, often through analysis of preexisting data.  Although we have no crystal ball that tells us exactly what the future holds, we can point clients in the right direction.  Here are a few ways we look at trends and projections to help solve our clients’ problems.

Patterns from the Past:

We frequently commence research projects by reviewing the current population profile and looking for patterns from the past that show how we got here.  A common way we do this is by mining demographic data from the U.S. Census. We access tons of demographic estimates across a wide variety of geographies, such as zip codes, census tracts, towns, cities, counties, metro areas…you get the idea.  The amount of demographic information available is amazing.  While examining demographics is a cost-effective way to start to understand an area or population, there are critical limitations to demographics.  Data are often a year or two old by the time they are available to the public. More importantly, there is a problem with assuming the future will represent the past. Demographics can get us started, but when we want to peer into the future, we move to other sources.

Forecasting the Future

Several data sources project key variables such as population, jobs, age profiles, homes, and transportation.  A good source for population projections in Colorado is the State Demography Office.  From this website, we can align previously collected population data with future projections to provide a nice continuation from past, to current, to future population trends.  Further, we can break apart the population trend with age profiles that show changes by generation.  We can create such analyses at the state or county levels or for any region made up of counties.  For example, below is the population of the Denver Metropolitan Statistical Area (MSA), which is made up of ten counties (Adams, Arapahoe, Broomfield, Clear Creek, Denver, Douglas, Elbert, Gilpin, Jefferson, and Park).  You can see the rate of growth in the Denver Metro is projected to slow steadily, while remaining positive, from 2015 to 2050.

Sometimes our clients are more interested in understanding the future of job growth, including how many jobs are expected, what type of jobs, and where they will be located.  We use a few different sources to answer these questions.  If we are working in Colorado, we pull down job forecasts by county or region.  For example, here is the forecast for total jobs and job growth rate for Larimer County, Colorado.

Other times, our clients would like more detail than total jobs.  We pull occupation forecast data from the Colorado Department of Labor and Employment.  This website provides current and projected occupations by various geographies, including counties and metro areas.  For example, a law school marketing department might be interested in projections of the number of lawyers working in various areas in Colorado.  The following table shows that the growth rate of lawyers is expected to be slightly higher in the Denver-Aurora Metropolitan Area than in Boulder or Colorado Springs.

These are just a few examples of how we have helped our clients look at the past as well as understand what the future might look like. Of course, many clients have questions that are not so easily answered by secondary data that is already available.  In these cases, we build our own models to measure and predict all sorts of estimates, such as demand for child care, business relocation, and commuting patterns.

If you need to understand what the future might bring to your organization, give us a call and we will see how Corona can help solve your problem.

Research Without Borders

While Corona Insights is based in Denver, we have conducted research studies in nearly every state in the U.S. as well as many nationwide studies.  We certainly have an expertise in understanding what makes Colorado tick, but the fact of the matter is that today’s technological landscape allows us to effectively design and manage research studies all over the world from our Denver headquarters.  Here is a brief overview of some of the tried-and-true methodologies that can be conducted from anywhere, as well as some of the more innovative methodologies that have expanded our ability to conduct research remotely in recent years.

  • Mail Surveys – Though we at Corona are experts in how to effectively design, manage, and analyze the results of market research, we often rely on partners to assist with some of the fieldwork required for market research. For example, when conducting mail surveys, we rely on a traditional direct mail services vendor to print thousands of surveys and mail them to respondents.  In most cases, we use our long-term, Denver-based partner to provide these services since first-class mail rates are the same no matter where you are sending to and from.  However, should there ever be a need to have a local presence for a mail survey, we are quite comfortable in researching and identifying additional partners in other markets as needed.
  • Telephone Surveys – Similar to mail surveys, Corona rarely conducts the actual phone calls required for a telephone survey in-house. Instead, we rely on phone room vendors to supply the manpower necessary to make thousands of phone calls and complete hundreds of interviews with respondents.  Again, there is very little need to have a local presence for a telephone survey since long-distance calling rates are a negligible cost compared to the rest of a telephone survey. However, if we need to have a local presence, we have partners with locations in nearly every state in the U.S.
  • Online Surveys – As one might expect, online surveys are very simple to conduct anywhere in the U.S. (or even the world) from our main office. For our projects that utilize internal lists of customers provided by our clients, this process is straightforward.  Even when we don’t have lists of customers, however, Corona has relationships with a number of worldwide online panel vendors that have databases of survey respondents of every shape and size.  Want to conduct a survey of people with an interest in fitness in China?  Corona can do that from right here in Denver.
  • Focus Groups – All of the tasks necessary to conduct a focus group (from designing the group structure, to recruiting participants, to conducting the groups, to analyzing the results) can be done anywhere. When traditional, in-person focus groups are desired, it is relatively easy to work with local focus group facilities to host the group and simply fly in one of our experienced moderators to conduct the group.  However, when being there face-to-face isn’t necessary, innovative technologies such as online video focus groups allow us to replicate the interaction of a traditional focus group without having to go anywhere.  Similar to a video-conference using software such as Skype, we use specialized software that allows us to talk with participants via video chat, complete with the ability to have “invisible” observers, interactive activities, and much more.

These really just scratch the surface of Corona’s toolkit of methodologies.  Depending on the project, we might recommend online discussion boards, telephone interviews, video ethnographies, and more to best balance the ability to gather solid, actionable data about a topic and the budget required to do so.  No matter where your customers or stakeholders are in the world, Corona can help you understand them from right here in Denver.