Category: In Action

Corona wins Gold Peak Award for Market Research

Last night, the Colorado American Marketing Association (CO+AMA) celebrated Colorado’s first-class marketers at their annual Colorado Peak Awards. Corona Insights was honored to take home our fourth Gold Peak Award in the category of Market Research. This year, we won the award for our member engagement and brand assessment for the American College of Veterinary Internal Medicine (ACVIM).

Market research is fundamentally different from the other categories honored at the CO+AMA Peak Awards: it prepares brands and marketing campaigns for take-off. By doing proper research, companies are able to develop a sound marketing strategy that effectively reaches their target audience.

In 2013, we were recognized with a Gold Peak for the research we did for Donor Alliance, which resulted in a marketing campaign that addressed the trends in the data we helped uncover. In 2010, Corona took home the Silver Peak award for our rebranding, and in 2011, Corona won a Gold Peak award for our market research work to inform the University of Denver Sturm College of Law’s strategic plan.

The 26th annual gala was held at Wings Over the Rockies and featured an aerospace theme. Kevin Raines, CEO, and Kassidy Benson, Marketing and Project Assistant, accepted the award on behalf of the firm.


A dose of data for your springtime allergies

Like many people, I have “seasonal allergies.” March and April bring sneezing fits and foggy-brain days for me. Often I get a sore throat and headaches. One year, I went through three strep throat tests and a course of antibiotics before my doctor decided my swollen throat was caused by allergies.

Knowing you’re allergic to “something” isn’t all that helpful.  Sure, you can keep antihistamines on hand and treat the symptoms as they arise, but you have no way to predict when symptoms will hit or minimize your exposure to the allergen.

A common first step in identifying the cause is to do a skin allergy test.  Typically, this involves getting pricked in the back with approximately 20 solutions containing the most common allergens.  The doctor marks off a grid pattern on your skin and each box gets pricked with one item and then you wait and see whether any of the pricked areas swell up or show other signs of allergic reaction.

I’ve had this done, but unfortunately (though not uncommonly) I didn’t react to any of the items tested. That doesn’t mean I’m not allergic to something, just that I’m not allergic to any of the things tested.

Research on myself hadn’t provided any usable information, so recently I turned to external data instead.  Where I live, the city provides daily pollen counts for the highest pollen sources from about February through November.  They don’t provide aggregated data, however, so I had to build my own database of their daily postings.  In the part of town where I live, Ash, Juniper, and Mulberry are the most prevalent allergens during the time when my symptoms are greatest.
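Building that personal database of daily postings can be sketched in a few lines. This is a minimal illustration only; the posting format and numbers below are hypothetical, not the city’s actual data feed.

```python
from collections import defaultdict
from datetime import date

# Hypothetical daily postings: (date, allergen, pollen count)
postings = [
    (date(2014, 3, 28), "Ash", 180),
    (date(2014, 3, 28), "Juniper", 95),
    (date(2014, 4, 1), "Ash", 420),
    (date(2014, 4, 1), "Juniper", 210),
    (date(2014, 4, 1), "Mulberry", 60),
    (date(2014, 4, 5), "Ash", 150),
]

def aggregate(postings):
    """Build a per-allergen history from individual daily postings."""
    history = defaultdict(list)
    for day, allergen, count in postings:
        history[allergen].append((day, count))
    return history

history = aggregate(postings)

# Find the peak day for each allergen
peaks = {a: max(obs, key=lambda x: x[1]) for a, obs in history.items()}
print(peaks["Ash"])  # (datetime.date(2014, 4, 1), 420)
```

Once the daily postings are accumulated this way, it is easy to ask which allergens were highest during the weeks when symptoms were worst.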

Last year, my worst day was April 1.  Even with my allergy pills, I sneezed the entire day.  Here’s what the pollen count showed for my area of town during that time:


Ash pollen counts peaked on April 1.  Juniper and Cottonwood were also relatively high, but Juniper had been fairly high for weeks without me having corresponding symptoms.

This year, my allergies were not so bad at all.  I was out of town for a week in mid-March and for two separate weeks in early and mid-April, which certainly helped, but I only had a few foggy-brain days in late March and mid-April.  The pollen counts for this year:

[Chart: pollen counts for my area of town, 2014]

Ash was lower overall compared to the previous year, and once again seemed to line up best with my symptoms.  This is a correlational analysis, so it doesn’t provide a definitive diagnosis, but because different allergens peak at different times, it offers some ability to rule out other things.  And it’s more efficient (and painless!) compared to the skin test.
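The correlational approach described here can be sketched with a Pearson correlation between a daily symptom-severity score and each allergen’s daily count. The scores and counts below are hypothetical stand-ins for the real diary and pollen data.

```python
def pearson(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

symptoms = [0, 1, 1, 3, 2, 0]  # hypothetical self-rated severity, 0-3
pollen = {
    "Ash":     [40, 120, 150, 420, 260, 30],    # rises and falls with symptoms
    "Juniper": [200, 210, 190, 210, 205, 195],  # high but flat all season
}

for allergen, counts in pollen.items():
    print(allergen, round(pearson(symptoms, counts), 2))
```

An allergen that is consistently high (like Juniper in this toy example) correlates weakly with symptoms that come and go, which is exactly the "rule things out" logic described above. (Python 3.10+ also offers `statistics.correlation`; computing it by hand avoids the version dependence.)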

Armed with this information, I did some additional research on the predominant types of Ash trees where I live (Modesto and Green Ash), and the geographic range for those species.  If I’m planning to travel to Ash-free zones, I can try to schedule those trips for the spring.  And otherwise, I can keep an eye on the pollen counts and try to stay inside with the windows closed when Ash counts are particularly high.

It’s not perfect data, but like most tough decisions, we have to do the best we can with limited data and our powers of educated inference.  Hopefully less sneezing awaits!


How to ask demographic questions

Asking demographic questions (e.g., age, gender, marital status) should be the easiest of survey questions to ask, right? What if I told you that asking someone how old they are will yield different results than asking what year they were born, or that asking a sensitive question (e.g., How much money did you make last year?) in the wrong way or at the wrong time may cause a respondent to abandon the survey?

Considering today’s identity security concerns, social desirability bias, and dipping response rates, asking demographic questions is full of potential pitfalls. Although gathering them might be tricky, demographic data are often critical to revealing key insights. In this post, we present three tips on how best to ask demographic questions on a survey.

  • When to ask demographic questions: Our general rule of thumb is to ask demographic questions at the end of a survey, when survey fatigue is less likely to influence answers. Respondents are more likely to answer demographic questions honestly, and will have a better survey-taking experience, if they have already viewed the other questions in the survey. However, we sometimes find it is best to ask a few easy demographic questions at the beginning, so survey-takers start to feel comfortable and see that their feedback will be useful. For example, when researching a specific place (like a city or county), I like asking participants how long they have lived in that place as the first question on the survey.
  • How you ask the question will determine the type of data you collect: It is important to consider how demographic data will be used in analysis before finalizing a survey instrument; not doing so might make it difficult to answer the study’s research questions. One consideration is whether to collect specific numbers (e.g., Please enter the number of years you have lived in your current home) or provide a range of values and ask participants to indicate which range best describes them (e.g., Have you lived in your current home for less than 1 year, 1-2 years, 3-5 years, etc.?). This decision depends on several factors, the primary one being how the data will be used in analysis. Collecting specific numbers (i.e., continuous data) typically allows for more advanced analyses than responses within a range of numbers (i.e., categorical data), but these advanced analyses may not be needed, or even suitable, to answer your research questions. The sensitivity of the question is also a factor; for example, survey-takers are more likely to indicate that they fall within a range of income levels than to provide their exact income. In our experience, the benefit of collecting continuous income data is not worth the cost of causing participants to skip the income question.
  • Information needed to ensure the results represent the population: It is common for certain groups of people to be more likely to respond to a survey than others. To correct for this, we often calculate and apply weights so that the data more closely resemble the actual study population. If you plan to collect demographic data in order to weight your results, then you will want to match survey question categories with the categories in the data you will use for weighting. For example, if you would like to weight using data from the U.S. Census, then you will want to use the same ranges that are available at the geographic extent of your analysis. Keep in mind that some demographic variables are broken into finer categories for larger geographic areas (e.g., states) and into coarser categories for smaller geographies (e.g., census tracts). All of these factors must be considered before collecting data from the population.
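The weighting idea above can be illustrated with a minimal post-stratification sketch. The population shares and sample counts here are invented for the example; in practice the shares would come from a source like the U.S. Census.

```python
# Hypothetical age mix of the population (e.g., from Census data)
population_share = {"18-34": 0.30, "35-54": 0.40, "55+": 0.30}

# Hypothetical survey returns: older people over-responded
sample_counts = {"18-34": 50, "35-54": 100, "55+": 150}

n = sum(sample_counts.values())

# Weight = population share / sample share, per category
weights = {
    group: population_share[group] / (count / n)
    for group, count in sample_counts.items()
}
print(weights)  # under-represented young respondents weighted up, older down
```

Applying these weights makes the weighted sample's age mix match the population's, which is why the survey's answer categories must line up with the categories in the weighting source.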

Haphazard demographic questions can decrease, rather than increase, the value of a survey. At Corona, we always carefully consider research goals, potential analyses, and the population when we design survey questions. Designing a survey might not be as easy as it appears, and we have many more tips and insights than we could share here. If you would like guidance on how best to ask demographic questions on your survey, contact us. We promise that asking us for guidance is easier than asking demographic questions.

Societal values in music

We stumbled across an interesting data visualization today, which shows how commonly different words and phrases have appeared in Billboard’s Top 100 songs over the past 50 years or so.

As we scroll through the tables, the most obvious pattern is the increase in profanity (described as “foul words”) since 1990.  Prior to that era, it was almost unheard of to include profanity in a popular song, but … times have changed.

However, we find some of the more subtle patterns to be of greater interest. The word “love” has become notably less common since the turn of the century, along with the word “home,” and in their place we now hear more references to “sex” and “money.” Is this a reflection of a less grounded society? Or is it a contributing factor?

Imagine 2020 launch

Yesterday, Denver Mayor Michael Hancock revealed Denver’s first cultural plan in 25 years. This strategic plan, written by Corona Insights in partnership with Denver Arts & Venues, will fuel the next era for our city’s art, culture and creativity. What a treat it was to attend the press conference, see the final printed plan and hear firsthand the excitement felt by city leaders and residents.

Corona leveraged its expertise in strategy, data, and market research to serve its hometown. The result? A community-centered plan designed to achieve a seven-part vision. From finding more art around every corner, to learning over a lifetime, to supporting local artists, Denverites hunger for more art.

What can you do? Go to www.imaginedenver2020.org and check out the plan.  There will be an official release party on Thursday at 6pm. Come early to see a presentation of the research behind the plan by Corona Insights that starts at 4pm.  Then stay tuned to see how you can get involved.

We ART Denver.

Is cluster sampling a good fit for your survey?

Here at Corona, we strive to help our clients maximize the value of their research budgets, often by suggesting solutions that get the job done faster, better, or at a reduced cost. In survey research, developing an accurate sampling frame (i.e., a list of the study population and their contact information) is instrumental for success, but sometimes developing or acquiring a sampling frame can be time consuming, expensive, or impractical. Using a cluster sampling technique is one potential solution that can save time or money while maintaining the integrity of the research and results.

What is cluster sampling? Cluster sampling, as the name implies, groups your total study population into many small clusters, typically defined by a proximity variable. For example, street blocks in a neighborhood are clusters of households and residents; schools represent clusters of employees who work in the same school district. The main difference between simple random sampling and cluster sampling is that instead of selecting a random sample of individuals, you select a random sample of clusters. This approach provides a representative sample that is appropriate for inferential statistics that draw conclusions about the broader population.

How to use cluster sampling: First, make sure the nature of your research question is compatible with cluster sampling; if your analysis will require completely independent respondents, then this is probably not the best approach. Second, consider the configuration of your population; you must be able to group people by defined boundaries, such as city blocks or office building floors. After grouping your population into small clusters, use a random number generator to draw a random sample of clusters (rather than a sample of individuals). Typically, every individual from the selected clusters is sampled, although you can infuse your sampling plan with other techniques such as stratified or systematic sampling. As long as 1) you can match every person in the population with a cluster, 2) you have an appropriate person-to-cluster ratio, and 3) you have a complete list of clusters, you can use these groupings as a sampling shortcut.
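The steps above can be sketched in a few lines of code. The clusters here are hypothetical city blocks, each containing a list of households; the block and household names are invented for the example.

```python
import random

# Hypothetical sampling frame of clusters: 50 city blocks of 8 households each
blocks = {
    f"block-{i}": [f"household-{i}-{j}" for j in range(8)]
    for i in range(50)
}

rng = random.Random(42)  # seeded so the draw is reproducible

# Draw a random sample of clusters, not individuals
sampled_blocks = rng.sample(sorted(blocks), k=10)

# Every individual in a selected cluster is surveyed
sample = [hh for b in sampled_blocks for hh in blocks[b]]
print(len(sample))  # 10 blocks x 8 households = 80
```

Note that the randomness lives entirely in which blocks are drawn; within a selected block, everyone is included, which is what distinguishes this from a simple random sample of households.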

When might cluster sampling be useful? Cluster sampling is useful when you don’t have enough resources to develop a complete sampling frame or when it takes significant effort to distribute or collect surveys (such as going door-to-door).  For example, if we wanted to survey bus riders within a city, it would be impractical to develop a list of all bus riders on any given day, let alone to find our random sample of individuals and give them all surveys.  Cluster sampling allows us to select a random sample of bus routes and times, and then survey everyone on those buses.  Although individual clusters may not be representative of the population as a whole, when you select enough clusters at random, your sample as a whole will be representative.

Potential problems: Cluster sampling should be applied with caution, and there are some disadvantages compared to a simple random approach. It is better to sample more, smaller clusters than fewer, larger clusters. For example, for a nationwide survey it is better to cluster by counties than by states. If your clusters are too few and too large, you might draw a sample that does not adequately represent the population. The size and homogeneity of each cluster and your desired final sample size also impact the viability of cluster sampling.

At Corona, we start fresh with each research project, and we are full of solutions that can help maximize the value of your research budget and resources. If you are struggling with how to reach your population of interest, give us a call; maybe we can shed some light on the situation.

Strategic planning and market research fuel IMAGINE 2020

With tremendous pride and a full heart, Karla Raines presented the Denver Commission on Cultural Affairs (DCCA) with IMAGINE 2020: Denver’s Cultural Plan at its January 2014 meeting. The commission had been a strong proponent of a refreshed cultural plan for Denver. These volunteers served as Corona’s creative muse throughout the 15-month process. They held firm in their belief that Denver needed a research-driven plan that was strategic by design and held forth a bold vision for Denver. Their insistence that the process be community-driven resulted in a cultural plan that speaks to the aspirations and expectations of Denver residents.

Corona was happy to host the commission’s monthly meeting in its downtown Denver office (pictured below). A celebratory toast and freshly baked cookies from Maggie and Molly’s Bakery capped off the event.

Stay tuned! IMAGINE 2020 will be revealed to the public in early March. Visit ImagineDenver2020.org for more information.




Pictured:  Ginger White, Deputy Director of Denver Arts & Venues, addresses the Denver Commission on Cultural Affairs, A&V staff and Team Corona.



State of Our Cities & Towns 2014: Transportation

We were honored to work with the Colorado Municipal League (CML) for the fifth year on their annual report (PDF), State of Our Cities and Towns. Starting this year, the organization has decided to do a deep investigation into a different issue each year, beginning with transportation, and you can expect future reports on other important issues facing towns and cities. To read a quick overview of transportation issues in Colorado, you can view their annual summary report (PDF).

There are nearly 16,000 miles of city streets in Colorado … and every mile is essential to deliver groceries to the store on the corner, get children to school, connect to work, and home. 

To complement the report, CML also produced an easy-to-digest video highlighting the key themes from this year’s survey findings. The short video is a great way to communicate the important information to a statewide audience.



Quick tip for assessing research quality

I serve on an advisory committee for a college program that trains organizational leaders, and at our last meeting there was a discussion about the curriculum for a research class. The committee chair looked at me and said, “Hey, you work in this field. What’s the most valuable research topic we can teach leaders?”

I’ve waited for that question my whole life, or at least for the twenty years I’ve worked as a research consultant. My answer was swift and enthusiastic: “the ability to tell good research from bad research.”

Not everyone is a researcher

It’s great to make data-driven decisions, and in the modern world there’s more data than ever to help us. But if it’s going to add value, that data obviously needs to be correct, and unfortunately, bad research is more common than you think. The proliferation of do-it-yourself data collection tools, such as SurveyMonkey, along with the increased availability of pre-digested data online has made everyone a researcher, whether he/she has the skills or not. Marketing firms, technology firms, grant-writing firms and even money-conscious executive directors now also do research in addition to their core jobs. Unfortunately for everyone, that often results in research that is unreliable and inaccurate. If you base strategic or tactical decisions on bad data, then you’ll probably make some bad decisions even if your intent is good.

Good vs. bad research


There’s a simple rule I preach often to clients: It’s better to have good research than no research, but it’s better to have no research than bad research.

So how do you tell the good research from the bad research? When asked to write this article, I struggled with it a bit because I could write a book on how to critique a research report and it still wouldn’t cover everything a good research reader needs to know. So what value can I provide in 1,000 words or less?

The more I thought about it, though, the more I realized there’s one clue that almost always exists in bad research, and it’s easy to recognize if you know to look for it. When you read a research report, just ask yourself the following question:

“Did the researcher study the right population, or did he/she just study a convenient population?”

This is far and away the most common problem I see with weak research. A wannabe researcher may say, “We need to do a survey to see what people think about this issue. I know–let’s send a SurveyMonkey survey to the people in my Outlook contacts!” Or he/she will say, “Let’s do a focus group. I’ll get some of my friends together and we’ll see what they think.”

Well, the good news is this is a cheap way to do things. The bad news is you get what you pay for–meaningless results that may actually take you in the wrong direction. The key is to recognize who the research is supposed to represent and who it actually represents. If they’re different, you may have a problem.

As an example, I’ll walk through a couple of case studies of research-gone-wrong incidents that have happened over the past few years.

Case in point #1

A local grantwriting firm was hired a few years back to conduct a survey of nonprofits for a governmental agency. Instead of drawing a sample of area nonprofits (the proper approach), it sent a do-it-yourself survey to the nonprofits on its marketing list. So what’s wrong with this? Plenty. First, this list was undoubtedly skewed toward larger and older organizations, because that’s who usually hires consultants. Second, is that grantwriter’s marketing list skewed toward a particular type of nonprofit it typically works with, such as human services or arts or animals? Is it skewed toward foundations or agencies or organizations that aggressively pursue grant funding?

A savvy research reader will ask, “Would we have gotten different results if we had hired a different consultant and surveyed that second consultant’s marketing list instead?” Probably. So if that’s true, what did the grantwriter really measure? Why not just draw a random sample of nonprofits and do it right?

Case in point #2

In another example with a happier ending, an agency wished to do a survey about health issues. It had a marketing firm on board, and the marketing firm had dollars in its eyes. It offered to conduct a survey, despite having no expertise in the field, and made plans to field a do-it-yourself survey by sending links to individuals in its Outlook Contacts. So what was wrong with that? Pretty much everything. Who’s going to be in the Outlook Contacts for a marketing firm? I can pretty much guarantee it’s going to be working-age people, likely skewing young. It’s going to be people who work in professional services like…you know…marketing. And it’s probably going to consist mostly of people with salaried positions who have college degrees. How many oilfield workers are going to be in that contact list? Fast food workers? Retirees?

It would have been a disaster, but fortunately the client was savvy in this case and sought outside advice. The do-it-yourself survey was canceled and the money was spent in better ways.

No research is better than bad research

Obviously, there are many different types of research and many other tips I could offer for critiquing research. However, asking yourself a couple of simple questions can go a long way toward telling you how trustworthy a research report is: Who are we trying to study, and who exactly did we study? While there are always some compromises that must be made, those compromises should be minimized to the greatest extent possible, and amateur researchers often miss the boat on this simple but powerful truth.

So the next time you need to make a data-driven decision, think about this as a clue to the quality of your data. Because after all, it’s better to have good research than no research, but it’s better to have no research than bad research.

This blog post was originally featured on Causeplanet.org.

343 Travel Magnets and Counting

At Corona Insights, we have a quirky collection of magnets. Hundreds of magnets cover every square inch of ferromagnetic material in our kitchen. Being data scientists, we decided our collection needed to be quantified. Below is a detailed analysis of the origins of our collection.

In the spirit of the season, we are giving away magnets for the holiday. To get one for your own refrigerator collection, please email us.


Corona’s Magnet Collection

USA Magnets