Sign up for the Corona Observer quarterly e-newsletter

Stay current with Corona Insights by signing up for our quarterly e-newsletter, The Corona Observer.  In it, we share insights into current topics in market research, evaluation, and strategy that are relevant regardless of your position or field.

We hope you find it to be a valuable resource. Of course, if you wish to unsubscribe, you may do so at any time via the link at the bottom of the newsletter.


Strategy in an era of unpredictability

The strategic planning process many organizations use today was created decades ago, in a more predictable time. By nature, the process rests on an underlying assumption: that changes and trends in the external environment can be predicted with some certainty.

So, what about changes that can’t be predicted, or that are unprecedented in our lifetime?  We only have to look at our smartphones for a glimpse into the near future. As a recent piece by the World Economic Forum noted, we are in the midst of the Fourth Industrial Revolution, and what many refer to as the Digital Revolution.

This one is a game changer, my friends.

 “There are three reasons why today’s transformations represent not merely a prolongation of the Third Industrial Revolution but rather the arrival of a Fourth and distinct one: velocity, scope, and systems impact. The speed of current breakthroughs has no historical precedent. When compared with previous industrial revolutions, the Fourth is evolving at an exponential rather than a linear pace. Moreover, it is disrupting almost every industry in every country. And the breadth and depth of these changes herald the transformation of entire systems of production, management, and governance.”

How to set strategy now?

  1. Drop the SWOT – this tool is out of date and too often focuses on navel-gazing instead of a true assessment of the strategic environment.
  2. Create a sense of urgency – too many organizations plod rather than sprint. Our decision-making processes are too slow or cumbersome. We don’t have agreement on the big issues (mission, vision, values, and strategy), and so we can’t (or don’t) really empower people to act. And act we must.
  3. Fast twitch for the win – I don’t see enough evidence of agility either. Develop your fast-twitch muscles; you’re going to need them.
  4. Think map app – today’s map app alerts you in real-time to the best route. Your strategic plan should do the same.

Remember, strategy is a choice. It focuses attention, resources and commitment.

And strategy isn’t static.

Got your latest app downloaded?

Beyond the logic model: Improve program outcomes by mapping causes of success and failure

Logic modeling is common in evaluation work, but did you know there are a variety of other tools that can help visualize important program elements and improve planning to ensure success?

One such tool is success mapping.  A success map outlines the steps needed to implement a successful program, or the steps needed to accomplish a particular program improvement.  In a success map, the steps are specific activities and events to accomplish, and arrows between steps indicate the sequence of activities, flowchart style.  Compared to a logic model, a success map puts more emphasis on each step of implementation that must occur for the program to succeed.  This can help the program team ensure that responsibilities, timelines, and other resources are assigned to all of the needed tasks.

A related tool, called fault tree analysis, takes an inverse approach to the success map.  Fault tree analysis starts with a description of an undesirable event (e.g., the program fails to achieve its intended outcome), and then reverse engineers the causal chains that could lead to that failure.  For example, a program may fail to achieve intended outcomes if any one of several components fails (e.g., failure to recruit participants, failure to implement the program as planned, failure of the program design, etc.).  Step-by-step, a fault tree analysis backs out the reasons that particular lines of failure could occur.  This analysis provides a systematic way for the program team to think about which failures are most likely and then to identify steps they can take to reduce the risk of those things occurring.
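
For readers who like to tinker, a fault tree can be sketched as a simple nested structure and walked from the top event down to its basic causes.  The toy Python example below uses hypothetical failure causes and skips the AND/OR gates and probabilities that formal fault tree software would include; it is an illustration of the idea, not a tool we use on projects.

    # Toy sketch of a fault tree as a nested dictionary. The top event and the
    # failure causes are hypothetical, and formal fault tree analysis would add
    # AND/OR gates and probabilities that this illustration omits.

    fault_tree = {
        "event": "Program fails to achieve intended outcome",
        "causes": [
            {"event": "Failure to recruit participants",
             "causes": [
                 {"event": "Outreach materials never distributed", "causes": []},
                 {"event": "Eligibility criteria too narrow", "causes": []},
             ]},
            {"event": "Failure to implement the program as planned",
             "causes": [
                 {"event": "Staff not trained before launch", "causes": []},
             ]},
            {"event": "Failure of the program design", "causes": []},
        ],
    }

    def causal_chains(node, path=()):
        """Walk the tree and yield each chain from the top event to a basic cause."""
        path = path + (node["event"],)
        if not node["causes"]:          # a node with no children is a basic cause
            yield path
        for child in node["causes"]:
            yield from causal_chains(child, path)

    for chain in causal_chains(fault_tree):
        print(" -> ".join(chain))

Listing the chains this way makes it easy for a program team to go line by line and ask which failures are most likely and what would reduce their risk.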

These are just two of many tools that can help program teams ensure success.  Do you have other favorite tools to use?

What are your organization’s demographics?

It’s probably no secret that we at Corona like to think about demographics a lot (we posted this cool quiz to test your demographic knowledge a few months ago). A few weeks ago, it took all of my self-control not to turn to whoever was sitting next to me on a plane and start discussing this Atlantic article about China’s changing demographics. Not only is the article interesting for political nerds, but it is also a great example of how a dramatic shift in demographics over a relatively short amount of time is going to have large effects on China’s status as a world power.

At Corona, we often see this same pattern on a much smaller scale at the organizations we work with. These demographic changes are apparent in both the work force and populations that many of our clients serve. Understanding demographic trends can help organizations plan for the future. If your organization works with children, you may have already had to plan for some significant changes in demographics, given that big changes have already started for younger generations.

There are two key steps that an organization can take to anticipate and grow with changing demographics. First, a demographic analysis can give your organization a good idea of how your customers/supporters compare to the broader population. For example, if your organization provides support for low-income children in Denver, it is important to know whether you are serving that population well. A demographic analysis could show you what the population of low-income children in Denver looks like (e.g., what is their living situation, who are their parents, etc.). Then, your organization could compare the demographics of the children you are currently serving with the broader population to identify any gaps. For example, you may not be serving as many young parents as you would expect given the population in Denver.

Second, an organization can then do targeted research with a specific demographic to better understand how to reach them, serve them, and so on. In the previous example, the organization might want to do focus groups with young, low-income parents in Denver to get an idea of what the barriers are to serving that population. These steps can help an organization both better meet the needs of the current population and plan for population shifts in the future.
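
To make the first step above concrete, the gap analysis boils down to comparing each group’s share among the people you currently serve against its share in the broader target population.  Here is a minimal Python sketch; the group labels and percentages are entirely made up for illustration.

    # Minimal sketch of a demographic gap analysis: compare each group's share
    # among current clients to its share in the broader target population.
    # All labels and numbers below are made up for illustration.

    population = {"young parents": 0.18, "single parents": 0.30, "two-parent households": 0.52}
    clients    = {"young parents": 0.08, "single parents": 0.33, "two-parent households": 0.59}

    for group in population:
        gap = clients[group] - population[group]
        print(f"{group:>22}: serving {clients[group]:.0%} vs. {population[group]:.0%} "
              f"in the population (gap {gap:+.0%})")

A negative gap (like the young parents row in this toy example) flags a group that is underrepresented among your clients and a candidate for the targeted research described in the second step.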

Online research is becoming more feasible in smaller locales (and that includes Denver)

Door-to-door, intercept, mail, telephone, online – surveys have evolved with the technology and needs of the times. Online has increased the speed, and often lowered the cost, of conducting surveys. For some populations, it has even made conducting surveys more feasible.

However, online surveys haven’t always been feasible in a city such as Denver or even statewide in Colorado.

(I should note here that we’re talking about the general public or other populations where we do not have a list. For instance, if we were surveying your customers and you had a database of customers with email addresses, conducting the survey online is almost certainly the way to go.)

Why it’s been tough until now

So why has it been tough until now to conduct public opinion research online in Denver, the Front Range, or even all of Colorado?

Unlike mail, where huge databases of addresses exist, or telephone, where lists or RDD (random digit dialing) samples can be generated, there is no master repository of email addresses and no requirement that residents have one official email address. (Many of us probably have multiple email addresses – I personally have four outside of work.)

The market research industry’s answer has been to create its own databases, but unlike mail and telephone, where lists can be generated from public sources, email addresses generally have to be collected from individuals voluntarily sharing their information. Companies in the industry have specialized in doing just that – recruiting large numbers of potential respondents to their online panels in exchange for incentives provided when they complete a survey. In addition to email addresses, these companies generally collect some basic demographic information as well to make targeting more effective.

Now, let’s say a panel had one million U.S. members in its database. Sounds big, doesn’t it? Well, given that Colorado makes up less than 2% of the nation’s population, that means there might be 20,000 Coloradans in the database. If you want Denver Metro only (about half the state’s population), that takes our maximum potential to 10,000. If only 10% respond to any given survey invite, the most respondents you’re likely to get is 1,000 – and that’s before any additional screening (e.g., you’re only looking for commuters). That’s a simplified summary, but as you can see, it largely becomes a numbers game – you need a very large panel to drill down to a smaller geography or subset of the population.
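
If you want to play with the assumptions, the same numbers game is easy to reproduce.  Here is a quick Python sketch using the rough figures from the paragraph above:

    # Back-of-the-envelope math: how a 1,000,000-member national panel shrinks
    # once you narrow to Denver Metro and account for response rates.
    # The shares and response rate are the rough figures used in the text above.

    panel_size     = 1_000_000   # national panel members
    share_colorado = 0.02        # Colorado is a bit under 2% of the U.S. population
    share_denver   = 0.50        # Denver Metro is roughly half the state
    response_rate  = 0.10        # assume ~10% respond to any given invite

    in_colorado = panel_size * share_colorado        # ~20,000
    in_denver   = in_colorado * share_denver         # ~10,000
    respondents = in_denver * response_rate          # ~1,000 before any screening

    print(f"Colorado panelists:     {in_colorado:,.0f}")
    print(f"Denver Metro panelists: {in_denver:,.0f}")
    print(f"Likely respondents:     {respondents:,.0f}")

Add a screening criterion on top of that and the pool shrinks again, which is why panel size matters so much for local work.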

What has changed

These panels are nothing new – Corona has been using them for a decade – but what has changed recently in our home market (and in most smaller geographies around the country, for that matter) is that the panels have grown large enough to supply enough respondents for our studies. A few years ago, we could only do online studies nationwide, in regions (e.g., the West, the South), or maybe in very large metropolitan areas. As panels and recruitment continued to grow, we were able to do general population studies (i.e., pretty much everyone qualifies because there are no additional screening criteria), but not studies of smaller segments of the population. Now, while we can still run into difficulty with really niche groups, we can conduct studies with parents, visitors to a certain attraction, and many other groups, all within Denver Metro or the Front Range.

Still, a note of caution

So, problem solved, right? Unfortunately, online panels come with some caveats. First, compared to a mail or telephone survey, where the sample is randomly generated, the results are not considered statistically representative because a panel is not a random probability sample. (There are some probability-based online panels, but they’re mostly still in the “only big enough for nationwide studies” phase.) Panels are typically designed to reflect the overall population in terms of demographics, but due to their recruiting method, they can’t be considered “random”.

Other concerns also need to be taken into account, such as how quickly the panel turns over respondents, avoiding respondents who try to game the system just for the incentives, and other quality control measures.

For these reasons, Corona still regularly recommends other survey modes, such as mail and telephone (yes, we still do mail!), when we feel they will provide better answers for our clients. Oftentimes, however, online may be the only feasible option given the challenges with telephone (e.g., cell phones) and mail (e.g., slower, static content). Sometimes we’ll propose both to our clients and then discuss the relative tradeoffs with them.

In summary, online is a growing option for Denver and Colorado, as well as other smaller cities, but be sure to pick the mode that is best for your research – not just the one that is easiest.


Research on Research: Boosting Online Survey Response Rates

David Kennedy and Matt Herndon, both Principals here at Corona, will be presenting a webinar for the Market Research Association (MRA) on August 24th.

The topic is how to boost response rates with online surveys. Specifically, they will be presenting research Corona has done to learn how minor changes to such things as survey invites can make an impact on response rates. For instance, who the survey is “from”, the format, and salutation can all make a difference.

Click here to register. You do need to be a member to view the webinar. (We hope to post it, or at least a summary, here on our blog afterwards.)

Even if you can’t make it, rest assured that, if you’re a client at least, these lessons are already being applied to your research!

Is yesterday’s intermediary ready to become the platform of tomorrow?

Living in America’s “it” city in a year of disruptions across the political spectrum, nationally and internationally, has led me to contemplate evolution in the nonprofit sector. I’m struck by what I’ve observed recently as an emerging trend – the slow decline of the intermediary organization. This may be a heretical statement, I’ll admit, but it is a shift worth watching.

It may be easiest to observe in healthcare, as classic intermediary models born in the 1970s and 1980s discover that they cannot withstand disruptive changes in their marketplaces. Healthcare may be the canary in the coal mine of the nonprofit sector, given the pace and scale of change. Executives and board members are learning the hard way that forces of this magnitude cannot be overcome by single organizations. They are finding themselves caught off guard as the strategy they set in 2015 is already obsolete.

Let’s explore an example. The U.S. model of community-based engagement in cancer clinical trials is in the midst of two game-changing trends. First, it is becoming increasingly difficult to recruit and retain people in clinical trials. Second, healthcare providers are shifting away from collaborative models run by the federal government to in-house options that can adapt more quickly.

Health fairs are another example. Long a staple of communities large and small, in states from California to Ohio, these programs were born at a time when people would stand in line to be seen by volunteer health providers after not eating for 12 hours. Today, more people are monitoring their health on a wrist device, as technological advances leapfrog old screening methods. Add to that the demographic differences between the oldest Boomers and the oldest Millennials and you realize that OSFA (one-size-fits-all) doesn’t fit most anymore.

The emergence of platform-based models such as Uber has changed the competitive landscape for many industries. As entrepreneurs adapt the model to other industries, it’s only a matter of time before platforms are the new normal in the nonprofit sector too. Of course, we’ve seen the rise of platforms in fundraising, with crowd-sourced funding and giving days as examples. I’m intrigued by the possibilities of platform-based service delivery.

As described in the April 2016 Harvard Business Review, “a platform provides the infrastructure and rules for a marketplace that brings together producers and consumers.” (Pipelines, Platforms, and the New Rules of Strategy by Marshall W. Van Alstyne, Geoffrey G. Parker, and Sangeet Paul Choudary.)

So, I’m wondering when the definitive nonprofit intermediaries of the 1970s and 1980s – the nonprofit technical assistance provider and trade association – will evolve into the platform model for 2020.

Agility and relevancy are the name of the game. I’ll be watching to see who figures that out the soonest.

The Race to the Rockies – Colorado Migration Part 1

I admit, I am one of the horde of people who have recently migrated to Colorado. Indeed, tens of thousands of us are moving here each year, at one of the highest rates in the country. But who really is “us”? Who are the people moving into Colorado in droves? This post is part 1 of a 2-part series exploring who is moving into Colorado. In this first installment, we’ll look at generational migration patterns over time and at migration by race and ethnicity.

Generational Movement in Colorado

Using the Census Bureau’s Population Estimates for 2010 to 2015, I broke this question down by two basic demographics: age and sex. In the following graph, generations were grouped roughly using Pew Research Center’s definition of each generation.1 Each generation has its net migration (those moving to Colorado minus those who left) graphed from 2011 through 2015.

Unsurprisingly, we see that net migration has been positive for each generation since 2013. Most recently, each generation in Colorado had a net increase of 16,000 or more. Millennials have been moving in at the highest rate, with over 30,000 having moved into Colorado between 2014 and 2015. Baby Boomers also prefer moving into Colorado rather than out of it, with net migration of over 25,000.

As a non-native, what I find most interesting is how this has changed over time. From 2010 to 2012, we saw more Generation Xers moving out of Colorado than in, with a net loss of nearly 10,000 in 2011. It wasn’t until 2013 that we saw more moving in, with a large uptick (over 10,000 more) in 2015. Also unexpected was the increase of nearly 40,000 Baby Boomers that occurred in 2011.

Millennials, on the other hand, have been consistently moving into Colorado, with 2015 seeing a strong increase in the number moving into the state. They are also the only generation that shows a substantial difference between genders, with about 4,000 more males than females moving into Colorado in 2015. Needless to say, as a male Millennial who has moved into Colorado, I am thankful to already be married.
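
For anyone curious how the generational grouping works in practice, here is a minimal Python sketch.  The column names and the handful of example rows are hypothetical stand-ins for the Population Estimates data (which needs more cleanup than shown here), and the birth-year cutoffs follow Pew’s definitions, with the Millennial endpoint left open as noted in the footnote below.

    import pandas as pd

    # Minimal sketch of grouping net migration by generation and sex.
    # The columns (age, sex, net_migration) and the rows are hypothetical
    # stand-ins for the real Population Estimates tables.

    def generation(age, year):
        """Rough Pew-style generation for a given age in a given estimate year."""
        birth_year = year - age
        if birth_year >= 1981:
            return "Millennial"
        if birth_year >= 1965:
            return "Generation X"
        if birth_year >= 1946:
            return "Baby Boomer"
        return "Silent/Greatest"

    # A few made-up rows standing in for a single year of estimates.
    df = pd.DataFrame({
        "age":           [25, 30, 45, 60, 70],
        "sex":           ["M", "F", "M", "F", "M"],
        "net_migration": [1200, 1100, -300, 800, 400],
    })

    df["generation"] = df["age"].apply(lambda a: generation(a, year=2015))
    print(df.groupby(["generation", "sex"])["net_migration"].sum())

Applying the same grouping to each estimate year is what produces generation-by-year trend lines like those discussed above.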

Race and Ethnicity Movement in Colorado

Using the same data source (Population Estimates), I looked at those moving into Colorado from 2010 to 2015 by race and Hispanic/non-Hispanic ethnicity.

The total percentage change in population from 2010 to 2015 was 8.5%, making Colorado the third-ranking state by population growth rate since 2010. In percentage terms, many Native Hawaiian and Other Pacific Islanders appear to be moving into the state, though in absolute terms that was an increase of only about 2,000 people between 2010 and 2015. Many Asians have also been moving into the state, with a 22 percent increase equating to about 30,000 new residents. Those who identify as multi-racial have moved into Colorado in similar numbers.

Colorado also has a large Hispanic population. In fact, we are one of nine states with a Hispanic population of over 1 million. Between 2010 and 2015, we saw an increase in the Hispanic population of just over 12 percent, an additional 125,000 people. The Hispanic population currently represents approximately 21% of Colorado’s total population.

Now that we have a better idea of the age, race, and ethnicity of those moving into Colorado, we can start to fill in the characteristics of our newest residents. In my second and final blog on the topic, I will explore these characteristics further to help complete the picture of these new Coloradans.

1 Due to the data available in the Population Estimates tables, some generations in the graph include ages +/- 1 or 2 years from Pew’s definition, and the Silent Generation was combined with the Greatest Generation. The graph also doesn’t include those 19 and younger, as the age cutoff between Millennials and the following generation has not yet been determined.

Do you have kids? Wait – let me restate that.

Karla Raines and I had dinner last week with another couple who share our background and interest in social research.  We were talking about the challenge of understanding other people’s decisions if you don’t understand their background, and how we can have biases that we don’t even realize.

It brought me back to the topic of how we design and ask questions on surveys, and my favorite example of unintentional background bias on the part of the designer.

A common question, both in research and in social conversations, is the ubiquitous, “Do you have kids?”  It’s an easy question to answer, right?  If you ask Ward and June Cleaver, they’ll immediately answer, “We have two, Wally and Beaver”.  (June might go with the more formal ‘Theodore’, but you get the point.)

When we ask the question in a research context, we’re generally asking it for a specific reason.  Children often have a major impact on how people behave, and we’re usually wondering if there’s a correlation on a particular issue.

But ‘do you have kids’ is a question that may capture much more than the classic Wally and Beaver household.  If we ask that question, the Cleaver family will answer ‘yes’, but so will a 75-year-old who has two kids, even if those kids are 50 years old and grandparents themselves.  So ‘do you have kids’ isn’t the question we want to ask in most contexts.

What if we expanded the question to ‘do you have children under 18’?  It gets a bit tricky here if we put ourselves in the minds of respondents, and this is where our unintentional background bias may come into play.  Ward and June will still answer yes, but what about a divorced parent who doesn’t have custody?  He or she may accurately answer yes, but there’s not a child living in their home.  Are we capturing the information that we think we’re capturing?

And what about a person who’s living with a boyfriend and the boyfriend’s two children?  Or the person who has taken a foster child into the home?  Or the grandparent who is raising a grandchild while the parents are serving overseas?  Or the couple whose adult child is temporarily back home with her own kids in tow?

If we’re really trying to figure out how children impact decisions, we need to observe and recognize the incredible diversity of family situations in the modern world, and how that fits into our research goal.  Are we concerned about whether the survey respondent has given birth to a child?  If they’re a formal guardian of a child?  If they’re living in a household that contains children, regardless of the relationship?

The proper question wording will depend on the research goals, of course.  We often are assessing the impact of children within a household when we ask these questions, so we find ourselves simply asking, “How many children under the age of 18 are living in your home?”, perhaps with a follow-up about the relationship where necessary.  But it’s easy to be blinded by our own life experiences when designing research, and the results can lead to errors in our conclusions.

So the next time you’re mingling at a party, we suggest not asking “Do you have kids”, and offer that you should instead ask, “How many children under the age of 18 are living in your home?”  It’s a great conversation starter and will get you much better data about the person you’re chatting with.

How representative is that qualitative data anyway?

When we do qualitative research, our clients often wonder how representative the qualitative data is of the target population they are working with.  It’s a valid question.  To answer, I have to go back to the purpose of conducting qualitative research in the first place.

The purpose of qualitative research is to understand people’s perceptions, opinions, and beliefs, as well as what is causing them to think that way.  Unlike quantitative research, the purpose is not to generalize the results to the population of interest.  If eight out of ten participants in a focus group share the same opinion, can we say that 80% of people hold that opinion?  No, definitely not – but you can be pretty confident that it will be a prevalent opinion in the population.

While qualitative data is not statistically representative of a population, we still follow guidelines to make sure we are capturing reliable data.  For example, we suggest conducting at least three focus groups per unique segment.  Qualitative research is fluid by nature, so gathering data across three groups allows us to see consistent themes and patterns, and to assess whether there are any outliers or themes exclusive to one group that may not be representative of the segment as a whole.

Still not sure which methodology will best be able to answer your research questions?  We can help you choose!