Category: Trends and News

Human Experience (HX) Research

About a year ago, I stumbled upon a TEDx Talk by Tricia Wang titled “The Human Insights Missing from Big Data”. She eloquently unfurls a story about her experience working at Nokia around the time smartphones were becoming a formidable emergent market. Over the course of several months, Tricia Wang conducted ethnographic research with around 100 youth in China and her conclusion was simple—everyone wanted a smartphone and they would do just about anything to acquire one. Despite her exhaustive research, when she relayed her findings to Nokia they were unimpressed and expressed that big data trends did not indicate there would be a large market for smartphones. Hindsight is 20/20.

One line in particular stuck out to me as I watched her talk— “[r]elying on big data alone increases the chances we’ll miss something, while giving us the illusion we know everything”. Big data offers companies and organizations plentiful data points that haphazardly paint a picture of human behavior and consumption patterns. What big data does not account for is the inherent ever-shifting, fickle nature of humans themselves. While big data continues to dominate quantitative research, qualitative research methods are increasingly shifting to account for the human experience. Often referred to as HX, human experience research aims to capture the singularity of humans and forces researchers to stop looking at customers exclusively as consumers. In human experience research, questions are asked to get at a respondent’s identity and emotions; for instance, asking how respondents relate to an advertising campaign instead of just how they react to the campaign.

The cultivation of HX research in the industry raises the question: what are the larger implications for qualitative research? Perhaps the most obvious answer is that moderators and qualitative researchers need to rethink how research goals are framed and how questions are posed to respondents to capture their unique experiences. There are also implications for the recruiting process. The need for quality respondents is paramount in human experience research and will necessitate a shift in recruiting and screening practices. Additionally, qualitative researchers need to ensure that the best methodology is chosen in order to make respondents feel comfortable and vulnerable enough to share valuable insights with researchers.

Human experience research may just now be gaining widespread traction, but the eventual effects will ultimately reshape the industry and provide another tool for qualitative researchers to answer increasingly complex research questions for clients. At Corona, adoption of emerging methodologies and frameworks such as HX means we can increasingly fill knowledge gaps and help our clients better understand the humans behind the research.

Ugh, Millennials

Is anyone else tired of talking about millennials? Millennials have seemingly been on everyone’s mind, with many worrying over their spending habits, charitable giving, large debts, voting behaviors, and more. Why do we care so much about this generation? Don’t they already have a reputation for entitlement and being all about “me, me, me”? We probably shouldn’t feed into that, right?

Pictured: Gregory (myself) the Millennial
Fun fact: depending on where you draw the line, 70% of Corona staff are classified as millennials.

As annoying as it might be, there are some very good reasons to focus on the millennial generation. The baby boomer generation is now in decline, and there are currently 11 million more millennials than boomers. It is estimated that millennials will comprise over a third of adult Americans by 2020 and up to 75% of the American workforce by 2025, and they currently account for over one trillion dollars in consumer spending in the U.S. Even so, millennials have less money to spend and are encumbered with greater debt. Perhaps unsurprisingly, the conclusion is that millennials are important because they are the new money – they are very quickly becoming the largest group of consumers and are therefore greatly impacting all businesses and organizations.

The millennials, as a generation, share some commonly seen characteristics:

… and the facts don’t end there. If you haven’t already, I highly encourage you to pore over some of the linked materials to familiarize yourself with this impactful generation. If they haven’t yet, millennials will be disrupting your organization sometime in the near future, and it’s inescapable that we all need to adapt.

Welcome to the consumer era: When engagement drives change

It’s inescapable. Every day I see more and more examples of it. Consumer behavior is at the core of sweeping industry change. Too often we get caught up in thinking about technology as the disruptor and forget that people are at the heart of the transformations all around us. Here are four examples from this past week:

  • Sears. J.C. Penney. Bebe. Macy’s. Target. Kohl’s. Neiman Marcus. And that’s just for starters. The epic decline of department stores and apparel-focused retail is linked to Amazon’s domination of the online marketplace. At the heart of that change? Consumers who expect something more – and something different. Increasingly, consumers are saying no to the traditional retailer and shopping mall. What do they demand? Convenience, cost and a user-defined experience.
  • Who would have thought that consumer choice would lead colleges to guarantee that their degree will get you a job – and a job with a decent salary? As college students – and their parents – increasingly question the return on investment given staggering student loan debt, higher ed is having to respond creatively to compete for students. Both educator and student take on the risk – and reap the reward.
  • Decades of tradition are on the chopping block as movie studios plan to release films on online platforms mere weeks after they open in theaters. As reported in the Wall Street Journal yesterday, movie companies have experienced declining home entertainment revenues for two straight years, and global box-office growth has also been slowing. In a quest to increase revenues, they are looking to meet consumers where they are – in the comfort of their home on a tablet, PC, or TV.
  • And lastly, Bruce Springsteen. We are experiencing the beginning of the end of rock and roll as we’ve known it. Boomer rock stars became global corporations in the days when record companies invested in them like R&D. That era is over. The concert business is bifurcating into festivals and small venues as consumers expect more intimate and novel experiences. No longer satisfied with paying high prices for poor sound quality and a minuscule view of the band, consumers are pursuing other entertainment options. Today’s young stars will build careers in an entirely new era. Welcome to Me, Inc., rock and roll style.

Each of these stories has a common denominator – consumers demand the experience on their terms. They define the where, when, how and how much. These changes are sweeping and have only begun. The question now is, who is next?

What will this mean for other industries and sectors?

State of Our Cities and Towns – 2017

For many years, Corona has partnered with the Colorado Municipal League to conduct the research that is the foundation of their annual State of Our Cities and Towns report. CML produced the following short video, specifically for municipal officials, about the importance of investing in quality of life:

Learn more, view the full report, and watch additional videos on the State of Our Cities website.

Where to next? Election polling and predictions

The accuracy of election polling is still being heavily discussed, and one point that is worth some pondering was made by Allan Lichtman in an NPR interview the day after the election.  What he said was this:

“Polls are not predictions.”

To some extent this is a semantic argument about how you define prediction, but his point, as I see it, is that polls are not a model defining what factors will drive people to choose one party, or candidate, over another.  Essentially, polls are not theory-driven – they are not a model of “why,” and they do not specify, a priori, what factors will matter.  So, polling estimates rise and fall with every news story and sound bite, but a prediction model would have to say something up front like “we think this type of news will affect behavior in the voting booth in this way.” Lichtman’s model, for example, identifies 13 variables that he predicts will affect whether the party in power continues to hold the White House, including whether there were significant policy changes in the current term, whether there was a big foreign policy triumph, whether the President’s party lost seats during the preceding mid-term election, and so on.
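The shape of a theory-driven model like Lichtman’s can be sketched in a few lines. This is only an illustration of the idea, not his actual model: the key names below are invented stand-ins, and the six-false-keys threshold is the commonly cited version of his rule, not something stated in this post.

```python
# Sketch of a "keys"-style prediction model: a fixed set of true/false
# conditions is judged up front, and the forecast follows from a simple
# threshold rather than from shifting poll numbers.
# Key names here are illustrative, not Lichtman's exact wording.

def incumbent_party_holds(keys: dict) -> bool:
    """Predict the incumbent party keeps the White House unless
    six or more keys are unfavorable (the commonly cited threshold)."""
    unfavorable = sum(1 for favorable in keys.values() if not favorable)
    return unfavorable < 6

example_keys = {
    "no_midterm_losses": False,
    "major_policy_change": True,
    "no_serious_primary_contest": True,
    "incumbent_running": False,
    "no_third_party_challenge": True,
    "strong_short_term_economy": True,
    "strong_long_term_economy": False,
    "no_social_unrest": True,
    "no_major_scandal": True,
    "no_foreign_failure": True,
    "major_foreign_success": False,
    "charismatic_incumbent": False,
    "uncharismatic_challenger": False,
}

print(incumbent_party_holds(example_keys))  # six keys false -> False
```

Note that nothing in this model moves with the news cycle; the prediction is fixed once the keys are judged, which is exactly the contrast with polls that the quote draws.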

Polls, in contrast, are something like a meta prediction model. Kate made this point as we were discussing the failure of election polls: polls are essentially a sample of people trying to tell pollsters what they predict they will do on election day, and people are surprisingly bad at predicting their own behavior. In other words, each unit (i.e., survey respondent) has its own, likely flawed, prediction model, and survey respondents are feeding the results of those models up to an aggregator (i.e., the poll). In this sense, a poll, as a prediction, is sort of like relying on the “wisdom of the crowd” – but if you’ve ever seen what happens when someone uses the “ask the audience” lifeline on Who Wants to Be a Millionaire, you know that is not a foolproof strategy.
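The “flawed individual prediction models feeding an aggregator” idea can be simulated in a few lines. All numbers below are invented for illustration; the point is only that the aggregate inherits the individuals’ self-prediction error.

```python
import random

# Toy simulation: each respondent reports what they *predict* they will
# do, and some fraction mispredict their own behavior. The poll (the
# aggregate of those reports) inherits that individual error.

def simulate_poll(true_support: float, self_error: float,
                  n: int, seed: int = 42) -> float:
    rng = random.Random(seed)
    reported_yes = 0
    for _ in range(n):
        will_vote_yes = rng.random() < true_support
        # With probability self_error the respondent mispredicts
        # their own behavior and reports the opposite.
        if rng.random() < self_error:
            will_vote_yes = not will_vote_yes
        reported_yes += will_vote_yes
    return reported_yes / n

# With 60% true support and a 15% self-prediction error rate, the poll
# drifts toward 50%: E[reported] = 0.60*0.85 + 0.40*0.15 = 0.57.
print(round(simulate_poll(0.60, 0.15, 100_000), 2))
```

The drift is always toward 50/50, which is one mechanism (among many) by which a close race can look even closer in the polling than it really is.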

Whether a model or a poll is better in any given situation will depend on various things.  A model requires deep expertise in the topic area, and depending on knowledge and available data sources, it will only capture some portion of the variance in the predicted variable.  A model that fails to include an important predictor will not do a great job of predicting.  Polls are a complex effort to ask the right people the right questions to be able to make an accurate estimate of knowledge, beliefs, attitudes, or behaviors.  Polls have a variety of sources of error, including sampling error, nonresponse bias, measurement error, and so on, and each of those sources contribute to the accuracy of estimates coming out of the poll.

The election polling outcomes are a reminder of the importance of hearing from a representative sample of the population, and of designing questions with an understanding of psychology.  For example, it is important to understand what people can or can’t tell you in response to a direct question (e.g., when are people unlikely to have conscious access to their attitudes and motivations; when are knowledge or memory likely to be insufficient), and what people will or won’t tell you in response to a direct question (e.g., when is social desirability likely to affect whether people will tell you the truth).

This election year may have been unusual in the number of psychological factors at play in reporting voting intentions.  There was a lot of reluctant support on both sides, which suggests conflicts between voters’ values and their candidate’s values, and for some, likely conflicts between conscious and unconscious leanings.  Going forward, one interesting route would be for pollsters to capture various psychological factors that might affect accuracy of reporting and incorporate those into their models of election outcomes.

Hopefully in the future we’ll also see more reporting on prediction models in addition to polls.  Already there’s been a rash of data mining in an attempt to explain this year’s election results.  Some of those results might provide interesting ideas for prediction models of the future.  (I feel obliged to note: data mining is not prediction.  Bloomberg View explains.)

Elections are great for all of us in the research field because they provide feedback on accuracy that can help us improve our theories and methods in all types of surveys and models.  (We don’t do much traditional election polling at Corona, but a poll is essentially just a mini-survey – and a lot of the election “polls” are, in fact, surveys.  Confused?  We’ll try to come back to this in a future blog.) We optimistically look forward to seeing how the industry adapts.

Ensuring your graphs are honest

For our firm, the very idea of fake news goes against our mission to:

Provide accurate and unbiased information and counsel to decision makers.

The realm of fake news spans the spectrum from misleading to outright lying. It is the former that got us thinking about how graphs are sometimes twisted to mislead while not necessarily being wrong.

Below are four recommendations to prevent misinterpretation when making your own graphs (or things to look for when interpreting those seen in the news).

 1. Use the same scales across graphs to be compared

Showing similar data for different groups or from different times? Make the graphs the same scale to aid easy, accurate comparisons.

Take the below examples. Maybe you have two graphs, even on separate pages, used to illustrate the differences between Groups 1 & 2. If someone were to look between them to see differences over time, the visual wouldn’t depict that 2016 saw a doubling of the proportion who “agreed.”  The bar is slightly longer, but not twice as long.


Sure, including axis and data labels helps, but the benefit of a graph is that you can quickly see the result with little extra interpretation. Poorly designed graphs, no matter the labeling, can still mislead.

2. Start the graph origin at zero.

Similar to the point above, a graph that does not start at zero can blow differences out of proportion.

In the below examples, both graphs show exactly the same data but start from different points, making the differences in the first graph look proportionately larger than they are.
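The distortion is easy to quantify: a bar’s drawn length is the value minus the axis origin, so the visual ratio between two bars depends entirely on where the axis starts. The values below are made up for illustration.

```python
# Why a non-zero origin misleads: the drawn bar length is
# (value - origin), so the *visual* ratio between two bars changes
# with the axis origin even though the data does not.

def visual_ratio(a: float, b: float, origin: float = 0.0) -> float:
    """Ratio of drawn bar lengths for values b vs. a, given an axis origin."""
    return (b - origin) / (a - origin)

# Two results that actually differ by about 10%...
print(round(visual_ratio(62, 68, origin=0), 2))   # ~1.1 with a zero origin
print(round(visual_ratio(62, 68, origin=60), 2))  # 4.0 - looks 4x as large
```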


3. Convey the correct magnitude.

Sometimes, a seemingly small amount may have significant meaning (think tenths of a degree in global temperatures), while sometimes a large amount may not (think a million dollars within the Federal budget).

Choosing the proper graph type, design, and what to actually graph all make a difference here.

For example, when graphing global temperatures, graphing the differences may best accentuate the magnitude rather than graphing the actual temperatures, where the relatively small-looking differences fail to communicate the finding.

4. Make it clear who is represented by the data.

Does this data represent the entire population? Only voters? Only likely voters? Only those who responded “yes” to a previous question? Only those home on a Thursday night with a landline? (If it’s the last one, save your time and just ignore it completely.)

Usually, the safest bet is to show results as a proportion of the whole population, even if the question was only asked of a subset of people due to a skip pattern. This is easiest for people to mentally process and prevents accidentally interpreting the subset’s proportion as the whole.

For instance, if 50% of people who were aware of Brand A had seen an ad for the brand, but only 10% of the population were aware of Brand A in the first place (and, therefore, were asked the follow-up about ads), then in reality, probably only 5% of the population has seen the ad. To the casual reader, that subtle difference in who the results represent could be significant.
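The Brand A example is just arithmetic: re-basing a follow-up result to the whole population means multiplying through the skip pattern. A minimal sketch, using the post’s own figures:

```python
# Re-base a result that was only asked of a subgroup so that it is
# expressed as a share of the whole population.

def rebase(subgroup_share: float, result_in_subgroup: float) -> float:
    """Share of everyone = share who were asked * result among those asked."""
    return subgroup_share * result_in_subgroup

aware_of_brand = 0.10      # 10% of the population is aware of Brand A
saw_ad_given_aware = 0.50  # 50% of those aware saw the ad

print(rebase(aware_of_brand, saw_ad_given_aware))  # 0.05 -> 5% of everyone
```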

This, of course, isn’t our first time writing about graph standards. Check out some of our other blogs on the subject here:

Graphs: An effective tool, but use them carefully

Visualizing data: 5 Best practices

Strategy in an era of unpredictability

The strategic planning process that is used by many today was created decades ago during a more predictable time. By nature, the process is based on an underlying assumption, namely that changes and trends in the external environment can be predicted with some certainty.

So, what about changes that can’t be predicted? Or that are unprecedented in our lifetime? We only have to look to our smartphones for a glimpse into the near future. As a recent piece by the World Economic Forum noted, we are in the midst of the Fourth Industrial Revolution, what many refer to as the Digital Revolution.

This one is a game changer my friends.

 “There are three reasons why today’s transformations represent not merely a prolongation of the Third Industrial Revolution but rather the arrival of a Fourth and distinct one: velocity, scope, and systems impact. The speed of current breakthroughs has no historical precedent. When compared with previous industrial revolutions, the Fourth is evolving at an exponential rather than a linear pace. Moreover, it is disrupting almost every industry in every country. And the breadth and depth of these changes herald the transformation of entire systems of production, management, and governance.”

How to set strategy now?

  1. Drop the SWOT – this tool is out of date and too often focuses on navel-gazing instead of a true assessment of the strategic environment.
  2. Create a sense of urgency – too many organizations plod rather than sprint. Our decision making processes are too slow or cumbersome. We don’t have agreement on the big issues (mission, vision, values and strategy) and so we can’t (or don’t) really empower people to act. And act we must.
  3. Fast twitch for the win – I don’t see enough evidence of agility either. Develop your fast twitching muscles as you’re going to need them.
  4. Think map app – today’s map app alerts you in real-time to the best route. Your strategic plan should do the same.

Remember, strategy is a choice. It focuses attention, resources and commitment.

And strategy isn’t static.

Got your latest app downloaded?

What are your organization’s demographics?

It’s probably no secret that we at Corona like to think about demographics a lot (we posted this cool quiz to test your demographic knowledge a few months ago). A few weeks ago, it took all of my self-control to not turn to whoever was sitting next to me on a plane and start discussing this Atlantic article about China’s changing demographics. Not only is the article interesting for political nerds, it is also a great example of how a dramatic shift in demographics over a relatively short amount of time is going to have large effects on China’s status as a world power.

At Corona, we often see this same pattern on a much smaller scale at the organizations we work with. These demographic changes are apparent in both the work force and populations that many of our clients serve. Understanding demographic trends can help organizations plan for the future. If your organization works with children, you may have already had to plan for some significant changes in demographics, given that big changes have already started for younger generations.

There are two key steps that an organization can take to anticipate and grow with changing demographics. First, a demographic analysis can give your organization a good idea of how your customers/supporters compare to the broader population. For example, if your organization provides support for low income children in Denver, it is important to know whether you are serving that population well. A demographic analysis could show you what the population of low income children in Denver look like (e.g., what is their living situation, who are their parents, etc.). Then, your organization could compare the demographics of the children you are currently serving with the broader population to identify any gaps. For example, you may not be serving as many young parents as you would expect given the population in Denver.

Second, an organization can then do target research with a specific demographic to better understand how to reach them, serve them, etc. In the previous example, the organization might want to do focus groups with young, low income parents in Denver to get an idea of what the barriers are to serving that population. These steps can help an organization both better meet the needs of the current population and plan for population shifts in the future.
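The gap analysis in step one can be sketched mechanically: compare the demographic mix of the people an organization serves against the broader population and flag the largest shortfall. All figures below are invented for illustration.

```python
# Minimal sketch of a demographic gap analysis: population share minus
# served share, per group, with the biggest gap flagged. Invented data.

population = {"young parents": 0.22, "single parents": 0.35,
              "grandparent caregivers": 0.08}
served     = {"young parents": 0.10, "single parents": 0.38,
              "grandparent caregivers": 0.07}

def service_gaps(population: dict, served: dict) -> dict:
    """Percentage-point gap (population share minus served share) per group."""
    return {g: round(population[g] - served[g], 2) for g in population}

gaps = service_gaps(population, served)
print(max(gaps, key=gaps.get))  # 'young parents' is the most under-served
```

A real analysis would, of course, draw the population shares from Census or similar data rather than a hand-typed dictionary, but the comparison itself is this simple.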

Online research is becoming more feasible in smaller locales (and that includes Denver)

Door-to-door, intercept, mail, telephone, online – surveys have evolved with the technology and needs of the times. Online has increased speed and often lowered cost of conducting surveys. For some populations, it has even made conducting surveys more feasible.

However, online surveys haven’t always been feasible in a city such as Denver or even statewide in Colorado.

(I should note here that we’re talking about the general public or other populations where we do not have a list. For instance, if we were surveying your customers and you had a database of customers with email addresses, conducting the survey online is almost certainly the way to go.)

Why it’s been tough until now

So why has it been tough until now to conduct public opinion research online in Denver, the Front Range, or even all of Colorado?

Unlike with mail, where huge databases of addresses exist, and telephone, where again lists or RDD sample can be generated, there is no master repository of email addresses or requirement that residents have one official email address. (Many of us probably have multiple emails – I personally have four outside of work.)

The market research industry’s answer to this has been to create databases, but unlike with mail and telephone where lists can be generated via public sources, email addresses generally have to be collected via individuals voluntarily sharing their information. In the industry, companies have specialized in doing just that – recruiting a lot of potential respondents to their online panel in exchange for incentives provided when they complete a survey. In addition to email, these companies generally collect some basic demographic information as well to make targeting more effective.

Now, let’s say a panel had one million U.S. members in their database. Sounds big, doesn’t it? Well, given that Colorado makes up less than 2% of the nation’s population, that means there might be 20,000 Coloradans in their database. If you wanted Denver Metro only (about half the state’s population), that takes the maximum potential down to 10,000. If only 10% respond to any given survey invite, the most respondents you could receive is 1,000, and that’s before any additional screening (e.g., you’re only looking for commuters). That is a simplified summary, but as you can see, it largely becomes a numbers game – you need a very large panel to drill down to a smaller geography or subset of the population.
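The numbers game above, spelled out step by step. Every figure here is the post’s own illustrative estimate, not any real panel’s statistics:

```python
# Rough ceiling on completes from a hypothetical national panel,
# narrowing from the U.S. to Colorado to Denver metro.

panel_size = 1_000_000                      # hypothetical U.S. panel
co_members = int(panel_size * 0.02)         # Colorado: just under 2% of the U.S.
denver_members = co_members // 2            # Denver metro: about half the state
max_completes = int(denver_members * 0.10)  # 10% respond to a given invite

print(co_members, denver_members, max_completes)  # 20000 10000 1000
```

Any additional screening criterion (commuters only, parents only, etc.) multiplies in another fraction, which is why niche groups in small geographies run out of panel so quickly.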

What has changed

These panels are nothing new. Corona has been using them for a decade, but what has changed recently in our home market (and in most smaller geographies around the country, for that matter) is that the panels have grown large enough to supply enough respondents for our studies. A few years ago, we could only do online studies nationwide, in regions (e.g., the West, South, etc.), or maybe in very large metropolitan areas. As panels and recruitment continued to grow, we were able to do general population studies (i.e., pretty much everyone qualifies, as we don’t have additional criteria for screening), but not studies of smaller segments of the population. Now, while we can still run into difficulty with really niche groups, we can conduct studies with parents, visitors to a certain attraction, and many other groups, all within the Denver metro area or the Front Range.

Still, a note of caution

So, problem solved, right? Unfortunately, online panels come with some caveats. First, compared to a mail or telephone survey, where the sample is randomly generated, the results are not considered representative because the sample is not a random probability sample. (There are some probability-based online panels, but they are still mostly in the “only big enough for nationwide studies” phase.) Panels are typically designed to reflect the overall population in terms of demographics, but due to their recruiting method, can’t be considered “random.”
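One common mechanical step behind “designed to reflect the overall population in terms of demographics” is post-stratification weighting. The post doesn’t describe any specific vendor’s method, so this is a generic sketch with invented shares:

```python
# Generic post-stratification sketch: each group's weight is its
# population share divided by its share of the completed sample.
# All shares below are invented for illustration.

def poststrat_weights(population_share: dict, sample_share: dict) -> dict:
    """Weight per group = population share / sample share."""
    return {g: population_share[g] / sample_share[g] for g in population_share}

population_share = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}
sample_share     = {"18-34": 0.20, "35-54": 0.35, "55+": 0.45}

w = poststrat_weights(population_share, sample_share)
print(round(w["18-34"], 2))  # 1.5 - under-represented young respondents weighted up
```

Weighting fixes the demographic mix but not the underlying self-selection, which is why a weighted panel still isn’t a probability sample.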

Other concerns need to be taken into account such as how quickly the panel turns over respondents, avoiding respondents who try to game the system just for incentives, and other quality control measures.

For these reasons, Corona still regularly recommends other survey modes, such as mail and telephone (yes, we still do mail!), when we feel they will provide better answers for our clients. Oftentimes, however, online may be the only feasible option given the challenges with telephone (e.g., cell phones) and mail (e.g., slower, static content). Sometimes we’ll propose both to our clients and then discuss the relative tradeoffs with them.

In summary, online is a growing option for Denver and Colorado, as well as other smaller cities, but be sure to pick the mode that is best for your research – not just the one that is easiest.


Is yesterday’s intermediary ready to become the platform of tomorrow?

Living in America’s “it” city in a year of disruptions across the political spectrum, nationally and internationally, has led me to contemplate evolution in the nonprofit sector. I’m struck by what I’ve observed recently as an emerging trend – the slow decline of the intermediary organization. This may be a heretical statement, I’ll admit, but it is a shift worth watching.

It may be easiest to observe in healthcare as classic intermediary models born in the 1970s and 1980s discover that they cannot withstand disruptive changes in their marketplaces. Healthcare may be the canary in the coalmine of the nonprofit sector given the pace and scale of change. Executives and board members are learning the hard way that forces of this magnitude cannot be overcome by single organizations. They are finding themselves caught off-guard as the strategy they set in 2015 is already obsolete.

Let’s explore an example. The U.S. model of community-based engagement in cancer clinical trials is in the midst of two game-changing trends. First, it is becoming increasingly difficult to recruit and retain people in clinical trials. Second, healthcare providers are shifting away from collaborative models run by the federal government to in-house options that can adapt more quickly.

Health fairs are another example. Long the staple of communities large and small in states from California to Ohio, these programs were born at a time when people would stand in line to be seen by volunteer health providers after not eating for 12 hours. Today, more people are monitoring their health on a wrist device as technological advances leapfrog old screening methods. Add to that the demographic differences between the oldest Boomers and the oldest Millennials, and you realize that OSFA (one-size-fits-all) doesn’t fit most anymore.

The emergence of platform-based models such as Uber has changed the competitive landscape for many industries. As entrepreneurs adapt the model to other industries, it’s only a matter of time before platforms are the new normal in the nonprofit sector, too. Of course, we’ve seen the rise of platforms in fundraising, with crowd-sourced funding and giving days as examples. I’m intrigued by the possibilities of platform-based service delivery.

As described in the April 2016 Harvard Business Review, “a platform provides the infrastructure and rules for a marketplace that brings together producers and consumers.” (Pipelines, Platforms, and the New Rules of Strategy by Marshall W. Van Alstyne, Geoffrey G. Parker, and Sangeet Paul Choudary.)

So, I’m wondering when the definitive nonprofit intermediaries of the 1970s and 1980s – the nonprofit technical assistance provider and trade association – will evolve into the platform model for 2020.

Agility and relevancy are the name of the game. I’ll be watching to see who figures that out the soonest.