Sign up for the Corona Observer quarterly e-newsletter

Stay current with Corona Insights by signing up for our quarterly e-newsletter, The Corona Observer.  In it, we share with our readers insights into current topics in market research, evaluation, and strategy that are relevant regardless of your position or field.

View our recent newsletter from July 2017 here.





We hope you find it to be a valuable resource. Of course, if you wish to unsubscribe, you may do so at any time via the link at the bottom of the newsletter.

 


Human Experience (HX) Research

About a year ago, I stumbled upon a TEDx Talk by Tricia Wang titled “The Human Insights Missing from Big Data”. She eloquently unfurls a story about her experience working at Nokia around the time smartphones were becoming a formidable emerging market. Over the course of several months, Wang conducted ethnographic research with around 100 young people in China, and her conclusion was simple: everyone wanted a smartphone, and they would do just about anything to acquire one. Despite her exhaustive research, when she relayed her findings to Nokia, the company was unimpressed, saying that its big data trends did not indicate a large market for smartphones. Hindsight is 20/20.

One line in particular stuck out to me as I watched her talk: “[r]elying on big data alone increases the chances we’ll miss something, while giving us the illusion we know everything”. Big data offers companies and organizations plentiful data points that haphazardly paint a picture of human behavior and consumption patterns. What big data does not account for is the inherently fickle, ever-shifting nature of humans themselves. While big data continues to dominate quantitative research, qualitative research methods are increasingly shifting to account for the human experience. Often referred to as HX, human experience research aims to capture the singularity of humans and pushes researchers to stop looking at customers exclusively as consumers. In human experience research, questions are asked to get at a respondent’s identity and emotions; for instance, asking how respondents relate to an advertising campaign instead of just how they react to it.

The cultivation of HX research in the industry raises the question: what are the larger implications for qualitative research? Perhaps the most obvious answer is that moderators and qualitative researchers need to rethink how research goals are framed and how questions are posed to respondents to capture their unique experiences. There are also implications for the recruiting process. The need for quality respondents is paramount in human experience research and will necessitate a shift in recruiting and screening practices. Additionally, qualitative researchers need to ensure that the methodology they choose makes respondents comfortable enough to be vulnerable and share valuable insights with researchers.

Human experience research may just now be gaining widespread traction, but the eventual effects will ultimately reshape the industry and provide another tool for qualitative researchers to answer increasingly complex research questions for clients. At Corona, adoption of emerging methodologies and frameworks such as HX means we can increasingly fill knowledge gaps and help our clients better understand the humans behind the research.


When Data Collection Changes the Experience

One of the ongoing issues in any research that involves people is whether the data collection process is changing the respondents’ experience. That is, sometimes when you measure an attitude or a behavior, you may inadvertently change the attitude or behavior. For example, asking questions in a certain way may change how people would have naturally thought about a topic. Or if people feel like they are being observed by other people, they may modify their responses and behaviors.

We often think about this issue when asking about sensitive topics or designing research that is done face-to-face. Will people modify their responses if they are in front of a group of people? Or even just one person? For example, asking parents about how they discipline their children may be a sensitive topic, and they might omit some details when they are talking in front of other parents. Even in a survey, respondents may modify their responses to what they think is socially desirable (i.e., say what they think will make them seem like a good person to other people) or may modify them based on who specifically they think will read the responses. They may modify their responses depending on whether they trust whoever is collecting the data.

But beyond social desirability concerns and concerns about being observed, the research experience itself may not match the real-life experience. With surveys, the question order may not match how people naturally experience thinking about a topic. If a survey asks people whether they would be interested in a new service, their answer may change depending on what questions they have already answered. Did the survey ask people to think about competing services before or after this question? Did the survey have people list obstacles to using the new service before or after? Moreover, which of the question orders is most similar to the real-life experience of making a decision about the new service?

As discussed in a previous post, making a research experience as similar to the real-life event as possible makes it more likely that the results of the research will generalize to that event. Collecting observational data, or collecting data from third-party observers, can also maintain the natural experience. For example, if you want to determine whether a program is improving classroom behavior in school, you might collect teachers’ reports of their students’ behavior (instead of, or in addition to, asking students to self-report). You could also record students’ behavior in the classroom and then code the behavior from the video. Technology has also made it easier to collect some data without changing the experience at all. For example, organizations can test different ad campaigns by running an A/B test: perhaps two groups of donors each receive one of two possible emails soliciting donations. If two separate URLs are set up to track the response to each email, then differences in response can be compared without changing the experience of receiving a request for donations.
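If it helps to see the mechanics, here is a minimal sketch of how responses tracked through the two URLs might be compared. The counts of emails sent and link visits are hypothetical, and the two-proportion z-test shown is just one common way to judge whether the difference between the two emails is larger than chance alone would suggest.

```python
import math

def two_proportion_z_test(successes_a, n_a, successes_b, n_b):
    """Compare response rates from two email variants using a pooled z-test."""
    p_a = successes_a / n_a
    p_b = successes_b / n_b
    pooled = (successes_a + successes_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal distribution
    p_value = math.erfc(abs(z) / math.sqrt(2))
    return p_a, p_b, z, p_value

# Hypothetical counts: visits to each tracked URL out of emails sent
rate_a, rate_b, z, p = two_proportion_z_test(120, 2000, 155, 2000)
print(f"Email A: {rate_a:.1%}, Email B: {rate_b:.1%}, z = {z:.2f}, p = {p:.3f}")
```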

One of my statistics professors used to say that no data are bad data. We just need to think carefully about what questions the data answer, which is impacted by how the data were collected. Recognizing that sometimes collecting data changes a natural experience can be an important step to understanding what questions your data are actually answering.


Feeding your market’s desire to participate in surveys

I got an online survey the other day from a public organization, and they wanted to know … something.  It doesn’t really matter for the purposes of this post.

I like to participate in surveys for a variety of reasons.  First, I’m naturally curious about what’s being asked, and why.  Maybe I can learn something.  Second, if it’s an issue that I care about, I can offer my opinion and have a voice.  Third, I’m a human being, so I just like to share my opinion.  And finally, I have a professional interest in seeing surveys designed by other people, just to compare against my own design ideas.

With the possible exception of the last reason, I would hazard a guess that I’m not uncommon in these motivations.  Most people who respond to surveys do so because they’re curious, because they want a voice, and because they like sharing their opinion.

However, it takes time to complete a survey, and like everyone else, my time is precious.  I want it to be worth my time, which links back to my motivations.  Will participating really give me a voice?  Will I learn something from it?  Will anyone care what I say?  How will my information be used?  I want to trust that something good will come from my participation.

This brings me to another key motivator that is not often mentioned.  In addition to wanting good outcomes from my participation, I also want to be sure that nothing bad will come of it.  I want to trust that the people surveying me are ethical and will protect both me and the results of the survey.

Thinking about these forces, let’s go back to this survey that I received.  It was a topic that I care about, so I was interested to see what questions were being asked.  I could check ‘curiosity’ off my list.  It was from a legitimate organization, so I’d be willing to have a voice in their decisions and share my opinion with them.  I could check those two items off my list.

But then I took a second look at the survey.  It was being done for a legitimate organization, but with “assistance” from a consultant that I was unfamiliar with.  I pulled up Google and tried to look up the company.  Nothing.  They had no website at all, and only a half-dozen Google hits that were mostly spam.

When I participate in a survey, I want to know that my responses aren’t going to be used against me.  There’s no crime in being a young company, but having no web presence at all gave me no way to confirm that this was a legitimate company.  Did these people know what they were doing?  Were they going to protect my information like a legitimate research company would, or would they naively turn over my responses to the client so they could target me for their next fundraising campaign?  I had no idea, since I couldn’t identify what the “consultant” actually did.

Beyond that, there was another problem.  I clicked on the link, and it took me to one of the low-cost survey hosting sites.  Based on my experience in the industry, I know that committed researchers don’t use these sites, and that they’re really tools for informal public input rather than legitimate research.  (The sampling is usually wrong, and there’s often no protection against “stuffing the ballot box.”)

I declined to participate in that survey, which made me sad.  I suspected that the end client had noble motivations, but in the end they didn’t meet my criteria for participating.

Given our 18 years in the business, we at Corona are often looking at changes on the margins of surveys.  What can we do in our wording or our delivery mode or our timing to maximize meaningful participation?  This was a good reminder to me that we also have to be careful to satisfy the much more basic and powerful forces that aid or damage participation.  Before anything else, the things that survey researchers have to do are:

  1. Make people feel safe in responding. You must clearly identify who is involved in the study, and make available a website that clearly shows oversight by a legitimate market research firm.  (This is even more important for qualitative research, which requires a bigger time commitment by respondents.) View Corona’s research privacy policy and Research participant information.
  2. Confirm to people that their opinion is important. Maybe this is my bias as a person in the industry, but if I get a cheaply designed survey, built on software used for entertainment polls, from some consultant who is virtually unknown to the World Wide Web, it tells me that the project isn’t a priority for the client.  If the research is important, give your respondents that message by your own actions.
  3. Confirm to people that the survey gives them a voice. You can overtly say this, but you also have to “walk the walk” by giving people confidence.  One thing that I’ve noticed more and more is the use of surveys as marketing tools rather than research tools.  Sending out frequent cheaply produced surveys as a means of “engaging our audience” is not a good idea if the surveys aren’t being used for decision making.  People figure out pretty quickly when participation is a waste of their time, and then they’re less likely to participate when you really need their input.

All in all, we in the research industry talk a lot about long-term declines in participation rates, but many of us are contributing to that by ignoring the powerful motivations that people have to participate in surveys.  People should WANT to participate in our surveys, and we should support that motivation.  We can do that by surveying them only when it’s important, by showing a high level of professionalism and effort in our communications with them, and by helping to reassure them that we’re going to both protect them and carry forward their voice to our clients.


Defining Best Practices and Evidence-Based Programs

The field of evaluation, like any field, has a lot of jargon.  Jargon provides a short-hand for people in the field to talk about complex things without having to use a lot of words or background explanation, but for the same reason, it’s confusing to people outside the field. A couple of phrases that we get frequent questions about are “best practices” and “evidence-based programs”.

“Evidence-based programs” are those that have been found by a rigorous evaluation to result in statistically significant outcomes for the participants. Similarly, “best practices” are evidence-based programs, or aspects of evidence-based programs, that have been demonstrated through rigorous evaluation to result in the best outcomes for participants.  Sometimes, however, “best practices” is used as an umbrella term to refer to a continuum of practices with varying degrees of support, where the label “best practices” anchors the high end of the continuum.  For example, the continuum may include the subcategory of “promising practices,” which typically refers to program components that have some initial support, such as a weakly significant statistical finding, suggesting those practices may help to achieve meaningful outcomes.  Those practices may or may not hold up to further study, but they may be seen as good candidates for additional study.

Does following “best practices” mean your program is guaranteed to have an impact on your participants?  No, it does not.  Similarly, does using the curriculum and following the program manual for an evidence-based program ensure that your program will have an impact on your participants? Again, no.  Following best practices and using evidence-based programs may improve your chances of achieving measurable results for your participants, but if your participants differ demographically (e.g., are older or younger, of higher or lower socioeconomic status) from the participants in the original study, or if your implementation fidelity does not match the original study, the program or practices may not have the same impact as they did originally.  (Further, the original finding may have been a Type I error, but that’s a topic for another day.)  That is why granting agencies ask you to evaluate your program even when you are using an evidence-based program.

To know whether you are making the difference you think you’re making, you need to evaluate the impact of your efforts on your participants.  If you are using an evidence-based program with a different group of people than have been studied previously, you will be contributing to the knowledge base for everyone about whether that program may also work for participants like yours.  And if you want your program to be considered evidence-based, a rigorous evaluation must be conducted that meets the criteria established by a certifying organization, such as the Blueprints program at the University of Colorado Boulder’s Institute of Behavioral Science, Center for the Study and Prevention of Violence, or the Substance Abuse and Mental Health Services Administration’s (SAMHSA) National Registry of Evidence-based Programs and Practices (NREPP).

So, it is a best practice to use evidence-based programs and practices that have been proven to work through rigorous, empirical study, but doing so doesn’t guarantee success on its own. Continued evaluation is still needed.


When experiences can lead you astray

Many organizations tell me that they hear from participants all the time about how much the program changed their lives.  Understandably, those experiences matter a lot to organizations, and they want to capture them in their evaluations.

Recently I heard a podcast that perfectly captured the risks in relying too heavily on those kinds of reports.  There are two related issues here.  The first is that while your program may have changed the lives of a few participants, your evaluation is looking to determine whether you made a difference for the majority of participants.  The second is that you are most likely to hear from participants who feel very strongly about your program, and less likely to hear from those who were less affected by it.  An evaluation will ensure that you are hearing from a representative sample of participants (or all participants) and not just a small group that may be biased in a particular direction.

An evaluation plan can ensure you capture both qualitative and quantitative measures of your impact in a way that accurately reflects the experiences of your participants.


Tuft & Needle: Incredible Mattresses. Incredible research?

If you have ever received a proposal from Corona Insights regarding customer research, you may have seen this line:

“We believe that surveying customers shouldn’t lower customer satisfaction.”

We take the respondent’s experience into account, from the development of our approach through the implementation of the research (e.g., survey design, participant invites), even in our choice of incentives. We work with our clients on an overall communications plan and discuss with them whether we need to contact all customers or only a small subset, sparing the rest from another email and request. For some clients, we even program “alerts” to notify them of customers who need immediate follow-up, as sketched below.
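As a purely hypothetical illustration (the field names, rating scale, and threshold below are invented, and any real version would depend on the survey platform and the client’s follow-up process), an alert rule can be as simple as flagging responses that cross a dissatisfaction threshold or explicitly ask for contact:

```python
# Hypothetical survey records; the field names are made up for illustration.
responses = [
    {"customer_id": 101, "satisfaction": 2, "wants_contact": True,
     "comment": "My order never arrived."},
    {"customer_id": 102, "satisfaction": 9, "wants_contact": False,
     "comment": "Great service!"},
]

SATISFACTION_ALERT_THRESHOLD = 3  # assumed 0-10 scale; tuned per client

def needs_follow_up(response):
    """Flag respondents who are unhappy or explicitly asked to be contacted."""
    return (response["satisfaction"] <= SATISFACTION_ALERT_THRESHOLD
            or response["wants_contact"])

def send_alert(response):
    # Stand-in for a real notification (email, CRM task, dashboard flag, ...)
    print(f"ALERT: customer {response['customer_id']} needs follow-up: "
          f"{response['comment']!r}")

for r in responses:
    if needs_follow_up(r):
        send_alert(r)
```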

As such, I’m always interested to see how other companies handle their interactions when it comes to requesting feedback. Is it a poorly thought-out SurveyMonkey survey? A personalized phone call? Or something in between?

Recently, I was in the market for a new mattress and wanted to try one of the newer entrants shaking up the mattress industry. I went with Tuft & Needle, and while I won’t bore you with details of the shopping experience or delivery, I found the post-purchase follow-up worth sharing (hopefully you’ll agree).

I received an email that appeared to come directly from one of the co-founders. It was a fairly stock email, but without overdone marketing content or design. (Granted, it’s easy enough to mask an email to make it appear to come from the founder.) It contained one simple request:

“If you are interested in sharing, I would love to hear about your shopping experience. What are you looking for in a mattress and did you have trouble finding it?”

The request made clear that I could simply hit reply to answer. So I did.

I assumed that was it, or that maybe I’d get another form response, but I actually got a real response, one that was clearly not stock (or at least not 100% stock, since it made specific references to my reply). It wasn’t the co-founder who responded but another employee, yet it was still impressive in my opinion.

So, what did they do right? What can we take away from this?

  • Make a simple request
  • Make it easy to reply to
  • Include a personalized acknowledgement of the customer’s responses

Maybe you think this is something only a start-up would (or should) do, but what if more companies took the time to demonstrate such great service, whether in their research or their everyday customer service?


Based on my experience…

This post was born from a conversation I had with a coworker earlier this week. I want to talk about research methodology and design, and how a client relying solely on what they know – their own experience and expertise – might end up with subpar research.

Quantitative and qualitative methods have different strengths and weaknesses, many of which we have blogged about before. The survey is the most frequently used quantitative methodology here at Corona, and it’s an incredibly powerful tool when used appropriately. However, one of the hallmarks of a quantitative, closed-ended survey is that there is little room for respondents to tell us their thoughts – we must anticipate possible answers in question design. When designing these questions, we rely on our client’s and our own experience and expertise.

We know how much clients appreciate the value of a statistically valid survey: being able to see what percentage of customers believe or perceive or do something, compare results across different subpopulations, or precisely identify what “clicks” with customers is very satisfying and can drive the success of a campaign.  But the survey isn’t always the right choice.

Sometimes, relying on experience and expertise is not enough, perhaps because you can’t predict exactly what responses customers might have or, despite the phrase being somewhat cliché, sometimes you don’t know what you don’t know. This is why I advocate for qualitative research.

Qualitative research is not statistically representative; its results are not really meant to be generalized to your entire population of customers or even a segment of them. However, it is incredibly powerful when you’re trying to learn more about how your customers are thinking about X, Y, or Z. Feeling stuck trying to brainstorm marketing materials, find a way to better meet your customers’ needs, or come up with a solution to a problem you’re having? Your customers probably won’t hand you a solution (though you never know), but what they say will spark creativity, aid in brainstorming, and become a valuable input to whatever you develop.

In the end, both serve a significant role in research as a whole. Personally, I’m a big supporter of integrating both into the process. Qualitative research can bridge the gap between your customers and the “colder” quantitative research. It can help you better understand what your customers are doing or thinking, and therefore help you design a quantitative survey that captures more robust data. Alternatively, qualitative research can follow a quantitative survey, allowing you to explore more of the “why” behind certain survey results. Above all, I simply urge you not to underestimate the value of qualitative research.


Phenomenology: One way to Understand the Lived Experience

How do workers experience returning to work after an on-the-job injury? How does a single mother experience taking her child to the doctor? What is a tourist’s experience on his first visit to Colorado?

These research questions could all be answered by phenomenology, a research approach that describes the lived experience. While not a specific method of research, phenomenology is a series of assumptions that guide research tactics and decisions. Phenomenological research is uncommon in traditional market research, but that may be due to little awareness of it rather than a lack of utility. (However, UX research, which follows many phenomenological assumptions, is quickly gaining popularity.) If you have been conducting research but feel like you are no longer discovering anything new, a phenomenological approach may yield some fresh insights.

Phenomenology is a qualitative research approach. It derives perspectives defined by experience and context, and the benefit of this research is a deeper and/or broader understanding of those perspectives. To ensure perspectives are revealed, rather than prescribed, phenomenology avoids abstract concepts. The research doesn’t ask participants to justify their opinions or defend their behaviors. Rather, it investigates respondents’ experiences on their own terms, in an organic way, assuming that people do not share the same interpretation of words or labels.

In market research, phenomenology is usually explored through unstructured, conversational interviews. Additional data, such as observed behavior (e.g., following a visitor’s path through a welcome center), can supplement the interviews. Interview questions typically do not ask participants to explain why they do, feel, or think something. These “why” questions can cause research participants to respond in ways they think the researcher wants to hear, which may not be what’s in their head or heart. Instead, phenomenology researchers elicit stories from research participants by asking questions like “Can you give me an example of when you…?” or “What was it like when…?” This way, the researcher seeks and values context equally with the action of the experience.

The utility of this type of research may not be obvious at first. Project managers and decision makers may conclude the research project with a frustrating feeling of “now what?” This is a valid downside of phenomenological research. On the other hand, this approach has the power to make decision makers and organization leaders rethink basic assumptions and fundamental beliefs. It can reveal how reality manifests in very different ways.

A phenomenology study is not appropriate in all instances. But it is a niche option in our research arsenal that might best answer the question you are asking. As always, which research approach you use depends on your research question and how you want to use the results.


If you are traveling to watch the eclipse, be prepared


As a bit of a space geek (don’t even get me started on my love of SpaceX), I’ve been planning for this weekend for a long time.  I bought my eclipse sunglasses and started looking into lodging over a year ago, so you can imagine how excited I am for this event.

Unfortunately (or fortunately, depending on how you look at it), it seems I’m not the only one who will be traveling north to watch the total solar eclipse.  (Though you can see the sun 92% obscured in Denver, it won’t be anything like the experience of totality.)  This has CDOT issuing all sorts of warnings about traffic over the weekend.  I was curious about how much of a doomsday prediction these warnings were, so I conducted a quick Google Survey to find out.

Though these surveys aren’t nearly as robust or scientific as the surveys we at Corona do for our clients, they are a great way to get a quick read on how the public feels about an issue.  In this case, I simply asked 125 Colorado residents if they were planning to travel north and, if so, where they planned to travel.  The results?

Again, these are very rough numbers, but with 20% of those surveyed saying they are planning to travel to see the eclipse (out of 5.5 million Coloradans), that could mean that as many as 1.1 million Coloradans will be on the highway on Monday.  CDOT’s estimate of 600,000 traveling to Wyoming doesn’t seem far off from the roughly 550,000 this quick survey would indicate.
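For those who want to check the back-of-envelope math, here it is in a few lines of code. The population figure and the 20% share come from the survey described above; the assumption that half of the would-be travelers named Wyoming is implied by the 550,000 figure rather than stated directly.

```python
colorado_population = 5_500_000
share_traveling = 0.20    # from the quick Google Survey (n = 125)
share_to_wyoming = 0.50   # assumed: half of travelers named Wyoming

travelers = colorado_population * share_traveling
to_wyoming = travelers * share_to_wyoming

print(f"Estimated travelers: {travelers:,.0f}")            # 1,100,000
print(f"Estimated heading to Wyoming: {to_wyoming:,.0f}")  # 550,000 vs CDOT's ~600,000
```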

So what to do?  Here are my recommendations:

  • Go anyway! Seriously, this won’t happen within easy driving distance of Colorado for another 30 years, and everyone I’ve heard speak about the experience says it’s completely surreal and unlike anything else you will experience.
  • Plan for safety. If you have somehow been living under a rock and missed all of the safety warnings about needing eclipse glasses, here’s another one.  Don’t look at the sun (except during totality if you travel north) without using eclipse glasses.  If you don’t already have them, you can try finding them at local libraries or hardware stores.  And if you can’t find them, check out community events where you could borrow from someone.  The process of the eclipse will take a total of almost 3 hours, so it shouldn’t be a problem to trade glasses here and there.  And if you still can’t make that work, there’s always a pinhole camera.
  • Plan for a good location. Many small towns in the path of totality are hosting events to watch the eclipse.  That’s a much better option than planning to just stop at the side of the road somewhere.
  • Plan for the worst-case traffic scenario. Though my hope is that everyone will be spread out enough that traffic won’t be as bad as CDOT fears, it’s a possibility that they’re entirely correct.  Get gas early so that you don’t have to wait at overcrowded gas stations.  Plan a variety of routes to get to and from your destination.  Take food and water in the car so that you don’t have to swarm the handful of restaurants in the area that aren’t equipped to handle this kind of volume.
  • Have fun! Try and relax, take your time when traveling, and enjoy the experience for what it is. Even if it takes way longer than you expect to get home on Monday, this may be the only time in your life that you get to experience something like this.

I’ll be out of the office on Monday, and I hope that many of you will be as well.  Enjoy the experience, and cross your fingers for clear skies!


Hey, did you hear that joke about naturally occurring data going to a bar?

There we were, cresting a pass along Highway 40 en route to Steamboat Springs. I found myself scanning the beautiful terrain while engrossed in a conversation about one of my favorite research topics – naturally occurring data. Call it research purist meets strategist for the knockout round.

As a consultant who leads data-driven strategy processes I’ve learned that not all data is created equal. I’ve also learned to value experience and intuition just as I value data derived from research. After all, aren’t the most powerful insights those that derive from a combination of data, intuition and experience?

Several years ago, I noticed a pattern. If I didn’t say the thing I believed to be true based on my experience and insights, I often regretted it later. I found myself wondering why I wasn’t valuing my own experience more.

And that same realization bopped me on the head again this year after returning home from Steamboat.

Why wasn’t I valuing my own experience as much as my intuition? Perhaps I’d given my intuition center stage as a way to honor it when I’m amidst folks who value facts and numbers.

But what about 20 years of experience? I’m not saying experience that is 20 years old. I’m talking about an accumulation of naturally occurring data over 20 years.

The thing I love about naturally occurring data is that it can be as powerful and valuable as data derived from surveys, focus groups, and other constructed research environments. (Thank you, market research profession, for honoring what I knew to be true.) That data exists all around us – and if we can stay attuned to it – we can gather that data up into trends, patterns and insights.

I always remind my clients that the world doesn’t stop when you are engaged in strategic planning. Day-to-day operations go on while we contemplate bold aspirations for the future. And the naturally occurring data we gather along the way can serve either the day-to-day, or the strategic, or both.

Experience has taught me that all forms of data are powerful – and together they can be synergistic.

Fact. Perception. Intuition.

Add it all up to experience. Naturally occurring.