Category: Market Research

When Data Collection Changes the Experience

One of the ongoing issues in any research that involves people is whether the data collection process is changing the respondents’ experience. That is, sometimes when you measure an attitude or a behavior, you may inadvertently change the attitude or behavior. For example, asking questions in a certain way may change how people would have naturally thought about a topic. Or if people feel like they are being observed by other people, they may modify their responses and behaviors.

We often think about this issue when asking about sensitive topics or designing research that is done face-to-face. Will people modify their responses if they are in front of a group of people? Or even just one person? For example, asking parents about how they discipline their children may be a sensitive topic, and they might omit some details when they are talking in front of other parents. Even in a survey, respondents may modify their responses to what they think is socially desirable (i.e., say what they think will make them seem like a good person to other people) or may modify them based on who specifically they think will read the responses. They may modify their responses depending on whether they trust whoever is collecting the data.

But beyond social desirability concerns and concerns about being observed, the research experience itself may not match the real-life experience. With surveys, the question order may not match how people naturally experience thinking about a topic. If a survey asks people whether they would be interested in a new service, their answer may change depending on what questions they have already answered. Did the survey ask people to think about competing services before or after this question? Did the survey have people list obstacles to using the new service before or after? Moreover, which of the question orders is most similar to the real-life experience of making a decision about the new service?

As discussed in a previous post, making a research experience as similar to the real-life event as possible can make it more likely that the results of the research will generalize to the real event. Collecting observational data or collecting data from third-party observers can also maintain the natural experience. For example, if you want to determine whether a program is improving classroom behavior in school, you might collect teachers’ reports of their students’ behavior (instead of, or in addition to, asking students to self-report). You could also record students’ behavior in the classroom and then code the behavior from the video. Technology has also made it easier to collect some data without changing the experience at all. For example, organizations can test different ad campaigns by doing an A/B test. Perhaps two groups of donors each receive one of two possible emails soliciting donations. If two separate URLs are set up to track the response to each email, then differences in response can be compared without changing the experience of receiving a request for donations.
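To make the URL-tracking idea concrete, here is a minimal sketch of how the two donor groups might be compared once the clicks come in. The counts are hypothetical, and this uses a standard two-proportion z-test rather than any particular vendor's tooling:

```python
import math

def ab_response_test(responses_a, sent_a, responses_b, sent_b):
    """Compare response rates from two email variants using a
    two-proportion z-test with a pooled standard error."""
    rate_a = responses_a / sent_a
    rate_b = responses_b / sent_b
    pooled = (responses_a + responses_b) / (sent_a + sent_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / sent_a + 1 / sent_b))
    z = (rate_a - rate_b) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return rate_a, rate_b, z, p_value

# Hypothetical counts tracked via the two separate URLs
rate_a, rate_b, z, p = ab_response_test(120, 5000, 165, 5000)
```

Because the data are collected passively through the URLs, donors experience nothing beyond a normal solicitation email; the comparison happens entirely after the fact.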

One of my statistics professors used to say that no data are bad data. We just need to think carefully about what questions the data answer, which is impacted by how the data were collected. Recognizing that sometimes collecting data changes a natural experience can be an important step to understanding what questions your data are actually answering.

Feeding your market’s desire to participate in surveys

I got an online survey the other day from a public organization, and they wanted to know … something.  It doesn’t really matter for the purposes of this post.

I like to participate in surveys for a variety of reasons.  First, I’m naturally curious about what’s being asked, and why.  Maybe I can learn something.  Second, if it’s an issue that I care about, I can offer my opinion and have a voice.  Third, I’m a human being so I just like to share my opinion.  And finally, I have a professional interest in seeing surveys designed by other people, just to compare against my own design ideas.

With the possible exception of the last reason, I would hazard a guess that I’m not uncommon in these motivations.  Most people who respond to surveys do so because they’re curious, because they want a voice, and because they like sharing their opinion.

However, it takes time to complete a survey, and like everyone else, my time is precious.  I want it to be worth my time, which links back to my motivations.  Will participating really give me a voice?  Will I learn something from it?  Will anyone care what I say?  How will my information be used?  I want to trust that something good will come from my participation.

This brings me to another key motivator that is not often mentioned.  In addition to wanting good outcomes from my participation, I also want to be sure that nothing bad will come of it.  I want to trust that the people surveying me are ethical and will protect both me and the results of the survey.

Thinking about these forces, let’s go back to this survey that I received.  It was a topic that I care about, so I was interested to see what questions were being asked.  I could check ‘curiosity’ off my list.  It was from a legitimate organization, so I’d be willing to have a voice in their decisions and share my opinion with them.  I could check those two items off my list.

But then I took a second look at the survey.  It was being done for a legitimate organization, but with “assistance” from a consultant that I was unfamiliar with.  I pulled up Google and tried to look up the company.  Nothing.  They had no web site at all, and only a half-dozen Google hits that were mostly spam.

When I participate in a survey, I want to know that my responses aren’t going to be used against me.  There’s no crime in being a young company, but having no web presence at all raised serious doubts about whether this was a legitimate research firm.  Did these people know what they were doing?  Were they going to protect my information like a legitimate research company would, or would they naively turn over my responses to the client so they could target me for their next fundraising campaign?  I had no idea, since I couldn’t identify what the “consultant” actually did.

Beyond that, there was another problem.  I clicked on the link, and it took me to one of the low-cost survey hosting sites.  Based on my experience in the industry, I know that committed researchers don’t use these sites, and that they’re really tools for informal public input rather than legitimate research.  (The sampling is usually wrong, and there’s often no protection against “stuffing the ballot box.”)

I declined to participate in that survey, which made me sad.  I suspected that the end client had noble motivations, but in the end they didn’t meet my criteria for participating.

Given our 18 years in the business, we at Corona are often looking at changes on the margins of surveys.  What can we do in our wording or our delivery mode or our timing to maximize meaningful participation?  This was a good reminder to me that we also have to be careful to satisfy the much more basic and powerful forces that aid or damage participation.  Before anything else, the things that survey researchers have to do are:

  1. Make people feel safe in responding. You must clearly identify who is involved in the study, and make available a web site that clearly shows oversight by a legitimate market research firm.  (This is even more important for qualitative research, which requires a bigger time commitment by respondents.) View Corona’s research privacy policy and Research participant information.
  2. Confirm to people that their opinion is important. Maybe this is my bias as a person in the industry, but if I get a cheaply designed survey built on software used for entertainment polls, from some consultant who is virtually unknown to the World Wide Web, it tells me that the project isn’t a priority for the client.  If the research is important, give your respondents that message through your own actions.
  3. Confirm to people that the survey gives them a voice. You can overtly say this, but you also have to “walk the walk” by giving people confidence.  One thing that I’ve noticed more and more is the use of surveys as marketing tools rather than research tools.  Sending out frequent cheaply produced surveys as a means of “engaging our audience” is not a good idea if the surveys aren’t being used for decision making.  People figure out pretty quickly when participation is a waste of their time, and then they’re less likely to participate when you really need their input.

All in all, we in the research industry talk a lot about long-term declines in participation rates, but many of us are contributing to that by ignoring the powerful motivations that people have to participate in surveys.  People should WANT to participate in our surveys, and we should support that motivation.  We can do that by surveying them only when it’s important, by showing a high level of professionalism and effort in our communications with them, and by helping to reassure them that we’re going to both protect them and carry forward their voice to our clients.

Tuft & Needle: Incredible Mattresses. Incredible research?

If you have ever received a proposal from Corona Insights regarding customer research, you may have seen this line:

“We believe that surveying customers shouldn’t lower customer satisfaction.”

We take the respondent’s experience into account, from the development of our approach through the implementation of the research (e.g., survey design, participant invites, etc.), even in our choice of incentives. We work with our clients on an overall communications plan and discuss with them whether we need to contact all customers or only a small subset, sparing the rest from another email and request. For some clients, we even program “alerts” to notify them of customers that need immediate follow-up.

As such, I’m always interested to see how other companies handle their interactions when it comes to requesting feedback. Is it a poorly thought out SurveyMonkey survey? Personalized phone call? Or something in between?

Recently, I was in the market for a new mattress and wanted to try one of the newer entrants shaking up the mattress industry. I went with Tuft & Needle, and while I won’t bore you with details of the shopping experience or delivery, I found the post-purchase follow-up worth sharing (hopefully you’ll agree).

I received an email that appeared to come directly from one of the co-founders. It was a fairly stock email, but without overdone marketing content or design (and it is easy enough to mask an email to make it appear to come from the founder). It contained one simple request:

“If you are interested in sharing, I would love to hear about your shopping experience. What are you looking for in a mattress and did you have trouble finding it?”

The request made clear that I could simply hit reply to answer. So I did.

I assumed that was it, or maybe I’d get another form response, but I actually got a real response. One that was clearly not stock (or at least not 100% stock – it made specific references to my response). It wasn’t the co-founder who had responded but another employee; still, impressive in my opinion.

So, what did they do right? What can we take away from this?

  • Make a simple request
  • Make it easy to reply to
  • Include a personalized acknowledgement of the customer’s responses

Maybe you think this is something only a start-up would (or should) do, but what if more companies took the time to demonstrate such great service, whether in their research or their everyday customer service?

Based on my experience…

This post was born from a conversation I had with a coworker earlier this week. I wanted to talk about research methodology and design, and how a client relying solely on what they know – their own experience and expertise – might end up with subpar research.

Quantitative and qualitative methods have different strengths and weaknesses, many of which we have blogged about before. The survey is the most frequently used quantitative methodology here at Corona, and it’s an incredibly powerful tool when used appropriately. However, one of the hallmarks of a quantitative, closed-ended survey is that there is little room for respondents to tell us their thoughts – we must anticipate possible answers in question design. When designing these questions, we rely on our client’s and our own experience and expertise.

We know how much clients appreciate the value of a statistically valid survey – being able to see what percentage of customers believe or perceive or do something, compare results across different subpopulations, or precisely identify what “clicks” with customers is very satisfying and can drive the success of a campaign.  But the survey isn’t always the right choice.

Sometimes, relying on experience and expertise is not enough, perhaps because you can’t predict exactly what responses customers might have or, despite the phrase being somewhat cliché, sometimes you don’t know what you don’t know. This is why I advocate for qualitative research.

Qualitative research is not statistically valid. Its results are not really meant to be applied to your entire population of customers, or even a segment of them. However, it is incredibly powerful when you’re trying to learn more about how your customers are thinking about X, Y, or Z. Feeling stuck trying to brainstorm marketing materials, find a way to better meet your customers’ needs, or come up with a solution to a problem you’re having? Your customers probably won’t hand you a solution (though you never know), but what they say will spark creativity, aid in brainstorming, and become a valuable input to whatever you develop.

In the end, both serve a significant role in research as a whole. Personally, I’m a big supporter of integrating both into the process. Qualitative research can bridge the gap between your customers and the “colder” quantitative research. It can help you better understand what your customers are doing or thinking and therefore help you better design a quantitative survey that enables you to capture robust data. Alternatively, qualitative research can follow a quantitative survey, allowing you to explore more of the “why” behind certain survey results. In the end, I simply urge you not to underestimate the value of qualitative research.

Phenomenology: One Way to Understand the Lived Experience

How do workers experience returning to work after an on-the-job injury? How does a single mother experience taking her child to the doctor? What is a tourist’s experience on his first visit to Colorado?

These research questions could all be answered by phenomenology, a research approach that describes the lived experience. While not a specific method of research, phenomenology is a series of assumptions that guide research tactics and decisions. Phenomenological research is uncommon in traditional market research, but that may be due to little awareness of it rather than a lack of utility. (However, UX research, which follows many phenomenological assumptions, is quickly gaining popularity.) If you have been conducting research but feel like you are no longer discovering anything new, then a phenomenological approach may shed some fresh insight.

Phenomenology is a qualitative research approach. It derives perspectives defined by experience and context, and the benefit of this research is a deeper and/or broader understanding of those perspectives. To ensure perspectives are revealed, rather than prescribed, phenomenology avoids abstract concepts. The research doesn’t ask participants to justify their opinions or defend their behaviors. Rather, it investigates in the respondents’ own terms in an organic way, assuming that people do not share the same interpretation of words or labels.

In market research, phenomenology is usually explored through unstructured, conversational interviews. Additional data, such as observed behavior (e.g., following a visitor’s path through a welcome center), can supplement the interviews. Interview questions typically do not ask participants to explain why they do, feel, or think something. These “why” questions can cause research participants to respond in ways that they think the researcher wants to hear, which may not be what’s in their head or heart. Instead, phenomenology researchers elicit stories from research participants by asking questions like “Can you give me an example of when you…?” or “What was it like when…?” This way, the researcher seeks and values context equally with the action of the experience.

The utility of this type of research may not be obvious at first. Project managers and decision makers may conclude the research project with a frustrating feeling of “now what?” This is a valid downside of phenomenological research. On the other hand, this approach has the power to make decision makers and organization leaders rethink basic assumptions and fundamental beliefs. It can reveal how reality manifests in very different ways.

A phenomenology study is not appropriate in all instances. But it is a niche option in our research arsenal that might best answer the question you are asking. As always, which research approach you use depends on your research question and how you want to use the results.

Hey, did you hear that joke about naturally occurring data going to a bar?

There we were, cresting a pass along Highway 40 en route to Steamboat Springs. I found myself scanning the beautiful terrain while engrossed in a conversation about one of my favorite research topics – naturally occurring data. Call it research purist meets strategist for the knockout round.

As a consultant who leads data-driven strategy processes I’ve learned that not all data is created equal. I’ve also learned to value experience and intuition just as I value data derived from research. After all, aren’t the most powerful insights those that derive from a combination of data, intuition and experience?

Several years ago, I noticed a pattern. If I didn’t say the thing I believed to be true based on my experience and insights, I often regretted it later. I found myself wondering why I wasn’t valuing my own experience more.

And that same realization bopped me on the head again this year after returning home from Steamboat.

Why wasn’t I valuing my own experience as much as my intuition? Perhaps I’d given my intuition center stage to honor it when I’m amidst folks who value facts and numbers.

But what about 20 years of experience? I’m not saying experience that is 20 years old. I’m talking about an accumulation of naturally occurring data over 20 years.

The thing I love about naturally occurring data is that it can be as powerful and valuable as data derived from surveys, focus groups, and other constructed research environments. (Thank you, market research profession, for honoring what I knew to be true.) That data exists all around us – and if we can stay attuned to it – we can gather that data up into trends, patterns and insights.

I always remind my clients that the world doesn’t stop when you are engaged in strategic planning. Day-to-day operations go on while we contemplate bold aspirations for the future. And the naturally occurring data we gather along the way can serve either the day-to-day, or the strategic, or both.

Experience has taught me that all forms of data are powerful – and together they can be synergistic.

Fact. Perception. Intuition.

Add it all up to experience. Naturally occurring.

Understanding customer satisfaction requires understanding the customer experience

When people think of doing market research on a new idea, many think it works like this: ask people whether they would buy it, and if enough of them say yes, the launch will be a success.

The problem with this mentality is that humans are notoriously awful at forecasting their own behavior.  It’s easy to say “Sure, I would buy that!” when clicking a button in a survey or when sitting in a focus group.  When it comes down to the actual experience of standing in a store aisle and comparing one product to a dozen others, though, the decision gets considerably more difficult.  There have been plenty of market research failures over the years, and marketers’ failure to put themselves in the shoes of their customers is often a key reason why.

So how do you get around this issue?  Here are a few possible solutions:

  • Replicate the purchasing decision as closely as possible. Rather than putting your product (or service) in front of people and asking for feedback in a vacuum, ask participants to compare your offering with those of your competitors.  Or better yet, don’t even tell them which is yours at first and see which they pick out and why.
  • Approach the problem from a variety of perspectives. Interest in a product or service has a wide variety of dimensions.  While the overall reaction you get may be positive, you may be able to identify areas for improvement if you break interest down into key components, such as look and feel, ease of use, price, etc.
  • Get abstract. We at Corona will sometimes make use of questions like “If [this product or service] were an animal, what would it be and why?”  While questions like this seem a little silly, it can be extremely informative to know if your offering is more of a cheetah or a walrus.  And the explanations of why participants chose their animal can be even more informative.  (Fair warning, though: some participants hate this question and will just refuse to answer.  That’s OK.)
  • Consider advanced techniques. Statistical techniques have been developed over the years that can help evaluate the relative weight that survey respondents place on various attributes of a product or service, such as conjoint or MaxDiff analysis.  The details of how these work are outside the scope of our humble little blog, but each of them asks participants to make a decision that requires comparing sets of choices to one another rather than just saying “Yes, I’m interested in A.”
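As a flavor of what MaxDiff involves, here is a minimal sketch of the simplest scoring approach: counting best and worst picks across tasks. The attribute names and task data are hypothetical, and real MaxDiff studies typically use more sophisticated estimation (e.g., hierarchical Bayes) than raw counts:

```python
from collections import defaultdict

def maxdiff_scores(tasks):
    """Count-based MaxDiff scoring.  Each task shows a subset of
    attributes and records which one the respondent picked as best
    and which as worst.  Score = (best picks - worst picks) / times shown."""
    best = defaultdict(int)
    worst = defaultdict(int)
    shown = defaultdict(int)
    for task in tasks:
        for item in task["shown"]:
            shown[item] += 1
        best[task["best"]] += 1
        worst[task["worst"]] += 1
    return {item: (best[item] - worst[item]) / shown[item] for item in shown}

# Hypothetical tasks for one respondent comparing product attributes
tasks = [
    {"shown": ["price", "firmness", "warranty"], "best": "price", "worst": "warranty"},
    {"shown": ["price", "delivery", "firmness"], "best": "firmness", "worst": "delivery"},
]
scores = maxdiff_scores(tasks)
```

The forced trade-off is the point: a respondent cannot say everything is important, so the scores reveal relative priorities in a way a simple rating scale often cannot.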

Market research, as valuable as it is, will never be a silver bullet that will absolutely guarantee the success of a product or service launch.  However, by considering the experience of making a decision when designing your approach, you’ll have a very strong chance of making the best decisions possible to make your launch a success.

Thinking strategically about benchmarks

When our clients are thinking about data they would like to collect to answer a question, we are sometimes asked about external benchmarking data. Basically, when you benchmark your data, you are asking how you compare to other organizations or competitors. While external benchmarks can be useful, there are a few points to consider when deciding whether benchmarking your data is going to be worthwhile:

  1. Context is key. Comparing yourself to other organizations or competitors can encourage some big-picture thinking about your organization. But it is important to remember the context of the benchmark data. Are the benchmark organizations similar to you? Are they serving similar populations? How do they compare in size and budget? Additionally, external benchmark data may only be available in aggregated form. For example, nonprofit and government organizations may be grouped together. Sometimes these differences are not important, but other times they are an important lens through which you should examine the data.
  2. Benchmark data is inherently past-focused. When you compare your data to that of other organizations, you are comparing yourself to the past. There is a time lag in any data collection, and the data reflect the impacts of changes or policies that have already been implemented. While this can be useful, if your organization is trying to adapt to changes you see on the horizon, it may not be as helpful to compare yourself to the past.
  3. Benchmark data is generally more useful as part of a larger research project. For example, if your organization differs significantly from other external benchmarks, it can be helpful to have data that suggest why that is.
  4. What you can benchmark on may not be the most useful. Often, you are limited in the types of data available about other organizations. These may be certain financial data or visitor data. Sometimes the exact same set of questions is administered to many organizations, and you are limited to those questions for benchmarking.

Like most research, external benchmarking can be useful—it is just a matter of thinking carefully about how and when to best use it.

Activating research

Research that just sits on the shelf (or these days, in a digital folder) is research that probably should not have been conducted. If it is not going to be used, then why do it?

Effective research takes many things, from beginning to end. We’ve blogged before about the need to start with the end in mind, but what happens when you get to the end? Then what?

Sharing results internally, with the right audiences, and in an effective medium, is key. Here are several ideas of how to do that beyond the common report or PowerPoint deck.

Make it interactive. Can the data (in part or whole) be made to allow for manipulation by users? This could be a fully interactive dashboard where the user gets to select variables to look at, or it could simply be a predefined analysis that users can pull up, filter, and review. For example, Corona often delivers open-ended verbatim responses with a series of filters built in so users can quickly drill down, rather than just reading hundreds or thousands of verbatim comments.
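The filtering idea can be sketched very simply. This is a minimal, hypothetical example of the kind of drill-down a verbatim deliverable enables (in practice this would sit behind a dashboard or spreadsheet filter rather than raw code):

```python
def filter_verbatims(responses, **criteria):
    """Return open-ended comments matching every given field filter,
    e.g. region='West' or segment='member'.  Each response is a dict
    with a 'comment' key plus demographic fields."""
    return [
        r["comment"]
        for r in responses
        if all(r.get(field) == value for field, value in criteria.items())
    ]

# Hypothetical verbatim records with filterable fields attached
responses = [
    {"comment": "Loved the staff", "region": "West", "segment": "member"},
    {"comment": "Parking was hard", "region": "East", "segment": "member"},
    {"comment": "Great exhibits", "region": "West", "segment": "visitor"},
]
west_only = filter_verbatims(responses, region="West")
```

Attaching respondent attributes to each comment up front is what makes this possible; without them, users are back to reading every comment in sequence.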

Video summaries. Can you tell the story through video for greater engagement? We have found that video works best in short clips to convey the primary findings and are often best accompanied by more detailed reporting (if users need more). Longer videos can be harder to digest and cause people to disengage. Corona has created short videos to communicate general findings to larger groups of employees who may need to know the general gist of the research, but do not need to know as much detail as core decision makers.

Initial readouts and workshops. Can you involve the users in designing reporting, such as holding a workshop to help build their dashboard so it includes the metrics they want? This not only helps create a more effective dashboard for them, but also creates buy-in since they were involved in its creation. Similarly, sharing preliminary findings can help focus additional analysis and ensure their questions are being addressed in the final report.

Also, consider the following to make any of the above more effective:

  • Who needs what? Consider who in the organization needs what information, and share what is most important to each audience so critical points don’t get lost in the larger report.
  • How much? Consider the level of detail any one person or team needs. Executives may want top-level metrics with key points and recommendations; analysts may want every tabulation and verbatim response.
  • Who has questions? I think when people read a report or finding, they often think that’s it. Encourage questions and allow for follow-up to make sure everyone has what they need to move forward.

What challenges have you had making use of research? What have you done to try to overcome them? We’d love to hear below.

Getting the most out of your customer survey

There are a multitude of tools available these days that allow organizations to easily ask questions of their customers.  It is certainly not uncommon, when Corona begins an engagement, for the client to have made internal attempts at conducting surveys in the past.  In some cases, these studies have been relatively sophisticated and have yielded great results. In others, however, the survey’s results were met with a resounding “Why does this matter?”

The challenge is that conducting a good survey requires a much more strategic view than most realize.  This starts with designing the survey questions themselves.  We always begin our engagements by asking our clients to think through the decisions that will be made, the opportunities to improve, and the possible challenges to be addressed based on the results.  By keeping the answers to these questions in mind as you design your survey, you can minimize the number of “trivia” questions that might be interesting to know but won’t really have any influence on your future decisions.

Even after having questions designed, you have to consider how you will get people to participate in the survey.  If you have a database of 100,000 customers, it may be tempting to just send invitations to all of them.  But what if you plan to send out a plea for donations in the next few weeks?  Consider the impact of asking for 15 minutes of time from people who might be asked to support you very soon.  Being careful to appropriately time the survey and perhaps only send it out to a small segment of customers might help to minimize fatigue that could negatively impact your overall business strategy in the near future.
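Drawing that smaller segment can be as simple as a random sample from the customer database. This is a minimal sketch with hypothetical numbers; a fixed seed is used so the same draw can be reproduced later if needed:

```python
import random

def draw_survey_sample(customer_ids, n, seed=42):
    """Draw a simple random sample of n customers to invite,
    sparing the rest of the database from another request."""
    rng = random.Random(seed)  # fixed seed makes the draw reproducible
    return rng.sample(customer_ids, n)

# Hypothetical database of 100,000 customer IDs; invite only 2,000
invited = draw_survey_sample(list(range(100000)), 2000)
```

With a sufficiently large random sample, the results still generalize to the full customer base, while 98,000 people are left undisturbed ahead of the donation appeal.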

Finally, once you’ve collected the results, simple tabulations will only tell a small part of the story.  Every result should be examined through the lens of its actual strategic impact.  A good question to ask throughout the analysis is, “So what?”  If you keep the focus on the implications of the results rather than the results themselves, your final report of what you learned will have a much better chance of making a meaningful impact on your organization moving forward.

Obviously, we at Corona are here to help walk you through this process in order to ensure the highest-quality result possible, but even if you choose to go it alone, keeping a strategic view of what you need to learn and how it will influence your decisions will help to avoid a lot of wasted effort.