RADIANCE BLOG

Category: Market Research

Breaking down the wall between quant and qual

Recently we had a project involving a large survey with numerous open-end questions. Taking a divide-and-conquer approach, it was all hands on deck to quickly code the thousands of responses. As a qualitative researcher, I found coding survey responses to be a somewhat foreign process, and I often caught myself overthinking both my codes and the nuanced content of the responses. When I had finished, I typed up a summary of my findings and even pulled out a few ‘rock star’ quotes that illustrated key trends and takeaways. The experience left me wondering—why is content analysis of survey open-ends not more common? It is qualitative data, after all.

Simply put, the purpose of content analysis is to elicit themes from a body of written or other recorded media. Like many qualitative approaches, it does not produce numerical measurements; rather, content analysis identifies patterns and trends in the data. Incorporating qualitative analysis techniques such as content analysis into traditionally quantitative studies better contextualizes survey results and produces greater insights.
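
To make the coding step concrete, here is a minimal sketch of a first-pass theme tally for open-end responses. Everything in it is hypothetical and not from the project described above: the theme names, the keyword rules, and the sample responses. In practice, codes are assigned by human judgment rather than keyword matching, so a script like this is only a rough starting point.

    from collections import Counter

    # Hypothetical code frame: themes and the keywords that hint at them.
    THEME_KEYWORDS = {
        "price": ["price", "cost", "expensive", "cheap"],
        "quality": ["quality", "durable", "well made"],
        "service": ["service", "support", "staff"],
    }

    def code_response(text):
        """Return the themes whose keywords appear in an open-end response."""
        text = text.lower()
        return [theme for theme, words in THEME_KEYWORDS.items()
                if any(word in text for word in words)]

    responses = [
        "Love the quality, but it is a little expensive.",
        "Customer service was quick and friendly.",
        "Too expensive for what you get.",
    ]

    # Tally how often each theme appears across all responses.
    counts = Counter(theme for r in responses for theme in code_response(r))
    print(counts)  # e.g., Counter({'price': 2, 'quality': 1, 'service': 1})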

Imagine a classic branding survey where participants are asked sentiment questions such as ‘What is your impression of Brand X?’ Often, these questions are designed as Likert scales with defined categories (e.g., very positive, somewhat positive, neutral, etc.). While this provides general insight into attitudes and impressions of the brand, it does not necessarily highlight the broader insights or implications of the research findings. When Corona does a brand survey, we regularly ask an open-end question for qualitative content analysis as a follow-up, such as ‘What specifically about Brand X do you find appealing?’ or, conversely, ‘What specifically about Brand X do you find unappealing?’ Including a qualitative follow-up provides additional framing for the quantitatively designed Likert scale question and augments insights. Additionally, if the survey shows sizeable negative sentiment towards a brand, incorporating qualitatively designed open-ends can uncover issues or problems that were unknown prior to the research, and perhaps outside of the original research scope.

Historically, quantitative and qualitative research have been bifurcated, both in design and in analysis. However, hybrid approaches such as the one described above are quickly gaining ground, and their true value is being realized. Based on our experience here at Corona, for content analysis to be done effectively in a quantitative-dominant survey, the decision to include it is best made early in the research design phase.

A few things to keep in mind when designing open-ended questions for content analysis:

  • Clearly define research objectives and goals for the open-end questions that will be qualitatively analyzed.
  • Construct questions with these objectives in mind and incorporate phrasing that invites nuanced responses.
  • Plainly state your expectations for responses and, if possible, institute character minimums or maximums as needed.

In addition to the points mentioned above, it is important to note that there are some avoidable pitfalls. First off, this method is best suited for surveys with a smaller sample size, preferably under 1,000 respondents. Also, the survey itself must not be too time intensive. It is well known that surveys extending beyond 15 to 20 minutes often lead to participants dropping out or not fully completing the survey. Keep these time limits in mind and be selective about the number of open-ends to include. Lastly, it is important to keep the participant engaged in the survey. If multiple open-ends are incorporated into the survey, phrase the questions differently or ask them about different topics to keep participants from feeling as though they are repeating themselves.

In an ideal world, quantitative and qualitative approaches could meld together seamlessly, but we all know this isn’t an ideal world. Time constraints, budgets, and research objectives are just a handful of reasons why a hybrid approach such as the one discussed here may not be the right choice. When it is, though, hybrid approaches give participants an opportunity to think more deeply about the topic at hand and can also create a sense of active engagement between the participant and the end-client. In other words—they feel like their voice is being heard, and the end-client gains a better understanding of their customers.


The Four Cornerstones of Survey Measurement: Part 2

Part Two: Reliability and Validity

The first blog in this series argued that precision, accuracy, reliability, and validity are key indicators of good survey measurement.  It described precision and accuracy and how the researcher aims to balance the two based on the research goals and desired outcome.  This second blog will explore reliability and validity.

Reliability

In addition to precision and accuracy (and non-measurement factors such as sampling, response rate, etc.), the ability to be confident in findings relies on the consistency of survey responses. Consistent answers to a set of questions designed to measure a specific concept (e.g., an attitude) or behavior are probably reliable, although not necessarily valid.  Picture an archer shooting arrows at a target, each arrow representing a survey question and where they land representing the question answers. If the arrows consistently land close together, but far from the bulls-eye, we would still say the archer was reliable (i.e., the survey questions were reliable). But being far from the bulls-eye is problematic; it means the archer didn’t fulfill his intentions (i.e., the survey questions didn’t measure what they were intended to measure).

One way to increase survey measurement reliability (specifically, internal consistency) is to ask several questions that try to “get at” the same concept. A silly example: Q1) How old are you? Q2) How many years ago were you born? Q3) For how many years have you lived on Earth? If the answers to these three questions are the same, we have high reliability.
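
For readers who want to attach a number to internal consistency, the most common summary statistic is Cronbach’s alpha, which compares the variability of individual questions to the variability of their sum. Below is a minimal sketch in Python using made-up answers to the three age questions above; the data are hypothetical, and values close to 1 indicate highly consistent items.

    import numpy as np

    def cronbachs_alpha(scores):
        """Cronbach's alpha for a respondents-by-items matrix of answers."""
        k = scores.shape[1]
        item_variances = scores.var(axis=0, ddof=1).sum()
        total_variance = scores.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_variances / total_variance)

    # Hypothetical answers from five respondents to the three "age" questions.
    answers = np.array([
        [34, 34, 34],
        [52, 52, 51],
        [29, 29, 29],
        [61, 60, 61],
        [45, 45, 45],
    ])

    print(round(cronbachs_alpha(answers), 3))  # near 1.0, i.e., highly consistent items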

The challenge with achieving high internal reliability is the lack of space on a survey to ask similar questions. Sometimes, we ask just one or two questions to measure a concept. This isn’t necessarily good or bad; it just illustrates the inevitable trade-offs when balancing all indicators.  To quote my former professor Dr. Ham, “Asking just one question to measure a concept doesn’t mean you have measurement error, it just means you are more likely to have error.”

Validity

Broadly, validity represents the accuracy of generalizations (not the accuracy of the answers). In other words, do the data represent the concept of interest? Can we use the data to make inferences, develop insights, and recommend actions that will actually work? Validity is the most abstract of the four indicators, and it can be evaluated on several levels.

  • Content validity: Answers from survey questions represent what they were intended to measure.  A good way to ensure content validity is to precede the survey research with open-ended or qualitative research to develop an understanding of all top-of-mind aspects of a concept.
  • Predictive or criterion validity: Variables should be related in the expected direction. For example, ACT/SAT scores have been relatively good predictors of how students perform later in college. The higher the score, the more likely the student did well in college.  Therefore, the questions asked on the ACT/SAT, and how they are scored, have high predictive validity.
  • Construct validity: There should be an appropriate link between the survey question and the concept it is trying to represent.  Remember that concepts, and constructs, are just that, they are conceptual. Surveys don’t measure concepts, they measure variables that try to represent concepts.  The extent that the variable effectively represents the concept of interest demonstrates construct validity.

High validity suggests greater generalizability; measurements hold up regardless of factors such as race, gender, geography, or time.  Greater generalizability leads to greater usefulness because the results have broader use and a longer shelf-life.  If you are investing in research, you might as well get a lot of use out of it.

This short series described four indicators of good measurement.  At Corona Insights, we strive to maximize these indicators while realizing and balancing the inevitable tradeoffs. Survey design is much more than a list of questions; it’s more like a complex and interconnected machine, and we are the mechanics working hard to get you back on the road.


Keeping it constant: 3 things to keep in mind with your trackers

When conducting a program evaluation or customer tracker (e.g., brand, satisfaction, etc.), we are often collecting input at two different points in time and then measuring the difference. While the concept is straightforward, the challenge is keeping everything as consistent as possible so we can say that any change we observe is NOT a result of how we conducted the survey.

Because we can be math nerds sometimes, take the following equation:

    Survey questions + Survey mode + Sample of respondents + Actual change = Observed results

A change to any part of the equation to the left of the equal sign will result in changes to your results. Our goal, then, is to keep all the survey components consistent so any change can be attributed to the thing you want to measure.

These include:

  1. Asking the same questions
  2. Asking them the same way (i.e., research mode)
  3. And asking them to a comparable group

Let’s look at each of these in more detail.

Asking the same questions

This may sound obvious, but it’s too easy to have slight (or major) edits creep into your survey. The problem is, we then cannot say if the change we observed between survey periods is a result of actual change that occurred in the market, or if the change was a result of the changing question (i.e., people interpreted the question slightly differently).

Should you never add or change a question? Not necessarily. If the underlying goal of that question has changed, then it may need to be updated to get you the best information going forward. Sure, you may not be able to compare it looking back, but getting the best information today may outweigh the goal of measuring change on the previous question.

If you are going to change or add questions to the survey, try to keep them at the end of the survey so the experience of the first part of the survey is similar.

Asking them the same way

Just as changing the actual question can cause issues in your tracker, changing how you’re asking them can also make an impact. Moving from telephone to online, from in-person to self-administered, and so on can cause changes due to how respondents understand the question and other social factors. For instance, respondents may give more socially desirable answers when talking to a live interviewer than they will online. Reading a question yourself can lead to a different understanding of the question than when it is read to you.

Similarly, training your data collectors with consistent instructions and expectations makes a difference for research via live interviewers as well. Just because the mode is the same (e.g., intercept surveys, in-class student surveys, etc.) doesn’t mean it’s being implemented the same way.

Asking a comparable group

Again, this may seem obvious, but small changes in who you are asking can impact your results. For instance, if you’re researching your customers, and in one survey you only get feedback from customers who have contacted your help line, while in another you survey a random sample of all customers, the two groups, despite both being customers, are not in fact the same. The ones who have contacted your help line likely had different experiences – good or bad – that the broader customer base may not have had.

~

So, that’s all great in theory, but we recognize that real life sometimes gets in the way.

For example, one of the key issues we’ve seen is with changing survey modes (i.e., Asking them the same way) and who we are reaching (i.e., Asking a comparable group). Years ago, many of our public surveys were done via telephone. It was quick and reached the majority of the population at a reasonable budget. As cell phones became more dominant and landlines started to disappear, while we could have held the mode constant, the group we were reaching would change as a result. Our first adjustment was to include cell phones along with landlines. This increased costs significantly, but brought us back closer to reaching the same group as before while also benefiting from keeping the overall mode the same (i.e., interviews via telephone).

Today, depending on the exact audience we’re trying to reach, we commonly combine modes, meaning we may do phone (landline + cell), mail, and/or online all for one survey. This increases our coverage (http://www.coronainsights.com/2016/05/there-is-more-to-a-quality-survey-than-margin-of-error/), though it does introduce other challenges, as we may have to ask questions a little differently between survey modes. But in the end, we feel it’s a worthy tradeoff to have a quality sample of respondents. When we have to change modes midway through a tracker, we work to minimize the possible downsides while drawing on each mode’s strengths to improve our sampling accuracy overall.


The Four Cornerstones of Survey Measurement: Part 1

Part One: Precision and Accuracy

Years ago, I worked in an environmental lab where I measured the amount of silt in water samples by forcing the water through a filter, drying the filters in an oven, then weighing the filters on a calibrated scale. I followed very specific procedures to ensure the results were precise, accurate, reliable, and valid; the cornerstones of scientific measurement.

As a social-science researcher today, I still use precision, accuracy, reliability, and validity as indicators of good survey measurement. The ability of decision makers to draw useful conclusions and make confident data-driven decisions from a survey depends greatly on these indicators.

To introduce these concepts, I’ll use the metaphor of figuring out how to travel from one destination to another, say from your house to a new restaurant you want to try. How would you find your way there? You probably wouldn’t use a desktop globe to guide you; it’s not precise enough. You probably wouldn’t use a map drawn in the 1600s; it wouldn’t be accurate. You probably shouldn’t ask a friend who has a horrible memory or sense of direction; their help would not be reliable. What you would likely do is “Google It,” which is a valid way most of us get directions these days.

This two-part blog will unpack the meaning of these indicators. Let’s start with precision and accuracy; part two will cover reliability and validity.

Precision

Precision refers to the granularity of data and estimates. Data from an open-ended question that asked how many cigarettes someone smoked in the past 24 hours would be more precise than data from a similar closed-ended question that listed a handful of categories, such as 0, 1-5, 6-10, 11-15, 16 or more. The open-ended data would be more precise because they would be more specific and more detailed. High precision is desirable, all things being equal, but there are often “costs” associated with increasing precision, such as increased time to take a survey, and the benefit of greater precision might not outweigh them.

Accuracy

Accuracy refers to the degree that the data are true. If someone who smoked 15 cigarettes in the past 24 hours gave the answer ‘5’ to the open-ended survey question, the data generated would be precise but not accurate. There are many possible reasons for this inaccuracy. Maybe the respondent truly believed they only smoked five cigarettes in the past 24 hours, or maybe they said five because that’s what they thought the researcher wanted to hear. Twenty-four hours may have been too long a time span to remember all the cigarettes they smoked, or maybe they simply misread the question. If they had answered “between 1 and 20,” the data would have been accurate, because they were true, but they wouldn’t have been very precise.

Trade-offs

Many times, an increase in precision can result in a decrease in accuracy, and vice versa. Decision makers can be confident in accurate data, but those data might not be useful. Precise data typically give researchers more utility and flexibility, especially in analysis. But what good are flexible data if there is little confidence in their accuracy? Good researchers strive for an appropriate balance between precision and accuracy, based on the research goals and desired outcomes.

Now that we have a better understanding of precision and accuracy, the second blog in this series will explore reliability and validity.


Human Experience (HX) Research

About a year ago, I stumbled upon a TEDx Talk by Tricia Wang titled “The Human Insights Missing from Big Data”. She eloquently unfurls a story about her experience working at Nokia around the time smartphones were becoming a formidable emerging market. Over the course of several months, Tricia Wang conducted ethnographic research with around 100 youth in China, and her conclusion was simple—everyone wanted a smartphone and would do just about anything to acquire one. Despite her exhaustive research, when she relayed her findings to Nokia, they were unimpressed and said that big data trends did not indicate there would be a large market for smartphones. Hindsight is 20/20.

One line in particular stuck out to me as I watched her talk— “[r]elying on big data alone increases the chances we’ll miss something, while giving us the illusion we know everything”. Big data offers companies and organizations plentiful data points that haphazardly paint a picture of human behavior and consumption patterns. What big data does not account for is the inherent ever-shifting, fickle nature of humans themselves. While big data continues to dominate quantitative research, qualitative research methods are increasingly shifting to account for the human experience. Often referred to as HX, human experience research aims to capture the singularity of humans and forces researchers to stop looking at customers exclusively as consumers. In human experience research, questions are asked to get at a respondent’s identity and emotions; for instance, asking how respondents relate to an advertising campaign instead of just how they react to the campaign.

The cultivation of HX research in the industry raises the question: what are the larger implications for qualitative research? Perhaps the most obvious answer is that moderators and qualitative researchers need to rethink how research goals are framed and how questions are posed to respondents to capture their unique experience. There are also implications for the recruiting process. The need for quality respondents is paramount in human experience research and will necessitate a shift in recruiting and screening practices. Additionally, qualitative researchers need to ensure that the best methodology is chosen to make respondents feel comfortable and vulnerable enough to share valuable insights with researchers.

Human experience research may just now be gaining widespread traction, but the eventual effects will ultimately reshape the industry and provide another tool for qualitative researchers to answer increasingly complex research questions for clients. At Corona, adoption of emerging methodologies and frameworks such as HX means we can increasingly fill knowledge gaps and help our clients better understand the humans behind the research.


When Data Collection Changes the Experience

One of the ongoing issues in any research that involves people is whether the data collection process is changing the respondents’ experience. That is, sometimes when you measure an attitude or a behavior, you may inadvertently change the attitude or behavior. For example, asking questions in a certain way may change how people would have naturally thought about a topic. Or if people feel like they are being observed by other people, they may modify their responses and behaviors.

We often think about this issue when asking about sensitive topics or designing research that is done face-to-face. Will people modify their responses if they are in front of a group of people? Or even just one person? For example, asking parents about how they discipline their children may be a sensitive topic, and they might omit some details when they are talking in front of other parents. Even in a survey, respondents may modify their responses to what they think is socially desirable (i.e., say what they think will make them seem like a good person to other people) or may modify them based on who specifically they think will read the responses. They may modify their responses depending on whether they trust whoever is collecting the data.

But beyond social desirability concerns and concerns about being observed, the research experience itself may not match the real-life experience. With surveys, the question order may not match how people naturally experience thinking about a topic. If a survey asks people whether they would be interested in a new service, their answer may change depending on what questions they have already answered. Did the survey ask people to think about competing services before or after this question? Did the survey have people list obstacles to using the new service before or after? Moreover, which of the question orders is most similar to the real-life experience of making a decision about the new service?

As discussed in a previous post, making a research experience as similar as possible to the real-life event can make it more likely that the results of the research will generalize to the real event. Collecting observational data or collecting data from third-party observers can also maintain the natural experience. For example, if you want to determine whether a program is improving classroom behavior in school, you might collect teachers’ reports of their students’ behavior (instead of, or in addition to, asking students to self-report). You could also record students’ behavior in the classroom and then code the behavior in the video. Technology has also made it easier to collect some data without changing the experience at all. For example, organizations can test different ad campaigns by doing an A/B test. Perhaps two groups of donors receive one of two possible emails soliciting donations. If two separate URLs are set up to track the response to each email, then differences in response can be compared without changing the experience of receiving a request for donations.
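
To illustrate the A/B idea, here is a minimal sketch of how the two response rates might be compared once each email’s unique URL has logged its responses. The counts are hypothetical, and a simple two-proportion z-test stands in for whatever analysis an organization would actually run.

    from math import sqrt
    from statistics import NormalDist

    def two_proportion_z_test(resp_a, sent_a, resp_b, sent_b):
        """Compare response rates for two email variants tracked by separate URLs."""
        p_a, p_b = resp_a / sent_a, resp_b / sent_b
        pooled = (resp_a + resp_b) / (sent_a + sent_b)
        se = sqrt(pooled * (1 - pooled) * (1 / sent_a + 1 / sent_b))
        z = (p_a - p_b) / se
        p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided
        return p_a, p_b, z, p_value

    # Hypothetical counts: 1,000 donors received each version of the email.
    rate_a, rate_b, z, p = two_proportion_z_test(resp_a=58, sent_a=1000, resp_b=41, sent_b=1000)
    print(f"Email A: {rate_a:.1%}  Email B: {rate_b:.1%}  z = {z:.2f}  p = {p:.3f}")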

One of my statistics professors used to say that no data are bad data. We just need to think carefully about what questions the data answer, which is impacted by how the data were collected. Recognizing that sometimes collecting data changes a natural experience can be an important step to understanding what questions your data are actually answering.


Feeding your market’s desire to participate in surveys

I got an online survey the other day from a public organization, and they wanted to know … something.  It doesn’t really matter for the purposes of this post.

I like to participate in surveys for a variety of reasons.  First, I’m naturally curious about what’s being asked, and why.  Maybe I can learn something.  Second, if it’s an issue that I care about, I can offer my opinion and have a voice.  Third, I’m a human being so I just like to share my opinion.  And finally, I have a professional interest in seeing surveys designed by other people, just to compare against my own design ideas.

With the possible exception of the last reason, I would hazard a guess that I’m not uncommon in these motivations.  Most people who respond to surveys do so because they’re curious, because they want a voice, and because they like sharing their opinion.

However, it takes time to complete a survey, and like everyone else, my time is precious.  I want it to be worth my time, which links back to my motivations.  Will participating really give me a voice?  Will I learn something from it?  Will anyone care what I say?  How will my information be used?  I want to trust that something good will come from my participation.

This brings me to another key motivator that is not often mentioned.  In addition to wanting good outcomes from my participation, I also want to be sure that nothing bad will come of it.  I want to trust that the people surveying me are ethical and will protect both me and the results of the survey.

Thinking about these forces, let’s go back to this survey that I received.  It was a topic that I care about, so I was interested to see what questions were being asked.  I could check ‘curiosity’ off my list.  It was from a legitimate organization, so I’d be willing to have a voice in their decisions and share my opinion with them.  I could check those two items off my list.

But then I took a second look at the survey.  It was being done for a legitimate organization, but with “assistance” from a consultant that I was unfamiliar with.  I pulled up Google and tried to look up the company.  Nothing.  They had no web site at all, and only a half-dozen Google hits that were mostly spam.

When I participate in a survey, I want to know that my responses aren’t going to be used against me.  There’s no crime in being a young company, but having no website gave me no way to tell whether this was a legitimate research firm.  Did these people know what they were doing?  Were they going to protect my information like a legitimate research company would, or would they naively turn over my responses to the client so they could target me for their next fundraising campaign?  I had no idea, since I couldn’t identify what the “consultant” actually did.

Beyond that, there was another problem.  I clicked on the link, and it took me to one of the low-cost survey hosting sites.  Based on my experience in the industry, I know that committed researchers don’t use these sites, and that they’re really tools for public input rather than legitimate research.  (The sampling is usually wrong, and there’s often no protection against “stuffing the ballot box.”)

I declined to participate in that survey, which made me sad.  I suspected that the end client had noble motivations, but in the end they didn’t meet my criteria for participating.

Given our 18 years in the business, we at Corona are often looking at changes on the margins of surveys.  What can we do in our wording, our delivery mode, or our timing to maximize meaningful participation?  This was a good reminder that we also have to be careful to satisfy the much more basic and powerful forces that aid or damage participation.  Before anything else, the things that survey researchers have to do are:

  1. Make people feel safe in responding. You must clearly identify who is involved in the study, and make available a web site that clearly shows oversight by a legitimate market research firm.  (This is even more important for qualitative research, which requires a bigger time commitment by respondents.) View Corona’s research privacy policy and Research participant information.
  2. Confirm to people that their opinion is important. Maybe this is my bias as a person in the industry, but if I get a cheaply designed survey, on software that is used for entertainment polls, from some consultant who is virtually unknown by the World Wide Web, it tells me that the project isn’t a priority for the client.  If the research is important, give your respondents that message by your own actions.
  3. Confirm to people that the survey gives them a voice. You can overtly say this, but you also have to “walk the walk” by giving people confidence.  One thing that I’ve noticed more and more is the use of surveys as marketing tools rather than research tools.  Sending out frequent cheaply produced surveys as a means of “engaging our audience” is not a good idea if the surveys aren’t being used for decision making.  People figure out pretty quickly when participation is a waste of their time, and then they’re less likely to participate when you really need their input.

All in all, we in the research industry talk a lot about long-term declines in participation rates, but many of us are contributing to that by ignoring the powerful motivations that people have to participate in surveys.  People should WANT to participate in our surveys, and we should support that motivation.  We can do that by surveying them only when it’s important, by showing a high level of professionalism and effort in our communications with them, and by helping to reassure them that we’re going to both protect them and carry forward their voice to our clients.


Tuft & Needle: Incredible Mattresses. Incredible research?

If you have ever received a proposal from Corona Insights regarding customer research, you may have seen this line:

“We believe that surveying customers shouldn’t lower customer satisfaction.”

We take the respondent’s experience into account, from the development of our approach through the implementation of the research (e.g., survey design, participant invites, etc.), even in our choice of incentives. We work with our clients on an overall communications plan and discuss with them whether we need to contact all customers or only a small subset, sparing the rest from another email and request. For some clients, we even program “alerts” to notify them of customers that need immediate follow-up.

As such, I’m always interested to see how other companies handle their interactions when it comes to requesting feedback. Is it a poorly thought out SurveyMonkey survey? Personalized phone call? Or something in between?

Recently, I was in the market for a new mattress and wanted to try one of the newer entrants shaking up the mattress industry. I went with Tuft & Needle, and while I won’t bore you with details of the shopping experience or delivery, I found the post-purchase follow-up worth sharing (hopefully you’ll agree).

I received an email that appeared to come directly from one of the co-founders. It was a fairly stock email, but without overdone marketing content or design (and it is easy enough to mask an email to make it appear to come from the founder). It contained one simple request:

“If you are interested in sharing, I would love to hear about your shopping experience. What are you looking for in a mattress and did you have trouble finding it?”

The request made clear that I could simply hit reply to answer. So I did.

I assumed that was it, or that maybe I’d get another form response, but I actually got a real reply. One that was clearly not stock (or at least not 100% stock – it made specific references to my response). It wasn’t the co-founder who had responded, but another employee; still impressive, in my opinion.

So, what did they do right? What can we take away from this?

  • Make a simple request
  • Make it easy to reply to
  • Include a personalized acknowledgement of the customer’s responses

Maybe you think this is something only a start-up would (or should) do, but what if more companies took the time to demonstrate such great service, whether in their research or their everyday customer service?


Based on my experience…

This post was born from a conversation I had with a coworker earlier this week. I wanted to talk about research methodology and design, and how a client relying solely on what they know – their own experience and expertise – can result in subpar research.

Quantitative and qualitative methods have different strengths and weaknesses, many of which we have blogged about before. The survey is the most frequently used quantitative methodology here at Corona, and it’s an incredibly powerful tool when used appropriately. However, one of the hallmarks of a quantitative, closed-ended survey is that there is little room for respondents to tell us their thoughts – we must anticipate possible answers in question design. When designing these questions, we rely on our client’s and our own experience and expertise.

We know how much clients appreciate the value of a statistically valid survey – being able to see what percentage of customers believe or perceive or do something, compare results across different subpopulations, or precisely identify what “clicks” with customers is very satisfying and can drive the success of a campaign.  But the survey isn’t always the right choice.

Sometimes, relying on experience and expertise is not enough, perhaps because you can’t predict exactly what responses customers might have or, despite the phrase being somewhat cliché, sometimes you don’t know what you don’t know. This is why I advocate for qualitative research.

Qualitative research is not statistically valid. Its results are not really meant to be applied to your entire population of customers or even a segment of them. However, it is incredibly powerful when you’re trying to learn more about how your customers are thinking about X, Y, or Z. Feeling stuck trying to brainstorm marketing materials, find a way to better meet your customers’ needs, or come up with a solution to a problem you’re having? Your customers probably won’t hand you a solution (though you never know), but what they say can spark creativity, aid in brainstorming, and become a valuable input to whatever you develop.

In the end, both serve a significant role in research as a whole. Personally, I’m a big supporter of integrating both into the process. Qualitative research can bridge the gap between your customers and the “colder” quantitative research. It can help you better understand what your customers are doing or thinking and therefore help you better design a quantitative survey that enables you to capture robust data. Alternatively, qualitative research can follow a quantitative survey, allowing you to explore more of the “why” behind certain survey results. In the end, I simply urge you not to underestimate the value of qualitative research.


Phenomenology: One way to Understand the Lived Experience

How do workers experience returning to work after an on-the-job injury? How does a single-mother experience taking her child to the doctor? What is a tourist’s experience on his first visit to Colorado?

These research questions could all be answered by phenomenology, a research approach that describes the lived experience. While not a specific method of research, phenomenology is a set of assumptions that guide research tactics and decisions. Phenomenological research is uncommon in traditional market research, but that may be due to little awareness of it rather than a lack of utility. (However, UX research, which follows many phenomenological assumptions, is quickly gaining popularity.) If you have been conducting research but feel like you are no longer discovering anything new, a phenomenological approach may shed some fresh insight.

Phenomenology is a qualitative research approach. It derives perspectives defined by experience and context, and the benefit of this research is a deeper and/or broader understanding of those perspectives. To ensure perspectives are revealed, rather than prescribed, phenomenology avoids abstract concepts. The research doesn’t ask participants to justify their opinions or defend their behaviors. Rather, it investigates experiences in the respondents’ own terms, in an organic way, assuming that people do not share the same interpretation of words or labels.

In market research, phenomenology is usually explored through unstructured, conversational interviews. Additional data, such as observations of behavior (e.g., following a visitor’s path through a welcome center), can supplement the interviews. Interview questions typically do not ask participants to explain why they do, feel, or think something. These “why” questions can cause research participants to respond in ways that they think the researcher wants to hear, which may not be what’s in their head or heart. Instead, phenomenology researchers elicit stories from research participants by asking questions like “Can you give me an example of when you…?” or “What was it like when…?” This way, the researcher seeks and values context equally with the action of the experience.

The utility of this type of research may not be obvious at first. Project managers and decision makers may conclude the research project with a frustrating feeling of “now what?” This is a valid downside of phenomenological research. On the other hand, this approach has the power to make decision makers and organization leaders rethink basic assumptions and fundamental beliefs. It can reveal how reality manifests in very different ways.

A phenomenology study is not appropriate in all instances. But it is a niche option in our research arsenal that might best answer the question you are asking. As always, which research approach you use depends on your research question and how you want to use the results.