RADIANCE BLOG

Category: Market Research

Considerations for researching your members

Corona takes many factors into consideration when designing a research plan for our clients. In short, market research means asking the right people the right questions in the right manner, and then conducting the right analyses. Here are a few key considerations when conducting research with your membership.

Research mode

  • What type of contact information do you have for members? And how are they used to interacting with you? For many, this will be email, but it may also include phone, mail, or even in-person research at conferences. The goal is to select the mode(s) that will reach all, or at least the greatest number of members possible.
  • Quant vs. qual? Are you trying to measure opinions (quant) or do exploratory research or dig deeper into an issue (qual)?
  • Do you need multiple touch points? Announcements, invites, and reminders? Online with telephone or mail reminders?

Goals

  • What are your goals and expected outcomes? It’s often easy to jump in and start writing survey questions or qualitative prompts. It’s often harder to think about bigger-picture goals and how you will use the information you gain. Start with your goals to ensure the research turns out to be successful.

Sampling

  • Sample all or some? For large organizations, you may not need to survey every member to have valid, representative results. You may want to give everyone an opportunity to respond, or you may decide to survey only a random selection to minimize the number of members contacted.
  • Can you append data for actual behavior? While you can always ask about membership behavior (e.g., length of membership, conferences attended, etc.), if you already have that information, you can simply append it to the survey results. This yields more accurate data and requires fewer questions of respondents. (See the sketch after this list.)
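
To make the sampling and appending ideas concrete, here is a minimal Python sketch (using pandas). The roster, field names, and sample sizes are hypothetical and purely illustrative; your membership database will look different.

```python
import pandas as pd

# Hypothetical member roster; fields are illustrative only.
members = pd.DataFrame({
    "member_id": range(1, 5001),
    "join_year": [2010 + (i % 12) for i in range(5000)],
    "conferences_attended": [i % 4 for i in range(5000)],
})

# Sample some, not all: draw a simple random sample of 800 members to invite.
invited = members.sample(n=800, random_state=42)

# ...field the survey, then load the returned answers keyed by member ID...
responses = pd.DataFrame({
    "member_id": invited["member_id"].head(300).to_numpy(),  # pretend 300 responded
    "satisfaction": [4] * 300,                                # placeholder answers
})

# Append actual behavior from the roster instead of asking extra questions.
analysis = responses.merge(
    members[["member_id", "join_year", "conferences_attended"]],
    on="member_id", how="left",
)
print(analysis.head())
```

The behavioral fields (join year, conferences attended) come from records you already hold, so the questionnaire itself stays shorter.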

Frequency

  • How often should you conduct member research? Annual research makes the most sense for some organizations, while others may go years between efforts. There is no single right answer, but in general, regular intervals make the most sense, and you will want to take into consideration the rate of change within the organization. If membership turns over regularly, or you’re in a fast-paced industry, more frequent research may be needed to keep a pulse on members.

Incentives

  • Should you offer an incentive? Generally speaking, an incentive will increase response rates. Furthermore, through our own testing, Corona has seen that the make-up of respondents includes a broader mix of people when an incentive is offered. Incentives serve both to encourage response and to recognize respondents’ time and effort in completing the survey.
  • What type of incentive? While there are many options, the incentive should have broad appeal so as not to skew the results by being highly appealing to one segment and not at all appealing to another. Prize drawings, small token gift cards, and/or additional member benefits are all common options.

What other questions or concerns have you had about conducting research with your members?


Preferred membership benefits and how they can change over time

So far in this series on membership organizations, we’ve discussed communications, segmenting, and the importance of personal benefits. Here we combine the latter two and look at how perceptions of benefits change over time. The reasons someone joins fresh out of college or at the start of a new career are different from the reasons someone continues to be a member as they near retirement.

This, in fact, is another benefit of segmenting your membership, both in practice and in evaluating results from any membership research. By looking at how results vary by age, time in the industry or their career, and/or time as a member, you can tailor services and messaging to each group.

For example, we’ve seen member preferences differ in areas such as:

  • Resource access
  • Skill development
  • Career development
  • Broader industry efforts

Even if your organization is more homogeneous, such as a young professionals group, understanding where members are in their careers will help you ensure the organization remains relevant to them.

What other factors have you seen vary by member tenure?


What type of benefits do members care about most?

Membership organizations exist to serve their members. So it’s no surprise that Corona is often tasked with uncovering what benefits members find most valuable and what new benefits members are seeking. Similarly, we often conduct research with non-members to measure their awareness of the organization’s benefits and whether they are a “fit” for them.

The benefits Corona has examined have run the spectrum, from career services to professional development to lobbying and more. While benefits are unique to each organization, we have observed some broad trends, such as:

Benefits that focus on the individual consistently rate higher than benefits that focus on the profession or industry.

This can vary by segment and organization, but in short, a benefit that focuses on personal gain (e.g., networking, career, skills, etc.) will rate higher than one that focuses on the profession or industry more broadly (e.g., funding research, representing members’ interests before legislatures, etc.). This isn’t to say the broader benefits aren’t important, but that they have relatively less appeal than personal benefits.

This makes intuitive sense in many ways. While people will support the broader industry benefits, they first want their own needs met. People ask what’s in it for them, and the clearest answers are the benefits from which they can see a direct advantage. While people can often see the benefits of supporting their broader industry, it is not as direct or immediate a line from organizational offering to personal benefit.

Have you seen this with your own organization? What benefits have you found to be especially valuable among members?

~

As a side note, when Corona researches benefits, we don’t only research the value of each benefit, but also how the organization is performing in providing that benefit and whether the organization is seen as the right entity to provide it. Weighing all of these factors together helps our clients make a more informed decision about which benefits to offer.

This is the third post in a series about membership organizations. Corona has worked extensively with membership organizations and is sharing some of its lessons learned over the years here. Follow us on Facebook, LinkedIn, or Twitter to get all updates, and sign up for our quarterly newsletter here.


Measuring in Multiple Dimensions

The shortest route between any two points is a straight line.

Don’t dance around.

Get to the point.

Sometimes being indirect is the best route.

We hear such sayings every day. Indirectness is often seen as a liability, circuitousness as a weakness. But in our work, the direct route is not always the best strategy.

Understanding people and behaviors requires us to understand the context in which they live, think, and make decisions, and that’s a difficult thing. In my 18 years of work as a market researcher, one thing that I’ve learned is that people have such a wide variety of perspectives on life that there’s no way I can take them all into account when conducting research. The best I can do is to piece together the patterns that I see in the data and use that to enlighten myself and my client.

In order to do that, sometimes you can’t just ask one question. You have to triangulate. This may involve asking different types of questions that circle back to the same general topic in different ways. It’s expensive, relatively speaking, because it keeps you from asking other questions in that same survey space, but it can be valuable.

However, it is indeed a luxury to be able to engage in this practice. So we look for other approaches where possible.

As a variant on that approach, one of my favorite tricks of the trade is to ask complementary questions, questions that help you understand the context of a particular issue. It may not give you different perspectives, but it’s an efficient way to add value in multiple ways.

For example, if we ask about desire for various recreational opportunities, we simultaneously ask how important each is to respondents’ quality of life. This gives us three measures for the price of two: desire, importance, and the combination of the two. It helps us understand the issue and its context. For mental health issues, we may ask how common they are in households, but also what their impact is. Again, we get three items of information for the price of two.
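
To show how the "three measures for the price of two" idea plays out in analysis, here is a small Python sketch with made-up responses; the 1-to-5 scales and the cut-off for "high" are assumptions for illustration, not from an actual Corona survey.

```python
import pandas as pd

# Hypothetical answers: desire for a recreational amenity and its importance
# to the respondent's quality of life, each on a 1-5 scale.
df = pd.DataFrame({
    "desire":     [5, 4, 2, 5, 1, 3, 4, 2],
    "importance": [4, 5, 2, 5, 1, 2, 4, 3],
})

# Measures 1 and 2: each question on its own.
print(df.mean())

# Measure 3: the combination, e.g., the share who both want the amenity and
# consider it important to their quality of life.
both_high = ((df["desire"] >= 4) & (df["importance"] >= 4)).mean()
print(f"High desire AND high importance: {both_high:.0%}")
```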

More importantly, this gives us a context for measuring opinions and issues. By asking related questions in two or three different dimensions, we can measure issues with a richer understanding.

I hope you found this post interesting, but also informative and important.


How to measure what people want

Recently, after an interview for a project, some of us at Corona had a discussion about whether or not a survey would be useful for the project. Like a lot of our potential clients, this one was interested in what new changes the public might want in their organization. And at first, this seems like a great fit for a survey: just ask people what they want. However, directly asking people what they might want can sometimes backfire for a number of reasons:

  1. Various psychologists have found that people are not always great at predicting their emotional response to something (e.g., will X make me happy?). Part of the reason is that people don’t always do a good job of imagining what it will actually be like.
  2. People often think that they want more choices, but this is generally not the case.
  3. Depending on the topic, people might feel like there is a “socially correct” option and might choose that one instead of what they really want.
  4. I think in general, we don’t always know what we want, especially when the possibilities are vast. And sometimes what we want may not come through in the survey questions. Sometimes experiencing a change is very different than reading about a change, especially if you’re trying to gauge whether you will like the change or not.

In some situations, it may be more useful to try to measure behavior instead of opinions when trying to determine what people want. While it is sometimes difficult to do this, the data can be very rich and useful. One interesting approach is to temporarily make a change and record what happens. For example, New York City first made Times Square pedestrian only as a test to see what the impact might be. It was initially a hard sell because people were thinking about what the city would lose—one of the main thoroughfares. But there were lots of positives to making it pedestrian only—enough to make the change permanent. When you survey people about potential changes, sometimes it is easier to think about what you lose in the change, as opposed to what you might gain. And that can impact how people respond to the survey.

A pop-up shop is another example of this. A shop can appear temporarily for a few days or a month to test whether a more permanent location is a good idea. Even if your online shoppers say in a survey that they would visit a physical location, a pop-up store will let you know whether that actually happens.

So the next time your organization is considering making a change, it might be useful to think about whether a survey is going to be the most useful way to decide what to change or whether measuring behaviors as part of a test might be a better approach.


Measuring Reactions to Your Ideas

Market research can be painful sometimes.  You may have poured your heart and soul into an idea and feel it’s really good, only to put it in front of your customers and hear all the things they hate about it.  But it’s better to know in advance than to find out after you’ve spent a ton of money and risked your brand equity for your idea.

It may not be as sexy as measuring customer satisfaction, prioritizing product features, or helping you optimize your pricing strategies, but sometimes market research is simply necessary to make sure that you haven’t overlooked something important when developing a product, service, or marketing campaign.  No matter how much we try to put ourselves in the shoes of our customers, it is impossible to be 100% sure that our own backgrounds and experiences let us fully understand the perspectives of customers who come in a huge variety of shapes and sizes.

In our own work, we frequently work with advertising agencies to help inform and evaluate ad campaigns and media before launch.  Considering the enormous amount of money required to reach a wide audience (through television, radio, online ads, etc.), it just makes sense to devote a small part of your budget to running the campaign by a variety of people in your audience to make sure you know how people might react.

In some cases, what you learn might be fairly minor.  You might not have even noticed that your ad lacks diversity.  You might not have noticed that your ad makes some people feel uncomfortable.  Or perhaps, your own world view has given you a blind spot to the fact that your ad makes light of sensitive issues, such as religion, major tragedies, or even date rape.

Unfortunately, we saw an example of this issue in Denver recently, where a local coffee chain’s attempt at humor infuriated the local neighborhood with a sign that read, “Happily gentrifying the neighborhood since 2014.”  From the perspective of someone less engaged in the neighborhood, you can understand what they were getting at – that good coffee was a sign of progress in the natural development of a thriving city.

However, the statement completely misses the fact that gentrification often results in people being forced from the homes they have lived in for years and the destruction of relationships across an entire neighborhood.  In this particular case, the coffee shop was located directly in the middle of a neighborhood that has been struggling with gentrification for the past decade or more, and tensions were already high.  The ad was like throwing gasoline on a fire and has resulted in protests, graffiti, and even temporary closure of the store.

It’s certainly easy to blame the company, the ad agency, and anyone else that didn’t see that this campaign would be a bad idea.  However, the reality is that all of us have our blind spots to sensitive issues, and no matter how much we feel like we understand people of different backgrounds, there will always be a chance you’ve missed something.

So, please, for the sake of your own sanity and those of your customers, do some research before you launch a marketing campaign.  At a minimum, run your ad by some people who might see it just to see how they react.  And if you want a more robust evaluation of your campaign, which can help to ensure that your advertising dollars have the biggest impact possible, we can probably help.


How do you measure the value of an experience?

When I think about the professional development I did last week, I would summarize it thusly: an unexpected, profound experience.

I was given the opportunity to attend RIVA moderator training and I walked away with more than I ever could have dreamed I would get. Do you know that experience where you think back to your original expectations and you realize just how much you truly didn’t understand what you would get out of something? That was me, as I sat on a mostly-empty Southwest plane (156 seats and yet only 15 passengers) flying home. While you can expect a RIVA blog to follow, I was struck by the following thought:

What does it mean to understand the impact your company, product, or service has on your customers?

I feel like I was born and raised to think quantitatively. I approach what I do with as much logic as I can (sometimes this isn’t saying much…) When I think about measuring the impact a company, product, or service has on its customers, my mind immediately jumps to numbers – e.g., who (demographically) uses it and how satisfied they are with it. But am I really measuring impact? I think yes and no. I’m measuring an impersonal impact, one that turns people into consumers and percentages. The other kind of impact, largely missed in quantitative research, is the impact on the person.

If I were to fill out a satisfaction or brand loyalty survey for RIVA, I would almost be unhappy that I couldn’t convey my thoughts and feelings about the experience. I don’t want them to know just that I was satisfied. I want them to understand how profound this experience was for me. When they talk to potential customers about this RIVA moderator class, I want them to be equipped with my personal story. If they listen and understand what I say to them, I believe they would be better equipped to sell their product.

This is one of the undeniable and extremely powerful strengths of qualitative research. Interviews, focus groups, anything that allows a researcher to sit down and talk to people creates some of the most valuable data there is. We can all think of a time when a friend or family member had such a positive experience with a company, product, or service that they just couldn’t help but gush about it. Qualitative research ensures that valuable feedback like this is captured and preserved. If you want to truly understand who is buying your product or using your service, I cannot stress the importance of qualitative research enough.


Breaking down the wall between quant and qual

Recently we had a project involving a large survey with numerous open-end questions. Taking the divide-and-conquer approach, it was all hands on deck to quickly code the thousands of responses. As a qualitative researcher, I found that coding survey responses can feel like a foreign process, and I often caught myself overthinking both my codes and the nuanced content of responses. When I had finished, I typed up a summary of my findings and even pulled out a few ‘rock star’ quotes that illustrated key trends and takeaways. The experience left me wondering: why is content analysis of survey open-ends not more common? It is qualitative data, after all.

Simply put, the purpose of content analysis is to elicit the themes or content in a body of written or other media. Like many qualitative approaches, it does not produce numerical measurements; rather, content analysis identifies patterns and trends in the data. Incorporating qualitative analysis techniques such as content analysis into traditionally quantitative studies better contextualizes survey results and produces greater insights.

Imagine a classic branding survey where participants are asked sentiment questions such as ‘What is your impression of Brand X?’ Often, these questions are designed as Likert scales with defined categories (e.g., very positive, somewhat positive, neutral, etc.). While this provides general insight into attitudes toward and impressions of the brand, it does not necessarily highlight the broader insights or implications of the research findings. When Corona does a brand survey, we regularly ask an open-end question for qualitative content analysis as a follow-up, such as ‘What specifically about Brand X do you find appealing?’ or, conversely, ‘What specifically about Brand X do you find unappealing?’ Including a qualitative follow-up provides additional framing for the quantitatively designed Likert scale question and augments insights. Additionally, if the survey shows sizeable negative sentiment toward a brand, incorporating qualitatively designed open-ends can uncover issues or problems that were unknown prior to the research, and perhaps outside of the original research scope.
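
As a loose illustration of what coding such open-ends can look like, here is a minimal Python sketch. The responses, the codebook, and the theme labels are invented for the example; in a real project the codes emerge from reading the responses themselves rather than being fixed in advance.

```python
from collections import Counter

# Hypothetical open-end follow-ups paired with the closed-ended Likert answer.
responses = [
    ("very positive",     "love the customer service and fast shipping"),
    ("somewhat positive", "good prices but the website is confusing"),
    ("somewhat negative", "shipping was slow and support never answered"),
    ("very negative",     "confusing website, slow shipping, poor support"),
]

# A toy codebook mapping themes to trigger words.
codebook = {
    "service":  ["service", "support"],
    "shipping": ["shipping"],
    "price":    ["price", "prices"],
    "website":  ["website", "confusing"],
}

# Tally which themes appear within each sentiment category.
theme_counts = Counter()
for rating, text in responses:
    for theme, terms in codebook.items():
        if any(term in text.lower() for term in terms):
            theme_counts[(rating, theme)] += 1

for (rating, theme), n in sorted(theme_counts.items()):
    print(f"{rating:17s} {theme:9s} {n}")
```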

Historically, quantitative and qualitative research have been bifurcated, both in design and in analysis. However, hybrid approaches such as the one described above are quickly gaining ground, and their true value is being realized. Based on our experience here at Corona, for content analysis to be done effectively in a quantitative-dominant survey, it is best to decide on it early in the research design phase.

A few things to keep in mind when designing open-ended questions for content analysis:

  • Clearly define research objectives and goals for the open-end questions that will be qualitatively analyzed.
  • Construct questions with these objectives in mind and incorporate phrasing that invites nuanced responses.
  • Plainly state your expectations for responses and, if possible, institute character minimums or maximums as needed.

In addition to the points mentioned above, it is important to note that there are some avoidable pitfalls. First off, this method is best suited for surveys with a smaller sample size, preferably under 1,000 respondents. Also, the survey itself must not be too time intensive. It is well known that surveys extending beyond 15 to 20 minutes often lead to participants dropping out or not fully completing the survey. Keep these time limits in mind and be selective about the number of open-ends to include. Lastly, it is important to keep participants engaged in the survey. If multiple open-ends are incorporated into the survey, phrase the questions differently or ask them about different topics to keep participants from feeling as though they are repeating themselves.

In an ideal world, quantitative and qualitative approaches could meld together seamlessly, but we all know this isn’t an ideal world. Time constraints, budgets, and research objectives are just a handful of the reasons why a hybrid approach such as the one discussed here may not be the right choice. When it is, though, hybrid approaches give participants an opportunity to think more deeply about the topic at hand and can also create a sense of active engagement between the participant and the end-client. In other words, they feel like their voice is being heard, and the end-client gains a better understanding of their customers.


The Four Cornerstones of Survey Measurement: Part 2

Part Two: Reliability and Validity

The first blog in this series argued that precision, accuracy, reliability, and validity are key indicators of good survey measurement.  It described precision and accuracy and how the researcher aims to balance the two based on the research goals and desired outcome.  This second blog will explore reliability and validity.

Reliability

In addition to precision and accuracy (and non-measurement factors such as sampling, response rate, etc.), the ability to be confident in findings relies on the consistency of survey responses. Consistent answers to a set of questions designed to measure a specific concept (e.g., an attitude) or behavior are probably reliable, although not necessarily valid.  Picture an archer shooting arrows at a target, each arrow representing a survey question and where it lands representing the answer. If the arrows consistently land close together, but far from the bulls-eye, we would still say the archer was reliable (i.e., the survey questions were reliable). But being far from the bulls-eye is problematic; it means the archer didn’t fulfill his intentions (i.e., the survey questions didn’t measure what they were intended to measure).

One way to increase survey measurement reliability (specifically, internal consistency) is to ask several questions that are trying to “get at” the same concept. A silly example: Q1) How old are you? Q2) How many years ago were you born? Q3) For how many years have you lived on Earth? If the answers to these three questions are the same, we have high reliability.
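
The post doesn’t name a statistic, but one common way to quantify this kind of internal consistency is Cronbach’s alpha, which compares the variability of the individual questions to the variability of their sum. A minimal sketch, using made-up answers to the three age questions above:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x k_items) matrix of answers."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)      # variance of each question
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of the summed score
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical answers to Q1-Q3 from four respondents (rows).
answers = np.array([
    [34, 34, 34],
    [52, 52, 51],
    [29, 30, 29],
    [61, 61, 61],
])
print(f"Cronbach's alpha: {cronbach_alpha(answers):.2f}")  # near 1.0 = highly consistent
```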

The challenge with achieving high internal reliability is the lack of space on a survey to ask similar questions. Sometimes, we ask just one or two questions to measure a concept. This isn’t necessarily good or bad; it just illustrates the inevitable trade-offs when balancing all indicators.  To quote my former professor Dr. Ham, “Asking just one question to measure a concept doesn’t mean you have measurement error, it just means you are more likely to have error.”

Validity

Broadly, validity represents the accuracy of generalizations (not the accuracy of the answers). In other words, do the data represent the concept of interest? Can we use the data to make inferences, develop insights, and recommend actions that will actually work? Validity is the most abstract of the four indicators, and it can be evaluated on several levels.

  • Content validity: Answers from survey questions represent what they were intended to measure.  A good way to ensure content validity is to precede the survey research with open-ended or qualitative research to develop an understanding of all top-of-mind aspects of a concept.
  • Predictive or criterion validity: Variables should be related in the expected direction. For example, ACT/SAT scores have been relatively good predictors of how students perform later in college: the higher the score, the more likely the student is to do well.  Therefore, the questions asked on the ACT/SAT, and how they are scored, have high predictive validity. (A simple sketch of this idea follows after this list.)
  • Construct validity: There should be an appropriate link between the survey question and the concept it is trying to represent.  Remember that concepts, and constructs, are just that, they are conceptual. Surveys don’t measure concepts, they measure variables that try to represent concepts.  The extent that the variable effectively represents the concept of interest demonstrates construct validity.
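
As a simple illustration of checking predictive (criterion) validity, the sketch below correlates hypothetical admissions test scores with a later outcome. The numbers are invented; the point is that a strong positive correlation is what high predictive validity looks like in the data.

```python
import numpy as np

# Hypothetical data: admissions test scores and later first-year college GPA.
test_scores = np.array([21, 24, 27, 29, 31, 33, 34, 36])
college_gpa = np.array([2.4, 2.7, 2.9, 3.1, 3.0, 3.4, 3.5, 3.8])

# Criterion validity is often assessed by correlating the measure with the
# outcome it is supposed to predict.
r = np.corrcoef(test_scores, college_gpa)[0, 1]
print(f"Correlation between test score and later GPA: {r:.2f}")
```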

High validity suggests greater generalizability; measurements hold up regardless of factors such as race, gender, geography, or time.  Greater generalizability leads to greater usefulness because the results have broader use and a longer shelf-life.  If you are investing in research, you might as well get a lot of use out of it.

This short series described four indicators of good measurement.  At Corona Insights, we strive to maximize these indicators while realizing and balancing the inevitable tradeoffs. Research survey design is much more than a list of questions; it’s more like a complex and interconnected machine, and we are the mechanics working hard to get you back on the road.


Keeping it constant: 3 things to keep in mind with your trackers

When conducting a program evaluation or customer tracker (e.g., brand, satisfaction, etc.), we are often collecting input at two different points in time and then measuring the difference. While the concept is straightforward, the challenge is keeping everything as consistent as possible so we can say that the actual change is NOT a result of how we conducted the survey.

Because we can be math nerds sometimes, take an equation along these lines:

Questions asked + How they are asked (mode) + Who is asked (sample) + Actual change in the market = Your survey results

A change to any part of the equation to the left of the equal sign will show up as a change in your results. Our goal, then, is to keep all the survey components consistent so any change can be attributed to the thing you actually want to measure.

These include:

  1. Asking the same questions
  2. Asking them the same way (i.e. research mode)
  3. And asking them to a comparable group

Let’s look at each of these in more detail.

Asking the same questions

This may sound obvious, but it’s too easy to have slight (or major) edits creep into your survey. The problem is, we then cannot say if the change we observed between survey periods is a result of actual change that occurred in the market, or if the change was a result of the changing question (i.e., people interpreted the question slightly differently).

Should you never add or change a question? Not necessarily. If the underlying goal of that question has changed, then it may need to be updated to get you the best information going forward. Sure, you may not be able to compare it looking back, but getting the best information today may outweigh the goal of measuring change on the previous question.

If you are going to change or add questions to the survey, try to keep them at the end of the survey so the experience of the first part of the survey is similar.

Asking them the same way

Just as changing the actual question can cause issues in your tracker, changing how you’re asking them can also make an impact. Moving from telephone to online, from in-person to self-administered, and so on can cause changes due to how respondents understand the question and other social factors. For instance, respondents may give more socially desirable answers when talking to a live interviewer than they will online. Reading a question yourself can lead to a different understanding of the question than when it is read to you.

Similarly, training your data collectors with consistent instructions and expectations makes a difference for research via live interviewers as well. Just because the mode is the same (e.g., intercept surveys, in-class student surveys, etc.) doesn’t mean it’s being implemented the same way.

Asking a comparable group

Again, this may seem obvious, but small changes in who you are asking can impact your results. For instance, say you’re researching your customers, and on one survey you only get feedback from customers who have contacted your help line, while on another you survey a random sample of all customers. The two groups, despite both being customers, are not in fact the same. The ones who have contacted your help line likely had different experiences – good or bad – that the broader customer base may not have.

~

So, that’s all great in theory, but we recognize that real life sometimes gets in the way.

For example, one of the key issues we’ve seen involves changing survey modes (i.e., asking them the same way) and who we are reaching (i.e., asking a comparable group). Years ago, many of our public surveys were done via telephone. It was quick and reached the majority of the population at a reasonable budget. As cell phones became more dominant and landlines started to disappear, we could have held the mode constant, but the group we were reaching would have changed as a result. Our first adjustment was to include cell phones along with landlines. This increased costs significantly, but brought us back closer to reaching the same group as before while keeping the overall mode the same (i.e., interviews via telephone).

Today, depending on the exact audience we’re trying to reach, we commonly combine modes, meaning we may use phone (landline + cell), mail, and/or online all for one survey. This increases our coverage (https://www.coronainsights.com/2016/05/there-is-more-to-a-quality-survey-than-margin-of-error/), though it does introduce other challenges, as we may have to ask questions a little differently between survey modes. But in the end, we feel it is a worthy tradeoff to have a quality sample of respondents. When we have to change modes midway through a tracker, we work to diminish the possible downsides while drawing on the strengths of each mode to improve our sampling accuracy overall.