RADIANCE BLOG

Category: Surveying Surveys

The Four Cornerstones of Survey Measurement: Part 2

Part Two: Reliability and Validity

The first blog in this series argued that precision, accuracy, reliability, and validity are key indicators of good survey measurement.  It described precision and accuracy and how the researcher aims to balance the two based on the research goals and desired outcome.  This second blog will explore reliability and validity.

Reliability

In addition to precision and accuracy (and non-measurement factors such as sampling, response rate, etc.), the ability to be confident in findings relies on the consistency of survey responses. Consistent answers to a set of questions designed to measure a specific concept (e.g., attitude) or behavior are probably reliable, although not necessarily valid. Picture an archer shooting arrows at a target, each arrow representing a survey question and where it lands representing the question’s answer. If the arrows consistently land close together, but far from the bull’s-eye, we would still say the archer was reliable (i.e., the survey questions were reliable). But being far from the bull’s-eye is problematic; it means the archer didn’t fulfill his intentions (i.e., the survey questions didn’t measure what they were intended to measure).

One way to increase survey measurement reliability (specifically, internal consistency) is to ask several questions that are trying to “get at” the same concept. A silly example: Q1) How old are you? Q2) How many years ago were you born? Q3) For how many years have you lived on Earth? If the answers to these three questions agree, we have high reliability.
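For readers who like to see the mechanics, here is a minimal sketch of one common way to quantify internal consistency (Cronbach’s alpha), applied to made-up answers to those three age questions; the data and column names are hypothetical, and this is only one of several ways to assess reliability.

```python
import pandas as pd

def cronbachs_alpha(items: pd.DataFrame) -> float:
    """Estimate internal consistency for a set of survey items
    intended to measure the same concept."""
    k = items.shape[1]                              # number of items
    item_variances = items.var(axis=0, ddof=1)      # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical responses to three questions meant to measure the same thing (age)
responses = pd.DataFrame({
    "q1_age": [34, 52, 28, 45],
    "q2_years_since_birth": [34, 51, 28, 46],
    "q3_years_on_earth": [33, 52, 29, 45],
})

print(f"Cronbach's alpha: {cronbachs_alpha(responses):.2f}")  # near 1.0 = highly consistent
```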

The challenge with achieving high internal reliability is the lack of space on a survey to ask several similar questions. Sometimes, we ask just one or two questions to measure a concept. This isn’t necessarily good or bad; it just illustrates the inevitable trade-offs when balancing all of the indicators. To quote my former professor Dr. Ham, “Asking just one question to measure a concept doesn’t mean you have measurement error, it just means you are more likely to have error.”

Validity

Broadly, validity represents the accuracy of generalizations (not the accuracy of the answers). In other words, do the data represent the concept of interest? Can we use the data to make inferences, develop insights, and recommend actions that will actually work? Validity is the most abstract of the four indicators, and it can be evaluated on several levels.

  • Content validity: Answers from survey questions represent what they were intended to measure.  A good way to ensure content validity is to precede the survey research with open-ended or qualitative research to develop an understanding of all top-of-mind aspects of a concept.
  • Predictive or criterion validity: Variables should be related in the expected direction. For example, ACT/SAT scores have been relatively good predictors of how students perform later in college: the higher the score, the more likely the student was to do well in college. Therefore, the questions asked on the ACT/SAT, and how they are scored, have high predictive validity (a small illustration follows this list).
  • Construct validity: There should be an appropriate link between the survey question and the concept it is trying to represent. Remember that concepts and constructs are just that: conceptual. Surveys don’t measure concepts; they measure variables that try to represent concepts. The extent to which a variable effectively represents the concept of interest demonstrates construct validity.
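To make predictive validity a bit more concrete, here is a minimal sketch of the kind of check involved: correlating a score with the later outcome it is supposed to predict. The numbers are entirely made up for illustration, not real ACT/SAT data.

```python
import numpy as np

# Hypothetical data: admissions test scores and the same students' later college GPAs
test_scores = np.array([21, 24, 27, 30, 33, 36])
college_gpas = np.array([2.6, 2.9, 3.1, 3.3, 3.6, 3.8])

# A strong positive correlation is one piece of evidence for predictive validity:
# higher scores tend to go with better later performance.
r = np.corrcoef(test_scores, college_gpas)[0, 1]
print(f"Correlation between test score and college GPA: r = {r:.2f}")
```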

High validity suggests greater generalizability; measurements hold up regardless of factors such as race, gender, geography, or time.  Greater generalizability leads to greater usefulness because the results have broader use and a longer shelf-life.  If you are investing in research, you might as well get a lot of use out of it.

This short series described four indicators of good measurement. At Corona Insights, we strive to maximize these indicators while recognizing and balancing the inevitable trade-offs. Survey research design is much more than a list of questions; it’s more like a complex, interconnected machine, and we are the mechanics working hard to get you back on the road.


The Four Cornerstones of Survey Measurement: Part 1

Part One: Precision and Accuracy

Years ago, I worked in an environmental lab where I measured the amount of silt in water samples by forcing the water through a filter, drying the filters in an oven, then weighing the filters on a calibrated scale. I followed very specific procedures to ensure the results were precise, accurate, reliable, and valid; the cornerstones of scientific measurement.

As a social-science researcher today, I still use precision, accuracy, reliability, and validity as indicators of good survey measurement. The ability of decision makers to draw useful conclusions and make confident data-driven decisions from a survey depends greatly on these indicators.

To introduce these concepts, I’ll use the metaphor of figuring out how to travel from one place to another, say from your house to a new restaurant you want to try. How would you find your way there? You probably wouldn’t use a desktop globe to guide you; it’s not precise enough. You probably wouldn’t use a map drawn in the 1600s; it wouldn’t be accurate. You probably shouldn’t ask a friend who has a horrible memory or sense of direction; their help would not be reliable. What you would likely do is “Google it,” which is a valid way most of us get directions these days.

This two-part blog will unpack the meaning of these indicators. Let’s start with precision and accuracy. Part two will cover reliability and validity.

Precision

Precision refers to the granularity of data and estimates. Data from an open-ended question that asked how many cigarettes someone smoked in the past 24 hours would be more precise than data from a similar closed-ended question that listed a handful of categories, such as 0, 1-5, 6-10, 11-15, 16 or more. The open-ended data would be more precise because they would be more specific, more detailed. High precision is desirable, all things being equal, but there are often “costs” associated with increasing precision, such as a longer survey, that may outweigh the benefit of that greater precision.
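As a small illustration of what is gained and lost, the sketch below collapses exact, open-ended counts into the closed-ended categories from the example above; the sample values are hypothetical.

```python
import pandas as pd

# Hypothetical open-ended answers: cigarettes smoked in the past 24 hours
open_ended = pd.Series([0, 3, 7, 12, 18])

# Collapse the exact counts into the closed-ended categories from the example
bins = [-1, 0, 5, 10, 15, float("inf")]
labels = ["0", "1-5", "6-10", "11-15", "16 or more"]
closed_ended = pd.cut(open_ended, bins=bins, labels=labels)

print(pd.DataFrame({"exact_count": open_ended, "category": closed_ended}))
# Exact counts can always be re-binned into categories later;
# categories can never be turned back into exact counts.
```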

Accuracy

Accuracy refers to the degree to which the data are true. If someone who smoked 15 cigarettes in the past 24 hours gave the answer ‘5’ to the open-ended survey question, the data generated would be precise but not accurate. There are many possible reasons for this inaccuracy. Maybe the respondent truly believed they only smoked five cigarettes in the past 24 hours, or maybe they said five because that’s what they thought the researcher wanted to hear. Twenty-four hours may have been too long a time span to remember all the cigarettes they smoked, or maybe they simply misread the question. If they had answered “between 1 and 20,” the data would have been accurate, because the answer was true, but it wouldn’t have been very precise.

Trade-offs

Many times, an increase in precision can result in a decrease in accuracy, and vice versa. Decision makers can be confident in accurate data, but those data might not be useful. Precise data typically give researchers more utility and flexibility, especially in analysis. But what good are flexible data if there is little confidence in their accuracy? Good researchers strive for an appropriate balance between precision and accuracy, based on the research goals and desired outcomes.

Now that we have a better understanding of precision and accuracy, the second blog in this series will explore reliability and validity.


Feeding your market’s desire to participate in surveys

I got an online survey the other day from a public organization, and they wanted to know … something.  It doesn’t really matter for the purposes of this post.

I like to participate in surveys for a variety of reasons.  First, I’m naturally curious about what’s being asked, and why.  Maybe I can learn something.  Second, if it’s an issue that I care about, I can offer my opinion and have a voice.  Third, I’m a human being so I just like to share my opinion.  And finally, I have a professional interest in seeing surveys designed by other people, just to compare against my own design ideas.

With the possible exception of the last reason, I would hazard a guess that I’m not uncommon in these motivations.  Most people who respond to surveys do so because they’re curious, because they want a voice, and because they like sharing their opinion.

However, it takes time to complete a survey, and like everyone else, my time is precious.  I want it to be worth my time, which links back to my motivations.  Will participating really give me a voice?  Will I learn something from it?  Will anyone care what I say?  How will my information be used?  I want to trust that something good will come from my participation.

This brings me to another key motivator that is not often mentioned.  In addition to wanting good outcomes from my participation, I also want to be sure that nothing bad will come of it.  I want to trust that the people surveying me are ethical and will protect both me and the results of the survey.

Thinking about these forces, let’s go back to this survey that I received.  It was a topic that I care about, so I was interested to see what questions were being asked.  I could check ‘curiosity’ off my list.  It was from a legitimate organization, so I’d be willing to have a voice in their decisions and share my opinion with them.  I could check those two items off my list.

But then I took a second look at the survey.  It was being done for a legitimate organization, but with “assistance” from a consultant that I was unfamiliar with.  I pulled up Google and tried to look up the company.  Nothing.  They had no web site at all, and only a half-dozen Google hits that were mostly spam.

When I participate in a survey, I want to know that my responses aren’t going to be used against me. There’s no crime in being a young company, but having no web presence at all gave me no way to confirm that this was a legitimate research firm. Did these people know what they were doing? Were they going to protect my information like a legitimate research company would, or would they naively turn over my responses to the client so they could target me for their next fundraising campaign? I had no idea, since I couldn’t identify what the “consultant” actually did.

Beyond that, there was another problem. I clicked on the link, and it took me to one of the low-cost survey hosting sites. Based on my experience in the industry, I know that committed researchers don’t use these sites, and that they’re really tools for informal public input rather than legitimate research. (The sampling is usually wrong, and there’s often no protection against “stuffing the ballot box.”)

I declined to participate in that survey, which made me sad.  I suspected that the end client had noble motivations, but in the end they didn’t meet my criteria for participating.

Given our 18 years in the business, we at Corona are often looking at changes on the margins of surveys. What can we do in our wording or our delivery mode or our timing to maximize meaningful participation? This was a good reminder to me that we also have to be careful to satisfy the much more basic and powerful forces that aid or damage participation. Before anything else, the things that survey researchers have to do are:

  1. Make people feel safe in responding. You must clearly identify who is involved in the study, and make available a web site that clearly shows oversight by a legitimate market research firm.  (This is even more important for qualitative research, which requires a bigger time commitment by respondents.) View Corona’s research privacy policy and Research participant information.
  2. Confirm to people that their opinion is important. Maybe this is my bias as a person in the industry, but if I get a cheaply designed survey, built on software used for entertainment polls, from some consultant who is virtually unknown on the web, it tells me that the project isn’t a priority for the client. If the research is important, give your respondents that message through your own actions.
  3. Confirm to people that the survey gives them a voice. You can overtly say this, but you also have to “walk the walk” by giving people confidence.  One thing that I’ve noticed more and more is the use of surveys as marketing tools rather than research tools.  Sending out frequent cheaply produced surveys as a means of “engaging our audience” is not a good idea if the surveys aren’t being used for decision making.  People figure out pretty quickly when participation is a waste of their time, and then they’re less likely to participate when you really need their input.

All in all, we in the research industry talk a lot about long-term declines in participation rates, but many of us are contributing to that by ignoring the powerful motivations that people have to participate in surveys.  People should WANT to participate in our surveys, and we should support that motivation.  We can do that by surveying them only when it’s important, by showing a high level of professionalism and effort in our communications with them, and by helping to reassure them that we’re going to both protect them and carry forward their voice to our clients.


Tuft & Needle: Incredible Mattresses. Incredible research?

If you have ever received a proposal from Corona Insights regarding customer research, you may have seen this line:

“We believe that surveying customers shouldn’t lower customer satisfaction.”

We take the respondent’s experience into account, from the development of our approach through the implementation of the research (e.g., survey design, participant invites, etc.), even in our choice of incentives. We work with our clients on an overall communications plan and discuss with them whether we need to contact all customers or only a small subset, sparing the rest from another email and request. For some clients, we even program “alerts” to notify them of customers that need immediate follow-up.

As such, I’m always interested to see how other companies handle their interactions when it comes to requesting feedback. Is it a poorly thought out SurveyMonkey survey? Personalized phone call? Or something in between?

Recently, I was in the market for a new mattress and wanted to try one of the newer entrants shaking up the mattress industry. I went with Tuft & Needle, and while I won’t bore you with details of the shopping experience or delivery, I found the post-purchase follow-up worth sharing (hopefully you’ll agree).

I received an email that appeared to come directly from one of the co-founders. It was a fairly stock email, but without overdone marketing content or design (and, granted, it’s easy enough to mask an email so it appears to come from a founder). It contained one simple request:

“If you are interested in sharing, I would love to hear about your shopping experience. What are you looking for in a mattress and did you have trouble finding it?”

The request made clear that I could simply hit reply to answer. So I did.

I assumed that was it, or that maybe I’d get another form response, but I actually got a real reply. One that was clearly not stock (or at least not 100% stock – it made specific references to my response). It wasn’t the co-founder who responded but another employee; still, it was impressive in my opinion.

So, what did they do right? What can we take away from this?

  • Make a simple request
  • Make it easy to reply to
  • Include a personalized acknowledgement of the customer’s responses

Maybe you think this is something only a start-up would (or should) do, but what if more companies took the time to demonstrate such great service, whether in their research or their everyday customer service?


Thinking strategically about benchmarks

When our clients are thinking about data that they would like to collect to answer a question, we sometimes are asked about external benchmarking data. Basically, when you benchmark your data, you are asking how you compare to other organizations or competitors. While external benchmarks can be useful, there are a few points to consider when deciding whether benchmarking your data will be worthwhile:

  1. Context is key. Comparing yourself to other organizations or competitors can encourage some big-picture thinking about your organization. But it is important to remember the context of the benchmark data. Are the benchmark organizations similar to you? Are they serving similar populations? How do they compare in size and budget? Additionally, external benchmark data may only be available in aggregated form. For example, nonprofit and government organizations may be grouped together. Sometimes these differences are not important, but other times they are an important lens through which you should examine the data.
  2. Benchmark data are inherently past-focused. When you compare your data to those of other organizations, you are comparing yourself to the past. There is a time lag in any data collection, and the data reflect the impacts of changes or policies that have already been implemented. While this can be useful, if your organization is trying to adapt to changes that you see on the horizon, it may not be as useful to compare yourself to the past.
  3. Benchmark data is generally more useful as part of a larger research project. For example, if your organization differs significantly from other external benchmarks, it can be helpful to have data that suggest why that is.
  4. What you can benchmark on may not be the most useful. Often, you are limited in the types of data available about other organizations. These may be certain financial data or visitor data. Sometimes the exact same set of questions is administered to many organizations, and you are limited to those questions for benchmarking.

Like most research, external benchmarking can be useful—it is just a matter of thinking carefully about how and when to best use it.


Getting the most out of your customer survey

There are a multitude of tools available these days that allow organizations to easily ask questions of their customers. It is certainly not uncommon, when Corona begins an engagement, for the client to have made internal attempts at conducting surveys in the past. In some cases, these studies have been relatively sophisticated and have yielded great results. In others, however, the survey’s results were met with a resounding “Why does this matter?”

The challenge is that conducting a good survey requires a much more strategic view than most realize. This starts with designing the survey questions themselves. We always begin our engagements by asking our clients to think through the decisions that will be made, the opportunities to improve, and the possible challenges to be addressed based on the results. By keeping the answers to these questions in mind as you design your survey questions, you can minimize the number of “trivia” questions in your survey that might be interesting to know but won’t really have any influence on your future decisions.

Even after having questions designed, you have to consider how you will get people to participate in the survey.  If you have a database of 100,000 customers, it may be tempting to just send invitations to all of them.  But what if you plan to send out a plea for donations in the next few weeks?  Consider the impact of asking for 15 minutes of time from people who might be asked to support you very soon.  Being careful to appropriately time the survey and perhaps only send it out to a small segment of customers might help to minimize fatigue that could negatively impact your overall business strategy in the near future.

Finally, once you’ve collected the results, simple tabulations will only tell a small part of the story. Every result should be examined through the lens of its actual strategic impact. A good question to ask throughout the analysis of your results is, “So what?” If you keep the focus on the implications of the results rather than the results themselves, your final report of what you learned will have a much better chance of making a meaningful impact on your organization moving forward.

Obviously, we at Corona are here to help walk you through this process in order to ensure the highest-quality result possible, but even if you choose to go it alone, keeping a strategic view of what you need to learn and how it will influence your decisions will help to avoid a lot of wasted effort.


Subpopulations in Research

As I’m sure you know, we do a lot of survey research here at Corona. When we provide the results, we try to build the most complete picture for our clients, and that means looking at the data from every which way possible. One of the most effective ways to do this is by looking at subpopulations.

What is a subpopulation?

A subpopulation is essentially a fraction or part of the overall population you are surveying. A subpopulation can be defined many ways. For example, some of the most common subpopulations to examine in research are gender (e.g., male and female), age (e.g., <35, 35-54, 55+), race/ethnicity, location, etc. You can effectively define a subpopulation using whatever criteria you like; for instance, you can have a subpopulation based on what type of dessert is preferred – those who like cake, and the heathens who don’t.

What does it mean to have subpopulations?

When you examine survey results by subpopulations, at a basic level respondents are simply split into the subpopulations or groups (commonly called breakouts) you defined. After being broken into these groups, the results for the survey are compiled for each individual group separately. For example, take the following survey question:

  1. About how many hours a week do you watch sports?
    1. 1 hour or less
    2. 2 to 4 hours
    3. 5 to 7 hours
    4. 8 hours or more

The results would typically have two components: top-level results (results compiled for all respondents to the survey) and breakouts (results by group for any subpopulations that have been defined). For the above example question, the results might look something like this:

In this completely made-up example, you can see the benefit of having subpopulations. While 21 percent of overall respondents watched five to seven hours of sports a week, you can see that male respondents accounted for a hefty chunk, as 26 percent of males watch that much sports, compared to only 16 percent of females. Breaking out questions by subpopulations allows you to more closely examine data and assists in finding those gems of information.
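If you’re curious about the mechanics behind a breakout like this, here is a minimal sketch using a handful of made-up responses; the column names and data are hypothetical and don’t reproduce the exact percentages above.

```python
import pandas as pd

# Hypothetical raw responses: one row per respondent
df = pd.DataFrame({
    "gender": ["Male", "Female", "Male", "Female", "Male", "Female"],
    "hours_watched": ["1 hour or less", "2 to 4 hours", "5 to 7 hours",
                      "1 hour or less", "8 hours or more", "5 to 7 hours"],
})

# Top-level results: percentage of all respondents giving each answer
top_level = df["hours_watched"].value_counts(normalize=True).mul(100).round(1)
print(top_level)

# Breakout: the same percentages computed separately for each subpopulation
breakout = pd.crosstab(df["hours_watched"], df["gender"], normalize="columns").mul(100).round(1)
print(breakout)
```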

Getting the most out of your survey

Being prepared to utilize subpopulations in your survey analysis means putting your best foot forward and maximizing your investment. Many subpopulations are constructed using questions commonly asked in surveys (gender, age, etc.), but some questions might not otherwise be asked without the foresight of planning to break respondents into subpopulations. For example, a nonprofit might be building a questionnaire to survey its patrons about its messaging; by simply asking whether a respondent has donated to the organization, it can examine the survey results of donors separately from those of all patrons. The survey can now not only better inform messaging for the organization overall but also help it target and communicate with donors specifically.

Conducting a survey can be a challenging experience, so the more you can get out of a single survey, the better. The next time you are designing a survey, ask around your workplace to see if a few questions can be added to better utilize the information you’re collecting. Now you’re one step closer to conducting the perfect survey!


Does This Survey Make Sense?

It’s pretty common for Corona to combine qualitative and quantitative research in our projects. We will often use qualitative work to inform what we need to ask about in the quantitative phases of the research, or use qualitative research to better understand the nuances of what we learned in the quantitative phase. But did you know that we can also use qualitative research to help design quantitative research instruments through something called cognitive testing?

The process of cognitive testing is actually pretty simple, and we treat it a lot like a one-on-one interview.  To start, we recruit a random sample of participants who would fit the target demographic for the survey.  Then, we meet with the participants one-on-one and have them go through the process of taking the survey.  We then walk through the survey with them and ask specific follow-up questions to learn how they are interpreting the questions and find out if there is anything confusing or unclear about the questions.

In a nutshell, the purpose of cognitive testing is to understand how respondents interpret survey questions and, ultimately, to write better survey questions. Cognitive testing can be an effective tool for any survey, but it is particularly important for surveys on topics that are complicated or controversial, or when the survey is distributed to a wide and diverse audience. For example, you may learn through cognitive testing that the terminology you use internally to describe your services is not widely used or understood by the community. In that case, we will need to simplify the language that we are using in the survey. Or, you may find that the questions you are asking are too specific for most people to know how to answer, in which case the survey may need to ask higher-level questions or include a “Don’t Know” response option on many questions. It’s also always good to make sure that the survey questions don’t seem leading or biased in any way, particularly when asking about sensitive or controversial topics.

Not only does cognitive testing allow us to write better survey questions, but it can also help with analysis.  If we have an idea of how people are interpreting our questions, we have a deeper level of understanding of what the survey results mean.  Of course, our goal is to always provide our clients with the most meaningful insights possible, and cognitive testing is just one of the many ways we work to deliver on that promise.


Online research is becoming more feasible in smaller locales (and that includes Denver)

Door-to-door, intercept, mail, telephone, online – surveys have evolved with the technology and needs of the times. Online has increased speed and often lowered cost of conducting surveys. For some populations, it has even made conducting surveys more feasible.

However, online surveys haven’t always been feasible in a city such as Denver or even statewide in Colorado.

(I should note here that we’re talking about the general public or other populations where we do not have a list. For instance, if we were surveying your customers and you had a database of customers with email addresses, conducting the survey online is almost certainly the way to go.)

Why it’s been tough until now

So why has it been tough until now to conduct public opinion research online in Denver, the Front Range, or even all of Colorado?

Unlike with mail, where huge databases of addresses exist, and telephone, where lists or random-digit-dial (RDD) sample can be generated, there is no master repository of email addresses or requirement that residents have one official email address. (Many of us probably have multiple email addresses – I personally have four outside of work.)

The market research industry’s answer to this has been to create databases, but unlike with mail and telephone, where lists can be generated from public sources, email addresses generally have to be collected by individuals voluntarily sharing their information. Companies in the industry have specialized in doing just that – recruiting large numbers of potential respondents to their online panels in exchange for incentives provided when they complete a survey. In addition to email addresses, these companies generally collect some basic demographic information as well to make targeting more effective.

Now, let’s say a panel had one million U.S. members in its database. Sounds big, doesn’t it? Well, given that Colorado makes up less than 2% of the nation’s population, that means there might be 20,000 Coloradans in the database. If you wanted the Denver metro area only (about half the state’s population), that takes our maximum potential to 10,000. If only 10% respond to any given survey invite, the most respondents you may be able to get is 1,000, and that’s before any additional screening (e.g., you’re only looking for commuters). That is a simplified summary, but as you can see, it largely becomes a numbers game – you need a very large panel to drill down to a smaller geography or a subset of the population.
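Laid out step by step, that numbers-game arithmetic looks like the sketch below; all of the figures are the same illustrative assumptions as above, not statistics from any actual panel.

```python
# Illustrative panel-size arithmetic (all assumptions, not real panel figures)
national_panel = 1_000_000   # panel members nationwide
colorado_share = 0.02        # Colorado is a bit under 2% of the U.S. population
denver_metro_share = 0.5     # Denver metro is roughly half of Colorado
response_rate = 0.10         # share of invited panelists who take a given survey

colorado_members = national_panel * colorado_share      # ~20,000
denver_members = colorado_members * denver_metro_share  # ~10,000
likely_respondents = denver_members * response_rate     # ~1,000, before any screening

print(f"Potential Denver-metro respondents: {likely_respondents:,.0f}")
```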

What has changed

These panels are nothing new. Corona has been using them for a decade, but what has changed recently in our home market (and in most smaller geographies around the country, for that matter) is that the panels have grown large enough to supply enough respondents for our studies. A few years ago, we could only do online studies nationwide, in regions (e.g., the West, the South), or maybe in very large metropolitan areas. As panels and recruitment continued to grow, we were able to do general-population studies (i.e., pretty much everyone qualifies because there are no additional screening criteria), but not studies of smaller segments of the population. Now, while we can still run into difficulty with really niche groups, we can conduct studies with parents, visitors to a certain attraction, and many other groups, all within the Denver metro area or the Front Range.

Still, a note of caution

So, problem solved, right? Unfortunately, online panels come with some caveats. First, unlike a mail or telephone survey, where the sample is randomly generated, the results are not considered statistically representative because a panel sample is not a random probability sample. (There are some probability-based online panels, but for the most part they’re still only big enough for nationwide studies.) Panels are typically designed to reflect the overall population in terms of demographics, but because of how they are recruited, they can’t be considered “random.”

Other concerns also need to be taken into account, such as how quickly the panel turns over respondents, how to screen out respondents who try to game the system just for the incentives, and other quality-control measures.

For these reasons, Corona still regularly recommends other survey modes, such as mail and telephone (yes, we still do mail!), when we feel they will provide better answers for our clients. Often, however, online may be the only feasible option given the challenges with telephone (e.g., cell phones) and mail (e.g., slower, static content). Sometimes we’ll propose both to our clients and then discuss the relative trade-offs with them.

In summary, online is a growing option for Denver and Colorado, as well as other smaller cities, but be sure to pick the mode that is best for your research – not just the one that is easiest.

 


Research on Research: Boosting Online Survey Response Rates

David Kennedy and Matt Herndon, both Principals here at Corona, will be presenting a webinar for the Market Research Association (MRA) on August 24th.

The topic is how to boost response rates for online surveys. Specifically, they will be presenting research Corona has done to learn how minor changes to things such as survey invitations can affect response rates. For instance, who the survey is “from,” the format, and the salutation can all make a difference.

Click here to register. You do need to be a member to view the webinar. (We hope to post it, or at least a summary, here on our blog afterwards.)

Even if you can’t make it, rest assured that, if you’re a client, these lessons are already being applied to your research!