# The Four Cornerstones of Survey Measurement: Part 2

### Part Two: Reliability and Validity

The first blog in this series argued that precision, accuracy, reliability, and validity are key indicators of good survey measurement.  It described precision and accuracy and how the researcher aims to balance the two based on the research goals and desired outcome.  This second blog will explore reliability and validity.

#### Reliability

In addition to precision and accuracy (and non-measurement factors such as sampling and response rate), the ability to be confident in findings relies on the consistency of survey responses. Consistent answers to a set of questions designed to measure a specific concept (e.g., an attitude) or behavior are probably reliable, although not necessarily valid.  Picture an archer shooting arrows at a target, each arrow representing a survey question and where it lands representing the question's answer. If the arrows consistently land close together, but far from the bull's-eye, we would still say the archer was reliable (i.e., the survey questions were reliable). But being far from the bull's-eye is problematic; it means the archer didn't fulfill his intentions (i.e., the survey questions didn't measure what they were intended to measure).

One way to increase survey measurement reliability (specifically, internal consistency) is to ask several questions that are trying to “get at” the same concept. A silly example: Q1) How old are you? Q2) How many years ago were you born? Q3) For how many years have you lived on Earth? If the answers to these three questions agree, we have high reliability.
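Internal consistency of this kind is often summarized with Cronbach's alpha. As a rough illustration (not from the original post, and using made-up answers from five hypothetical respondents to the three age questions), alpha compares the variance of each item to the variance of respondents' total scores:

```python
# Illustrative sketch: estimating internal consistency with Cronbach's alpha.
# The data below are hypothetical, invented for this example.
import statistics

def cronbach_alpha(items):
    """items: one list per survey question, each holding respondents' numeric answers."""
    k = len(items)  # number of questions measuring the same concept
    # Variance of each individual item
    item_variances = [statistics.variance(col) for col in items]
    # Variance of each respondent's total score across all items
    totals = [sum(resp) for resp in zip(*items)]
    total_variance = statistics.variance(totals)
    return (k / (k - 1)) * (1 - sum(item_variances) / total_variance)

q1 = [34, 52, 41, 29, 60]  # "How old are you?"
q2 = [34, 52, 41, 29, 60]  # "How many years ago were you born?"
q3 = [34, 51, 41, 29, 60]  # "For how many years have you lived on Earth?"

alpha = cronbach_alpha([q1, q2, q3])
print(round(alpha, 3))  # close to 1.0: the three items answer consistently
```

If respondents gave scattered, unrelated answers to the three questions, alpha would fall toward zero, signaling low internal consistency.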

The challenge with achieving high internal reliability is the lack of space on a survey to ask similar questions. Sometimes, we ask just one or two questions to measure a concept. This isn’t necessarily good or bad; it just illustrates the inevitable trade-offs when balancing all indicators.  To quote my former professor Dr. Ham, “Asking just one question to measure a concept doesn’t mean you have measurement error, it just means you are more likely to have error.”

#### Validity

Broadly, validity represents the accuracy of generalizations (not the accuracy of the answers). In other words, do the data represent the concept of interest? Can we use the data to make inferences, develop insights, and recommend actions that will actually work? Validity is the most abstract of the four indicators, and it can be evaluated on several levels.

• Content validity: Answers from survey questions represent what they were intended to measure.  A good way to ensure content validity is to precede the survey research with open-ended or qualitative research to develop an understanding of all top-of-mind aspects of a concept.
• Predictive or criterion validity: Variables should be related in the expected direction. For example, ACT/SAT scores have been relatively good predictors of how students perform later in college: the higher the score, the more likely the student is to do well in college.  Therefore, the questions asked on the ACT/SAT, and how they are scored, have high predictive validity.
• Construct validity: There should be an appropriate link between the survey question and the concept it is trying to represent.  Remember that concepts and constructs are just that: conceptual. Surveys don’t measure concepts; they measure variables that try to represent concepts.  The extent to which the variable effectively represents the concept of interest demonstrates construct validity.
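Predictive validity is usually checked empirically by correlating the predictor with the criterion it is supposed to forecast. A minimal sketch, using hypothetical (invented) admission-test scores and later college GPAs, computes the Pearson correlation between the two:

```python
# Illustrative sketch: predictive (criterion) validity as the correlation
# between a predictor and a later outcome. All data here are hypothetical.
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length numeric lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

scores = [21, 24, 27, 30, 33]      # hypothetical admission-test scores
gpas   = [2.6, 2.9, 3.1, 3.4, 3.7]  # hypothetical later college GPAs

r = pearson_r(scores, gpas)
print(round(r, 3))  # near 1.0: higher scores track higher GPAs
```

A strong positive correlation supports the claim that the test predicts the criterion; a correlation near zero would undermine its predictive validity.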

High validity suggests greater generalizability; measurements hold up regardless of factors such as race, gender, geography, or time.  Greater generalizability leads to greater usefulness because the results have broader use and a longer shelf-life.  If you are investing in research, you might as well get a lot of use out of it.

This short series described four indicators of good measurement.  At Corona Insights, we strive to maximize these indicators while realizing and balancing the inevitable tradeoffs. Survey design is much more than a list of questions; it’s more like a complex and interconnected machine, and we are the mechanics working hard to get you back on the road.