Validating Your Survey Findings With Real-World Data
12/15/25 / David Kennedy
The goal of most survey research is to generalize the results to a broader population. Maybe you survey a portion of your members to understand how you’re doing across all members. Or you survey a sample of residents to measure priorities for community planning and you need them to represent all residents in your community.
Survey samples are, however, just that—a sample of the broader population. Even when randomly selected, a sample may not accurately represent everyone it’s meant to represent. Inaccurate representation happens for a variety of reasons, perhaps most importantly due to response bias, the tendency for some groups to respond to your survey more than others. Maybe those members who are really engaged are more likely to see your email request and therefore respond. Maybe older residents are more likely to return a completed survey than younger residents.
These response biases may not matter if responses between the over- and under-represented groups don’t differ. But responses often do differ by group and not accounting for this in your analysis could lead to very different conclusions. For example, if only the most engaged members respond to your survey, you could overestimate awareness and usage.

Checking your survey sample
We often check survey samples against known demographics, since good benchmarks exist for many demographic traits (e.g., age, gender, and education within a community). By comparing the survey sample to the broader population, we can adjust the dataset, as appropriate, so the sample mirrors the population: responses from underrepresented groups are weighted more heavily, and responses from overrepresented groups are weighted less heavily. We can do the same for other organizations, such as arts and culture organizations or professional societies, if they know certain demographic traits of their overall membership (often they don't, but that's another topic).
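The weighting idea can be sketched in a few lines of code. This is a minimal illustration with made-up numbers, not real benchmark data: assume we know the population's age distribution and compare it to the age distribution among respondents. The weight for each group is simply its population share divided by its sample share.

```python
from collections import Counter

# Hypothetical benchmark: known population shares by age group (assumed
# numbers, for illustration only).
population_share = {"18-34": 0.30, "35-54": 0.40, "55+": 0.30}

# Hypothetical survey respondents: older residents over-responded.
respondents = ["18-34"] * 10 + ["35-54"] * 30 + ["55+"] * 60
n = len(respondents)
sample_share = {g: c / n for g, c in Counter(respondents).items()}

# Post-stratification weight: population share / sample share.
# Under-represented groups get weights > 1; over-represented groups < 1.
weights = {g: population_share[g] / sample_share[g] for g in population_share}

for group in sorted(weights):
    print(f"{group}: weight = {weights[group]:.2f}")
# 18-34: weight = 3.00
# 35-54: weight = 1.33
# 55+: weight = 0.50
```

Each respondent then carries their group's weight in any downstream average or percentage, so the under-represented 18-34 group counts three times as much per response and the over-represented 55+ group counts half as much.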
What if we don't know the overall composition of the membership in demographic terms? One might be tempted to look at the responding population and judge for oneself whether it feels right, but that is a risky proposition: stereotypes and biases can easily influence such judgments.
Improving accuracy with behavioral data
What the organization probably has, though, is actual member data on their behavior. For example:
- Frequency of visits
- Time as a member
- Number of downloads for a resource
- Most recent conference attended
- Giving level
These data, too, can be used to compare against, and if necessary weight, the survey sample to improve the accuracy of the results. What's more, weighting by behavioral traits often improves demographic accuracy as well, since many behaviors are associated with demographics such as age or income. If you're wondering how accurate your survey data is for your organization, reach out; we'd be happy to discuss your data with you.
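To make the behavioral case concrete, here is a hypothetical sketch (invented numbers, not from any real organization) of how weighting by a single behavioral trait, recent visits, can change a headline result like program awareness:

```python
# Hypothetical data: member records show 50% of all members visited in the
# past year, but 80% of survey respondents did (engaged members over-responded).
# Each respondent is a (visited_recently, aware_of_program) pair.
respondents = (
    [(True, True)] * 32 + [(True, False)] * 8 +
    [(False, True)] * 4 + [(False, False)] * 6
)

pop_share = {True: 0.50, False: 0.50}  # from the member database (assumed)
n = len(respondents)
sample_share = {
    True: sum(1 for v, _ in respondents if v) / n,
    False: sum(1 for v, _ in respondents if not v) / n,
}
weight = {g: pop_share[g] / sample_share[g] for g in pop_share}

# Unweighted vs. weighted estimate of program awareness.
unweighted = sum(aware for _, aware in respondents) / n
weighted = (
    sum(weight[v] * aware for v, aware in respondents)
    / sum(weight[v] for v, _ in respondents)
)

print(f"Unweighted awareness: {unweighted:.0%}")  # 72%
print(f"Weighted awareness:   {weighted:.0%}")    # 60%
```

Because frequent visitors are both over-represented and more aware, the raw sample overstates awareness; down-weighting them toward their true 50% share pulls the estimate from 72% to 60%.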