It’s the season for political polling, which is a convenient occasion for illustrating the many potential pitfalls of conducting opinion research. Last week offered a particularly good example of how the way a question is asked can bias the opinions it measures.
There is currently a bill (House Bill 1366) in the North Carolina State Legislature that aims to reduce bullying in the public schools, and (at least at one point) specifically calls for harsher penalties for bullying that is based on group membership, including sexual orientation.
So what do North Carolinians think about the bill? Well, apparently only 24% of them support it.
No, wait a minute—74% of them support it.
There’s an easy explanation: you get what you ask for. Here’s how the more liberal Public Policy Polling phrased the question in its survey (which showed 74% support):
There is currently a proposal in the General Assembly that specifies the need to protect children from bullying based on their sexual orientation. Do you think this provision should be passed into law?
And here’s how the more conservative Civitas Institute phrased the question in the poll that received 24% support:
Do you think public schools in North Carolina should implement an anti-bullying policy that requires students be taught that homosexuality, bisexuality, cross-dressing and other behaviors are normal and acceptable?
Regardless of your politics, I think anyone would agree that there is a pretty big difference in the emotional tone, the choice of absolutes (“specifies” vs. “requires”), and the choice of descriptors (“sexual orientation” vs. “homosexuality, bisexuality, cross-dressing and other behaviors”; “children” vs. “students”) in these two questions. This all adds up to big differences in what those questions are asking, so it is unsurprising that they got such divergent results*.
As the local media recognized, the two polling groups typically operate on opposite sides of the political spectrum, but I have to agree with the reporter that the Civitas question is the more biased** of the two. Casting political questions in terms of absolutes (i.e., “requires”) often lowers levels of support because most Americans do not like the idea of being told what to do by the government. Throwing in the ambiguous (and scary) “other behaviors” invites respondents’ imaginations to run wild. Finally, framing the bill in terms of “teaching” rather than “preventing bullying” is arguably a misstatement of what the bill is supposed to do. You can make the argument that children are “taught” what is normal and important by seeing how adults punish and reward their behavior, but “taught” in the context of public education explicitly conjures the image of direct classroom instruction. For all of these reasons, the Civitas question looks like it was written to get the exact result it got. It may be less a public opinion question than a marketing question, designed to generate headlines and shift attention.
In other words, to get good-quality data, you need to be careful what you ask. Which, if either, of these questions is likely to provide an accurate estimate of how people will vote on the bill? And to ensure that you as a reader are not misled when biased questions are reported in the media, you need to know what was asked!
*To be evenhanded to both sides, I would argue that the Public Policy Polling question was asking about support for the intent of the bill, while the Civitas Institute question focused on support for a potential effect of the bill.
**The Public Policy Polling question isn’t perfect either. “To protect children” is a fairly loaded phrase (Simpsons fans will recall the often-exclaimed “Won’t somebody think of the children?!”).