Category: Marketing Research’s Past

Where to next? Election polling and predictions

The accuracy of election polling is still being heavily discussed, and one point worth pondering was made by Allan Lichtman in an NPR interview the day after the election.  What he said was this:

“Polls are not predictions.”

To some extent this is a semantic argument about how you define prediction, but his point, as I see it, is that polls are not a model defining what factors will drive people to choose one party, or candidate, over another.  Essentially, polls are not theory-driven – they are not a model of “why,” and they do not specify, a priori, what factors will matter.  So, polling estimates rise and fall with every news story and sound bite, but a prediction model would have to say something up front like “we think this type of news will affect behavior in the voting booth in this way.” Lichtman’s model, for example, identifies 13 variables that he predicts will affect whether the party in power continues to hold the White House, including whether there were significant policy changes in the current term, whether there was a big foreign policy triumph, whether the President’s party lost seats during the preceding mid-term election, and so on.
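
To make the contrast concrete, a theory-driven model like Lichtman’s can be boiled down to a checklist that is fixed before a single poll is taken. The sketch below is only an illustration: the key names are paraphrased from common descriptions of his “13 Keys” system (not his actual implementation), and the usual decision rule reported for it is that the incumbent party is predicted to lose when six or more keys are false.

```python
# Illustrative checklist-style prediction model in the spirit of Lichtman's
# "13 Keys." Key names are paraphrased; the commonly described rule is that
# the incumbent party loses the White House when six or more keys are false.
KEYS = [
    "midterm_gains", "no_primary_contest", "incumbent_running",
    "no_third_party", "strong_short_term_economy", "strong_long_term_economy",
    "major_policy_change", "no_social_unrest", "no_scandal",
    "no_foreign_failure", "foreign_success", "charismatic_incumbent",
    "uncharismatic_challenger",
]

def predict_incumbent_holds(keys_true: dict[str, bool]) -> bool:
    """Return True if the incumbent party is predicted to keep the White House."""
    false_count = sum(1 for key in KEYS if not keys_true.get(key, False))
    return false_count < 6
```

The point isn’t the specific keys – it’s that every input and the decision rule are declared a priori, so the prediction can’t drift with the day’s news coverage the way a tracking poll does.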

Polls, in contrast, are something like a meta prediction model.  Kate made this point as we were discussing the failure of election polls: polls are essentially a sample of people trying to tell pollsters what they predict they will do on election day, and people are surprisingly bad at predicting their own behavior.  In other words, each unit (i.e., survey respondent) has its own, likely flawed, prediction model, and survey respondents are feeding the results of those models up to an aggregator (i.e., the poll).  In this sense, a poll, as a prediction, is sort of like relying on the “wisdom of the crowd” – but if you’ve ever seen what happens when someone uses the “ask the audience” lifeline on Who Wants to Be a Millionaire, you know that is not a foolproof strategy.
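
A tiny simulation illustrates why that matters: averaging a large sample cancels each respondent’s idiosyncratic error but does nothing to a bias that everyone shares. The specific numbers below (48% true support, a 3-point shared misreporting bias) are invented purely for illustration.

```python
import random

random.seed(0)

TRUE_SUPPORT = 0.48    # actual share who will vote for the candidate
SHARED_BIAS = -0.03    # hypothetical systematic misreporting, common to everyone
N_RESPONDENTS = 10_000

# Each respondent reports a noisy, biased prediction of their own behavior.
reports = []
for _ in range(N_RESPONDENTS):
    individual_noise = random.gauss(0, 0.10)            # idiosyncratic error
    p = TRUE_SUPPORT + SHARED_BIAS + individual_noise   # this person's "model"
    reports.append(1 if random.random() < p else 0)

# Aggregation cancels the individual noise but leaves the shared bias intact:
# the poll lands near 45%, not the true 48%.
poll_estimate = sum(reports) / N_RESPONDENTS
```

That’s the “ask the audience” failure mode in miniature: a bigger crowd shrinks the random scatter, but if the whole audience leans the same wrong way, the average leans with it.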

Whether a model or a poll is better in any given situation depends on several factors.  A model requires deep expertise in the topic area, and depending on that knowledge and the available data sources, it will capture only some portion of the variance in the predicted variable.  A model that fails to include an important predictor will not do a great job of predicting.  Polls, meanwhile, are a complex effort to ask the right people the right questions in order to make an accurate estimate of knowledge, beliefs, attitudes, or behaviors.  Polls have a variety of sources of error, including sampling error, nonresponse bias, measurement error, and so on, and each of those sources chips away at the accuracy of estimates coming out of the poll.
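
Of those error sources, sampling error is the only one with a tidy formula. The textbook margin-of-error calculation below assumes a simple random sample at 95% confidence; real polls, with weighting and design effects, generally do worse than this best case.

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Margin of error for a proportion p from a simple random sample of size n
    (z = 1.96 corresponds to 95% confidence)."""
    return z * math.sqrt(p * (1 - p) / n)

# A 1,000-person poll showing a candidate at 50% carries roughly a
# +/- 3.1 percentage point margin of error from sampling alone.
moe = margin_of_error(0.50, 1000)
```

And that familiar plus-or-minus three points covers only sampling error; nonresponse and measurement error come on top of it.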

The election polling outcomes are a reminder of the importance of hearing from a representative sample of the population, and of designing questions with an understanding of psychology.  For example, it is important to understand what people can or can’t tell you in response to a direct question (e.g., when are people unlikely to have conscious access to their attitudes and motivations; when are knowledge or memory likely to be insufficient), and what people will or won’t tell you in response to a direct question (e.g., when is social desirability likely to affect whether people will tell you the truth).

This election year may have been unusual in the number of psychological factors at play in reporting voting intentions.  There was a lot of reluctant support on both sides, which suggests conflicts between voters’ values and their candidate’s values, and for some, likely conflicts between conscious and unconscious leanings.  Going forward, one interesting route would be for pollsters to capture various psychological factors that might affect accuracy of reporting and incorporate those into their models of election outcomes.

Hopefully in the future we’ll also see more reporting on prediction models in addition to polls.  Already there’s been a rash of data mining in an attempt to explain this year’s election results.  Some of those results might provide interesting ideas for prediction models of the future.  (I feel obliged to note: data mining is not prediction.  Bloomberg View explains.)

Elections are great for all of us in the research field because they provide feedback on accuracy that can help us improve our theories and methods in all types of surveys and models.  (We don’t do much traditional election polling at Corona, but a poll is essentially just a mini-survey – and a lot of the election “polls” are, in fact, surveys.  Confused?  We’ll try to come back to this in a future blog.) We optimistically look forward to seeing how the industry adapts.

Evolutionary vs. Revolutionary

In a recent blog post I wrote about identifying claims full of hype in market research.  This got me thinking about the evolutionary vs. revolutionary changes we’ve seen, and how most really fall into the former category.  Perhaps it’s no surprise, given that all market research works toward the same end (i.e., gaining new knowledge) by largely the same means (i.e., gathering information from your target audience).

First, what is meant by evolution vs. revolution?  By evolution, I mean gradually improving what was done before; by revolution, I mean disruptive change or a completely new way of doing things.

I think most of the recent advancements fall under evolution…

  • Online qualitative? We’re moving from in-person focus groups and interviews to ones conducted online via webcam.
  • Mobile surveys? Online surveys restructured for a smaller screen and shorter responses.
  • DIY research? The same tools in a new, easier-to-use package.

Most often, evolutionary changes are marked by adapting a current research mode to new technology.  In the ’90s and early 2000s, research was transitioning online.  Today we’re seeing the next leap to mobile.

Some companies are already participating in neuromarketing efforts. Fast Company highlighted six of them in a recent article.

There are of course some changes that have the potential to be revolutionary, depending on how they are used.

  • Neuromarketing can help us gather information at a level respondents generally cannot report on themselves.
  • Big data – really the ability to collect and analyze it – allows new connections to be found.

Online listening and using social media for research could be seen as revolutionary, unless they are only being used to recruit for more traditional forms of market research, such as a survey.  Other developments, such as location-based surveys, could be seen as revolutionary or just an evolution of one of the oldest forms of surveying – the intercept.

What do you think? Are you seeing more evolution or revolution in market research?


Mirror mirror on the wall, what is the least desirable methodology of them all?

GreenBook, a directory of market research firms, conducts and publishes the GreenBook Research Industry Trends (GRIT) report annually.  While perusing the most recent results, I stumbled upon the following finding (techniques respondents would choose in their ideal research company):

The top part of the graph showing the most desirable research techniques – mobile and online – wasn’t surprising.  After all, these tools have been the talk of the industry for several years.  What surprised me was not that mail was considered least desirable, but how strong that response was.  Nearly twice as many people named mail the least desirable as named the runner-up, telephone (and telephone has its own share of challenges, from cell phones to participation rates).

Why the hatred of mail? The report, or at least the online summary, didn’t go into the why on this particular question.  My guess is that the longer timeline to field the survey is a major barrier, along with the added logistics of conducting the actual mailing, data entry, and so on.  Yet response rates with mail, when done right, can actually be quite good, and the costs aren’t always out of line with other research forms.  While mail may not be the newest or hippest method out there (ok, it isn’t), it can still get the job done.  As always, it comes down to picking the right tool for the job.  For certain populations, small geographies, or other areas where mail excels, it is still a good tool.

To be fair, there were many interesting results that I wouldn’t disagree with, and I do enjoy seeing the trends in the report.  Sometimes, I just think the mirror lies.

On a related note, this made me think of a story on NPR last year.  Kevin Kelly, author of What Technology Wants, argues that no invention has ever gone extinct (they later found examples that contradict this, but stay with me for a second…), but rather that inventions continue to live on, serving some unique niche or becoming part of a bigger piece of technology.  So, until there ceases to be a mail service, mail surveys (and door-to-door surveys, and intercept surveys, and…) will probably continue to live on to serve the unique niches in the research world where they excel.


Unusual Questions Asked in the U.S. Census

Kevin recently taught a class on how to use U.S. Census data, and did a little historical research on census questions. He discovered a few questions asked in the past that may seem a little odd today, though they likely were quite relevant during their particular time period. They’re paraphrased below.

1. 1850 Census – How many slaves escaped from you in the last year that you did not recapture? And how many slaves did you free?
2. 1860 Census – What’s your net worth? (Asked separately for real estate and for other possessions)
3. 1870 Census – Are you not allowed to vote for some reason other than “rebellion or other crime”? (Asked only of men.)
4. 1880 Census – Were you sick enough today that you couldn’t attend to your ordinary duties? And if so, what was your sickness?
5. 1910 Census – If you are a polygamist, are your wives sisters? (Asked only of Native Americans.)
6. 1940 Census – If your home doesn’t have running water, is there a source of water within 50 feet of your home?
7. 1960 Census – Does your home have a basement? And how many television sets do you own?
8. 1970 Census – Do you enter your home through a front door, or do you enter your home through someone else’s living quarters? And do you have a battery-operated radio?

There were likely very good reasons to ask questions like these in the past, even if they may not resonate today. It makes us wonder what questions we’re being asked in the current American Community Surveys that will seem antiquated or odd 50 years from now.

Image by Norman Rockwell.

Market research is dead. Long live market research.

Traditional market research is dead.  I’ve heard this multiple times in the past and I’m betting you have too.  To be fair, traditional market research (often simplified as surveys and focus groups) isn’t without its challenges, but to say it is dead is carving its headstone a little prematurely.

So why have so many people signed its death certificate?  Well, some have had a bad experience with market research (it didn’t provide the answers, provided the wrong answers, etc.), while others have a stake in its death (think of who benefits from a life insurance policy).

How can you sell the next greatest thing in marketing research if there is nothing wrong with traditional market research?

In the past few years I’ve read about how data mining, online video, social media, and neuroscience are all driving the nails in the coffin of more traditional methods, but do they solve all of our problems as often claimed?

  • We just need to mine our existing data because the answers are already there…But what if they’re not?
  • Video gives us more depth than surveys can…But is it quantifiable? Projectable?
  • Social media is telling us things we never thought to ask…But is it telling us the things we are asking? And from the right people?
  • Neuroscience is the key to unlocking what we really think…But what are people really reacting to?

I don’t (generally) fault the companies for taking the stance they do – they have a product to sell, and oftentimes it’s even a good product.  However, the truth is that these new tools are just that – tools.  And saying one tool can solve all problems is just inaccurate.

Think about it this way – it would be like saying all I need to build a new house is a hammer (good luck when you need to cut a board or do the wiring).  A hammer is a great tool for building a house, but it is made better by all of the other tools in the toolbox.

What do you think? Is traditional market research at death’s doorstep or just going through an awkward stage at puberty?