More defective research…from the inside

You probably have noticed that we’re pretty passionate about discussing good and less-than-good practices in market research on our blog (if you haven’t, check out here, here, here, here, and here).

I was reading the book Why Smart Companies Do Dumb Things the other day and one of the things the author mentioned as a “dumb thing” was the misuse of market research within companies – not that it was conducted poorly, but that it was misrepresented. One example that he mentioned was GM’s approval of the Pontiac Aztek. Apparently focus groups (and possibly other research) showed major issues with the new vehicle, but those findings were never directly relayed to top management. Instead, they got heavily edited versions of the research that weren’t nearly so bleak.

So why did this happen? According to the author, Calvin Hodock, there are several reasons. For one, relaying focus group findings by word of mouth can lead to misinterpretation, since findings are “delicate” – instead, always go to the original report. Company culture can also be a big contributor. At GM, for example, no one wanted to speak up about the problems because development was so far along, and the young team working on the vehicle didn’t want to be blamed for its failure. The culture in this case promoted self-preservation over the greater good of the company.

Fixing company culture issues is obviously more difficult than retooling your survey execution or fixing the other research processes that can be the culprits behind defective research (as our other posts, noted above, discuss), but it’s a critical step in getting the most out of your research. After all, if you never hear what the research was telling you, what’s the point of conducting it in the first place?

The importance of good sampling

One of the most important factors determining whether your [fill in research mode here … survey, focus group, etc.] produces accurate results is your sample. A sample, by definition, is a subset of the population you are studying that is selected for the actual research study. Perform your research with the wrong sample, or even one that is poorly designed, and you will almost certainly get misleading results (in the industry, this concern is called external validity: the extent to which your sample’s results generalize to the population you care about).

I bring this up because I am seeing more and more surveys where anyone can respond, and can often respond multiple times. Recently, I’ve seen several online surveys with no participation restrictions (and not just the fun opinion polls) and email snowball samples (a snowball sample is one in which additional respondents are recruited from referrals from initial respondents). I was going to post the survey links here until I realized I would only be contributing to their poor results.

The topics ranged from usage of and opinions about outdoor hiking areas, to an economic impact study of access to a rock climbing area, to a public opinion survey about criminals. In each case, the survey most likely reached only those with strong opinions – and in some cases, only those with the opinion the survey’s sponsors wanted to hear anyway. Not to mention, anyone who wanted to could take the survey multiple times with little effort. The best sampling approach is one in which every member of your study population has an equal chance of being chosen to participate, and which limits respondents to those you chose.
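To see how badly self-selection can distort results, here is a toy simulation. Every number in it is invented for illustration: a hypothetical population where 30% hold a favorable opinion, and favorable respondents are assumed to be five times as likely to click an open survey link.

```python
import random

random.seed(42)

# Toy population of 10,000 people (all numbers invented): 30% hold a
# favorable opinion of, say, a proposed change to a climbing area.
population = [1] * 3000 + [0] * 7000  # 1 = favorable, 0 = unfavorable

# Probability sample: every member has an equal chance of selection.
random_sample = random.sample(population, 500)

# Self-selected sample: assume favorable folks are five times as likely
# to respond to an open link (25% vs. 5% response rates).
self_selected = [p for p in population
                 if random.random() < (0.25 if p == 1 else 0.05)]

# The probability sample lands near the true 30%; the self-selected
# sample skews far above it, because opinion drives participation.
print(sum(random_sample) / len(random_sample))
print(sum(self_selected) / len(self_selected))
```

The self-selected estimate here comes out roughly twice the true figure, and nothing about collecting more self-selected responses fixes it – the bias is in who chooses to answer, not in the sample size.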

This brings up a relevant bit of election-year history, too (I know you love history). Back in 1936, Literary Digest (whose readers were largely wealthy Americans) predicted Alfred Landon (who?) would beat Franklin Roosevelt. George Gallup, with a smaller but more scientifically sampled poll, accurately predicted Roosevelt would win (and he did, in a landslide). Then, 12 years later in 1948, Gallup got it wrong when he stopped polling two weeks before the election (missing some of the late action) and inaccurately predicted Thomas Dewey would beat Harry Truman.

Google Insights

Marketers salivate over the amount of data Google holds, and today Google gives us another window into their database of intentions.  Similar to Google Trends, their newest service, Google Insights, allows you to get a glimpse of what terms people are searching for. Insights, however, offers additional tools to make those results more useful.  Now you can filter by geography, time frame, and category/topic.

The potential uses for this are limitless.  Need to track your online campaign better?  Look at results by geography or by how the search pattern changes before and after your campaign.  Need to know what terms are most popular in your targeted geography so you can bid on the best terms on AdWords?  You could identify terms by market to better target separate campaigns based on differences in local lingo.  Or do you need to know when you should start advertising your seasonal products?  You can see what time of year people have begun to search for those products in the past.

And don’t forget about offline advertising and marketing.  Need to find new markets? You could estimate demand based on search activity there.  Need to spot trends as they arise?  Google Trends could help with that, but now you can look at them over time and see the variations by locale.  How about seeing how you stack up against your competitors?

Enough with random ideas.  I’ve already done some playing with brands:  Crocs shows a great rise (with seasonal dips).  Search for Al Gore (yes, he is a brand) and you can see spikes from his movie, his Oscar, and his Nobel Peace Prize.

Of course, Google Insights shouldn’t be used alone in most situations – and you can’t always directly attribute changes in search volume to potential causes such as an ad campaign – but it provides yet one more tool to help us measure what’s going on out in the world.


Be Careful What You Ask For

It’s the season for political polling, which is a convenient occasion for illustrating the many potential pitfalls of conducting opinion research.  Last week there was a particularly good example of biases in opinions caused by the way a question is asked.

There is currently a bill (House Bill 1366) in the North Carolina State Legislature that aims to reduce bullying in the public schools, and (at least at one point) specifically calls for harsher penalties for bullying that is based on group membership, including sexual orientation.

So what do North Carolinians think about the bill?  Well, apparently only 24% of them support it.

No, wait a minute—74% of them support it.

What gives?

There’s an easy explanation — you get what you ask for. Here’s how the more liberal Public Policy Polling phrased the question in their survey (which showed 74% support):

There is currently a proposal in the General Assembly that specifies the need to protect children from bullying based on their sexual orientation. Do you think this provision should be passed into law?

And here’s how the more conservative Civitas Institute phrased the question in the poll that received 24% support:

Do you think public schools in North Carolina should implement an anti-bullying policy that requires students be taught that homosexuality, bisexuality, cross-dressing and other behaviors are normal and acceptable?

Regardless of your politics, I think anyone would agree that there is a pretty big difference in the emotional tone, the choice of absolutes (“specifies” vs. “requires”), and the choice of descriptors (“sexual orientation” vs.  “homosexuality, bisexuality, cross-dressing and other behaviors”; “children” vs. “students”) in these two questions.  This all adds up to big differences in what those questions are asking, so it is unsurprising that they got such divergent results*.

As recognized by the local media, the two polling groups typically operate on opposite sides of the political spectrum, but I have to agree with the reporter that the Civitas question is the more biased** of the two. Casting political questions in terms of absolutes (i.e., “requires”) often lowers levels of support, because most Americans do not like the idea of being told what to do by the government. Throwing in the ambiguous (and scary) “other behaviors” invites respondents’ imaginations to run wild. Finally, framing the bill in terms of “teaching” rather than “preventing bullying” is arguably a misstatement of what the bill is supposed to do. You can make the argument that children are “taught” what is normal and important by seeing how adults punish and reward their behavior, but “taught” in the context of public education conjures the image of direct classroom instruction. For all of these reasons, the Civitas question looks like it was written to get the exact result it got. It may be less a public opinion question than a marketing question, designed to grab headlines and shift attention.

In other words, to ensure you get good quality data, you need to be careful what you ask. Which, if either, of these questions is likely to provide an accurate estimate of how people would vote on the bill? And to ensure that you as a reader are not misled when biased questions are reported in the media, you need to know what was asked!

*If I had to be evenhanded to both sides I would argue that the Public Policy Polling question was asking about support for the intent of the bill, while the Civitas Institute was focused on support for a potential effect of the bill.

**The Public Policy Polling question isn’t perfect either. “To protect children” is a fairly loaded phrase (Simpsons fans will recall the oft-exclaimed “won’t somebody think of the children?!”).

Photo of the North Carolina State Capitol in Raleigh courtesy of Jim Bowen and licensed via a Creative Commons Attribution 2.0 license.

Corona team member helps Ad2 Denver take 2nd place in public service competition!

Congratulations to our own David Kennedy, and the rest of the Ad2 Denver team, who took 2nd place in the national 2008 American Advertising Federation’s Public Service Competition!  Their project culminated in a very cool and witty media campaign for the new Bradford Washburn American Mountaineering Museum located in Golden, CO.  We’re very proud of Dave’s success, and honored that he helps the Corona team give back to the community.  (We also love the BWAMM ads!)  Way to go Dave!

Obama’s Super Marketing Machine

I should first start off with a general disclaimer.  We’re a neutral market research firm with no affiliation with any political party.  Oh, and another disclaimer, we do market research for a living, so we are biased in that respect.  With all that out of the way…

I read an article today on Obama’s Super Marketing Machine.  We’ve been hearing for a while about his excellent grassroots efforts and his fundraising successes, but this is one of the first articles on the underlying efforts that make it all possible.  In short, he’s taking advantage of mining and segmenting databases; conducting surveys of attitudes and behaviors; and building profiles of supporters, contributors, neighborhoods, and likely voters to help with everything from fundraising to get-out-the-vote efforts (for a more satirical look at the issue, see the Onion’s recent article on market research, which requires the NSFW warning typical of most Onion articles: it has rampant foul language).

I’ll let you read the article for yourself, but some of the most interesting insights to us are in the comments, as many readers conveyed their “big brother” privacy fears.  The research techniques of the Obama campaign are nothing new (many readers said as much), as these tools have been used in private industry for years.  What’s new is bringing this level of research sophistication to a Democrat’s political campaign (these techniques aren’t new to Republicans – the 2000 and 2004 Bush wins are generally attributed to Karl Rove’s use of similar methods, see this book for example).

But rather than focus on politics or the dangers of rampant data collection (which are potentially many and should not be minimized) I’d like to look at how such data mining is actually a good thing – and not just for the companies.

Earlier I went on Amazon to look for a book, and the home page was covered with product deals directly related to my hobbies (photography and climbing). Not only were the suggested products in the right categories, the camera accessories actually fit my camera, and the guidebooks covered areas I was interested in. The other day at the grocery store, the checkout printed coupons for products that I actually buy.

Maybe this is creepy to some people, but why wouldn’t you want to receive relevant advertising messages instead of random, irrelevant messages?  If I received a coupon for adult diapers at the grocery store I would be quite disturbed. If I’m in the market for a new TV, and someone wants to tell me that they have a sale, great!  Saves me time.

On a bigger scale, how much more efficient does this make the economy?  Companies can spend fewer dollars to reach more people who actually may buy their product (in this case, a President).

I’ll stop there, but you get my point – data mining and geodemographic segmentation aren’t all bad.  Yes, there should be restraints and effective oversight, and the information should be used ethically, but overall it can actually help improve your life.

Who uses this stuff anyway?

Do you ever wonder who uses market research? You may think, “marketers, of course,” but there can be many more audiences for market research than just marketers or even management. The findings could impact everyone in the organization, from the CEO to the front-line employees.

I came across this somewhat old article today about tailoring your reports to your reader. Makes complete sense, of course, and it’s something we strive to do as well (we’ve even given presentations with hardly a chart or graph when we thought there were better ways of presenting the data). But I’ve often wondered if our reports end up on the desks of people we weren’t aware of when creating the report, or, perhaps more likely, whether the results are not clearly communicated between one user and another (since they all look at them through their own eyes).

Writing different variants of reports – or at least different sections clearly tailored to different audiences – is obviously one solution. But you still have to know WHO your audience is and WHAT THEY NEED. I liked the article’s suggestion to go visit the actual client (not just the primary contact or purchaser); see who they are, how they interpret data, and what they need out of the research. So, if you hire us in the future, don’t be surprised if I come knocking at your door for a tour and meet-and-greet.

Don’t Stop….Graphing?

I’ll admit it – I’m a graph nerd. I tend to obsess over the minutiae of tables, charts, and graphs, in search of the best ways to present different types of data. I’ve fully accepted the idea – popularized by Edward Tufte and others – that many of the advances in graphing technology lead to pretty, but dysfunctional, graphs. And certain popular graph types are simply not suited for the types of data people often try to present with them (one of these days you’ll be subjected to a rant on pie charts, or on producing three-dimensional graphs from two-dimensional data).

But even a fundamentalist needs to have fun. So when I feel the need to combine my love for (bad) music and graphing, I turn to GraphJam, a site dedicated to “Pop Culture for People in Cubicles.” The charts, flow charts, and other data graphics people create and share at GraphJam indicate the endless human capacity for creating comedy from anything at hand — even Excel’s Charting Tools!

Corona Makes Fastest Growing Private Companies List … Again!


For the sixth time in seven years, Corona has made the “25 Fastest Growing Private Companies” list, compiled for the Denver area by the Denver Business Journal.  This year, we are ranked 5th (in terms of percentage growth in gross revenues) in our class, with a growth of 135% between 2005 and 2007.

Thanks to everyone – especially our clients – for making Corona Research so successful!

Forget Gen Y, What About the “Google Generation”?

Since our work on Digital Natives (pdf) for the Idaho Commission for Libraries (mentioned in this post), we’ve been noticing others’ work on defining the behavior of Gen Y and the subsequent generation (whom I refuse to call Gen Z), who have all grown up with ubiquitous computers, cell phones, and the Internet.

University College London, working for the British Library, recently released yet another interesting report examining individuals born after 1993 (whom the report dubs the “Google Generation”).

The report, based on literature reviews and analysis of library database search data, focuses on how the Google Generation searches for and uses information (and how that behavior is different from other cohorts), with a focus on searches for “scholarly” articles.

A great feature of this report is that the researchers have indicated their confidence (from low to very high) in the validity of each of the hypotheses and myths they set out to examine.

To me, one of the most arresting results* lay in this graph**:


Personal relationships are, across all cohorts, a common way to find scholarly articles, but the younger cohorts are more likely to search Google Scholar, examine an electronic table of contents, or visit a journal publisher’s website.

Members of the Google Generation are also much less likely to visit the library in person, which provides still more support to the idea that academic libraries of the future will feature far fewer physical stacks and far more virtual ones.

*Ok, this result isn’t perfect. Since the data is cross-sectional, we can’t be completely sure if the differences in behaviors between cohorts are due to the fact that they are in different generations or if there is some developmental change (i.e., some systematic difference in behavior, preferences, or training between older and younger individuals that younger individuals will eventually “grow out” of) that is causing the differences here.
**To nitpick some more, the graph isn’t perfect either. The y-axis isn’t labeled (nor is the x-axis, which we believe to be age), and the text accompanying the graph says only “the graph shows the relative value that members of the academic community place on a range of methods for finding articles,” so there’s no way to tell what scale was actually offered for the values (e.g., 1 to 6, or 1 to 10, etc.), or whether numerical “values” were accompanied by verbal labels that aren’t included on the graph. Also, the smoothed curves are unnecessary, and give the illusion of a continuous variable when, in reality, there are no values between the labeled cohorts. Simple straight lines connecting visible data points would have been clearer.
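For fellow graph nerds, the fix suggested in that footnote is easy to sketch. The snippet below is not the report’s actual data – the cohort labels, the values, and the 1-to-6 scale are all invented for illustration – but it shows the plotting choices being argued for: visible markers at the discrete cohorts, plain straight segments with no smoothing, and both axes labeled.

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen; no display needed
import matplotlib.pyplot as plt

# Hypothetical cohorts and "relative value" ratings (invented numbers).
cohorts = ["Pre-1973", "1973-1983", "1984-1992", "Post-1993"]
values = [2.1, 2.8, 3.6, 4.3]

fig, ax = plt.subplots()
# Straight lines between visible dots, no smoothing: the segments no
# longer imply measured values between the discrete cohorts.
ax.plot(range(len(cohorts)), values, marker="o", linestyle="-")
ax.set_xticks(range(len(cohorts)))
ax.set_xticklabels(cohorts)
ax.set_xlabel("Birth cohort")                   # label the x-axis...
ax.set_ylabel("Relative value (assumed 1-6)")   # ...and the y-axis
fig.savefig("cohort_methods.png")
```

Nothing fancy, which is the point: the reader can see exactly which points were measured and what the axes mean, with no curve-fitting standing between the data and the eye.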