Ethnography as it should be

This is the fifth in a series of posts on our recent trip to Africa. To see our other posts, click here. While in South Africa, we stayed at a classic game lodge.  We ate impala at dinner, slept in a tent (albeit the most luxurious tent I’ve ever seen), and spent our days driving around with a Zulu ranger in a Land Rover, looking for animals.  Photo: Elephant on safari.

The Land Rovers could hold up to nine people if they enjoyed crowding, and it was obviously in the lodge’s best interest to keep them full.  So whenever small groups arrived, they were assigned to a Land Rover and a ranger en masse.  Three or four couples might ride together during the day, and as a bonding exercise, they also ate their meals together at night, back at the lodge.  When we arrived, we saw some of the groups that had been together for a few days, and they all seemed to get along famously.  It was a good marketing idea for the most part, as it let guests meet each other and added a social element to the bumpy rides.

For the first couple of days, Karla and I rode alone, because no other groups checked in during an unexpected lull in business.  We knew we would eventually become part of a bigger group, though, and hoped it would include some interesting people.  We were a Fred and Wilma looking forward to meeting a Barney and Betty.  On day three, an extended family of six joined our Land Rover – another group of Americans, made up of three adult siblings, one sibling’s spouse, and the siblings’ parents. Photo: Our Land Rover (in which we were ignored).

It seemed like we would get along well, but … hey, we tried.  The family wasn’t mean or smelly or uncouth, but rather they completely and unilaterally ignored us.  All conversations were between themselves, and any attempts by us to converse were generally met with cursory answers and no back-and-forth.  I don’t know how you can ignore other people in the same vehicle, but they somehow accomplished it.  Overall, it was more than a little disappointing, and made for some rather awkward dinners and other forced interactions.  (Admittedly, however, it made for some humbling yet humorous moments, such as our last night as a group, when the patriarch called for a group picture.  We had driven as a group from the lodge into town, for a beach outing and a nice dinner: Karla and me, their family group, our ever-present ranger, and a temporary driver who had been assigned to us that day.  “We should get everyone together for a group shot,” the patriarch said at dinner, and the youngest daughter (in her 20s) gestured to us.  “Maybe we can get those people to take the photo.”  After nearly a week of spending hours together every day, they still hadn’t bothered to learn our names.  But it got worse:  as I patiently took their camera to snap the photo, the patriarch said, “Y’know, this isn’t a complete group shot, now that I think of it.  Go get that temporary driver!”)  Photo: Our retaliatory group photo, minus the other family.

So anyway, the nice thing about this situation is that, while we didn’t exactly make new friends, it was a great opportunity to do ethnographic research.  In ethnographic research, the idea is to “live amongst the natives” and observe their behavior.  One challenge is that you can’t “live amongst the natives” without affecting their behavior.  If they know you’re doing research, they may change their behavior as a result of your presence, which can taint the findings.

In this case, though, it’s safe to say that our presence did not change one thing about their behavior.  They knew nothing about us when we met, they knew nothing about us when we parted ways, and if pressed, I’m not sure they would have acknowledged that there were other people in the Land Rover.  Since there were occasional bouts of boredom when the animals weren’t leaping from the bush, Karla and I conducted an extensive ethnographic analysis of the family, charting and discussing how the family members interacted with one another, examining the social structure of their little tribe, and generally developing what could someday be a fascinating magazine article about family dynamics.

And you know the most interesting thing we found?  Remember last week’s “Safaris and girls, girls and safaris” post?   Well, it held true for this family.  The youngest daughter, even though she was legally an adult, was the one who had singlehandedly caused the trip to happen.  Hopelessly spoiled and relentlessly doted on by every other family member, she was the one whose initial idea was immediately adopted and turned into a $50,000 family vacation at the patriarch’s expense.  So yes, marketers, point your safari ads at girls and young women, because they appear to be the decision-making heart of your market.

We won’t extend this blog article to include the full results of our ethnographic study of the family, in part because they were dysfunctional enough that it would be more entertaining than educational.  Suffice it to say, though, that we know exactly how any marketing should be aimed at that family, and at any others unfortunate enough to have similar dynamics.  I’d be willing to bet that we know them better than they know themselves.

I guess if you ever find yourself on vacation with people who own a market research company, you should see if they’re taking notes at dinner.


18 Steps to preventing and catching online cheaters

In a recent post, I talked about the problem of professional respondents, and specifically people who cheat to earn their incentive.  At the end of the post, I posed the question, “What can we do?”

Here I provide some basics on how to ensure the quality of your online data.

Survey Design

  1. Screeners. Screener questions shouldn’t broadcast which answers will qualify a respondent for the survey.
  2. Design. Is your survey engaging, so people don’t want to “speed” through it?  Once respondents become bored, they’ll hurry to finish and/or lose the focus needed to answer your questions accurately.
  3. Engagement. Does your survey make the respondent feel like they’re contributing?  That they’re able to tell you what they really think?  Are all the possible answer choices present so respondents don’t become frustrated that they cannot answer?
  4. Length. Is your survey sufficiently “short”?  Longer surveys will cause respondents to not complete the survey, or worse, speed through it.  What is sufficient, of course, depends on the type of respondent and the subject matter.
  5. Experience. Does the survey provide a good user experience?  Instructions should be clear, layout clean, and repetition of questions kept to a minimum.
  6. Reality checks. Include questions telling respondents which answer to select to test that they’re reading the question.  For example, “Please check answer choice two in this question.”
  7. Consistency and opposite wording. Are respondents consistent in their responses to similar questions?  Ask two similar (though sometimes reversed) questions – often at different spots in the survey.
  8. Red herrings. Does a respondent indicate they have done or seen something that does not exist?  Include nonexistent choices in your response options.  (A sketch of how these traps might be flagged appears after this list.)
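To make the reality-check and red-herring items concrete, here is a minimal sketch of how such traps might be flagged in a survey export.  It assumes a Python/pandas workflow, and the column names (q_attention, q_brands_seen) and the planted “BrandX” brand are hypothetical, not taken from any particular survey platform.

```python
import pandas as pd

# Hypothetical survey export: one row per respondent.
df = pd.read_csv("responses.csv")

# Reality check (item 6): the question told respondents to select choice 2.
df["failed_reality_check"] = df["q_attention"] != 2

# Red herring (item 8): "BrandX" does not exist; q_brands_seen holds a
# semicolon-delimited multi-select answer.
df["picked_red_herring"] = df["q_brands_seen"].str.contains("BrandX", na=False)

# Set tripped respondents aside for review rather than deleting them
# automatically; a single miss may be an honest slip.
flagged = df[df["failed_reality_check"] | df["picked_red_herring"]]
print(f"{len(flagged)} of {len(df)} respondents flagged for review")
```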

Sample Development (as it mostly relates to using panels)

  1. Joining. How was the panel developed?  Can anyone join?  Ideally, no.  The panel should recruit by invitation rather than allowing anyone to join or relying on a snowball method where friends of members can join.  Additionally, the panel should be recruited as “randomly” as possible for the given population.
  2. Frequency. How often do panel members participate in surveys?  (While it is presumed that taking too many surveys makes a respondent less desirable, exactly how many is too many is not yet known.)
  3. House cleaning. How is the panel “cleaned”?  Does the panel filter out duplicate registrations (people who are registered twice)?  Does it remove known cheaters?
  4. Overlap. If you’re using multiple panels for one project, is there overlap?  Can you filter for duplicates?
  5. Personalization. How personalized is your invitation?  A personal touch helps the respondent know their contribution is important.
  6. Tokens or other personal codes. Does your invitation allow a respondent to take the survey only once, and prevent him/her from passing the survey on to others? (See another recent post on this issue here.)  Some type of individual code should be provided, ideally hidden so the respondent cannot alter it.  (A minimal token-generation sketch follows this list.)
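For the tokens item, here is a minimal sketch of one approach, using Python’s standard secrets module; the email addresses and survey URL are placeholders.

```python
import secrets

def make_invite_token() -> str:
    """Return a URL-safe, hard-to-guess one-time survey token."""
    return secrets.token_urlsafe(16)  # 16 random bytes, about 128 bits

# Issue one token per panelist.  Store the mapping server-side and mark each
# token used after the first completed survey so links can't be shared.
panelists = ["alice@example.com", "bob@example.com"]
tokens = {email: make_invite_token() for email in panelists}
for email, token in tokens.items():
    print(f"{email} -> https://survey.example.com/s?t={token}")
```

The key design point is that the token is validated and invalidated server-side; hiding it in the link only keeps honest respondents honest.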

Data Cleaning

  1. Compare survey responses. Are different respondents’ surveys identical in your database?  Identical answers alone may not be evidence enough to discard the data – depending on the survey, two people may plausibly respond the same way – but combined with the other tests below, identically completed surveys can build the case for discarding data.
  2. Digital fingerprinting. Was the same machine used to take multiple surveys?  Digital fingerprinting varies in what data it collects, but at a minimum it often includes browser settings, IP address, and display settings.
  3. Speed. How quickly did the respondent finish?  If a respondent took exceptionally little time to complete the survey (often judged against the completion times of other respondents), the survey should be flagged.
  4. Patterns. Are there pretty pictures in your data?  Data should be checked for straight-lining.  (Asking divergent questions, as noted above, can also help.)  (A rough sketch automating these four checks follows this list.)
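Here is a rough sketch of how the four checks above might be automated on a survey export.  Again it assumes Python/pandas, and the column names (duration_sec, ip_address, user_agent, and a q1–q10 rating grid) and the half-the-median speed threshold are illustrative assumptions rather than standards.

```python
import pandas as pd

# Hypothetical export: "duration_sec" is completion time, "ip_address" and
# "user_agent" stand in for a digital fingerprint, and q1..q10 are ratings
# answered on a shared scale.
df = pd.read_csv("responses.csv")
rating_cols = [f"q{i}" for i in range(1, 11)]

# 1. Identical surveys: answer patterns that match another row exactly.
df["duplicate_answers"] = df.duplicated(subset=rating_cols, keep=False)

# 2. Fingerprinting (crude version): multiple surveys from one machine.
df["shared_fingerprint"] = df.duplicated(subset=["ip_address", "user_agent"],
                                         keep=False)

# 3. Speeders: e.g., finishing in under half the median completion time.
df["speeder"] = df["duration_sec"] < 0.5 * df["duration_sec"].median()

# 4. Straight-lining: zero variance across the whole rating grid.
df["straight_liner"] = df[rating_cols].nunique(axis=1) == 1

flags = ["duplicate_answers", "shared_fingerprint", "speeder", "straight_liner"]
suspect = df[df[flags].any(axis=1)]
print(f"{len(suspect)} of {len(df)} rows flagged for review, not auto-deletion")
```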

This is by no means a comprehensive list of tools to prevent and catch cheaters – with the dynamic nature of the Internet, what works today will likely fail to protect your data tomorrow.

What else have you done to ensure high quality data?

Photo from http://www.wikihow.com/Draw-Using-Scantrons


Safaris and girls, girls and safaris

This is the fourth in a series of posts on our recent trip to Africa.  To see our other posts, click here.  This is a random observation, but we couldn’t miss it.  During part of our vacation, we went to a game lodge in South Africa.  While there was certainly a variety of household and family types going on safari, one oddity struck us:  a very disproportionate number of families had young daughters, ranging in age from perhaps six to twelve.  For every boy we saw in that age group, we probably saw six or more girls.  If you’re a peoplewatcher by nature, the pattern was impossible to ignore.

Do young girls like African wildlife more than young boys?  I don’t know.  I don’t have kids myself, and maybe every parent in the world would nod knowingly and say, “Oh, yeah.  They’re in their giraffe phase at that age.”  Maybe that’s the case.

However, it struck us that the only reasonable explanation for seeing such a strong gender skewing is that someone – the daughters or the parents – was making a decision to go on safari for their daughters’ enjoyment.  This means that a significant portion of the power to select an African vacation may not lie in the hands of the people with the pocketbooks, but rather in the wheedling power or the wistful vacation dreams of preteen girls.  That may be insightful to those who market such vacations.

This observation links in with a global theory we’ve been developing about the social development of the next generations, by the way, but that’s not quite ready to be unfurled yet.

 


Trends in travel…and research

I attended the Governor’s Colorado Tourism Conference in Beaver Creek. It was a fun three days; we ran into several people we’ve worked with and met several more we hope to work with.  The conference started off with Daniel Levine from the Avant-Guide speaking about “The Five Social Trends that will propel Colorado Tourism into the Next Decade.”

It got me thinking – these trends aren’t necessarily specific to the travel industry – so how do they relate to market research?  Below I list each trend discussed and a few top-of-mind ways it could impact our industry.

1. Experiential

It’s no longer just about physical goods, but about the experiences those goods can provide.  In travel, this means adventure travel, close encounters with your destination, and essentially “doing” instead of just “seeing.”  Beyond travel, we see this trend in everyday life in our choices of where we eat, how we spend our free time, and even what we do for a living.

As researchers, we’ve seen the impact of this trend.  Not only is what we measure changing, but how we measure it as well.  Survey questions and focus group topics must take it into account, of course, but it also ties in with a larger trend in research – ethnographic research.  What better way to learn about a target audience’s experiences than to observe or interview them while they are engaged in the very activity we’re studying?

2. Personalization

We see personalization everywhere today.  From the jewels you put on Crocs to engraved iPods and laptop skins, mass customization has allowed for ever greater personalization.

With research, too, we see greater personalization on two levels.  First, with our clients.  At Corona, for example, I don’t think I’ve ever done the same project twice (unless it was a follow-up for the same client); every project is custom-built around that client’s exact goals.  Second, with our research subjects.  People don’t want to respond to bland, generic surveys (or any other mode of data collection); they want to tell you about their specific experience.  By designing better surveys with relevant, specific questions, skip patterns, and piping (i.e., using the answer to one question in the wording of another – a toy example follows this paragraph), and by correctly sampling the right population, we can make the research experience significantly more relevant and, in the process, ensure that findings are as revealing and actionable as possible for our clients.
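As a toy illustration of piping – the question wording and the answer below are invented – the idea is simply to reuse an earlier response when wording a later question:

```python
# Toy illustration of answer piping; the question and answer are made up.
favorite_store = "REI"  # pretend this was the respondent's answer to Q1

# Q2's wording is built from the Q1 answer, making the survey feel personal.
q2 = (f"You said you shop most often at {favorite_store}. "
      "What do you like most about shopping there?")
print(q2)
```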

3. Transparency

We’ve all seen this one in travel.  It’s the customers’ reviews of hotels, airlines, and so on.  It’s also services like Farecast that provide information on airfare trends and whether or not you’re getting a good deal.

Initially, this one stumped me.  How can this apply to the professional services sector, and especially to market research?  That’s the knee-jerk reaction many industries or companies have – “this trend can’t or doesn’t apply to me.”  Of course, that’s wrong.  They’re trends for a reason, and transparency does impact market research.  While the information may not (yet) be as readily available or organized as it is on travel sites, it is out there.  The important thing here is how we participate.  Several months ago I set up Google Alerts to notify me anytime “Corona Research” is mentioned in news, blogs, or other sites on the web, so we can be aware of conversations about us.  But that’s still passive, and while we may not add a “rate this product” feature to our site quite yet, we are looking at ways to facilitate feedback from our past clients to our future clients beyond just listing our references.

4. Mobile

Mobile technology is allowing travelers to react quicker and change their plans on a dime.  All sorts of new apps are being created, from truly paperless check-in to buying tickets and making reservations on your phone.

To see our recent thoughts on mobile research in a previous post, click here.

5. Marketing sustainability

Finally, marketing sustainability.  You don’t even have to look hard anymore to see companies promoting how “green” they are (though I would argue that more often than not it’s more hype than substance – but maybe I’ll discuss that in another post).  The argument here is that you not only have to be green, but you have to communicate it to your customers too.

Similar to mobile, we’ve talked about green efforts in a previous post and newsletter.  But whereas the travel market has proven that people will pay more to stay at green resorts, would research customers do the same?  Sure, it’s nice that we’re “greener,” but is that a reason a company would hire us?  Would they be willing to pay to offset the carbon footprint of their project (paying would be the easy part; calculating the footprint of a research project would be the fun part)?  We’d love to hear what you think about this.

Image:  Beaver Creek from the chairlift.


Professional survey respondents

We get a lot of inquiries about how to join our panel or participate in our focus groups, and consequently we spend a lot of time explaining that we don’t maintain this kind of recruiting list for participants.  (We custom recruit for almost all our groups.  We’ll explain why below.)  Some inquiries come from people who have just participated in research for the first time, are shocked that it wasn’t a scam, and enjoyed being paid to share their thoughts.  Others are old hands at research participation and consider it a form of employment.  Hopefully this post will help both potential participants and those who commission and conduct market research understand why recruiting from a list of interested parties is undesirable.

In a quick search, I easily found several sites helping people get on a list (here and here).  I personally like the photos of happy people in the video and the association of market research with telemarketers in the last link…arghh!  For another 2.1 million links for paid surveys, click here.

Now, I’m not against the entrepreneurial spirit of earning a buck, but professional respondents are not good for our industry and therefore not good for our clients – especially when they start to misrepresent themselves in order to participate.  Why?  Because completing the survey (or other research mode) and earning the incentive becomes their only objective.  The problem is compounded when respondents outright lie to qualify for the survey, and as respondents take more and more surveys, they become more skilled at making it past the screener questions.  Additionally, respondents who become too practiced at taking surveys – even when not cheating – may not give quality responses due to lack of focus.  Or they may simply become “tuned” to marketing in their everyday lives, and in a sense be too sophisticated to represent the “average person” targeted by the marketing campaign.

This goes back to the core tenet of survey research – sampling.  If your sample is not representative of your target population, then accurate conclusions cannot be drawn.  The only group that cheaters represent is cheaters themselves (and even then they wouldn’t fill out the survey truthfully!).  Even when no one is cheating, similar problems arise from using “professional participants,” who are unrepresentative because they’ve become “experts” at noticing marketing.

And this isn’t just some methodology-obsessed research firm speaking, either.  Big companies are concerned too.  A recent BusinessWeek article on the quality of online polling noted that P&G is enforcing stricter guidelines when conducting Web polling.  The article cites one instance in which two different surveys came to completely different results regarding the attractiveness of a product.

So what can we do?  Stay tuned for an upcoming post on ways to limit professional respondents, cheaters, or respondents who are just plain lazy.


The importance of “other” both here and in Madagascar

This is the third in a series of posts on our recent trip to Africa.  To see our first two posts, click here and here. We checked into a hotel in Antananarivo, and I was delighted to see that the Malagasy people embrace market research.  Inside our room was a customer service survey asking about various attributes of the hotel.

However, upon reviewing the survey, it struck me that the survey was dealing with the wrong issues.  It was a classic hotel survey:  was the room neat and clean?  Was my check-in fast?  How was the food at the restaurant?  The survey was asking about the basic infrastructure of the hotel.

Those questions are fine, and they have value.  However, as an international traveler in a developing country, I am primarily looking for features that weren’t covered in the survey.  Sure, I want a neat room and fast check-in.  But what I really need is a feeling of safety and security in my room, and a good safe.  I need to be able to use a credit card and to exchange money.  I need the ability to get small Malagasy currency for the innumerable tips I have to bestow, when the airport only gives me the equivalent of hundred-dollar bills.  I really, really want hot water in the shower, and a shower head height that halfway works for a giant American, and it’d be nice if there were a guide to nearby tourist places where I could walk, and information about places where I could find Internet access.

Fortunately, the end of the survey contained the question, “What other issues would you like to share?”  This allowed me to let them know about my small currency needs and my worry about the people outside the hotel who were trying to take my suitcase without telling me that they were hotel employees.  Those are the things that will help them become more desirable to foreign tourists, and by including that last question, I was able to tell them so.

Even on a well-designed survey, adding an “other” question gives the researcher a chance to catch nuances that might otherwise be missed.  It is often in these free-response questions that, through the respondents’ own words, real insights can be gained.

 


Who Uses the Internet? (Part 2: Demographics)

Part one of our examination of who uses the Internet looked at the question geographically.  In part two, we’ll look at Internet usage nationwide (data again via the NTIA) broken down by several important demographic variables.

In all of the graphs that follow, in-home Internet usage (the green portion of the bars) and outside-the-home Internet usage (the gold portion) are summed to derive the total Internet usage rate.  The bars correspond to the percentage of households with each type of Internet usage, and the demographic categories describe the adult reference person for the household, called the householder.

Education

Internet use rises steadily with education (here, the highest level of schooling completed by the householder): only about a quarter of households where the householder completed only an elementary education use the Internet, climbing to 90 percent of households headed by college graduates.  College graduates love their Internet (and are also more likely to have the financial resources to pay for it), so they are the ones more likely to be reached by an Internet survey.

Race

Households with Asian-American and Caucasian householders have the highest rates of Internet usage, at 82 percent and 75 percent, respectively.  Households with American Indian householders have the highest rate of outside-the-home Internet use, at 18 percent.

But beyond the fact that a fifth of households with Asian-American householders and a quarter of households with Caucasian householders can’t be contacted via email, web ads, Twitter, or social networking lies a more sobering reality: Internet marketing and research won’t ever reach four out of ten households with American Indian or African-American householders, nor nearly half of all households with Hispanic householders.

Family Income

Similar to education level, Internet usage rises with family income (with the curious exception of the 2 percent of all households who earn less than $5,000 annually).  Additionally, as income rises, the percentage of households that only use the Internet outside of their home falls.  Only 58 percent of all households earning less than $50,000 a year use the Internet at all.

What have we learned?  That the digital divide still exists and that the Internet, like any other medium, has plenty who will miss the message.  Internet surveys and marketing will disproportionately miss those with lower incomes, those who are racial and ethnic minorities (with the exception of Asian-Americans), and those who have less education.




Customer service and the little things

This is the second in a series of posts on our recent trip to Africa.  To see our initial post, click here.  One thing that’s nice about traveling is being out with the public.  As researchers, we’re natural peoplewatchers, and this helps us with research designs when we’re back at our desks.

Our first interesting observation took place before we even left the United States.  We were sitting at the airport waiting to board, near the counter at the gate.  A small line was forming as people sought seat changes or upgrades or whatever else brings you to the gate counter, and the single gate clerk was slowly losing ground.  The line went from two to three to half a dozen, and every few minutes it seemed to get a little longer.

No one had an issue with the gate clerk, because she was obviously working hard and working efficiently.  However, a turn of events took place that turned opinions sour quickly.

A group of three other gate agents came to the counter.  At first, we assumed that they were coming to help, but apparently they were off duty.  They merely stood to the side of the counter and joked around and laughed, having a grand old time.

As the line continued to grow, more and more people arrived to see one person working and three people goofing off.  The goofer-offers were still in uniform, so the people in line apparently presumed that they were simply ignoring the customers, and we watched as the mood in the line quickly went from patient and positive to irritated and very negative.

The thing that I found most fascinating about the situation is that a customer service study would have shown poor service at that gate, when in actuality the lone agent was working very hard.  A manager doing a customer service study would have wrongly assumed that she was underperforming, when in reality she was the victim of some misperceptions.  It reminded us as researchers that sometimes there may be factors beneath the level of the data that can corrupt one’s conclusions, and it also confirmed that a person can learn a lot about customer service experiences via observational research.

We also learned that off-duty employees in uniform shouldn’t be goofing around near a line of customers.

 


Who Uses the Internet? (Part 1: State by State)

Who can’t answer your Internet survey? Who is unable to view your spiffy new website? Who won’t be reached by your email newsletter? In survey research, we call the answer to these questions coverage error: the proportion of individuals in your population of interest who cannot be sampled or reached. Although it’s a statistical concept, coverage error is also vital to getting your message out to those you want to hear it.

Coverage errors can be very different for populations depending on the mode of the survey. The rule of thumb around the Corona offices is that (with the exception of the homeless) all households have a door you can knock on, fewer have a home telephone you can reach through a random digit dial sample, fewer have an address you can reach using a commercial mailing list, and even fewer use the Internet and have email addresses. This hierarchy is why Corona recommends only conducting online surveys with targeted populations who are likely to be wired, as the coverage error inherent in online surveys of the general population means you’re going to get very biased results.
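To see why that bias arises, here is a back-of-envelope sketch – every number in it is hypothetical – showing how an online estimate drifts when the unreachable offline households differ from the online ones on the measure of interest:

```python
# Back-of-envelope coverage bias; all numbers are hypothetical.
online_share = 0.70      # households an online survey can reach
support_online = 0.60    # support for some measure among online households
support_offline = 0.40   # support among the unreachable offline households

true_support = (online_share * support_online
                + (1 - online_share) * support_offline)
online_estimate = support_online  # offline households are simply invisible

print(f"True population support:  {true_support:.0%}")     # 54%
print(f"Online survey estimate:   {online_estimate:.0%}")  # 60%
print(f"Bias from coverage error: {online_estimate - true_support:+.0%}")  # +6%
```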

Now, thanks to the Federal government, we have firm data on who is missing from the wired population, so we have an idea of just how bad the coverage error is. NTIA, the National Telecommunications and Information Administration, just released some very interesting data on household Internet use as a part of their report on the state of broadband in the USA.

So who uses the Internet? In this post we’ll look at this question geographically—which states have higher and lower rates of Internet usage. And in part two we’ll look at this question demographically, to see how income, race and ethnicity, education, and household type are related to Internet usage.

What parts of the country have higher and lower Internet usage?

The banks of the Mississippi are a relatively poor place to do a general-population online survey. Six of the ten states with the lowest rates of household Internet use border the river: Mississippi (60 percent of households), Arkansas (62 percent), Louisiana (63 percent), Tennessee (66 percent), Kentucky (67 percent), and Missouri (67 percent).  The other four states with the lowest levels of household Internet use are West Virginia (58 percent), Alabama (61 percent), Oklahoma (64 percent), and South Carolina (67 percent).

The West is the place to find high rates of Internet usage: five of the ten states with the highest rates are western – Alaska (84 percent), Utah (82 percent), Washington (82 percent), Colorado (79 percent), and Wyoming (76 percent). The other five are New Hampshire (81 percent), Minnesota (79 percent), Vermont (79 percent), Kansas (77 percent), and Maryland (76 percent). Even in Alaska, which has the highest rate of household Internet usage, about one out of every six households does not use the Internet.

Those who use the Internet at home are more likely to respond to surveys, since the Internet is more readily available to them.  Again, we see lower rates of in-home Internet availability in the southern United States and along the Mississippi, with rates hovering between 40 and 60 percent.  This means that in these areas, nearly half of all households do not have Internet access in their homes! The highest rates (between 70 and 80 percent) are again in the West and Northeast.

Broadband connections allow for easier use of services such as streaming video, VOIP, and other bandwidth-intensive pursuits (including embedding video clips in surveys).

In twenty-four states, fewer than half of all households currently have in-home broadband access, and six states have rates below 40 percent.  Those six states, all in the southern United States, are West Virginia (33 percent), Mississippi (33 percent), Alabama (37 percent), Arkansas (38 percent), Oklahoma (39 percent), and South Carolina (39 percent).

In only three states is the rate of in-home broadband usage higher than 60 percent: New Hampshire (65 percent), Alaska (63 percent), and Massachusetts (61 percent).

This brief analysis shows that if a survey of the general population is your goal, the Internet is not the best way to reach it: your coverage error will be high in areas of the southern United States, and even in the most connected states you will be unable to reach roughly one in six households.  In our next post, we will examine just how bad the coverage error is nationwide across different demographic variables.