The Corona-Nerd Gift Guide

Time is running out for holiday gift buying. Don’t panic, though: Corona has you covered, at least for the nerds in your life.

Here are 10 gift ideas for your nerds, or at least those with an offbeat sense of humor.

1. The complete National Geographic Atlas set

Who doesn’t like a high-quality map? Daydream of far-off places for hours and stay up to date on current geopolitical boundaries.

2. Data visualization book

Maybe you want to give something even more visually striking. David McCandless’s stunning infographics help you visualize data and connect the dots.

3. How to program data visualization

Or, perhaps, you want to program your own. Learn how from a reporter and visual journalist at FiveThirtyEight.com.

4. ACME Klein Bottle

For the mathematician. Don’t know what a Klein bottle is?

5. A normal (distribution) pillow

For the statistician. Other distributions are also available (perhaps for the not-so-normal?).

6. “Correlation does not equal causation” t-shirt

Everyone who works in research should have one.

7. Tetris lamp

Instead of those 8-bit gifts, get the gift that keeps on entertaining.

8. Gallium spoon

Gallium “melts” in hot liquid. Perfect for scaring your coworkers during their morning tea or coffee.

9. Delayed Gratification

Give the gift that keeps on giving, albeit slowly. A leading publication of “slow journalism,” Delayed Gratification reports with the power of hindsight.

10. The official Coronerd

Beth Mulligan, Principal, knitted our unofficial mascot last year for all employees. Unfortunately, it’s not for sale at this time.



To Force or not to Force (an answer): It’s a complicated question

Survey design can be a complex and nuanced process. We have written many posts on the subject, including how to ask the right people to participate and how to ask the right questions, but one area we don’t talk about much is how the answer options you provide in a survey can influence your results. This is less of an issue in a verbal survey (such as a telephone survey), since interviewers are able to accept answers that aren’t directly offered, but it can be a critical issue in online and mail surveys, where respondents can read all of the answer options available to them before deciding how they want to respond. Here are a few of the decisions you may need to consider when designing questions for these types of surveys:

None of the above and Other

In cases where you provide respondents with a list of options to choose from, it is important that every respondent is able to answer the question. For example, consider the following hypothetical question for a survey of college students:

  • Which of the following best describes your year in school?
    • Freshman
    • Sophomore
    • Junior
    • Senior

In many cases, that question may be perfectly reasonable.  However, what if a student is in their fifth year?  Or what if the school has non-traditional students?  In these cases, respondents will likely either select a random answer or simply abandon your survey.  Neither of these possibilities is ideal, so it may be useful to include a “none of the above” option or an “other” option so that everyone can feel comfortable selecting an answer.

Our recommendation: Be comprehensive in your answer options, and give “other” and “none of the above” choices any time you aren’t sure you have all of the possibilities covered.

Don’t know

Another way in which this issue manifests itself is with regard to providing a “don’t know” option.  For example, consider the following hypothetical question:

  • How would you rate your opinion of Organization X?
    • Very positive
    • Somewhat positive
    • Neutral
    • Somewhat negative
    • Very negative

By not including a “don’t know” option, you are in effect forcing people to make a decision about the organization.  However, if someone in fact doesn’t know enough about the organization, they may potentially choose an answer that they don’t truly believe or abandon their response altogether.

On the other hand, including a “don’t know” option in your survey may not always be ideal either. While a portion of your respondents may legitimately need to choose that answer, a number of others may choose it simply because it’s an easy option that doesn’t require a lot of thought. If you are confident that all respondents know the organization well, including a “don’t know” option may actually reduce the quality of your data because some respondents will take the easy way out. (As an aside, a similar issue arises with including a “neutral” option, but we’ll save that for another blog post!)

Our recommendation: Leave “don’t know” off if you believe that respondents should be able to form an opinion about the question, but include the option if there’s a good chance that many respondents may truly not have an answer.

Prefer not to answer

Similar to the above, when asking questions about sensitive topics, it can be beneficial to allow respondents to choose not to answer the question. Household income, for example, is often a valuable segmentation criterion for understanding consumers, but many survey respondents are reluctant to share such information. Including a “prefer not to answer” option may make respondents more comfortable with participating in your survey. However, even more so than with the “don’t know” option, suggesting that it’s OK to not answer the question will undoubtedly increase the number of respondents who decline, dramatically increasing the amount of missing data for the question.

Our recommendation: Put sensitive questions toward the end of the survey so that respondents are already comfortable with answering, but don’t include a “prefer not to answer” option unless the question is particularly objectionable for respondents.  Instead, consider allowing respondents to simply skip the question as described below.

Not forcing responses

That brings us to our final topic, which is, in a sense, an answer option that you can’t even see: not requiring responses to survey questions in an online survey. Even if you’ve tried to be comprehensive in your answers and provide alternatives such as those described above, there is always a possibility that a respondent may simply be unwilling to answer a question. If you require them to do so, they will likely either choose a random response or abandon your survey entirely. If missing data will cause problems with your survey logic (for example, if a response is needed to direct respondents to the appropriate follow-up questions) or with your analysis (for example, for a segmentation variable you plan to rely on), it may be necessary to require a response, but be sure to consider the implications of doing so.

Our recommendation: Use forced responses sparingly.  It’s usually better to allow someone to skip a question than it is to have data for a question that isn’t accurate.


The Power of Ranking

One of the fun tools of our trade is index development.  It’s a way to rank order things on a single dimension that takes into account a number of relevant variables.  Let’s say you want to rank states with respect to their animal welfare conditions, or rank job candidates with regard to their experience and skills, or rank communities with respect to their cost of living.  In each of these cases, you would want to build an index (and indeed, we have, for several of those questions).

Index-based rankings are all the rage. From the U.S. News & World Report ranking of Best Colleges to the Milken Institute’s Best Cities for Successful Aging, one can find rankings on almost any topic of interest these days. But these rankings aren’t all fun and games (as a recent article in The Economist points out), so let’s take a look at the stakeholders in a ranking and the impacts that rankings have.

  1. The Audience/User. Rankings are a perfect input for busy decision makers. They help decision makers optimize their choices with very little effort. As such, they influence behavior, driving decisions about where to apply to college, whom to hire, where to go on vacation, where to move in retirement, and so on. But if the rankings are based on variables other than those that matter to users, users can be misled.
  2. The “Ranked”. For the ranked, impacts reflect the collective decisions of the users.  Rankings impact colleges’ applicant pools, cities’ tourism revenues, and local economies.  And on the flip side, rankings influence the behavior of those being ranked who will work to improve their standing on the variables included in the index.  As the old adage goes, “what gets measured gets done.”
  3. The “Ranker”. The developer of the index holds a certain amount of power and responsibility. There are both mathematical and conceptual competencies required (in other words, it’s a bit of a science and an art). The developer has to decide which variables to include and how to weight them, and those decisions are often based on practical concerns as much as, or more than, on relevance to the goal of the measurement. (There is usually a strong need to use existing data sources and data that is available for all of the entities being ranked.) Selecting certain variables and not others can have downstream impacts on where ranked entities focus their efforts for improvement, even when those variables were chosen for expediency rather than impact.

To illustrate, I built an index to rank “The Best Coffee Shops in My Neighborhood.”  I identified the five coffee shops I visit the most frequently in my neighborhood and compiled a data set of six variables: distance from my home, presence of “latte art,” amount of seating, comfort of seating, music selection, and lighting.

My initial data set is below.  First, take note of the weight assigned to each variable.  Music selection and seating comfort are less important to my ranking than distance from home, latte art, amount of seating, and lighting.  Those weights reflect what is most important to me, but might not be consistent with the preferences of everyone else in my neighborhood.

[Image: table of the initial data set, with variables and weights]

Next, look at the data.  Distance from home is recorded in miles (note that smaller distances are considered “better” to me, so this will require transformation prior to ranking).  Latte art is coded as present (1) or absent (0).  This is an example of a measure that is a proxy for something else.  What is important is the quality of the drink, and the barista’s ability to make latte art is likely correlated with their training overall – since I don’t have access to information about years of experience or completion of training programs, this will stand in instead as a convenience measure.  Amount of seating is pretty straightforward.  Shop #5 is a drive-through.   Seating comfort is coded as hard chairs (1) and padded seats (2).  Music selection is coded as acceptable (1) and no music (0).  Lighting is coded as north-facing windows (1), south-facing windows (2), and east- or west-facing windows (3), again, because that is my preference.
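
To make the mechanics concrete, here is a minimal Python sketch of how a weighted index like this could be computed. The shop values below are illustrative placeholders rather than the figures from my actual table, and the column names and exact weights are assumptions for the example; the steps (transform, scale, aggregate, rank) are the part that matters.

```python
import pandas as pd

# Illustrative placeholder values -- not the actual figures from the table above.
shops = pd.DataFrame({
    "distance_miles":  [1.0, 0.5, 0.8, 2.0, 1.5],  # smaller is better
    "latte_art":       [1, 1, 0, 1, 0],            # present (1) / absent (0)
    "seating_amount":  [20, 35, 15, 25, 0],        # seats; Shop #5 is a drive-through
    "seating_comfort": [2, 2, 1, 1, 1],            # padded (2) / hard chairs (1)
    "music":           [1, 1, 1, 0, 0],            # acceptable (1) / no music (0)
    "lighting":        [2, 3, 1, 3, 2],            # window-facing preference, coded 1-3
}, index=[f"Shop #{i}" for i in range(1, 6)])

# Assumed weights: music and seating comfort count for less than the other variables.
weights = {
    "distance_miles": 0.25, "latte_art": 0.20, "seating_amount": 0.20,
    "seating_comfort": 0.05, "music": 0.05, "lighting": 0.25,
}

# 1. Transform: flip distance so that larger values are better for every variable.
scored = shops.copy()
scored["distance_miles"] = -scored["distance_miles"]

# 2. Scale: min-max scale each variable to 0-1 so different units are comparable.
scored = (scored - scored.min()) / (scored.max() - scored.min())

# 3. Aggregate: weighted sum across variables.
scored["index"] = sum(scored[col] * w for col, w in weights.items())

# 4. Rank: 1 = best.
scored["rank"] = scored["index"].rank(ascending=False).astype(int)
print(scored[["index", "rank"]].sort_values("rank"))
```

Flipping the sign on distance before scaling is what turns “closer is better” into “bigger is better,” so every variable points in the same direction before the weighted sum.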

After I transform, scale, aggregate, and rank the results, here is what I get.

[Image: table of transformed, scaled, and ranked results]

These results correspond approximately with how often I visit each shop, suggesting that these variables have captured something real about my preferences.

Now, let’s say I post these rankings to my neighborhood’s social media site and my neighbors increase their visits to Shop #2 (which ranked first). My neighbors with back problems who prefer hard chairs may be disappointed with their choices based on my ranking. The shop owners might get wind of this ranking and want to know how to improve their standing. Shops #3 and #5 might decide to teach their employees how to make latte art (without providing any additional training on espresso preparation), which would improve their rankings but would be inconsistent with my goal for that measure, which is to capture drink quality.

With any ranking, it’s important to think about what isn’t being measured (in this example, I didn’t measure whether the shop uses a local roaster, whether they also serve food, what style of music they play, what variety of drinks they offer, etc.), and what is being measured that isn’t exactly what you care about, but is easy to measure (e.g., latte art).  These choices demonstrate the power of the ranker and have implications for the user and the ranked.

Perhaps next we’ll go ahead and create an index to rank Dave’s top ski resorts simultaneously on all of his important dimensions.

What do you want to rank?


Big Insights can come in Little Numbers

On many of our research projects, the sample size (i.e., number of people who are surveyed) directly relates to research cost.  Costs typically increase as we print and mail more surveys or call more people. Normally, the increase in sample size is worth the extra cost because the results are more likely to accurately reflect the population of interest; however, is a big sample size always necessary?

Here is an example of when sample size does not need to be very large in order to draw valuable insights. Let’s say you are the communications manager for an organization that wants to improve public health by increasing broccoli consumption. For the past year, you have diligently been publicizing the message that broccoli is good for your digestive system because it is high in fiber. Lately, your director has wondered whether another message may be more effective at persuading people to eat broccoli—maybe a message that touts broccoli’s ample amount of Vitamin-C, which can help fight off the common cold. Switching your communication campaign’s key message would be expensive, but probably worth it if the new message was dramatically more effective at persuading people. However, if the Vitamin-C message was only marginally more effective, then it might not be worth spending the money to switch. Your boss tasks you with conducting research to compare the effectiveness of the fiber message to the Vitamin-C message.

If you have undertaken message research in the past, you may have heard that you need a large and randomly drawn sample in order to draw reliable insights. For example, you might survey your population and collect 400 responses from a group who saw the original fiber message and 400 responses from those who saw the new Vitamin-C message. While collecting 800 responses might be valuable for some types of analysis, it is probably more than you need to answer the research question described above. Indeed, you might only need to collect about 130 responses (65 from each group) to answer the question “Which message is more effective?” Why so few?

Sixty-five responses from each group should reveal a statistically significant difference if the effect size is moderate or greater. (In social science research, we use the term effect size to quantify how different two groups really are: for example, how much more or less effective a new message is than an old one. A small effect requires careful analysis and a large sample to detect, while a large effect is obvious and easy to spot.)
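
As a rough check on that figure, here is a minimal power-analysis sketch using statsmodels. It assumes a comparison of two independent groups, a “moderate” effect defined as Cohen’s d = 0.5, a two-sided test at 5 percent significance, and 80 percent power; those are common conventions rather than anything dictated by the example above.

```python
from statsmodels.stats.power import TTestIndPower

# Assumed conventions: Cohen's d = 0.5 ("moderate"), alpha = 0.05, power = 0.80.
n_per_group = TTestIndPower().solve_power(
    effect_size=0.5, alpha=0.05, power=0.80, alternative="two-sided"
)
print(round(n_per_group))  # prints 64 -- roughly 64 respondents per group
```

The answer lands in the mid-60s per group, which is where the ballpark of 65 responses above comes from.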

So what does moderate mean?  A helpful way (although not technically accurate) to understand effect size is to think of it as a lack of agreement between two groups (e.g., those who saw the fiber message and those who saw the Vitamin-C message).  With 65 responses from each group, a statistically significant result would mean there was no more than 66 percent agreement between the groups (technically, we mean less than 66 percent distribution overlap). For most communication managers, that is a substantial effect.  If the result pointed to the new Vitamin-C message being more effective, it’s probably worthwhile to spend the money to switch messaging!  If analysis did not find a statistically significant difference between the messages, then it’s not advisable to switch because the increased effectiveness (if any) of the new message would be marginal at best.

If cost is no factor, then a bigger sample size is usually better, but I have not yet met a client who said cost didn’t matter. Rather, our clients are typically looking for insights that will help them produce meaningful and substantial impacts. They look for good value and timely results. By understanding the intricacies of selecting an appropriate sample size, we stretch our clients’ research dollars. Give us a call if you would like to discuss how we could stretch yours.


Welcome, Mollie!

We are delighted to welcome Mollie Boettcher as the newest member of the Corona Insights team!

As our newest Associate, Mollie will specialize in qualitative research practices including, but certainly not limited to, recruiting research participants, conducting focus groups and interviews, and analyzing and interpreting qualitative data for clients seeking data-driven guidance.

Mollie attended the University of Wisconsin-La Crosse, where she majored in Business Management and minored in Chemistry. She graduated with her B.S. in 2010.

When Mollie is not hard at work in her office, you can find her out hiking or snowshoeing in the Rocky Mountains. She also likes to explore Denver, including taking her dog on walks in Wash Park, and enjoying the many unique restaurants and breweries that Denver has to offer.


How Researchers Hire

Corona Insights recently went through a round of hiring (look for a blog post soon about our newest team member), and while many of our hiring steps may be common, it occurred to me that our process mirrors the research process.

  • Set goals. Research without an end goal in mind will get you nowhere fast. Hiring without knowing what you’re hiring for will all but guarantee a poor match.
  • Use multiple modes. Just as approaching research from several methodologies (e.g., quant, qual) yields a more complete picture, so too does a hiring process with multiple steps: reviewing resumes (literature review), screening (exploratory research), testing (quantitative), several rounds of interviews (qualitative), and a mock presentation.
  • Be consistent. Want to compare differences over time or between different segments? Better be consistent in your approach. Want to compare candidates? Better be consistent in your approach.

I could go on about the similarities (drawing a broad sample of applicants?), but you get the idea. The principles of research apply to a lot more than just research.

And as with any recurring research, we reevaluate what worked and what can be improved before the next iteration. As a result, our process changes a little each time, but the core of it remains the same – asking the right questions and analyzing data with our end goals in mind. Just like any good research project.

Stay tuned for a blog post about our new hire.



Haunted by Old Survey Questions

“As the Corona team bravely entered the haunted project file, they heard a strange sound. They quickly turned to the right to see old analyses covered in cobwebs. They shuddered. Suddenly a weird shadow crossed their faces. As they looked up, they could barely make out what it was…a report? An old invoice? No, it couldn’t be! But it was. It was the original survey, completely unchanged since 1997…AAAAHHHH!”

The above might be a slight exaggeration. If anything, we do try to keep our screaming to a minimum at work. However, I do think that often organizations can become “haunted” by old survey questions that they cannot seem to escape.

Obviously, there can be value in tracking certain questions over time. This is especially true if you want to determine the effect of certain programming or events. However, we sometimes see organizations that are afraid to change or remove any questions at all year over year, even if it is unclear whether all the questions are useful anymore.

As pointed out previously, it is a good idea to think through the goals of your research before you start a project. If you have not seen any changes in a measure over time and/or you did not do anything intentionally to move a measure in the past year, you might consider updating your survey. And there are a ton of different ways to do this.

You can ask key measures every other year, and in between, ask totally new questions that allow you to dig deeper into a certain issue, test new marketing concepts, etc. You can ask a smaller subset of only the critical key measures every year but rotate in a subset of new questions. You can ask the same questions, but ask them of a new audience. You can ask the same questions, but try out a new type of analysis. For example, instead of just reporting that 55% of donors strongly believe X and 31% strongly believe Y, we could look at which belief predicts the amount that donors give. We might find that belief X more strongly predicts donation amount than belief Y, and that might change your entire marketing campaign.
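
To illustrate that last idea, here is a minimal sketch of what such a predictive analysis might look like. The data file, the column names (donation_amount, belief_x, belief_y), and the choice of a simple linear regression are all assumptions for the example, not a prescription.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical donor file with a donation amount and two belief ratings.
donors = pd.read_csv("donors.csv")  # assumed columns: donation_amount, belief_x, belief_y

# Which belief better predicts giving, holding the other constant?
model = smf.ols("donation_amount ~ belief_x + belief_y", data=donors).fit()
print(model.summary())  # compare the coefficients and p-values for belief_x and belief_y
```

If belief X carries the larger (and statistically significant) coefficient, that is the kind of finding that can redirect a campaign.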

Again, tracking questions over time can definitely be important. But it might be worth considering whether you are asking certain questions because they are of use or if you are doing so just because you have always asked these questions in your survey. If the latter is your reason, it might be time to rethink your survey goals and questions.

Don’t be haunted by old survey questions. Let Corona help you clear out those ghosts and design a survey tailored to meet your current ghouls…er, goals.


New Case Study: 1920’s Era Town

We always enjoy our successful engagements with our clients, but too often we can’t share the details. So we get even more excited when we have the opportunity to share our work, along with how it benefited a client.

Our latest case study is about a market feasibility study Corona conducted for a 1920’s Era Town concept.  View the case study here, including more on the concept, how Corona helped the founder analyze the feasibility, and the results.

Be sure to check out our other case studies and testimonials too.


Asking the “right” people is half the challenge

We’ve been blogging a lot lately about potential problem areas for research, evaluation, and strategy. In thinking about research specifically, making sure you can trust results often boils down to these three points:

  1. Ask the right questions;
  2. Of the right people; and
  3. Analyze the data correctly.

As Kevin pointed out in a blog nearly a year ago, #2 is often the crux. When I say “of the right people,” I am referring to making sure that the people you include in your research represent the people you want to study. Deceptively simple, but there are many examples of research gone awry due to poor sampling.

So, how do you find the right people?

Ideally, you have access to a source of contacts (e.g., all mailing addresses for a geography of interest, email addresses for all members, etc.) and then randomly sample from that source (the “random” part is crucial because it is what later allows you to interpret the results as representative of the overall, larger population). However, those sources don’t always exist, and a purely random sample isn’t always possible. Regardless, here are three steps you can take to ensure a good-quality sample:

  1. Don’t let just anyone participate in the research. As tempting as it is to just email out a link or post a survey on Facebook, you can’t be sure who is actually taking the survey (or how many times they took it). While these open links can provide some useful feedback, they cannot be used to say “my audience overall thinks X.” The fix: Limit access through custom links, personalized invites, and/or passwords.
  2. Respondents should represent your audience. This may sound obvious, but having your respondents truly match your overall audience (e.g., customers, members, etc.) can get tricky. For example, some groups may be more likely to respond to a survey (e.g., females and older persons are often more likely to take a survey, leaving young males underrepresented). Similarly, very satisfied or dissatisfied customers may be more likely to voice an opinion than those who are indifferent or at least more passive. The fix: Use proper incentives up front to motivate all potential respondents, screen respondents to make sure they are who you think they are, and statistically weight the results on the back end to help overcome response bias (see the sketch after this list).
  3. Ensure you have enough coverage. Coverage refers to the proportion of everyone in your population or audience that you can reach. For example, if you have contact information for 50% of your customers, then your coverage would only be 50%. This may or may not be a big deal – it will depend on whether those you can reach are different from those you cannot. A very real-world example of this is telephone surveys. Coverage of the general population via landline phones is declining rapidly and is now nearing only half; more importantly, the type of person you reach via a landline vs. a cell phone survey is very different. The fix: The higher the coverage, the better. When you can only reach a small proportion via one mode of research, consider using multiple modes (e.g., online and mail) or look for a better source of contacts. One general rule we often use is that if we have at least 80% coverage of a population, we’re probably OK, but always ask yourself, “Who would I be missing?”
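
To show what the back-end weighting mentioned in point 2 can look like, here is a minimal post-stratification sketch. The single weighting variable (gender), the population shares, and the column names (gender, satisfaction) are assumptions for illustration; real projects often weight on several variables at once.

```python
import pandas as pd

# Hypothetical respondent file with "gender" and a 1-5 "satisfaction" rating.
respondents = pd.read_csv("survey_responses.csv")

# Assumed population shares (e.g., from customer records or census data).
population_share = {"female": 0.50, "male": 0.50}

# Weight = population share / sample share, so underrepresented groups count more.
sample_share = respondents["gender"].value_counts(normalize=True)
respondents["weight"] = respondents["gender"].map(
    lambda g: population_share[g] / sample_share[g]
)

# Example: a weighted average of the satisfaction rating.
weighted_mean = (
    (respondents["satisfaction"] * respondents["weight"]).sum()
    / respondents["weight"].sum()
)
print(round(weighted_mean, 2))
```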

Sometimes tradeoffs have to be made, and that can be ok when the alternative isn’t feasible.  However, at least being aware of tradeoffs is helpful and can be informative when interpreting results later.  Books have been written on survey sampling, but these initial steps will have you headed down the correct path.

Have questions? Please contact us.  We would be happy to help you reach the “right” people for your research.