Big Insights Can Come in Little Numbers

On many of our research projects, the sample size (i.e., number of people who are surveyed) directly relates to research cost.  Costs typically increase as we print and mail more surveys or call more people. Normally, the increase in sample size is worth the extra cost because the results are more likely to accurately reflect the population of interest; however, is a big sample size always necessary?

Here is an example of when the sample size does not need to be very large in order to draw valuable insights.  Let’s say you are the communications manager for an organization that wants to improve public health by increasing broccoli consumption.  For the past year, you have diligently been publicizing the message that broccoli is good for your digestive system because it is high in fiber.  Lately, your director has wondered if another message may be more effective at persuading people to eat broccoli—maybe a message that touts broccoli’s ample amount of Vitamin-C, which can help fight off the common cold. Switching your communication campaign’s key message would be expensive, but probably worth it if your new message was dramatically more effective at changing behavior. However, if the Vitamin-C message was only marginally more effective, then it might not be worth spending the money to switch.  Your boss tasks you with conducting research to compare the effectiveness of the fiber message to the Vitamin-C message.

If you have undertaken message research in the past, you may have heard that you need a large and randomly drawn sample in order to draw reliable insights.  For example, you might survey your population and collect 400 responses from a group who saw the original fiber message and 400 responses from those who saw the new Vitamin-C message.  While collecting up to 800 responses might be valuable for some types of analysis, it is probably unwarranted to answer the research question described above. Indeed, you might only need to collect about 130 responses (65 from each group) to answer the question “Which message is more effective?”  Why so few?

Sixty-five responses from each group should reveal a statistically significant difference if the effect size is moderate or greater. (In social science research, we use the term effect size as a way to measure effectiveness.  For example, is a new message more or less effective than an old message? A small effect is harder to detect and requires careful analysis, while a large effect is obvious and easy to detect.)

So what does moderate mean?  A helpful way (although not technically accurate) to understand effect size is to think of it as a lack of agreement between two groups (e.g., those who saw the fiber message and those who saw the Vitamin-C message).  With 65 responses from each group, a statistically significant result would mean there was no more than 66 percent agreement between the groups (technically, we mean less than 66 percent distribution overlap). For most communication managers, that is a substantial effect.  If the result pointed to the new Vitamin-C message being more effective, it’s probably worthwhile to spend the money to switch messaging!  If analysis did not find a statistically significant difference between the messages, then it’s not advisable to switch because the increased effectiveness (if any) of the new message would be marginal at best.
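For the curious, here is a minimal sketch of the power calculation behind a number like 65, using Python’s statsmodels package. It assumes a two-group comparison, a “moderate” effect size of 0.5 on Cohen’s scale (one common convention), a 5 percent significance level, and 80 percent power; those last two are common defaults rather than figures stated in the example above.

```python
# A minimal power-analysis sketch (assumptions: Cohen's d = 0.5,
# alpha = 0.05, power = 0.80, two independent groups of equal size).
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(round(n_per_group))  # roughly 64-65 respondents in each group
```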

If cost is no factor, then a bigger sample size is usually better, but I have not yet met a client who said cost didn’t matter. Rather, our clients are typically looking for insights that will help them produce meaningful and substantial impacts. They look for good value and timely results.  By understanding the intricacies of selecting an appropriate sample size, we stretch our clients’ research dollars.  Give us a call if you would like to discuss how we could stretch your research dollars.


Welcome, Mollie!

We are delighted to welcome Mollie Boettcher as the newest member of the Corona Insights team!

As our newest Associate, Mollie will specialize in qualitative research practices including, but certainly not limited to, recruiting research participants, conducting focus groups and interviews, and analyzing and interpreting qualitative data for clients seeking data-driven guidance.

Mollie attended the University of Wisconsin—La Crosse, where she majored in Business Management and minored in Chemistry. Mollie graduated with her B.S. in 2010.

When Mollie is not hard at work in her office, you can find her out hiking or snowshoeing in the Rocky Mountains. She also likes to explore Denver, including taking her dog on walks in Wash Park, and enjoying the many unique restaurants and breweries that Denver has to offer.


How Researchers Hire

Corona Insights recently went through a round of hiring (look for a blog post soon about our newest team member) and, while many of our hiring steps may be common, it occurred to me that our process mirrors the research process.

  • Set goals. Research without an end goal in mind will get you nowhere fast.  Hiring without knowing what you’re hiring for will ensure an inappropriate match.
  • Use multiple modes.  Just as approaching research from several methodologies (e.g., quant, qual) yields a more complete picture, so too does a hiring process with multiple steps: reviewing resumes (literature review), screening (exploratory research), testing (quantitative), several rounds of interviews (qualitative), and a mock presentation.
  • Consistency.  Want to compare differences over time or between different segments? Better be consistent in your approach.  Want to compare candidates? Better be consistent in your approach.

I could go on about the similarities (drawing a broad sample of applicants?), but you get the idea.  The principles of research apply to a lot more than just research.

And as with any recurring research, we reevaluate what worked and what can be improved before the next iteration.  Therefore, our process changes a little each time, but the core of it remains the same – asking the right questions and analyzing data with our end goals in mind.  Just like any good research project.

Stay tuned for a blog post about our new hire.



Haunted by Old Survey Questions

“As the Corona team bravely entered the haunted project file, they heard a strange sound. They quickly turned to the right to see analyses covered in cobwebs. They shuddered. Suddenly a weird shadow crossed their faces. As they looked up, they could barely make out what it was…a report? An old invoice? No, it couldn’t be! But it was. It was the original survey, completely unchanged since 1997…..AAAAHHHH!”

The above might be a slight exaggeration. If anything, we do try to keep our screaming to a minimum at work. However, I do think that often organizations can become “haunted” by old survey questions that they cannot seem to escape.

Obviously, there can be value in tracking certain questions over time. This is especially true if you want to determine the effect of certain programming or events. However, we sometimes see organizations that are afraid to change or remove any questions at all year over year, even if it is unclear whether all the questions are useful anymore.

As pointed out previously, it is a good idea to think through the goals of your research before you start a project. If you have not seen any changes in a measure over time and/or you did not do anything intentionally to move a measure in the past year, you might consider updating your survey. And there are a ton of different ways to do this.

You can ask key measures every other year and, in between, ask totally new questions that allow you to dig deeper into a certain issue, test new marketing concepts, etc. You can ask a smaller subset of only the critical key measures every year but rotate in a subset of new questions. You can ask the same questions, but ask them of a new audience. You can ask the same questions, but try out a new type of analysis. For example, instead of just reporting that 55% of donors strongly believe X and 31% strongly believe Y, we could look at which belief predicts the amount that donors give. We might find that belief X more strongly predicts the donation amount than belief Y, and that might change your entire marketing campaign.
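To make that last idea a bit more concrete, here is a rough sketch of what such an analysis could look like in Python. The data, column names, and belief measures are entirely hypothetical (invented only to show the shape of the approach); a real analysis would of course use your actual survey responses.

```python
# Hypothetical sketch: which belief better predicts donation amount?
# The data frame and column names below are invented for illustration only.
import pandas as pd
import statsmodels.formula.api as smf

donors = pd.DataFrame({
    "donation": [50, 120, 35, 200, 75, 20, 150, 90],   # dollars given
    "belief_x": [4, 5, 3, 5, 4, 2, 5, 4],              # 1-5 agreement with belief X
    "belief_y": [3, 4, 2, 3, 5, 2, 4, 3],              # 1-5 agreement with belief Y
})

# A simple linear model lets us compare the two beliefs' coefficients
# (and their significance) rather than just reporting top-box percentages.
model = smf.ols("donation ~ belief_x + belief_y", data=donors).fit()
print(model.summary())
```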

Again, tracking questions over time can definitely be important. But it might be worth considering whether you are asking certain questions because they are of use or if you are doing so just because you have always asked these questions in your survey. If the latter is your reason, it might be time to rethink your survey goals and questions.

Don’t be haunted by old survey questions. Let Corona help you clear out those ghosts and design a survey tailored to meet your current ghouls…er, goals.


New Case Study: 1920’s Era Town

We always enjoy our successful engagements with our clients, but too often we can’t share our work.  So we get even more excited when we have the opportunity to share our work, along with how it benefited a client.

Our latest case study is about a market feasibility study Corona conducted for a 1920’s Era Town concept.  View the case study here, including more on the concept, how Corona helped the founder analyze the feasibility, and the results.

Be sure to check out our other case studies and testimonials too.


Asking the “right” people is half the challenge

We’ve been blogging a lot lately about potential problem areas for research, evaluation, and strategy. In thinking about research specifically, making sure you can trust results often boils down to these three points:

  1. Ask the right questions;
  2. Of the right people; and
  3. Analyze the data correctly.

As Kevin pointed out in a blog nearly a year ago, #2 is often the crux.  When I say, “of the right people,” I am referring to making sure who you are including in your research represents who you want to study.  Deceptively simple, but there are many examples of research gone awry due to poor sampling.

So, how do you find the right people?

Ideally, you have access to a source of contacts (e.g., all mailing addresses for a geography of interest, email addresses for all members, etc.) and then randomly sample from that source (the “random” part being crucial, as it is what allows you to later generalize the results to the overall, larger population).  However, those sources don’t always exist, and a purely random sample isn’t always possible.  Regardless, here are three steps you can take to ensure a good-quality sample:

  1. Don’t let just anyone participate in the research.  As tempting as it is to just email out a link or post a survey on Facebook, you can’t be sure who is actually taking the survey (or how many times they took it).  While these forms of feedback can provide some useful information, they cannot be used to say “my audience overall thinks X.”  The fix: Limit access through custom links, personalized invites, and/or passwords.
  2. Respondents should represent your audience. This may sound obvious, but having your respondents truly match your overall audience (e.g., customers, members, etc.) can get tricky.  For example, some groups may be more likely to respond to a survey (e.g., females and older persons are often more likely to take a survey, leaving young males underrepresented). Similarly, very satisfied or dissatisfied customers may be more likely to voice an opinion than those who are indifferent or at least more passive. The fix: Use proper incentives up front to motivate all potential respondents, screen respondents to make sure they are who you think they are, and statistically weight the results on the back end to help overcome response bias (see the weighting sketch after this list).
  3. Ensure you have enough coverage.  Coverage refers to the proportion of everyone in your population or audience that you can reach.  For example, if you have contact information for 50% of your customers, then your coverage would only be 50%.  This may or may not be a big deal – it will depend on whether those you can reach are different from those you cannot.  A very real-world example of this is telephone surveys.  Coverage of the general population via landline phones is declining rapidly and is now nearing only half; more importantly, the type of person you reach via a landline vs. a cell phone survey is very different.  The fix: The higher the coverage, the better.  When you can only reach a small proportion via one mode of research, consider using multiple modes (e.g., online and mail) or look for a better source of contacts.  One general rule we often use is that if we have at least 80% coverage of a population, we’re probably OK, but always ask yourself, “Who would I be missing?”
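To make the weighting idea in point 2 concrete, here is a minimal sketch of simple post-stratification weights in Python. The groups and proportions are hypothetical, chosen only to illustrate the mechanics; a real project would use known population figures (e.g., from the Census or a membership database) for the audience in question.

```python
# Minimal post-stratification weighting sketch (hypothetical proportions).
# Each respondent's weight is their group's share of the population divided
# by that group's share of the completed surveys, so over-represented groups
# count a bit less and under-represented groups count a bit more.
population_share = {"young_male": 0.25, "young_female": 0.25,
                    "older_male": 0.25, "older_female": 0.25}
sample_share     = {"young_male": 0.10, "young_female": 0.20,
                    "older_male": 0.30, "older_female": 0.40}

weights = {group: population_share[group] / sample_share[group]
           for group in population_share}
print(weights)  # young males weighted up (~2.5), older females down (~0.63)
```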

Sometimes tradeoffs have to be made, and that can be ok when the alternative isn’t feasible.  However, at least being aware of tradeoffs is helpful and can be informative when interpreting results later.  Books have been written on survey sampling, but these initial steps will have you headed down the correct path.

Have questions? Please contact us.  We would be happy to help you reach the “right” people for your research.


The cautionary tale of 5 scary strategic planning mistakes: Part V – Don’t get too tuckered out

The scariest proposition is creating a strategic plan that ultimately doesn’t get implemented. Strategic plans are worth their weight in gold when they become a blueprint for future progress. As my final word to the wise, I advise leaders undertaking the strategic planning process to hold onto the momentum created by the planning process to carry them through the first years of implementation (the hard part).

I’ve long said that an organization lives in a parallel universe when engaged in strategic planning, as you have to remain attentive to the present while you focus on the future. The board’s approval of the completed plan is only the beginning. If there isn’t energy and enthusiasm after the planning process, then you know the next few years of implementation are going to feel l-o-n-g. It is only a matter of time before some combination of pitfalls 1-4 (link) above sneaks into the day-to-day.

This blog concludes my five-part series about the scary tales of strategic planning. I encourage every leader to consider these lessons as they devote themselves to being strategic. Avoid these pitfalls and many others by trusting an expert to be your strategic consultant. Years of experience have given me the foresight to help my clients be successful in giving their organization a truly strategic plan.

Miss the first four blogs in this series? Feel free to start at the beginning, or pick the topic that most resonates with you.

 

The cautionary tale of 5 scary strategic planning mistakes.

Part I – Don’t self-sabotage

Part II – Avoid side swipes

Part III – Dismiss unrealistic expectations

Part IV – Be willing to say “no”


Who you gonna call?

With Halloween approaching, we are writing about scary things for Corona’s blog. This got me thinking about some of the scary things that we help to make less scary.  Think of us as the people who check under the bed for monsters, turn on lights in dark corners, bring our proton packs and capture the ectoplasmic entities … wait, that last one’s the Ghostbusters.  But you get the idea.

As an evaluator I find that evaluators often have a scary reputation.  There is a great fear that evaluators will conclude your programs aren’t working and that will be the end of funding and the death of your programs.  In reality, a good evaluator can be an asset to your programs (a fear-buster, if you will) in a number of ways:

  1. Direction out of the darkness.  Things go wrong … that’s life.  Evaluation can help figure out why and provide guidance on turning it around before it’s too late.  Maybe implementation wasn’t consistent, maybe some outcome measures were misunderstood by participants (see below), maybe there’s a missing step in getting from A to B.  Evaluators have a framework for systematically assessing how everything is working and pinpointing problems quickly and efficiently so you can address them and move forward.
  2. Banisher of bad measures.  A good evaluator will make sure you have measures of immediate, achievable goals (as well as measures of the loftier impacts you hope to bring about down the road), and that your measures are measuring what you want (e.g., questions that are not confusing for participants or being misunderstood and answered as the opposite of what was intended).
  3. Conqueror of math.  Some people (like us) love the logic and math and analysis of it all.  Others, not so much.  If you’re one of the math lovers, it’s nice to have an evaluation partner to get excited about the numbers with you, handle the legwork for calculating new things you’ve dreamed up, and generally provide an extra set of hands for you.  If you’re not so into math, it’s nice to be able to pass that piece off to an evaluator who can roll everything up, explain it in plain language, and help craft those grant application pieces and reports to funders that you dread.  In either case, having some extra help from good, smart people who are engaged in your work is never a bad thing, right?

This fall, don’t let the scary things get in your way.  Call in some support.


The cautionary tale of 5 scary strategic planning mistakes: Part IV – Be willing to say “no”

With 14 years of experience helping organizations create strategic plans at Corona, I’ve seen many stumbles. This quarter, our firm is authoring content about “what can go wrong” in our work. On this topic, I have created a five-part blog series to help leaders avoid the common mishaps I’ve witnessed in the past. My fourth lesson to leaders is: be willing to say no.

A strategy must be focused by design. Period. The best strategy sets a recognizable stake in the ground. When a strategy is too broad or too vague, an organization struggles to devote resources to the appropriate priorities. For example, you may need to do some fence mending with recalcitrant staffers who otherwise aren’t on board with the new direction. Too often, the experience of strategic plan implementation is muddied by he-said/she-said differences in view: “Hey, I thought we were going to do X. What do you mean we are doing Y?” Then presto, you’ve got a stalemate. Unwilling to admit the error, we put the plan on the proverbial shelf while we sheepishly blame the plan for a lack of results.

Creating a strategic plan takes a leader who can avoid a stalemate over the organization’s direction by addressing differences proactively. Building consensus is key to creating a workable plan. The next blog (link) in the series will address what happens when you don’t say “no” and the planning process becomes an exhausting feat.

Read the other blogs in my five-part series.

The cautionary tale of 5 scary strategic planning mistakes.

Part I – Don’t self-sabotage

Part II – Avoid side swipes

Part III – Dismiss unrealistic expectations

Part V – Don’t get too tuckered out