The Power of Ranking

One of the fun tools of our trade is index development.  It's a way to rank-order things on a single dimension while taking into account a number of relevant variables.  Let's say you want to rank states with respect to their animal welfare conditions, rank job candidates with regard to their experience and skills, or rank communities with respect to their cost of living.  In each of these cases, you would want to build an index (and indeed, we have, for several of those questions).

Index-based rankings are all the rage.  From the U.S. News & World Report ranking of Best Colleges to the Milken Institute's Best Cities for Successful Aging, one can find rankings on almost any topic of interest these days.  But these rankings aren't all fun and games (as a recent article in The Economist points out), so let's take a look at the stakeholders in a ranking and the impacts that rankings have.

  1. The Audience/User. Rankings are a perfect input for busy decision makers.  They help decision makers optimize their choices with very little effort.  As such, they influence behavior, driving decisions about where to apply to college, whom to hire, where to go on vacation, where to move in retirement, and so on.  But if a ranking is based on variables other than the ones that matter to its users, those users can be misled.
  2. The “Ranked”. For the ranked, impacts reflect the collective decisions of the users.  Rankings impact colleges’ applicant pools, cities’ tourism revenues, and local economies.  And on the flip side, rankings influence the behavior of those being ranked who will work to improve their standing on the variables included in the index.  As the old adage goes, “what gets measured gets done.”
  3. The “Ranker”. The developer of the index holds a certain amount of power and responsibility.  There are both mathematical and conceptual competencies required (in other words, it's a bit of a science and an art).  The developer has to decide which variables to include and how to weight them, and those decisions are often based on practical concerns as much as or more than on relevance to the goal of the measurement.  (There is usually a strong need to use existing data sources and data that is available for all of the entities being ranked.)  Selecting certain variables and not others to include in the index can have downstream impacts on where ranked entities focus their efforts for improvement, even when those included variables were chosen for expediency rather than impact.

To illustrate, I built an index to rank “The Best Coffee Shops in My Neighborhood.”  I identified the five coffee shops I visit the most frequently in my neighborhood and compiled a data set of six variables: distance from my home, presence of “latte art,” amount of seating, comfort of seating, music selection, and lighting.

My initial data set is below.  First, take note of the weight assigned to each variable.  Music selection and seating comfort are less important to my ranking than distance from home, latte art, amount of seating, and lighting.  Those weights reflect what is most important to me, but might not be consistent with the preferences of everyone else in my neighborhood.

[Table: initial data set for the five coffee shops, with the weight assigned to each variable]

Next, look at the data.  Distance from home is recorded in miles (note that smaller distances are considered "better" to me, so this variable will require transformation prior to ranking).  Latte art is coded as present (1) or absent (0).  This is an example of a measure that is a proxy for something else.  What is important is the quality of the drink, and a barista's ability to make latte art is likely correlated with their overall training – since I don't have access to information about years of experience or completion of training programs, latte art stands in as a convenience measure.  Amount of seating is pretty straightforward (Shop #5 is a drive-through).  Seating comfort is coded as hard chairs (1) and padded seats (2).  Music selection is coded as acceptable (1) and no music (0).  Lighting is coded as north-facing windows (1), south-facing windows (2), and east- or west-facing windows (3), again, because that is my preference.
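For readers who want to see the mechanics, here is a minimal sketch of the transform, scale, weight, aggregate, and rank steps in Python with pandas.  The shop values and weights below are hypothetical stand-ins for my actual data, so treat the computed scores as illustrative only.

```python
import pandas as pd

# Hypothetical stand-ins for the values in the tables (not my real data).
shops = pd.DataFrame({
    "shop": ["Shop 1", "Shop 2", "Shop 3", "Shop 4", "Shop 5"],
    "distance_mi": [0.4, 0.5, 1.2, 0.6, 2.0],  # smaller is better
    "latte_art":   [1, 1, 0, 1, 0],            # 1 = present, 0 = absent
    "seating":     [12, 30, 20, 8, 0],         # number of seats (Shop 5 is a drive-through)
    "comfort":     [2, 2, 1, 1, 1],            # 1 = hard chairs, 2 = padded seats
    "music":       [1, 1, 0, 1, 0],            # 1 = acceptable, 0 = no music
    "lighting":    [3, 3, 1, 3, 2],            # window-orientation preference code
}).set_index("shop")

# Weights reflecting my preferences (music and comfort count for less).
weights = {"distance_mi": 0.2, "latte_art": 0.2, "seating": 0.2,
           "comfort": 0.1, "music": 0.1, "lighting": 0.2}

# Transform: flip distance so that larger values are better.
shops["distance_mi"] = shops["distance_mi"].max() - shops["distance_mi"]

# Scale each variable to 0-1 so no variable dominates just because of its units.
scaled = (shops - shops.min()) / (shops.max() - shops.min())

# Aggregate into a weighted index and rank (1 = best).
scaled["index_score"] = sum(scaled[col] * w for col, w in weights.items())
scaled["rank"] = scaled["index_score"].rank(ascending=False).astype(int)
print(scaled[["index_score", "rank"]].sort_values("rank"))
```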

After I transform, scale, aggregate, and rank the results, here is what I get.

[Table: transformed, weighted, and ranked results]

These results correspond approximately with how often I visit each shop, suggesting that these variables have captured something real about my preferences.

Now, let's say I post these rankings to my neighborhood's social media site and my neighbors increase their visits to Shop #2 (which ranked #1).  My neighbors with back problems who prefer hard chairs may be disappointed if they choose based on my ranking.  The shop owners might get wind of this ranking and want to know how to improve their standing.  Shops #3 and #5 might decide to teach their employees how to make latte art (without providing any additional training on espresso preparation), which would improve their rankings but would be inconsistent with my goal for that measure, which is to capture drink quality.

With any ranking, it’s important to think about what isn’t being measured (in this example, I didn’t measure whether the shop uses a local roaster, whether they also serve food, what style of music they play, what variety of drinks they offer, etc.), and what is being measured that isn’t exactly what you care about, but is easy to measure (e.g., latte art).  These choices demonstrate the power of the ranker and have implications for the user and the ranked.

Perhaps next we’ll go ahead and create an index to rank Dave’s top ski resorts simultaneously on all of his important dimensions.

What do you want to rank?


Big Insights can come in Little Numbers

On many of our research projects, the sample size (i.e., number of people who are surveyed) directly relates to research cost.  Costs typically increase as we print and mail more surveys or call more people. Normally, the increase in sample size is worth the extra cost because the results are more likely to accurately reflect the population of interest; however, is a big sample size always necessary?

Here is an example of when sample size does not need to be very large in order to draw valuable insights.  Let's say you are the communications manager for an organization that wants to improve public health by increasing broccoli consumption.  For the past year, you have diligently been publicizing the message that broccoli is good for your digestive system because it is high in fiber.  Lately, your director has wondered if another message may be more effective at persuading people to eat broccoli—maybe a message that touts broccoli's ample amount of Vitamin-C, which can help fight off the common cold.  Switching your communication campaign's key message would be expensive, but probably worth it if your new message was dramatically more effective at changing behavior.  However, if the Vitamin-C message was only marginally more effective, then it might not be worth spending the money to switch.  Your boss tasks you with conducting research to compare the effectiveness of the fiber message to the Vitamin-C message.

If you have undertaken message research in the past, you may have heard that you need a large and randomly drawn sample in order to draw reliable insights.  For example, you might survey your population and collect 400 responses from a group who saw the original fiber message and 400 responses from those who saw the new Vitamin-C message.  While collecting up to 800 responses might be valuable for some types of analysis, it is probably more than you need to answer the research question described above.  Indeed, you might only need to collect about 130 responses (65 from each group) to answer the question "Which message is more effective?"  Why so few?

Sixty-five responses from each group should reveal a statistically significant difference if the effect size is moderate or greater.  (In social science research, we use the term effect size to quantify how big a difference is—for example, how much more or less effective a new message is than an old one.  A small effect requires careful analysis and a larger sample to detect, while a large effect is obvious and easy to detect.)
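For the curious, here is a minimal sketch of that sample size calculation in Python with statsmodels.  It assumes a two-sided comparison of two independent groups (fiber vs. Vitamin-C), a "moderate" effect size of Cohen's d = 0.5, a 5 percent significance level, and 80 percent power—conventional assumptions used for illustration, not figures taken from the scenario above.

```python
from statsmodels.stats.power import TTestIndPower

# Assumed design: two independent groups (fiber vs. Vitamin-C message),
# compared with a two-sided t-test on some effectiveness measure.
power_analysis = TTestIndPower()
n_per_group = power_analysis.solve_power(
    effect_size=0.5,          # "moderate" effect (Cohen's d), by convention
    alpha=0.05,               # 5% chance of a false positive
    power=0.8,                # 80% chance of detecting a true moderate effect
    alternative="two-sided",
)
print(round(n_per_group))     # roughly 64, i.e., about 65 respondents per group
```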

So what does moderate mean?  A helpful way (although not technically accurate) to understand effect size is to think of it as a lack of agreement between two groups (e.g., those who saw the fiber message and those who saw the Vitamin-C message).  With 65 responses from each group, a statistically significant result would mean there was no more than 66 percent agreement between the groups (technically, we mean less than 66 percent distribution overlap). For most communication managers, that is a substantial effect.  If the result pointed to the new Vitamin-C message being more effective, it’s probably worthwhile to spend the money to switch messaging!  If analysis did not find a statistically significant difference between the messages, then it’s not advisable to switch because the increased effectiveness (if any) of the new message would be marginal at best.

If cost is no factor, then a bigger sample size is usually better, but I have not yet met a client who said cost didn't matter.  Rather, our clients are typically looking for insights that are going to help them produce meaningful and substantial impacts.  They look for good value and timely results.  By understanding the intricacies of selecting an appropriate sample size, we stretch our clients' research dollars.  Give us a call if you would like to discuss how we could stretch your research dollars.


Welcome, Mollie!

We are delighted to welcome Mollie Boettcher as the newest member of the Corona Insights team!

As our newest Associate, Mollie will specialize in qualitative research practices including, but certainly not limited to: recruiting research participants, conducting focus groups and interviews, and analyzing and interpreting qualitative data for clients seeking data-driven guidance.

Mollie attended the University of Wisconsin–La Crosse, where she majored in Business Management and minored in Chemistry.  Mollie graduated with her B.S. in 2010.

When Mollie is not hard at work in her office, you can find her out hiking or snowshoeing in the Rocky Mountains. She also likes to explore Denver, including taking her dog on walks in Wash Park, and enjoying the many unique restaurants and breweries that Denver has to offer.


How Researchers Hire

Corona Insights recently went through a round of hiring (look for a blog post soon about our newest team member) and, while many of our hiring steps may be common, it occurred to me that our process mirrors the research process.

  • Set goals. Research without an end goal in mind will get you nowhere fast.  Hiring without knowing what you're hiring for will all but guarantee a poor match.
  • Use multiple modes.  Just as approaching research from several methodologies (e.g., quant, qual) yields a more complete picture, so too does a hiring process with multiple steps: reviewing resumes (literature review), screening (exploratory research), testing (quantitative), several rounds of interviews (qualitative), and a mock presentation.
  • Consistency.  Want to compare differences over time or between different segments? Better be consistent in your approach.  Want to compare candidates? Better be consistent in your approach.

I could go on about the similarities (drawing a broad sample of applicants?), but you get the idea.  The principles of research apply to a lot more than just research.

And as with any recurring research, we reevaluate what worked and what can be improved before the next iteration.  Therefore, our process changes a little each time, but the core of it remains the same – asking the right questions and analyzing data with our end goals in mind.  Just like any good research project.

Stay tuned for a blog post about our new hire.



Haunted by Old Survey Questions

“As the Corona team bravely entered the haunted project file, they heard a strange sound. They quickly turned to the right to see analyses covered in cobwebs. They shuddered. Suddenly a weird shadow crossed their faces. As they looked up, they could barely make out what it was…a report? An old invoice? No, it couldn’t be! But it was. It was the original survey, completely unchanged since 1997…..AAAAHHHH!”

The above might be a slight exaggeration. If anything, we do try to keep our screaming to a minimum at work. However, I do think that often organizations can become “haunted” by old survey questions that they cannot seem to escape.

Obviously, there can be value in tracking certain questions over time. This is especially true if you want to determine the effect of certain programming or events. However, we sometimes see organizations that are afraid to change or remove any questions at all year over year, even if it is unclear whether all the questions are useful anymore.

As pointed out previously, it is a good idea to think through the goals of your research before you start a project. If you have not seen any changes in a measure over time and/or you did not do anything intentionally to move a measure in the past year, you might consider updating your survey. And there are a ton of different ways to do this.

You can ask key measures every other year, and in between, ask totally new questions that allow you to dig deeper into a certain issue, test new marketing concepts, etc. You can ask a smaller subset of only the critical key measures every year but rotate in a subset of new questions. You can ask the same questions, but ask them of a new audience. You can ask the same questions, but try out a new type of analysis. For example, instead of just reporting that 55% of donors strongly believe X and 31% strongly believe Y, we could look at which belief predicts the amount that donors give. We might find out that belief X more strongly predicts the donation amount than belief Y, and that might change your entire marketing campaign.
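As a rough illustration of that last idea, here is a minimal sketch of such an analysis in Python with statsmodels.  The donor data and the belief_x/belief_y variable names are hypothetical; the point is simply that a regression can show which belief is the stronger predictor of giving.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical donor data: agreement with beliefs X and Y (1-5 scales)
# and the amount each donor gave last year.
rng = np.random.default_rng(0)
n = 300
belief_x = rng.integers(1, 6, n)
belief_y = rng.integers(1, 6, n)
donation = 20 + 15 * belief_x + 3 * belief_y + rng.normal(0, 25, n)
donors = pd.DataFrame(
    {"belief_x": belief_x, "belief_y": belief_y, "donation": donation}
)

# Which belief predicts donation amount?
model = smf.ols("donation ~ belief_x + belief_y", data=donors).fit()
print(model.params)   # both beliefs are on the same 1-5 scale, so the
print(model.pvalues)  # larger coefficient is the stronger predictor here
```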

Again, tracking questions over time can definitely be important. But it might be worth considering whether you are asking certain questions because they are of use or if you are doing so just because you have always asked these questions in your survey. If the latter is your reason, it might be time to rethink your survey goals and questions.

Don’t be haunted by old survey questions. Let Corona help you clear out those ghosts and design a survey tailored to meet your current ghouls…er, goals.


New Case Study: 1920’s Era Town

We always enjoy our successful engagements with our clients, but too often we can't share the details.  So we get even more excited when we have the opportunity to showcase our work, along with how it benefited a client.

Our latest case study is about a market feasibility study Corona conducted for a 1920’s Era Town concept.  View the case study here, including more on the concept, how Corona helped the founder analyze the feasibility, and the results.

Be sure to check out our other case studies and testimonials too.


Asking the “right” people is half the challenge

We've been blogging a lot lately about potential problem areas for research, evaluation, and strategy. In thinking about research specifically, making sure you can trust results often boils down to these three points:

  1. Ask the right questions;
  2. Of the right people; and
  3. Analyze the data correctly

As Kevin pointed out in a blog nearly a year ago, #2 is often the crux.  When I say "of the right people," I am referring to making sure that the people you include in your research represent the population you want to study.  Deceptively simple, but there are many examples of research gone awry due to poor sampling.

So, how do you find the right people?

Ideally, you have access to a source of contacts (e.g., all mailing addresses for a geography of interest, email addresses for all members, etc.) and then randomly sample from that source (the "random" part being crucial, as it is what allows you to generalize the results to the overall, larger population).  However, such sources don't always exist, and a purely random sample isn't always possible.  Regardless, here are three steps you can take to ensure a good quality sample:

  1. Don't let just anyone participate in the research.  As tempting as it is to just email out a link or post a survey on Facebook, you can't be sure who is actually taking the survey (or how many times they took it).  While these open approaches can provide some useful feedback, they cannot be used to say "my audience overall thinks X."  The fix: Limit access through custom links, personalized invites, and/or passwords.
  2. Respondents should represent your audience. This may sound obvious, but having your respondents truly match your overall audience (e.g., customers, members, etc.) can get tricky.  For example, some groups may be more likely to respond to a survey (e.g., females and older persons are often more likely to take a survey, leaving young males underrepresented). Similarly, very satisfied or dissatisfied customers may be more likely to voice an opinion than those who are indifferent or at least more passive. The fix: Use proper incentives up front to motivate all potential respondents, screen respondents to make sure they are who you think they are, and statistically weight the results on the back end to help overcome response bias (a simple weighting sketch follows this list).
  3. Ensure you have enough coverage.  Coverage refers to the proportion of everyone in your population or audience that you can reach.  For example, if you have contact information for 50% of your customers, then your coverage would only be 50%.  This may or may not be a big deal – it will depend on whether those you can reach are different from those you cannot.  A very real-world example of this is telephone surveys.  The coverage of the general population via landline phones is declining rapidly and is now nearing only half; more importantly, the type of person you reach via a landline vs. a cell phone survey is very different.  The fix: The higher the coverage, the better.  When you can only reach a small proportion via one mode of research, consider using multiple modes (e.g., online and mail) or look for a better source of contacts.  One general rule we often use is that if we have at least 80% coverage of a population, we're probably OK, but always ask yourself, "Who would I be missing?"
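To make the weighting idea in point #2 concrete, here is a minimal sketch of simple post-stratification weighting in Python with pandas.  The age groups, population shares, and satisfaction ratings are hypothetical, and real weighting schemes are usually more involved (e.g., raking across several variables), but the core idea is just population share divided by sample share.

```python
import pandas as pd

# Hypothetical survey respondents: age group and a 1-5 satisfaction rating.
respondents = pd.DataFrame({
    "age_group":    ["18-34", "35-54", "55+", "55+", "35-54", "55+", "18-34", "55+"],
    "satisfaction": [4, 3, 5, 4, 2, 5, 3, 4],
})

# Assumed population shares (e.g., from customer records or census data).
population_share = {"18-34": 0.35, "35-54": 0.40, "55+": 0.25}

# Weight = population share / sample share, so over-represented groups
# (here, the 55+ respondents) count for less and under-represented ones for more.
sample_share = respondents["age_group"].value_counts(normalize=True)
respondents["weight"] = respondents["age_group"].map(
    lambda group: population_share[group] / sample_share[group]
)

unweighted = respondents["satisfaction"].mean()
weighted = (
    (respondents["satisfaction"] * respondents["weight"]).sum()
    / respondents["weight"].sum()
)
print(unweighted, weighted)  # the weighted estimate adjusts for response bias
```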

Sometimes tradeoffs have to be made, and that can be ok when the alternative isn’t feasible.  However, at least being aware of tradeoffs is helpful and can be informative when interpreting results later.  Books have been written on survey sampling, but these initial steps will have you headed down the correct path.

Have questions? Please contact us.  We would be happy to help you reach the “right” people for your research.


The cautionary tale of 5 scary strategic planning mistakes: Part V – Don’t get too tuckered out

The scariest proposition is creating a strategic plan that never gets implemented. Strategic plans are worth their weight in gold when they become a blueprint for future progress. As my final word to the wise, I advise leaders undertaking the strategic planning process to hold onto the momentum created by the planning process to carry them through the first years of implementation (the hard part).

I've long said that an organization lives in a parallel universe when engaged in strategic planning because you have to remain attentive to the present while you focus on the future. The board's approval of the completed plan is only the beginning. If there isn't energy and enthusiasm after the planning process, then you know the next few years of implementation are going to feel l-o-n-g. It is only a matter of time before some combination of pitfalls 1-4 sneaks into the day-to-day.

This blog concludes my five-part series about the scary tales of strategic planning. I encourage every leader to consider these lessons as they devote themselves to being strategic. Avoid these pitfalls and many others by trusting an expert to be your strategic consultant. Years of experience have given me the foresight to help my clients be successful in giving their organization a truly strategic plan.

Miss the first four blogs in this series? Feel free to start at the beginning, or pick the topic that most resonates with you.

 

The cautionary tale of 5 scary strategic planning mistakes.

Part I – Don’t self-sabotage

Part II – Avoid side swipes

Part III – Dismiss unrealistic expectations

Part IV – Be willing to say “no”


Who you gonna call?

With Halloween approaching, we are writing about scary things for Corona's blog. This got me thinking about some of the scary things that we help to make less scary.  Think of us as the people who check under the bed for monsters, turn on lights in dark corners, bring our proton packs and capture the ectoplasmic entities … wait, that last one's the Ghostbusters.  But you get the idea.

As an evaluator, I find that evaluators often have a scary reputation.  There is a great fear that an evaluator will conclude your programs aren't working and that will be the end of funding and the death of your programs.  In reality, a good evaluator can be an asset to your programs (a fear-buster, if you will) in a number of ways:

  1. Direction out of the darkness.  Things go wrong … that’s life.  Evaluation can help figure out why and provide guidance on turning it around before it’s too late.  Maybe implementation wasn’t consistent, maybe some outcome measures were misunderstood by participants (see below), maybe there’s a missing step in getting from A to B.  Evaluators have a framework for systematically assessing how everything is working and pinpointing problems quickly and efficiently so you can address them and move forward.
  2. Banisher of bad measures.  A good evaluator will make sure you have measures of immediate, achievable goals (as well as measures of the loftier impacts you hope to bring about down the road), and that your measures are measuring what you want (e.g., questions that are not confusing for participants or being misunderstood and answered as the opposite of what was intended).
  3. Conqueror of math.  Some people (like us) love the logic and math and analysis of it all.  Others, not so much.  If you’re one of the math lovers, it’s nice to have an evaluation partner to get excited about the numbers with you, handle the legwork for calculating new things you’ve dreamed up, and generally provide an extra set of hands for you.  If you’re not so into math, it’s nice to be able to pass that piece off to an evaluator who can roll everything up, explain it in plain language, and help craft those grant application pieces and reports to funders that you dread.  In either case, having some extra help from good, smart people who are engaged in your work is never a bad thing, right?

This fall, don’t let the scary things get in your way.  Call in some support.