The cautionary tale of 5 scary strategic planning mistakes: Part III – Dismiss unrealistic expectations

This quarter, the Corona team is blogging about “what can go wrong”. The theme inspired me to write a five-part series about the common hazards I’ve witnessed in the strategic planning process. In review: avoid self-sabotage and side swipes. Lesson number three: I advise clients to start the process with realistic expectations.

Strategic planning processes go wrong when they are expected to achieve Herculean feats that actually have nothing to do with the real work of setting strategy. Those feats are most often associated with the people side of the organization and its culture. The process of setting strategy must be concerned with the external environment – most notably with market, customer, industry, and macro conditions. Attending to the people side is important too, but don’t expect a strategic planning process to serve as the primary intervention for organizational change. If you need to align around a common vision and guiding principles, then commit to doing that philosophical work. But please don’t confuse it with the work required to set a true strategy.

If you find yourself grasping for unrealistic expectations, you will likely face conundrum number four: the inability to say “no”. Stay tuned for part four of my series.

Read the other blogs in my five part series.

The cautionary tale of 5 scary strategic planning mistakes.

Part I – Don’t self-sabotage

Part II – Avoid side swipes

Part IV – Be willing to say “no” (Coming soon!)

Part V – Don’t get too tuckered out (Coming soon!)


Begin with the end in mind

When we think about the pitfalls of conducting market research, our minds tend to focus on all of the mistakes you can make when collecting data or analyzing the results. You can find other posts on this blog, for example, that discuss why it is important to collect data in a way that can be generalized to the entire universe being studied, why intentions do not necessarily translate into actions, and why correlation does not equate to causation.

But even if you are diligent about ensuring that your overall methodology is solid, there is another oversight that can potentially cause even more problems in your research: conducting research that isn’t actionable.

During my time with a previous employer, an international gear and apparel brand (we’ll call them “Brand X” for confidentiality) contacted us after completing a large-scale segmentation of their customers through another vendor. While that vendor was qualified to do the work, the segmentation analysis had resulted in 12 market segments. There’s nothing inherently wrong with that from a methodological perspective, but anyone charged with marketing products will agree that it’s incredibly difficult to split your focus in so many directions. If they tried to do so, they would likely dilute their overall messaging to the point that it simply wasn’t cohesive.

Brand X asked us to try to salvage the project by taking the initial results and refining them into a more manageable set of segments. We were able to help, but in the end, the study took roughly 50% more time and resources to complete because Brand X wasn’t specific up front about what they were looking to accomplish with the segmentation and the constraints that would need to be in place for it to be usable.

At Corona, we are always mindful of this potential blunder, so we encourage our clients to think not only about how to conduct the research, but also why the research is being conducted. We often set three to five major goals for the research up front that can be used to vet survey questions or focus group topics, ensuring the end result will meet the needs for which the research was undertaken in the first place. By understanding how you will use the results, you can design research that allows you to make those tough decisions in the end.


The cautionary tale of 5 scary strategic planning mistakes: Part II – Avoid side swipes

My blog series is chronicling the five major pitfalls of strategic planning. With years of experience under my belt leading strategic planning efforts, I’ve seen it all. When advising leaders, I warn them to watch out for “side swipes”. What is a “side swipe”, you ask?

Your strategic planning process is in motion, and out of nowhere comes another priority that collides with the strategy-setting process. Akin to being side swiped on the highway, you and your vehicle are now out of commission. Not only have you lost momentum unexpectedly as you deal with the shock of the event, you now have to turn your attention to the source of the collision. Perhaps it’s a slow-brewing challenge that’s morphed into a pressing emergency. Or your organization is experiencing high turnover in key positions on staff or the board. Or the always-successful special event is turning into a dud. Whatever the case, you now must deal with a pressing issue that distracts you from creating strategic direction.

Before beginning the strategic planning process, I recommend identifying potential “slow brewing” challenges, avoiding major events during the strategic planning timeline, and starting your process with a strong leadership team in place. No matter how prepared you are, you can be “side swiped” at any time. Being aware of this pitfall is the first step to preventing it from hijacking your planning process.

Read the other blogs in my five part series.


The cautionary tale of 5 scary strategic planning mistakes.

Part I – Don’t self-sabotage

Part III – Dismiss unrealistic expectations

Part IV – Be willing to say “no” (Coming soon!)

Part V – Don’t get too tuckered out (Coming soon!)


The cautionary tale of 5 scary strategic planning mistakes: Part I – Don’t self-sabotage

When pitching a new client, I’m often asked to reflect on “what went wrong” with another strategic planning process. Perhaps the prospective client is familiar with the other organization through professional relationships or reputation. Or, quite simply, they are curious to know if their own experiences mirror those of other leaders. This quarter, the Corona team is blogging about “what can go wrong”. With that motivation, I have presented a few of the more gruesome pitfalls to avoid. Lesson number one: don’t self-sabotage your planning effort.

Let’s face it, as leaders we sometimes get in our own way. Perhaps you find yourself distracted by other pressing matters or realize you are simply going through the motions of (yet another) strategic planning effort. While it is possible to delegate key components of the process, such as process management and idea generation, the CEO is ultimately the chief strategist.

Distracted leadership leaves the process wide open to the other four pitfalls of the planning process. The next four blogs in my series will tell the cautionary tales I’ve learned through a multitude of strategic planning engagements. Use the series to increase awareness of the common obstacles leaders face and avoid making similar mistakes in your own planning process.

Read the other blogs in my five part series, The cautionary tale of 5 scary strategic planning mistakes.

Part II – Avoid side swipes

Part III – Dismiss unrealistic expectations

Part IV – Be willing to say “no” (coming soon!)

Part V – Don’t get too tuckered out (coming soon!)


Millionaires at McDonald’s

Statistical Outliers

When Donald Trump walks into a McDonald’s, the average patron in that restaurant becomes a millionaire. Is this true? With a few calculations and assumptions, we can find out.

Forbes Magazine estimates that Mr. Trump is worth $3.9 billion (as of July 2014). We will assume 70 million people eat at McDonald’s daily and that there are 35,000 restaurants worldwide. If it takes 15 minutes to pick up an order, then there are about 21 customers in every restaurant at any one time, on average. Let’s assume that every patron, aside from Mr. Trump, has a net worth of $45,000 (the median American net worth is $44,900). Based on these figures, we can calculate the average net worth of these customers.

Average net worth = ($3,900,000,000 + 21 × $45,000) ÷ 22 ≈ $177,000,000

The result? The average customer at that McDonald’s is worth $177 million. In fact, there could be up to 4,000 customers crammed into that restaurant with Mr. Trump, and the average patron would still be a millionaire. Does this finding resonate with you? It smells a little fishy to me – or is that the day-old Filet-O-Fish?
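As a sanity check, the arithmetic above is easy to reproduce in a few lines of Python (every input here is one of the assumptions stated above, not a measurement of actual patrons):

```python
# All figures are the blog's stated assumptions.
trump_net_worth = 3_900_000_000   # Forbes estimate, July 2014
other_patrons = 21                # patrons in the restaurant at one time
patron_net_worth = 45_000         # assumed net worth of everyone else

average = (trump_net_worth + other_patrons * patron_net_worth) / (other_patrons + 1)
print(f"Average net worth: ${average:,.0f}")  # roughly $177 million

# How many ordinary patrons could be present before the average
# drops below $1 million?
n = other_patrons
while (trump_net_worth + (n + 1) * patron_net_worth) / (n + 2) >= 1_000_000:
    n += 1
print(f"The average stays above $1 million with up to {n} other patrons")  # about 4,000
```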

The fact is, when Mr. Trump walked through that grease-stained door, the average customer became a multi-millionaire. However, should we assume that all customers at this McDonald’s are very wealthy? Probably not. The trouble with the above calculation (besides the extremely low probability that Donald Trump would ever walk into a McDonald’s) is that Mr. Trump’s net worth is an outlier – his extreme wealth is very different from the rest of the population’s, and adding him to the pool of customers has a huge influence on the average net worth. It is like adding a bulldozer to your tug-of-war team; one additional player (with the capacity to pull 100,000 pounds) makes a huge difference.

When analyzing data, how can we consider and/or adjust for the influence of outliers?  Below, I outline two steps and three subsequent options.

Step 1: Outlier detection

The first step is to look at the data graphically. Most statistical software programs can produce boxplots (also known as box-and-whisker plots) that display the median (mid-point) and interquartile range (the middle 50% of the dataset), as well as mathematically identify outliers based on pre-set criteria. The boxplot below represents years lived at current residence for a survey we recently completed. It displays the median (the thick line), the quartiles (the top and bottom of the box), and the spread of the data (the lines at the ends of the dashed whiskers). The six points above the top line are identified as outliers because they are greater than the upper quartile plus 1.5 times the interquartile range – a simple calculation that can be completed by hand or with a statistical software package.
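If you'd rather not rely on a stats package's boxplot, the 1.5 × IQR rule is simple to apply directly. Here's a minimal Python sketch using made-up tenure data (the survey's actual responses aren't reproduced here):

```python
import statistics

# Hypothetical "years at current residence" responses, not the actual survey data
years = [1, 2, 2, 3, 4, 4, 5, 5, 6, 7, 8, 9, 10, 12, 14, 40, 52, 60]

# statistics.quantiles with n=4 returns the three quartile cut points
q1, _median, q3 = statistics.quantiles(years, n=4)
iqr = q3 - q1

upper_fence = q3 + 1.5 * iqr
lower_fence = q1 - 1.5 * iqr

outliers = [y for y in years if y < lower_fence or y > upper_fence]
print(f"Outliers: {outliers}")  # flags the three largest values here
```

Any point outside the fences would be plotted individually by boxplot software, just like the six points in the figure above.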

Step 2: Investigate outliers

The second step is to investigate each outlier and try to determine what may have caused the extreme point.  Was there a simple data-entry error, a misunderstanding of the units of measure (e.g., did an answer represent months rather than years), or was the response clearly insincere? When the outlier is clearly the result of a data-entry or measurement error, it is easily fixed.  However, outliers are often not easily explainable.  If you have a few head-scratchers, what should you do?

Option 1: Retain the outlier(s) and do not change your analysis

Extreme data are not necessarily “bad” data. Some people have extreme opinions, needs, or behaviors; if the goal of the research is to produce the most accurate estimate of a population, then their feedback should help improve the accuracy of the results. However, if you plan to conduct statistical tests, such as determining whether there is a reliable difference between two means, then keep in mind that the outliers may substantially increase the difference in variance between the two groups. Generally, this approach is defensible, although you might consider adding a footnote mentioning the uncertainty of some data.

Option 2: Retain the outlier(s) and take a different approach to analysis

You might find yourself in a situation where you want to retain all unexplainable outliers but report a statistic that is not strongly influenced by them. In this case, consider calculating, analyzing, and reporting a median rather than an average (i.e., mean). A median is the mid-point of a dataset: half of respondents reported a value above the median and half reported a value below it. Because medians are calculated from the rank order of data points rather than their sum, median values are more stable and less likely to swing up or down due to outliers. You can still conduct statistical tests using medians instead of means, but typically these tests are not as robust as the equivalent mean tests.
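A quick illustration of that stability, using made-up numbers: one extreme response drags the mean far from the bulk of the data, while the median barely notices.

```python
import statistics

# Hypothetical responses with one extreme value (illustrative, not real data)
responses = [3, 4, 4, 5, 5, 6, 6, 7, 250]
trimmed = responses[:-1]  # the same data without the outlier

print(f"Mean with outlier:      {statistics.mean(responses):.1f}")  # pulled above 30
print(f"Mean without outlier:   {statistics.mean(trimmed):.1f}")
print(f"Median with outlier:    {statistics.median(responses)}")    # 5 either way
print(f"Median without outlier: {statistics.median(trimmed)}")
```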

Option 3: Remove the outlier(s)

The third option you might consider is removing the outliers from the dataset.  Doing so has some advantages, but possibly some very serious consequences.  Again, outliers are not necessarily bad data, so removal of outliers should only be done with strong justification.  For example, it might be justifiable to remove outliers when the outliers appear to come from a different population than the one of interest in the research, although it is typically best to create population bounds during the project design process rather than during data analysis.

As we saw in the case of Donald Trump walking into a McDonald’s and turning everyone into a multi-millionaire, one or a few outliers can have a dramatic influence on results, particularly population averages. If you have a dataset you would like to analyze, consider taking the time to identify outliers and their contexts. By carefully considering how outliers might influence your results, you can save yourself a lot of time and head scratching. Of course, feel free to give us a call if you would like us to collect, analyze, or report on any data.


Explaining a complex world simply and incorrectly

Photo credit: http://www.tylervigen.com/

We got a kick out of Tyler Vigen’s blog, which demonstrates that mining data doesn’t mean a whole lot if you don’t know what you’re doing.

He looks at large databases to find nominal correlations between completely unrelated variables. Let’s see if you can come up with some theories to explain why divorce rates in Maine correlate perfectly with America’s per-capita consumption of margarine. Or why the number of deaths caused by falling into a swimming pool tracks very closely with the release of Nicolas Cage films.

The message, of course, is that you can’t just compare two sets of numbers and assume that there’s a relationship just because there’s a correlation.  But we still wonder whether, if we think hard enough, we can come up with a valid theory to explain why the national revenue generated by skiing facilities correlates so well with the number of lawyers in Georgia.



Refer a client to Corona, receive an iPad

Win an iPad

For the months of July and August 2014, Corona is thanking our existing clients who refer work to us. Simply refer a new client to us, and if they initiate work with Corona Insights before December 31, 2014, you’ll receive an iPad as a thank-you from us. There is no limit on how many iPads you can earn.

Be sure to either let us know that you referred someone, or make sure they notify us when they contact us, so we can give proper credit.

Some not-so-fine-print: Only one iPad will be awarded per new client.  If more than one referral was received for a new client, only the first referral will be honored.  Open to all current and past clients of Corona Insights. A “new” client is an organization that Corona Insights has not previously done work with.  Initiated work is defined as signing a contract.  The actual iPad awarded will be determined at the time of award.  Corona Insights reserves the right to change or cancel the promotion at any time.


Data for your sleepless nights

A few months ago, I purchased a fancy pedometer to start collecting more data about myself. For those of you fortunate enough to know my slothful self in real life, I’d like to interrupt your laughter to point out that one of the features I was most interested in was the pedometer’s ability to track my sleep. I’m not sure exactly how it tracks my sleep, nor how precise its measurements are, but it has pushed me to think a lot more about my sleep and about sleep in general. I decided I wanted to know how I compared to other people and to look for patterns in my own sleep data.

First of all, diving into the world of sleep data is like diving into a crazy rabbit hole. (Rabbits, by the way, sleep 8.4 hours per day, but only 0.9 hours of that are spent dreaming. Humans, however, sleep roughly 8 hours, of which 1.9 hours are spent dreaming.[1] Take that, rabbits.) Questions related to sleep are not necessarily where you would expect them to be. They are shockingly absent from the Behavioral Risk Factor Surveillance System (which measures many behaviors related to health) and yet appear on the American Time Use Survey (ATUS).

Even more interesting, you see different patterns in the data depending on the question format. In the ATUS, people track their time use via an activity diary. Basically, the diary has you record what activities you were doing, when, with whom, and where. Based on these diaries, the Bureau of Labor Statistics estimates that in 2012 Americans were getting more than eight hours of sleep per day on average. These data also show that women were getting more sleep than men, and that a woman my age was averaging almost 9 hours of sleep per night.[2]

Sadly, these numbers seemed a little high to me, and a Gallup survey seemed to agree. In a 2013 survey, when people were asked how many hours of sleep they usually got per night, people reported getting fewer than 7 hours.[3] The difference between the diary findings and the survey findings reminded me of a similar pattern in reports of what people eat. Basically, people are really bad at remembering what they ate during a week, so a daily food diary tends to be the more accurate measurement. So maybe people are also bad at recalling how much sleep they get on average during the week? However, I wonder if filling out the diary for ATUS sometimes feels embarrassing. For example, do people feel too embarrassed to admit to all the T.V. they watch/internet browsing they do, so they end up reporting more sleep?

Another sleep data source I found was this chart of the sleeping habits of geniuses.[4] I imagine that getting enough sleep probably helps all of us reach our genius potential. Based on the chart, the average amount of sleep across this sample of geniuses is about 7.5 hours, which seems reasonable. It is super interesting, though, to see how and when geniuses spread out their sleep across the day.

Back to my own data, I noticed two important things. One, the social context can have a big impact on my sleep. Beth and I went to AAPOR in May, and every night we stayed up too late discussing nerdy things that we had learned/ideas for our own analyses. This resulted in many nights of less than 7 hours of sleep. The week after AAPOR, I went to visit my sister. My sister would readily admit that she finds it almost impossible to be a functioning human being on anything less than 8 hours of sleep. Not surprisingly, I averaged more than 8 hours of sleep during that visit. So, insight number one is that I should only share hotel rooms and/or sleep at the homes of people who value sleep. Unfortunately, I don’t group my friends and family based on their sleeping habits and often I like staying up late to debate nerdy things, like what rabbits even dream about during their roughly one hour of dreaming each day. So, I’m not sure how actionable this insight really is.

Second, I noticed that I slept better when I had walked more during the day. Apparently the last laugh is on me because I’m beginning to suspect that even for those of us who really love sleep, being more physically active might be a critical component of the sleep routine.


Data-driven strategy, oh yeah.

Make a strategic breakthrough using data.

My Corona colleagues know the ins and outs of data – the good, the bad and the better not to have at all. I’ve learned a lot from them over the years. In my world as a strategic consultant, I’ve seen first-hand what the right data at the right time can do when setting strategy. Whether you are an executive running:

  • A classical music festival committed to serving a broader audience … so we can make a time honored art form relevant to younger audiences
  • A cancer-fighting nonprofit committed to increasing clinical trial participation … so we can find tomorrow’s cure today
  • A workers compensation insurance company aligning with new health care models … so we can help injured workers and employers

In each of these cases, data can truly drive strategy. How, you ask?

  • By bringing timely market opinion and perspective
  • By integrating relevant specialty expertise to advance your own thinking, and getting you to fresh insights faster
  • By serving as a springboard for fresh insights
  • By reality checking a beloved concept before more time or money is invested in it
  • By creating a common platform of information available to the entire team

The results? Focus, clarity, energy and a greater commitment to success.

What are you waiting for?


The relevance of reviews

Recently, Corona conducted focus groups, in part to understand how people make an important decision. In analyzing the groups, I found that, in seeking resources to help make that decision, people preferred referrals from their family, friends, and neighbors over quantitative data developed using a sophisticated methodology. Why? Participants viewed referrals as more trustworthy and valuable than the data we presented.

This finding illustrates the fact that data comes in many forms and the data that people use in their decision making can range from the scientific to the intuitive. Along these lines, the finding from our focus groups wasn’t particularly surprising to me since I often rely on my own “neighbors” to help guide my decision making. Whether I want to find a new dentist, the best coffee in my neighborhood, or a hotel for my next trip, I usually look at customer ratings and reviews to get other people’s opinions first. According to this article, I am what’s been called a “social researcher” since I actively seek out and read customer reviews prior to making a purchasing decision.

It turns out I’m not alone, as 88 percent of consumers say that online reviews influence their buying decisions. Not only does the review content matter, but the number of reviews that a product has can also influence decision making, as consumers tend to buy products that have more reviews. According to the New York Times, reviews from ordinary people have become an “essential mechanism” for selling just about anything. Further, the article says that, in many situations, reviews are replacing marketing departments, press agents, advertisements, word of mouth and professional critics. The importance of reviews has even led businesses to plant reviews or hire people to produce a mass of reviews under different pseudonyms.

I know that using reviews to make decisions isn’t exactly scientific, but in my personal life, I think of customer reviews as a form of qualitative data, which often answers questions like “Why did you like (or dislike) this product?” or “How did you use this product?” I’m also part of the 81 percent of consumers who use customer reviews to decide between multiple products, or to confirm that their final selection is the right one.

Interestingly, I don’t write many reviews myself. And while reviews are certainly part of my decision making, I know to take them with a grain of salt – especially since unhappy customers are more likely than satisfied customers to try to make their voices heard. Overall, however, I’ve found them to be a valuable and reliable resource. Does anyone else feel the same way?