RADIANCE BLOG

Category: Strategy & Tactics

Does This Survey Make Sense?

At Corona, it’s common for us to combine qualitative and quantitative research within a single project. We often use qualitative work to inform what we need to ask about in the quantitative phase of the research, or use qualitative research to better understand the nuances of what we learned in the quantitative phase. But did you know that we can also use qualitative research to help design quantitative research instruments, through something called cognitive testing?

The process of cognitive testing is actually pretty simple, and we treat it a lot like a one-on-one interview.  To start, we recruit a random sample of participants who would fit the target demographic for the survey.  Then, we meet with the participants one-on-one and have them go through the process of taking the survey.  We then walk through the survey with them and ask specific follow-up questions to learn how they are interpreting the questions and find out if there is anything confusing or unclear about the questions.

In a nutshell, the purpose of cognitive testing is to understand how respondents interpret survey questions and, ultimately, to write better survey questions. Cognitive testing can be an effective tool for any survey, but it is particularly important for surveys on topics that are complicated or controversial, or when the survey is distributed to a wide and diverse audience. For example, you may learn through cognitive testing that the terminology you use internally to describe your services is not widely used or understood by the community. In that case, we will need to simplify the language that we are using in the survey. Or, you may find that the questions you are asking are too specific for most people to know how to answer, in which case the survey may need to ask higher-level questions or include a “Don’t Know” response option on many questions. It’s also always good to make sure that the survey questions don’t seem leading or biased in any way, particularly when asking about sensitive or controversial topics.

Not only does cognitive testing allow us to write better survey questions, but it can also help with analysis.  If we have an idea of how people are interpreting our questions, we have a deeper level of understanding of what the survey results mean.  Of course, our goal is to always provide our clients with the most meaningful insights possible, and cognitive testing is just one of the many ways we work to deliver on that promise.


Strategy in an era of unpredictability

The strategic planning process that is used by many today was created decades ago during a more predictable time. By nature, the process is based on an underlying assumption, namely that changes and trends in the external environment can be predicted with some certainty.

So, what about changes that can’t be predicted? Or are unprecedented in our lifetime? We only have to look to our smartphones for a glimpse into the near future. As a recent piece by the World Economic Forum noted, we are in the midst of the Fourth Industrial Revolution – what many refer to as the Digital Revolution.

This one is a game changer, my friends.

 “There are three reasons why today’s transformations represent not merely a prolongation of the Third Industrial Revolution but rather the arrival of a Fourth and distinct one: velocity, scope, and systems impact. The speed of current breakthroughs has no historical precedent. When compared with previous industrial revolutions, the Fourth is evolving at an exponential rather than a linear pace. Moreover, it is disrupting almost every industry in every country. And the breadth and depth of these changes herald the transformation of entire systems of production, management, and governance.”

How to set strategy now?

  1. Drop the SWOT – this tool is out of date and too often focuses on navel-gazing instead of a true assessment of the strategic environment.
  2. Create a sense of urgency – too many organizations plod rather than sprint. Our decision-making processes are too slow or cumbersome. We don’t have agreement on the big issues (mission, vision, values and strategy), and so we can’t (or don’t) really empower people to act. And act we must.
  3. Fast twitch for the win – I don’t see enough evidence of agility either. Develop your fast-twitch muscles, because you’re going to need them.
  4. Think map app – today’s map app alerts you in real-time to the best route. Your strategic plan should do the same.

Remember, strategy is a choice. It focuses attention, resources and commitment.

And strategy isn’t static.

Got your latest app downloaded?


Shhhhhhhh!! Did you know there are three secrets to strategic success?

I’m often asked, “How can we ensure our strategic plan doesn’t sit on a shelf?” The question typically arises as executives and boards consider how best to approach the planning process – and which consultant can facilitate a successful outcome.

By its very nature, a strategic planning process raises expectations and anxiety. And no plan is worth the investment if it sits on a shelf.

While the question is spot-on, it’s being asked of the wrong person. The next time I find that question directed at me I think I’ll pull the small mirror out of my purse as I say, “What a great question. It’s simple really. Success actually begins with you. You’ll need three things: committed leadership, access to resources and accountability for results. I can help you get there, but you’ve got to keep the dust off the plan.”

Wanna know a secret? Strategic success is as easy as 1, 2, 3.

Making improvements through A/B testing

Did you know that when you visit Amazon.com, the homepage you see may be different from the one someone else sees, even beyond the normal personalized recommendations? It’s been widely reported that Amazon continually tweaks its homepage by running experiments, or A/B tests (sometimes referred to as split tests), to tease out what makes a meaningful impact on sales. Should this button be here or there? Does this call to action work?

For some research questions, asking people their opinion yields significant insight. For others, people just cannot give you an accurate answer. Would you be more likely to open an email with a question as a subject line or with a bold statement? You don’t really know until you try.

So, how does this work? In essence, you’re running experiments, and as with any scientific experiment, you will want a control group (where you don’t change anything) and a treatment group (where you alter a variable). Ideally, you randomize people into each group so you don’t inadvertently influence your results through how people were selected.

So now you have two groups. While you may want to test several items, it is easiest to test one item at a time (and run multiple experiments to test each subsequent item). This will help you isolate the impact of your change – change too many things and you won’t know what made the difference, or whether some changes were working against each other.

Finally, launch the tests and measure what happens. Did open rates differ between the two? Did engagement increase? Differences aren’t always dramatic, but even a slight change at scale can have a significant impact. For instance, increasing the response rate by 2% on a survey invitation sent to 5,000 people would mean 100 additional responses for essentially no additional cost. If the change costs money – for instance, one marketing piece costs more than the other – then a cost-benefit analysis will need to be performed. Sure, “B” performed better, but did it perform well enough to cover the additional expense?
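When the two groups do differ, it helps to check whether the gap is larger than chance alone would produce. Here is a minimal sketch of a two-proportion z-test in Python; the send counts and open rates are invented for illustration:

```python
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Two-sided z-test for a difference between two proportions."""
    p_a, p_b = success_a / n_a, success_b / n_b
    # Pool the proportions under the null hypothesis of "no difference"
    pooled = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Convert |z| to a two-sided p-value via the normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical email test: subject line A vs. B, 1,000 sends each
z, p = two_proportion_z(180, 1000, 220, 1000)  # 18% vs. 22% open rate
print(f"z = {z:.2f}, p = {p:.3f}")
```

If the p-value falls below your chosen threshold (0.05 is conventional), the difference is unlikely to be luck; otherwise, keep testing or try a larger sample.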

A few final quick tips: A/B testing is an ongoing endeavor. Maximum learning will occur over time by running many experiments. Remember, things change, so running even the same experiment over and over can still yield new insights. Finally, you don’t always have to split your groups in half. If you have 2,000 customers, you don’t need to split them into two groups of 1,000. Peeling off just 500 for an experiment may be enough and lower the chance of adverse effects.

Ok, enough with the theoretical. How does this work in real-life?

Take our own company as an example. Corona engages in A/B testing, both for our clients and for our own internal learning. For instance, we may tweak survey invitations, incentive options, or other variables to gauge the impact on response rates. Through such tests we’ve teased out the ideal placement for the survey link within an email, whom such requests should come from, and many other seemingly insignificant variables (though they are anything but insignificant).

How about your organization? Let’s say you’re a nonprofit, since many of our clients are in the nonprofit sector. Here are a few ideas to get you started:

  • eNewsletters. Most newsletter platforms have the ability to do A/B testing. Test subject lines, content, colors – everything. Test days and send times.
  • Website. Depending on your platform, this may be easy or more difficult. Test appeals, images, and donation calls to action.
  • Ad testing. Facebook ads, Google ads, etc. Most platforms allow you to make tweaks to continually optimize your performance.
  • Mailings. Alter your mailing to change the appeal, call to action, images, or even the form of the mailing (e.g., letter vs. postcard).
  • Programming. In addition to marketing and communications, even your services could be tested. What service delivery model works best? Which creates the biggest change?

What other ideas would you want to test?


Where are we now? The new next era nonprofit

I spent the other afternoon sitting around a large table chatting with professionals from across the sector about leadership, and the competencies that an effective leader will need in 2025. As we were chatting about today’s realities – and the social, political, technical and economic factors affecting nonprofits – it struck me that we’ve been here before. Or at least I have. Where’s that, you may ask? Contemplating the “next era” of the sector.

While our social consciousness is slow to evolve and even slower to change (think social equity and gender identity), we are witnessing change in the form of driverless cars, “smart” cities, neuroscience, and the record number of Americans not in the workforce. Those topics weren’t showing up on my Facebook feed five years ago. Back then we weren’t contemplating car-free micro-apartments in Denver either.

What else is on the nonprofit leader’s to-do list today? Six recurring topics with new twists.

  1. $ - Figure out what impact investing really is and whether or not we can do it. I know you are secretly wondering if this really is a game changer or simply a spin on the same old, same old. It’s a game changer.
  2. Inclusiveness – Learn how we can create inclusive and accessible organizations that welcome and engage diverse people. We can’t keep kicking this can down the road.
  3. Innovation – Explore the edges of our work, seeking new ideas from unexpected places and leveraging tools like design thinking.
  4. Mission impact – Admit to ourselves that we don’t really understand our customers or how to positively impact their lives in a meaningful way and that we may need to toss out some of our favorites.
  5. Engagement – Realize that too often we treat people transactionally. We think of them in buckets – volunteers, Facebook followers, donors, etc. We haven’t optimized our business models to cultivate engagement. Check out my Synergistic Business Model™ if you’d like to learn more about this all-too-often ignored cornerstone of the nonprofit business model.
  6. Sustainability – Fess up that our business models aren’t really sustainable and that we need thoughtful, committed and generous people to stand by us for the next few years while we invest in figuring things out – or, more bravely, exit the market and let someone new and fresh bring 2025 solutions to the marketplace.

There are no bright, defining lines between the sectors, only smudges that get fainter every time we step on them. Younger generations couldn’t care less about your tax status. They want to know you are authentic, relevant, impactful and efficient. They expect you to do good. Period. Gen Y and the boomers are learning from them.

What competencies will a nonprofit leader likely need in 2025? My list begins with “intelligence” and the courage to explore, experiment and collaborate. Higher education is looking at multi-disciplinary learning. Perhaps nonprofits need to consider busting their siloed approaches too.

What’s on your list?

2025 will be here before we know it. Are you ready?


Reluctance: the antithesis of leadership

All too often, strategic success is stymied by reluctant leadership. Reluctance can be seen in behaviors small and large. In essence, it is a failure to act. That action may be as simple as stepping up to fill a gap. Those small misses create a culture of excuses, shrugged shoulders, and unheeded calls for help. It leads to a false sense of comfort.

At the strategic level, reluctance manifests in the inability to make decisions – or its opposite, going along with a poor decision rather than speaking up and calling the question. Flip sides of the same coin, both result in missteps, poor investments, missed opportunities and a culture of excuses. But hey, we still feel comfy, don’t we?

There’s the old saying that if you want something bad enough you’ll stare fear in the face to achieve it. Leadership requires both courage and the ability to face discomfort for the larger good. This is especially true when facing hard truths and ensuring decisions are truly strategic.

As you prepare for 2016, ask yourself, “If not me, then who?”

Let your answer be “me.”


Predicting The Future

In one form or another, much of market research is aimed at predicting the future. Whether you are considering opening a new line of business, tweaking your advertisements, or just trying to serve your constituents better, the key question is almost always some form of “If we do X, then what will happen?” However, when crafting research questions, it is important to keep in mind that not all questions are created equal when it comes to predicting future behavior. Respondents tend to answer surveys rationally, yet anyone who has been involved in marketing for long will agree that consumers are anything but rational.

[Image: Henry Ford quote]

The key issue to consider when designing research questions to predict future behavior is simply whether human beings are able to accurately answer your question.  Here are a few scenarios to consider:

Scenario 1: Media Choice

Let’s say you wanted to know what types of media would be most effective at reaching your target audience.  It might seem intuitive to simply ask a question such as:

[Image: advertising question]

There’s nothing necessarily wrong with that question, and in fact we at Corona use similar questions here and there when we want to at least get a feel for where consumers might look for information.  However, people are notoriously awful at predicting how they will react to something in the future.  Many will likely name the usual media suspects – TV and radio – without thinking through what they actually pay close attention to.  They may not consider more unique advertising media such as social media, outdoor advertising, direct mail, and many others.  Instead, it might be more reliable to ask the question about what they can recall from the past and make the assumption that their past behavior will likely reflect their future behavior:

[Image: advertising question 2]

In either case, any time you can triangulate your survey findings with other data (e.g., past ad performance, media consumption studies, etc.), your conclusions will be stronger.

Scenario 2: Likelihood of Purchase

Let’s instead say that you are launching a new product and are trying to forecast how many people will purchase your product.  The most straightforward way of asking that question might simply be:

[Image: purchasing question]

The challenge with that question is that it reduces an extremely complex purchasing decision to little more than a “yes or no” response. Respondents may find the product attractive, but what will it cost? Where will it be sold? What will the economy be like once the product is available? Are people already familiar with the product, or will they need to learn more about it to make a decision? Will other competitive products be available at the same time? Add to those issues the fact that people almost always overstate how likely they are to purchase something, and you get very tenuous results – almost always a best-case scenario. A respondent can make a reasoned evaluation of their likelihood to purchase while taking a survey, but the final decision is an emotional one that can be influenced by all of these factors and more.

Instead, surveys are more effective at helping you understand the product attributes that will drive purchase. For example, you could inform messaging about the product based on consumers’ reactions to a series of statements about how valuable its features seem. If it is imperative to forecast future purchase behavior, a different approach may be more effective, such as A/B testing to compare how different options work in the real world, test markets before your full launch, or other advanced analytical techniques.

Scenario 3: Optimal Price Point

As a final scenario to consider, let’s say you are launching a new product and want to know how to price it so that you can both drive sales and maximize your revenue.  You may initially think that simply asking the question outright will be most effective:

[Image: optimal price point question]

This question (and a variety of other similar questions you could use) again simplifies a very complex purchasing decision into a straightforward answer.  What will the respondent’s financial situation be at the time of purchase?  Are there sales on competing products?  Will they be attracted by the packaging?  Will the product be sold in a small, boutique shop or a large superstore?  All of these can have significant impacts on what people are willing to pay that would not be reflected in a survey response.

There’s nothing necessarily wrong with addressing price in a survey, but a typical survey will be limited in its ability to give you accurate information about optimal prices. You can use a straightforward survey’s findings to get a feel for reactions to prices, but a final decision on pricing should be based on more data points than the survey results alone.

That said, there is a specific type of survey (called a conjoint survey) that is specifically designed to help determine optimal prices by asking consumers to make choices between a variety of combinations of product or service attributes.  It’s a considerably more complex process, but is by far the most reliable way of understanding the value that consumers place on various attributes and can help to accurately inform your pricing strategy.  Similar to the discussion for Scenario 2, test markets could also be a valuable option to understand how consumers will react to various prices.
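To make the idea concrete, here is a deliberately simplified sketch of the math behind conjoint analysis. A real conjoint study uses choice tasks and far more data; in this toy version, invented average ratings for six hypothetical product profiles are regressed on dummy-coded attribute levels to recover a “part-worth” utility for each level:

```python
import numpy as np

# Hypothetical profiles, dummy-coded. Attributes: price ($10 baseline,
# $15, $20) and packaging (plain baseline, premium).
# Columns: intercept, price_$15, price_$20, premium_package
X = np.array([
    [1, 0, 0, 0],  # $10, plain
    [1, 1, 0, 0],  # $15, plain
    [1, 0, 1, 0],  # $20, plain
    [1, 0, 0, 1],  # $10, premium
    [1, 1, 0, 1],  # $15, premium
    [1, 0, 1, 1],  # $20, premium
], dtype=float)

# Made-up mean preference ratings (1-10) for each profile
y = np.array([7.0, 5.5, 3.8, 8.2, 6.9, 5.1])

# Ordinary least squares estimates the part-worth utility of each level
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
for name, b in zip(["baseline", "price $15", "price $20", "premium pkg"], coef):
    print(f"{name:12s} {b:+.2f}")
```

The fitted coefficients summarize, for instance, how many rating points are lost by moving from the $10 baseline to $20, versus how many are gained by premium packaging – exactly the trade-off information a pricing strategy needs.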

~

Despite these challenges with predicting future behavior, surveys remain one of the most valuable tools for informing product/service development and marketing.  Surveys are highly effective at understanding current behaviors, measuring awareness, understanding pain points that a product or service could address, understanding attitudes and perceptions of a product or service, and much more.  However, keeping in mind the types of information that respondents are able to accurately provide will ensure that the survey’s results are as accurate and actionable as possible when developing your future strategy.



The Challenging In-Between: Bridging the gap between visionaries and operational experts

Over the years I’ve discovered that nonprofit executives and board members typically fall into two main categories: those who are boldly aspirational and those who are decidedly tactical. The first group focuses on the big idea and its power to move people. They are lofty, passionate and effervescent. To the operationally focused person, they appear to dodge the important nuts and bolts. On the other hand, executives whose natural talents lie in operations and the ability to get things done (often in spite of their visionary peers) are naturally challenged to let go and dream big. They seek the known. Their tendency to quickly dive into the weedy details is off-putting to the visionary.

Unfortunately, these divergent thinking styles can have little in common, and the two groups often frustrate the heck out of each other. They use different language, or interpret a common term in opposite ways, and don’t know how to create the connective tissue that binds the two important orientations together.

What is the bridge? Strategy. The essence of strategy lies in charting the unknowns – where the industry is going, what customers will need (and demand) in the future, what donors will expect, and how the community will be doing. It also rests in a clear articulation of how the organization uniquely meets its customers’ needs in ways that rivals can’t or don’t. The strategist navigates unknowns and uncertainties, keeping an eye on the three-to-five-year horizon while leading the development of a clear strategy: an articulation of what the organization will focus on over that period, based upon well-founded decisions about the objective to be achieved, the scope within which it will work, and the competitive advantage to be leveraged.

I’m a big believer in the power of a strategy statement as described in the classic 2008 Harvard Business Review article by David J. Collis and Michael G. Rukstad, “Can you say what your strategy is?” It is my go-to resource in this work, and I cannot recommend it strongly enough.

As a consultant I’m often the bridge builder – the strategy seeker – bringing together the operationally- and aspirationally-oriented executives. This bridge building is iterative. It takes time to spark a-ha moments, establish common language, and build a team of executives focused on the same thing – future strategy.


Navigating Time and Space: Why we Include Geography

When I was a kid living in Colorado Springs, my family frequently drove to Denver.  We would go to watch baseball games or visit museums.  It is about 60 miles between Colorado Springs and Denver.  Back then, the speed limit was 60 to 65 miles per hour, and I had fun mentally calculating that since our Jeep was traveling about one mile per minute, then we would reach Denver in about one hour. Of course, this was all before the electronic geographic information revolution.

These days, we can quickly discover the distance from one place to another with just a few clicks on Google Maps or onboard GPS devices, and there are numerous software programs that calculate typical drive, bike or walk times.
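Under the hood, those tools start from latitude/longitude coordinates. As a small illustration (not how a routing engine computes drive times), the straight-line “as the crow flies” distance comes from the standard haversine formula; the city-center coordinates below are approximate:

```python
import math

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle ("as the crow flies") distance between two points."""
    r = 3958.8  # Earth's mean radius in miles
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

# Approximate city-center coordinates
d = haversine_miles(38.8339, -104.8214,   # Colorado Springs
                    39.7392, -104.9903)   # Denver
print(f"{d:.0f} miles")
```

The result lands in the low 60s of miles, consistent with the roughly 60-mile drive described above; actual road distance is a bit longer because highways aren’t straight lines.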

Here at Corona, we leverage these geographic information technologies for many of our research projects when we suspect that there is a spatial (i.e., geographic) relationship with our key variables. In other words, knowing the distance or travel time between two locations can reveal key insights. For example, let’s say the Colorado Rockies baseball team wanted us to survey fans living in the Metro Area to understand what barriers prevent them from traveling to Coors Field to watch a game. Our analysis would likely explore their opinions about going to a game (e.g., are tickets well priced, are games played at convenient times, etc.), but we might expect that these opinions are influenced, in part, by the distance or travel time between their home and the ballpark. Fans living far from the ballpark might face stronger convenience barriers than fans living closer to it.

To explore this hypothesis, we can use GIS software to plot survey respondents’ homes. Then we decide whether to analyze by distance or by travel time. This choice depends on the research question we are trying to answer, as well as the context of the research. In a study of opinions about sound or light pollution, analyzing distance clearly makes more sense. If a behavior of interest involves walking or biking, then distance might matter more than travel time, considering that walking 100 miles is a significant feat while people frequently travel that distance by car. Alternatively, in a city where all the streets are linear and the speed limits are the same, drive time would be directly related to distance, so the unit of measurement wouldn’t matter. However, in many of our projects, such as the Coors Field example, we are most interested in drive time. Using drive times has a big advantage when the transportation system is not linear, which is often the case due to interstate highways, bridges, mountains, canyons, no-travel zones, construction, and a host of other reasons. And because drive time during rush hour is likely longer than on an early weekend morning, our software allows us to specify a drive time down to the day and hour.

So how does including geographic data benefit analysis and improve insights? Most simply, we create custom segments based on distance, and we create easy-to-understand graphs that cross results to other questions by this variable. Segmentation is a good start, but we rarely stop there. On many projects, the research demands more rigorous results, in which case we convert the data so that we can apply more advanced analyses that tell us the strength of relationships to other key variables. For example, we can find out the extent to which drive time to Coors Field predicts fans’ perceived barriers to going there for a game. In fact, we can explore multiple variables (e.g., ticket prices, fan devotion, and drive time) simultaneously to reveal patterns that would otherwise be difficult to tease apart. In some cases, we calculate drive times to other sites, such as other leisure attractions. We then incorporate that data into the analyses so that the results more closely reflect the real world, where decisions on how to spend leisure time are more complex.
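As an illustrative sketch of that kind of analysis – using entirely simulated data, with invented coefficients and scales rather than any actual findings – a multiple regression can estimate how strongly drive time and fan devotion each relate to a perceived-barrier score:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 500  # hypothetical survey respondents

# Made-up predictors: drive time (minutes) and fan devotion (1-10 scale)
drive_time = rng.uniform(5, 90, n)
devotion = rng.uniform(1, 10, n)

# Simulated "convenience barrier" score: rises with drive time,
# falls with devotion, plus noise (these true effects are invented)
barrier = 1.0 + 0.05 * drive_time - 0.30 * devotion + rng.normal(0, 1, n)

# Fit a multiple regression to recover the strength of each relationship
X = np.column_stack([np.ones(n), drive_time, devotion])
coef, *_ = np.linalg.lstsq(X, barrier, rcond=None)
print(f"drive-time effect: {coef[1]:+.3f} barrier points per minute")
print(f"devotion effect:   {coef[2]:+.3f} barrier points per devotion point")
```

Because the data were simulated with known effects, the regression recovers roughly +0.05 barrier points per minute of drive time and -0.30 per point of devotion – the kind of "strength of relationship" summary described above.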

While geography won’t provide all of the answers to our research questions, distance and drive time can be key variables that help explain what’s going on. By using geographic technologies, we can efficiently explore these variables and sharpen our findings and recommendations. In other words, it helps us paint a more complete picture.

Send us an email if you would like to discuss how analyzing spatial patterns could help answer your most important questions.


Incorporating Exercises into Focus Groups

At Corona Insights, we are always researching best practices for the work we do.  In the world of qualitative research, this often means best practices for conducting focus groups.  Over the years, we have learned many tips and tricks for conducting focus groups, which includes incorporating exercises into our discussions.  There are a wide range of exercises and activities we use depending on the topic of discussion.  These exercises can include everything from drawing ideas to ranking priorities to testing messaging or ads.

Incorporating exercises into focus groups serves several important purposes:

  • It gives participants an opportunity to think about topics in a different way. It can sometimes be hard for participants to fully think through a topic when they are expected to quickly answer a question.  Allowing them more time to think about the topic in an exercise helps encourage different thinking and promotes answers and opinions that are below top of mind.
  • It encourages those who are quiet to express their opinions. Some participants are quieter by nature, so it can be a challenge to hear their opinions in a group of 8-10 other people.  Incorporating exercises, and having participants share their thought process for completing the activity, ensures that even the shyest participants take part.
  • It makes the group more interesting. Sitting in a room listening to someone ask questions for two hours can be exhausting, especially if the group takes place in the evening. Incorporating exercises into a focus group breaks up the model of the moderator asking questions and participants answering, and hopefully makes the group more fun!
  • It helps break up groupthink. Sometimes if there are strong personalities in the room, or if a topic is particularly controversial, participants can act as if they agree with each other on certain topics, even if this is not truly the case. This can also happen if most participants haven’t given the topic a lot of previous thought, and a few participants have more knowledge on the topic than others.  Having the participants work and think individually during an exercise ensures that the group is not being influenced by groupthink.

So, if you ever attend a Corona focus group, don’t be surprised to see participants doing more than just answering questions (and hopefully having more fun and expressing more thoughtful and insightful opinions because of it).