How Researchers Hire

Corona Insights recently went through a round of hiring (look for a blog post soon about our newest member), and while many of our hiring steps may be common, it occurred to me that our process mirrors the research process.

  • Set goals. Research without an end goal in mind will get you nowhere fast. Hiring without knowing what you’re hiring for all but guarantees a poor match.
  • Use multiple modes. Just as approaching research through several methodologies (e.g., quant, qual) yields a more complete picture, so too does a hiring process with multiple steps: reviewing resumes (literature review), screening calls (exploratory research), testing (quantitative), several rounds of interviews (qualitative), and a mock presentation.
  • Be consistent. Want to compare differences over time or between different segments? Better be consistent in your approach. Want to compare candidates? Better be consistent in your approach.

I could go on about the similarities (drawing a broad sample of applicants?), but you get the idea.  The principles of research apply to a lot more than just research.

And as with any recurring research, we reevaluate what worked and what could be improved before the next iteration.  Our process therefore changes a little each time, but the core remains the same – asking the right questions and analyzing data with our end goals in mind.  Just like any good research project.

Stay tuned for a blog post about our new hire.



Haunted by Old Survey Questions

“As the Corona team bravely entered the haunted project file, they heard a strange sound. They quickly turned to the right to see old surveys and analyses covered in cobwebs. They shuddered. Suddenly a weird shadow crossed their faces. As they looked up, they could barely make out what it was…a report? An old invoice? No, it couldn’t be! But it was. It was the original survey, completely unchanged since 1997…AAAAHHHH!”

The above might be a slight exaggeration. If anything, we do try to keep our screaming to a minimum at work. However, I do think that often organizations can become “haunted” by old survey questions that they cannot seem to escape.

Obviously, there can be value in tracking certain questions over time. This is especially true if you want to determine the effect of certain programming or events. However, we sometimes see organizations that are afraid to change or remove any questions at all year over year, even if it is unclear whether all the questions are useful anymore.

As pointed out previously, it is a good idea to think through the goals of your research before you start a project. If you have not seen any changes in a measure over time and/or you did not do anything intentionally to move a measure in the past year, you might consider updating your survey. And there are a ton of different ways to do this.

You can ask key measures every other year and, in between, ask totally new questions that let you dig deeper into a certain issue, test new marketing concepts, and so on. You can ask a smaller set of only the critical key measures every year but rotate in a subset of new questions. You can ask the same questions, but of a new audience. You can ask the same questions, but try out a new type of analysis. For example, instead of just reporting that 55% of donors strongly believe X and 31% strongly believe Y, we could look at which belief predicts the amount that donors give. We might find that belief X predicts donation amount more strongly than belief Y, and that finding might change your entire marketing campaign.
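To make that last idea concrete, here is a minimal sketch of such an analysis in Python using statsmodels. The donor data, belief scales, and column names are all invented for illustration:

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical donor data: agreement with beliefs X and Y (1-5 scale)
# and the amount each donor gave.
donors = pd.DataFrame({
    "belief_x": [5, 4, 5, 2, 3, 5, 1, 4, 2, 5],
    "belief_y": [3, 5, 2, 4, 3, 2, 5, 3, 4, 1],
    "donation": [250, 120, 300, 60, 90, 275, 40, 150, 55, 320],
})

# Regress donation amount on both beliefs at once, so each coefficient
# reflects that belief's contribution while holding the other constant.
X = sm.add_constant(donors[["belief_x", "belief_y"]])
model = sm.OLS(donors["donation"], X).fit()

print(model.summary())  # compare the belief_x and belief_y coefficients
```

If belief X’s coefficient is larger and statistically significant while belief Y’s is not, that is exactly the kind of finding that could redirect a campaign.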

Again, tracking questions over time can definitely be important. But it is worth considering whether you are asking certain questions because they are still useful or just because you have always asked them. If the latter is your reason, it might be time to rethink your survey goals and questions.

Don’t be haunted by old survey questions. Let Corona help you clear out those ghosts and design a survey tailored to meet your current ghouls…er, goals.


New Case Study: 1920’s Era Town

We always enjoy our successful engagements with our clients, but too often we can’t share our work.  So we get even more excited when we have the opportunity to share our work, along with how it benefited a client.

Our latest case study is about a market feasibility study Corona conducted for a 1920’s Era Town concept.  View the case study here, including more on the concept, how Corona helped the founder analyze the feasibility, and the results.

Be sure to check out our other case studies and testimonials too.


Asking the “right” people is half the challenge

We’ve been blogging a lot lately about potential problem areas for research, evaluation, and strategy. In thinking about research specifically, making sure you can trust results often boils down to these three points:

  1. Ask the right questions;
  2. Of the right people; and
  3. Analyze the data correctly.

As Kevin pointed out in a blog nearly a year ago, #2 is often the crux.  When I say “of the right people,” I mean making sure that the people you include in your research represent the population you want to study.  Deceptively simple, but there are many examples of research gone awry due to poor sampling.

So, how do you find the right people?

Ideally, you have access to a source of contacts (e.g., all mailing addresses for a geography of interest, email addresses for all members, etc.) and can then randomly sample from that source – the “random” part is crucial, as it is what later allows you to interpret the results for the overall, larger population.  However, those sources don’t always exist, and a purely random sample isn’t always possible.
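To make the ideal case concrete, here is a minimal sketch of drawing a simple random sample from a contact list in Python. The contact list, sample size, and seed are all hypothetical:

```python
import random

# Hypothetical contact list; in practice this might come from a
# membership database or a file of mailing addresses.
contacts = [f"member_{i}@example.org" for i in range(1, 5001)]

# Seed for reproducibility, then draw a simple random sample of 400.
# Every contact has an equal chance of selection, which is what lets
# you generalize from respondents back to the full population.
random.seed(42)
sample = random.sample(contacts, k=400)

print(len(sample), sample[:3])
```

Regardless of whether a fully random draw like this is possible, here are three steps you can take to ensure a good-quality sample: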

  1. Don’t let just anyone participate in the research.  As tempting as it is to simply email out a link or post a survey on Facebook, you can’t be sure who is actually taking the survey (or how many times they took it).  While these open forms of feedback can provide some useful input, they cannot be used to say “my audience overall thinks X.”  The fix: Limit access through custom links, personalized invites, and/or passwords (see the short token sketch after this list).
  2. Respondents should represent your audience. This may sound obvious, but having your respondents truly match your overall audience (e.g., customers, members, etc.) can get tricky.  For example, some groups may be more likely to respond to a survey (e.g., females and older persons are often more likely to take a survey, leaving young males underrepresented). Similarly, very satisfied or dissatisfied customers may be more likely to voice an opinion than those who are indifferent or at least more passive. The fix: Use proper incentives up front to motivate all potential respondents, screen respondents to make sure they are who you think they are, and statistically weight the results on the back end to help overcome response bias (see the weighting sketch after this list).
  3. Ensure you have enough coverage.  Coverage refers to the proportion of your population or audience that you can reach.  For example, if you have contact information for 50% of your customers, then your coverage is only 50%.  This may or may not be a big deal – it depends on whether those you can reach differ from those you cannot.  A very real-world example is telephone surveys: landline coverage of the general population is declining rapidly and is now nearing only half, and, more importantly, the type of person you reach via a landline versus a cell phone survey is very different.  The fix: The higher the coverage, the better.  When you can only reach a small proportion via one mode of research, consider using multiple modes (e.g., online and mail) or look for a better source of contacts.  One general rule we often use is that at least 80% coverage of a population is probably OK, but always ask yourself, “Who would I be missing?”
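
On the first fix above (custom links), here is a minimal sketch of generating one-time survey tokens in Python. The invite list, survey URL, and token length are all hypothetical:

```python
import secrets

# Hypothetical invite list.
contacts = ["ana@example.org", "ben@example.org", "cam@example.org"]

# One hard-to-guess token per invitee. Storing this mapping lets you
# verify who responded and reject duplicate or uninvited submissions.
tokens = {email: secrets.token_urlsafe(16) for email in contacts}

for email, token in tokens.items():
    print(f"{email}: https://survey.example.org/s?respondent={token}")
```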

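And on the second fix (statistical weighting), here is a minimal sketch of post-stratification weights. The population and sample figures are invented for illustration:

```python
# Hypothetical example: the population is 50% female / 50% male,
# but the respondents skew female, so male respondents must count
# for more and female respondents for less.
population_share = {"female": 0.50, "male": 0.50}
sample_counts = {"female": 300, "male": 150}

n = sum(sample_counts.values())
sample_share = {group: count / n for group, count in sample_counts.items()}

# A group's weight is its population share divided by its sample share.
weights = {group: population_share[group] / sample_share[group]
           for group in sample_counts}

print(weights)  # {'female': 0.75, 'male': 1.5}
```

Each respondent’s answers then count in proportion to their group’s weight, so the weighted totals match the population’s actual makeup.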
Sometimes tradeoffs have to be made, and that can be OK when the alternative isn’t feasible.  But at least being aware of those tradeoffs will help you interpret the results later.  Books have been written on survey sampling, but these initial steps will have you headed down the right path.

Have questions? Please contact us.  We would be happy to help you reach the “right” people for your research.


The cautionary tale of 5 scary strategic planning mistakes: Part V – Don’t get too tuckered out

The scariest proposition of all is creating a strategic plan that never gets implemented. Strategic plans are worth their weight in gold when they become a blueprint for future progress. As my final word to the wise, I advise leaders undertaking the strategic planning process to hold onto the momentum it creates and let it carry them through the first years of implementation (the hard part).

I’ve long said that an organization lives in a parallel universe when engaged in strategic planning: you have to remain attentive to the present while you focus on the future. The board’s approval of the completed plan is only the beginning. If there isn’t energy and enthusiasm left after the planning process, then you know the next few years of implementation are going to feel l-o-n-g, and it is only a matter of time before some combination of pitfalls 1–4 (link) sneaks into the day-to-day.

This blog concludes my five-part series on the scary tales of strategic planning. I encourage every leader to consider these lessons as they devote themselves to being strategic. You can avoid these pitfalls, and many others, by trusting an expert to be your strategic consultant. Years of experience have given me the foresight to help my clients succeed in giving their organizations a truly strategic plan.

Miss the first four blogs in this series? Feel free to start at the beginning, or pick the topic that most resonates with you.

 

The cautionary tale of 5 scary strategic planning mistakes.

Part I – Don’t self-sabotage

Part II – Avoid side swipes

Part III – Dismiss unrealistic expectations

Part IV – Be willing to say “no”


Who you gonna call?

With Halloween approaching, we are writing about scary things for Corona’s blog. This got me thinking about some of the scary things that we help make less scary.  Think of us as the people who check under the bed for monsters, turn on lights in dark corners, bring our proton packs and capture the ectoplasmic entities … wait, that last one’s the Ghostbusters.  But you get the idea.

As an evaluator, I know we often have a scary reputation.  There is a great fear that evaluators will conclude your programs aren’t working, and that this will be the end of funding and the death of your programs.  In reality, a good evaluator can be an asset to your programs (a fear-buster, if you will) in a number of ways:

  1. Direction out of the darkness.  Things go wrong … that’s life.  Evaluation can help figure out why and provide guidance on turning it around before it’s too late.  Maybe implementation wasn’t consistent, maybe some outcome measures were misunderstood by participants (see below), maybe there’s a missing step in getting from A to B.  Evaluators have a framework for systematically assessing how everything is working and pinpointing problems quickly and efficiently so you can address them and move forward.
  2. Banisher of bad measures.  A good evaluator will make sure you have measures of immediate, achievable goals (as well as measures of the loftier impacts you hope to bring about down the road), and that your measures are measuring what you intend (e.g., that questions are not confusing participants or being misunderstood and answered as the opposite of what was intended).
  3. Conqueror of math.  Some people (like us) love the logic and math and analysis of it all.  Others, not so much.  If you’re one of the math lovers, it’s nice to have an evaluation partner to get excited about the numbers with you, handle the legwork for calculating new things you’ve dreamed up, and generally provide an extra set of hands for you.  If you’re not so into math, it’s nice to be able to pass that piece off to an evaluator who can roll everything up, explain it in plain language, and help craft those grant application pieces and reports to funders that you dread.  In either case, having some extra help from good, smart people who are engaged in your work is never a bad thing, right?

This fall, don’t let the scary things get in your way.  Call in some support.


The cautionary tale of 5 scary strategic planning mistakes: Part IV – Be willing to say “no”

With 14 years of experience helping organizations create strategic plans at Corona, I’ve seen many stumbles. This quarter, our firm is authoring content about “what can go wrong” in our work. On this topic, I have created a five-part blog series to help leaders avoid the common mishaps I’ve witnessed. My fourth lesson for leaders: be willing to say no.

A strategy must be focused by design. Period. The best strategy sets a recognizable stake in the ground. When a strategy is too broad or too vague, an organization struggles to devote resources to the appropriate priorities. For example, you may need to do some fence-mending with recalcitrant staffers who aren’t otherwise on board with the new direction. Too often, the experience of strategic plan implementation is muddied by he-said/she-said differences in view: “Hey, I thought we were going to do X. What do you mean we are doing Y?” Then, presto, you’ve got a stalemate. Unwilling to admit the error, we put the plan on the proverbial shelf while we sheepishly blame it for a lack of results.

Creating a strategic plan takes a leader who can head off a stalemate over the organization’s direction by addressing differences proactively. Building consensus is key to creating a plan that is workable. The next blog (link) in the series addresses what happens when you don’t say “no” and the planning process becomes an exhausting feat.

Read the other blogs in my five part series.

The cautionary tale of 5 scary strategic planning mistakes.

Part I – Don’t self-sabotage

Part II – Avoid side swipes

Part III – Dismiss unrealistic expectations

Part V – Don’t get too tuckered out


The cautionary tale of 5 scary strategic planning mistakes: Part III – Dismiss unrealistic expectations

This quarter, the Corona team is blogging about “what can go wrong.” The theme inspired me to write a five-part series about the common hazards I’ve witnessed in the strategic planning process. In review: avoid self-sabotage and side swipes. Lesson number three: I advise clients to start the process with realistic expectations.

Strategic planning processes go wrong when they are expected to achieve Herculean feats that actually have nothing to do with the real work of setting strategy. Those feats are most often associated with the people side of the organization and its culture. The process of setting strategy must be concerned with the external environment – most notably with market, customer, industry, and macro conditions. Attending to the people side is important too, but don’t expect a strategic planning process to serve as the primary intervention for organizational change. If you need to align around a common vision and guiding principles, then commit to doing that philosophical work. But please don’t confuse it with the work required to set a true strategy.

If you find yourself holding onto unrealistic expectations, you will likely face conundrum number four: the inability to say “no”. Stay tuned for part four (link) of my series.

Read the other blogs in my five part series.

The cautionary tale of 5 scary strategic planning mistakes.

Part I – Don’t self-sabotage

Part II – Avoid side swipes

Part IV – Be willing to say “no”

Part V – Don’t get too tuckered out


Begin with the end in mind

When we think about the pitfalls of conducting market research, our minds tend to focus on all of the mistakes you can make when collecting data or analyzing the results.  You can find other posts on this blog, for example, that discuss why it is important to collect data in a way that can be generalized to the entire universe being studied, why intentions do not necessarily translate into actions, and why correlation does not equate to causation.

But even if you are diligent about ensuring that your overall methodology is solid, there is another oversight that can potentially cause even more problems in your research: conducting research that isn’t actionable.

During my time with a previous employer, we were once contacted by an international gear and apparel brand (we’ll call them “Brand X” for confidentiality) that had just completed a large-scale segmentation of their customers through another vendor.  While that vendor was qualified to do the work, the segmentation analysis had resulted in 12 market segments.  There’s nothing inherently wrong with that from a methodological perspective, but anyone charged with marketing products will agree that it’s incredibly difficult to split your focus in so many directions.  If Brand X tried to do so, they would likely dilute their overall messaging to the point that it simply wasn’t cohesive.

Brand X asked us to try to salvage the project by taking the initial results and refining them into a more manageable set of segments.  We were able to help, but in the end, the study took roughly 50% more time and resources to complete because Brand X wasn’t specific up front about what they were looking to accomplish with the segmentation or about the constraints that would need to be in place for it to be usable.
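I can’t share the exact method we used, but one common way to consolidate an oversized segmentation is to cluster the original segments’ profile scores into a smaller number of groups. Here is a minimal sketch using scikit-learn; the 12 segment profiles and the four attitudinal dimensions are entirely hypothetical:

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

# Hypothetical profiles for 12 original segments, each scored on four
# attitudinal dimensions (rows = segments, columns = dimensions).
rng = np.random.default_rng(0)
segment_profiles = rng.uniform(0, 1, size=(12, 4))

# Merge the 12 profiles into 4 broader segments based on similarity.
merger = AgglomerativeClustering(n_clusters=4)
labels = merger.fit_predict(segment_profiles)

for original, merged in enumerate(labels, start=1):
    print(f"original segment {original} -> merged segment {merged}")
```

Note that the right target number of segments is a business decision, not a statistical one – which is exactly why it should be pinned down before the research starts.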

At Corona, we are always mindful of this potential blunder, so we encourage our clients to think not only about how to conduct the research, but also why the research is being conducted.  We often set 3-5 major goals for the research up front that can be used to vet any other survey questions or focus group topics in order to ensure the end result will meet the needs for which the research was undertaken in the first place.  By understanding how you will be able to use the results, you can design research in a way that will ensure the results will allow you to make those tough decisions in the end.