Celebrating Beth’s 9 Year Anniversary

A few weeks ago we celebrated Beth’s 9-year anniversary here at Corona Insights, and as she stretches toward her big 10-year milestone, we asked her to name a few things that have changed over the years. This was her reply.

3 things that have changed the most:

  1. My puppies have grown up.  We adopted Maya from the Boulder Valley Humane Society after I defended my dissertation and about two months before I started at Corona.  She was 11 months old when I joined Corona and her 10th birthday is next week.  Dexter met up with us in Albuquerque.  He was not quite two years old when we adopted him and he’ll be 11 in a few months. (I should note that Alabama, my 16-year-old leopard gecko, doesn’t look a day older than he did back in 2006.)
    Beth & Maya 2006
  2. Survey research has been a moving target.  In 2006, it was still reasonable to do a general population survey with nothing but landline RDD.  Cell-only households were still few and far between – primarily highly-mobile 20-somethings like myself (at the time).  Online surveys were widely considered to be junk, and there were huge gaps in which households had internet service.  But all that has changed rapidly: now cell phones are a mandatory component of general population surveys, online methods are used even for political polling, and on the flip side, mail surveys are making a comeback as address-based samples still provide the best coverage.
  3. Telecommuting has improved immensely.  10 years ago VoIP was still a confusing, techie service, but software like Skype and Google Talk -> Chat -> Hangouts (along with bandwidth improvements) has made daily office communications so much more seamless (and so much cheaper than the limited-number-of-long-distance-minutes cell phone plans that used to support the bulk of inter-office communication).  From our Brady Bunch-style staff meetings (picture the grid of heads) to our casual “let’s Skype while we make lunch” hangouts, being at home is a lot more like being in the office than it used to be.

Happy Anniversary, Beth! Cheers to many more!

How to Choose Your Own Adventure When It Comes to Research

One of the things we’ve been doing at Corona this year that I’ve really enjoyed is resurrecting our book club. I enjoy it because it’s one way to think about the things we are doing from a bigger picture point of view, which is a welcome contrast to the project-specific thinking we are normally doing. One topic that’s come up repeatedly during our book club meetings is the pros and cons of different types of research methodology.

Knowing what kind of research you need to answer a question can be difficult if you have little experience with research. Understanding the different strengths and weaknesses of different methodologies can make the process a little easier and help ensure that you’re getting the most out of your research. Below I discuss some of the key differences between qualitative and quantitative research.

Qualitative Research

Qualitative research usually consists of interviews or focus groups, although other methodologies exist. The main benefit of qualitative research is that it is so open. Instead of constraining people in their responses, qualitative research generally allows for free-flowing, more natural responses. Focus group moderators and interviewers can respond in the moment to what participants are saying to draw out even deeper thinking about a topic. Qualitative research is great for brainstorming or finding key themes and language.

Qualitative data tend to be very rich, and you can explore many different themes within the data. One nice feature of qualitative research is that you can ask about topics that you have very little information about. For example, you might have a question in a survey that asks, “Which of the following best describes this organization? X, Y, Z, or none of the above.” This quantitative question assumes that X, Y, and Z are the three ways that people describe this organization, which requires at least some knowledge. A qualitative research question for this topic would instead ask, “How would you describe this organization?” This is one of the reasons why qualitative research is great for exploratory research.

The primary weakness of qualitative research is that you can’t generate a valid population statistic from it. For example, although you could calculate what percent of people in focus groups said that Y was a barrier to working with your organization, you couldn’t generalize that estimate to the larger population. Even if 30% of focus group participants reported this barrier, we wouldn’t know what percent of people overall would report it; we would only be able to say that it is a potential barrier. If you just wanted to identify the main barriers, however, then qualitative research would be enough. It’s important to think carefully about whether or not this would be a weakness for your research project.

Quantitative Research

The main goals of quantitative research are to estimate population quantities (e.g., 61% of your donors are in Colorado) and test for statistical difference between groups (e.g., donors in Colorado gave more money than those in other states). With quantitative research, you’re often sacrificing depth of understanding for precision.

One of the benefits to quantitative research, aside from being able to estimate population values, is that you can do a lot of interesting statistical analyses. Unlike a small sample of 30 people from focus groups, a large sample of 500 survey respondents allows for all sorts of analyses. You can look for statistical differences between groups, identify key clusters of respondents based on their responses, see if you can predict people’s responses from certain variables, etc.
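To make the precision idea concrete, here is a minimal sketch of the 95% margin of error around a survey proportion, assuming a simple random sample (real samples are rarely this tidy, and weighting widens the interval further):

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a proportion
    estimated from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# e.g., if 61% of 500 respondents are Colorado donors:
moe = margin_of_error(0.61, 500)
print(f"61% +/- {moe * 100:.1f} percentage points")
```

With 500 respondents the estimate is good to within about 4 points; the 30 people in a round of focus groups would give an interval so wide (roughly plus or minus 18 points) that reporting it as a population statistic would be meaningless, which is the contrast drawn above.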

There usually is not one single best way to answer a question with data, so thinking through your options and the benefits afforded by those options is important. And as always, we’re here to help you make these decisions if the project is complicated.

Does Prison Make People Find Religion?

We recently pondered prison and religious beliefs here at Corona, so we went poking around for data on the subject.  We found a Pew Forum survey of prison chaplains where they estimated the religious affiliation of prison inmates here:  http://www.pewforum.org/2012/03/22/prison-chaplains-perspectives/.  We then compared those proportions to the proportions of religions in the general population, also estimated by the Pew Forum and found here:  http://www.pewforum.org/2015/05/12/americas-changing-religious-landscape/.

What we found was interesting.  On the surface, a look at religious affiliations shows that some religions are strongly overrepresented in prison while others are at least modestly underrepresented.

Chart: religious affiliation of prison inmates vs. the general population

Some of the figures probably aren’t directly comparable since the two studies used somewhat different classifications, so we recognize, for instance, that “other non-Christian” religions may be overreported in prisons since that study included more subcategories.  However, the results still show some broad patterns of interest.  In particular, we see that Muslims are strongly overrepresented in prison populations compared to their presence in the general population, as are “other non-Christian” populations.  This second disparity is due in large part to significant numbers of pagans and Native American spiritualists in prison.

In contrast, Catholics appear to be pretty adept at staying out of trouble.

Perhaps the most interesting element, though, is the fact that non-religious people (or at least religiously apathetic people) are much more common outside prison than inside prison.  Are non-religious people more likely to stay out of trouble, or do people discover and embrace religion inside prisons?

There are a number of potential explanations.  An obvious theory is that people enduring trials in their lives may embrace religion, particularly Islam or Protestantism or other religions that are overrepresented behind bars.  Or perhaps religion is embraced nominally due to benefits or accommodations that can be extracted from the prison system.  It could also be that the data collection method – surveys of chaplains – is biased because chaplains disproportionately see or remember religious inmates.

Regardless of the reason, it’s an interesting phenomenon to consider.  Why do we see more religion behind bars than outside them?

Visualizing Data: 5 Best Practices

Visualizing data, whether through graphs, infographics, or other means, has grown in importance as both the amount of data and the tools to interpret it have increased. The goal of any graphic should be to tell a story, but it is easy to allow that story to be sidetracked by poor design. So, what are some common principles to keep in mind no matter what graphing software you’re using? Read on.

First, this blog post is meant to be more nuts and bolts than big picture philosophy.  Knowing your audience, their information needs, time constraints, big picture goals, and so on should be your first step. At the same time, this isn’t a step-by-step how-to either. Specifics on how to change colors, fonts, or other options in your software of choice aren’t covered here. So consider this a guide for what lies in between: ideas on how to strengthen your visuals to tell the story you want to tell.

1. Communicate one point per graphic

This rule is often invoked for PowerPoint slides too. Decide what the important piece of information is and then communicate it. Use subsequent graphics to continue the story, if needed, rather than trying to communicate everything at once. In showing too much, your viewer may miss the point you originally had wanted to make.

Exhibit 1

Lots of conclusions could be drawn from the above graph, but which one are you trying to communicate?

Below, it becomes clearer that we want to compare Pepsi’s and Coke’s main brands.

Exhibit 2

And yes, we understand that pie charts are the worst.

2. Go light on text

A strong visual can stand on its own. Don’t overwhelm the viewer with lots of text to read that only repeats what is already clearly shown. If needed, use text to emphasize the main takeaway only.

Exhibit 3a

Essentially, the text above is the graph in text form. Below, while the reader can still see all the differences, you’re drawing their attention to the finding you feel is most relevant.

Exhibit 4a

3. Stick with sans serif fonts

Don’t know the difference between serif and sans serif fonts? Click here. Use the latter in labels as they are easier to read.

Exhibit 5

Exhibit 6

The difference may be subtle, but the sans serif font is a little cleaner and easier on the eyes.

4. Use color wisely

Color can help communicate your point and draw a viewer’s focus.  For example, colors could coordinate with competitor brand colors, or you could only use color on the finding you want to draw attention to, or maybe contrasting colors to highlight opposing views makes the most sense. Whatever you do, don’t use color indiscriminately, and don’t let your software select it for you.

Exhibit 7a

Above, color is randomly assigned. It can make a graph look busy and it may even suggest the color is trying to say something. Below, the graph is simpler and not distracting.

Exhibit 8a

In the next example, we use shades of green to show agreement and shades of orange to show disagreement.  While there are four categories of responses, you can quickly see where there is more agreement vs. disagreement just by glancing at the graph.

Using cool vs. warmer colors also aids this contrast.

Exhibit 9

If, for example, we only wanted to talk about a specific segment, we could use spot color to highlight only those results.

Exhibit 10

Exhibit 11

A few other notes on color: Be mindful of how it will look in various media. Some printers, screens, and projectors may not produce color the same way as you see it on your screen. Orange may look brown, or the difference between blue and green will be washed out. Also, remember that some people may be color blind or that your charts may be reproduced in black and white. Take additional steps to make interpretation easy, such as making sure your legend is ordered in the same way as graph bars.
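The spot-color idea can be sketched independently of any charting tool. The function name and hex values below are arbitrary picks for illustration; the point is that every bar gets a neutral color except the one you want the viewer’s eye to land on:

```python
def spot_colors(categories, highlight, accent="#E87722", base="#C8C8C8"):
    """Return one color per category: a neutral grey everywhere,
    with an accent color on the single category to highlight."""
    return [accent if c == highlight else base for c in categories]

brands = ["Coke", "Pepsi", "Sprite", "Fanta"]
print(spot_colors(brands, "Pepsi"))
```

The resulting list can be handed to most charting libraries as a per-bar color argument.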

5. Keep it clean

A lot of graph programs add everything including the kitchen sink to your graph: legends, axis labels, data point labels, and so on. Furthermore, they give you a lot of options to spice it up even more. But you probably shouldn’t.

The kitchen sink…

Exhibit 12

Less is more…

Exhibit 13a


We often write about making graphs. Check out some of our other posts on the topic.

Weight on What Matters

In May, Kate and I went to AAPOR’s 70th Annual Conference in Hollywood, FL.  Kate did a more timely job of summarizing our learnings, but now that things have had some time to settle, I thought I’d discuss an issue that came up in several presentations, most memorably in Andy Peytchev’s presentation on Weighting Adjustments Using Substantive Survey Variables.  The issue is deciding which variables to use for weighting.  (And if I butcher the argument here, the errors are my own.)

Let’s take it from the top.  If your survey sample looks exactly like the population from which it was drawn, everything is peachy and there is no need for weighting.

Most of the time, however, survey samples don’t look exactly like the populations from which they were drawn.  A major reason for this is non-response bias – which just means that some types of people are less likely to take the survey than other types of people.  To correct for this, and make sure that the survey results reflect the attitudes and beliefs of the actual population and not just the responding sample, we weight the survey responses up or down according to whether they are from a group that is over- or under-represented among the respondents.

So, it seems like the way to choose weighting variables would be to look for variables where the survey sample differs from the population, right?  Not so fast.  First we have to think about what weighting “costs” the margin of error for your survey.  Weights, in this situation, are measuring the extent of bias in the sample, and the more variable the weights, the more the margin of error expands.  Meaning the precision of your estimates declines as your weighting effect increases.
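One common way to quantify that cost is Kish’s approximate design effect (my gloss here, not something from the conference presentation itself): the effective sample size shrinks by a factor of one plus the squared coefficient of variation of the weights. A quick sketch:

```python
def kish_design_effect(weights):
    """Kish's approximation: deff = 1 + CV^2 of the weights.
    The margin of error expands by roughly sqrt(deff)."""
    n = len(weights)
    mean_w = sum(weights) / n
    var_w = sum((w - mean_w) ** 2 for w in weights) / n
    return 1 + var_w / mean_w ** 2

# Equal weights cost nothing...
print(kish_design_effect([1.0] * 400))  # 1.0

# ...but weighting half the sample down and half up has a price
deff = kish_design_effect([0.5] * 200 + [1.5] * 200)
print(f"deff = {deff:.2f}, effective n = {400 / deff:.0f}")
```

In the second case, 400 interviews buy you the precision of only 320 unweighted ones, which is exactly the expansion of the margin of error described above.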

What does that mean for selecting weighting variables?  It means you don’t want to do any unnecessary weighting.  Recall, the purpose of weighting is to ensure that survey results reflect the views of the population.  Let’s say the purpose of your survey is to measure preferences for dogs vs. cats in your population.  Before doing any weighting you look to see whether the proportion of dog lovers varies by age or gender or marital status or educational attainment (to keep it simple, let’s pretend you don’t have any complicated response biases, like all of the men in your survey are under 45).  If you find that marital status is correlated with preferences for dogs vs. cats, but age and gender and educational attainment aren’t, then you may want to weight your data by marital status, but not the other variables.

This makes sense, right?  If men and women don’t differ in their opinions on this topic, then it doesn’t matter whether you have a disproportionate number of women in your sample.  If you weight on gender when you don’t need to, you unnecessarily expand your margin of error for the survey without improving the accuracy of your results.  On the other hand, if married people have different preferences than single people, and your sample is skewed toward married people, by weighting on marital status you increase your margin of error, but compensate by improving the accuracy of your results.

The bottom line:  choose weighting variables that are correlated with your variables of interest as well as your non-response bias.

And that’s one to grow on!  (This blog felt reminiscent of an ’80s PSA, right?)

Question What You Know

What I’m about to disclose may seem weird, or familiar, depending on the kind of person you are.  Last week our Corona Book Club met to discuss our recent pick, and as I sat down to put together my thoughts on it, I was reminded of an ad campaign from my youth.  I googled the slogan (this is not so weird) and couldn’t find the ad online, so instead I pulled out my huge 3-ring binder where I save things like interesting ads and magazine articles and dug out my own personal copy from circa 1995 (that may be weird).

(Even more strange is that I only have a photocopy of this particular ad, so I have no idea where it came from, and it appears to be half of a two-page ad, so I’m not even sure what the ad is for. My guess is it was for Carnival Cruise Lines or Cirque du Soleil, but it could just as easily have been for perfume or Nike or Waterford Crystal – 90’s ads were full of angsty inspirational prose.  In fact, my google search turned up another blogger writing about one of my favorites – yes, also part of my hard copy collection.)

I digress.  As I was saying, the ad text reads: “What appears to be new may in fact be familiar. What appears to be familiar may in fact be new.  So question what you know … because you may not really know it at all.”  It resonated with a point made in the book: “Don’t treat everyday life as boring or obvious; do treat ‘obvious’ actions, settings and events as potentially remarkable.”

The book is David Silverman’s A Very Short, Fairly Interesting and Reasonably Cheap Book About Qualitative Research.  In addition to his exhortation to pay attention to things that may seem unremarkable, he also encourages researchers to explore other research methods and other sources of data.  He points out, accurately, that most commercial qualitative research is limited to interviews and focus groups.  He suggests we broaden our horizons to consider observational research methods, analysis of natural language, written documents, and so on.  He provides examples to show the value and possibilities in these alternatives.

Having read this book, it seems to me that my binder of nearly-vintage ads could be a useful data source for studying the Gen X persona.  Intrepid grad students can request a copy.

And if you recognize this ad, please tell us about it!

What your response rate says about engagement

When we think about tracking customer satisfaction via surveys, the analysis is almost always on the survey responses themselves: how many said they were satisfied, what is driving satisfaction, and so on. (See a related post on 4 ways to report customer satisfaction.)

Not shocking (and of course we should look at the results to the questions we ask), but there is another layer of data that can be analyzed: the data about the survey itself.

First and foremost is response rate.  (Quick review: response rate is the proportion of people who respond to an invitation to take a survey; read more here.) Response rate itself is important to reduce non-response bias (i.e., to reduce our concern that the people who do not respond are potentially very different from those who do respond), but it’s also a proxy for engagement. The more engaged your customers are with your organization, the more likely they will be to participate in your research. Therefore, tracking response rate as a separate metric as part of your overall customer dashboard can provide more depth in understanding your current relationship with customers (or citizens or members or…).

So, you’re probably now asking, “What response rate correlates to high engagement?” Short answer – it depends. Industry, topic matter, type of respondent, sampling, etc. can all make an impact on response rates. So while I’ll offer some general rules of thumb, take them with a grain of salt:

  • Less than 10%: Low engagement
  • 10-20%: Average engagement
  • 20-50%: High engagement
  • 50%+: Rock-star territory

Yes, we’ve had over 50% response to our clients’ surveys.
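Encoded as code, those rules of thumb might look like the sketch below. The function name and thresholds are just the rough guides from the list above, not hard cutoffs:

```python
def engagement_level(completes, invited):
    """Bucket a survey response rate into the rough engagement
    categories above; thresholds are rules of thumb only."""
    rate = completes / invited
    if rate >= 0.50:
        return "Rock-star territory"
    if rate >= 0.20:
        return "High engagement"
    if rate >= 0.10:
        return "Average engagement"
    return "Low engagement"

print(engagement_level(140, 1000))  # 14% of invitees responded
```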

The important caveat here is to be wary of survey fatigue. If you are over-surveying your customers, then response rates will decrease over time as people tire of taking surveys (shocking, right?). What counts as surveying too much will vary depending on the length of the survey and subject matter, but surveying monthly (or more frequently) will almost certainly cause fatigue, while surveying yearly (or less frequently) will probably not. One to 12 months? It’s a gray area. (Feel free to contact us for an opinion on your specific case.)

Another potential source of survey metadata that you could use to assess engagement is depth of response to open-ended questions. The easiest way to measure this is to use word count as a proxy – the more they write, presumably the more they care about what they’re telling you.
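A minimal version of that proxy, assuming responses arrive as plain strings and skipping blanks:

```python
def mean_word_count(responses):
    """Average word count across answered open-ended responses,
    a crude proxy for respondent engagement with the topic."""
    answered = [r for r in responses if r and r.strip()]
    if not answered:
        return 0.0
    return sum(len(r.split()) for r in answered) / len(answered)

responses = [
    "Fine.",
    "",  # non-answers are skipped
    "Please keep funding the trail system; my family uses it weekly.",
]
print(f"{mean_word_count(responses):.1f} words per answered question")
```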

For example, we did a large voter study for a state agency, and when asked their priorities on the given service topic, we received paragraph responses back. This, combined with other results, showed just how engaged they were with the topic (though not necessarily the agency in this case) and how much thought they had given it. Useful, as well as somewhat surprising, information for our client.

The next time you’re looking at survey data, be sure to look at more than just the responses themselves.


The Challenging In-Between: Bridging the gap between visionaries and operational experts

Over the years I’ve discovered that nonprofit executives and board members typically fall into two main categories: those who are boldly aspirational and those who are decidedly tactical. The first group focuses on the big idea and its power to move people. They are lofty, passionate and effervescent. To the operationally-focused person they appear to dodge the important nuts and bolts. On the other hand, executives whose natural talents lie in operations and the ability to get things done (often in spite of their visionary peers) are naturally challenged to let go and dream big. They seek the known. Their tendency to quickly dive into the weedy details is off-putting to the visionary.

Unfortunately there can be little in common between these divergent thinking styles and they often frustrate the heck out of each other. They use different language, or interpret a common term in opposite ways, and don’t know how to create the connective tissue that binds the two important orientations together.

What is the bridge? Strategy. The essence of strategy lies in charting the unknowns – where the industry is going to be, what customers will need (and demand) in the future, what donors will expect, and how the community will be doing. It also rests in a clear articulation of how the organization uniquely meets its customers’ needs in ways that rivals can’t or don’t. The strategist navigates unknowns and uncertainties. S/he keeps her eye on the 3-5 year horizon as she leads the development of a clear strategy – an articulation of what the organization will focus on over the next 3-5 years based upon well-founded decisions about the objective to be achieved, the scope within which it will work, and the competitive advantage to be leveraged.

I’m a big believer in the power of a strategy statement as described in the classic Harvard Business Review article from 2008 – Can you say what your strategy is? – by David J. Collis and Michael G. Rukstad. It is my go-to resource in this work, and I cannot recommend it strongly enough.

As a consultant I’m often the bridge builder – the strategy seeker – bringing together the operationally- and aspirationally-oriented executives. This bridge building is iterative. It takes time to spark a-ha moments, establish common language, and build a team of executives focused on the same thing – future strategy.

Navigating Time and Space: Why We Include Geography

When I was a kid living in Colorado Springs, my family frequently drove to Denver.  We would go to watch baseball games or visit museums.  It is about 60 miles between Colorado Springs and Denver.  Back then, the speed limit was 60 to 65 miles per hour, and I had fun mentally calculating that since our Jeep was traveling about one mile per minute, then we would reach Denver in about one hour. Of course, this was all before the electronic geographic information revolution.

These days, we can quickly discover the distance from one place to another with just a few clicks on Google Maps or onboard GPS devices, and there are numerous software programs that calculate typical drive, bike or walk times.

Here at Corona, we leverage these geographic information technologies for many of our research projects when we suspect that there is a spatial (i.e., geographic) relationship with our key variables.  In other words, knowing the distance or travel time between two locations can reveal key insights.  For example, let’s say the Colorado Rockies baseball team wanted us to survey fans living in the Metro Area to understand what barriers prevent them from traveling to Coors Field to watch a game.  Our analysis would likely explore their opinions about going to a game (e.g., are tickets well priced, are games played at convenient times, etc.), but we might expect that these opinions are influenced, in part, by the distance or travel time between their home and the ballpark. Fans living far away from the ballpark might have stronger convenience barriers than fans living closer to the ballpark.

To explore this hypothesis, we can use GIS software to plot survey respondents’ homes. Then we decide whether to analyze by distance or by travel time. This choice depends on the research question we are trying to answer, as well as the context of the research. In a study of opinions about sound or light pollution, analyzing distance clearly makes more sense. If a behavior of interest involves walking or biking, then distance might be more important than travel time, considering walking 100 miles is a significant feat, but people frequently travel that distance by car. Alternatively, in a city where all the streets are linear and the speed limits are the same, drive time would be directly related to distance, so the unit of measurement wouldn’t matter.  However, in many of our projects, such as the Coors Field example, we are most interested in drive time.  Using drive times has a big advantage when the transportation system is not linear, which is often the case due to interstate highways, bridges, mountains, canyons, no-travel zones, construction, and a host of other reasons.  Considering drive time during rush hour is likely longer than on an early weekend morning, our software allows us to specify a drive time to the day and hour.

So how does including geographic data benefit analysis and improve insights?  Most simply, we create custom segments based on distance, and we create easy-to-understand graphs that cross results to other questions by this variable. Segmentation is a good start, but we rarely stop there.  On many projects, the research demands more rigorous results, in which case we will convert the data so that we can apply more advanced analyses that tell us the strength of relationships to other key variables. For example, we can find out the extent to which drive time to Coors Field predicts fans’ perceived barriers to going there for a game.  In fact, we can explore multiple variables (e.g., ticket prices, fan devotion, and drive time) simultaneously to reveal patterns that would otherwise be difficult to tease apart. In some cases, we calculate drive times to other sites, such as other leisure attractions.  We then incorporate that data into the analyses so that the results more closely reflect the real world, where decisions on how to spend leisure time are more complex.
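As a simplified sketch of the segmentation step, here is straight-line (haversine) distance standing in for drive time, since real drive-time calculations require GIS software and a road network. The coordinates and distance cutoffs below are illustrative only:

```python
import math

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance in miles between two lat/lon points."""
    r = 3959.0  # Earth's mean radius in miles
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

COORS_FIELD = (39.756, -104.994)  # approximate coordinates

def distance_segment(lat, lon):
    """Bucket a respondent by distance to the ballpark
    (cutoffs chosen for illustration, not a standard)."""
    d = haversine_miles(lat, lon, *COORS_FIELD)
    if d < 5:
        return "close"
    if d < 15:
        return "mid"
    return "far"

# A respondent near Boulder (roughly 20+ miles out) lands in "far"
print(distance_segment(40.015, -105.271))
```

Once each respondent carries a segment label, crossing survey questions by that label, or feeding the raw distance into a regression, follows the same pattern as any other variable.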

While geography won’t provide all of the answers to our research questions, distance and drive time can be key variables that help explain what’s going on.  By using geographic technologies, we can efficiently explore these variables and sharpen our findings and recommendations.  In other words, it helps us paint a more complete picture.

Send us an email if you would like to discuss how analyzing spatial patterns could help answer your most important questions.