Sign up for the Corona Observer quarterly e-newsletter

Stay current with Corona Insights by signing up for our quarterly e-newsletter, The Corona Observer.  In it, we share with our readers insights into current topics in market research, evaluation, and strategy that are relevant regardless of your position or field.

We hope you find it to be a valuable resource. Of course, if you wish to unsubscribe, you may do so at any time via the link at the bottom of the newsletter.

 


There is more to a quality survey than margin of error

One of the areas in which Corona excels is helping our clients who aren’t “research experts” to understand how to do research in a way that will yield high-quality, reliable results.  One of the questions we are frequently asked is how many completed surveys are necessary to ensure a “good survey.”  While the number of surveys definitely has an impact on data quality, the real answer is that there are many things beyond sample size that you have to keep in mind in order to ensure your results are reliable.  Here is an overview of four common types of errors you can make in survey research.

Sampling Error

Sampling error is the one type of error that can be easily summarized with a number.  Because of this, many tend to think of it as the main way of reporting a survey’s quality.  Sampling error refers to the “margin of error” in a survey’s results, which arises because you didn’t survey everyone in the population of interest – only a sample.  The “error” occurs when you draw a conclusion based on a survey result that might have been different if you’d surveyed more people and gathered a wider variety of opinions.  As an example, imagine that you wanted to conduct a survey of people at a concert about their reasons for attending.  In an extreme case, you could collect 10 random responses to your question and draw conclusions about the population from that, but chances are the next 10 people you hear from might have very different opinions.  As you collect more and more surveys, this becomes less of an issue.

It’s important to keep in mind, however, that the calculations for sampling error assume that 1) your sample was random (see coverage error below) and 2) everyone you chose for the survey ended up responding (see non-response error below) – neither of which is true much of the time.  Because of that, any margin of error calculation is likely only telling you part of the story about a survey’s quality.

Still, it is certainly true that, when obtained properly, larger sample sizes tend to produce more reliable results.  Generally speaking, a sample of 100 responses will give you a general feel for your population’s opinions (with margins of error around ±10 percent), while 400 responses will give you reasonably reliable results (with margins of error around ±5 percent).  Larger samples also allow you to examine the opinions of smaller segments of your audience, so obtaining an adequate sample size will always be a key consideration in survey research.
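For the curious, here is roughly where those figures come from.  Below is a minimal sketch of the standard margin-of-error calculation at the 95 percent confidence level, assuming a simple random sample and the most conservative case of a 50/50 split of opinion:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a proportion estimated
    from a simple random sample of size n (p=0.5 is the worst case)."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (100, 400, 1000):
    print(f"n = {n:>4}: +/- {margin_of_error(n):.1%}")
# n =  100: +/- 9.8%   (the "roughly +/-10 percent" noted above)
# n =  400: +/- 4.9%   (roughly +/-5 percent)
# n = 1000: +/- 3.1%
```

As noted above, though, this number only reflects sampling error – it says nothing about coverage, measurement, or non-response problems.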

Measurement Error

Measurement error is one of the most difficult types of errors to identify and is probably the most common mistake made by amateur survey researchers.  Measurement error occurs when respondents don’t understand how to properly respond to a question due to the way it’s worded or the answer options you’ve provided.  For example, if you were to ask concert goers how long it took them to get to the venue that night, you might get a set of answers that look reasonable, but what if some picked up their friends on the way?  What if some went to dinner before the concert?  Similarly, if you asked whether they came to the concert because of the band or because they like the venue, you might conclude that a majority came for the band when the real reason was that many just wanted to hang out with their friends.

So how do you protect against this?  Whenever possible, it’s important to test your survey – even if it’s just with a friend or coworker who is not involved in the research.  Have them take the survey, then have them talk you through how they interpreted each question and how they decided which answer fit best.  Then, if necessary, make changes to your survey in any areas that were unclear.

Coverage Error

Once you have a well-designed set of survey questions, the next step is to determine how you are going to get people to take your survey.  It might be tempting to just put a link on the concert venue’s Facebook and Twitter pages to ask people their opinions, but the results of such a survey likely wouldn’t reflect the opinions of all concert goers because it’s unlikely that all of them use social media.  If you were to do a survey in this fashion, you might find that attendees appear to be very tech savvy and open to new ideas and approaches, when in reality those just happen to be characteristics of people who use social media.  The results of your survey might be skewed because you didn’t “cover” everyone in the audience (not to mention the possible issue of some people taking the survey who didn’t actually attend the concert).

In order to ensure your survey is as high quality as possible, look for ways to ensure that everyone you are trying to represent is included in your sampling frame (even if you randomly choose a subset to actually receive an invitation).  If it’s truly not possible to do so, be sure to at least keep the potential bias of your respondent pool in mind as you interpret the results.

Non-response Error

As the final type of common error, non-response error is caused by the fact that, no matter how well you have designed and implemented your survey, there are a lot of people out there who simply aren’t going to respond.  Similar to coverage error discussed previously, non-response can cause you to draw conclusions from your survey’s results that may not be reflective of the entire population you are studying.  For example, many concert goers wouldn’t want to be bothered to take a survey, so the results you get would likely only be representative of a type of person who either 1) didn’t value their time as highly or 2) was particularly interested in the survey topic.

Unfortunately, non-response error is extremely difficult to eliminate entirely and will be a concern with just about any survey.  The most common approach is to try to boost your response rate as much as possible through a combination of frequent reminders, incentives, and messaging that appeals to respondents’ desire to be helpful, but even the best surveys typically only achieve response rates of 30-40 percent.  If budget is no issue, perhaps the best solution is to conduct follow-up research with those who didn’t originally respond, but even then, there will always be some who simply refuse to participate.

~

When it comes down to it, there is no such thing as a perfect survey, so any study will necessarily need to balance data quality with your timeline and budget.  Many of Corona’s internal debates involve discussing ways to reduce these errors as effectively as possible for our clients, and we are always happy to discuss various tradeoffs in approaches and how they will impact data quality.  Regardless, we hope that the next time you see a headline about a survey with a margin of error used to represent its quality, you will keep in mind that there is a lot more to determining the quality of a survey than that one number.



Nonprofit Data Heads

Here at Corona, we gather, analyze, and interpret data for all types of nonprofits.  While some of our nonprofit clients are a little data shy, many are data-heads like us!  Indeed, several nonprofits (many of which we have worked for or partnered with) have developed amazing websites full of easy-to-access datasets.

Here are 4 of my favorite nonprofit data sources…check them out!!

The Data Initiative at the Piton Foundation

Not only do they sponsor Mile High Data Day, but the Piton Foundation produces a variety of user-friendly data interfaces.  I really like the creative ways they allow website visitors to explore data – not just static pie and bar charts. Instead, their interface is dynamic and extremely customizable. While their community facts tool pulls most (but not all) of its data from the US Census, this tool is very easy and fun to use.  Further, they have already defined and labeled neighborhoods across the Denver Metro area, making it easy for users to compare geographies without trying to aggregate census tract or block group numbers. This is an invaluable feature for data users who don’t have access to GIS. I also appreciate the option to display margin of error on bar charts when it’s available.

Highlights:

  • Easy to use for novice through expert data users
  • Data available by labeled neighborhood
  • 7-County Denver Metro focus


OpenColorado

With over 1,500 datasets, OpenColorado is a treasure trove of raw data.  While this site doesn’t have a fancy user interface, it does provide access to data in many different file types, making it a great website for the intermediate to advanced data user with access to software such as GIS, AutoCAD, or Google Earth.  Most data on OpenColorado is from Front Range cities (e.g., Arvada, Boulder, Denver, Westminster) and counties (e.g., Boulder, Denver, Clear Creek), but unfortunately the coverage is far from comprehensive, so you’d need to look elsewhere if you’re searching for information from Arapahoe County, for example.

There are over 200 datasets specific to the City and County of Denver.  I opened a few that caught my eye, including the City’s “Checkbook” dataset that shows every payment made from the City (by City department) to payees by year.  I give kudos to Denver and OpenColorado for facilitating this type of fiscal transparency.  I also downloaded a dataset (CSV) of all Denver Police pedestrian and vehicle stops for the past four years, which included the outcome of each stop along with the address, latitude and longitude.  For a GIS user, this is especially helpful if you want to search for patterns of police activity compared to other social and geographic factors.  Even without access to spatial software, this dataset is useful because it includes neighborhood labels.  I created a quick pivot table in Excel to see the top ten neighborhoods for cars being towed (so don’t park your car illegally in these neighborhoods).
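If you’d rather skip the Excel pivot table, the same kind of quick summary takes only a few lines of Python with pandas.  This is just a sketch – the file name and column names below are placeholders for whatever the downloaded CSV actually uses:

```python
import pandas as pd

# Load the pedestrian/vehicle stops CSV downloaded from OpenColorado.
# The file name and column names here are illustrative placeholders.
stops = pd.read_csv("denver_police_stops.csv")

# Keep only the stops whose outcome involved a tow, then count
# stops by neighborhood label and show the ten busiest neighborhoods.
towed = stops[stops["outcome"].str.contains("tow", case=False, na=False)]
top_ten = (
    towed.groupby("neighborhood")
         .size()
         .sort_values(ascending=False)
         .head(10)
)
print(top_ten)
```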

Highlights:

  • Tons of raw data
  • Various file types, including shapefiles and geodatabases that are compatible with GIS, and KML files that are compatible with Google Earth
  • Search for data by geography, tags, or custom search words

Kids Count from the Colorado Children’s Campaign

Kids Count is a well-respected data resource for all things kids.  Each year, the Colorado Children’s Campaign (disclaimer: they are also our neighbor, working just two floors below us) produces the Kids Count in Colorado report, which communicates important child well-being indicators and indices statewide and by county when available.  The neat thing about Kids Count is that it’s also a national program, so you can see how indicators in a specific county compare to the state and nation. In addition to the full report available as a PDF, you can also interact with a state map and point and click to access a summary of indicators by county.  Most of their data is not available in raw form, but their report does explain how they calculated their estimates and provides tons of contextual information that makes their key findings much more insightful.

Highlights:

  • Compare county data to state and national trends
  • Reports include easy-to-understand analysis and interpretation of data
  • Learn about trends over time and across demographic groups

Outdoor Foundation

If you’re looking for information about outdoor recreation of any type in any state, there is probably an Outdoor Foundation report that has the data you’re seeking.  Based in Boulder, Colorado, the Outdoor Foundation most commonly reports on participation rates in outdoor activities, both at a top level and for selected activity types such as camping, fishing, and paddle sports (haven’t yet heard of stand-up paddle boarding?  It’s one of the fastest-growing activities in terms of participation).  The top-line reports show trends over the past ten years, while the more detailed Participation Reports break out participation, and other factors such as barriers to participation, by various demographics.  Multiple other special reports, focusing on topics such as youth and technology, round out what’s available from this site.

The participation and special reports are helpful, but I’m most impressed with the Recreation Economy reports, which are available nationwide and within each state.  These reports estimate the economic contribution of outdoor recreation, including jobs supported, tax revenue, and retail sales.  For example, the outdoor recreation economy supported about 107,000 jobs in Colorado in 2013.  Unfortunately, the raw data is not available for further analysis, but the summary results are still interesting and helpful.



Art meets architecture in Denver this weekend

Looking for something fun to do this weekend in-between rides on the new A Line to DIA? Check out the arts and cultural activities during Doors Open Denver. Art meets architecture through pop-ups ranging from a nomadic art gallery to poetry, drama, and music performances among the 11 offerings. My favorite? Graffiti art. If you’ve been secretly wanting to learn the art of graffiti painting – and you’re 55 or older – then we’ve got the creative outlet for you. Bust through stereotypes as you create graffiti art inspired by two of Denver’s architectural gems.

  • April 23rd, 1-3 pm – Saturday’s pop-up will be hosted by the Clyfford Still Museum on their front lawn. The Clyfford Still Museum will also give four 20-minute architectural tours each day, at 11:00, 11:30, 2:00, and 2:30.
  • April 24th, 1-3 pm – Sunday’s pop-up will be hosted by the new Rodolfo “Corky” Gonzales Library and include 3 tours led by architect Joseph Montalbano of Studiotrope, a Denver-based architecture and design agency. DPL staff will share how the library’s design informs their work. Since Sunday is Día del Niño, the artist will be prepared to host a multi-generational event at the library.

Thanks to our collaborative partners: VSA Colorado/Access Gallery, Studiotrope Design, Denver Public Library, and the Clyfford Still Museum. I’d like to give a special shout out to Damon McLeese of Access Gallery; Joseph Montalbano of Studiotrope; Ed Kiang, Viviana Casillas, and Diane Lapierre of DPL; and Sonia Rae of the Clyfford Still Museum.

Please join me in thanking the Bonfils-Stanton Foundation for funding this engaging spotlight on art and architecture.

For more information visit this Doors Open Denver link. 


Happy or not

A key challenge for the research industry – and any company seeking feedback from its customers – is gaining participation.

There have been books written on how to reduce non-response (i.e. increase participation), and tactics often include providing incentives, additional touch points, and finely crafted messaging. All good and necessary.

But one trend we’re seeing more of is the minimalist “survey.” One question, maybe two, to gather point-in-time feedback. You see this on rating services (e.g., Square receipts, your Uber driver, at the checkout, etc.), simple email surveys where you respond by clicking a link in the email, and texting feedback, to name a few.

Recently, travelling through Iceland and Norway, I came across this stand asking “happy or not” at various points in the airport (e.g., check-in, bathrooms, and luggage claim). Incredibly simple and easy to do. You don’t even have to stop – just hit the button as you walk by.

A great idea, sure, but did people use it? In short, yes. While I don’t know the actual numbers, based on observation alone (what else does a market researcher do while hanging out in airports?), I did see several people interact with it as they went by. Maybe it’s the novelty of it and in time people will come to ignore it, but it’s a great example of collecting in-the-moment feedback, quickly and effortlessly.

Now, asking one question and getting a checkbox response will not tell you as much as a 10-minute survey will, but if it gets people to actually participate, it is a solid step in the right direction. Nothing says this has to be your only data either. I would assume that, in addition to the score received, there is also a time and date stamp associated with responses (and excessive button pushes at one time should probably be “cleaned” from the data). Taken together, investigating problems becomes easier (maybe people are only unsatisfied in the morning rush at the airport?), and if necessary, additional research can always be conducted to further investigate issues.

What other examples, successful or not, are you seeing organizations use to collect feedback?


Your Baby Is Increasingly Special and Unique, Apparently

It seems like when I’m in the mall and hear parents talking to their kids, I hear unusual names more and more often.  I’ve been developing a theory that parents are enjoying creativity more and valuing tradition less when that birth certificate rolls around, so in keeping with Corona Insights tradition, I thought I’d explore it a little more with some data analysis.  Off I went to the Social Security Administration website to put together a database of names.

I took a look at the most popular baby names in 2014, and compared them with those of 2004, 1994, 1984, and so on, all the way back to 1884.  Are unusual names more common in 2014?  It was straightforward to analyze, even if it meant sifting through a lot of data.

First, I looked at the 30 most popular names in each decade, and compared them to the total number of babies born.  If there’s a trend toward giving babies more unusual names, then we would expect a smaller concentration of babies with the most common names.
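If you want to reproduce this kind of calculation yourself, the Social Security Administration publishes one file of name counts per birth year.  Here’s a rough sketch, assuming the files follow the name,sex,count layout of the SSA’s national data downloads (e.g., yob2014.txt):

```python
import pandas as pd

def top_n_share(year, sex="F", n=30):
    """Share of babies of a given sex whose name was among the n most
    popular names for that birth year, using the SSA yobYYYY.txt files."""
    df = pd.read_csv(f"yob{year}.txt", names=["name", "sex", "count"])
    df = df[df["sex"] == sex]
    return df.nlargest(n, "count")["count"].sum() / df["count"].sum()

# Concentration of the 30 most popular girls' names, one decade at a time.
for year in range(1884, 2015, 10):
    print(year, f"{top_n_share(year, sex='F', n=30):.0%}")
```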

And wow, is that true, particularly for girls.  Let’s examine female names first.

If we look at the 30 most common female baby names, they constituted 41 percent of baby girl names in 1884.  There was some variation over the next 70 years but not much, ranging from 36 to 43 percent.  In 1954, the figure still stood at 40 percent for the girls destined to duck under their desks in the Cold War.  (As an important methodological note, recognize that these aren’t the same 30 names that were most common in 1884 – I adjusted the top 30 in each decade to reflect the most popular names of each particular decade.  This holds true throughout the analysis – I’m not tracking the popularity of a specific set of names, but rather I’m examining the likelihood of parents following popular trends in naming.)

But then something happened.  By 1964, the figure had declined to 32 percent.  It stayed roughly at that level until 1994, when it dropped further to 24 percent.  And since then, it has declined dramatically – to 18 percent in 2004 and 16 percent in 2014.  The most common female names in 2014 are not very widespread.

If the most common names are less widely used, the next question is what other names are being used?  Are parents merely spreading their wings a little to other relatively well-recognized names, or are they pushing the boundaries of names?  To test this, I broadened my analysis and looked at the 100 most common female names.  In 1884, the most common 100 names covered 70 percent of girls born that year.  Moving forward in time, we see a pattern very similar to the one we saw for the top 30.  The figure declined slightly through 1954 (65 percent), and then those hippies from the 60s started becoming parents.  The figure dropped to 58 percent by 1964 and 51 percent by 1974, and it continues to decline.  In 2014, the top 100 female names covered only 31 percent of births.

So how much dispersion do we actually have here?  Let’s look at the top 500 female names in each decade.  Most of us probably couldn’t even come up with 500 different names, so surely they’re covering almost the entire female population, right?

Well, that certainly used to be the case.  In 1884, the top 500 names covered 90 percent of the female baby population, and sure enough, it follows the same pattern as my earlier analyses.  The figure floated between 87 and 89 percent up until 1954, with remarkable consistency.  After all, who can’t find a favorite name among the top 500?

A lot of modern people, apparently.  The figure dropped to 85 percent in 1964, 75 percent in 1974, and currently stands at 58 percent.  Think about that for a moment: 42 percent of girls today have a name that does not fall among the 500 most common names of their decade.

How does such a phenomenon happen?  One might speculate that this is due to a trend for adopting spelling variants.  Evelyn, for example, has branched into both Evelyn and Evelynn.  While I suspect that this is a significant factor, it appears not to be the main one.  Instead, many of the top 500 names for 2014 appear to be newly created, or at least exceedingly rare in past decades, because they’ve never appeared on a top-500 list until now.  Names like Brynlee and Cataleya and Myla and Phoenix have replaced more standard names.

Another theory that I can’t confirm at this point is that perhaps the United States has more diverse immigration these days, which could be producing a greater diversity of baby names.

Now let’s take a look at male names.

The first thing we see is that male names have historically been compressed relative to female names.  Looking across all of the decades since 1884, there are 1,286 male names that have placed in the top 500 in popularity, while there are 1,601 female names.  So are male names still more concentrated among fewer options?  We’ll repeat the analysis we just did for female names.

If we look at the 30 most common male baby names, they constituted 56 percent of baby boy names in 1884.  Per our earlier observation, this is much more concentrated than the 41 percent that we saw for females.  Similar to female trends, though, the proportion was relatively stable for decades afterwards, still standing at 54 percent in 1954.

The proportion began dropping in the 1960s, but was more stable than female names.  By 1964, the figure had declined gracefully to 51 percent, then 46 and 45 percent in the 1970s and 1980s.  The major decentralization for boys began in earnest in 1994, when the figure dropped to 35 percent, then 25 percent in 2004 and 20 percent in 2014, which isn’t notably higher than the female figure at this point.

An interesting difference by gender occurs when we examine the top 100 male names.  Whereas the dispersal of female names was only minor through the 1950s, male names actually became more concentrated during that era.  In other words, the 100 most common names covered slightly more boys in 1954 than in 1884.  Names became more dispersed through the 1920s, but the trend then reversed: the proportion of boys with top-100 names dropped from 74 percent to 69 percent between 1884 and 1924, then rose back to 76 percent by 1954.  Perhaps during hard times of depression and war, parents get more conservative when naming boys.  Or maybe mothers working on World War II assembly lines became enamored with mass production.

However, from 1954 on, male names paralleled the diffusion of female names, dropping steadily to only 42 percent today.  This is still more concentrated than the 31 percent figure for females, but is far lower today than at any time in the past 130 years.

Finally, we look at the top 500 male names.  Have males had the same dispersal as females?

Contrary to other findings, male names were actually slightly more dispersed among the top 500 than female names in 1884.  The 500 most popular male baby names constituted 89 percent of births, compared to 90 percent for females.  But this discrepancy didn’t last long.  While the top 500 female names dispersed slightly from 1884 through 1954, male names actually converged, reaching a high point of 94 percent in 1954.  So while parents were practicing more creativity in female names over this period, they were becoming less daring with male names, choosing more often to follow popular trends.

However, creativity took hold soon thereafter.  Male convergence dropped slightly to 93 percent by 1964, then fell steadily to 71 percent in 2014.  So again, parents are increasingly choosing uncommon names for their babies in modern times, though to a greater extent with girls than with boys.  As with the girls, these boys’ names appear to be a combination of new spellings and new names that have never before shown up in the top 500, names such as Daxton and Finnegan and Kasen.

This is all well and interesting, but what does it mean?

I’m first interested in the differences between girls and boys.  Why do parents feel greater freedom to give a female child an uncommon name?  Do they feel a greater need to make a female child stand out from the crowd, and if so, why?  Are males better situated to succeed with a more traditional name, or do more men simply get named after their fathers or other family members?  Is the difference sexism in a very indirect form, or is there some logical reason?  I’m at a loss to come up with an explanation that doesn’t reflect different attitudes toward girl babies than boy babies, but I’d love to hear your theories.

While the level of standardization differs between males and females, the patterns are moving in the same direction, and doing so strongly.  Why are babies – both boys and girls – increasingly likely to be given uncommon names?  One can surmise that it describes a society where individualism is being sought out more and more.  It may also point toward a lesser desire or obligation to pass down family names and a lesser emphasis on tradition.  So are we increasingly a nation of creative individualists, or are we increasingly lost and rootless?  Or both?


Building Empathy

In the past year I’ve been involved with a few projects at Corona that involve evaluating programming for teenagers. One commonality across these projects is that the organizations have been interested in building empathy in teenagers. As I’ve been reading through the literature on empathy, I’ve been thinking about how building empathy should be a goal of most nonprofits.

Perhaps not surprisingly, there’s research demonstrating that people are more likely to donate when they feel empathy for the recipient. This research builds upon the classic psychology research demonstrating that empathy increases the likelihood of altruism, especially when there are costs to being altruistic. It’s clear that empathy can play an important role in motivating people to give altruistically, but how can we build empathy especially for others who are not very similar to ourselves?

One useful way to build empathy in marketing materials is to create stories that allow people to connect to those who need help or to those who are helping. The idea that organizations should be using storytelling to engage and attract stakeholders has recently gained traction. Stories are most powerful when people are able to lose themselves in a character.  This is why reading or seeing a story from the first-person perspective can be so powerful.

While you don’t necessarily need research to write an empathy-building story to use in marketing materials, research can provide useful information for creating those stories. Any data or information that you have collected about your donors or your recipients can provide a great foundation for creating a story. And if you develop new, empathy-building marketing materials, you might consider testing the impact of those materials.


DIY Tools: Network Graphing

Analyzing Corona’s internal data for our annual retreat is one of my great joys in life.  (It’s true – I know, I’m a strange one.)  For the last few years I’ve included an analysis of teamwork at Corona.  Our project teams form organically around interests, strengths, and capacity, so over the course of a year most of us have worked with everyone else at the firm on a project or two, and because of positions and other specializations some pairs work together more than others.  Visualizing this teamwork network is useful for thinking about efficiencies that may have developed around certain partnerships, cross-training needs, and so on.  The reason I’m describing this is that I’ve tried out a few software tools in the course of this analysis that others might find useful for their own data analysis (teamwork or otherwise).

For demonstration purposes, I’ve put together a simple example dataset with counts of shared projects.  In reality, I prefer to use other metrics like hours worked on shared projects because our projects are not all of equal size, and I might have worked with someone on one big project where we spent 500 hours each on it, and meanwhile I worked on 5 different small projects with another person where we logged 200 hours total.

But to keep it simple here, I start with a fairly straightforward dataset.  I have three columns: the first two are the names of pairs of team members (e.g., Beth – Kate, though I’m using letters here to protect our identities), and the third column has the number of projects that pair has worked on together in the last year.  To illustrate:

My dataset contains all possible staff pairs.  We have 10 people on staff, so there are 45 pairs.  I want to draw a network graph where each person is a vertex (or node), and the edge (or line) between them is thicker or thinner as a function of either the count of shared projects or the hours on shared projects.

This year I used Google Fusion Tables to create the network graph.  This is a free web application from Google.  I start by creating a fusion table and importing my data from a Google spreadsheet.  (You can also import an Excel file from your computer or start with a blank fusion table and enter your data there.)  The new file opens with two tabs at the top – one called Rows that looks just like the spreadsheet I imported, and the other called Cards that looks like a bunch of notecards, each containing the info in one row of data.  To create the chart, I click the plus button to the right of those tabs and select “Add chart”.  In the new tab I select the network graph icon in the lower left, and then ask to show the link between “Name 1” and “Name 2”, weighted by “Count of Shared Projects”.  It looks like this:

There are a few things I don’t love about this tool.  First, it doesn’t seem to be able to show recursive links (from me back to me, for example).  We have a number of projects that are staffed by a single person, and being able to add a weighted line indicating how many projects I worked on by myself would be helpful.  As it is, those projects aren’t included in the graph (I tried including rows in the dataset where Name 1 and Name 2 are the same, but to no avail).  As a result, the bubble sizes (indicating total project counts) for senior staff tend to be smaller on average, because more senior people have more projects where they work alone, and those projects aren’t represented.  Also, the tool doesn’t have options for 2D visualizations, so if you need a static image you are stuck with something like the above which is quite messy.

However, the interactive version is quite fun as you can click and drag the nodes to spin the 3D network around and highlight the connections to a particular person.

Another tool option that I’ve used in the past (and that is able to show recursive links and 2D networks) is an Excel template called NodeXL.  You can download the template from their website – you’ll need to install it (which requires a restart of your computer) – and then to use it just open your Windows start menu and type NodeXL. Instructions here.  I had some difficulties using it with Office 2016, but in Office 2013 it worked quite well.
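If neither tool quite fits, a third option is a few lines of Python using the networkx and matplotlib libraries.  A minimal sketch, with made-up pair counts standing in for the real data:

```python
import networkx as nx
import matplotlib.pyplot as plt

# Each tuple is (person 1, person 2, count of shared projects).
# The counts here are made up for illustration.
pairs = [("A", "B", 7), ("A", "C", 3), ("B", "C", 5),
         ("C", "D", 2), ("A", "D", 6), ("B", "D", 1)]

G = nx.Graph()
for person1, person2, shared in pairs:
    G.add_edge(person1, person2, weight=shared)

# 2D layout with edge thickness proportional to shared projects.
pos = nx.spring_layout(G, seed=42)
widths = [G[u][v]["weight"] for u, v in G.edges()]
nx.draw_networkx(G, pos, width=widths, node_color="lightgray")
plt.axis("off")
plt.show()
```

Unlike the Fusion Tables chart, this produces a static 2D image you can drop into a report, and newer versions of networkx can even draw self-loops (e.g., G.add_edge("A", "A", weight=4)) for those solo projects.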

If you try these out, share your examples with us!

 


Making improvements through A/B testing

Did you know that when you visit Amazon.com, the homepage you see may be different from the one someone else sees, even beyond the normal personalized recommendations? It’s been widely reported how Amazon is continually tweaking their homepage by running experiments, or A/B tests (sometimes referred to as split tests), to tease out what makes a meaningful impact on sales. Should this button be here or there? Does this call to action work?

For some research questions, asking people their opinion yields significant insight. For others, people just cannot give you an accurate answer. Would you be more likely to open an email with a question as a subject line or with a bold statement? You don’t really know until you try.

So, how does this work? In essence, you’re running experiments, and as with any scientific experiment, you will want a control group (where you don’t change anything) and an experiment group (where you alter a single variable). Ideally, you randomize people into each group so you don’t inadvertently influence your results by how people were selected.

So now you have two groups. While you may want to test several items, it is easiest to test one item at a time (and run multiple experiments to test each subsequent item). This will help you isolate the impact of your change – change too many things and you won’t know what made the difference or whether some changes were working against each other.

Finally, launch the tests and measure what happens. Did open rates differ between the two? Did engagement increase? Differences aren’t always dramatic, but even a slight change at scale can have a significant impact. For instance, if we increase response on a survey by 2%, that could mean 100 additional responses for essentially no additional cost. If the change costs money – for instance, one marketing piece costs more than the other – then a cost-benefit analysis will need to be performed. Sure, “B” performed better, but did it perform well enough to cover the additional expense of doing it?
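One practical note on the measurement step: before acting on a difference, it’s worth checking that it’s larger than chance alone would produce.  A minimal sketch of a two-proportion z-test, using made-up open counts:

```python
import math

def two_proportion_z(opens_a, n_a, opens_b, n_b):
    """z statistic for the difference between two proportions,
    e.g., email open rates for version A versus version B."""
    p_a, p_b = opens_a / n_a, opens_b / n_b
    pooled = (opens_a + opens_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Made-up example: version A was opened 210 times out of 1,000 sends,
# version B 255 times out of 1,000 sends.
z = two_proportion_z(210, 1000, 255, 1000)
print(f"z = {z:.2f}")  # values beyond about +/-1.96 are unlikely
                       # to be chance at the 95 percent level
```

Even when a difference clears that bar, the cost-benefit question above still applies.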

A few final quick tips: A/B testing is an ongoing endeavor. Maximum learning will occur over time by running many experiments. Remember, things change, so running even the same experiment over and over can still yield new insights. Finally, you don’t always have to split your groups in half. If you have 2,000 customers, you don’t need to split them into two groups of 1,000. Peeling off just 500 for an experiment may be enough and lower the chance of adverse effects.

Ok, enough with the theoretical. How does this work in real-life?

Take our own company as an example. Corona engages in A/B testing, both for our clients and for our own internal learning. For instance, we may tweak survey invites, incentive options, or other variables to gauge the impact on response rates. Through such tests we’ve teased out the ideal placement for the survey link within an email, whom such requests should come from, and many other seemingly insignificant variables (though they are anything but insignificant).

How about your organization? Let’s say you’re a nonprofit, since many of our clients are in the nonprofit sector. Here are a few ideas to get you started:

  • eNewsletters. Most newsletter platforms have the ability to do A/B testing. Test subject lines, content, colors, everything. Test days and send times.
  • Website. Depending on your platform, this may be easy or more difficult. Test appeals, images, and donation calls to action.
  • Ad testing. Facebook ads, Google ads, etc. Most platforms allow you to make tweaks to continually optimize your performance.
  • Mailings. Alter your mailing to change the appeal, call to action, images, or even form of the mailing (e.g., letter vs. postcard).
  • Programming. In addition to marketing and communications, even your services could possibly be tested. What service delivery model works best? Creates the biggest change?

What other ideas would you want to test?


Where are we now? The new next era nonprofit

I spent the other afternoon sitting around a large table chatting with professionals from across the sector about leadership, and the competencies that an effective leader will need in 2025. As we were chatting about today’s realities – and the social, political, technical and economic factors affecting nonprofits – it struck me that we’ve been here before. Or at least I have. Where’s that you may ask? Contemplating the “next era” of the sector.

While our social consciousness is slow to evolve and too slow to change (think social equity and gender identity), we are witnessing change in the form of driverless cars, “smart” cities, neuroscience, and the record number of Americans not in the workforce. Those topics weren’t showing up on my Facebook feed five years ago. Back then we weren’t contemplating car-free micro-apartments in Denver either.

What else is on the nonprofit leader’s to-do list today? Six recurring topics with new twists.

  1. $ - Figure out what impact investing really is and whether or not we can do it. I know you are secretly wondering if this really is a game changer or simply a spin on the same old, same old. It’s a game changer.
  2. Inclusiveness – Learn how we can create inclusive and accessible organizations that welcome and engage diverse people. We can’t keep kicking this can down the road.
  3. Innovation – Explore the edges of our work, seeking new ideas from unexpected places leveraging tools like design thinking.
  4. Mission impact – Admit to ourselves that we don’t really understand our customers or how to positively impact their lives in a meaningful way and that we may need to toss out some of our favorites.
  5. Engagement – Realize that too often we treat people transactionally. We think of them in buckets – volunteers, Facebook followers, donors, etc. We haven’t optimized our business models to cultivate engagement. Check out my Synergistic Business Model™ if you’d like to learn more about this all-too-often ignored cornerstone of the nonprofit business model.
  6. Sustainability – Fess up that our business models aren’t really sustainable and that we need thoughtful, committed and generous people to stand by us for the next few years while we invest in figuring things out – or, more bravely, exit the market and let someone new and fresh bring 2025 solutions to the marketplace.

There are no bright, defining lines between the sectors, only smudges that get fainter every time we step on them. Younger generations couldn’t care less about your tax status. They want to know you are authentic, relevant, impactful, and efficient. They expect you to do good. Period. Gen Y and the boomers are learning from them.

What competencies will a nonprofit leader likely need in 2025? My list begins with “intelligence” and the courage to explore, experiment, and collaborate. Higher education is looking at multi-disciplinary learning. Perhaps nonprofits need to consider busting their siloed approaches too.

What’s on your list?

2025 will be here before we know it. Are you ready?