This one or that one

Did you know that when you visit Amazon.com, the homepage you see may be different from the one someone else sees, even beyond the normal personalized recommendations? It’s been widely reported that Amazon continually tweaks its homepage by running experiments, or A/B tests (sometimes referred to as split tests), to tease out what makes a meaningful impact on sales. Should this button be here or there? Does this call to action work?

For some research questions, asking people for their opinion yields significant insight. For others, people just cannot give you an accurate answer. Would you be more likely to open an email whose subject line asks a question or one that makes a bold statement? You don’t really know until you try.

So, how does this work? In essence, you’re running experiments, and as with any scientific experiment, you’ll want a control group (where you don’t change anything) and a treatment group (where you alter the variable you’re testing). Ideally, you randomize people into each group so you don’t inadvertently influence your results through how people were selected.
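To make that setup concrete, here’s a minimal sketch of the random assignment step in Python. The subscriber list, seed, and function name are all hypothetical, and most email or survey platforms can do this split for you; the point is simply that chance, not judgment, decides who lands in which group.

```python
import random

def assign_groups(contacts, seed=42):
    """Randomly split a contact list into control and treatment groups."""
    rng = random.Random(seed)      # fixed seed so the split can be reproduced later
    shuffled = contacts[:]         # copy so the original list is left untouched
    rng.shuffle(shuffled)
    midpoint = len(shuffled) // 2
    return shuffled[:midpoint], shuffled[midpoint:]   # (control, treatment)

# Hypothetical example: 2,000 email subscribers split evenly at random
subscribers = [f"subscriber_{i}@example.org" for i in range(2000)]
control, treatment = assign_groups(subscribers)
print(len(control), len(treatment))   # 1000 1000
```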

So now you have two groups. While you may want to test several items, it is easiest to test one item at a time (and run additional experiments for each subsequent item). This will help you isolate the impact of your change – change too many things at once and you won’t know what made the difference, or whether some changes were working against each other.

Finally, launch the tests and measure what happens. Did open rates differ between the two? Did engagement increase? Differences aren’t always dramatic, but even a slight change at scale can have a significant impact. For instance, if we increase the response rate on a survey by 2%, that could mean 100 additional responses for essentially no additional cost. If the change costs money – for instance, one marketing piece costs more than the other – then a cost-benefit analysis will need to be performed. Sure, “B” performed better, but did it perform well enough to cover the additional expense of doing it?
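If you want a rough sense of whether a difference like that is real or just noise, a two-proportion z-test is one common approach. The sketch below is only an illustration with made-up open counts and costs; your testing platform or a statistician will give you a more rigorous read.

```python
from math import erf, sqrt

def two_proportion_z_test(opens_a, sent_a, opens_b, sent_b):
    """Compare open rates for versions A and B with a two-proportion z-test."""
    p_a, p_b = opens_a / sent_a, opens_b / sent_b
    pooled = (opens_a + opens_b) / (sent_a + sent_b)
    se = sqrt(pooled * (1 - pooled) * (1 / sent_a + 1 / sent_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))   # two-sided p-value via the normal CDF
    return p_a, p_b, z, p_value

# Hypothetical results: each version went to 1,000 recipients
p_a, p_b, z, p = two_proportion_z_test(opens_a=220, sent_a=1000, opens_b=265, sent_b=1000)
print(f"A: {p_a:.1%}  B: {p_b:.1%}  z = {z:.2f}  p = {p:.3f}")

# Simple cost-benefit check (all figures hypothetical): did B's lift cover its extra cost?
extra_cost_of_b = 150.00                  # B used a pricier design
value_per_extra_response = 5.00
extra_responses = (p_b - p_a) * 1000
print("Worth it" if extra_responses * value_per_extra_response > extra_cost_of_b else "Not worth it")
```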

A few final quick tips: A/B testing is an ongoing endeavor, and maximum learning comes from running many experiments over time. Remember, things change, so running even the same experiment again can still yield new insights. Finally, you don’t always have to split your groups in half. If you have 2,000 customers, you don’t need to split them into two groups of 1,000. Peeling off just 500 for an experiment may be enough and lowers the chance of adverse effects.
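Here’s a minimal sketch of that smaller holdout, again with a hypothetical customer list: only 500 randomly chosen customers see the new version, while the other 1,500 carry on as usual.

```python
import random

def peel_off_test_group(contacts, test_size=500, seed=7):
    """Hold out a small random test group; everyone else stays with the current approach."""
    rng = random.Random(seed)
    test_group = set(rng.sample(contacts, test_size))
    control_group = [c for c in contacts if c not in test_group]
    return sorted(test_group), control_group

# Hypothetical list of 2,000 customers
customers = [f"customer_{i}" for i in range(2000)]
test, rest = peel_off_test_group(customers)
print(len(test), len(rest))   # 500 1500
```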

OK, enough with the theoretical. How does this work in real life?

Take our own company as an example. Corona engages in A/B testing, both for our clients and for our own internal learning. For instance, we may tweak survey invitations, incentive options, or other variables to gauge the impact on response rates. Through such tests we’ve teased out the ideal placement of the survey link within an email, whom such requests should come from, and many other seemingly insignificant variables (though they are anything but insignificant).

How about your organization? Let’s say you’re a nonprofit, since many of our clients are in the nonprofit sector. Here are a few ideas to get you started:

  • eNewsletters. Most newsletter platforms have the ability to do A/B testing. Test subject lines, content, colors, everything. Test days and send times.
  • Website. Depending on your platform, this may be easy or more difficult. Test appeals, images, and donation calls to action.
  • Ad testing. Facebook ads, Google ads, etc. Most platforms allow you to make tweaks to continually optimize your performance.
  • Mailings. Alter your mailing to change the appeal, call to action, images, or even form of the mailing (e.g., letter vs. postcard).
  • Programming. In addition to marketing and communications, even your services could possibly be tested. What service delivery model works best? Which creates the biggest change?

What other ideas would you want to test?