Would you send an email campaign without split-testing it first? In my opinion, the answer should (almost) always be a firm ‘No!’
By definition, a split test (often referred to as an A/B test) is one in which an email marketer takes two or more samples from a list, each considered representative of the entire list, and sends a different version of an email campaign to each sample to compare the effectiveness of the variations.
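To make that concrete, here’s a minimal sketch in Python of how two random, non-overlapping samples might be drawn from a mailing list. The 10% sample size and the list itself are made up for illustration:

```python
# A minimal sketch of drawing two random, non-overlapping samples from a
# mailing list for an A/B test. The list and the 10% sample fraction are
# assumptions for illustration only.
import random

def split_samples(mailing_list, sample_fraction=0.10, seed=42):
    """Return (sample_a, sample_b, remainder) as disjoint random groups."""
    shuffled = mailing_list[:]          # copy so the original stays untouched
    random.Random(seed).shuffle(shuffled)
    n = int(len(shuffled) * sample_fraction)
    sample_a = shuffled[:n]             # receives version A
    sample_b = shuffled[n:2 * n]        # receives version B
    remainder = shuffled[2 * n:]        # held back for the roll-out
    return sample_a, sample_b, remainder

# Example: a 10,000-address list yields two samples of 1,000 each.
addresses = [f"user{i}@example.com" for i in range(10_000)]
a, b, rest = split_samples(addresses)
print(len(a), len(b), len(rest))        # 1000 1000 8000
```

Shuffling before slicing is what justifies treating each sample as representative of the list as a whole.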
Essentially, there is rarely a reason not to run a split test before deploying an email campaign, even if it’s as simple as a slight variation to the subject line.
TAKE A SAMPLE
I’m often asked, “What sample percentage should I be testing to?” There’s no fixed answer – it depends mostly on how big your mailing list is, and what portion of the list you want left over to roll out the most successful version to. Provided your sample sets are meaningfully large, i.e. large enough that every group can be considered equally ‘average’, you can use relatively small percentages. On the other hand, you don’t want to use up 50% of your list on testing, only to find out that one of the creatives was twice as successful as the other(s).
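If you want more than a gut feel for ‘meaningfully large’, the classic two-proportion sample-size formula is a useful sanity check. This is a rough sketch under assumed numbers – a 20% baseline open rate and a five-point uplift you care about detecting – not anything specific to one platform:

```python
# A rough sketch of the standard two-proportion sample-size formula, to
# sanity-check whether a sample is "meaningfully large". The baseline rate
# and the uplift you want to detect are assumptions you supply.
def required_sample_size(p1, p2, z_alpha=1.96, z_beta=0.84):
    """Per-group size to detect p1 vs p2 at ~95% confidence, ~80% power."""
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return int(round((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2))

# E.g. to reliably detect an open rate lifting from 20% to 25%:
n = required_sample_size(0.20, 0.25)
print(n)  # ~1,090 recipients per version
```

On a 100,000-address list, two samples of that size use barely 2% of the list – which is why small percentages are usually enough.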
ROLL-OUT, ROLL-OUT
I’m also frequently asked, “Which version should I roll out?” You might think the answer should be obvious based on what you are testing, but I’ve seen (what I consider to be) the less successful split rolled out on more than one occasion. For example, in one A/B test, one version had a click-through rate over 2% higher than the other, so it was rolled out to the rest of the list. That seems fair enough, except the thing being tested was the subject line! Despite the two creatives being exactly the same, the version that was actually rolled out featured the worse-performing subject line by open rate – and, notably, double the number of unsubscribes.
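The moral: judge the winner on the metric that matches the variable you tested, and keep an eye on the side effects. Here’s a minimal sketch of that decision rule – the result fields are hypothetical stand-ins for whatever your platform reports:

```python
# A minimal sketch of picking a winner on the metric you actually tested,
# while flagging side effects such as unsubscribes. The metric names and
# values below are hypothetical, not real platform output.
def pick_winner(results, primary="open_rate", guard="unsub_rate"):
    """results: dict of version -> metrics, e.g. {"A": {"open_rate": ...}}."""
    winner = max(results, key=lambda v: results[v][primary])
    loser = min(results, key=lambda v: results[v][primary])
    # Warn if the winner doubles (or worse) the guard metric of the loser.
    if results[winner][guard] >= 2 * results[loser][guard]:
        print(f"Warning: {winner} wins on {primary} but doubles {guard}.")
    return winner

results = {
    "A": {"open_rate": 0.22, "click_rate": 0.045, "unsub_rate": 0.002},
    "B": {"open_rate": 0.19, "click_rate": 0.048, "unsub_rate": 0.004},
}
# Subject line was tested, so open rate is the primary metric: "A" wins.
print(pick_winner(results))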
It’s also important to test different creative variants, for example by changing the location of your call-to-action (CTA) to analyse whether having it at the top/bottom of your email impacts your click-throughs and conversions. Remember that every data set performs differently, so split testing helps build a profile of your overall customer base over a period of time. Similarly, remember that testing more than one variable at a time only limits the amount of meaningful analysis you can carry out on the results.
TAKE COMMAND(ER)...
In Campaign Commander, you’re never more than 4 clicks away from viewing the statistics for a current or completed split test. Below is a snapshot of a campaign that had been deployed six hours previously – there is a clear winner, and the remaining 50% of the list received their version shortly after.
How long to leave your tests collecting stats before rolling out the full campaign is another frequently asked question. A 48-hour window gives a very accurate picture of how the two versions have performed, but tight business plans and strict deadlines often mean that marketers need to roll out the most successful version (so far) sooner than they would like.
A: UNCERTAINTY / B: KNOWLEDGE
One very recent split test from one of our clients is a great example of the impact this sort of strategy can have on the overall success of your email campaigns. A simple A/B test on the subject line yielded an open rate in Test A that was twice as high as in Test B. As the test went to only 10% of the whole mailing list, the remaining 90% could receive the better version, potentially increasing the overall ROI of the campaign by as much as 100%. What was the difference between the subject lines? One was short and direct; the other was mysterious and leading. As is often the case (but rarely acted upon in practice), the mysterious subject line was the winner – but that is a topic best saved for a later blog post!
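The back-of-the-envelope arithmetic behind that ‘as much as 100%’ claim looks like this, assuming a 100,000-address list and open rates of 20% vs 10% – numbers invented to match the two-to-one result above:

```python
# Back-of-the-envelope arithmetic behind the "up to 100%" claim, under
# assumed numbers: a 100,000-address list, a 10% test split, and open
# rates of 20% (Test A) vs 10% (Test B).
list_size = 100_000
test_share = 0.10                      # 10% tested in total, 5% per version
open_a, open_b = 0.20, 0.10

test_opens    = (list_size * test_share / 2) * (open_a + open_b)  # 1,500
rollout_opens = list_size * (1 - test_share) * open_a             # 18,000
naive_opens   = list_size * open_b      # 10,000 if B had gone to everyone

print(test_opens + rollout_opens)       # 19,500 opens vs 10,000 - nearly 2x
```

Had the weaker subject line been sent to the full list, the campaign would have generated roughly half the opens it actually did.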
CAN YOU REPEAT THAT?
Something frequently overlooked is the importance of keeping track of your past tests. If you can analyse your last 20 successful subject lines or creative layouts, it will inevitably help you create consistently improving email campaigns.
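Even a simple CSV log is enough to build that history. A minimal sketch – the field names and example values here are assumptions, not a Campaign Commander export:

```python
# A minimal sketch of logging each split test to a CSV file so past subject
# lines and layouts can be reviewed later. Field names and example values
# are assumptions for illustration.
import csv
import os
from datetime import date

LOG_FILE = "split_test_log.csv"
FIELDS = ["date", "variable_tested", "version_a", "version_b",
          "open_rate_a", "open_rate_b", "winner"]

def log_test(row):
    """Append one test's results, writing a header row on first use."""
    new_file = not os.path.exists(LOG_FILE)
    with open(LOG_FILE, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow(row)

log_test({
    "date": date.today().isoformat(),
    "variable_tested": "subject line",
    "version_a": "20% off everything today",
    "version_b": "We saved something for you...",
    "open_rate_a": 0.10,
    "open_rate_b": 0.20,
    "winner": "B",
})
```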
And the best thing about split testing? It’s really quick, very easy to do, and can have a hugely beneficial impact on your campaign results and ROI. So you might ask yourself, “Why don’t I split-test?”