Top Tips for Successful Split Testing

Tami Urban, Marketing Strategist

Author Bio

Tami brings years of hands-on experience driving revenue for her clients as an email marketing strategist. This experience includes working with hundreds of B2B and B2C clients in a range of industries. She guides her clients to success in their email and social marketing efforts and works with them on split testing, creative optimization, segmentation, engagement, and overall best practices. For fun, you can find Tami cooking, enjoying a good bottle of wine, or checking out a show at the local theater.

Every marketer has had campaigns that didn’t produce the intended open and click-through results. In these situations, your email communications most likely require a change. But how do you know which change will improve the results? When it comes to optimizing your email marketing performance, A/B testing will help you find the answer.

A/B Testing

A/B testing is exactly what it sounds like: two versions of an element (A and B) and a metric that defines your success. To determine which version is better, you subject both versions to testing at the same time. In the end, you measure which version was more successful with a sample population and select that version for sending to the remainder of your subscribers.

In a typical A/B test, you can send a control version of your email to 10% of your subscribers and a revised version to another 10%. The more successful message can then be sent to the remaining 80% of your subscribers, taking full advantage of the lift in results.
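
For readers who like to see the mechanics spelled out, the short Python sketch below splits a subscriber list into the two 10% test groups and the 80% remainder described above. It is illustrative only: the function name and the subscriber list are hypothetical, and most email service providers perform this split for you when you configure an A/B test.

```python
import random

def split_for_ab_test(subscribers, test_fraction=0.10, seed=42):
    """Split a subscriber list into two equal test groups and a holdout.

    Hypothetical helper for illustration; your email platform likely
    handles this automatically when you set up an A/B test.
    """
    shuffled = subscribers[:]
    random.Random(seed).shuffle(shuffled)  # randomize so each group is representative

    test_size = int(len(shuffled) * test_fraction)
    group_a = shuffled[:test_size]                # 10%: receives the control version
    group_b = shuffled[test_size:2 * test_size]   # 10%: receives the revised version
    remainder = shuffled[2 * test_size:]          # 80%: receives the winning version later
    return group_a, group_b, remainder

# Example usage with a hypothetical list of addresses:
# group_a, group_b, remainder = split_for_ab_test(subscriber_emails)
```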

What to Test First

Even though every A/B test is unique, certain common email elements are tested regularly. The easiest variable to test is the subject line, which historically provides some of the biggest gains. Typically, the subject line with the highest open rate is considered the winner; of course, there are many aspects of the subject line that could be tested.

Subject Line Testing Options

Length: Test a short subject line vs. a longer one.

Content vs. Creativity: Use a subject line that stirs curiosity vs. one that is straightforward.

Personalization: Try a subject line that uses the subscriber’s name vs. a generic one.

Questions: Use a subject line that asks a question vs. one that doesn’t.

Capitalization: Experiment with one or more words in all caps vs. none.

Special Characters: Incorporate a few special characters vs. none.

Value vs. Promotion: Use a subject line that offers to solve a problem vs. one that simply promotes a product.

Other Email Elements to Test

  • Preheader text vs. no preheader, or a linked preheader vs. a text-only one
  • Call to action button vs. plain text. If using a button, test its wording, size, color and placement.
  • What time of day or day of the week garners the highest open rate
  • A complete email message vs. one that requires a click-through
  • Pricing and promotional offers, such as dollar off vs. percentage off
  • Images that include people with your product vs. product-only images

Caution! Bump Ahead

When A/B Tests Fall Short

Lack of Defined and Measurable Goals

A marketer must know what is being tested and how to measure each test. For example, does a red call to action button perform better than a green button? Do emails typically perform better on Mondays than Fridays? These are changes that can be easily measured.

Marketers run into trouble when their theories are too vague, such as testing two entirely different email designs with multiple variables. While you can technically conduct a test this way, it will be unclear which of the many changes led one message to outperform the other. An A/B test should compare two versions that differ by one, and only one, variable.

When Testing is Stopped Too Soon

The test emails have been sent and the results are coming in. After one hour, one message has outperformed the other, so you run with it, only to learn later that the other message did better in the end. The more time you allow for results to accumulate, the more accurate those results will be. Give a split test at least 4-6 hours before reviewing initial results and 24 hours for the final results. Did the test definitively prove or disprove your original question? Be sure to attribute your results to the specific variable you tested. And remember, one winning test does not a rule make. Rinse and repeat the test at least three to five times before drawing sweeping conclusions or making sweeping changes.
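
The advice above is to wait for results and to repeat the test. As a supplementary sketch, and not something this chapter prescribes, the hypothetical helper below shows one common way to check whether a difference in open rates between the two test groups is likely real rather than noise: a two-proportion z-test. The counts in the example comment are made up.

```python
from math import sqrt, erf

def open_rate_difference_is_significant(opens_a, sends_a, opens_b, sends_b, alpha=0.05):
    """Two-proportion z-test comparing open rates for versions A and B.

    Illustrative only: the inputs are hypothetical counts, and a single
    significant result still deserves the repeat tests recommended above.
    """
    p_a = opens_a / sends_a
    p_b = opens_b / sends_b
    pooled = (opens_a + opens_b) / (sends_a + sends_b)
    se = sqrt(pooled * (1 - pooled) * (1 / sends_a + 1 / sends_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal distribution
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return p_value < alpha, p_value

# Example: 220 opens from 1,000 sends (A) vs. 260 opens from 1,000 sends (B)
# significant, p = open_rate_difference_is_significant(220, 1000, 260, 1000)
```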

Assuming Another Company’s Strategy Will Work For You

Do not make the mistake of taking published findings from other companies and applying them directly to your subscribers. Industry standards and best practices need to be tested to ensure they work for your audience. Continually test your assumptions and remember that not all audiences will respond to the same type of subject line, call to action, email design, landing page, etc.

So if you are looking to increase revenue and improve the results of your current marketing efforts, start running some A/B tests. Consider running one new test every other month, if not once a month. One retailer recently tested their call to action button color (red vs. gold) and found that the gold button brought in over 350% more revenue than the red one. All other elements of the messages were identical, and the results were confirmed across multiple tests. The overall open and click rates stayed the same, but the proof was in the conversions. Now I’m not telling you to change all your buttons to gold. What I am saying is that it’s completely worth testing to find out what works best for your subscribers.

Remember: Be specific about what you’re going to test, identify your goal and how you’ll measure your test’s success, learn from your findings, and retest until your results are proven over and over. Happy testing!