When you first hear "A/B testing," it can easily take you back to college statistics, where you begin to envision standard deviation graphs and complex test conditions. Many people shy away from A/B testing because they believe that, even once they understand the basics, they won't have the time or resources to run tests.

Testing is one of the most important things you will do in email marketing. Everything from your subject line to your call to action will impact whether or not the contact chooses to engage with you. Small tweaks over time can drive big results, and help keep your contacts engaged during every stage of the buying process.

Let’s break down the basics:

What is A/B Testing?

In simple terms, A/B testing (or split testing) is when you take two variations and test them against each other to identify which performs best.

Common A/B Tests:

  • Sender (John@ vs. Marketing@)
  • Subject Line
  • Personalization vs. Non-Personalization (Hi John vs. Hi There)
  • HTML vs. Plain Text
  • Call-To-Action (Free Trial vs. Start For Free)
  • Day/Time of Email Send
  • Timing Between Communications (Once a Week vs. Once a Month)
  • Landing Page

Testing on different parts of your emails and campaigns can be critical to your overall conversion process.

For example, subject lines and day/time of your email send will impact whether your email gets opened, while your call to action will determine whether or not someone chooses to convert.

How to A/B Test

Testing emails is a simple way to start driving more conversions from your email marketing. Here are five easy steps to implement a successful experiment:

1. Decide what you want to test

To avoid skewed results, stick with testing one variable at a time. Why? If you use the same A/B test to change both the subject line and the sender, you won't be able to pinpoint which change drove the success or failure.

It is best practice to run higher-impact tests at the beginning. Save the lower-priority items for further down the road, when you are fine-tuning the details. Examples of high-impact variables include the sender, the subject line, and HTML vs. plain text.

Create both test email templates, keeping everything the same except for the one variable you want to test. For example, you might test two subject lines against each other.


2. Choose your sample size

Nothing sounds scarier than sending a test to a large group of contacts and watching it completely flop. You don't need to be a mathematician to get meaningful results: start by testing on at least 100 contacts.

If you test on too few contacts, chances are you will not see a clear-cut winner. The goal is to find the smallest number of contacts you can comfortably test on while still gathering good data. Until you find your sample sweet spot, stick to testing on 100-500 contacts each time.

Pull out the segment you want to send your email to. From there, draw the sample randomly. If you are using Hatchbuck, all you have to do is select the number of contacts you want to test.
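If your tool doesn't pull the sample for you, the idea is straightforward to sketch. Here's a minimal Python example, assuming a hypothetical list of contact email addresses as your segment:

```python
import random

# Hypothetical segment: a list of contact email addresses.
segment = [f"contact{i}@example.com" for i in range(2000)]

SAMPLE_SIZE = 200  # within the 100-500 sweet spot

# Draw the test group at random, without replacement, so it
# represents the whole segment rather than just the newest signups.
sample = random.sample(segment, SAMPLE_SIZE)
```

Random selection matters: if you just took the first 200 contacts, you might accidentally test only your oldest (or most engaged) subscribers.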

3. Run the test

Split your sample in half and send one template to each half. In Hatchbuck, we make this step easy: select your sample audience, click 'Send Email' from your templates, then select the two test templates. Hatchbuck will evenly split your list between the two templates.
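The even split can be sketched in a few lines of Python. This is just an illustration of what a tool like Hatchbuck does for you behind the scenes; the contact list is hypothetical:

```python
import random

# Hypothetical sample of contacts chosen in the previous step.
sample = [f"contact{i}@example.com" for i in range(200)]

# Shuffle first so neither half inherits any ordering in your list
# (e.g. signup date), then cut the list down the middle.
random.shuffle(sample)
half = len(sample) // 2
group_a = sample[:half]   # receives template A
group_b = sample[half:]   # receives template B
```

Each contact lands in exactly one group, and both groups are the same size, so a difference in results can be attributed to the templates rather than the split.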

4. Pick the winner

Around day 5-7, you should have a pretty good picture of your results. If you look at your results too soon, opens and clicks may still be trickling in and your stats will keep shifting. If you wait too long, later activity can skew the results as well.

If you are using Hatchbuck, review the "Email by Template" report to decide the winner of the experiment. Look at the open rates and click-through rates.
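Picking the winner comes down to simple arithmetic: divide opens (or clicks) by sends for each template and compare. A small sketch, using made-up numbers in place of your real report:

```python
# Hypothetical results pulled from your email report.
results = {
    "A": {"sent": 100, "opens": 32, "clicks": 6},
    "B": {"sent": 100, "opens": 24, "clicks": 5},
}

def rates(r):
    """Return (open rate, click-through rate) as fractions of sends."""
    return r["opens"] / r["sent"], r["clicks"] / r["sent"]

for name, r in results.items():
    open_rate, click_rate = rates(r)
    print(f"Template {name}: open rate {open_rate:.0%}, "
          f"click-through rate {click_rate:.0%}")

# If you tested the subject line, the open rate decides the winner;
# if you tested the call to action, look at click-through rate instead.
winner = max(results, key=lambda name: rates(results[name])[0])
```

With these made-up numbers, template A wins on open rate (32% vs. 24%). Keep in mind that small samples produce noisy rates, which is why the earlier step recommends at least 100 contacts.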

5. Test again

As you test different variables, carry the top-performing variation forward. For instance, in the example above, we first tested the subject lines. Once we found a winner, we then tested the email copy to continue fine-tuning our prospect campaign.


For the best results early in your testing process, dedicate one week to testing each item independently. As you gain experience with A/B testing, you will get a better sense of your audience and what engages them, and you can adjust your testing time frame accordingly.

Once you have run a couple of tests to fine-tune your email, it's time to put your email to the test (literally). You will have the confidence to email larger segmented lists and drive more ROI from your email marketing.

We all go in with opinions about what our contacts want to receive from us, but the proof is in the stats. Go into A/B testing free from opinions and guesswork, and collect the data you need to better target, communicate with, and convert your prospects over time!