

Put Your Emails to the A/B Test

DATE PUBLISHED: May 05, 2017
Category: Marketing Strategy
 

Not happy with your email conversion rates? Let A/B tests, also known as conversion rate optimization tests, be your guide to ascertaining what's working and what's not.

A/B Testing may sound complicated and daunting, but it's actually fairly easy to implement when you know what you want to test. A/B testing is essentially just an experiment where marketers test the performance of two different versions of one piece of content.

It's all about using evidence rather than gut instinct to determine which subject line, landing page, CTA, subscribe-button size, or other element gets the most conversions -- or the most opens.

“The A/B test can be considered the most basic kind of randomized controlled experiment,” says Kaiser Fung, founder of the applied analytics program at New York's Columbia University. “In its simplest form, there are two treatments and one acts as the control for the other.”

The 3 Keys to A/B Testing

Test Only a Single Element at a Time

The first key to A/B testing is to establish which variable you will assess against your predetermined goal, whether that goal is click-through rates, open rates, conversion rates or something else. For instance, will the element you evaluate be the text itself, the text size or color, the CTA placement, a graphic, or a different element you're interested in? Select a single variable and develop two different versions of it. (Aside: yes, you can conduct tests for more than one element of a piece of content, but concentrate on only one for each trial.) 


For example, let's say you're a solar firm and want to reach out to bottom-of-the-funnel leads. The objective of your email is to offer a free on-site residential solar system estimate. You're reasonably comfortable with your email copy -- so you'll keep that identical across the two trial versions. You're also happy with placement of your logo and the overall design of the email, but your primary concern is your CTA, so that's the element you'll change.

Perhaps you've decided that the CTA of Version A, known as "the control," will read, "Start saving money today!" Version B, where you introduce your change, will read, "Click here to set up your free consultation now." The difference is subtle, with Version A a bit more aggressive than Version B -- but it's still worth testing.

Choose an A/B Test Sample Size

Now comes the fun part: sending the eBlasts to a large enough sample. The goal is to send the trial to only a percentage of your full target audience -- in this case, your bottom-of-the-funnel leads. Then, after a predetermined time, check the results to discover which version performs better, and send that version to the remainder of your target audience.

To achieve statistically significant results, you'll need a list of at least 1,000 addresses. As previously noted, our example is geared toward bottom-of-the-funnel leads, so don't send either version to low-scoring or top-of-the-funnel leads -- you won't be making "apples to apples" comparisons.
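To make the mechanics concrete, here's a minimal Python sketch of that split. It assumes your bottom-of-the-funnel leads live in a plain list of email addresses; the 10% test fraction and the function name are illustrative, not tied to any particular email platform.

```python
import random

def split_for_ab_test(addresses, test_fraction=0.10, seed=42):
    """Split a mailing list into a Version A sample, a Version B sample,
    and a holdout that later receives the winning version."""
    shuffled = addresses[:]                # copy so the original list stays intact
    random.Random(seed).shuffle(shuffled)

    test_size = int(len(shuffled) * test_fraction)
    half = test_size // 2

    version_a = shuffled[:half]            # gets the control CTA
    version_b = shuffled[half:test_size]   # gets the variant CTA
    holdout = shuffled[test_size:]         # waits for the winner

    return version_a, version_b, holdout

# Example: 1,200 bottom-of-the-funnel leads, 10% used for the trial
leads = [f"lead{i}@example.com" for i in range(1200)]
a_group, b_group, remainder = split_for_ab_test(leads)
print(len(a_group), len(b_group), len(remainder))  # 60 60 1080
```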

Account for Mobile Users and Desktop Users

Engage in "blocking," which lets you account for any differences between mobile and desktop users, according to Fung.

Blocking involves taking the same audience -- your bottom-of-the-funnel leads -- and determining which of them read your emails on mobile devices and which use desktops. Randomly assign an equal number from each group to each A/B version to ensure every set of recipients is playing on a level field.
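A rough sketch of that blocking step, assuming you already know each recipient's preferred device (the device labels and names here are illustrative, not from any specific email tool):

```python
import random
from collections import defaultdict

def assign_with_blocking(recipients, seed=7):
    """Assign recipients to Version A or B so that mobile and desktop
    readers are split evenly across both versions."""
    blocks = defaultdict(list)
    for email, device in recipients:       # device is "mobile" or "desktop"
        blocks[device].append(email)

    assignments = {"A": [], "B": []}
    rng = random.Random(seed)
    for device, emails in blocks.items():
        rng.shuffle(emails)
        # Alternate within each block so A and B get an equal share of this device
        for i, email in enumerate(emails):
            assignments["A" if i % 2 == 0 else "B"].append(email)
    return assignments

recipients = [("lead1@example.com", "mobile"), ("lead2@example.com", "desktop"),
              ("lead3@example.com", "mobile"), ("lead4@example.com", "desktop")]
groups = assign_with_blocking(recipients)
```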

How to Measure the Success of Your A/B Testing

Look back at the objective you hoped to achieve before implementing the A/B test. For example, let's say you wanted to increase the click-through rate of your CTA from 4.16% to 8%. Use your tracking software to evaluate how each version performed. Did Version B meet or exceed your goal, or do you need to run another trial? If you reached or topped your 8%, that's terrific -- but perhaps you could optimize further, which would mean conducting another A/B test on a different variable.
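If you want to sanity-check that a lift like 4.16% to 8% isn't just noise, one standard approach (not specific to any email tool) is a two-proportion z-test on the click counts. A minimal sketch, with illustrative numbers:

```python
import math

def two_proportion_z_test(clicks_a, sent_a, clicks_b, sent_b):
    """Return the z-score and two-sided p-value for the difference in
    click-through rates between Version A and Version B."""
    rate_a = clicks_a / sent_a
    rate_b = clicks_b / sent_b
    pooled = (clicks_a + clicks_b) / (sent_a + sent_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / sent_a + 1 / sent_b))
    z = (rate_b - rate_a) / se
    # Two-sided p-value from the standard normal distribution
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Example: 500 sends per version, CTRs of roughly 4.2% vs 8%
z, p = two_proportion_z_test(clicks_a=21, sent_a=500, clicks_b=40, sent_b=500)
print(f"z = {z:.2f}, p = {p:.3f}")  # a p-value below 0.05 suggests a real difference
```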

Of course, you can also drill deeper to figure out which leads filled out contact forms on the landing pages, but the purpose of this post is to explore the nuts and bolts of actually conducting A/B testing. If you're looking for more tips about analyzing commonly tested elements beyond CTAs, such as subject lines, images, and promotions, check out HubSpot's thoughts.

Looking to roll out a killer email marketing strategy? We'd love to work with you.