What is A/B testing?

A/B testing, sometimes referred to as split testing, is the practice of comparing two variations of an element against each other and observing which of the two performs better.

For example, a button that says “BUY NOW” (Variation A) could be compared against another button that says “BUY NOW 5% OFF” (Variation B).

To find out which variation performs better, incoming traffic is split so that Variation A is shown to 50% of your visitors and Variation B to the other half. The one that generates the higher conversion rate (that is, the one more people click on) is the winner.
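
To make the mechanics concrete, here is a minimal sketch in Python of what a testing tool does behind the scenes: randomly splitting visitors between the two variations and tallying conversions. The function and variable names are hypothetical, for illustration only.

```python
import random

# Running tally of visitors and clicks for each variation.
results = {"A": {"visitors": 0, "clicks": 0},
           "B": {"visitors": 0, "clicks": 0}}

def assign_variation():
    """Split incoming traffic 50/50 between Variation A and Variation B."""
    return random.choice(["A", "B"])

def record_visit(variation, clicked):
    """Count the visit, and the click if the visitor converted."""
    results[variation]["visitors"] += 1
    results[variation]["clicks"] += int(clicked)

def conversion_rate(variation):
    """Clicks divided by visitors; the higher rate wins."""
    r = results[variation]
    return r["clicks"] / r["visitors"] if r["visitors"] else 0.0
```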

This is a simple example to demonstrate what A/B testing is. It has become very popular in the last five years or so, and for good reason: it is highly effective.

In the paragraphs below, we intend to give you a complete view of how A/B testing works and why you should be doing it.

Why You Should A/B Test

The case for A/B testing is strong: when you make changes to a design, testing along the way means you can back up design decisions with data.

Situations like these are very common:

  • Members of a design team disagree on the best path to pursue
  • A client and a designer disagree about which variation of an interface will work better
  • The business or marketing team and the design team disagree on which design will work better

Apart from giving everyone a platform to air sometimes heated personal opinions and biases, discussions like these usually lead nowhere other than hour-long, tense meetings. Data is by far the best way to settle these debates. A client wouldn’t argue that a blue button is better than a red one if they knew the red variation would increase their revenue by 0.5% (say, $500/day).

A design team wouldn’t argue over which imagery to use if they knew that a certain variation increases retention. A/B testing helps teams deliver better work, more efficiently.

Going further, it also allows you to improve key business metrics. Testing, especially when conducted continually, enables you to optimize your interface and make sure your website is delivering the best results possible.

Picture for a moment an ecommerce store.

The goal is to increase the number of checkouts. A/B testing only the listing page would have a small effect on the total number of checkouts; it wouldn’t move the needle significantly, and neither would optimizing just the homepage header, for example.

However, running tests to optimize every area, from the menus all the way to the checkout confirmation, will produce a compound effect that makes much more of an impact.
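
To see why the compounding matters, here is a quick back-of-the-envelope calculation in Python. The step names and lift figures are made up for illustration.

```python
# Hypothetical per-step lifts won from separate A/B tests.
lifts = {
    "menu": 0.03,           # +3% from a clearer menu
    "listing_page": 0.05,   # +5% from better product listings
    "checkout": 0.04,       # +4% from a simpler checkout form
}

total = 1.0
for step, lift in lifts.items():
    total *= 1 + lift       # improvements multiply, not add

print(f"Combined lift: {total - 1:.1%}")  # ~12.5%, more than any single test
```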

How A/B Testing Works

The A/B Testing process is actually fairly straightforward and can be summarized in a few key steps:

  • Understand Your Metrics
    The first and most important step is having a clear understanding of what your current metrics are. Go through your analytics tools to find the most problematic areas: where people are most likely to drop off and which areas need improvement. Be mindful of where you stand in terms of business metrics too, revenue being one good example.
  • Identify Your Goals
    The next step is identifying your goals. What are you trying to achieve with A/B testing? Looking at your analytics and understanding your metrics will help you here. You might want to solve a drop-off issue in the checkout process, or increase signup form conversions and generate more leads. Whatever it is, it’s essential to state it clearly to guide your tests.
  • Identify Elements to Be Tested
    Now that you know what you are trying to achieve, you need to identify the elements that affect that specific metric. Let’s assume that when looking at the metrics, you noticed an alarming number of people don’t finish checking out when they arrive at the payment screen. Your goal was to reduce the number of drop-offs and, as a result, increase conversions and ultimately revenue. Since the peak drop-off is on that page, you need to break down the elements that can be tested. In a simple checkout page, they would probably be something like:

    • Field Labels
    • Required Fields
    • Call to Action: Color, Copy, Positioning, Size
    • Copy
    • Positioning of key information, such as total price or taxes
    • And more

    Once you know which elements you should be testing, you’ll move on to creating variations for each one.

  • Create the Variations
    With the interface elements in mind, the next step is working on variations for each one. If you are just getting started and don’t have a lot of traffic, be careful not to overdo this step: with only a few hundred visitors, testing hundreds of variables won’t produce any significant results, so adapt the number of variations to your traffic. In any case, there is a broad range of variables you can test on each element.

    • Example 01: For a specific button, you can generally modify the color, size, positioning, label and hover effect.
    • Example 02: For the copy, you can test different lengths, value propositions, tones, positioning and more.
  • Run the Experiments
    With the variables set, start running the experiments. Most modern tools make this as easy as pressing a button. Visitors are randomly assigned to the variations or the control, and accounted for in the overall results (see the sketch after this list for what that assignment can look like under the hood). You’ll watch the statistical confidence grow as more visitors go through the variations, and soon enough you’ll have enough data to determine which one performs better.
  • Report and Digest the Results
    With the experiment complete, you can start digesting the results and generating a report. The idea here is to understand what happened, which variables worked better, and what the effect was on your key metrics. You will also be able to note what the likely effects will be once the change is implemented, outline a plan for implementing the optimal changes, and start planning your next round of testing.

What To Test

This is a key question that will directly affect how successful you are when running A/B tests. There is no universal answer, since it depends almost entirely on your needs and your context. You need to have clear goals in mind and, with those goals as a starting point, decide which elements are worth testing and what your variables will be. Let’s go through a few different scenarios:

Let’s assume you are a tech startup looking to increase the number of leads you get through your website. You have a homepage where you explain your services and a signup page where users can request a demonstration. In this scenario, there are multiple things you could be testing:

  1. Messaging on the homepage. Is the copy you are presenting optimal for the audience? Can you improve it or offer different value propositions? Variables to test here: different value propositions, wordings and tones.
  2. Imagery supporting the copy. Do you have any imagery on the homepage? If so, try something different. If you don’t use imagery, try adding visuals that support the written content and help convey your message.
  3. The call to action. How are you taking users to the signup form? Test different button colors, sizes and labels. Try different positioning, and maybe consider adding some social proof on the homepage itself.
  4. The signup form itself, where there is a multitude of things you could test: the field labels, the number and type of fields, whether they are required, and the CTA’s color, label or size. You could go further and try social proof and other elements too.

We could dive even deeper, but this is a useful breakdown of an extremely simple example: a homepage and a signup page. Depending on your website, things might be more complex. An ecommerce site looking to reduce cart drop-offs would have different challenges, so it should be testing different things. The same goes for a blog that wants to increase the number of email signups, for example.

Things To Avoid

Done right, A/B testing is incredibly powerful. But most designers, developers and marketers who are just getting started with testing aren’t necessarily data scientists and haven’t taken the time to understand common pitfalls when working with data. Peter Borden, from SumAll, wrote a blog post in 2014 describing how A/B testing almost got him fired. He was using Optimizely at the time, and a later update addressed the concerns described in that post, but it’s still worth the read. Testing tools do their best to prevent mistakes, but you should still be aware of common bad practices.

Consider Statistical Confidence

You shouldn’t conclude an experiment until you have enough data to make the results statistically significant; you can’t infer much if only a handful of conversions were observed. Most tools will indicate the statistical confidence of an experiment, so make sure to consider it. The sketch below shows the basic idea behind that calculation.
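
Most tools compute this confidence for you, but it helps to know what is happening underneath. Here is a minimal sketch of a two-proportion z-test in Python; the traffic numbers are made up, and SciPy is assumed to be available.

```python
from math import sqrt
from scipy.stats import norm

def ab_significance(clicks_a, visitors_a, clicks_b, visitors_b):
    """Two-proportion z-test: returns the p-value for the observed difference."""
    p_a = clicks_a / visitors_a
    p_b = clicks_b / visitors_b
    # Pooled conversion rate under the null hypothesis (no real difference).
    p = (clicks_a + clicks_b) / (visitors_a + visitors_b)
    se = sqrt(p * (1 - p) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se
    return 2 * norm.sf(abs(z))  # two-sided p-value

# 2.0% vs 2.6% conversion on 5,000 visitors each:
p_value = ab_significance(100, 5000, 130, 5000)
print(f"p-value: {p_value:.3f}")  # ~0.045; below 0.05 usually counts as significant
```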

Don’t test all variables at once

Let’s assume a scenario where your landing page has a blue button labeled “BUY NOW”, and your goal is to test a variation to see if you can improve the conversion rate. If, in this quick experiment, you change the button color to red and the text to “BUY NOW 5% OFF” and notice an increase in clicks, you won’t be able to tell whether the copy change or the color change produced those results. Make sure you are testing each variable (in this case, color and copy) individually, for example by structuring separate experiments, as sketched below.
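
One way to keep variables isolated is to define each test against the same control, changing exactly one thing at a time. A hypothetical plan might look like this (multivariate testing tools can untangle combinations, but they need far more traffic):

```python
# Each experiment changes exactly one variable against the same control,
# so a winning result can be attributed to that variable alone.
experiments = [
    {
        "name": "cta-color",
        "control":   {"color": "blue", "label": "BUY NOW"},
        "variation": {"color": "red",  "label": "BUY NOW"},         # only color changes
    },
    {
        "name": "cta-copy",
        "control":   {"color": "blue", "label": "BUY NOW"},
        "variation": {"color": "blue", "label": "BUY NOW 5% OFF"},  # only copy changes
    },
]
```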

Don’t test in different time periods

If you have an ecommerce store focused on Christmas-themed home decoration and your goal is to increase overall conversion from visitor to customer, you might notice that conversions increase naturally as Christmas approaches: with the date coming up, visitors are more likely to buy. As a result, if you run Variation A of a test four weeks before Christmas and Variation B one week after, the results you observe won’t be a direct effect of your change; they will have been influenced by external factors. Always make sure you are splitting the audience and running both variations at the same time.

What Tools To Use?

You’ll find dozens of A/B testing tools on the market catering to all needs. Some are entry-level, designed specifically for beginners to get going fast: they’ll likely include a visual editor and easy installation and onboarding, with no coding required to run the tests. Some go as far as letting you create entire landing pages.

On the other side of the spectrum, there are tools built specifically for enterprise needs. These are designed to handle tests with millions of visitors and have features that enable an advanced tester to make the most of them. They integrate well with other platforms and use algorithms to automatically optimize campaigns.

Pricing will vary depending on the focus of the tool. While starter-level tools have free tiers, advanced and enterprise-level plans can easily run into thousands of dollars per month. If you are just getting started, take a look at Optimizely or VWO; they are easy to set up, learn and get going with.

If you want to learn more about the tools available, we’ve done an in-depth analysis of our favorite tools, ranging from beginner level to enterprise. Make sure to test out various alternatives and do your own analysis, taking your context and needs into consideration.

Closing Notes and Reference Articles

A/B testing is extremely powerful. Executed right, it can drive significant results and improved metrics. Building a culture of constant testing and data-driven decisions will make sure you are continually improving your product.

If you are not testing right now, you should get started.

After going through this blog post, you will probably have a few ideas in mind: metrics you want to improve, areas of your website that you know need some work. You might even have ideas of what you actually want to test: different imagery, copy or colors. Try out some of the tools mentioned, get familiar with A/B testing, and start testing immediately.

You will notice the results.

If you want to get even more familiar with the concept and dive deeper into specific areas (statistical confidence, for example), here are a few interesting resources that might help you.

You should check out this great piece from 20Bits about Statistical Analysis and A/B Testing. There is also 538’s take on the difficulties of executing research with data.

In general, it’s also a great idea to look at case studies and observe what other people and companies have done that produced great results. You’ll find multiple case studies at Optimizely’s and VWO’s blogs, with VWO also offering Ideafox, a tool for searching through studies to find ideas focused on your industry.

Lastly, if you are interested in testing and content marketing, and want to learn more about what top publishers are doing to test headlines and images, Contently wrote a great piece in early 2015, “How BuzzFeed, R29 and Other Top Publishers Optimize Their Headlines and Images”, that’s definitely worth a read.