Effective A/B Testing (Part 1)

Why Most Tests Fail Before They Begin

Written by Niels Christian Laursen

This isn’t a blog post about winning or losing tests. It’s about knowing whether your tests will tell you anything useful at all. Early in our testing journey, 44% of our experiments didn’t yield significant results. That means we learned nothing from them. They weren’t wins. They weren’t even informative failures. They were just noise. Read on to learn how to avoid the noise and get to the good tunes.

At Umbraco, we’ve spent hundreds of hours running A/B tests. And to be honest, we’ve failed a lot. In our eagerness to get started, we jumped headfirst into testing. The assumption was: the more tests we run, the more we learn. The result? A lot of wasted time and very little actual insight.

If you're just starting with A/B testing, or even if you’ve been at it for a while, chances are you will make the same mistake we did: running tests that never had a chance of producing significant results.

The 3 Factors For Generating Significant Results

If you want to avoid wasting your time, here are the three key factors we now use to qualify any A/B test before we hit publish.

1. Sample Size

Significance requires data. If a page doesn’t get enough traffic, it’s not worth testing, no matter how tempting it is.

A simple rule we follow:
Check how many page views a page had in the past 60 days. Divide that number by two (since half the visitors will see the original and half the variant). That gives you your available sample size.
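The rule is simple arithmetic. As a minimal sketch (the function name is ours, not from any Umbraco Engage API):

```python
def available_sample_per_variant(pageviews_last_60_days: int) -> int:
    """Half the visitors see the original, half see the variant,
    so each variant gets roughly half the page views."""
    return pageviews_last_60_days // 2

# A page with 12,000 views over 60 days gives ~6,000 visitors per variant.
print(available_sample_per_variant(12_000))  # 6000
```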

2. Conversions

You can’t measure improvement without a metric that matters. In A/B testing, that means tracking actual conversions. Not just clicks or scroll depth.

Look at the key metric you want to improve for the page in question. Are there enough conversions happening to generate a reliable signal? If not, the test won’t be meaningful, even if traffic is high.
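One way to make that check concrete is a rough screening function. The 200-conversions-per-variant floor below is a common industry rule of thumb, not a threshold from this post, and the helper name is our own:

```python
def enough_conversions(conversions_last_60_days: int,
                       min_per_variant: int = 200) -> bool:
    """Screen a page before testing: with a 50/50 traffic split,
    each variant sees roughly half the page's conversions."""
    return conversions_last_60_days / 2 >= min_per_variant

print(enough_conversions(600))  # True: ~300 conversions per variant
print(enough_conversions(150))  # False: ~75 per variant is a weak signal
```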

3. Impact Potential

Small changes usually lead to small improvements. If you’re testing something minor, like a button color, you’ll need a huge sample size to prove it worked, and even then the impact is likely to be small. That can be a reasonable time investment on websites with millions of users, but for most of us it’s not worth it.

Instead, we focus on bigger changes. If you can expect a 10-30% lift on conversions, you'll need less data to prove it's real, and you’ll get results faster.
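To see why bigger expected lifts need less data, here is a standard two-proportion sample-size approximation, sketched with Python's standard library. This is a textbook formula, not how Umbraco Engage computes sample sizes:

```python
import math
from statistics import NormalDist

def required_sample_per_variant(baseline_rate: float, relative_lift: float,
                                alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate visitors needed per variant for a two-sided
    two-proportion z-test at the given significance level and power."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + relative_lift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # ~1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)            # ~0.84 for 80% power
    p_bar = (p1 + p2) / 2
    n = ((z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
          + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
         / (p2 - p1) ** 2)
    return math.ceil(n)

# A 20% lift on a 3% baseline needs roughly 14,000 visitors per variant.
print(required_sample_per_variant(0.03, 0.20))
# A 5% lift on the same page needs roughly 15x more data,
# since the required sample scales with 1 / lift^2.
print(required_sample_per_variant(0.03, 0.05))
```

This is why chasing a 1-2% improvement on a modest-traffic page is usually a dead end: the detectable-effect math works against you before the test even starts.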

How We Approach Testing Today

These days, we use Umbraco Engage to manage our A/B tests. It helps us stay focused on tests that matter by making it easy to:

  • Prioritize high-traffic, high-conversion pages
  • Track meaningful metrics tied directly to business goals
  • Set up and monitor tests from right inside the CMS

With built-in tools to define test goals and calculate sample sizes, it’s much easier to avoid the kind of mistakes we made and turn ideas into results faster.


Takeaways

To avoid running A/B tests that go nowhere, make sure you:

  • Test pages with enough traffic to build a sufficient sample size
  • Track conversions that matter, not just clicks or scroll depth
  • Focus on changes big enough to produce a measurable lift

When we started doing this, we immediately saw fewer wasted tests, less time spent running them, and far more useful results.

Coming Up In Part 2

Even once you’re testing on the right pages with solid ideas, that’s not enough. In Part 2, we’ll cover how to estimate whether a test will yield reliable results and how to calculate the sample size you need before you even start.

It’s the part we wish we’d known much sooner. If you feel the same, you can sign up to get a notification as soon as the next blog post is out.