
What Is A/B Testing? Run Smarter Experiments with Confidence

What is A/B testing?


A/B testing is a method of comparing two versions of a webpage, element, or message to see which performs better against a specific goal, such as clicks, signups, or purchases. Visitors are split between Version A (the control) and Version B (the variant), and the winner is determined using performance data and statistical confidence.

TL;DR

  • A/B testing helps you improve conversion rates using real user behavior instead of guesswork.

  • Start with one hypothesis and test one variable at a time.

  • Choose one measurable goal per test (for example clicks, form completions, or purchases).

  • Run tests long enough to reach a reliable result.

  • Use what you learn to improve the next experiment, not just pick a winner.


Why guessing is not a strategy

In digital marketing, intuition can spark ideas. It should not be the final decision-maker.

If you have ever debated which headline to use, wondered if a different CTA would convert better, or tried to justify a design change with "it just feels right," A/B testing gives you a better way to decide.

Instead of arguing opinions, you test alternatives with real users and measure outcomes. That means you can make changes with confidence because the data supports the decision.

This guide is for:

  • Marketers who want more value from existing traffic

  • Content teams who want to improve performance without constant developer handoffs

  • Growth teams building a repeatable experimentation process

You will learn what A/B testing is, how it works, how to run a good test, what mistakes to avoid, and how to run A/B tests more efficiently with Umbraco Engage inside your CMS.


Why A/B testing matters

At its core, A/B testing helps you compare two versions of a page or element to learn which one performs better.

You might test:

  • A headline

  • A CTA button label

  • A hero image

  • A form layout

  • The placement of a signup form

The core benefit is simple: A/B testing replaces assumptions with evidence.

What A/B testing helps you do

  • Increase conversions without increasing traffic

  • Learn what actually influences user behavior

  • Reduce internal debate by using shared data

  • Build a repeatable optimization process across marketing and content teams

When teams treat their website as an ongoing series of experiments, they learn faster and improve results more consistently.

A/B testing terms (quick definitions)

  • Control (Version A): The current version of the page or element.

  • Variant (Version B): The new version you are testing against the control.

  • Conversion: The action you want users to complete (for example a click, signup, or purchase).

  • Hypothesis: A clear statement describing what you are changing, what you expect to happen, and why.

  • Sample size: The number of visitors needed to judge the result reliably.

  • Statistical significance / confidence: A measure of how likely it is that your result is real and not caused by chance.

A/B testing example (simple and practical)

Imagine you run an online store and want to improve clicks to a new product category.

  • Version A: A static hero banner with the CTA "Shop Now"

  • Version B: A short video hero with the CTA "Watch the Experience"

Both versions might look good. Your team may prefer one based on taste. But A/B testing lets your audience decide.

After running the test long enough, you compare the results against your chosen goal (for example product views, add-to-cart clicks, or checkouts). If one version consistently performs better with sufficient confidence, you have a data-backed winner.

That is the practical value of A/B testing: smarter decisions with less guessing.


How A/B testing works (step by step)

1. Set one clear goal

Start by deciding what success looks like.

Examples:

  • Click a CTA button

  • Complete a form

  • Sign up for a newsletter

  • Reach a thank-you or confirmation page

  • Start a trial

Use one primary goal per test. If you track too many outcomes at once, it becomes harder to interpret the result.

2. Write a focused hypothesis

A good hypothesis keeps the test grounded in a reason, not a random change.

Example hypothesis

If we move the CTA higher on the page, more users will see it earlier and click it, which will increase signup conversions.

This works because it is:

  • Specific

  • Measurable

  • Tied to user behavior

3. Choose one variable to test

Change one thing at a time so you can attribute the result correctly.

Good variables for A/B testing:

  • Headline copy

  • CTA text

  • Hero image

  • Form length

  • Element placement

  • Page layout (when only one structural change is being tested)

If you change multiple things at once, you may get a lift, but you will not know which change caused it.

4. Define control vs. variant

  • Control (A): Your current version

  • Variant (B): The new version you want to test

Keep everything else the same. This is what makes the result trustworthy.

5. Split traffic randomly

Visitors should be randomly split so each version is shown to comparable audiences.

Most A/B testing platforms handle this automatically. The key is consistency and randomization, not manual assignment.
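
If you are curious what "consistent and randomized" looks like in practice, a common pattern is to hash a stable visitor identifier into a bucket so that the same visitor always sees the same version. Here is a minimal Python sketch; the 50/50 split and the visitor_id value are illustrative assumptions, not any specific platform's API:

```python
import hashlib

def assign_variant(visitor_id: str, experiment: str, split: float = 0.5) -> str:
    """Deterministically assign a visitor to version 'A' or 'B'.

    Hashing the visitor id together with the experiment name keeps the
    assignment stable across visits and independent between experiments.
    """
    digest = hashlib.sha256(f"{experiment}:{visitor_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0x100000000  # map the hash into [0, 1)
    return "A" if bucket < split else "B"

# The same visitor always lands in the same variant for this experiment
print(assign_variant("visitor-123", "hero-cta-test"))
```

Again, most testing tools do this for you; the sketch only shows why assignment can be both random across visitors and consistent for each individual.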

6. Run the test long enough

Do not stop a test early just because one version looks like it is winning after a day.

Run the experiment long enough to:

  • Reach an adequate sample size

  • Cover normal traffic cycles (including weekdays and weekends)

  • Avoid false positives caused by short-term spikes

7. Review the result and learn from it

Once the test has enough data, evaluate:

  • Which version performed better?

  • Was the difference statistically reliable?

  • What did you learn about user behavior?

  • What should you test next?

Even an inconclusive result is useful if it helps you rule out weak ideas and focus on better ones.

Statistical basics (without the jargon overload)

You do not need a PhD in statistics to run useful A/B tests, but you do need a few fundamentals.

Sample size matters

If only a small number of users entered the experiment, results can be misleading. The smaller your sample, the more likely it is that random variation explains the difference.
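
To get a feel for the numbers, here is a rough sketch of the standard two-proportion sample-size estimate, assuming a 95% confidence level and 80% power (both common defaults, not requirements). Dedicated calculators and testing tools refine this, but the shape is the same:

```python
import math

def sample_size_per_variant(baseline_rate: float, relative_lift: float,
                            z_alpha: float = 1.96,  # 95% confidence, two-sided
                            z_power: float = 0.84   # 80% power
                            ) -> int:
    """Approximate visitors needed per variant to detect a relative lift."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + relative_lift)
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_power * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p2 - p1) ** 2)

# Example: detecting a 20% relative lift on a 5% baseline conversion rate
print(sample_size_per_variant(0.05, 0.20))  # roughly 8,100 visitors per variant
```

Dividing that figure by the daily visitors each variant receives gives a rough minimum test duration, which is why low-traffic pages need longer (or bolder) tests.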

Confidence matters

Many teams use a 95% confidence threshold as a standard. In some low-traffic situations, teams may use 90%, but only when the decision risk is acceptable.
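
If you want to sanity-check a result yourself, a minimal two-proportion z-test covers the basic case. The sketch below uses only Python's standard library, and the visitor and conversion counts are made-up example numbers:

```python
from math import sqrt
from statistics import NormalDist

def ab_p_value(conversions_a: int, visitors_a: int,
               conversions_b: int, visitors_b: int) -> float:
    """Two-sided p-value for the difference between two conversion rates."""
    rate_a = conversions_a / visitors_a
    rate_b = conversions_b / visitors_b
    pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    std_err = sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (rate_b - rate_a) / std_err
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Example: 500 of 10,000 visitors converted on A, 570 of 10,000 on B
print(f"p = {ab_p_value(500, 10_000, 570, 10_000):.3f}")  # p ≈ 0.028
```

A p-value below 0.05 corresponds to the 95% confidence threshold above, and below 0.10 to 90%. Most testing platforms report this for you, but knowing roughly what it means makes the reports easier to trust.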

Test duration matters

Let your test run through a full traffic cycle and avoid ending it purely because the graph looks exciting.

In other words: patience improves decisions.

Common A/B testing mistakes (and how to avoid them)

Even experienced teams make these mistakes. Avoiding them will improve both your results and your confidence in testing.

1. Testing without a hypothesis

Changing something "just to see what happens" is not a strategy.

Fix: Define what you are changing, why you believe it will help, and what metric will prove it.

2. Stopping the test too early

Early results often look dramatic. They are also often wrong.

Fix: Decide your sample size and expected duration before launch. Let the test run.

3. Testing too many things at once

If you change the headline, image, CTA, and layout at the same time, you lose clarity.

Fix: Test one variable at a time until you have enough traffic for more advanced experimentation.

4. Optimizing for the wrong metric

A higher click-through rate is not always better if downstream conversions drop.

Fix: Tie each test to a business outcome and track secondary metrics to catch side effects.

5. Running tests with no follow-up process

If results are not documented, teams repeat the same ideas and lose momentum.

Fix: Log each test with the fields below (a minimal sketch follows the list):

  • Hypothesis

  • Variable tested

  • Goal

  • Result (win, loss, inconclusive)

  • Next action
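
If you keep the log in a spreadsheet or a small script, a structured entry could look like the sketch below; the field names and values are purely illustrative, not a prescribed schema:

```python
from dataclasses import dataclass, asdict

@dataclass
class ExperimentLogEntry:
    name: str
    hypothesis: str
    variable: str
    goal: str
    result: str       # "win", "loss", or "inconclusive"
    next_action: str

entry = ExperimentLogEntry(
    name="Homepage hero CTA",
    hypothesis="Moving the CTA above the fold will increase signup clicks",
    variable="CTA placement",
    goal="Signup button clicks",
    result="win",
    next_action="Roll out variant B, then test the CTA copy",
)
print(asdict(entry))  # ready to append to a shared sheet or JSON file
```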

6. Using A/B testing to avoid bigger UX decisions

Testing button labels will not fix a broken user journey.

Fix: Use A/B testing to validate good ideas, not to replace strategic thinking.

Advanced A/B testing tips (when you are ready)

Once you have a few solid tests behind you, you can improve your experimentation program with more advanced practices.

Track micro-conversions

Not every test needs to optimize the final sale immediately.

Useful micro-conversions include:

  • Scroll depth

  • Clicks on internal links

  • Form starts

  • CTA clicks

  • Product detail views

These signals can help you learn faster, especially on pages earlier in the journey.

Segment your results

One variant may work better for:

  • Mobile vs. desktop users

  • New vs. returning visitors

  • Different traffic sources


Segmenting results helps you avoid broad conclusions that hide important differences.

Use behavior data to prioritize test ideas

Heatmaps, analytics, and session recordings can show where users get stuck, hesitate, or ignore important elements.

This helps you test smarter ideas first.

Use AI for idea generation, not for replacing data

AI can help you generate hypotheses, draft variations, and summarize possible interpretations. It should not replace actual experiment results.

If you use CROBot or ChatGPT to brainstorm tests, treat the output as input to your testing process, then validate everything with real user behavior.

A/B testing in a CMS: what to look for

If your goal is to run more experiments, the "best" A/B testing tool is often the one your team can actually use consistently.

Many external tools are powerful, but they can also add friction:

  • Separate interfaces and workflows

  • Longer setup times

  • Developer dependencies for simple content tests

  • Fragmented reporting between CMS and analytics tools

What matters in an A/B testing platform

Look for:

  • Ease of use for marketers and editors

  • Reliable traffic splitting and result reporting

  • Goal tracking that matches your business metrics

  • Integration with your CMS and analytics stack

  • Clear reporting that teams can act on

  • Flexibility to start simple and expand later

For many teams, the real win is not maximum complexity. It is faster learning with fewer handoffs.

Why use Umbraco Engage for A/B testing?

If your team manages content in Umbraco, a CMS-native testing workflow can speed up experimentation significantly.

Why teams choose a CMS-native approach

With Umbraco Engage, you can run A/B tests inside the same environment where your team manages content, which helps reduce operational friction and speeds up iteration.


Benefits of a CMS-native workflow:


  • Marketers can launch tests without waiting on long dev cycles for simple content changes

  • Content, goals, and experiment setup stay closer together

  • Teams can evaluate results in a familiar workflow

  • You can connect A/B testing with analytics and personalization efforts


That matters when your goal is not just to run one test, but to build a repeatable experimentation habit.

A/B testing + analytics + personalization (working together)

A/B testing is strongest when it is part of a broader optimization workflow.

With Umbraco Engage, teams can combine:

  • A/B testing to compare variants

  • Analytics to measure outcomes and behavior

  • Personalization to tailor experiences for different audiences


This gives marketers a more complete way to optimize content and customer journeys over time, not just isolated page elements.

See Umbraco Engage A/B testing in action

Want to see how A/B testing works inside Umbraco Engage?

Take the interactive product tour of A/B testing in Umbraco Engage on the A/B testing feature page.

What to do with A/B test results

Running the test is only half the job. The value comes from what you do next.

Interpret the outcome correctly

Most test outcomes fall into one of three groups:

  • Winner: The variant outperforms the control with sufficient confidence

  • Inconclusive: No meaningful difference was detected

  • Loser: The control performed better than the variant

All three outcomes are useful if you document them and use them to improve future decisions.

Separate signal from noise

A small lift is not automatically a win. Ask:

  • Is the effect large enough to matter?

  • Is the result statistically reliable?

  • Did any secondary metrics get worse?

This protects your team from implementing changes that look good on a chart but do not improve real outcomes.

Build a testing knowledge base

Documenting your tests creates a reusable library of what works for your audience.

You can manage this in Airtable, a spreadsheet, or any shared system as long as it stays structured and accessible.

Explore Umbraco and A/B testing

Do you want to see it for yourself? Try out our interactive product tour of A/B testing in Umbraco Engage.

Take a tour of A/B testing in Umbraco

Start testing, start learning

You do not need a huge team or a complicated stack to begin A/B testing.

Start with one page, one hypothesis, and one meaningful goal. Then build from there.

If your team uses Umbraco and wants to run experiments with less friction, Umbraco Engage gives marketers a practical way to test, learn, and improve performance from inside the CMS.

Ready to explore Umbraco Engage?


Getting started is simple. With Umbraco Engage, launching your first A/B test is just a click away.

 

Want help getting started?

Frequently Asked Questions (FAQ) about A/B testing

What is A/B testing?

A/B testing is a method of comparing two versions of a webpage, message, or element to see which performs better based on a specific goal, such as clicks, signups, or purchases.

Is AB testing the same as A/B testing?

Yes. AB testing means the same thing as A/B testing. It is simply a different way of writing the term.

What is an A/B test?

An A/B test is a controlled experiment where one group of visitors sees Version A and another group sees Version B so you can compare performance.

What does A/B testing mean?

A/B testing means testing two alternatives against each other to identify which version performs better for a defined objective.

What is A/B testing used for?

A/B testing is used to improve conversion rates, engagement, and user experience by validating changes with real user behavior instead of assumptions.

What can you A/B test?

You can test headlines, CTA buttons, images, form fields, layouts, navigation labels, and content placement, as long as you define a clear goal and test one variable at a time.

How long should an A/B test run?

It should run until you have enough traffic and conversions to make a reliable decision, ideally across a full traffic cycle that includes weekdays and weekends. We explore this question in more depth in our blog post on how long an A/B test should run.

What is A/B testing in a CMS?

A/B testing in a CMS means creating and running experiments within your content management system, which can reduce setup friction and make it easier for marketers to launch and evaluate tests.

Summary

  • A/B testing helps teams make better website decisions using evidence instead of opinions.

  • The best tests start with a clear hypothesis, one variable, and one measurable goal.

  • Reliable results depend on enough traffic, enough time, and disciplined interpretation.

  • A CMS-native workflow can make experimentation faster and easier to scale.

  • Umbraco Engage helps teams run A/B tests closer to their content, analytics, and personalization workflows.