3. Get as many significant winners as possible
This was the real breakthrough.
We stopped testing ideas just because we could. Instead, we started asking harder questions before running a test:
- Do we believe this could meaningfully improve the experience?
- Is the potential lift worth the effort?
- Is there a strategic reason to run this?
The result? Fewer tests, but more winners. When we became more selective, the success rate went up. Not because our luck changed, but because our focus did.
“You won’t win every test. But you’ll win more often if you only run the ones that are worth it.”
4. Maximize total conversion rate lift
This is where we’re heading now.
Instead of aiming for more winners, we’re aiming for more impact. That means fewer small tweaks and more bold changes: grouping multiple updates into one variant, and focusing less on isolating effects and more on growing outcomes. Tests like these are often called leap tests.
Sometimes that means learning less from a test. But it often means gaining more in real business value.
If you are a true A/B-testing purist, this goes against a lot of advice. But if you adhere strictly to the theory and isolate every change, yet don’t have the traffic to run isolated experiments in two weeks, grouping changes together can be a great way to gain quick wins.
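The traffic trade-off above can be made concrete with a standard sample-size formula for a two-proportion test. This is a rough sketch, not our actual tooling; the baseline rate, lift sizes, and the usual z-values for 95% confidence and 80% power are illustrative assumptions:

```python
from math import sqrt, ceil

def sample_size_per_variant(p1, p2):
    """Approximate visitors needed per variant to detect a lift from
    baseline conversion rate p1 to p2 (normal approximation,
    two-sided alpha = 0.05, power = 0.80)."""
    z_alpha = 1.96   # two-sided 95% confidence
    z_beta = 0.8416  # 80% power
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p2 - p1) ** 2)

# One small, isolated tweak: +5% relative lift on a 5% baseline
small = sample_size_per_variant(0.05, 0.0525)
# A bolder grouped "leap" variant: +20% relative lift
leap = sample_size_per_variant(0.05, 0.06)
```

With these example numbers, the small tweak needs on the order of fifteen times as many visitors per variant as the leap test, which is exactly why grouping changes can turn a months-long experiment into one that fits a two-week window.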
A mindset shift is what turns testing into optimization
Here’s how our goals changed, and what that meant for results: