A/B testing is the most rigorous way to know what actually works on your website — as opposed to what you think works. But badly run A/B tests produce false confidence and wrong decisions. The good news is the most common mistakes are completely avoidable once you know what they are.
Planning a Test That Will Actually Teach You Something
The most important step happens before you run any test. You need a clear hypothesis: "I believe that changing X to Y will increase Z because of W." The "because" is what separates a test that teaches you something from one that just produces a number.
Good hypothesis: "I believe showing the discount amount in the headline ($15 off vs 15% off) will increase popup email capture rate because visitors respond more concretely to dollar amounts than percentages when the typical order value is over $80."
Bad hypothesis: "Let's try a different headline and see what happens."
The good hypothesis gives you something to learn even if the variant loses. Maybe dollar amounts aren't better for your audience — that's still valuable knowledge that refines your model of what works.
Sample Size and How Long to Run Tests
Ending a test too early is the most common A/B testing mistake. You see variant B leading after 3 days and call it a winner. But early results are noisy: random variation looks like a real difference until you have enough data to average it out.
Minimum guideline: run until each variant has received at least 100 conversions. For low-traffic or low-conversion-rate pages, this might take weeks or months. That's fine. A test run to completion with clean results is worth far more than a test stopped early on a conclusion that turns out to be wrong.
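To make the noise concrete, here's a minimal sketch in Python that simulates an A/A test, two "variants" with the exact same true conversion rate, and counts how often a 3-day peek shows a 20%+ "lift" anyway. The traffic and conversion numbers are invented for illustration; swap in figures closer to your own.

```python
import random

# A/A simulation: both variants share the same true 2% conversion rate,
# so any observed difference is pure noise. All numbers are illustrative.
TRUE_RATE = 0.02
VISITORS_PER_DAY = 200      # per variant
EARLY_PEEK_DAYS = 3
SIMULATIONS = 10_000

def conversions(visitors: int, rate: float) -> int:
    """Count simulated conversions for a batch of visitors."""
    return sum(1 for _ in range(visitors) if random.random() < rate)

misleading_peeks = 0
for _ in range(SIMULATIONS):
    n = VISITORS_PER_DAY * EARLY_PEEK_DAYS
    rate_a = conversions(n, TRUE_RATE) / n
    rate_b = conversions(n, TRUE_RATE) / n
    # Call the peek "misleading" if one identical variant looks >=20% better.
    if rate_a > 0 and abs(rate_b - rate_a) / rate_a >= 0.20:
        misleading_peeks += 1

print(f"Early peeks showing a >=20% 'lift' between identical variants: "
      f"{misleading_peeks / SIMULATIONS:.0%}")
```

With numbers in this range, a meaningful share of those 3-day peeks show a double-digit "lift" between variants that are, by construction, identical. That's the noise a proper sample size is there to average out.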
Statistical calculators like the one on AB Testguide will tell you the required sample size for your baseline conversion rate and minimum detectable effect size. Use one before you start your test, not after, so you know up front how long the test will need to run.
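If you'd rather compute the number yourself, the standard two-proportion sample-size formula is easy to script. The sketch below is one rough way to do it; it assumes a 4% baseline conversion rate and a 20% relative lift as the minimum detectable effect, both made-up inputs you'd replace with your own.

```python
from scipy.stats import norm

def sample_size_per_variant(baseline: float, relative_lift: float,
                            alpha: float = 0.05, power: float = 0.80) -> int:
    """Visitors needed per variant for a two-sided two-proportion test."""
    p1 = baseline
    p2 = baseline * (1 + relative_lift)      # expected variant rate
    z_alpha = norm.ppf(1 - alpha / 2)        # critical value for significance
    z_beta = norm.ppf(power)                 # critical value for statistical power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return round(((z_alpha + z_beta) ** 2 * variance) / (p1 - p2) ** 2)

# Example: 4% baseline popup conversion, hoping to detect a 20% relative lift.
n = sample_size_per_variant(baseline=0.04, relative_lift=0.20)
print(f"Roughly {n:,} visitors per variant")  # on the order of 10,000 here
```

At 4% baseline that works out to roughly 10,000 visitors per variant, which is why low-traffic pages genuinely can take weeks to test properly.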
Interpreting Results Without Fooling Yourself
When your test reaches significance, resist the temptation to immediately declare a permanent winner and move on. Ask a few questions first:
- Is the result consistent across segments? Did the variant win on desktop but lose on mobile? A variant that wins overall but loses on an important segment might actually be the wrong choice (see the sketch after this list).
- Is there a rational explanation for why the winner won? If you can't explain why the winning variant converted better, be cautious — it might be a statistical fluke that won't hold up.
- What does this result imply for future tests? Every test should feed your next hypothesis. If dollar amounts beat percentages in the headline, maybe they'll also beat percentages in the body copy.
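As a rough illustration of the segment check, here's a sketch using the two-proportion z-test from statsmodels, run once overall and once per device. The conversion counts are invented; the point is the pattern of checking whether the overall winner holds up in each segment you care about.

```python
from statsmodels.stats.proportion import proportions_ztest

# Invented results: (conversions, visitors) for control vs. variant,
# overall and broken out by device.
segments = {
    "overall": ((310, 9800), (365, 9750)),
    "desktop": ((205, 5200), (255, 5100)),
    "mobile":  ((105, 4600), (110, 4650)),
}

for name, (control, variant) in segments.items():
    counts = [control[0], variant[0]]   # conversions in each arm
    nobs = [control[1], variant[1]]     # visitors in each arm
    z_stat, p_value = proportions_ztest(counts, nobs)
    lift = (variant[0] / variant[1]) / (control[0] / control[1]) - 1
    print(f"{name:>8}: lift {lift:+.1%}, p = {p_value:.3f}")
```

A variant that looks great overall but is flat or negative on mobile is a prompt to dig deeper, not a reason to ship immediately.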
Ready to put this into practice?
Pops Builder gives you all the tools covered in this article — popups, social proof, A/B testing, and more. Free plan available.