A/B Test Significance Calculator

Free statistical significance calculator for PPC split tests. Enter your visitors and conversions for each variant — this significance calculator tells you instantly whether your results are reliable or just noise.

Enter your test data

Enter visitors and conversions for Variant A (control) and Variant B (test).

Results

Example output: with a conversion rate of 3.00% for Variant A and 3.80% for Variant B, the lift (B vs A) is +26.67% and the confidence level is 97.3%. That result is statistically significant, so Variant B wins: you can be 97.3% confident the difference is real, not due to chance. The calculator reports significance against 90%, 95%, and 99% confidence thresholds.

How This Significance Calculator Works

This A/B test calculator uses a z-test for two proportions — the standard statistical method for comparing conversion rates between two groups. Here is the process:

1. Calculate the conversion rate for each variant: rate = conversions / visitors
2. Compute the pooled conversion rate across both variants
3. Calculate the standard error and the z-score
4. Convert the z-score to a confidence level using the normal distribution

A result is considered statistically significant at 95% confidence when there is less than a 5% probability the observed difference occurred by chance. This is the industry standard for PPC A/B testing and split testing.
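The four steps above can be sketched in Python. This is a minimal illustration, assuming a two-sided test and a pooled standard error; the function and variable names are ours, not taken from any calculator code:

```python
import math

def normal_cdf(x):
    """Standard normal CDF, computed via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def ab_test_confidence(visitors_a, conv_a, visitors_b, conv_b):
    """Two-proportion z-test: returns (lift, z-score, confidence)."""
    rate_a = conv_a / visitors_a                       # step 1
    rate_b = conv_b / visitors_b
    pooled = (conv_a + conv_b) / (visitors_a + visitors_b)  # step 2
    se = math.sqrt(pooled * (1 - pooled)
                   * (1 / visitors_a + 1 / visitors_b))     # step 3
    z = (rate_b - rate_a) / se
    confidence = 2.0 * normal_cdf(abs(z)) - 1.0             # step 4 (two-sided)
    lift = (rate_b - rate_a) / rate_a
    return lift, z, confidence

lift, z, conf = ab_test_confidence(1000, 30, 1000, 50)
print(f"lift {lift:+.1%}, z {z:.2f}, confidence {conf:.1%}")
```

With 30/1000 vs 50/1000 conversions this reports roughly 97.8% confidence, i.e. significant at the 95% threshold but not at 99%.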

Tips for Reliable A/B Tests

  • Run tests long enough. Most PPC tests need at least 7-14 days to account for day-of-week variation, even if you hit sample size requirements earlier.
  • Test one variable at a time. If you change the headline and the landing page simultaneously, you cannot attribute results to either change.
  • Aim for 25+ conversions per variant. Below this threshold, conversion rate estimates are unstable and your A/B test sample size is too small for reliable significance.
  • Do not peek and stop early. Checking results repeatedly and stopping when significance is first reached inflates your false positive rate. Set your sample size target upfront.
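The last tip can be demonstrated with a quick A/A simulation. This is a sketch, not part of the calculator: both variants share the same true 3% conversion rate, so every "significant" result is a false positive, and the trial count and checkpoint schedule are arbitrary choices of ours:

```python
import math
import random

def z_score(n_a, c_a, n_b, c_b):
    """Two-proportion z-score; 0.0 if there are no conversions yet."""
    pooled = (c_a + c_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (c_b / n_b - c_a / n_a) / se if se > 0 else 0.0

random.seed(1)
TRUE_RATE, N, LOOKS, TRIALS = 0.03, 2000, 10, 300
peek_hits = single_hits = 0
for _ in range(TRIALS):
    conv_a = conv_b = 0
    peeked = False
    for i in range(1, N + 1):
        conv_a += random.random() < TRUE_RATE
        conv_b += random.random() < TRUE_RATE
        # Peek at 10 evenly spaced checkpoints, stop "early" on significance
        if i % (N // LOOKS) == 0 and abs(z_score(i, conv_a, i, conv_b)) > 1.96:
            peeked = True
    peek_hits += peeked
    # Single-look decision: evaluate only at the preset sample size
    single_hits += abs(z_score(N, conv_a, N, conv_b)) > 1.96
print(f"false positives: single look {single_hits / TRIALS:.1%}, "
      f"with peeking {peek_hits / TRIALS:.1%}")
```

The single-look false positive rate stays near the nominal 5%, while the peeking rate is several times higher, which is exactly why the sample size target must be fixed upfront.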

Frequently Asked Questions

What is statistical significance in A/B testing?

Statistical significance tells you whether the difference between two variants (A and B) is real or just due to random chance. A result is typically considered significant at a 95% confidence level, meaning there is only a 5% probability the observed difference happened by chance. This significance calculator uses a z-test for two proportions to determine the confidence level of your A/B test results.

How many visitors do I need for a statistically significant A/B test?

The sample size depends on your baseline conversion rate, the minimum detectable effect (how small a difference you want to detect), and your desired confidence level. As a rule of thumb, most PPC A/B tests need at least 1,000 visitors per variant and 25+ conversions per variant to produce reliable results. The general guideline for A/B test sample size: the smaller the difference you want to detect, the more traffic you need.
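That guideline can be made concrete with the textbook two-proportion sample-size formula. This is a sketch under assumptions of ours, not something the page specifies: 95% two-sided confidence, 80% power, and the standard critical values hard-coded as constants:

```python
import math

# Standard normal critical values (assumed defaults, not from this page)
Z_ALPHA = 1.96    # two-sided 95% confidence
Z_BETA = 0.8416   # 80% power

def required_sample_size(baseline, relative_mde):
    """Visitors per variant needed to detect a relative lift of
    `relative_mde` over `baseline` at 95% confidence / 80% power."""
    p1 = baseline
    p2 = baseline * (1 + relative_mde)
    p_bar = (p1 + p2) / 2
    n = ((Z_ALPHA * math.sqrt(2 * p_bar * (1 - p_bar))
          + Z_BETA * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
         / (p2 - p1) ** 2)
    return math.ceil(n)

# A 3% baseline with a 20% minimum detectable lift needs roughly
# 14,000 visitors per variant; a 50% lift needs far fewer.
print(required_sample_size(0.03, 0.20))
print(required_sample_size(0.03, 0.50))
```

Halving the minimum detectable effect roughly quadruples the required traffic, which is why small-lift tests on low-traffic accounts rarely reach significance.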

What confidence level should I use for PPC split tests?

For most PPC split tests, 95% confidence is the standard threshold. This means you accept a 5% chance of a false positive. For high-stakes decisions (large budget changes, landing page overhauls), consider using 99%. For quick iterative tests with lower risk, 90% can be acceptable. This split test calculator shows results at all three levels so you can decide based on your context.
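For reference, the three thresholds correspond to the standard two-sided critical z-scores (these are properties of the normal distribution, not specific to this tool):

```python
import math

def normal_cdf(x):
    """Standard normal CDF, computed via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Two-sided confidence level reached at each standard critical z-score
for z in (1.645, 1.960, 2.576):
    print(f"|z| > {z:.3f}  ->  {2 * normal_cdf(z) - 1:.0%} confidence")
```

So a test clears the 95% bar when its absolute z-score exceeds 1.96, and the 99% bar at roughly 2.58.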

Can I use this calculator for Google Ads ad copy tests?

Yes. Enter the number of impressions (or clicks to the landing page) as visitors, and the number of conversions (clicks, leads, or purchases) for each ad variant. This A/B test significance calculator works for any two-variant comparison: ad headlines, landing pages, bidding strategies, or audience segments.

Run automated A/B tests across all your client accounts with AdsCockpit.

Ready to manage Google Ads
without the chaos?

We're onboarding agencies one by one. Apply for early access and we'll reach out personally.

Get access
early access · limited spots · no commitment