Sample Size Calculator: How Many Responses Do You Need?
Choosing the right sample size is one of the most critical decisions in any research study, survey, or experiment. Too small a sample and your results lack statistical power, meaning you might miss real effects. Too large and you waste time, money, and resources collecting more data than necessary.
This guide explains the sample size formula, walks through multiple worked examples covering survey research and A/B testing, and covers important adjustments like the finite population correction. By the end, you will know exactly how to calculate the sample size you need for your next project.
Why Sample Size Matters
Every statistical result comes with uncertainty. When you survey 50 people instead of 5,000, your estimates are less precise. Sample size directly controls three things:
- Margin of error. Larger samples produce narrower confidence intervals and smaller margins of error.
- Statistical power. Larger samples make it easier to detect real differences or effects. A study with insufficient power can fail to find a genuine effect simply because the sample was too small.
- Generalisability. A well-sized sample better represents the population, making your conclusions more credible.
Under-powered studies are a widespread problem in published research. Calculating your required sample size before collecting data is not just good practice. In clinical trials and regulated research, it is a requirement.
The Core Sample Size Formula
For estimating a population proportion (the most common use case in surveys), the formula is:

n = Z² × p(1 − p) / E²

Where:
- n = required sample size
- Z = Z-score corresponding to your desired confidence level
- p = estimated proportion (use 0.5 if unknown, as this maximises the required sample size)
- E = desired margin of error (as a decimal)
Common Z-Scores by Confidence Level
| Confidence Level | Z-Score |
|---|---|
| 90% | 1.645 |
| 95% | 1.960 |
| 99% | 2.576 |
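The formula and the Z-score table above translate directly into code. A minimal sketch in plain Python (the function name and the hard-coded Z-score table are illustrative choices, not a standard API):

```python
import math

# Two-tailed Z-scores for the common confidence levels from the table above.
Z_SCORES = {0.90: 1.645, 0.95: 1.960, 0.99: 2.576}

def sample_size_proportion(confidence: float, margin_of_error: float,
                           p: float = 0.5) -> int:
    """Required sample size for a proportion: n = Z^2 * p(1-p) / E^2, rounded up."""
    z = Z_SCORES[confidence]
    n = (z ** 2) * p * (1 - p) / margin_of_error ** 2
    return math.ceil(n)

# The classic 95% confidence / 5% margin / unknown-proportion case:
print(sample_size_proportion(0.95, 0.05))  # 385
```

The `p=0.5` default encodes the conservative assumption for an unknown proportion, so calling the function with only a confidence level and margin of error always gives the safe (largest) estimate.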
Worked Example 1: Customer Satisfaction Survey
A company wants to survey its customers about satisfaction. They want 95% confidence with a margin of error of 5%. They have no prior data on the proportion of satisfied customers.
Given: Z = 1.96, p = 0.5 (unknown, use conservative estimate), E = 0.05.

n = (1.96)² × 0.5 × 0.5 / (0.05)² = 0.9604 / 0.0025 = 384.16

Always round up: n = 385 respondents. This is the classic result that many researchers memorise. For 95% confidence and a 5% margin of error with an unknown proportion, you need about 385 responses.
Try it yourself
Use our Sample Size Calculator to compute the exact sample size for your survey parameters instantly.
Worked Example 2: Tighter Margin of Error
A political polling firm needs 99% confidence with a 2% margin of error. From previous polls, they estimate the proportion of voters supporting a candidate is around 0.45.
Given: Z = 2.576, p = 0.45, E = 0.02.

n = (2.576)² × 0.45 × 0.55 / (0.02)² = 1.6424 / 0.0004 = 4,105.9

Rounding up: n = 4,106 respondents. Notice how demanding a smaller margin of error and higher confidence dramatically increases the required sample size. Going from 5% to 2% margin of error increased the sample from 385 to over 4,000.
The Finite Population Correction
The standard formula assumes sampling from an infinitely large (or very large) population. If your population is relatively small, you can reduce the required sample size using the finite population correction (FPC):

n_adj = n / (1 + (n − 1) / N)

Where n is the sample size from the standard formula and N is the total population size.
Worked Example 3: Small Company Survey
A company with 500 employees wants to survey staff satisfaction with 95% confidence and a 5% margin of error. Using the standard formula:

n = (1.96)² × 0.5 × 0.5 / (0.05)² = 384.16 ≈ 385

Now apply the finite population correction:

n_adj = 385 / (1 + (385 − 1) / 500) = 385 / 1.768 = 217.8
Rounding up: n = 218 employees. The finite population correction reduced the required sample from 385 to 218. This makes intuitive sense: you only have 500 employees, so surveying 218 of them (44%) already gives excellent coverage.
As a rule of thumb, apply the finite population correction when your sample would exceed 5% of the total population.
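The correction is a one-line adjustment on top of the standard result. A minimal sketch (the function name is illustrative):

```python
import math

def finite_population_correction(n: float, population: int) -> int:
    """Adjust a standard-formula sample size n for a finite population N:
    n_adj = n / (1 + (n - 1) / N), rounded up."""
    return math.ceil(n / (1 + (n - 1) / population))

# 500-employee company, starting from the standard-formula result of 385:
print(finite_population_correction(385, 500))  # 218
```

For a very large population the divisor approaches 1 and the correction has essentially no effect, which is why it only matters when the sample is a sizeable fraction (over roughly 5%) of the population.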
Sample Size for A/B Testing
In A/B testing (comparing two versions of a webpage, email, or product), the sample size calculation uses a different formula based on effect size, statistical power, and significance level.
For comparing two proportions, the required sample size per group is approximately:

n = (Z_α/2 + Z_β)² × [p₁(1 − p₁) + p₂(1 − p₂)] / (p₁ − p₂)²

Where:
- Z_α/2 = Z-score for the significance level (1.96 for a two-tailed test at 5%)
- Z_β = Z-score for the desired power (0.842 for 80% power)
- p₁ = baseline conversion rate (control group)
- p₂ = expected conversion rate (treatment group)
Worked Example 4: Website Conversion Test
An e-commerce site has a current checkout conversion rate of 3% (p₁ = 0.03). They want to detect a 1 percentage point lift to 4% (p₂ = 0.04) with 80% power and 95% confidence.

n = (1.96 + 0.842)² × [0.03 × 0.97 + 0.04 × 0.96] / (0.04 − 0.03)² = 7.851 × 0.0675 / 0.0001 ≈ 5,300

You need approximately 5,300 visitors per group, or about 10,600 total. This shows why A/B tests for small effect sizes require substantial traffic volumes. If the site only gets 1,000 visitors per day, this test would take about 11 days to run.
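The per-group calculation can be sketched in a few lines of plain Python. The function name is illustrative, and the default Z-scores encode a two-tailed 5% significance level and 80% power (the exact result shifts by a couple of visitors depending on how Z_β is rounded):

```python
import math

def ab_test_sample_size(p1: float, p2: float,
                        z_alpha: float = 1.96, z_beta: float = 0.842) -> int:
    """Approximate sample size per group for comparing two proportions:
    n = (z_alpha + z_beta)^2 * [p1(1-p1) + p2(1-p2)] / (p1 - p2)^2."""
    variance_sum = p1 * (1 - p1) + p2 * (1 - p2)
    n = (z_alpha + z_beta) ** 2 * variance_sum / (p1 - p2) ** 2
    return math.ceil(n)

# Detecting a lift from 3% to 4% conversion: roughly 5,300 per group.
per_group = ab_test_sample_size(0.03, 0.04)
print(per_group, per_group * 2)  # per group, total across both groups
```

Note the formula is symmetric in p₁ and p₂: testing for a drop from 4% to 3% requires the same sample as testing for a lift from 3% to 4%.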
Try it yourself
Plug in your own parameters with our Sample Size Calculator and verify your results with the Confidence Interval Calculator.
Key Factors That Affect Sample Size
Margin of Error (E)
Halving the margin of error quadruples the required sample size. This is because E appears squared in the denominator of the formula. Going from 5% to 2.5% margin of error multiplies the sample by 4.
Confidence Level
Higher confidence requires a larger Z-score, which increases the sample. Moving from 95% to 99% confidence increases the sample by roughly 73% (because (2.576 / 1.96)² ≈ 1.73).
Population Proportion (p)
The product p(1 − p) is maximised when p = 0.5, giving a value of 0.25. If you have prior evidence that the proportion is far from 0.5 (say 0.1 or 0.9), you can use a smaller sample. For example, if p = 0.1, then p(1 − p) = 0.09, which is 36% of the maximum. This is why the conservative approach of using p = 0.5 always gives the largest (safest) sample size estimate.
Effect Size
In experimental studies (A/B tests, clinical trials), the minimum detectable effect size is a crucial input. Smaller effects need larger samples to detect. Doubling the effect size you want to detect reduces the required sample by a factor of 4.
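The scaling relationships above can be checked numerically. A minimal sketch in plain Python, using the unrounded proportion formula (the helper name is illustrative):

```python
def n_raw(z: float, p: float, e: float) -> float:
    """Unrounded proportion sample size: z^2 * p(1-p) / e^2."""
    return z ** 2 * p * (1 - p) / e ** 2

base = n_raw(1.96, 0.5, 0.05)                 # the classic 384.16
print(n_raw(1.96, 0.5, 0.025) / base)         # halving E -> exactly 4x
print(n_raw(2.576, 0.5, 0.05) / base)         # 95% -> 99% confidence -> ~1.73x
print(n_raw(1.96, 0.1, 0.05) / base)          # p = 0.1 -> 0.36x (36% of maximum)
```

Because E enters squared, it dominates the calculation: tightening the margin of error is far more expensive than raising the confidence level.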
Common Mistakes in Sample Size Calculations
- Using the wrong formula. The proportion formula and the mean formula are different. For estimating a population mean, you need: n = (Z × σ / E)², where σ is the population standard deviation.
- Forgetting to account for non-response. If you expect a 60% response rate, divide your required sample by 0.6 to determine how many people to contact. Needing 385 completed surveys with a 60% response rate means sending to at least 642 people.
- Ignoring the finite population correction. For small populations, the standard formula overestimates the required sample, wasting resources.
- Rounding down instead of up. Always round up to the next whole number. Rounding down means your actual margin of error will be slightly larger than desired.
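Two of the adjustments above (the mean formula and the non-response inflation) can be sketched directly. The function names, and the σ = 15, E = 2 figures in the usage line, are illustrative assumptions:

```python
import math

def sample_size_mean(z: float, sigma: float, margin_of_error: float) -> int:
    """Sample size for estimating a mean: n = (z * sigma / E)^2, rounded up."""
    return math.ceil((z * sigma / margin_of_error) ** 2)

def contacts_needed(completed: int, response_rate: float) -> int:
    """Inflate a required completed-sample count for expected non-response."""
    return math.ceil(completed / response_rate)

# Hypothetical mean estimate: 95% confidence, sigma = 15, margin of error = 2.
print(sample_size_mean(1.96, 15, 2))
# 385 completed surveys at a 60% response rate:
print(contacts_needed(385, 0.6))  # 642
```

Both helpers round up with `math.ceil`, matching the rule that rounding down would leave the actual margin of error slightly larger than planned.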
Frequently Asked Questions
What sample size do I need for a 95% confidence level?
It depends on your margin of error and the expected proportion. The most commonly cited figure is 385, which assumes 95% confidence, a 5% margin of error, and an unknown proportion (p = 0.5). If you want a tighter margin of error or are estimating a mean rather than a proportion, the required sample will differ.
Can I use a smaller sample if I know the proportion is not 50%?
Yes. If you have reliable prior data suggesting the proportion is, say, 10%, you can use p = 0.1 in the formula, which reduces the required sample size. However, if you are wrong about the proportion, your actual margin of error will be larger than planned. Using p = 0.5 is the safe default.
How does sample size relate to statistical power?
Statistical power is the probability that your study will detect a real effect if one exists. The standard target is 80% power (some studies aim for 90%). Power increases with sample size. Doubling the sample size does not double the power, but it does increase it substantially when the starting sample is under-powered.
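The power–sample-size relationship can be illustrated with a normal-approximation sketch for the two-proportion test from the A/B example (the function names are illustrative, and this is an approximation rather than an exact power calculation):

```python
import math

def normal_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def power_two_proportions(p1: float, p2: float, n_per_group: int,
                          z_alpha: float = 1.96) -> float:
    """Approximate power of a two-sided two-proportion z-test."""
    se = math.sqrt((p1 * (1 - p1) + p2 * (1 - p2)) / n_per_group)
    return normal_cdf(abs(p1 - p2) / se - z_alpha)

# 3% vs 4% conversion: half the required sample leaves the test badly under-powered,
# while the full ~5,300 per group lands near the 80% target.
print(power_two_proportions(0.03, 0.04, 2650))  # well below 80%
print(power_two_proportions(0.03, 0.04, 5300))  # close to 0.80
```

Running this shows the asymmetry described above: cutting the sample in half costs far more power than doubling it gains once you are near the target.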
What is the minimum acceptable sample size?
There is no universal minimum, but general guidelines suggest at least 30 observations for the Central Limit Theorem to apply (allowing normal approximation). For surveys, 100 is often considered a practical minimum. For A/B tests, you typically need thousands per variation. The right answer always depends on your specific margin of error, confidence level, and effect size requirements.
Do I need to adjust sample size for multiple comparisons?
Yes. If you are testing multiple hypotheses or comparing multiple groups, the chance of a false positive increases. You may need to use a Bonferroni correction (divide your significance level by the number of comparisons) or increase your sample to maintain adequate power after correction. This is particularly important in A/B/C tests or factorial experiments.
Related Articles
Confidence Intervals Explained: What They Actually Mean
Understand confidence intervals correctly. Learn the z and t interval formulas, how to choose confidence levels, the effect of sample size, and common misinterpretations.
How to Calculate Standard Deviation: Step-by-Step Guide
Understand standard deviation from the ground up. Learn the difference between population and sample standard deviation with a full worked example using a real dataset.
How to Solve Quadratic Equations: 3 Methods with Examples
Learn how to solve quadratic equations using the quadratic formula, factoring, and completing the square. Step-by-step worked examples with clear explanations.