Parameter | Description
---|---
Group 1 |
Group 2 |
Enrollment ratio | For most studies, the enrollment ratio is 1 (i.e., equal enrollment between both groups). Some studies will have different enrollment ratios (2:1, 3:1) for additional safety data.
Known population | This value is determined by examining previous literature of a similar patient population.
Study group |
Known population | The mean and standard deviation are determined by examining previous literature of a similar patient population.
Study group |
Alpha | Most medical literature uses a value of 0.05.
Power | Most medical literature uses a value of 80-90% power (β of 0.1-0.2).
Sample Size | |
---|---|
Group 1 | 690 |
Group 2 | 690 |
Total | 1380 |
Study Parameters | |
---|---|
Incidence, group 1 | 35% |
Incidence, group 2 | 28% |
Alpha | 0.05 |
Beta | 0.2 |
Power | 0.8 |
This calculator uses a number of different equations to determine the minimum number of subjects that need to be enrolled in a study in order to have sufficient statistical power to detect a treatment effect. 1
Before a study is conducted, investigators need to determine how many subjects should be included. By enrolling too few subjects, a study may not have enough statistical power to detect a difference (type II error). Enrolling too many patients can be unnecessarily costly or time-consuming.
Generally speaking, statistical power is determined by the following variables: the significance level (alpha), the magnitude of the difference between groups (the effect size), the variability of the outcome, and the number of subjects enrolled.
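As an illustration of how these inputs interact, below is a minimal sketch of the standard normal-approximation formula for comparing two proportions. This is the generic textbook formula, not necessarily the exact equation this calculator implements, and the function and variable names are my own:

```python
import math
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Minimum subjects per group to detect p1 vs p2 (two-sided test).

    Unpooled normal-approximation formula:
        n = (z_{1-alpha/2} + z_power)^2 * (p1(1-p1) + p2(1-p2)) / (p1 - p2)^2
    """
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # 1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # 0.84 for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)
```

For the parameters in the table above (35% vs 28% incidence, alpha 0.05, power 0.8) this yields roughly 688 per group, close to the 690 shown; the small difference suggests the calculator uses a slightly different (e.g., pooled-variance) variant.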
To calculate the post-hoc statistical power of an existing trial, please visit the post-hoc power analysis calculator .
One of the most common questions I get asked by people doing surveys in international development is “how big should my sample size be?”. While there are many sample size calculators and statistical guides available, those who never did statistics at university (or have forgotten it all) may find them intimidating or difficult to use.
If this sounds like you, then keep reading. This guide will explain how to choose a sample size for a basic survey without any of the complicated formulas. For more easy rules of thumb regarding sample sizes for other situations, I highly recommend Sample size: A rough guide by Ronán Conroy and The Survey Research Handbook by Pamela Alreck and Robert Settle.
This article is a short introduction to the topic; for more in-depth coverage, consider enrolling in the free online course offered by the University of Florida.
This advice is for:
This advice is NOT for:
Most statisticians agree that the minimum sample size to get any kind of meaningful result is 100. If your population is less than 100 then you really need to survey all of them.
A good maximum sample size is usually around 10% of the population, as long as this does not exceed 1000. For example, in a population of 5000, 10% would be 500. In a population of 200,000, 10% would be 20,000. This exceeds 1000, so in this case the maximum would be 1000.
Even in a population of 200,000, sampling 1000 people will normally give a fairly accurate result. Sampling more than 1000 people won’t add much to the accuracy given the extra time and money it would cost.
Suppose that you want to survey students at a school which has 6000 pupils enrolled. The minimum sample would be 100. This would give you a rough, but still useful, idea about their opinions. The maximum sample would be 600, which would give you a fairly accurate idea about their opinions.
Choose a number closer to the minimum if:
Choose a number closer to the maximum if:
In practice most people normally want the results to be as accurate as possible, so the limiting factor is usually time and money. In the example above, if you had the time and money to survey all 600 students then that will give you a fairly accurate result. If you don’t have enough time or money then just choose the largest number that you can manage, as long as it’s more than 100.
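The rules of thumb above can be summarized in a few lines of code. This is a hypothetical helper (the name is mine) that simply encodes the minimum-100 / 10%-capped-at-1000 guideline from this guide:

```python
def rough_sample_range(population):
    """Rule-of-thumb (minimum, maximum) survey sample sizes.

    Minimum is 100 (or everyone, if the population is under 100);
    maximum is 10% of the population, capped at 1000.
    """
    if population < 100:
        return population, population  # survey the whole population
    return 100, min(round(0.10 * population), 1000)
```

For the school example above, `rough_sample_range(6000)` gives a minimum of 100 and a maximum of 600.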
If you would like to learn more about Survey Data Collection consider taking the free course offered by University of Michigan and University of Maryland. Enroll here.
While the previous rules of thumb are perfectly acceptable for most basic surveys, sometimes you need to sound more “scientific” in order to be taken seriously. In that case you can use the following table. Simply choose the column that most closely matches your population size. Then choose the row that matches the level of error you’re willing to accept in the results.
You will see on this table that the smallest samples are still around 100, and the biggest sample (for a population of more than 5000) is still around 1000. The same general principles apply as before – if you plan to divide the results into lots of sub-groups, or the decisions to be made are very important, you should pick a bigger sample.
Note: This table can only be used for basic surveys to measure what proportion of the population have a particular characteristic (e.g. what proportion of farmers are using fertiliser, what proportion of women believe myths about family planning, etc). It can’t be used if you are trying to compare two groups (e.g. control versus intervention) or two points in time (e.g. baseline and endline surveys). See Sample size: A rough guide for other tables that can be used in these cases.
It’s a dirty little secret among statisticians that sample size formulas often require you to have information in advance that you don’t normally have. For example, you typically need to know (in numerical terms) how much the answers in the survey are likely to vary between individuals (if you knew that in advance then you wouldn’t be doing a survey!).
So even though it’s theoretically possible to calculate a sample size using a formula, in many cases experts still end up relying on rules of thumb plus a good deal of common sense and pragmatism. That means you shouldn’t worry too much if you can’t use fancy maths to choose your sample size – you’re in good company.
Once you’ve chosen a sample size, don’t forget to write good survey questions, design the survey form properly, and pre-test and pilot your questionnaire.
A practical question when designing a customer feedback survey or experiment is to work out the required sample size. That is, what is the smallest number of data points required in the survey or experiment? There are three basic approaches: rules of thumb based on industry standards, working backwards from budget, and working backwards from confidence intervals. Each of these approaches is useful in some circumstances.
Different industries have different rules of thumb when it comes to testing. These rules of thumb are not entirely made up; their logic relates to the confidence intervals analyses described later in this article.
Some examples of common rules of thumb are:
A second common approach is to identify the budget and work backwards, using the following formula:
Sample size = (Total budget - fixed costs)/cost per data point
This may sound crude, but the budget for a study is a way of working out the appetite for risk of the organization that has commissioned the study, and, as discussed in the next section, this is at the heart of determining sample size.
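The budget formula above is straightforward to express in code; names are mine and the figures in the usage note are invented for illustration:

```python
def sample_size_from_budget(total_budget, fixed_costs, cost_per_data_point):
    """Largest whole number of data points the remaining budget can buy."""
    return int((total_budget - fixed_costs) // cost_per_data_point)
```

For example, a $10,000 budget with $2,000 of fixed costs and $16 per completed interview allows 500 data points.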
One of the reasons that the minimum sample sizes guidelines vary so much is that the true minimum sample size required for any study depends on the signal to noise ratio of the data. If the data intrinsically has a high level of noise in it, such as political polls and market research, then a large sample is required. In tightly controlled environments, such as those used in sensory studies, there is less noise and thus smaller sample sizes are acceptable. When testing medical devices, the outcome is to see if the device is problem free or not, rather than to estimate any specific rate, so an even smaller sample size is appropriate.
One formal method for working out sample sizes is to have researchers specify the required level of uncertainty they can deal with, expressed as a confidence interval, and work out the sample size required to obtain it.
This is the textbook solution to working out sample size, and there are lots of nice theoretical concepts to help (e.g., power analysis). However, in practice, the approach only works when you have a good idea what the likely result will be and what the likely uncertainty will be (i.e., sampling error), and this is often not the case, outside of the world of clinical trials.
Find out the sample size.
This calculator computes the minimum number of necessary samples to meet the desired statistical constraints.

Field | Note
---|---
Confidence Level |
Margin of Error |
Population Proportion | Use 50% if not sure
Population Size | Leave blank if unlimited population size
This calculator gives the margin of error, or confidence interval, of an observation or survey.

Field | Note
---|---
Confidence Level |
Sample Size |
Population Proportion |
Population Size | Leave blank if unlimited population size
In statistics, information is often inferred about a population by studying a finite number of individuals from that population, i.e. the population is sampled, and it is assumed that characteristics of the sample are representative of the overall population. For the following, it is assumed that there is a population of individuals where some proportion, p , of the population is distinguishable from the other 1-p in some way; e.g., p may be the proportion of individuals who have brown hair, while the remaining 1-p have black, blond, red, etc. Thus, to estimate p in the population, a sample of n individuals could be taken from the population, and the sample proportion, p̂ , calculated for sampled individuals who have brown hair. Unfortunately, unless the full population is sampled, the estimate p̂ most likely won't equal the true value p , since p̂ suffers from sampling noise, i.e. it depends on the particular individuals that were sampled. However, sampling statistics can be used to calculate what are called confidence intervals, which are an indication of how close the estimate p̂ is to the true value p .
The uncertainty in a given random sample (namely, that the proportion estimate p̂ is expected to be a good, but not perfect, approximation of the true proportion p) can be summarized by saying that the estimate p̂ is normally distributed with mean p and variance p(1-p)/n. For an explanation of why the sample estimate is normally distributed, study the Central Limit Theorem. As defined below, confidence level, confidence intervals, and sample sizes are all calculated with respect to this sampling distribution. In short, the confidence interval gives an interval around p in which an estimate p̂ is "likely" to be. The confidence level gives just how "likely" this is – e.g., a 95% confidence level indicates that it is expected that an estimate p̂ lies in the confidence interval for 95% of the random samples that could be taken. The confidence interval depends on the sample size, n (the variance of the sampling distribution is inversely proportional to n, meaning that the estimate gets closer to the true proportion as n increases); thus, an acceptable error in the estimate can be set, called the margin of error, ε, and the sample size required for the chosen confidence interval to be smaller than ε can be solved for – a calculation known as "sample size calculation."
The confidence level is a measure of certainty regarding how accurately a sample reflects the population being studied within a chosen confidence interval. The most commonly used confidence levels are 90%, 95%, and 99%, which each have their own corresponding z-scores (which can be found using an equation or widely available tables like the one provided below) based on the chosen confidence level. Note that using z-scores assumes that the sampling distribution is normally distributed, as described above in "Statistics of a Random Sample." Given that an experiment or survey is repeated many times, the confidence level essentially indicates the percentage of the time that the resulting interval found from repeated tests will contain the true result.
Confidence Level | z-score (±)
---|---
0.70 | 1.04 |
0.75 | 1.15 |
0.80 | 1.28 |
0.85 | 1.44 |
0.92 | 1.75 |
0.95 | 1.96 |
0.96 | 2.05 |
0.98 | 2.33 |
0.99 | 2.58 |
0.999 | 3.29 |
0.9999 | 3.89 |
0.99999 | 4.42 |
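The z-scores in the table above come from the inverse of the standard normal CDF, evaluated at (1 + confidence level)/2 for a two-sided interval. A short sketch using Python's standard library (the function name is mine):

```python
from statistics import NormalDist

def z_score(confidence_level):
    """Two-sided z-score for a given confidence level.

    For confidence level c, the two tails share probability 1-c,
    so the upper critical point sits at (1 + c) / 2.
    """
    return NormalDist().inv_cdf((1 + confidence_level) / 2)
```

For example, `z_score(0.95)` returns about 1.96 and `z_score(0.99)` about 2.58, matching the table.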
In statistics, a confidence interval is an estimated range of likely values for a population parameter, for example, 40 ± 2 or 40 ± 5%. Taking the commonly used 95% confidence level as an example, if the same population were sampled multiple times, and interval estimates made on each occasion, in approximately 95% of the cases, the true population parameter would be contained within the interval. Note that the 95% probability refers to the reliability of the estimation procedure and not to a specific interval. Once an interval is calculated, it either contains or does not contain the population parameter of interest. Some factors that affect the width of a confidence interval include: size of the sample, confidence level, and variability within the sample.
There are different equations that can be used to calculate confidence intervals depending on factors such as whether the standard deviation is known or smaller samples (n<30) are involved, among others. The calculator provided on this page calculates the confidence interval for a proportion and uses the following equations:
For an unlimited population:

CI = p̂ ± z·√(p̂(1-p̂)/n)

For a finite population of size N (with the finite population correction applied):

CI = p̂ ± z·√(p̂(1-p̂)/n × (N-n)/(N-1))

where z is the z-score, p̂ is the sample proportion, n is the sample size, and N is the population size.
Within statistics, a population is a set of events or elements that have some relevance regarding a given question or experiment. It can refer to an existing group of objects, systems, or even a hypothetical group of objects. Most commonly, however, population is used to refer to a group of people, whether they are the number of employees in a company, number of people within a certain age group of some geographic area, or number of students in a university's library at any given time.
It is important to note that the equation needs to be adjusted when considering a finite population, as shown above. The (N-n)/(N-1) term in the finite population equation is referred to as the finite population correction factor, and is necessary because it cannot be assumed that all individuals in a sample are independent. For example, if the study population involves 10 people in a room with ages ranging from 1 to 100, and one of those chosen has an age of 100, the next person chosen is more likely to have a lower age. The finite population correction factor accounts for factors such as these. Refer below for an example of calculating a confidence interval with an unlimited population.
EX: Given that 120 people work at Company Q, 85 of which drink coffee daily, find the 99% confidence interval of the true proportion of people who drink coffee at Company Q on a daily basis.
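A sketch of how this example would be evaluated, treating the 120 employees as a sample from an unlimited population (as the basic equation above does); the function name is mine:

```python
import math
from statistics import NormalDist

def proportion_ci(p_hat, n, confidence=0.99):
    """Confidence interval for a proportion, unlimited population."""
    z = NormalDist().inv_cdf((1 + confidence) / 2)  # ~2.58 for 99%
    margin = z * math.sqrt(p_hat * (1 - p_hat) / n)
    return p_hat - margin, p_hat + margin

p_hat = 85 / 120            # sample proportion, ~0.708
low, high = proportion_ci(p_hat, 120)
```

This gives roughly 70.8% ± 10.7%, i.e. a 99% confidence interval of about 60.1% to 81.5%.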
Jeovany Martínez-Mesa
1 Latin American Cooperative Oncology Group - Porto Alegre (RS), Brazil.
2 Universidade Federal de Santa Catarina (UFSC) - Florianópolis (SC), Brazil.
Renan Rangel Bonamigo
3 Universidade Federal de Ciências da Saúde de Porto Alegre (UFCSPA) - Porto Alegre (RS), Brazil.
The importance of estimating sample sizes is rarely appreciated by researchers when planning a study. This paper aims to highlight the centrality of sample size estimations in health research. Examples that help in understanding the basic concepts involved in their calculation are presented. The scenarios covered are based more on epidemiological reasoning than on mathematical formulae. Proper calculation of the number of participants in a study diminishes the likelihood of errors, which are often associated with adverse consequences in economic, ethical and health terms.
Investigations in the health field are oriented by research problems or questions, which should be clearly defined in the study project. Sample size calculation is an essential item to be included in the project to reduce the probability of error, respect ethical standards, define the logistics of the study and, last but not least, improve its success rates, when evaluated by funding agencies.
Let us imagine that a group of investigators decides to study the frequency of sunscreen use and how the use of this product is distributed in the "population". In order to carry out this task, the authors define two research questions, each of which involving a distinct sample size calculation: 1) What is the proportion of people that use sunscreen in the population?; and, 2) Are there differences in the use of sunscreen between men and women, or between individuals that are white or of another skin color group, or between the wealthiest and the poorest, or between people with more and less years of schooling? Before doing the calculations, it will be necessary to review a few fundamental concepts and identify which are the required parameters to determine them.
First of all, we must define what a population is. A population is the group of individuals restricted to a geographical region (neighborhood, city, state, country, continent etc.), or to certain institutions (hospitals, schools, health centers etc.); that is, a set of individuals that have at least one characteristic in common. The target population corresponds to a portion of the previously mentioned population about which one intends to draw conclusions; that is to say, it is the part of the population whose characteristics are an object of interest to the investigator. Finally, the study population is that which will actually take part in the study, which will be evaluated and will allow conclusions to be drawn about the target population, as long as it is representative of the latter. Figure 1 demonstrates how these concepts are interrelated.
Figure 1: Graphic representation of the concepts of population, target population and study population
We will now separately consider the required parameters for sample size calculation in studies that aim at estimating the frequency of events (prevalence of health outcomes or behaviors, for example), to test associations between risk/protective factors and dichotomous health conditions (yes/no), as well as with health outcomes measured in numerical scales. 1 The formulas used for these calculations may be obtained from different sources - we recommend using the free online software OpenEpi ( www.openepi.com ). 2
When approaching the first research question defined at the beginning of this article (What is the proportion of people that use sunscreen?), the investigators need to conduct a prevalence study. In order to do this, some parameters must be defined to calculate the sample size, as demonstrated in chart 1 .
Chart 1: Description of the different parameters to be considered in the calculation of sample size for a study aiming at estimating the frequency of health outcomes, behaviors or conditions
Parameter | Definition | Comments
---|---|---
Population size | Total population size from which the sample will be drawn and about which researchers will draw conclusions (target population). | Information regarding population size may be obtained from secondary data from hospitals, health centers and census surveys (population, schools etc.). The smaller the target population (for example, less than 100 individuals), the larger the sample size will proportionally be.
Expected prevalence of outcome or event of interest | The study outcome must be a percentage, that is, a number that varies from 0% to 100%. | Information regarding expected prevalence rates should be obtained from the literature or by carrying out a pilot study. When this information is not available in the literature and a pilot study cannot be carried out, the value that maximizes sample size is used (50% for a fixed value of sample error).
Sample error for estimate | The value we are willing to accept as error in the estimate obtained by the study. | The smaller the sample error, the larger the sample size and the greater the precision. In health studies, values between two and five percentage points are usually recommended.
Significance level | The probability that the expected prevalence will be within the error margin being established. | The higher the confidence level (greater expected precision), the larger the sample size. This parameter is usually fixed at 95%.
Design effect | Necessary when the study participants are chosen by cluster selection procedures. This means that, instead of the participants being individually selected (simple, systematic or stratified sampling), they are first divided and randomly selected in groups (census tracts, neighborhoods, households, days of the week etc.) and the individuals are later selected within these groups. Thus, greater similarity is expected among the respondents within a group than in the general population. This generates loss of precision, which needs to be compensated by a sample size adjustment (increase). | The principle is that the total estimated variance may have been reduced as a consequence of cluster selection. The value of the design effect may be obtained from the literature. When not available, a value between 1.5 and 2.0 may be used, and the investigators should evaluate the actual design effect after the study is completed and report it in their publications. The greater the homogeneity within each group (the more similar the respondents within each cluster), the greater the design effect and the larger the sample size required to maintain precision. In studies that do not use cluster selection procedures (simple, systematic or stratified sampling), the design effect is null, i.e. 1.0.
Chart 2 presents some sample size simulations, according to the outcome prevalence, sample error and the type of target population investigated. The same basic question was used in this table (prevalence of sunscreen use), but considering three different situations (at work, while doing sports or at the beach), as in the study by Duquia et al. conducted in the city of Pelotas, state of Rio Grande do Sul, in 2005. 3
Chart 2: Sample size calculation to estimate the frequency (prevalence) of sunscreen use in the population, considering different scenarios but keeping the significance level (95%) and the design effect (1.0) constant

Health center users investigated in a single day (population = 100) | 90 | 59 | 96 | 78 | 97 | 80
All users in the area covered by a health center (population = 1,000) | 464 | 122 | 687 | 260 | 707 | 278
All users from the areas covered by all health centers in a city (population = 10,000) | 796 | 137 | 1794 | 338 | 1937 | 370
The entire city population (N = 40,000) | 847 | 138 | 2072 | 347 | 2265 | 381
p.p.= percentage points
The calculations show that, by holding the sample error and the significance level constant, the higher the expected prevalence, the larger will be the required sample size. However, when the expected prevalence surpasses 50%, the required sample size progressively diminishes - the sample size for an expected prevalence of 10% is the same as that for an expected prevalence of 90%.
The investigator should also define beforehand the precision level to be accepted for the investigated event (sample error) and the confidence level of this result (usually 95%). Chart 2 demonstrates that, holding the expected prevalence constant, the higher the precision (smaller sample error) and the higher the confidence level (in this case, 95% was considered for all calculations), the larger also will be the required sample size.
Chart 2 also demonstrates that there is a direct relationship between the target population size and the number of individuals to be included in the sample. Nevertheless, when the target population size is sufficiently large, that is, surpasses an arbitrary value (for example, one million individuals), the resulting sample size tends to stabilize. The smaller the target population, the larger the sample will be; in some cases, the sample may even correspond to the total number of individuals from the target population - in these cases, it may be more convenient to study the entire target population, carrying out a census survey, rather than a study based on a sample of the population.
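The relationships just described (prevalence, sample error, and target population size) can be sketched with the standard formula for estimating a prevalence, with the finite population correction applied when the target population is given. Function and parameter names are my own; the final column of Chart 2 appears consistent with a 50% expected prevalence and a 5-percentage-point sample error:

```python
import math
from statistics import NormalDist

def prevalence_sample_size(prevalence, error, population=None, confidence=0.95):
    """Sample size to estimate a prevalence to within +/- `error`.

    Starts from n0 = z^2 * p(1-p) / e^2, then applies the finite
    population correction n = n0 / (1 + (n0 - 1) / N) when N is given.
    """
    z = NormalDist().inv_cdf((1 + confidence) / 2)
    n0 = z ** 2 * prevalence * (1 - prevalence) / error ** 2
    if population is not None:
        n0 = n0 / (1 + (n0 - 1) / population)
    return math.ceil(n0)
```

With a 50% prevalence and 5-point error, this reproduces the last column of Chart 2: 80 for a population of 100, 278 for 1,000, 370 for 10,000 and 381 for 40,000.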
When the study objective is to investigate whether there are differences in sunscreen use according to sociodemographic characteristics (such as, for example, between men and women), the existence of association between explanatory variables (exposure or independent variables, in this case sociodemographic variables) and a dependent or outcome variable (use of sunscreen) is what is under consideration.
In these cases, we need first to understand what the hypotheses are, as well as the types of error that may result from their acceptance or refutation. A hypothesis is a "supposition arrived at from observation or reflection, that leads to refutable predictions". 4 In other words, it is a statement that may be questioned or tested and that may be falsified in scientific studies.
In scientific studies, there are two types of hypothesis: the null hypothesis (H 0 ) or original supposition that we assume to be true for a given situation, and the alternative hypothesis (H A ) or additional explanation for the same situation, which we believe may replace the original supposition. In the health field, H 0 is frequently defined as the equality or absence of difference in the outcome of interest between the studied groups (for example, sunscreen use is equal in men and women). On the other hand, H A assumes the existence of difference between groups. H A is called two-tailed when it is expected that the difference between the groups will occur in any direction (men using more sunscreen than women or vice-versa). However, if the investigator expects to find that a specific group uses more sunscreen than the other, he will be testing a one-tailed H A .
In the sample investigated by Duquia et al., the frequency of sunscreen use at the beach was greater in men (32.7%) than in women (26.2%). 3 Although this is what was observed in the sample (that is, men did wear more sunscreen than women), the investigators must decide whether they refute or accept H 0 in the target population (which contends that there is no difference in sunscreen use according to sex). Given that the entire target population is hardly ever investigated to confirm or refute the difference observed in the sample, the authors have to be aware that, independently of their decision (accepting or refuting H 0 ), their conclusion may be wrong, as can be seen in figure 2 .
Figure 2: Types of possible results when performing a hypothesis test
In case the investigators conclude that both in the target population and in the sample sunscreen use is also different between men and women (rejecting H 0 ), they may be making a type I or Alpha error, which is the probability of rejecting H 0 based on sample results when, in the target population, H 0 is true (the difference between men and women regarding sunscreen use found in the sample is not observed in the target population). If the authors conclude that there are no differences between the groups (accepting H 0 ), the investigators may be making a type II or Beta error, which is the probability of accepting H 0 when, in the target population, H 0 is false (that is, H A is true) or, in other words, the probability of stating that the frequency of sunscreen use is equal between the sexes, when it is different in the same groups of the target population.
In order to accept or refute H 0 , the investigators need to previously define the maximum probability of type I and II errors that they are willing to incorporate into their results. In general, the type I error is fixed at a maximum value of 5% (0.05, or a confidence level of 95%), since the consequences originating from this type of error are considered more harmful. For example, stating that an exposure/intervention affects a health condition when this does not happen in the target population may bring about behaviors or actions (therapeutic changes, implementation of intervention programs etc.) with adverse consequences in ethical, economic and health terms. In the study conducted by Duquia et al., when the authors contend that the use of sunscreen was different according to sex, the p value presented (<0.001) indicates that the probability of not observing such a difference in the target population is less than 0.1% (confidence level >99.9%). 3
Although the type II or Beta error is less harmful, it should also be avoided, since if a study contends that a given exposure/intervention does not affect the outcome, when this effect actually exists in the target population, the consequence may be that a new medication with better therapeutic effects is not administered or that some aspects related to the etiology of the damage are not considered. This is the reason why the value of the type II error is usually fixed at a maximum value of 20% (or 0.20). In publications, this value tends to be mentioned as the power of the study, which is the ability of the test to detect a difference, when in fact it exists in the target population (usually fixed at 80%, as a result of the 1-Beta calculation).
In cases where the exposure variables are dichotomous (intervention/control, man/woman, rich/poor etc.) and so is the outcome (negative/positive outcome, to use sunscreen or not), the required parameters to calculate sample size are those described in chart 3 . According to the previously mentioned example, it would be interesting to know whether sex, skin color, schooling level and income are associated with the use of sunscreen at work, while doing sports and at the beach. Thus, when the four exposure variables are crossed with the three outcomes, there would be 12 different questions to be answered and consequently an equal number of sample size calculations to be performed. Using the information in the article by Duquia et al. 3 for the prevalence of exposures and outcomes, a simulation of sample size calculations was used for each one of these situations ( Chart 4 ).
| Parameter | Definition | Typical values and notes |
|---|---|---|
| Type I (alpha) error | The probability of rejecting H0 when H0 is true in the target population. Usually fixed at 5%. | Expressed by the p value, usually 5% (p<0.05). For sample size calculation, the confidence level (1 − alpha, usually 95%) may be adopted. The smaller the alpha error (the greater the confidence level), the larger the required sample size. |
| Statistical power (1 − beta) | The ability of the test to detect a difference in the sample when it exists in the target population. | Calculated as 1 − beta; a value between 80% and 90% is usually used. The greater the power, the larger the required sample size. |
| Ratio of non-exposed to exposed groups in the sample | The relationship between the numbers of non-exposed and exposed individuals in the sample. | For observational studies, these data are usually obtained from the scientific literature. Intervention studies frequently adopt a 1:1 ratio, so half of the individuals receive the intervention and the other half serve as the control or comparison group; some intervention studies use more controls than intervention subjects. The further this ratio is from one, the larger the required sample size. |
| Prevalence of the outcome in the non-exposed group (PONE) | The proportion of individuals with the disease (outcome) among those not exposed to the risk factor (or in the control group). | Usually obtained from the literature. When this information is not available but the general prevalence/incidence of the outcome in the population is known, that value may be used (as for the control group in intervention studies), or PONE may be estimated with the formula PONE = pO / (pNE + (pE × PR)), where pO = prevalence of the outcome, pNE = percentage non-exposed, pE = percentage exposed, and PR = prevalence ratio (usually a value between 1.5 and 2.0). |
| Expected prevalence ratio (PR) | The ratio between the prevalence of the outcome in the exposed (intervention) group and that in the non-exposed group, indicating how many times higher (or lower) the prevalence is expected to be among the exposed. | This is the value the investigators set as HA; the corresponding H0 is a PR of one (similar outcome prevalence in both groups). Alternatively, the expected outcome prevalence in each group, or the expected difference in prevalence between the groups, may be specified. Usually a value between 1.50 and 2.00 (exposure as risk factor) or between 0.50 and 0.75 (protective factor) is used; for intervention studies, the clinical relevance of this value should be considered. The smaller the PR (the smaller the expected difference between groups), the larger the required sample size. |
| Type of statistical test | The test may be one-tailed or two-tailed, depending on the type of HA. | Two-tailed tests require larger sample sizes. |

H0 = null hypothesis; HA = alternative hypothesis.
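The parameters in this chart plug into the classical normal-approximation formula for comparing two proportions. Below is a minimal Python sketch; the function names and example values are illustrative choices of mine, not the article's, and published calculators that add pooled-variance or continuity corrections may return slightly larger numbers.

```python
from math import ceil
from statistics import NormalDist

def pone_estimate(p_outcome, p_nonexposed, p_exposed, pr):
    """Estimate the outcome prevalence among the non-exposed:
    PONE = pO / (pNE + pE * PR), the formula given in Chart 3."""
    return p_outcome / (p_nonexposed + p_exposed * pr)

def n_two_proportions(p1, p2, alpha=0.05, power=0.80):
    """Minimum n per group to detect p1 vs p2 with a two-tailed test
    (normal approximation, equal group sizes)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # 1.96 for alpha = 0.05
    z_beta = z.inv_cdf(power)            # 0.84 for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# Hypothetical inputs: overall outcome prevalence 30%, 44% non-exposed,
# 56% exposed, expected PR of 1.5:
pone = pone_estimate(0.30, 0.44, 0.56, 1.5)   # ~0.234
# Detecting 28% vs 35% with 80% power and alpha = 0.05:
n = n_two_proportions(0.28, 0.35)             # 688 per group
```

Note how the required n grows as the two proportions get closer together, which is exactly the inverse relationship between effect size and sample size discussed in the text.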
Chart 4. Simulated minimum sample sizes for each exposure variable (the column headings identifying the six outcome/parameter combinations could not be recovered):

| Exposure group | n | n | n | n | n | n |
|---|---|---|---|---|---|---|
| Female: 56% (E) | 1,298 | 388 | 487 | 134 | 136 | 28 |
| Male: 44% (NE) | 1,738 | 519 | 652 | 179 | 181 | 38 |
| White: 82% (E) | 2,630 | 822 | 970 | 276 | 275 | 49 |
| Other: 18% (NE) | 3,520 | 1,100 | 1,299 | 370 | 368 | 66 |
| Schooling 0-4 years: 25% (E) | 1,340 | 366 | 488 | 131 | 138 | ND |
| Schooling >4 years: 75% (NE) | 1,795 | 490 | 654 | 175 | 184 | ND |
| Income ≤133: 50% (E) | 1,228 | 360 | 458 | 124 | 128 | 28 |
| Income >133: 50% (NE) | 1,644 | 480 | 612 | 166 | 170 | 36 |

E = exposed group; NE = non-exposed group; r = NE/E ratio; PONE = prevalence of the outcome in the non-exposed group (percentage of positives among the non-exposed), estimated with the formula from chart 3 , considering a PR of 1.50; PR = prevalence ratio/incidence or expected relative risk; n = minimum necessary sample size; ND = value could not be determined, as the prevalence of the outcome in the exposed group would exceed 100% under the specified parameters.
Estimates show that studies with more power or that intend to find a difference of a lower magnitude in the frequency of the outcome (in this case, the prevalence rates) between exposed and non-exposed groups require larger sample sizes. For these reasons, in sample size calculations, an effect measure between 1.5 and 2.0 (for risk factors) or between 0.50 and 0.75 (for protective factors), and an 80% power are frequently used.
Considering the values in each column of chart 4 , we may also conclude that, as the non-exposed/exposed ratio moves away from one (similar proportions of exposed and non-exposed individuals in the sample), the sample size increases. For this reason, intervention studies usually work with the same proportion of individuals in the intervention and control groups. Analyzing the values in each line, it can be concluded that there is an inverse relationship between the prevalence of the outcome and the required sample size.
Based on these estimates, and assuming that the authors intended to test all of these associations, it would be necessary to choose the largest estimated sample size (2,630 subjects). If the required sample size is larger than the target population, the investigators may decide to perform a multicenter study, lengthen the period of data collection, modify the research question, or accept the possibility of not having sufficient power to draw valid conclusions.
Additional aspects need to be considered in the previous estimates to arrive at the final sample size: the possibility of refusals and/or losses during the study (an additional 10-15%), the need to adjust for confounding factors (an additional 10-20%, applicable to observational studies), the possibility of effect modification (which implies subgroup analyses and the need to double or triple the sample size), and the existence of design effects in cluster sampling (multiplying the sample size by 1.5 to 2.0).
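These adjustments compound multiplicatively. A small sketch, where the specific percentages are illustrative picks from within the ranges the article gives:

```python
from math import ceil

def adjusted_n(base_n, loss_rate=0.10, confounding=0.15, design_effect=1.0):
    """Inflate a calculated sample size for anticipated losses/refusals,
    confounder adjustment, and (for cluster sampling) a design effect.
    Losses are handled by dividing, not just adding a percentage, so the
    retained sample still meets the original target."""
    n = base_n * (1 + confounding) * design_effect
    return ceil(n / (1 - loss_rate))

# Largest estimate from Chart 4, with 10% losses and 15% for confounding:
adjusted_n(2630, loss_rate=0.10, confounding=0.15)                  # 3361
# Same scenario under cluster sampling with a design effect of 1.5:
adjusted_n(2630, loss_rate=0.10, confounding=0.15, design_effect=1.5)
```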
Suppose that the investigators intend to evaluate whether the daily quantity of sunscreen used (in grams), the time of daily exposure to sunlight (in minutes) or a laboratory parameter (such as vitamin D levels) differ according to the socio-demographic variables mentioned. In all of these cases, the outcomes are numerical variables (discrete or continuous) 1 , and the objective is to answer whether the mean outcome in the exposed/intervention group is different from the non-exposed/control group.
In this case, the first three parameters from chart 3 (alpha error, power of the study, and the non-exposed/exposed ratio) are required, and the conclusions about their influence on the final sample size also apply. In addition to defining the expected outcome mean in each group, or the expected mean difference between the non-exposed and exposed groups (usually at least 15% of the mean value in the non-exposed group), investigators also need to define the standard deviation for each group. There is a direct relationship between the standard deviation and the sample size, which is why, for asymmetric (skewed) variables, the sample size would be overestimated. In such cases, one option is to estimate the sample size with calculations specific to asymmetric variables; alternatively, investigators may use a percentage of the median value (for example, 25%) as a substitute for the standard deviation.
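For numerical outcomes, the analogous normal-approximation formula replaces the proportions with the group standard deviations. A minimal sketch; the sunscreen figures below are invented for illustration, not taken from the cited study:

```python
from math import ceil
from statistics import NormalDist

def n_two_means(delta, sd1, sd2, alpha=0.05, power=0.80):
    """Minimum n per group to detect a difference `delta` between two
    group means, given each group's standard deviation (two-tailed test,
    normal approximation, equal group sizes)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)
    z_beta = z.inv_cdf(power)
    return ceil((z_alpha + z_beta) ** 2 * (sd1**2 + sd2**2) / delta**2)

# Detecting a 5 g difference in daily sunscreen use, SD of 10 g per group:
n_two_means(delta=5, sd1=10, sd2=10)   # 63 per group
```

Halving the detectable difference roughly quadruples the required sample size, which is why the expected mean difference (often at least 15% of the non-exposed mean) must be chosen with care.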
There are also specific calculations for some other quantitative studies, such as those aiming to assess correlations (exposure and outcome are numerical variables), time until the event (death, cure, relapse etc.) or the validity of diagnostic tests, but they are not described in this article, given that they were discussed elsewhere. 5
Sample size calculation is always an essential step during the planning of scientific studies. An insufficient or small sample size may not be able to demonstrate the desired difference, or estimate the frequency of the event of interest with acceptable precision. A very large sample may add to the complexity of the study, and its associated costs, rendering it unfeasible. Both situations are ethically unacceptable and should be avoided by the investigator.
Conflict of Interest: None
Financial Support: None
* Work carried out at the Latin American Cooperative Oncology Group (LACOG), Universidade Federal de Santa Catarina (UFSC), and Universidade Federal de Ciências da Saúde de Porto Alegre (UFCSPA), Brazil.
How to cite this article: Martínez-Mesa J, González-Chica DA, Bastos JL, Bonamigo RR, Duquia RP. Sample size: how many participants do I need in my research? An Bras Dermatol. 2014;89(4):609-15.
Updated December 8, 2023
How can you calculate sample size, reduce the margin of error and produce surveys with statistically significant results? In this short guide, we explain how you can improve your surveys and showcase some of the tools and resources you can leverage in the process.
But first, when it comes to market research, how many people do you need to interview to get results representative of the target population with the level of confidence that you are willing to accept?
However, if all of this sounds new to you, let's start with what sample size is.
Sample size is a term used in market research to define the number of subjects included in a survey, study, or experiment. In surveys with large populations, sample size is incredibly important. This is because it's unrealistic to get answers or results from everyone; instead, you can take a random sample of individuals that represents the population as a whole.
For example, we might want to compare the performance of long-distance runners that eat Weetabix for breakfast versus those who don't. Since it's impossible to track the dietary habits of every long-distance runner across the globe, we would have to focus on a segment of the survey population. This might mean selecting 1,000 runners for the study.
That said, no matter how diligent we are with our selection, there will always be some margin of error (also referred to as the confidence interval) in the study results, because we can't speak to every long-distance runner or be confident of how Weetabix influences performance in every possible scenario. The discrepancy this introduces between the sample and the full population is known as "sampling error."
Larger sample sizes will help to mitigate the margin of error, helping to provide more statistically significant and meaningful results. In other words, a more accurate picture of how eating Weetabix can influence the performance of long-distance runners.
So what do you need to know when calculating the minimum sample size needed for a research project?
Confidence interval (or margin of error).
The confidence interval is the plus-or-minus figure that represents the accuracy of the reported result. Consider the following example:
A Canadian national sample showed "Who Canadians spend their money on for Mother's Day." Eighty-two percent of Canadians expect to buy gifts for their mom, compared to 20 percent for their wife and 15 percent for their mother-in-law. In terms of spending, Canadians expect to spend $93 on their wife this Mother's Day versus $58 on their mother. The national findings are accurate, plus or minus 2.75 percent, 19 times out of 20.
For example, if you use a confidence interval of 2.75 and 82 percent of your sample indicates they will "buy a gift for mom," you can be confident (at the 95% or 99% level you chose) that if you had asked the question of ALL CANADIANS, somewhere between 79.25% (82% − 2.75%) and 84.75% (82% + 2.75%) would have picked that answer.
The confidence interval is also called the "margin of error"; the two terms describe the same plus-or-minus figure.
The confidence level tells you how confident you can be in this result. It is expressed as the percentage of times that repeated samples, if drawn, would produce a result within this interval. A 95% confidence level means that 19 times out of 20 the result would fall within the plus-or-minus confidence interval. The 95% confidence level is the most commonly used.
When you put the confidence level and the confidence interval together, you can say that you are 95% (19 out of 20) sure that the true percentage of the population that will "buy a gift for mom" is between 79.25% and 84.75%.
Wider confidence intervals increase the certainty that the true answer lies within the range specified; these wider intervals come from smaller sample sizes. When the cost of an error is extremely high (a multi-million dollar decision is at stake), the confidence interval should be kept small. This can be done by increasing the sample size.
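The Mother's Day numbers above can be reproduced with the standard margin-of-error formula for a proportion. A Python sketch; the sample size of 750 respondents is my assumption to make the arithmetic line up, not a figure reported by the survey:

```python
from math import sqrt
from statistics import NormalDist

def margin_of_error(p, n, confidence=0.95):
    """Half-width of the confidence interval for a sample proportion p
    observed in a sample of size n: z * sqrt(p * (1 - p) / n)."""
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)  # 1.96 at 95%
    return z * sqrt(p * (1 - p) / n)

# 82% say "buy a gift for mom" in a sample of roughly 750 respondents:
moe = margin_of_error(0.82, 750)       # ~0.0275, i.e. plus or minus 2.75 points
interval = (0.82 - moe, 0.82 + moe)    # ~ (79.25%, 84.75%)
```

Quadrupling the sample size halves the margin of error, which is why precision gains become expensive quickly.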
Population size is the total number of people in the group you're trying to study. If you were taking a random sample of people across the U.K., then your population size would be just over 68 million (as of 09 August 2021).
This refers to how much individual responses will vary between each other and the mean. If there's a low standard deviation, scores will be clustered near the mean with minimal variation. A higher standard deviation means that when plotted on a graph, responses will be more spread out.
Standard deviation is expressed as a decimal, and 0.5 is the conventional conservative choice: it is the value that maximizes the required sample size, so a sample computed with it will be large enough to represent the population.
After you've considered the four variables above, you should have everything required to calculate your sample size.
However, if you don't know your population size, you can still calculate your sample size. To do this, you need two pieces of information: a z-score and the sample size formula.
A z-score is simply the numerical representation of your desired confidence level. It tells you how many standard deviations from the mean your score is.
The most common percentages are 90%, 95%, and 99%.
z = (x – μ) / σ
As the formula shows, the z-score is simply the raw score minus the population mean, divided by the population's standard deviation.
Once you have your z-score, you can fill out the sample size formula. For a large or unknown population it is:

n = z² × p(1 − p) / e²

where p is the standard-deviation assumption discussed above (0.5 in the conservative case) and e is the margin of error expressed as a decimal.
If you want an easier option, Qualtrics offers an online sample size calculator that can help you determine your ideal survey sample size in seconds. Just put in the confidence level, population size, margin of error, and the perfect sample size is calculated for you.
There are lots of variables to consider when it comes to generating a specific sample size. That said, there are a few best-practice tips (or rules) to ensure you get the best possible results:
To increase confidence level or reduce the margin of error, you have to increase your sample size. Larger sizes almost invariably lead to higher costs. Take the time to consider what results you want from your surveys and what role it plays in your overall campaigns.
Depending on your target audience, you may not be able to get enough responses (or a large enough sample size) to achieve "statistically significant" results.
If it's for your own research and not a wider study, it might not be that much of a problem, but remember that any feedback you get from your surveys is important. It might not be statistically significant, but it can aid your activities going forward.
Ultimately, you should treat this on a case-by-case basis. Survey samples can still give you valuable answers without having sample sizes that represent the general population. But more on this in the section below.
Yes and no questions provide certainty, but open-ended questions provide insights you would have otherwise not received. To get the best results, it's worth having a mix of closed and open-ended questions. For a deeper dive into survey question types, check out our handbook.
From market research to customer satisfaction, there are plenty of different surveys that you can carry out to get the information you need, corresponding with your sample size.
The great thing about what we do at Qualtrics is that we offer a comprehensive collection of pre-made customer, product, employee, and brand survey templates. This includes Net Promoter Score (NPS) surveys, manager feedback surveys, customer service surveys, and more.
The best part? You can access all of these templates for free. Each one is designed by our specialist team of subject matter experts and researchers so you can be sure that our best-practice question choices and clear designs will get more engagement and better quality data.
As well as offering free survey templates, you can check out our free survey builder. Trusted by over 11,000 brands and 99 of the top 100 business schools, our tool allows you to create, distribute and analyze surveys to find customer, employee, brand, product, and market research insights.
Drag-and-drop functionality means anyone can use it, and wherever you need to gather and analyze data, our platform can help.
Once you have determined your sample size, you're ready for the next step in the research journey: market research.
Market research is the process of gathering information about consumers' needs and preferences, and it can provide incredible insights that help elevate your business (or your customers') to the next level.
If you want to learn more, we've got you covered. Just download our free guide and find out how you can: