Sample Size Requirements for Discrete-Choice Experiments in Healthcare: a Practical Guide

Esther W. de Bekker-Grob 1, Bas Donkers 2, Marcel F. Jonker 3, Elly A. Stolk

Affiliations

  • 1 Department of Public Health, Erasmus MC, University Medical Centre Rotterdam, PO Box 2040, 3000 CA, Rotterdam, The Netherlands. [email protected].
  • 2 Department of Business Economics, Erasmus University, Rotterdam, The Netherlands.
  • 3 Department of Health Economics, Policy and Law, Erasmus University, Rotterdam, The Netherlands.
  • The Patient - Patient-Centered Outcomes Research. 2015;8(5):373-384
  • PMID: 25726010
  • PMCID: PMC4575371
  • DOI: 10.1007/s40271-015-0118-z

Discrete-choice experiments (DCEs) have become a commonly used instrument in health economics and patient-preference analysis, addressing a wide range of policy questions. An important question when setting up a DCE is the size of the sample needed to answer the research question of interest. Although theory exists as to the calculation of sample size requirements for stated choice data, it does not address the issue of minimum sample size requirements in terms of the statistical power of hypothesis tests on the estimated coefficients. The purpose of this paper is threefold: (1) to provide insight into whether and how researchers have dealt with sample size calculations for healthcare-related DCE studies; (2) to introduce and explain the required sample size for parameter estimates in DCEs; and (3) to provide a step-by-step guide for the calculation of the minimum sample size requirements for DCEs in health care.


References

  • 1. de Bekker-Grob EW, Ryan M, Gerard K. Discrete choice experiments in health economics: a review of the literature. Health Econ. 2012;21(2):145-172. doi: 10.1002/hec.1697.
  • 2. Clark MD, Determann D, Petrou S, Moro D, de Bekker-Grob EW. Discrete choice experiments in health economics: a review of the literature. Pharmacoeconomics. 2014;32(9):883-902. doi: 10.1007/s40273-014-0170-x.
  • 3. Lancaster KJ. A new approach to consumer theory. J Polit Econ. 1966;74(2):132-157. doi: 10.1086/259131.
  • 4. McFadden D. Conditional logit analysis of qualitative choice behavior. In: Zarembka P, editor. Frontiers in econometrics. New York: Academic Press; 1974. pp. 105-142.
  • 5. Reed Johnson F, Lancsar E, Marshall D, Kilambi V, Muhlbacher A, Regier DA, et al. Constructing experimental designs for discrete-choice experiments: report of the ISPOR conjoint analysis experimental design good research practices task force. Value Health. 2013;16(1):3-13. doi: 10.1016/j.jval.2012.08.2223.



A protocol for a discrete choice experiment: understanding patient medicine preferences for managing chronic non-cancer pain

BMJ Open, Volume 9, Issue 8

  • Marian Shanahan 1 (http://orcid.org/0000-0001-9873-3576),
  • Briony Larance 1,2,
  • Suzanne Nielsen 1,3,
  • Milton Cohen 4,
  • Maria Schaffer 1,
  • Gabrielle Campbell 1
  • 1 National Drug and Alcohol Research Centre, UNSW Sydney, Sydney, New South Wales, Australia
  • 2 School of Psychology, University of Wollongong, Wollongong, New South Wales, Australia
  • 3 Monash Addiction Research Centre, Monash University, Melbourne, Victoria, Australia
  • 4 Vincent’s Clinical School, UNSW Medicine, University of New South Wales, Sydney, New South Wales, Australia
  • Correspondence to Dr Marian Shanahan; m.shanahan{at}unsw.edu.au

Introduction High rates of chronic non-cancer pain (CNCP), concerns about adverse effects (including dependence) among those prescribed potent pain medicines, recent evidence supporting active rather than passive management strategies, and a lack of funding for holistic programmes have created challenges for treatment decision making among clinicians and their patients. Discrete choice experiments (DCEs) are one way of assessing and valuing treatment preferences. Here, we outline a protocol for a study that assesses patient preferences for CNCP treatment.

Methods and analysis A final list of attributes (and their levels) for the DCE was generated using a detailed iterative process. This included a literature review, a focus group and individual interviews with people with CNCP and with clinicians who treat them. Following a review of the resulting candidate attributes by study investigators, including pain and addiction specialists, pharmacists and epidemiologists, the final list of attributes was selected (number of medications, risk of addiction, side effects, pain interference, activity goals, source of information on pain, provider of pain care and out-of-pocket costs). Specialised software was used to construct an experimental design for the survey. The survey will be administered to two groups of participants: a longitudinal cohort of patients receiving opioids for CNCP and a convenience sample of patients recruited through Australia’s leading pain advocacy body (Painaustralia) and its social media and website. The data from the two participant groups will initially be analysed separately, as their demographic and clinical characteristics may differ substantially (in terms of age, duration of pain and current treatment modality). Mixed logit and latent class analysis will be used to explore heterogeneity of responses.

Ethics and dissemination Ethics approval was obtained from the University of New South Wales Sydney Human Ethics Committee: HC16511 (for the focus group discussions, the one-on-one interviews and the online survey) and HC16916 (for the cohort). A lay summary will be made available on the National Drug and Alcohol Research Centre website and Painaustralia’s website. Peer-reviewed papers will be submitted, and the results are expected to be presented at relevant pain management conferences nationally and internationally. These results will also be used to improve shared understanding of treatment goals between clinicians and those with CNCP.

  • discrete choice experiment
  • chronic non-cancer pain
  • preferences

This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made are indicated, and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/.

https://doi.org/10.1136/bmjopen-2018-027153


Strengths and limitations of this study

This discrete choice experiment (DCE) will elucidate how people with chronic non-cancer pain (CNCP) value different treatments that include both medicines and holistic goals of pain management.

Our DCE will be conducted in two samples: an already recruited diverse cohort of people with CNCP who have been prescribed opioids and a novel group of people with CNCP who may not have been prescribed opioids, recruited via social media.

The samples will include the most common pain conditions such as chronic back and neck problems, arthritis and migraines.

The study will estimate marginal willingness to pay for changes in number of medicines, level of pain interference, risk of addiction and preference of service provider.

The preference DCE surveys will be undertaken in Australia, which could affect generalisability to other settings.

Introduction

These are challenging times for both people with chronic non-cancer pain (CNCP) and those to whom they turn for treatment. Despite a significant increase in opioids being prescribed for CNCP in countries such as the USA, Canada and Australia, 1–3 there is insufficient evidence on the long-term effectiveness of their use. 4

The increase in opioid prescribing has been accompanied by a concurrent increase in harms: in 2016 there were more than 64 000 opioid overdoses in the USA, 5 1300 in Australia 6 and 8440 in Europe. 7 Responses to minimise harms associated with pharmaceutical opioids include increased regulatory controls, such as prescription monitoring programmes and limits on access to over-the-counter codeine in Canada, Australia and the USA. 8 Other strategies have focused on improved clinical practice, including limiting maximum doses and prescriber education. 9 However, taken together with busy general practitioners, a shortage of pain and addiction specialists, fear of addiction and the lack of accessible and affordable alternatives for pain management, this has led to increased anxiety among many with CNCP. 10

With chronic pain reported by approximately one-third of the US population 11 and 39% of a representative Australian sample, 12 and rates of dependence estimated at between 1% and 24% 13 among those who are prescribed potent analgesic medicines, this represents a sizeable challenge.

The benefits and harms of opioids for CNCP are complex and contextual, and depend on factors such as age, co-morbidities, health status, type and duration of pain, concurrent medicines, and patients’ ability and willingness to self-manage. Under-treated CNCP adversely affects patients’ well-being, 10 but there are few data to inform the range of treatment choices available, maximise treatment outcomes and patient adherence, and minimise unintended consequences. In addition, prescribing decisions and patients’ expectations are complicated by the common side effects of many medicines used in CNCP, the lack of long-term evidence on efficacy, 14–17 the development of tolerance, fears of dependence and the lack of funding for non-drug-based treatment options.

Recent evidence suggests that active rather than passive management strategies may ‘retrain the brain’ to reduce pain, 18 and that a multidisciplinary approach is likely to produce optimal outcomes, although the cost and availability of alternative treatments may affect patients’ treatment choices. In addition, cognitive behavioural therapy has been found to help patients modify situational factors, and multi-modal therapies that combine exercise and related therapies with psychologically based approaches reduce pain and improve function more effectively than single modalities. 19–21

Preferences of clinicians and patients can affect prescribing patterns, uptake of interventions and treatment adherence, and thus the effectiveness of pain management. 22 It is important to understand why some people with CNCP resort to treatments that are expensive or lack evidence of efficacy and, alternatively, why some stay on opioids long term when not experiencing clinical benefit. For example, 34% of a cohort of CNCP participants reported no clinically significant change in their activity limitations, symptoms, emotions and overall quality of life since starting opioids. 23 Significant proportions of the cohort were using complementary or alternative interventions for their pain, which have limited or no evidence of efficacy in chronic pain. 23 24 In addition, participants often reported that attending physiotherapy, specialised exercise classes or psychotherapy was prohibitively expensive and unfunded, whereas medicines and general practitioner (GP) visits are at least partially covered by the Australian Medicare and Pharmaceutical Benefits Schemes.

The discrete choice experiment (DCE) methodology allows for the identification of preferences for various treatment options and the potential trade-offs that individuals are willing to make. DCEs have been widely used in the health literature to elicit preferences from patient groups on health and non-health outcomes. 25 26 Studies that have used the DCE methodology to examine patient preferences for managing CNCP have focused on toleration of the adverse effects of nonselective nonsteroidal anti-inflammatory drugs (NSAIDs) and selective COX-2 (cyclo-oxygenase) inhibitors, 27 management of neuropathic pain, 28 surgical or non-surgical approaches for low back pain, 29 and acupuncture or infra-red treatments for low back pain. 30 These studies have often been limited to specific treatments 27–30 and to particular conditions. 29 30 Here we outline a study protocol to elicit patient preferences for broader approaches to treatment for CNCP, extending the range of attributes to encompass a wider range of treatment alternatives, including holistic goals of pain management.

The aims of this study are to identify and value the factors that influence important treatment decisions among people living with CNCP, so we can better understand the choices they make. Specifically, we will assess:

Preferences for medicines.

Impact on choice of potential side effects including the possibility of addiction.

Willingness to pay (WTP) out of pocket for preferred options, and the extent to which costs may be a barrier.

The extent to which having input into treatment is important.

The degree to which pain interference is tolerated.

Methods and analysis

Overview of the DCE

DCEs are a method of eliciting and quantifying preferences and exploring trade-offs between the attributes (characteristics) of a treatment (or a good or service). Attribute-based DCEs permit the exploration of preferences for treatment options while varying the levels of each attribute. 26 31 32 DCEs are based on Lancaster’s economic theory of value (1966, 1971) and presume that individuals derive utility (or well-being) not from the good itself but rather from the attributes of that good. 33 34 They rely on an individual’s knowledge or perceptions of their own preferences, and on their ability to make trade-offs between alternatives in the presence of constraints such as money, time, availability and so on.

A DCE provides respondents with several hypothetical but reasonable choice sets. Each choice set consists of at least two alternatives that comprise a set of attributes each with various levels. Respondents are then asked to choose their preferred alternative in each choice set. 33 In making a choice, the respondent identifies the alternative that yields the highest utility to them. The attributes and their levels are important, as they drive decision making. When respondents make a choice, they make trade-offs between the levels of the various attributes that can then be analysed with logistic regressions. When a cost attribute is included, it is possible to indirectly estimate WTP values for particular attributes of treatment. 35–38 The dependent variable in the logistic regression represents the probability of choosing one alternative with specific attributes and levels over another. The independent variables are the attributes and their levels. It is feasible to account for heterogeneity through the use of covariates in mixed logit (MXL) or latent class (LC) models. 39 40

Consumer theory assumes deterministic behaviour, but choice theory asserts that individual behaviour is intrinsically probabilistic (random). Individuals have a concept of the value (indirect utility) for each choice, but the researcher does not know all the factors that might affect that choice. The utility estimate consists of the knowable part and the random or unknowable parts. The random part may be due to unobserved attributes, unobserved preference variation, specification or measurement error, or inter-individual differences in utility as a result of variation in tastes. 33 41 The utility function in the context of the DCE can be presented as follows:

U_{ij} = V_{ij} + \varepsilon_{ij} = \sum_{k=1}^{K} \beta_k X_{jk} + \sum_{n=1}^{N} \gamma_n Z_{in} + \varepsilon_{ij}

where individual i will choose alternative j if, and only if, that alternative maximises their utility among all J alternatives. The utility (U) for individual i is conditional on choice j and is decomposed into an explainable or systematic component V_ij and a non-explainable or random component ε_ij. V_ij can be further broken down into X_jk, a vector of K attributes of the treatment, and Z_in, a vector of N characteristics of individual i; β and γ are the respective coefficients to be estimated for the K attributes, with the γ_n coefficients indicating the impact that personal characteristics have on choice. 42

The probability that individual i chooses alternative j can then be written as

P(y_{ij} = 1) = P(V_{ij} + \varepsilon_{ij} > V_{im} + \varepsilon_{im} \;\; \forall \; m \neq j)

where y_ij is equal to 1 if alternative j is chosen and 0 otherwise; alternative j is chosen if, and only if,

V_{ij} + \varepsilon_{ij} > V_{im} + \varepsilon_{im} \quad \forall \; m \neq j,

which rearranges to

V_{ij} - V_{im} > \varepsilon_{im} - \varepsilon_{ij}.

Utilities are not observed, but by documenting the choices made, utilities can be estimated. 43 In addition, (ε_im − ε_ij) is not observed directly, so choices can only be characterised up to a probability of occurrence under some assumed distribution or density function. It is the choice of this distribution that affects the interpretation of the probabilities. 33 Different density functions for the unobserved part of the utility ε_ij lead to different families of probabilistic discrete choice models.

Undertaking a DCE requires several steps including the selection of the relevant attributes and their levels, obtaining a feasible design for the DCE survey, constructing and administering the survey and determining the best-fitting model.

Patient and public involvement

The final survey tool (the DCE), including the framing of the question, was developed after a focus group discussion and multiple one-on-one discussions with persons who self-report as having CNCP. They were recruited from members of Painaustralia, Australia’s leading pain advocacy body. Painaustralia represents the interests of a membership that includes health, medical, research and consumer organisations; it works to improve the quality of life of people living with pain and to facilitate implementation of the National Pain Strategy Australia-wide. As further described below, the important constructs from this qualitative work informed the choice of attributes, levels and the final question. A lay summary of the findings will be made available on the National Drug and Alcohol Research Centre (NDARC) website and Painaustralia’s website.

Determining the attributes and levels for the DCE

The selection of attributes and their levels is a key step. The number of attributes must be balanced: enough to adequately describe the good or service of interest, but not so many that they hinder respondents’ decision making. The number of attributes will vary with the complexity of the good being considered, but studies typically include four to eight attributes. Undertaking qualitative work to inform the selection and framing of attributes improves the relevance and applicability of the findings. 44 45

Focus groups and telephone interviews with people living with CNCP

As a first step in this study, a literature review was undertaken to identify the important constructs to explore in subsequent focus groups and one-on-one discussions. The intent was to recruit 20 to 25 participants for focus groups; however, it became apparent that this would be difficult given participants’ health status and location. Therefore, one focus group (N=3 participants) and 13 one-on-one telephone interviews were conducted with people who had CNCP, to elicit views on topics such as self-management, knowledge of pain mechanisms, brain plasticity, the relative importance of exercise, medicines, choice of treatment provider, and barriers and facilitators to effective treatment.

Telephone interviews with clinicians

In addition, interviews were conducted with a range of clinicians including pain specialists, general practitioners (urban and rural), clinical nurse specialists, physiotherapists and addiction specialists (N=8). Clinician interviews elicited additional information on barriers and facilitators to treatment and views on current modalities of treatment for CNCP.

Determining the list of attributes and levels

The final list of attributes included in the DCE was generated using a detailed iterative process. The first phase involved a literature review, undertaken by MSh, to inform the development of a list of possible factors previously identified as influencing patient choice of pain treatments. This list was reviewed and further developed by the broader pain and opioids in treatment (POINT) study investigators, who include pain and addiction specialists, pharmacists and epidemiologists.

These attributes developed in the first phase of the study became the basis of (a) focus group discussions with patients and (b) telephone interviews with clinicians. Two authors (MSh and GC) reviewed the recorded transcripts separately and independently analysed data thematically. Attributes generated at this second phase included the following themes: potential side effects; concurrent medicines; necessity to work/care for others; barriers; complementary medicine; multi-modal therapies; costs; time to onset of effect; adherence/compliance; risk of addiction; co-morbidities; and self-management.

In the final phase, this longer list was reviewed by the full POINT study investigator team, and a final list of attributes (and their levels) was agreed. The attributes (and number of levels) selected were: number of medications (4), risk of addiction (4), side effects (2), pain interference (4), activity goals, source of information on pain (4), provider of pain care (4) and out-of-pocket costs (4).

Pilot study

The DCE design

Having selected the attributes, levels and number of alternatives (two), an experimental design for the survey was generated. Given the number of attributes and levels, a full factorial design including all possible combinations of attributes and their levels was not feasible. Therefore, a D-efficient experimental design, which maximises statistical efficiency by minimising the parameter standard errors, was generated using Ngene. 46 The statistical efficiency of the design is improved if some prior information about the parameters is available; this can be coefficients from a previous analysis or expert opinion. 43 46 In the design for the pilot study, the prior coefficients were set to zero.

Pilot-testing attributes and levels

A pilot study was conducted among 33 people living with CNCP who had been prescribed opioids. These data were used to refine the final list of attributes and levels. Specifically, the number of levels for the attribute ‘risk of addiction to pain medicines’ was decreased from 4 to 2 (the two extremes), as respondents did not appear to distinguish between the middle two levels (see table 1 for the final list of attributes and levels). The pilot testing was also used to assess the ease with which participants could complete the experiment: 64% reported that it was easy/very easy to complete the scenario questions, 27% found it difficult and 9% found it very difficult.


Table 1 Final attributes and levels

Proposed study

Significant coefficients from the pilot study data (n=33) were used to inform the final experimental design. An efficient design of 80 scenarios in 10 blocks was generated (each participant will be presented with one block of eight scenarios). See table 2 for an example of a scenario.

Table 2 Example of a scenario

Participants and survey procedures

There is no agreed correct sample size for a DCE. 47 However, research on DCE studies with efficient designs suggests that the precision of model estimates increases rapidly up to sample sizes of about 150 and then flattens out at around 300. 48 It has also been estimated that a minimum sample size of 200 respondents per subgroup should be used for studies involving an analysis of differences between samples. 49 The proposed DCE will therefore be administered to two groups of participants (see below), each with a sample size of 200 participants or greater. To examine the possibility of different treatment preferences among people living with CNCP, we included two distinct groups. The POINT cohort consists of participants who have been prescribed opioids for CNCP and, at the time of the current study, have been on long-term opioids for an average of 7 years. The other sample comprises people with CNCP recruited online, who have not necessarily been prescribed opioids; we will examine differences in treatment preferences between people prescribed and not prescribed opioids for CNCP.

Each participant will be randomly allocated to one of 10 blocks, with each block containing eight DCE questions. In addition to the DCE questions, a range of demographic covariates (ie, age, gender, education, marital status) and clinical characteristics (duration of pain, number and type of medicines, pain interference scores) will be collected.

Pain and opioids in treatment (POINT) prospective cohort study

The first source comprises participants in the POINT study, a national prospective cohort of 1514 people living with CNCP. 23 The POINT study, currently in its fifth year, recruited participants through community pharmacies across Australia. Participants at recruitment were: 18 years or older; living with CNCP (defined as pain lasting longer than 3 months); taking prescribed Schedule 8 opioids (including morphine, oxycodone, buprenorphine, methadone and hydromorphone) for CNCP for more than 6 weeks; competent in English; mentally and physically able to participate in telephone and self-complete interviews; and without serious cognitive impairments, as determined by the interviewer at the time of screening. The POINT cohort participants are interviewed annually over the phone, and the DCE survey will be included as part of the fifth-year interview. Participants in the POINT cohort study will be invited to participate in the survey, and reasons for not participating will be recorded; the first 33 consecutive fifth-year interviews were administered the pilot study questionnaire, and these participants will not complete a second DCE. The DCE will be mailed to participants prior to the date of interview, along with an explanation of the study aims and consent forms. The DCE questionnaire will then be completed by the POINT interviewers over the phone as part of the regular POINT interview schedule. Covariates for the DCE will be drawn from baseline data and the most recent interview.

Online survey of people living with CNCP

A second group of respondents will be recruited online through Painaustralia, a national peak body and pain advocacy organisation, and through social media. This group will be asked to complete an identical DCE survey online (via Qualtrics, hosted at the University of New South Wales (UNSW) Sydney), plus selected questions on demographics, pain characteristics and type of medicines drawn from the POINT survey. As in the POINT cohort, participants eligible for the online survey will be aged 18 years or older, reside in Australia and be living with CNCP (defined as pain lasting longer than 3 months). Unlike the POINT cohort, however, the online sample will not be required to have been prescribed Schedule 8 opioids (although having been prescribed them is not an exclusion criterion).

Links to the online survey will be posted on Painaustralia’s website, the NDARC website, and their associated Facebook pages and Twitter feeds. Recruitment will continue for 4 months (or until the current round of cohort interviews is complete), with the objective of achieving at least 200 completed online surveys. Respondents will be randomly allocated one of the 10 blocks, and the demographic and covariate questions will match those collected from the POINT cohort.

Data analysis

The data from the two participant groups will initially be analysed separately, as their demographic and clinical characteristics may differ substantially (in terms of age, duration of pain and current treatment modality). The DCE responses will be analysed using Nlogit software. 46 Initially, a multinomial logit model will be used; MXL and LC analysis will then be used to explore heterogeneity of responses. Number of medicines and out-of-pocket costs will be treated as continuous variables; all categorical variables will be effects coded, which means the constant will not be confounded with the grand mean and coefficients for base levels can be estimated. 50

Tables of coefficients for the levels and covariates will be presented with relevant statistical measures including pseudo r-squared, log likelihood test and the Akaike information criterion (AIC) to test for goodness of fit of the model. In addition, the marginal rate of substitution (the negative ratio between any two estimated coefficients) will be calculated. This will allow policy makers and clinicians to understand the relative importance of different attributes, and the respondents’ willingness to give up some amount of one attribute in order to obtain more of another.

Article summary

The DCE approach offers great potential for informing clinicians about patient preferences for pain management. Where preferences do not align with current evidence, the findings will provide an opportunity to develop strategies for improving knowledge. If the preferred options are those that are known to be effective but are also more expensive for the patient, the results can be used to inform policy makers. However, there are methodological limitations common to all DCEs. In our study, one challenge was to select attributes and levels that reflect both treatments and outcomes for CNCP while remaining practical in number. Our choice of eight attributes likely places a higher cognitive demand on respondents, but we sought to mitigate this by requiring each person to complete only eight DCE choices.

Our DCE will be conducted in a large, diverse sample of people living with CNCP, including those with the most common pain conditions, such as chronic back and neck problems. This DCE differs from previous studies in that it will elucidate how people value different CNCP treatments as a whole, not just medicines or surgery. The study will also permit estimation of the marginal WTP for different treatment options and outcomes. Although the marginal WTP for preferred attributes will assist policy makers generally, some of the results may not be generalisable to resource-poor settings or countries without universal healthcare systems.

Ethics and dissemination

A lay summary of the findings will be made available on the NDARC website and Painaustralia’s website. Peer-reviewed papers will be submitted, and the results are expected to be presented at relevant pain management conferences nationally and internationally. These results will also be used to improve shared understanding of treatment goals between clinicians and those with CNCP.

Written consent was obtained from those who attended the focus groups, and verbal consent was obtained from those who volunteered for phone interviews (researchers were aware only of the first names of telephone participants). Consistent with UNSW ethics requirements, for the online DCE survey, consent was implicit in the decision to complete the survey after reading the participant information sheet. For the POINT cohort, consent has previously been obtained from participants and the DCE is part of the scheduled interview.

Acknowledgments

We thank members of Painaustralia for their support of this project through promoting it on their website and other social media, and for inviting members to participate in focus groups and other discussions. We also thank those who have completed the DCE survey and clinicians who contributed to the discussion of barriers and facilitators for managing chronic pain.

References

  • 5. National Institute on Drug Abuse. Overdose death rates, 2017. Available: https://www.drugabuse.gov/related-topics/trends-statistics/overdose-death-rates [Accessed 9 Nov 2017].

Contributors MSh was the lead author and was responsible for the study design, conducted the qualitative interviews and analysis, and led the writing of the paper. GC and BL were involved in the qualitative interviews and their analysis. MSc contributed to the survey development and the administration of the survey. MSh, BL, SN, MC and GC were involved in defining and selecting the attributes and levels. All authors provided detailed input to the paper.

Funding This work was supported by the Australian National Health and Medical Research Council (NHMRC) (APP 1100822) and the Australian Government. The National Drug and Alcohol Research Centre, University of New South Wales Sydney, is supported by funding from the Australian Government under the Substance Misuse Prevention and Service Improvements Grant Fund. SN and GC are recipients of NHMRC Research Fellowships (#1132433 and #1119992).

Competing interests BL and SN report investigator-driven untied educational grants from Reckitt Benckiser/Indivior for studies of buprenorphine-naloxone and buprenorphine depot, the development of an opioid-related behaviour scale and a study of opioid substitution therapy uptake among chronic non-cancer pain patients. BL has also received investigator-initiated untied educational grants for post-marketing surveillance studies of opioids from Mundipharma Limited (a tamper-resistant oxycodone formulation) and Seqirus (tapentadol). These funders had no role in the design, conduct or interpretation of these studies. These studies were unrelated to the current discrete choice experiment protocol or broader pain and opioids in treatment study. SN has provided training around treatment of codeine dependence for which her institution received funding from Indivior. GC reports investigator-driven untied educational grants from Reckitt Benckiser for the development of an opioid-related behaviour scale. MC reports receiving fees from Mundipharma Limited for preparation and presentation of educational material.

Patient consent for publication Obtained.

Ethics approval Ethics approval was obtained from the UNSW Human Ethics committee HC16511 (for the focus group discussions, the one-on-one interviews and online survey) and HC16916 (for the cohort).

Provenance and peer review Not commissioned; externally peer reviewed.

Data availability statement No additional data are available.




Sample Size Requirements for Discrete-Choice Experiments in Healthcare: a Practical Guide


Electronic supplementary material

The online version of this article (doi:10.1007/s40271-015-0118-z) contains supplementary material, which is available to authorized users.

Key Points for Decision Makers

  • The minimum sample size needed for a discrete-choice experiment (DCE) depends on the specific hypotheses to be tested.
  • DCE practitioners should realize that a small effect size may still be meaningful, but that a limited sample size prevents detection of such small effects.
  • Policy makers should not base a decision on non-significant outcomes without considering whether the study had reasonable power to detect the anticipated outcome.

Introduction

Discrete-choice experiments (DCEs) have become a commonly used instrument in health economics and patient-preference analysis, addressing a wide range of policy questions [1, 2]. DCEs allow for a quantitative elicitation of individuals’ preferences for healthcare interventions, services, or policies. The DCE approach combines consumer theory [3], random utility theory [4], experimental design theory [5], and econometric analysis [1]. See Louviere et al. [6], Hensher et al. [7], Rose and Bliemer [8], Lancsar and Louviere [9], and Ryan et al. [10] for further details on conducting a DCE.

DCE-based research in health care is often concerned with establishing the impact of certain healthcare interventions, and aspects (i.e., attributes) thereof, on patients’ decisions [11–20]. Consequently, a typical research question is to establish whether or not individuals are indifferent between two attribute levels. For instance: Do patients prefer delivery at home over delivery in a hospital? Do patients prefer a medical specialist over a nurse practitioner? Do patients prefer screening every 5 years over screening every 10 years? Do patients prefer a weekly oral medication over a monthly injection? Do patients prefer to have their medical results explained face to face rather than by letter? As a result, an important design question is the size of the sample needed to answer such a research question. When considering the required sample size, DCE practitioners need to be confident that they have sufficient statistical power to detect a difference in preferences when this difference is sufficiently large. A practical solution (that does not require any sample size calculations) is to simply maximize the sample size given the research budget at hand, i.e., to overpower the study as much as possible. This is beneficial for reasons other than statistical precision (e.g., to facilitate in-depth analyses). However, particularly in health care, the number of eligible patients and healthcare professionals is generally limited. Although theory exists as to the calculation of sample size requirements for stated choice data, it does not address the issue of minimum sample size requirements in terms of testing for specific hypotheses based on the parameter estimates produced [21].

The purpose of this paper is threefold. The first objective is to provide insight into whether and how researchers have dealt with sample size calculations for health care-related DCE studies. The second objective is to introduce and explain the required sample size for parameter estimates in DCEs. The final objective of this manuscript is to provide a step-by-step guide for the calculation of the minimum sample size requirements for DCEs in healthcare.

Literature Review

To gain insight into current approaches to sample size determination, we reviewed healthcare-related DCE studies published in 2012. Older literature was ignored, as the research frontier for methodological issues has shifted considerably in recent years [1, 22]. MEDLINE was used to identify healthcare-related DCE studies, replicating the methodology of two comprehensive reviews of the healthcare DCE literature [1, 2]. The following search terms were used: conjoint, conjoint analysis, conjoint measurement, conjoint studies, conjoint choice experiment, part-worth utilities, functional measurement, paired comparisons, pairwise choices, discrete choice experiment, dce, discrete choice mode(l)ling, discrete choice conjoint experiment, and stated preference. Studies were included if they were choice-based, published as a full-text English-language article, and applied to healthcare. Consideration was given to the background information of the studies, and detailed consideration was given to whether and how sample size calculations were conducted. We also briefly describe the methods that have been used to obtain sample size estimates so far.

Literature Review Results

The search generated 505 possible references. After reading abstracts or full articles, 69 references met the inclusion criteria. The appendix shows the full list of references [Electronic Supplementary Material (ESM) 1]. Table 1 summarizes the review data. Most DCE studies were from the UK, with the USA, Canada, and Australia also major contributors. Most studies used 4–6 attributes and presented 9–16 choice sets per respondent. Sample sizes differed substantially between the DCE studies.

Table 1

Background information and sample size (method) used of published health care-related discrete-choice experiment studies in 2012 ( N  = 69)

Item                                   N (%)
Country of origin (a)
 UK                                    16 (23)
 USA                                   13 (19)
 Canada                                10 (14)
 Australia                             7 (10)
 Germany                               6 (9)
 Netherlands                           4 (6)
 Denmark                               3 (4)
 Other                                 19 (28)
Number of attributes (a)
 2–3                                   5 (7)
 4–5                                   24 (35)
 6                                     25 (36)
 7–9                                   17 (25)
 >9                                    3 (4)
Number of choices per respondent
 8 or fewer                            14 (20)
 9–16 choices                          47 (68)
 More than 16 choices                  5 (7)
 Not clearly reported                  3 (4)
Sample size used (a)
 <100                                  22 (32)
 100–300                               28 (41)
 300–600                               17 (25)
 600–1,000                             10 (14)
 >1,000                                6 (9)
Sample size method used (a)
 Parametric approach                   4 (6)
  Louviere et al. [6]                  3 (4)
  Rose and Bliemer [21]                1 (1)
 Rule of thumb                         9 (13)
  Johnson and Orme [37, 38]            5 (7)
  Pearmain et al. [39]                 2 (3)
  Lancsar and Louviere [9]             3 (4)
 Referring to studies                  8 (12)
  Review studies                       3 (4)
  Applied studies                      5 (7)
 Not (clearly) reported                49 (71)

a Totals do not add up to 100 % as some studies were conducted in different countries, used a different number of attributes per discrete-choice experiment, used several subgroups of respondents, and/or used multiple sample size methods

Of 69 DCEs, 22 (32 %) had sample sizes smaller than 100 respondents, whereas 16 (23 %) of the 69 DCEs had sample sizes larger than 600 respondents; six (9 %) DCEs even had sample sizes larger than 1000 respondents. More than 70 % of the DCE studies (49 of 69) did not (clearly) report whether and what kind of sample size method was used; 12 % of the studies (8 of 69) just referred to other DCE studies to explain the sample size used. For example, Huicho et al. [ 23 ] mentioned that “Based on the experience of previous studies [ 24 , 25 ], we aimed for a sample size of 80 nurses and midwives”, and Bridges et al. [ 26 ] mentioned “In a previously published pilot study, the conjoint analysis approach was shown to be both feasible and functional in a very low sample size ( n  = 20) [ 27 ]”. In 13 % of the DCE studies (9 of 69 [ 28 – 36 ]), one or more of the following rules of thumb were used to estimate the minimum sample size required: that proposed by (1) Johnson and Orme [ 37 , 38 ]; (2) Pearmain et al. [ 39 ]; and/or (3) Lancsar and Louviere [ 9 ].

In short, the rule of thumb proposed by Johnson and Orme [37, 38] suggests that the sample size (N) required for the main effects depends on the number of choice tasks (t), the number of alternatives (a), and the number of analysis cells (c) according to the following equation:

$$ N \geq \frac{500c}{ta} \qquad (1) $$

When considering main effects, c is equal to the largest number of levels for any of the attributes. When considering all two-way interactions, c is equal to the largest product of levels of any two attributes [38].
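This rule is straightforward to evaluate; a minimal R sketch (the function name and the example figures are ours, chosen for illustration):

    # Johnson and Orme rule of thumb (Eq. 1): N >= 500c / (t * a)
    n_johnson_orme <- function(c, t, a) ceiling(500 * c / (t * a))

    # e.g., largest attribute has 4 levels, 16 tasks, 2 alternatives per task:
    n_johnson_orme(c = 4, t = 16, a = 2)    # 63 respondents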

The rule of thumb proposed by Pearmain et al. [ 39 ] suggests that, for DCE designs, sample sizes over 100 are able to provide a basis for modeling preference data, whereas Lancsar and Louviere [ 9 ] mentioned “our empirical experience is that one rarely requires more than 20 respondents per questionnaire version to estimate reliable models, but undertaking significant post hoc analysis to identify and estimate co-variate effects invariably requires larger sample size”.

Four of the 69 reviewed DCE studies (6 %) used a parametric approach to estimate the minimum sample size required. A parametric approach can be used if one assumes, for example based on the law of large numbers, that the focal quantity (an estimated probability or coefficient) is Normally distributed; this assumption facilitates the derivation of the minimum required sample size. Three studies used the parametric approach proposed by Louviere et al. [6], and one study [40] reported the parametric approach proposed by Rose and Bliemer [21]. Louviere et al. [6] assume the study is being conducted to measure a choice probability with some desired level of accuracy. The asymptotic sampling distribution (i.e., the distribution as sample size N → ∞) of a proportion p_N, obtained from a random sample of size N, is Normal with mean p (the true population proportion) and variance pq/N, where q = 1−p. The minimum sample size needed to estimate the true proportion within α_1 % of the true value p with probability α_2 or greater has to satisfy the requirement that Prob(|p_N − p| ≤ α_1 p) ≥ α_2, which leads to the following equation:

$$ N \geq \frac{q}{r\,p\,\alpha_1^{2}} \left[ \Phi^{-1}\!\left( \frac{\alpha_2 + 1}{2} \right) \right]^{2} \qquad (2) $$

where Φ^{−1} is the inverse cumulative Normal distribution function and r is the number of choice sets per respondent. Hence, the parametric approach proposed by Louviere et al. [6] makes the required sample size depend on the number of choice sets per respondent (r), the true population proportion (p), its complement (q = 1−p), the allowed relative deviation from the true proportion (α_1), and the required probability level (α_2).
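For illustration, Eq. 2 can be evaluated in a few lines of R (a sketch; the input values below are ours):

    # Louviere et al. parametric approach (Eq. 2)
    n_louviere <- function(p, alpha1, alpha2, r) {
      q <- 1 - p
      z <- qnorm((alpha2 + 1) / 2)    # Phi^{-1}((alpha2 + 1) / 2)
      ceiling(q / (r * p * alpha1^2) * z^2)
    }
    # estimate a true proportion of 0.5 to within 10 % with 95 % probability,
    # given 16 choice sets per respondent:
    n_louviere(p = 0.5, alpha1 = 0.1, alpha2 = 0.95, r = 16)    # 25 respondents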

The parametric approach recently introduced by Rose and Bliemer [21] focuses on the minimum sample size required for the most critical parameter (i.e., so that every parameter value can be shown to be statistically different from zero). This parametric approach can only be used if prior parameter estimates are available and not equal to zero. The minimum sample size required to state with 95 % certainty that each parameter estimate differs from zero is:

$$ N \geq \max_k \left( \frac{1.96\,\sqrt{\Sigma_{\gamma_k}}}{\gamma_k} \right)^{2} \qquad (3) $$

where γ_k is the (prior) parameter estimate of attribute k, and Σ_γk is the corresponding variance of the parameter estimate of attribute k.
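Given prior estimates and their (per-respondent) variances, Eq. 3 reduces to one line of R; a sketch with hypothetical inputs:

    # Rose and Bliemer's most-critical-parameter rule (Eq. 3)
    n_rose_bliemer <- function(gamma, var_gamma) {
      max(ceiling((1.96 * sqrt(var_gamma) / gamma)^2))
    }
    # e.g., with illustrative priors and variances (not from any real study):
    n_rose_bliemer(gamma = c(-0.31, -0.21, -0.44),
                   var_gamma = c(1.2, 1.4, 1.1))    # 122 respondents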

Comment on the State of Play

The disadvantage of using one of the rules of thumb mentioned in Sect. 2.2 is that such rules are not intended to be strictly accurate or reliable. The parametric approach proposed by Louviere et al. [6] is not suitable for determining the minimum required sample size for coefficients in DCEs, as it focuses on choice probabilities and does not address minimum sample size requirements for testing specific hypotheses based on the parameter estimates produced. The parametric approach for minimum sample size calculation proposed by Rose and Bliemer [21] is based solely on the most critical parameter, so it is not specific to a particular hypothesis; nor does it incorporate a desired power level for the hypothesis tests of interest.

Determining Required Sample Sizes for Discrete-Choice Experiments (DCEs): Theory

In this section we explain the analysis needed to determine the minimum sample size requirements in terms of testing for specific hypotheses for coefficients in DCEs. Our proposed approach is more general than the parametric approaches mentioned in Sect. 2 , as it can be used for any particular hypothesis that is relevant to the researcher. We outline which elements are required before such a minimum sample size can be determined, why these elements are needed, and how to calculate the required sample size. To provide a step-by-step guide that is useful for researchers from all different kinds of backgrounds, we strive to keep the number of formulas in this section as low as possible. Nevertheless, a comprehensive explanation of the minimum sample size calculation for coefficients in DCEs can be found in the appendix (ESM 2).

Required Elements for Estimating Minimum Sample Size

Before the minimum sample size for coefficients in a DCE can be calculated, the following five elements are needed:

  • Significance level ( α )
  • Statistical power level (1− β )
  • Statistical model used in the DCE analysis [e.g., multinomial logit (MNL) model, mixed logit (MIXL) model, generalized multinomial logit (G-MNL) model]
  • Initial belief about the parameter values
  • The DCE design.

Significance Level ( α )

The significance level α sets the probability for an incorrect rejection of a true null hypothesis. For example, if one wants to be 95 % confident that the null hypothesis will not be rejected when it is true, α needs to be set at 1−0.95 = 0.05 (i.e. 5 %). Conversely, if one decides to perform a hypothesis test at a 1− α confidence level, there is by definition an α probability of finding a significant deviation when there is in fact no true effect. Perhaps unsurprisingly, the smaller the imposed value of α (i.e., the more certainty one requires), the larger the minimum required sample size will be.

Statistical Power Level (1− β )

β indicates the probability of failing to reject a null hypothesis when the null hypothesis is actually false. The chosen value of beta is related to the statistical power of a test (which is defined as 1− β ). As we want to assess whether a parameter value (coefficient) is significantly different from zero, we can define the sample size that enables us to find a significant deviation from zero in at least (1− β ) × 100 % of the cases. For example, a statistical power of 0.8 (or 80 %) means that a study (when conducted repeatedly over time) is likely to produce a statistically significant result eight times out of ten. A larger statistical power level will increase the minimum sample size needed.

Statistical Model Used in the DCE Analysis

The calculation of the minimum required sample size also depends on the type of statistical model that will be used to analyze the DCE data (e.g., MNL, MIXL, G-MNL). The type of statistical model affects the number of parameters that needs to be estimated, the corresponding parameter values, and the parameter interpretation. As a consequence, the estimation precision of the parameters, which we will characterize through the variance covariance matrix of the estimated parameters, also depends on the statistical model that is used. In order to properly determine the estimation precision of each of the parameters, the statistical model needs to be specified.

Initial Belief About the Parameter Values

Of course, if the true values of the parameters (coefficients) were known, one would not need to execute the DCE. Nevertheless, before a minimum sample size can be determined, an initial estimate of the parameter values is required, for two reasons. First, in models that are nonlinear in the parameters, such as choice models, the asymptotic variance–covariance (AVC) matrix depends on the values of the parameters themselves. This AVC matrix is an intermediate stage in the sample size calculation (see Sect. 3.2 for more details) and reflects the expected accuracy of the statistical estimates obtained using the statistical model identified under Sect. 3.1.3. Second, before a power calculation can be done, one has to specify a particular hypothesis and the power one wants to achieve given a certain degree of misspecification (i.e., the degree to which the true coefficient value deviates from its hypothesized value). As the null hypothesis, we use the hypothesis of no influence, i.e., the coefficient equals zero. The initial estimate of the parameter value can then be used as the value of the effect size. The closer the effect size is to zero, the more difficult it will be to find a significant effect, and hence the larger the minimum sample size will be. To obtain some insight into these parameter values, a small pilot DCE study, for example with 20–40 respondents, may be helpful.

The DCE Design

The large literature on efficient design generation indicates the importance of the design for obtaining accurate estimates and powerful tests. The DCE design is described by the number of choice sets, the number of alternatives per choice set, the number of attributes, and the combination of attribute levels in each choice set. The DCE design has a direct influence on the AVC matrix, which determines the estimation precision of the parameters, and hence has a direct influence on the minimum sample size required (see footnote 1).

Sample Size Calculation for DCEs

Once all five required elements mentioned in Sect. 3.1 have been determined, the minimum required sample size for the estimated coefficients in a DCE can be calculated. First, as an intermediate part of the sample size calculation, the AVC matrix has to be established. That is, the statistical model (Sect. 3.1.3), the initial belief about the parameter values, denoted by γ (Sect. 3.1.4), and the DCE design (Sect. 3.1.5) are all needed to infer the AVC matrix, Σ_γ, of the estimated parameters. Details on how to construct the variance–covariance matrix from this information can be found, for example, in McFadden [4] for MNL and in Bliemer and Rose [41] for panel MIXL. A variance–covariance matrix is a square matrix that contains the variances and covariances associated with all the estimated coefficients. The diagonal elements of this matrix contain the variances of the estimated coefficients, and the off-diagonal elements capture the covariances between all possible pairs of coefficients. For hypothesis tests on individual coefficients, we only need the diagonal elements of Σ_γ, which we denote by Σ_γk for the kth diagonal element.

Once the AVC matrix Σ_γ of the estimated parameters has been established and the significance level (α), the power level (1−β), and the effect sizes (δ) are set, the minimum required sample size (N) for each estimated coefficient in a DCE can be calculated from Eq. 4:

$$ N \geq \left( \frac{(z_{1-\beta} + z_{1-\alpha})\,\sqrt{\Sigma_{\gamma_k}}}{\delta_k} \right)^{2} \qquad (4) $$

Each element in this sample size calculation makes intuitive sense. In particular, with a larger effect size δ_k, a smaller sample size N suffices to have enough power to find a significant deviation. Testing at a higher confidence level 1−α increases z_{1−α} (see footnote 2), and thus increases the minimum required sample size N. The same holds when more statistical power is desired, as this increases z_{1−β} (see footnote 3). When the variance–covariance matrix contains smaller variances Σ_γk, the minimum required sample size N decreases, as the estimates will be more precise. Smaller values of Σ_γk can be obtained by using more choice sets, more alternatives per choice set, or a more efficient design.

Determining Required Sample Sizes for DCEs: A Practical Example

In this section, a practical example is provided to explain, step-by-step, how the minimum sample size requirement for a DCE study can be calculated. This is illustrated using R-code, which can also be found at http://www.erim.eur.nl/ecmc .

The DCE study used for this illustration concerns a DCE about patients’ preferences for preventive osteoporosis drug treatment [ 12 ]. In this DCE study, patients had to choose between drug treatment alternatives that differed in five treatment attributes: route of drug administration, effectiveness, side effects (nausea), treatment duration, and out-of-pocket costs. The DCE design was orthogonal and contained 16 choice sets. Each choice set consisted of two unlabeled drug treatment alternatives and an opt-out option.

In what follows, we show in seven steps how the minimum sample size for coefficients can be calculated for the DCE on patients’ preferences for preventive osteoporosis drug treatment.

The first step is to set the significance level α. For our illustration, we choose α = 0.05. The resulting confidence level is 95 %, assuming a one-tailed test (see footnote 4) (Box 1).

The second step is to choose the statistical power level. For our illustration, we opt for a standard statistical power level of 80 % (i.e., β  = 0.20, hence 1− β  = 0.80) (Box 2).
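Boxes 1 and 2 are not reproduced above; a minimal R equivalent of these first two steps (variable names are ours) would be:

    # Step 1: significance level (one-tailed test)
    alpha   <- 0.05
    z.alpha <- qnorm(1 - alpha)    # 1.64

    # Step 2: statistical power level
    beta   <- 0.20                 # power 1 - beta = 0.80
    z.beta <- qnorm(1 - beta)      # 0.84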

The third step is to choose the statistical model used to analyze the DCE data. For our illustration, we opt for an MNL model. In the R code, this choice affects the way the AVC matrix is calculated, which is outlined in step 6.

The fourth step concerns the initial beliefs about the parameter values. The DCE illustration regarding patients' preferences for preventive osteoporosis drug treatment contains five attributes (two categorical attributes and three linear attributes) [12], resulting in eight parameters to be estimated (see Table 2, column 'Parameter label'). We use the point estimates of the parameters as our guess of the coefficients and as the effect sizes δ (see Table 2, column 'Initial belief parameter value') (Box 3; a minimal R sketch follows Table 2).

Table 2

Alternatives, attributes and levels for preventive osteoporosis drug treatment, their parameter labels, initial belief about parameter values, and discrete-choice experiment design codes (based on de Bekker-Grob et al. [ 12 ])

Alternative / attribute                          Level                        Parameter label   Initial belief    DCE design code
                                                                                                parameter value
Alternative
 Constant (i.e., alternative-specific constant
 for drug treatment; intercept)                                               A                 1.23
 Alternative 1: drug treatment alternative I                                                                      1
 Alternative 2: drug treatment alternative II                                                                     1
 Alternative 3: opt-out alternative                                                                               0
Attribute
 Drug administration                             Tablet once a month
                                                 Tablet once a week           B1                −0.31             1
                                                 Injection every 4 months     B2                −0.21             1
                                                 Injection once a month       B3                −0.44             1
 Effectiveness (%)                                                            C                 0.028
                                                 5                                                                5
                                                 10                                                               10
                                                 25                                                               25
                                                 50                                                               50
 Side effect nausea                                                           D                 −1.10
                                                 No                                                               0
                                                 Yes                                                              1
 Treatment duration (years)                                                   E                 −0.04
                                                 1                                                                1
                                                 2                                                                2
                                                 5                                                                5
                                                 10                                                               10
 Cost (€)                                                                     F                 −0.0015
                                                 0                                                                0
                                                 120                                                              120
                                                 240                                                              240
                                                 720                                                              720
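Box 3 is likewise not reproduced above; in R, this step amounts to storing the initial beliefs of Table 2 in a named vector (a sketch; the names are ours):

    # Step 4: initial beliefs about the eight parameter values (Table 2)
    gamma <- c(A  =  1.23,      # constant (drug treatment)
               B1 = -0.31,      # tablet once a week
               B2 = -0.21,      # injection every 4 months
               B3 = -0.44,      # injection once a month
               C  =  0.028,     # effectiveness (per 1 % risk reduction)
               D  = -1.10,      # side effect nausea
               E  = -0.04,      # treatment duration (per year)
               F  = -0.0015)    # cost (per euro)
    ncoefficients <- length(gamma)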

The fifth step focuses on the DCE design. The DCE design requires eight parameters to be estimated (ncoefficients = 8). Each choice set contains three alternatives (nalts = 3), that is, two drug treatment alternatives and one opt-out alternative. The DCE design contains 16 choice sets (nchoices = 16) (Box 4).

Table 3

(Column key: A = constant; B1–B3 = I. route of drug administration; C = II. effectiveness; D = III. nausea; E = IV. duration; F = V. costs)

Choice task   Alternative   A   B1   B2   B3   C    D   E    F
1             1             1   1    0    0    5    0   10   120
1             2             1   0    1    0    10   1   1    240
1             3             0   0    0    0    0    0   0    0
2             1             1   0    0    1    5    1   5    720
2             2             1   0    0    0    10   0   10   0
2             3             0   0    0    0    0    0   0    0
3             1             1   0    0    0    25   1   10   240
3             2             1   1    0    0    50   0   1    720
3             3             0   0    0    0    0    0   0    0
...           ...           ..  ..   ..   ..   ..   ..  ..   ..
16            1             1   0    1    0    10   0   10   720
16            2             1   0    0    1    25   1   1    0
16            3             0   0    0    0    0    0   0    0

Alternative 1 = drug treatment alternative I; alternative 2 = drug treatment alternative II; alternative 3 = opt-out alternative. Values 0 and 1 in column A mean 'opt-out alternative' and 'drug treatment alternative', respectively; value 1 in columns B1, B2, and B3 means 'tablet once a week', 'injection every 4 months', and 'injection once a month', respectively; column C presents how effective a drug treatment alternative is (risk reduction of a hip fracture, in %); values 0 and 1 in column D mean 'no nausea as a side effect' and 'nausea as a side effect', respectively; column E presents the total treatment duration in years; and column F presents the out-of-pocket costs (€)

Each row of the design matrix contains the coded attribute levels for one alternative. Table 3 shows how the DCE design for our illustration was coded (columns A–F). For example, row 1 corresponds to the first preventive drug treatment alternative in choice set 1: a drug treatment alternative (value 1, column A) taken as a tablet once a week (value 1, column B1), which results in a 5 % risk reduction of a hip fracture (value 5, column C) without side effects (value 0, column D), with a treatment duration of 10 years (value 10, column E) and out-of-pocket costs of €120 (value 120, column F). Be aware that only the numeric design matrix itself (i.e., columns A–F of Table 3, without the headers) should be in the text file, so that it can be read correctly in R (Box 5).
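Boxes 4 and 5 are not reproduced above; a minimal R equivalent, assuming the numeric part of Table 3 has been saved as a whitespace-separated text file (the file name is ours), would be:

    # Step 5: the DCE design (columns A, B1-B3, C, D, E, F of Table 3)
    nalts    <- 3    # two drug treatment alternatives plus an opt-out
    nchoices <- 16   # number of choice sets in the design
    design   <- as.matrix(read.table("design.txt"))   # 48 rows x 8 columns
    stopifnot(nrow(design) == nalts * nchoices,
              ncol(design) == ncoefficients)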

Having chosen our statistical model, our initial beliefs about the parameter values (i.e., our guess of the effect sizes), and our DCE design matrix, we are able to compute the AVC matrix Σ_γ (Box 6).
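Box 6 is where the actual computation happens. The following sketch is our own implementation of the MNL case (it reuses the objects defined in the previous sketches): the Fisher information of a single respondent answering all 16 choice sets is accumulated set by set, and its inverse is the AVC matrix.

    # Step 6: AVC matrix of the MNL estimates for one respondent
    info <- matrix(0, ncoefficients, ncoefficients)
    for (s in 1:nchoices) {
      rows <- ((s - 1) * nalts + 1):(s * nalts)
      Xs   <- design[rows, , drop = FALSE]    # coded alternatives in choice set s
      p    <- exp(Xs %*% gamma)
      p    <- as.vector(p / sum(p))           # MNL choice probabilities
      info <- info + t(Xs) %*% (diag(p) - p %*% t(p)) %*% Xs
    }
    avc <- solve(info)                        # Sigma_gamma (per respondent)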

The final step is to calculate the required sample size for the MNL coefficients in our DCE. For this we use Eq. 4 (Box 7).
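In R, applying Eq. 4 to every coefficient is then a one-liner (a sketch of Box 7, reusing the objects above and, as in the text, taking the initial beliefs as the effect sizes):

    # Step 7: minimum sample size per coefficient (Eq. 4)
    delta <- gamma
    n.min <- ceiling(((z.alpha + z.beta) * sqrt(diag(avc)) / abs(delta))^2)
    n.min    # compare with the alpha = 0.05, 1 - beta = 0.8 row of Table 4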

Table 4

Minimum sample size required to obtain the desired power level 1− β for finding an effect when testing at a specific confidence level 1− α

(Column key: A = constant; B1–B3 = I. route of drug administration; C = II. effectiveness; D = III. nausea; E = IV. duration; F = V. costs)

α       1−β    A    B1    B2    B3   C   D   E    F
0.1     0.6    2    28    72    13   2   1   17   3
0.05    0.6    3    43    111   19   2   1   27   4
0.025   0.6    4    58    151   26   3   2   36   6
0.01    0.6    6    79    205   35   5   3   49   8
0.1     0.7    3    39    100   17   2   1   24   4
0.05    0.7    4    56    145   25   3   2   35   6
0.025   0.7    6    73    190   33   4   3   46   7
0.01    0.7    7    96    250   43   6   3   60   10
0.1     0.8    4    53    139   24   3   2   33   5
0.05    0.8    6    73    190   33   4   3   46   7
0.025   0.8    7    93    241   42   5   3   58   9
0.01    0.8    9    119   308   53   7   4   74   12
0.1     0.9    6    78    202   35   5   3   49   8
0.05    0.9    8    102   263   45   6   4   64   10
0.025   0.9    10   125   323   56   7   4   78   13
0.01    0.9    12   154   400   69   9   5   97   16

As can be seen from Table 4, a minimum sample size of 190 respondents is needed (at a statistical power of 0.8 and α = 0.05) to determine whether 'injection every 4 months' differs significantly from 'tablet once a month' (the reference attribute level) (Table 4, column B2). If a smaller sample size of, for example, 111 respondents were used and no significant result found for this parameter, one would have had only a statistical power of 0.6 (at α = 0.05) for concluding that respondents do not prefer 'tablet once a month' over 'injection every 4 months'. As a proof of principle, we compared the standard errors and confidence intervals from the actual study [12] against the predicted standard errors and confidence intervals. The results were quite similar (Table 5), which gives further evidence that our sample size calculation makes sense.

Table 5

Parameter estimates and precision from an actual discrete-choice experiment study [ 12 ] relative to those predicted by the sample size calculations

Attribute                                       MNL results actual study (N = 117) (a)        Predicted results based on 117 subjects
                                                Parameter value   SE       95 % CI            SE       95 % CI
Constant (drug treatment)                       1.23              0.218    0.81 to 1.66       0.109    1.02 to 1.45
Drug administration (base level tablet once a month):
 Tablet once a week                             −0.31             0.070    −0.45 to −0.17     0.099    −0.50 to −0.12
 Injection every 4 months                       −0.21             0.097    −0.41 to −0.02     0.108    −0.43 to −0.01
 Injection once a month                         −0.44             0.100    −0.64 to −0.25     0.094    −0.63 to −0.26
Effectiveness (1 % risk reduction)              0.03              0.003    0.02 to 0.03       0.002    0.02 to 0.03
Side effect nausea                              −1.10             0.104    −1.30 to −0.89     0.065    −1.22 to −0.97
Treatment duration (1 year)                     −0.04             0.010    −0.06 to −0.02     0.010    −0.06 to −0.02
Cost (€1)                                       −0.0015           0.0002   −0.002 to −0.001   0.0002   −0.002 to −0.001

CI confidence interval, SE standard error

a Number of observations 5589 (117 respondents × 16 choices × 3 options per choice, minus 27 missing values), Pseudo R 2  = 0.185, log pseudolikelihood = −1668.7

In this paper, we have summarized how researchers have dealt with sample size calculations for health care-related DCE studies. We found that more than 70 % of the health care-related DCE studies published in 2012 did not (clearly) report whether and what kind of sample size method was used. Just 6 % of these studies used a parametric approach for sample size estimation. Nevertheless, the parametric approaches used were not suitable as power calculations for determining the minimum sample size required for hypothesis testing on DCE coefficients. To fill this gap, we explained the analysis needed to determine the required sample size in DCEs from a hypothesis-testing perspective. That is, we clarified that five elements are needed before such a minimum sample size can be determined: the significance level (α), the statistical power level (1−β), the statistical model used in the DCE analysis, the initial belief about the parameter values, and the DCE design. An important feature of the resulting sample size formula is that the required sample size grows with the inverse square of the effect size: to detect, at the same power, an effect that is 50 % smaller, the required sample must be four times larger.

To build a bridge between theory and practice, we created generic R-code as a practical tool for researchers to determine the minimum required sample size for coefficients in DCEs. We then illustrated, step by step, how the sample size requirement can be obtained using this R-code. Although the R-code presented in this paper is for MNL only, the theory also applies to other choice models, such as the nested logit, mixed logit, scaled MNL, or generalized MNL.

Our approach for determining the minimum required sample size for coefficients in DCEs can also be extended to functions of parameters. For example, one might want to know whether patients are willing to pay a specific amount for a 10 % increase in effectiveness. To test such a hypothesis, confidence intervals for a willingness-to-pay (WTP) measure are needed. Once it is determined how these are inferred from the limiting distribution of the parameters [42], Σ_WTP (instead of Σ_γ) is known and the required sample size can be computed.
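For example, for the WTP for a 1 % gain in effectiveness, WTP = −γ_C/γ_F in the notation of Table 2, one standard route (our illustration; the text above does not spell this out) is the first-order delta method:

$$ \Sigma_{\mathrm{WTP}} \approx \frac{\Sigma_{\gamma_C}}{\gamma_F^{2}} - \frac{2\,\gamma_C}{\gamma_F^{3}}\operatorname{Cov}(\gamma_C,\gamma_F) + \frac{\gamma_C^{2}}{\gamma_F^{4}}\,\Sigma_{\gamma_F} $$

after which Eq. 4 can be applied with Σ_WTP in place of Σ_γk.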

From a practical point of view, in health care-related DCEs, the number of patients and physicians that can be approached is often given, and sometimes rather small. Especially in these cases, our tool could indicate that power will be low. Using efficient designs (striving for small values of Σ_γk), more alternatives per choice set, or clear wording and layout are ways to increase the power that is achieved.

The approach presented in this paper can also be used to reverse engineer the power that a specific design has for a given sample size. This can help researchers who find an insignificant result to ensure that they had sufficient power to detect a reasonably sized effect.

The use of sample size calculations for healthcare-related DCE studies is largely lacking. We have shown how sample size calculations can be conducted for DCEs when researchers are interested in testing whether a particular attribute (level) affects the choices that patients or physicians make. Such sample size calculations should be executed far more often than is currently the case in healthcare, as under-powered studies may lead to false insights and incorrect decisions for policy makers.


Acknowledgments

The authors thank Marie-Louise Essink-Bot and Ewout Steyerberg for their support regarding the osteoporosis drug treatment DCE study, Domino Determann for her support regarding the identification of healthcare-related DCE studies published in 2012, and Chris Carswell and John Bridges for their invitation to write this article. None of the authors have competing interests. This study was not supported by any external sources or funds.

Author contributions

EW de Bekker-Grob designed the study, conducted the review and DCE study, contributed to the analyses, and drafted the manuscript. B Donkers designed the study, performed the formulas, R-code and analyses, and drafted the manuscript. MF Jonker contributed to the R-code, the analyses, and to the writing of the manuscript. EA Stolk contributed to the writing of the manuscript. EW de Bekker-Grob and B Donkers have full access to all of the data in the study and can take responsibility for the integrity of the data and the accuracy of the data analysis. EW de Bekker-Grob acts as the overall guarantor.

1 All aspects of our sample size calculation are conditional on the design of the experiment and the implementation in a questionnaire. The survey design will have an impact on the precision of the parameters that should be accounted for through its effect on the anticipated parameter values. Also, the model specification has an impact on the precision of the parameters.

2 The value of α (Sect. 3.1.1) is used to determine the corresponding quantile of the Normal distribution (z_{1−α}) needed in the sample size calculations. The value of z_{1−α} for a given α can be found in basic statistics textbooks or easily calculated in Microsoft Excel® using the formula NORMSINV(1−α). The value of z_{1−α} for an α of 0.05 equals 1.64.

3 In the computation of the sample size, we need z_{1−β}, the quantile of the Normal distribution with Φ(z_{1−β}) = 1−β; here again, Φ denotes the cumulative distribution function of the Normal distribution. The value of z_{1−β} for a given 1−β can be found in basic statistics textbooks or easily calculated in Microsoft Excel® using the formula NORMSINV(1−β); e.g., assuming a statistical power level of 80 %, the value of z_{1−β} is 0.84 [i.e., NORMSINV(0.8)].

4 A one-tailed test is used if only deviations in one direction are considered possible; in contrast, a two-tailed test is used if deviations of the estimated parameter in either direction from zero are considered theoretically possible. Be aware that, for a two-tailed test, the alpha level should be divided by 2 (i.e., α /2).

E. W. de Bekker-Grob and B. Donkers contributed equally to this work.



Sample Size Rule of Thumb for Choice-Based Conjoint (CBC)


How large a sample will I need for this project? That is a question we often receive at Sawtooth Software Technical Support, and one that we are happy to consult on.

When it comes to sample size rules of thumb, there is rarely a one-size-fits-all answer, and conjoint research is no exception. A greater sample leads to greater precision, and thus increased confidence in our estimates; that much is simple enough. But things get complicated once we take into consideration population size, segmentation, statistical power, effect sizes, and deciding which estimates to prioritize.

Calculating sample sizes often requires foreknowledge of unknowable factors. And though a larger sample leads to greater precision, it must always be balanced against the cost of recruiting respondents, which creates an incentive to get away with the smallest panel of participants possible.

It can be useful to develop some heuristics to quickly determine an acceptable sample size range for your project. You probably have your own set of homemade rules and best practices developed over time. Let me share three rules of thumb that Sawtooth Software has used over the years, specifically for estimating proper sample sizes for choice-based conjoint (CBC) experiments.


Rule of Thumb for Sample Size #1: Start with 300

It's down to the wire. The request for proposal is in your hand, you have two seconds to form your bid, and you need a best guess at how much that sample is going to cost you. So, with no time to think, what should your sample size be? 300 respondents… probably.

For many statistical applications, conjoint included, 300 respondents is a good rule of thumb for sample size. Planning to report subgroups separately? In that case, it is best to plan for at least 200 members of each subgroup. So if the intent is to compare the choice behavior of urban, suburban, and rural customers, your first thought should be a sample size of at least 600, i.e., 200 per segment. The "300 respondents per study, 200 per subgroup" rule of thumb is almost silly in its simplicity, but it works well in practice and is certainly the quickest rule to apply in a pinch.

Rule of Thumb for Sample Size #2: Ensure at Least 500 Appearances per Level

You have planned ahead this time around, padded your prep time, and allowed an entire 30 seconds to calculate your required sample size before you need to get that bid out the door. Need the fastest formula possible? In that case, follow our second rule of thumb for sample size: ensure at least 500 appearances per level. The idea is that, across your entire sample of respondents, every attribute level should appear a minimum of 500 times. This calculation requires that you know or decide how many tasks (sets) you plan to include in the exercise, how many concepts (cards) will be shown in each task, and which attribute has the greatest number of levels. Once you have those figures, plug them into this formula for a quick answer:

$$ n \geq \frac{500c}{ta} $$

where c is the maximum number of levels in an attribute, t is the number of tasks in the exercise, and a is the number of concepts per task (not including the "None").

As an example, imagine a CBC experiment involving 3 attributes. The first attribute has 3 levels, the second attribute has 6 levels, and the third has 4. In this experiment we plan to show 3 CBC concepts in each choice task, and we will ask each respondent to complete 8 choice tasks in total. Our back-of-the-envelope formula suggests a minimum sample size of about:

$$ n \geq \frac{500 \times 6}{8 \times 3} = 125 \text{ respondents.} $$


Two notes on this approach. First, 500 exposures should be considered the bare minimum in most cases. In practice, it is safer to plan for 1,000 exposures per level, so feel free to adjust the formula accordingly. Second, this approach was developed at a time when aggregate estimation was the only option for CBC data. It is not considered optimal for individual-level modeling techniques like hierarchical Bayes that are common today. Still, as a heuristic, this approach does a good job of getting you into the ballpark of a sample size estimate, and many researchers still find it useful as a rule of thumb.

Sample Size Rule of Thumb #3: Ask the Random Robots

This time you are not in a crunch. You have all the time in the world, and you want to take a nice, well-thought-out approach to determining your sample size. You have even gone so far as to complete your CBC design: attributes and levels all in place, tasks and concepts all laid out. Now you can test any sample size you like with a set of robotic random respondents. First, choose the number of respondents that you would like to use as a starting point. Next, generate that number of random responses to your CBC exercise (this can be accomplished quickly using a random number generator). Then run those responses through a logit model. The resulting utilities will be pretty meaningless (it was random data, after all), but the standard errors on those utility estimates can be useful. The sample size rule of thumb here: if the main-effects standard errors are below 0.05, you probably have a large enough sample. For two-way interaction effects and any alternative-specific attributes, keep the standard errors below 0.1.

If you are using Lighthouse Studio for your programming, you can automate this entire process using the CBC Test Design feature. In any case, the "random robots" approach is quite robust, as it allows you to tweak other settings besides respondent count in order to drive down that error. Increasing the number of tasks, adding more concepts to each task, or removing design prohibitions will all bring those estimated standard errors down and might save you the trouble of purchasing more respondents. Keep in mind that, just like the second rule of thumb (the back-of-the-envelope method), this approach was designed for aggregate estimation and relies on a pooled logit model to calculate the standard errors. Consequently, the test analysis will not perfectly reflect the results you eventually get from an individual-respondent model.
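To make the idea concrete, here is a minimal, self-contained R sketch of the "random robots" check (our own illustration, not Sawtooth's implementation; a toy random design stands in for a real exported CBC design):

    # Toy stand-in for a CBC design: 8 tasks x 3 concepts, 10 dummy-coded columns
    set.seed(1)
    ntasks <- 8; nalts <- 3; npar <- 10
    design <- matrix(rbinom(ntasks * nalts * npar, 1, 0.5), ncol = npar)

    n_resp  <- 300                    # candidate sample size to test
    choices <- matrix(sample(nalts, n_resp * ntasks, replace = TRUE),
                      nrow = n_resp)  # every robot answers at random

    # Pooled (aggregate) MNL log-likelihood, as the heuristic assumes
    negll <- function(b) {
      ll <- 0
      for (s in 1:ntasks) {
        Xs <- design[((s - 1) * nalts + 1):(s * nalts), , drop = FALSE]
        lp <- Xs %*% b
        lp <- lp - log(sum(exp(lp)))  # log choice probabilities
        ll <- ll + sum(lp[choices[, s]])
      }
      -ll
    }
    fit <- optim(rep(0, npar), negll, method = "BFGS", hessian = TRUE)
    se  <- sqrt(diag(solve(fit$hessian)))  # standard errors at this sample size
    all(se < 0.05)                         # all TRUE -> sample is likely adequate

Swapping in your actual coded design matrix and your planned task and concept counts gives the same diagnostic that Lighthouse Studio automates.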

Final Thoughts About Finding a Statistically Significant Sample Size

As mentioned, there is much more to consider when determining sample size than what I have described here. These three rules of thumb are uncomplicated heuristics: extreme simplifications that provide a useful general rule to apply to most situations. If you are new to conjoint research, please consider this a jumping-off point in the discussion, not a final prescription. I will include some additional reading below for those who wish to explore the topic further. Remember that, as powerful as choice-based conjoint (CBC) might be, you still cannot disregard general sampling considerations. You can't cheat representativeness, you can't get good estimates out of bad data, and you can't quadruple your precision by quadrupling the length of your survey without expecting your respondents to quit or fall asleep. But maybe you can use a few of these rules of thumb to help you out on your next project.


Additional Reading

Getting Started with Conjoint Analysis: Strategies for Product Design and Pricing Research – Chapter 7: Sample Size Issues for Conjoint Analysis

https://sawtoothsoftware.com/resources/technical-papers/sample-size-issues-for-conjoint-analysis-studies

Becoming an Expert in Conjoint Analysis: Choice Modeling for Pros – Chapter 8: Sample Size Decisions

Quick and Easy Power Analysis for Choice Experiments

https://sawtoothsoftware.com/resources/blog/posts/quick-and-easy-power-analysis-for-choice-experiments




What is the minimum sample size for discrete choice experiment?

I was wondering if there is a minimum sample size for conducting a discrete choice experiment. From what I know, if choosing the sample size is a problem, one can resort to the magic number of 400+. Although it would be nice to have such a sample size, this kind of experiment is expensive, so 400+ may be impractical. I have read several journal articles about DCEs and was surprised that their sample sizes did not even reach 400.


  • This is a difficult question to answer, given the information provided above. Without knowing the number of attributes (and levels) in your experimental design and the number of questions you plan to ask of each respondent, I cannot provide a justifiable answer. Do keep in mind that the quality of your experimental design can greatly affect the variance of your parameter estimates and, therefore, increasing the efficiency of your design is equivalent to increasing your sample size. – user16892, Nov 15, 2012
  • @Anderson I see. So it really depends on the design of my experiment, my research questions, etc. I guess I have to carefully design my experiment to get the minimum and "optimum" sample size to avoid unnecessary costs. Thanks very much, Anderson. – archie, Nov 15, 2012

According to Orme (2010) , one rule of thumb for an acceptable sample size is:

$$ n \geq 500c/ta, $$ where:

  • n is the number of respondents,
  • t is the number of tasks,
  • a is the number of alternatives per task (not including the none alternative),
  • c is the number of analysis cells. When considering main effects, c is equal to the largest number of levels for any one attribute. If you are also considering all two-way interactions, c is equal to the largest product of levels of any two attributes.

For example, if you are only considering main effects in a 3×3×4 design with three alternatives (plus one for 'choose none') and twelve choice tasks per respondent (without placing respondents into different blocks), you will need at least:

$$ n \geq 500×4/(12×3)\approx56 $$ respondents.

Online conjoint analysis tools, such as Conjoint.ly, can calculate this automatically when you set up an experiment.



Sample size requirements for stated choice experiments


John M. Rose & Michiel C. J. Bliemer, Institute of Transport and Logistics Studies, The University of Sydney


Stated choice (SC) experiments represent the dominant data paradigm in the study of behavioral responses of individuals, households, and other organizations, yet in the past little has been known about the sample size requirements for models estimated from such data. Traditional orthogonal designs and existing sampling theories do not adequately address the issue, and hence researchers have had to resort to simple rules of thumb, ignore the issue and collect samples of arbitrary size (hoping that the sample is sufficiently large to produce reliable parameter estimates), or make assumptions about the data that are unlikely to hold in practice. In this paper, we demonstrate how a recently proposed sample size computation can be used to generate so-called S-efficient designs, using prior parameter values, to estimate panel mixed multinomial logit models. Sample size requirements for such designs in SC studies are investigated. A numerical case study shows that a D-efficient, and even more so an S-efficient, design requires a (much) smaller sample size than a random orthogonal design in order to estimate all parameters at the level of statistical significance. Furthermore, it is shown that a wide level range has a significant positive influence on the efficiency of the design and therefore on the reliability of the parameter estimates.


As has been demonstrated by Bliemer and Rose (2011) for the MNL model, there is no need to generate large designs and then give subsets of choice tasks to respondents. In fact, this could even be suboptimal, as small optimal designs contain the best choice tasks that retrieve the most information, and larger optimal designs are likely to contain inferior choice tasks. This has also been argued by Kanninen (2002), who demonstrates that an optimal dataset is simply a repetition of the smallest set of optimal choice tasks. However, there exist several cases in which respondent-specific designs may be better, for example in case of segmentation of the population, or to make choice sets more realistic by pivoting off a respondent-specific reference alternative; see Rose et al. (2008).

References

Ben-Akiva, M., Lerman, S.R.: Discrete choice analysis: theory and application to travel demand. MIT Press, Cambridge (1985)


Bliemer, M.C.J., Rose, J.M.: Efficiency and Sample Size Requirements for Stated Choice Studies. Working Paper ITLS-WP-05-08, Institute of Transport and Logistics Studies, The University of Sydney (2005)

Bliemer, M.C.J., Rose, J.M.: Efficiency and sample size requirements for stated choice studies. Presented at the 88th annual meeting of the Transportation Research Board, Washington, DC (2009)

Bliemer, M.C.J., Rose, J.M., Hensher, D.A.: Efficient stated choice experiments for estimating nested logit models. Transp. Res. Part B 43 (1), 19–35 (2009)


Bliemer, M.C.J., Rose, J.M., Hess, S.: Approximation of Bayesian efficiency in experimental choice designs. J. Choice Model. 1 (1), 98–127 (2008)

Bliemer, M.C.J., Rose, J.M.: Experimental design influences on stated choice outputs: an empirical study in air travel choice. Transp. Res. Part A. 45 (1), 63–79 (2011)

Bliemer, M.C.J., Rose, J.M.: Construction of Experimental Designs for Mixed Logit Models Allowing for Correlation Across Choice Observations. Transp. Res. Part B. 46 (3), 720–734 (2010)

Bunch, D.S., Louviere, J.J., Anderson, D.: A Comparison of Experimental Design Strategies for Choice-Based Conjoint Analysis with Generic-Attribute Multinomial Logit Models, Working Paper. Graduate School of Management, University of California, Davis (1996)

Carlsson, F., Martinsson, P.: Design techniques for stated preference methods in health economics. Health Econ. 12 (4), 281–294 (2003)

Garrod, G.D., Scarpa, R., Willis, K.G.: Estimating the benefits of traffic calming on through routes: a choice experiment approach. J. Transp. Econ. Policy 36 (2), 211–232 (2002)

Hensher, D.A., Prioni, P.: A service quality index for area-wide contract performance assessment. J. Transp. Econ. Policy 36 (1), 93–114 (2002)

Hensher, D.A., Rose, J.M., Greene, W.H.: Applied choice analysis: a primer. Cambridge University Press, Cambridge (2005)


Huber, J., Zwerina, K.: The importance of utility balance in efficient choice designs. J. Mark. Res. 33 , 307–317 (1996)

Johnson, R., Orme, B. (2003) Getting the most from CBC, Sawtooth Software Research Paper Series, Sawtooth Software, Sequim

Kanninen, B.J.: Optimal design for multinomial choice experiments. J. Mark. Res. 39 , 214–217 (2002)

Kessels, R., Bradley, B., Goos, P., Vandebroek, M.: An efficient algorithm for constructing Bayesian optimal choice designs. J. Bus. Econ. Stat. 27 (2), 279–291 (2009)

Louviere, J.J., Hensher, D.A., Swait, J.D.: Stated choice methods: analysis and application. Cambridge University Press, Cambridge (2000)

Manski, C.F., McFadden, D.: Alternative Estimators and Sample Designs for Discrete Choice Analysis. In: Manski, C.F., McFadden, D. (eds.) Structural analysis of discrete data with econometric applications, pp. 2–50. MIT Press, Cambridge (1981)

McFadden, D.: Econometric analysis of qualitative response models. In: Griliches, Z., Intriligator, M.D. (eds.) Handbook of econometrics II, pp. 1395–1457. Elsevier Science, Amsterdam (1984)


McFadden, D.: Conditional logit analysis of qualitative choice behavior. In: Zarembka, P. (ed.) Frontiers in econometrics, pp. 105–142. Academic Press, New York (1974)

ChoiceMetrics (2012) Ngene 1.1.1 User Manual & Reference Guide, Australia, www.choice-metrics.com . Accessed 14 March 2012

Orme, B. (1998) Sample size issues for conjoint analysis studies, Sawtooth Software Technical Paper, Sequim

Orme, B. (2010) Sample Size Issues for Conjoint Analysis Studies, Sawtooth Software Technical Paper, Sequim

Quan, W., Rose, J.M., Collins, A.T., Bliemer, M.C.J.: A comparison of Algorithms for Generating Efficient Choice Experiments, Working Paper ITLS-WP-11-19. Institute of Transport and Logistics Studies, The University of Sydney, Sydney (2011)

Rose, J.M., Bliemer, M.C.J.: Sample Optimality in the Design of Stated Choice Experiments, Working Paper ITLS-WP-05-13. Institute of Transport and Logistics Studies, The University of Sydney, Sydney (2005a)

Rose, J.M., Bliemer, M.C.J.: Constructing Efficient Choice Experiments, Working Paper, ITLS-WP-05-07. Institute of Transport and Logistics Studies, The University of Sydney, Sydney (2005b)

Rose, J.M., Bliemer, M.C.J., Hensher, D.A., Collins, A.T.: Designing efficient stated choice experiments in the presence of reference alternatives. Transp. Res. Part B 42 (4), 396–406 (2008)

Sándor, Z., Wedel, M.: Designing conjoint choice experiments using managers’ prior beliefs. J. Mark. Res. 38 , 430–444 (2001)

Sándor, Z., Wedel, M.: Profile construction in experimental choice designs for mixed logit models. Marketing Science 21 (4), 455–476 (2002)

Yu, J., Goos, P.P., Vandebroek, M.: Efficient conjoint choice designs in the presence of respondent heterogeneity. Marketing Science 28 , 122–135 (2009)



Rose, J.M., Bliemer, M.C.J. Sample size requirements for stated choice experiments. Transportation 40 , 1021–1041 (2013). https://doi.org/10.1007/s11116-013-9451-z

Published: 01 February 2013

Issue Date: September 2013

DOI: https://doi.org/10.1007/s11116-013-9451-z

Keywords

  • Stated choice experiments
  • D-optimality
  • Sample size
  • Simple random sampling
  • Mixed Multinomial Logit model