Logo for Open Educational Resources

Chapter 5. Sampling

Introduction

Most Americans will experience unemployment at some point in their lives. Sarah Damaske (2021) was interested in learning about how men and women experience unemployment differently. To answer this question, she interviewed unemployed people. After conducting a “pilot study” with twenty interviewees, she realized she was also interested in finding out how working-class and middle-class persons experienced unemployment differently. She found one hundred persons through local unemployment offices. She purposefully selected a roughly equal number of men and women and working-class and middle-class persons for the study. This would allow her to make the kinds of comparisons she was interested in. She further refined her selection of persons to interview:

I decided that I needed to be able to focus my attention on gender and class; therefore, I interviewed only people born between 1962 and 1987 (ages 28–52, the prime working and child-rearing years), those who worked full-time before their job loss, those who experienced an involuntary job loss during the past year, and those who did not lose a job for cause (e.g., were not fired because of their behavior at work). ( 244 )

The people she ultimately interviewed compose her sample. They represent (“sample”) the larger population of the involuntarily unemployed. This “theoretically informed stratified sampling design” allowed Damaske “to achieve relatively equal distribution of participation across gender and class,” but it came with some limitations. For one, the unemployment centers were located in primarily White areas of the country, so there were very few persons of color interviewed. Qualitative researchers must make these kinds of decisions all the time—who to include and who not to include. There is never an absolutely correct decision, as the choice is linked to the particular research question posed by the particular researcher, although some sampling choices are more compelling than others. In this case, Damaske made the choice to foreground both gender and class rather than compare all middle-class men and women or women of color from different class positions or just talk to White men. She leaves the door open for other researchers to sample differently. Because science is a collective enterprise, it is likely that someone will be inspired to conduct a study similar to Damaske’s but with an entirely different sample.

This chapter is all about sampling. After you have developed a research question and have a general idea of how you will collect data (observations or interviews), how do you go about actually finding people and sites to study? Although there is no “correct number” of people to interview, the sample should follow the research question and research design. You might remember studying sampling in a quantitative research course. Sampling is important here too, but it works a bit differently. Unlike quantitative research, qualitative research involves nonprobability sampling. This chapter explains why this is so and what qualities instead make a good sample for qualitative research.

Quick Terms Refresher

  • The population is the entire group that you want to draw conclusions about.
  • The sample is the specific group of individuals that you will collect data from.
  • The sampling frame is the actual list of individuals that the sample will be drawn from. Ideally, it should include the entire target population (and nobody who is not part of that population).
  • The sample size is how many individuals (or units) are included in your sample.

The “Who” of Your Research Study

After you have turned your general research interest into an actual research question and identified an approach you want to take to answer that question, you will need to specify the people you will be interviewing or observing. In most qualitative research, the objects of your study will indeed be people. In some cases, however, your objects might be content left by people (e.g., diaries, yearbooks, photographs) or documents (official or unofficial) or even institutions (e.g., schools, medical centers) and locations (e.g., nation-states, cities). Chances are, whatever “people, places, or things” are the objects of your study, you will not really be able to talk to, observe, or follow every single individual/object of the entire population of interest. You will need to create a sample of the population. Sampling in qualitative research has different purposes and goals than sampling in quantitative research. Sampling in both allows you to say something of interest about a population without having to include the entire population in your sample.

We begin this chapter with the case of a population of interest composed of actual people. After we have a better understanding of populations and samples that involve real people, we’ll discuss sampling in other types of qualitative research, such as archival research, content analysis, and case studies. We’ll then move to a larger discussion about the difference between sampling in qualitative research generally versus quantitative research, then we’ll move on to the idea of “theoretical” generalizability, and finally, we’ll conclude with some practical tips on the correct “number” to include in one’s sample.

Sampling People

To help think through samples, let’s imagine we want to know more about “vaccine hesitancy.” We’ve all lived through 2020 and 2021, and we know that a sizable number of people in the United States (and elsewhere) were slow to accept vaccines, even when these were freely available. By some accounts, about one-third of Americans initially refused vaccination. Why is this so? Well, as I write this in the summer of 2021, we know that some people actively refused the vaccination, thinking it was harmful or part of a government plot. Others were simply lazy or dismissed the necessity. And still others were worried about harmful side effects. The general population of interest here (all adult Americans who were not vaccinated by August 2021) may be as many as eighty million people. We clearly cannot talk to all of them. So we will have to narrow the number to something manageable. How can we do this?


First, we have to think about our actual research question and the form of research we are conducting. I am going to begin with a quantitative research question. Quantitative research questions tend to be simpler to visualize, at least when we are first starting out doing social science research. So let us say we want to know what percentage of each kind of resistance is out there and how race or class or gender affects vaccine hesitancy. Again, we don’t have the ability to talk to everyone. But harnessing what we know about normal probability distributions (see quantitative methods for more on this), we can find this out through a sample that represents the general population. We can’t really address these particular questions if we only talk to White women who go to college with us. And if you are really trying to generalize the specific findings of your sample to the larger population, you will have to employ probability sampling, a technique in which members of the population are selected at random so that everyone has an equal chance of ending up in the sample. Why randomly? If selection is truly random, all members have an equal opportunity to be part of the sample, and thus we avoid the problem of having only our friends and neighbors (who may be very different from other people in the population) in the study. Mathematically, there is a certain sample size that will be large enough to allow us to generalize our findings from the sample to the population at large. It might surprise you how small that number can be. Election polls of no more than one thousand people are routinely used to predict actual election outcomes of millions of people. Below that number, however, you will not be able to make reliable generalizations. Talking to five people at random is simply not enough to predict a presidential election.
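To see why random selection matters, here is a minimal sketch in Python. It is purely illustrative: the population size, the reasons for hesitancy, and their shares are invented for the example, not drawn from any survey.

```python
import random

# Invented stand-in for the unvaccinated population, scaled down for speed.
reasons = ["safety concerns", "distrust or politics", "apathy"]
true_shares = [0.40, 0.35, 0.25]  # hypothetical "true" proportions
population = random.choices(reasons, weights=true_shares, k=80_000)

# A simple random sample: every member has an equal chance of selection.
poll = random.sample(population, k=1000)

# With roughly a thousand random draws, sample shares track the true shares closely.
for reason in reasons:
    print(reason, round(poll.count(reason) / len(poll), 3))

# With only five draws, the estimates swing wildly from run to run.
tiny = random.sample(population, k=5)
print({r: tiny.count(r) for r in reasons})
```

Re-running the script shows the thousand-person estimates staying near the assumed shares while the five-person estimates jump around, which is the intuition behind the polling example above.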

In order to answer quantitative research questions of causality, one must employ probability sampling. Quantitative researchers try to generalize their findings to a larger population. Samples are designed with that in mind. Qualitative researchers ask very different questions, though. Qualitative research questions are not about “how many” of a certain group do X (in this case, what percentage of the unvaccinated hesitate out of concern about safety rather than reject vaccination on political grounds). Qualitative research employs nonprobability sampling. By definition, not everyone has an equal opportunity to be included in the sample. The researcher might select White women they go to college with to provide insight into racial and gender dynamics at play. Whatever is found by doing so will not be generalizable to everyone who has not been vaccinated, or even all White women who have not been vaccinated, or even all White women who have not been vaccinated who are in this particular college. That is not the point of qualitative research at all. This is a really important distinction, so I will repeat it in bold: Qualitative researchers are not trying to statistically generalize specific findings to a larger population. They have not failed when their sample cannot be generalized, as that is not the point at all.

In the previous paragraph, I said it would be perfectly acceptable for a qualitative researcher to interview five White women with whom she goes to college about their vaccine hesitancy “to provide insight into racial and gender dynamics at play.” The key word here is “insight.” Rather than use a sample as a stand-in for the general population, as quantitative researchers do, the qualitative researcher uses the sample to gain insight into a process or phenomenon. The qualitative researcher is not going to be content with simply asking each of the women to state her reason for not being vaccinated and then draw conclusions that, because one in five of these women were concerned about their health, one in five of all people were also concerned about their health. That would be, frankly, a very poor study indeed. Rather, the qualitative researcher might sit down with each of the women and conduct a lengthy interview about what the vaccine means to her, why she is hesitant, how she manages her hesitancy (how she explains it to her friends), what she thinks about others who are unvaccinated, what she thinks of those who have been vaccinated, and what she knows or thinks she knows about COVID-19. The researcher might include specific interview questions about the college context, about their status as White women, about the political beliefs they hold about racism in the US, and about how their own political affiliations may or may not provide narrative scripts about “protective whiteness.” There are many interesting things to ask and learn about and many things to discover. Where a quantitative researcher begins with clear parameters to set their population and guide their sample selection process, the qualitative researcher is discovering new parameters, making it impossible to engage in probability sampling.

Looking at it this way, sampling for qualitative researchers needs to be more strategic. More theoretically informed. What persons can be interviewed or observed that would provide maximum insight into what is still unknown? In other words, qualitative researchers think through what cases they could learn the most from, and those are the cases selected to study: “What would be ‘bias’ in statistical sampling, and therefore a weakness, becomes intended focus in qualitative sampling, and therefore a strength. The logic and power of purposeful sampling lie in selecting information-rich cases for study in depth. Information-rich cases are those from which one can learn a great deal about issues of central importance to the purpose of the inquiry, thus the term purposeful sampling” (Patton 2002:230; emphases in the original).

Before selecting your sample, though, it is important to clearly identify the general population of interest. You need to know this before you can determine the sample. In our example case, it is “adult Americans who have not yet been vaccinated.” Depending on the specific qualitative research question, however, it might be “adult Americans who have been vaccinated for political reasons” or even “college students who have not been vaccinated.” What insights are you seeking? Do you want to know how politics is affecting vaccination? Or do you want to understand how people manage being an outlier in a particular setting (unvaccinated where vaccinations are heavily encouraged if not required)? More clearly stated, your population should align with your research question. Think back to the opening story about Damaske’s work studying the unemployed. She drew her sample narrowly to address the particular questions she was interested in pursuing. Knowing your questions or, at a minimum, why you are interested in the topic will allow you to draw the best sample possible to achieve insight.

Once you have your population in mind, how do you go about getting people to agree to be in your sample? In qualitative research, it is permissible to find people by convenience. Just ask for people who fit your sample criteria and see who shows up. Or reach out to friends and colleagues and see if they know anyone who fits. Don’t let the name convenience sampling mislead you; this is not exactly “easy,” and it is certainly a valid form of sampling in qualitative research. The more unknowns you have about what you will find, the more convenience sampling makes sense. If you don’t know how race or class or political affiliation might matter, and your population is unvaccinated college students, you can construct a sample of college students by placing an advertisement in the student paper or posting a flyer on a notice board. Whoever answers is your sample. That is what is meant by a convenience sample. A common variation of convenience sampling is snowball sampling. This is particularly useful if your target population is hard to find. Let’s say you posted a flyer about your study and only two college students responded. You could then ask those two students for referrals. They tell their friends, and those friends tell other friends, and, like a snowball, your sample gets bigger and bigger.
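If it helps to picture the snowball mechanics, here is a minimal sketch. The names and referral lists are entirely made up; the point is only the recruit-then-ask-for-referrals loop.

```python
# Hypothetical referral network: each recruit can name friends who also
# fit the sample criteria (all names invented for the example).
referrals = {
    "Ana": ["Ben", "Chris"],
    "Ben": ["Dana"],
    "Chris": ["Dana", "Eli"],
    "Dana": [],
    "Eli": ["Fay"],
    "Fay": [],
}

def snowball(seeds, target_n):
    """Recruit by referral, starting from whoever answered the flyer."""
    recruited = []
    queue = list(seeds)
    while queue and len(recruited) < target_n:
        person = queue.pop(0)
        if person not in recruited:
            recruited.append(person)
            queue.extend(referrals.get(person, []))  # ask each new recruit for referrals
    return recruited

# Two students answered the flyer; referrals grow the sample from there.
print(snowball(seeds=["Ana", "Chris"], target_n=5))
```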

Researcher Note

Gaining Access: When Your Friend Is Your Research Subject

My early experience with qualitative research was rather unique. At that time, I needed to do a project that required me to interview first-generation college students, and my friends, with whom I had been sharing a dorm for two years, just perfectly fell into the sample category. Thus, I just asked them and easily “gained my access” to the research subject; I know them, we are friends, and I am part of them. I am an insider. I also thought, “Well, since I am part of the group, I can easily understand their language and norms, I can capture their honesty, read their nonverbal cues well, and will get more information, as they will be more open to me because they trust me.” All in all, easy access with rich information. But, gosh, I did not realize that my status as an insider came with a price! When structuring the interview questions, I began to realize that rather than focusing on the unique experiences of my friends, I mostly based the questions on my own experiences, assuming we have similar if not the same experiences. I began to struggle with my objectivity and even questioned my role; am I doing this as part of the group or as a researcher? I came to know later that my status as an insider or my “positionality” may impact my research. It not only shapes the process of data collection but might heavily influence my interpretation of the data. I came to realize that although my inside status came with a lot of benefits (especially for access), it could also bring some drawbacks.

—Dede Setiono, PhD student focusing on international development and environmental policy, Oregon State University

The more you know about what you might find, the more strategic you can be. If you wanted to compare how politically conservative and politically liberal college students explained their vaccine hesitancy, for example, you might construct a sample purposively, finding an equal number of both types of students so that you can make those comparisons in your analysis. This is what Damaske (2021) did. You could still use convenience or snowball sampling as a way of recruitment. Post a flyer at the conservative student club and then ask for referrals from the one student who agrees to be interviewed. As with convenience sampling, there are variations of purposive sampling as well as other names used (e.g., judgment, quota, stratified, criterion, theoretical). Try not to get bogged down in the nomenclature; instead, focus on identifying the general population that matches your research question and then using a sampling method that is most likely to provide insight, given the types of questions you have.
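A quota-style purposive selection can be sketched in a few lines. Everything here is hypothetical (the volunteer pool, the attribute, and the quota of two per group); the logic is simply “keep recruiting until each comparison group is full.”

```python
# Invented volunteer pool, e.g., students who responded to flyers or referrals.
volunteers = [
    {"name": "P1", "politics": "conservative"},
    {"name": "P2", "politics": "liberal"},
    {"name": "P3", "politics": "liberal"},
    {"name": "P4", "politics": "conservative"},
    {"name": "P5", "politics": "liberal"},
]

def fill_quotas(pool, group_key, per_group):
    """Take volunteers in the order they appear until every group's quota is met."""
    selected = {}
    for person in pool:
        group = person[group_key]
        selected.setdefault(group, [])
        if len(selected[group]) < per_group:
            selected[group].append(person["name"])
    return selected

print(fill_quotas(volunteers, group_key="politics", per_group=2))
# -> {'conservative': ['P1', 'P4'], 'liberal': ['P2', 'P3']}
```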

There are all kinds of ways of being strategic with sampling in qualitative research. Here are a few of my favorite techniques for maximizing insight:

  • Consider using “extreme” or “deviant” cases. Maybe your college houses a prominent anti-vaxxer who has written about and demonstrated against the college’s policy on vaccines. You could learn a lot from that single case (depending on your research question, of course).
  • Consider “intensity”: people and cases and circumstances where your questions are more likely to feature prominently (but not extremely or deviantly). For example, you could compare those who volunteer at local Republican and Democratic election headquarters during an election season in a study on why party matters. Those who volunteer are more likely to have something to say than those who are more apathetic.
  • Maximize variation, as with the case of “politically liberal” versus “politically conservative,” or include an array of social locations (young vs. old; Northwest vs. Southeast region). This kind of heterogeneity sampling can capture and describe the central themes that cut across the variations: any common patterns that emerge, even in this wildly mismatched sample, are probably important to note!
  • Rather than maximize the variation, you could select a small homogenous sample to describe some particular subgroup in depth. Focus groups are often the best form of data collection for homogeneity sampling.
  • Think about which cases are “critical” or politically important—ones that “if it happens here, it would happen anywhere” or a case that is politically sensitive, as with the single “blue” (Democratic) county in a “red” (Republican) state. In both, you are choosing a site that would yield the most information and have the greatest impact on the development of knowledge.
  • On the other hand, sometimes you want to select the “typical”—the typical college student, for example. You are trying not to generalize from the typical but to illustrate aspects that may be typical of this case or group. When selecting for typicality, be clear with yourself about why the typical matches your research questions (and who might be excluded or marginalized in doing so).
  • Finally, it is often a good idea to look for disconfirming cases: if you are at the stage where you have a hypothesis (of sorts), you might select those who do not fit your hypothesis—you will surely learn something important there. They may be “exceptions that prove the rule” or exceptions that force you to alter your findings in order to make sense of these additional cases.

In addition to all these sampling variations, there is the theoretical approach taken by grounded theorists, in which the researcher samples people (or events) comparatively on the basis of their potential to represent important theoretical constructs. The sample, one can say, is by definition representative of the phenomenon of interest. It accompanies the constant comparative method of analysis. In the words of the founders of Grounded Theory, “Theoretical sampling is sampling on the basis of the emerging concepts, with the aim being to explore the dimensional range or varied conditions along which the properties of the concepts vary” (Strauss and Corbin 1998:73).

When Your Population is Not Composed of People

I think it is easiest for most people to think of populations and samples in terms of people, but sometimes our units of analysis are not actually people. They could be places or institutions. Even so, you might still want to talk to people or observe the actions of people to understand those places or institutions. Or not! In the case of content analyses (see chapter 17), you won’t even have people involved at all but rather documents or films or photographs or news clippings. Everything we have covered about sampling applies to other units of analysis too. Let’s work through some examples.

Case Studies

When constructing a case study, it is helpful to think of your cases as sample populations in the same way that we considered people above. If, for example, you are comparing campus climates for diversity, your overall population may be “four-year college campuses in the US,” and from there you might decide to study three college campuses as your sample. Which three? Will you use purposeful sampling (perhaps [1] selecting three colleges in Oregon that are different sizes or [2] selecting three colleges across the US located in different political cultures or [3] varying the three colleges by racial makeup of the student body)? Or will you select three colleges at random, out of convenience? There are justifiable reasons for all approaches.

As with people, there are different ways of maximizing insight in your sample selection. Think about the following rationales: typical, diverse, extreme, deviant, influential, crucial, or even embodying a particular “pathway” (Gerring 2008). When choosing a case or particular research site, Rubin (2021) suggests you bear in mind, first, what you are leaving out by selecting this particular case/site; second, what you might be overemphasizing by studying this case/site and not another; and, finally, whether you truly need to worry about either of those things—“that is, what are the sources of bias and how bad are they for what you are trying to do?” (89).

Once you have selected your cases, you may still want to include interviews with specific people or observations at particular sites within those cases. Then you go through possible sampling approaches all over again to determine which people will be contacted.

Content: Documents, Narrative Accounts, and So On

Although not often discussed as sampling, your selection of documents and other units to use in various content/historical analyses is subject to similar considerations. When you are asking quantitative-type questions (percentages and proportions within a general population), you will want to follow probabilistic sampling. For example, I created a random sample of accounts posted on the website studentloanjustice.org to delineate the types of problems people were having with student debt (Hurst 2007). Even though my data was qualitative (narratives of student debt), I was actually asking a quantitative-type research question, so it was important that my sample was representative of the larger population (debtors who posted on the website). On the other hand, when you are asking qualitative-type questions, the selection process should be very different. In that case, use nonprobabilistic techniques, either convenience (where you are really new to this data and do not have the ability to set comparative criteria or even know what a deviant case would be) or some variant of purposive sampling. Let’s say you were interested in the visual representation of women in media published in the 1950s. You could select a national magazine like Time for a “typical” representation (and for its convenience, as all issues are freely available on the web and easy to search). Or you could compare one magazine known for its feminist content versus one known for its antifeminist content. The point is, sample selection is important even when you are not interviewing or observing people.

Goals of Qualitative Sampling versus Goals of Quantitative Sampling

We have already discussed some of the differences in the goals of quantitative and qualitative sampling above, but it is worth further discussion. The quantitative researcher seeks a sample that is representative of the population of interest so that they may properly generalize the results (e.g., if 80 percent of first-gen students in the sample were concerned with costs of college, then we can say there is a strong likelihood that 80 percent of first-gen students nationally are concerned with costs of college). The qualitative researcher does not seek to generalize in this way. They may want a representative sample because they are interested in typical responses or behaviors of the population of interest, but they may very well not want a representative sample at all. They might want an “extreme” or deviant case to highlight what could go wrong with a particular situation, or maybe they want to examine just one case as a way of understanding what elements might be of interest in further research. When thinking of your sample, you will have to know why you are selecting the units, and this relates back to your research question or sets of questions. It has nothing to do with having a representative sample to generalize results. You may be tempted—or it may be suggested to you by a quantitatively minded member of your committee—to create as large and representative a sample as you possibly can to earn credibility from quantitative researchers. Ignore this temptation or suggestion. The only thing you should be considering is what sample will best bring insight into the questions guiding your research. This has implications for the number of people (or units) in your study as well, which is the topic of the next section.

What is the Correct “Number” to Sample?

Because we are not trying to create a generalizable representative sample, the guidelines for the “number” of people to interview or news stories to code are also a bit more nebulous. There are some brilliant, insightful studies out there with an n of 1 (meaning one person or one account used as the entire set of data). This is particularly so in the case of autoethnography, a variation of ethnographic research that uses the researcher’s own subject position and experiences as the basis of data collection and analysis. But it is true for all forms of qualitative research. There are no hard-and-fast rules here. The number to include is what is relevant and insightful to your particular study.

That said, humans do not thrive under such ambiguity, and there are a few helpful suggestions that can be made. First, many qualitative researchers talk about “saturation” as the end point for data collection. You stop adding participants when you are no longer getting any new information (or so very little that the cost of adding another interview subject or spending another day in the field exceeds any likely benefits to the research). The term saturation was first used in this sense by Glaser and Strauss (1967), the founders of Grounded Theory. Here is their explanation: “The criterion for judging when to stop sampling the different groups pertinent to a category is the category’s theoretical saturation. Saturation means that no additional data are being found whereby the sociologist can develop properties of the category. As he [or she] sees similar instances over and over again, the researcher becomes empirically confident that a category is saturated. [They go] out of [their] way to look for groups that stretch diversity of data as far as possible, just to make certain that saturation is based on the widest possible range of data on the category” (61).

It makes sense that the term was developed by grounded theorists, since this approach is rather more open-ended than other approaches used by qualitative researchers. With so much left open, having a guideline of “stop collecting data when you don’t find anything new” is reasonable. However, saturation can’t help much when first setting out your sample. How do you know how many people to contact to interview? What number will you put down in your institutional review board (IRB) protocol (see chapter 8)? You may guess how many people or units it will take to reach saturation, but there really is no way to know in advance. The best you can do is think about your population and your questions and look at what others have done with similar populations and questions.
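Saturation is ultimately a judgment call, but the stopping rule itself can be made concrete. Here is a minimal sketch, assuming each interview has already been coded into themes; the codes and the three-interview window are invented for illustration, not a standard.

```python
def saturation_point(coded_interviews, window=3):
    """Return the interview count at which the last `window` interviews
    added no new codes, or None if that never happens."""
    seen = set()
    no_new_streak = 0
    for i, codes in enumerate(coded_interviews, start=1):
        new_codes = codes - seen
        seen |= codes
        no_new_streak = 0 if new_codes else no_new_streak + 1
        if no_new_streak >= window:
            return i
    return None

# Codes assigned to seven successive interviews (invented data).
interview_codes = [
    {"cost", "fear"}, {"fear", "distrust"}, {"cost"}, {"side effects"},
    {"fear"}, {"cost", "fear"}, {"fear"},
]
print(saturation_point(interview_codes))  # -> 7: interviews 5-7 added nothing new
```

In practice you would keep interviewing a bit beyond this point and deliberately seek out varied participants, exactly as Glaser and Strauss advise above.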

Here are some suggestions to use as a starting point: For phenomenological studies, try to interview at least ten people for each major category or group of people. If you are comparing male-identified, female-identified, and gender-neutral college students in a study on gender regimes in social clubs, that means you might want to design a sample of thirty students, ten from each group. This is the minimum suggested number. Damaske’s (2021) sample of one hundred allows room for up to twenty-five participants in each of four “buckets” (e.g., working-class*female, working-class*male, middle-class*female, middle-class*male). If there is more than one comparative group (e.g., you are comparing students attending three different colleges, and you are comparing White and Black students in each), you can sometimes reduce the number for each group in your sample to five, for a total of thirty students in this case. But that is really the bare minimum below which you will not want to go. A lot of people will not trust you with only “five” cases in a bucket. Lareau (2021:24) advises a minimum of seven or nine for each bucket (or “cell,” in her words). The point is to think about what your analyses might look like and how comfortable you will be with a certain number of persons fitting each category.
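The bucket arithmetic here is simple enough to automate when you are sketching a design. The factors and per-cell numbers below are just the examples from this paragraph, not recommendations.

```python
from itertools import product

def design_cells(factors, per_cell):
    """List every cell of a fully crossed comparative design and the total n."""
    cells = list(product(*factors.values()))
    return cells, len(cells) * per_cell

# Damaske-style design: class x gender, twenty-five interviewees per bucket.
cells, total = design_cells({"class": ["working", "middle"],
                             "gender": ["women", "men"]}, per_cell=25)
print(len(cells), total)   # 4 buckets, 100 interviewees

# Three colleges x two racial groups at the bare-minimum five per bucket.
_, minimum = design_cells({"college": ["A", "B", "C"],
                           "race": ["White", "Black"]}, per_cell=5)
print(minimum)             # 30 students
```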

Because qualitative research takes so much time and effort, it is rare for a beginning researcher to include more than thirty to fifty people or units in the study. You may not be able to conduct all the comparisons you might want simply because you cannot manage a larger sample. In that case, the limits of who you can reach or what you can include may influence you to rethink an original overcomplicated research design. Rather than include students from every racial group on a campus, for example, you might want to sample strategically, thinking about which comparisons offer the most contrast (and thus the most insight), possibly excluding majority-race (White) students entirely, and simply using previous literature to fill in gaps in our understanding. For example, one of my former students was interested in discovering how race and class worked at a predominantly White institution (PWI). Due to time constraints, she simplified her study from an original sample frame of middle-class and working-class domestic Black and international African students (four buckets) to a sample frame of domestic Black and international African students (two buckets), allowing the complexities of class to come through individual accounts rather than from part of the sample frame. She wisely decided not to include White students in the sample, as her focus was on how minoritized students navigated the PWI. She was able to successfully complete her project and develop insights from the data with fewer than twenty interviewees. [1]

But what if you had unlimited time and resources? Would it always be better to interview more people or include more accounts, documents, and units of analysis? No! Your sample size should reflect your research question and the goals you have set yourself. Larger numbers can sometimes work against your goals. If, for example, you want to help bring out individual stories of success against the odds, adding more people to the analysis can end up drowning out those individual stories. Sometimes, the perfect size really is one (or three, or five). It really depends on what you are trying to discover and achieve in your study. Furthermore, studies of one hundred or more (people, documents, accounts, etc.) can sometimes be mistaken for quantitative research. Inevitably, the large sample size will push the researcher into simplifying the data numerically. And readers will begin to expect generalizability from such a large sample.

To summarize, “There are no rules for sample size in qualitative inquiry. Sample size depends on what you want to know, the purpose of the inquiry, what’s at stake, what will be useful, what will have credibility, and what can be done with available time and resources” (Patton 2002:244).

Researcher Note

How Did You Find/Construct a Sample?

Since qualitative researchers work with comparatively small sample sizes, getting your sample right is rather important. Yet it is also difficult to accomplish. For instance, a key question you need to ask yourself is whether you want a homogeneous or heterogeneous sample. In other words, do you want to include people in your study who are by and large the same, or do you want to have diversity in your sample?

For many years, I have studied the experiences of students who were the first in their families to attend university. There is a rather large number of sampling decisions I need to consider before starting the study. (1) Should I only talk to first-in-family students, or should I have a comparison group of students who are not first-in-family? (2) Do I need to strive for a gender distribution that matches undergraduate enrollment patterns? (3) Should I include participants who reflect diversity in gender identity and sexuality? (4) How about racial diversity? First-in-family status is strongly related to certain ethnic and racial identities. (5) And how about areas of study?

As you can see, if I wanted to accommodate all these differences and get enough study participants in each category, I would quickly end up with a sample size of hundreds, which is not feasible in most qualitative research. In the end, for me, the most important decision was to maximize the voices of first-in-family students, which meant that I only included them in my sample. As for the other categories, I figured it was going to be hard enough to find first-in-family students, so I started recruiting with an open mind and an understanding that I may have to accept a lack of gender, sexuality, or racial diversity and then not be able to say anything about these issues. But I would definitely be able to speak about the experiences of being first-in-family.

—Wolfgang Lehmann, author of “Habitus Transformation and Hidden Injuries”

Examples of “Sample” Sections in Journal Articles

Think about some of the studies you have read in college, especially those with rich stories and accounts about people’s lives. Do you know how the people were selected to be the focus of those stories? If the account was published by an academic press (e.g., University of California Press or Princeton University Press) or in an academic journal, chances are that the author included a description of their sample selection. You can usually find these in a methodological appendix (book) or a section on “research methods” (article).

Here are two examples from recent books and one example from a recent article:

Example 1. In It’s Not Like I’m Poor: How Working Families Make Ends Meet in a Post-Welfare World, the research team employed a mixed methods approach to understand how parents use the earned income tax credit, a refundable tax credit designed to provide relief for low- to moderate-income working people (Halpern-Meekin et al. 2015). At the end of their book, their first appendix is “Introduction to Boston and the Research Project.” After describing the context of the study, they include the following description of their sample selection:

In June 2007, we drew 120 names at random from the roughly 332 surveys we gathered between February and April. Within each racial and ethnic group, we aimed for one-third married couples with children and two-thirds unmarried parents. We sent each of these families a letter informing them of the opportunity to participate in the in-depth portion of our study and then began calling the home and cell phone numbers they provided us on the surveys and knocking on the doors of the addresses they provided.…In the end, we interviewed 115 of the 120 families originally selected for the in-depth interview sample (the remaining five families declined to participate). ( 22 )

Was their sample selection based on convenience or purpose? Why do you think it was important for them to tell you that five families declined to be interviewed? There is actually a trick here, as the names were pulled randomly from a survey whose sample design was probabilistic. Why is this important to know? What can we say about the representativeness or the uniqueness of whatever findings are reported here?
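To see the mechanics behind a draw like the one described in the excerpt, here is a minimal sketch of a stratified random selection. The survey records, group labels, and quota logic are invented for illustration; they are not the study’s actual data or procedure.

```python
import random
from collections import defaultdict

# Invented stand-in for the ~332 survey records, with only the fields used here.
surveys = [{"id": i,
            "group": random.choice(["Black", "white", "Hispanic"]),
            "married": random.random() < 0.33}
           for i in range(332)]

def stratified_draw(records, per_group=40, married_share=1/3):
    """Within each racial/ethnic group, draw names at random, aiming for
    roughly one-third married couples and two-thirds unmarried parents."""
    strata = defaultdict(list)
    for r in records:
        strata[(r["group"], r["married"])].append(r)
    drawn = []
    for group in sorted({r["group"] for r in records}):
        n_married = round(per_group * married_share)
        n_unmarried = per_group - n_married
        drawn += random.sample(strata[(group, True)],
                               min(n_married, len(strata[(group, True)])))
        drawn += random.sample(strata[(group, False)],
                               min(n_unmarried, len(strata[(group, False)])))
    return drawn

print(len(stratified_draw(surveys)))  # ~120 names, as in the excerpt
```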

Example 2. In When Diversity Drops, Park (2013) examines the impact of decreasing campus diversity on the lives of college students. She does this through a case study of one student club, the InterVarsity Christian Fellowship (IVCF), at one university (“California University,” a pseudonym). Here is her description:

I supplemented participant observation with individual in-depth interviews with sixty IVCF associates, including thirty-four current students, eight former and current staff members, eleven alumni, and seven regional or national staff members. The racial/ethnic breakdown was twenty-five Asian Americans (41.6 percent), one Armenian (1.6 percent), twelve people who were black (20.0 percent), eight Latino/as (13.3 percent), three South Asian Americans (5.0 percent), and eleven people who were white (18.3 percent). Twenty-nine were men, and thirty-one were women. Looking back, I note that the higher number of Asian Americans reflected both the group’s racial/ethnic composition and my relative ease about approaching them for interviews. ( 156 )

How can you tell this is a convenience sample? What else do you note about the sample selection from this description?

Example 3. The last example is taken from an article published in the journal Research in Higher Education. Published articles tend to be more formal than books, at least when it comes to the presentation of qualitative research. In this article, Lawson (2021) is seeking to understand why female-identified college students drop out of majors that are dominated by male-identified students (e.g., engineering, computer science, music theory). Here is the entire relevant section of the article:

Method

Participants. Data were collected as part of a larger study designed to better understand the daily experiences of women in MDMs [male-dominated majors].…Participants included 120 students from a midsize, Midwestern University. This sample included 40 women and 40 men from MDMs—defined as any major where at least 2/3 of students are men at both the university and nationally—and 40 women from GNMs—defined as any major where 40–60% of students are women at both the university and nationally.…

Procedure. A multi-faceted approach was used to recruit participants; participants were sent targeted emails (obtained based on participants’ reported gender and major listings), campus-wide emails sent through the University’s Communication Center, flyers, and in-class presentations. Recruitment materials stated that the research focused on the daily experiences of college students, including classroom experiences, stressors, positive experiences, departmental contexts, and career aspirations. Interested participants were directed to email the study coordinator to verify eligibility (at least 18 years old, man/woman in MDM or woman in GNM, access to a smartphone). Sixteen interested individuals were not eligible for the study due to the gender/major combination. (482ff.)

What method of sample selection was used by Lawson? Why is it important to define “MDM” at the outset? How does this definition relate to sampling? Why were interested participants directed to the study coordinator to verify eligibility?

Final Words

I have found that students often find it difficult to be specific enough when defining and choosing their sample. It might help to think about your sample design and sample recruitment like a cookbook. You want all the details there so that someone else can pick up your study and conduct it as you intended. That person could be yourself, but this analogy might work better if you have someone else in mind. When I am writing down recipes, I often think of my sister and try to convey the details she would need to duplicate the dish. We share a grandmother whose recipes are full of handwritten notes in the margins, in spidery ink, that tell us what bowl to use when or where things could go wrong. Describe your sample clearly, convey the steps required accurately, and then add any other details that will help keep you on track and remind you why you have chosen to limit possible interviewees to those of a certain age or class or location. Imagine actually going out and getting your sample (making your dish). Do you have all the necessary details to get started?

Table 5.1. Sampling Type and Strategies

Probabilistic sampling (used primarily in quantitative research)
  • Simple random: Each member of the population has an equal chance of being selected.
  • Stratified: The sample is split into strata; members of each stratum are selected in proportion to the population at large.

Non-probabilistic sampling (used primarily in qualitative research)
  • Convenience: Simply includes the individuals who happen to be most accessible to the researcher.
  • Snowball: Used to recruit participants via other participants; the number of people you have access to “snowballs” as you get in contact with more people.
  • Purposive: Involves the researcher using their expertise to select a sample that is most useful to the purposes of the research. An effective purposive sample must have clear criteria and a rationale for inclusion.
  • Quota: Sets quotas to ensure that the sample represents certain characteristics in proportion to their prevalence in the population.

Further Readings

Fusch, Patricia I., and Lawrence R. Ness. 2015. “Are We There Yet? Data Saturation in Qualitative Research.” Qualitative Report 20(9):1408–1416.

Saunders, Benjamin, Julius Sim, Tom Kingstone, Shula Baker, Jackie Waterfield, Bernadette Bartlam, Heather Burroughs, and Clare Jinks. 2018. “Saturation in Qualitative Research: Exploring Its Conceptualization and Operationalization.” Quality & Quantity 52(4):1893–1907.

[1] Rubin (2021) suggests a minimum of twenty interviews (but safer with thirty) for an interview-based study and a minimum of three to six months in the field for ethnographic studies. For a content-based study, she suggests between five hundred and one thousand documents, although some will be “very small” (243–244).

Sampling: The process of selecting people or other units of analysis to represent a larger population. In quantitative research, this representation is taken quite literally, as statistically representative. In qualitative research, in contrast, sample selection is often made based on potential to generate insight about a particular topic or phenomenon.

Sampling frame: The actual list of individuals that the sample will be drawn from. Ideally, it should include the entire target population (and nobody who is not part of that population). Sampling frames can differ from the larger population when specific exclusions are inherent, as in the case of pulling names randomly from voter registration rolls where not everyone is a registered voter. This difference in frame and population can undercut the generalizability of quantitative results.

Sample: The specific group of individuals that you will collect data from. Contrast population.

Population: The large group of interest to the researcher. Although it will likely be impossible to design a study that incorporates or reaches all members of the population of interest, this should be clearly defined at the outset of a study so that a reasonable sample of the population can be taken. For example, if one is studying working-class college students, the sample may include twenty such students attending a particular college, while the population is “working-class college students.” In quantitative research, clearly defining the general population of interest is a necessary step in generalizing results from a sample. In qualitative research, defining the population is conceptually important for clarity.

Probability sampling: A sampling strategy in which the sample is chosen to represent (numerically) the larger population from which it is drawn by random selection. Each person in the population has an equal chance of making it into the sample. This is often done through a lottery or other chance mechanisms (e.g., a random selection of every twelfth name on an alphabetical list of voters). Also known as random sampling.

Convenience sampling: The selection of research participants or other data sources based on availability or accessibility, in contrast to purposive sampling.

Snowball sample: A sample generated non-randomly by asking participants to help recruit more participants, the idea being that a person who fits your sampling criteria probably knows other people with similar criteria.

Themes: Broad codes that are assigned to the main issues emerging in the data; identifying themes is often part of initial coding.

Disconfirming cases: A form of case selection focusing on examples that do not fit the emerging patterns. This allows the researcher to evaluate rival explanations or to define the limitations of their research findings. While disconfirming cases are found (not sought out), researchers should expand their analysis or rethink their theories to include/explain them.

Grounded Theory: A methodological tradition of inquiry and approach to analyzing qualitative data in which theories emerge from a rigorous and systematic process of induction. This approach was pioneered by the sociologists Glaser and Strauss (1967). The elements of theory generated from comparative analysis of data are, first, conceptual categories and their properties and, second, hypotheses or generalized relations among the categories and their properties – “The constant comparing of many groups draws the [researcher’s] attention to their many similarities and differences. Considering these leads [the researcher] to generate abstract categories and their properties, which, since they emerge from the data, will clearly be important to a theory explaining the kind of behavior under observation” (36).

Random sample: The result of probability sampling, in which a sample is chosen to represent (numerically) the larger population from which it is drawn by random selection. Each person in the population has an equal chance of making it into the random sample. This is often done through a lottery or other chance mechanisms (e.g., the random selection of every twelfth name on an alphabetical list of voters). This is typically not required in qualitative research but rather essential for the generalizability of quantitative research.

Deviant case: A form of case selection or purposeful sampling in which cases that are unusual or special in some way are chosen to highlight processes or to illuminate gaps in our knowledge of a phenomenon. See also extreme case.

Saturation: The point at which you can conclude data collection because every person you are interviewing, the interaction you are observing, or content you are analyzing merely confirms what you have already noted. Achieving saturation is often used as the justification for the final sample size.

Generalizability: The accuracy with which results or findings can be transferred to situations or people other than those originally studied. Qualitative studies generally are unable to use (and are uninterested in) statistical generalizability, where the sample population is said to be able to predict or stand in for a larger population of interest. Instead, qualitative researchers often discuss “theoretical generalizability,” in which the findings of a particular study can shed light on processes and mechanisms that may be at play in other settings. See also statistical generalization and theoretical generalization.

Recruitment materials: A term used by IRBs to denote all materials aimed at recruiting participants into a research study (including printed advertisements, scripts, audio or video tapes, or websites). Copies of this material are required in research protocols submitted to the IRB.

Introduction to Qualitative Research Methods Copyright © 2023 by Allison Hurst is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License , except where otherwise noted.


Critical case sampling


A critical case is one that permits analytic generalisation: if a theory can work under the conditions of the critical case, it is likely to be able to work anywhere.

Characteristics of particular cases may make them critical – the level of education of the population, the level of pollution of the environment, a community's level of resistance to government intervention. The purpose of the evaluation is to investigate the success of the program in this particular critical case. Commissioners of the evaluation may be interested in the results of the evaluation for logical generalisation to other sites.

Polar regions and small island states are identified by scientists as critical cases in investigating the phenomenon of climate change. These sites are monitored closely for environmental changes. By investigating these sites in depth, scientists hope to develop knowledge that can be applied to other sites.

Suppose national policymakers want to get local communities involved in making decisions about how their local program will be run, but they aren't sure that the communities will understand the complex regulations governing their involvement. The first critical case is to evaluate the regulations in a community of well-educated citizens. If they can't understand the regulations, then less-educated folks are sure to find the regulations incomprehensible. Or, conversely, one might consider the critical case to be a community consisting of people with quite low levels of education: 'If they can understand the regulations, anyone can.' (Patton 2014: 276)


Analytical generalisation involves making projections about the likely transferability of findings from an evaluation, based on a theoretical analysis of the factors producing outcomes and the effect of context.

Patton, M. Q. (2014). Qualitative Research & Evaluation Methods: Integrating Theory and Practice. SAGE Publications.


Critical Case Sampling


What is Critical Case Sampling?


This type of sampling is “…particularly useful if a small number of cases can be sampled” (Strewig & Stead, 2001). Those few cases (assuming they can be classified as “critical”) are the ones most likely to provide a wealth of information.

Why Use Critical Case Sampling?

As Patton (1990) states: “If it can happen there, it can happen anywhere.” Let’s say you were studying reading levels for a popular science magazine, with readers ranging from an 8th grade reading level all the way up to graduate school. If 8th graders can understand the science articles, then everyone above that level should also be able to understand the articles — making the 8th grade readers a “critical case.” Using critical cases also makes sense if you’re on a tight budget and want to study the participants who are most likely to provide crucial information.

Scientists deal with critical cases all of the time. For example, Gregor Mendel discovered the fundamental laws of inheritance through his meticulous study of pea plants. If Mendel had attempted to focus on a broad range of species instead of just one, he may not have made the discoveries that he is renowned for, such as his finding that genes are inherited in pairs — one from each parent.

References:

Cold Spring Harbor Laboratory. DNA Learning Center: Gregor Mendel. Retrieved 1/20/2017 from: https://www.dnalc.org/view/16151-biography-1-gregor-mendel-1822-1884-.html

Patton, M. (1990). Qualitative Evaluation and Research Methods. Beverly Hills, CA: Sage.

Strewig, F., & Stead, G. (2001). Planning, Reporting & Designing Research. Pearson, South Africa.

Different Types of Sampling Techniques in Qualitative Research


Key Takeaways:

  • Sampling techniques in qualitative research include purposive, convenience, snowball, and theoretical sampling.
  • Choosing the right sampling technique significantly impacts the accuracy and reliability of the research results.
  • It’s crucial to consider the potential impact on bias, sample diversity, and generalizability when choosing a sampling technique for your qualitative research.

Qualitative research seeks to understand social phenomena from the perspective of those experiencing them. It involves collecting non-numerical data such as interviews, observations, and written documents to gain insights into human experiences, attitudes, and behaviors. While qualitative research can provide rich and nuanced insights, the accuracy and generalizability of findings depend on the quality of the sampling process. Sampling techniques are a critical component of qualitative research, as they involve selecting a group of participants who can provide valuable insights into the research questions.

This article explores different types of sampling techniques in qualitative research. First, we’ll provide a comprehensive overview of four standard sampling techniques in qualitative research and then compare and contrast these techniques to provide guidance on choosing the most appropriate method for a particular study. Additionally, you’ll find best practices for sampling and learn about ethical issues researchers need to consider in selecting a sample. Overall, this article aims to help researchers conduct effective and high-quality sampling in qualitative research.

In this Article:

  • Purposive Sampling
  • Convenience Sampling
  • Snowball Sampling
  • Theoretical Sampling

  • Factors to Consider When Choosing a Sampling Technique
  • Practical Approaches to Sampling: Recommended Practices
  • Final Thoughts


4 Types of Sampling Techniques and Their Applications

Sampling is a crucial aspect of qualitative research as it determines the representativeness and credibility of the data collected. Several sampling techniques are used in qualitative research, each with strengths and weaknesses. In this section, let’s explore four standard sampling techniques in qualitative research: purposive sampling, convenience sampling, snowball sampling, and theoretical sampling. We’ll break down the definition of each technique, when to use it, and its advantages and disadvantages.

1. Purposive Sampling

Purposive sampling, also known as judgmental sampling, is a non-probability sampling technique commonly used in qualitative research. In purposive sampling, researchers intentionally select participants with specific characteristics or unique experiences related to the research question. The goal is to identify and recruit participants who can provide rich and diverse data to enhance the research findings.

Purposive sampling is used when researchers seek to identify individuals or groups with particular knowledge, skills, or experiences relevant to the research question. For instance, in a study examining the experiences of cancer patients undergoing chemotherapy, purposive sampling may be used to recruit participants who have undergone chemotherapy in the past year. Researchers can better understand the phenomenon under investigation by selecting individuals with relevant backgrounds.

Purposive Sampling: Strengths and Weaknesses

Purposive sampling is a powerful tool for researchers seeking to select participants who can provide valuable insight into their research question. This method is advantageous when studying groups with specific characteristics or experiences, where a random selection of participants might not yield the perspectives the study requires.

One of the main advantages of purposive sampling is the ability to improve the quality and accuracy of data collected by selecting participants most relevant to the research question. This approach also enables researchers to collect data from diverse participants with unique perspectives and experiences related to the research question.

However, researchers should also be aware of potential bias when using purposive sampling. The researcher’s judgment may influence the selection of participants, resulting in a biased sample that does not accurately represent the broader population. Another disadvantage is that purposive sampling may not be representative of the more general population, which limits the generalizability of the findings. To guarantee the accuracy and dependability of data obtained through purposive sampling, researchers must provide a clear and transparent justification of their selection criteria and sampling approach. This entails outlining the specific characteristics or experiences required for participants to be included in the study and explaining the rationale behind these criteria. This level of transparency not only helps readers to evaluate the validity of the findings, but also enhances the replicability of the research.

2. Convenience Sampling  

When time and resources are limited, researchers may opt for convenience sampling as a quick and cost-effective way to recruit participants. In this non-probability sampling technique, participants are selected based on their accessibility and willingness to participate rather than their suitability for the research question. Qualitative researchers often use this approach to gather a range of perspectives and experiences quickly.

During the COVID-19 pandemic, convenience sampling was a valuable method for researchers to collect data quickly and efficiently from participants who were easily accessible and willing to participate. For example, in a study examining the experiences of university students during the pandemic, convenience sampling allowed researchers to quickly recruit students who were available and willing to share their experiences. While the pandemic has passed, the use of convenience sampling during that period highlights its value in urgent situations where time and resources are limited.

Convenience Sampling: Strengths and Weaknesses

Convenience sampling offers several advantages to researchers, including its ease of implementation and cost-effectiveness. This technique allows researchers to quickly and efficiently recruit participants without spending time and resources identifying and contacting potential participants. Furthermore, convenience sampling can result in a diverse pool of participants, as individuals from various backgrounds and experiences may be more likely to participate.

While convenience sampling has the advantage of being efficient, researchers need to acknowledge its limitations. One of the primary drawbacks of convenience sampling is that it is susceptible to selection bias. Participants who are more easily accessible may not be representative of the broader population, which can limit the generalizability of the findings. Furthermore, convenience sampling may lead to issues with the reliability of the results, as it may not be possible to replicate the study using the same sample or a similar one.

To mitigate these limitations, researchers should carefully define the population of interest and ensure the sample is drawn from that population. For instance, if a study is investigating the experiences of individuals with a particular medical condition, researchers can recruit participants from specialized clinics or support groups for that condition. Researchers can also use statistical techniques such as stratified sampling or weighting to adjust for potential biases in the sample.
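
To make the weighting idea concrete, here is a minimal sketch (not from the article; all numbers, the age-group variable, and the population shares are invented for illustration) of post-stratification weighting: each participant in a convenience sample is weighted by the ratio of their subgroup’s assumed share of the target population to that subgroup’s share of the sample, so over-recruited groups count for less and under-recruited groups count for more.

    # Illustrative sketch only: post-stratification weighting of a convenience sample.
    # Assumed (made-up) population shares for one stratifying variable, e.g. age group.
    population_share = {"18-29": 0.20, "30-49": 0.35, "50+": 0.45}

    # A convenience sample that over-represents younger, easier-to-reach participants.
    sample = [
        {"id": 1, "age_group": "18-29"},
        {"id": 2, "age_group": "18-29"},
        {"id": 3, "age_group": "18-29"},
        {"id": 4, "age_group": "30-49"},
        {"id": 5, "age_group": "50+"},
    ]

    # Observed share of each subgroup within the sample.
    counts = {}
    for person in sample:
        counts[person["age_group"]] = counts.get(person["age_group"], 0) + 1
    sample_share = {group: n / len(sample) for group, n in counts.items()}

    # Weight = population share / sample share; under-represented groups get weights > 1.
    for person in sample:
        group = person["age_group"]
        person["weight"] = population_share[group] / sample_share[group]
        print(person)

In this toy example the over-recruited 18-29 group ends up with a weight below one, while the under-recruited older groups are weighted up, which is the basic adjustment the paragraph above alludes to.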

3. Snowball Sampling

Snowball sampling, also called referral sampling, is a unique approach researchers use to recruit participants in qualitative research. The technique involves identifying a few initial participants who meet the eligibility criteria and asking them to refer others they know who also fit the requirements. The sample size grows as referrals are added, creating a chain-like structure.

Snowball sampling enables researchers to reach out to individuals who may be hard to locate through traditional sampling methods, such as members of marginalized or hidden communities. For instance, in a study examining the experiences of undocumented immigrants, snowball sampling may be used to identify and recruit participants through referrals from other undocumented immigrants.
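
As a rough illustration of how a referral chain grows, the sketch below (not from the article; the referral network and the get_referrals helper are invented stand-ins for asking each participant who else they know that meets the eligibility criteria) starts from two seed participants and keeps following referrals until a target sample size is reached.

    from collections import deque

    def get_referrals(participant_id):
        # Hypothetical referral network; in a real study these come from participants.
        network = {
            "seed_1": ["p2", "p3"],
            "seed_2": ["p4"],
            "p2": ["p5", "p6"],
            "p3": [],
            "p4": ["p6", "p7"],
        }
        return network.get(participant_id, [])

    def snowball_sample(seeds, target_size):
        sample, queue, seen = [], deque(seeds), set(seeds)
        while queue and len(sample) < target_size:
            person = queue.popleft()
            sample.append(person)                   # recruit this participant
            for referral in get_referrals(person):  # then follow their referrals
                if referral not in seen:
                    seen.add(referral)
                    queue.append(referral)
        return sample

    print(snowball_sample(["seed_1", "seed_2"], target_size=6))

The chain-like structure is visible in the output: everyone after the seeds enters the sample only because someone already in the sample referred them, which is also why the seeds so strongly shape who ends up included.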

Snowball Sampling: Strengths and Weaknesses

Snowball sampling can produce in-depth and detailed data from participants with common characteristics or experiences. Since referrals are made within a network of individuals who share similarities, researchers can gain deep insights into a specific group’s attitudes, behaviors, and perspectives. On the other hand, because recruitment flows through participants’ social networks, the sample is shaped heavily by the initial participants and their connections, which can introduce selection bias and limit the diversity and generalizability of the findings.

4. Theoretical Sampling

Theoretical sampling is a sophisticated and strategic technique that can help researchers develop more in-depth and nuanced theories from their data. Instead of selecting participants based on convenience or accessibility, researchers using theoretical sampling choose participants based on their potential to contribute to the emerging themes and concepts in the data. This approach allows researchers to refine their research question and theory based on the data they collect rather than forcing their data to fit a preconceived idea.

Theoretical sampling is used when researchers conduct grounded theory research and have developed an initial theory or conceptual framework. In a study examining cancer survivors’ experiences, for example, theoretical sampling may be used to identify and recruit participants who can provide new insights into the coping strategies of survivors.

Theoretical Sampling: Strengths and Weaknesses

One of the significant advantages of theoretical sampling is that it allows researchers to refine their research question and theory based on emerging data. This means the research can be highly targeted and focused, leading to a deeper understanding of the phenomenon being studied. Additionally, theoretical sampling can generate rich and in-depth data, as participants are selected based on their potential to provide new insights into the research question.

However, participants are selected based on their perceived ability to offer new perspectives on the research question, which means specific perspectives or experiences may be overrepresented in the sample, leading to an incomplete understanding of the phenomenon being studied. Additionally, theoretical sampling can be time-consuming and resource-intensive, as researchers must continuously analyze the data and recruit new participants.

To mitigate the potential for bias, researchers can take several steps. One way to reduce bias is to use a diverse team of researchers to analyze the data and make participant selection decisions. Having multiple perspectives and backgrounds can help prevent researchers from unconsciously selecting participants who fit their preconceived notions or biases.

Another solution is to use reflexive sampling. Reflexive sampling involves selecting participants who are aware of the research process and can provide insights into how their own biases and experiences may shape their perspectives. By including participants who are reflexive about their subjectivity, researchers can generate more nuanced and self-aware findings.

Factors to Consider When Choosing a Sampling Technique

Choosing the proper sampling technique in qualitative research is one of the most critical decisions a researcher makes when conducting a study. The preferred method can significantly impact the accuracy and reliability of the research results.

For instance, purposive sampling provides a more targeted and specific sample, which helps to answer research questions related to that particular population or phenomenon. However, this approach may also introduce bias by limiting the diversity of the sample.

Conversely, convenience sampling may offer a sample that is more diverse in demographics and backgrounds, but it may also introduce bias by favoring participants who are more willing or available.

Snowball sampling may help study hard-to-reach populations, but it can also limit the sample’s diversity as participants are selected based on their connections to existing participants.

Theoretical sampling may offer an opportunity to refine the research question and theory based on emerging data, but it can also be time-consuming and resource-intensive.

Additionally, the choice of sampling technique can affect the generalizability of the research findings. Therefore, it’s crucial to consider the potential impact on bias, sample diversity, and generalizability when choosing a sampling technique. By doing so, researchers can select the most appropriate method for their research question and ensure the validity and reliability of their findings.

Practical Approaches to Sampling: Recommended Practices

Tips for Selecting Participants

When selecting participants for a qualitative research study, it is crucial to consider the research question and the purpose of the study. In addition, researchers should identify the specific characteristics or criteria they seek in their sample and select participants accordingly.

One helpful tip for selecting participants is to use a pre-screening process to ensure potential participants meet the criteria for inclusion in the study. Another technique is using multiple recruitment methods to ensure the sample is diverse and representative of the studied population.

Ensuring Diversity in Samples

Diversity in the sample is important to ensure the study’s findings apply to a wide range of individuals and situations. One way to ensure diversity is to use stratified sampling, which involves dividing the population into subgroups and selecting participants from each subgroup. This helps ensure the sample reflects the range of subgroups present in the larger population.
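
As a minimal sketch of that idea (not from the article; the recruitment pool and the “setting” variable are invented for illustration), the code below groups a pool of potential participants into subgroups and draws a fixed number from each, so no subgroup is left out of the final sample.

    import random
    from collections import defaultdict

    # Invented recruitment pool; "setting" is the stratifying variable in this toy example.
    pool = [
        {"name": "A", "setting": "urban"},
        {"name": "B", "setting": "urban"},
        {"name": "C", "setting": "suburban"},
        {"name": "D", "setting": "rural"},
        {"name": "E", "setting": "rural"},
        {"name": "F", "setting": "rural"},
    ]

    def stratified_pick(pool, stratum_key, per_stratum, seed=0):
        random.seed(seed)                                # fixed seed for a reproducible sketch
        strata = defaultdict(list)
        for person in pool:
            strata[person[stratum_key]].append(person)   # divide the pool into subgroups
        sample = []
        for members in strata.values():
            k = min(per_stratum, len(members))           # draw from every subgroup
            sample.extend(random.sample(members, k))
        return sample

    print(stratified_pick(pool, "setting", per_stratum=1))

Here the point is simply that every subgroup contributes at least one participant, which is what distinguishes this approach from taking whoever happens to be easiest to reach.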

Maintaining Ethical Considerations

When selecting participants for a qualitative research study, it is essential to ensure ethical considerations are taken into account. Researchers must ensure participants are fully informed about the study and provide their voluntary consent to participate. They must also ensure participants understand their rights and that their confidentiality and privacy will be protected.

Final Thoughts

The success of a qualitative research study hinges on the effectiveness of its sampling technique. The choice of sampling technique must be guided by the research question, the population being studied, and the purpose of the study. Whether purposive, convenience, snowball, or theoretical sampling, the primary goal is to ensure the validity and reliability of the study’s findings.

By thoughtfully weighing the pros and cons of each sampling technique in qualitative research, researchers can make informed decisions that lead to more reliable and accurate results. In conclusion, carefully selecting a sampling technique is integral to the success of a qualitative research study, and a thorough understanding of the available options can make all the difference in achieving high-quality research outcomes.



Part 4: Using qualitative methods

17. Qualitative data and sampling

Chapter outline.

  • Ethical responsibility and cultural respectfulness (7 minute read)
  • Critical considerations (8 minute read)
  • Find the right qualitative data to answer my research question (17 minute read)
  • How to gather a qualitative sample (21 minute read)
  • What should my sample look like? (9 minute read)

Content warning: examples in this chapter contain references to substance use, ageism, injustices against the Black community in research (e.g. Henrietta Lacks and the Tuskegee Syphilis Study), children and their educational experiences, mental health, research bias, job loss and business closure, mobility limitations, politics, media portrayals of LatinX families, labor protests, neighborhood crime, Batten Disease (childhood disorder), transgender youth, cancer, child welfare including kinship care and foster care, Planned Parenthood, trauma and resilience, sexual health behaviors.

Now let’s change things up! In the previous chapters, we were exploring steps to create and carry out a quantitative research study. Quantitative studies are great when we want to summarize data and examine or test relationships between ideas using numbers and the power of statistics. However, qualitative research offers us a different and equally important tool. Sometimes the aim of research is to explore meaning and experience. If these are the goals of our research proposal, we are going to turn to qualitative research. Qualitative research relies on the power of human expression through words, pictures, movies, performance and other artifacts that represent these things. All of these tell stories about the human experience and we want to learn from them and have them be represented in our research. Generally speaking, qualitative research is about the gathering up of these stories, breaking them into pieces so we can examine the ideas that make them up, and putting them back together in a way that allows us to tell a common or shared story that responds to our research question. Back in Chapter 7 we talked about different paradigms.


Before plunging further into our exploration of qualitative research, I would like to suggest that we begin by thinking about some ethical, cultural and empowerment-related considerations as you plan your proposal. This is by no means a comprehensive discussion of these topics as they relate to qualitative research, but my intention is to have you think about a few issues that are relevant at each step of the qualitative process. I will begin each of our qualitative chapters with some discussion about these topics as they relate to each of these steps in the research process. These sections are intentionally placed at the beginning of the chapters so that you can consider how these principles apply throughout the discussion that follows. At the end of this chapter there will be an opportunity to reflect on these areas as they apply specifically to your proposal. Now, we have already discussed research ethics back in Chapter 8. However, as qualitative researchers we have some unique ethical commitments to participants and to the communities that they represent. Our work as qualitative researchers often requires us to represent the experiences of others, which also means that we need to be especially attentive to how culture is reflected in our research. Cultural respectfulness suggests that we approach our work and our participants with a sense of humility. This means that we maintain an open mind, a desire to learn about the cultural context of participants’ lives, and that we preserve the integrity of this context as we share our findings.

17.1 Ethical responsibility and cultural respectfulness

Learning objectives.

Learners will be able to…

  • Explain how our ethical responsibilities as researchers translate into decisions regarding qualitative sampling
  • Summarize how aspects of culture and identity may influence recruitment for qualitative studies

Representation

Representation reflects two important aspects of our work as qualitative researchers: who is present and how they are presented. First, we need to consider who we are including or excluding in our sample. Recruitment and sampling are especially tied to our ethical mandate as researchers to uphold the principle of justice under the Belmont Report [1] (see Chapter 6 for additional information). Within this context we need to:

  • Ensure there is a fair distribution of risks and benefits related to our research
  • Be conscientious in our recruitment efforts to support equitable representation
  • Ensure that special protections for vulnerable groups involved in research activities are in place

As you plan your qualitative research study, make sure to consider who is invited and able to participate and who is not. These choices have important implications for your findings and how well your results reflect the population you are seeking to represent. There may be explicit exclusions that don’t allow certain people to participate, but there may also be unintended reasons people are excluded (e.g. transportation, language barriers, access to technology, lack of time).

The second part of representation has to do with how we disseminate our findings and how this reflects on the population we are studying. We will speak further about this aspect of representation in Chapter 21 , which is specific to qualitative research dissemination. For now, it is enough to know that we need to be thoughtful about who we attempt to recruit and how effectively our resultant sample reflects our population.

Being mindful of history

As you plan for the recruitment of your sample, be mindful of the history of how this group (and/or the individuals you may be interacting with) has been treated – not just by the research community, but by others in positions of power. As researchers, we usually represent an outside influence and the people we are seeking to recruit may have significant reservations about trusting us and being willing to participate in our study (often grounded in good historical reasons—see Chapter 6 for additional information). Because of this, be very intentional in your efforts to be transparent about the purpose of your research and what it involves, why it is important to you, as well as how it can impact the community. Also, in helping to address this history, we need to make concerted efforts to get to know the communities that we research with well, including what is important to them.

Stories as sacred: How are we requesting them?

Finally, it is worth pointing out that as qualitative researchers, we have an extra layer of ethical and cultural responsibility. While quantitative research deals with numbers, as qualitative researchers, we are generally asking people to share their stories. Stories are intimate, full of depth and meaning, and can reveal tremendous amounts about who we are and what makes us tick. Because of this, we need to take special care to treat these stories as sacred. I will come back to this point in subsequent chapters, but as we go about asking for people to share their stories, we need to do so humbly.

Key Takeaways

  • As researchers, we need to consider how our participant communities have been treated historically, how we are representing them in the present through our research, and the implications this representation could have (intended and unintended) for their lives.
  • When conducting qualitative research, we are asking people to share their stories with us. These “data” are personal, intimate, and often reflect the very essence of who our participants are. As researchers, we need to treat research participants and their stories with respect and humility.

17.2 Critical considerations

  • Assess dynamics of power in sampling design and recruitment for individual participants and participant communities
  • Create opportunities for empowerment through early choice points in key research design elements

Related to the previous discussion regarding being mindful of history, we also need to consider the current dynamics of power between researcher and potential participant. While we may not always recognize or feel like we are in a position of power, as researchers we hold specialized knowledge and a particular skill set, and what we can do with the data we collect can have important implications and consequences for individuals, groups, and communities. All of these contribute to the formation of a role ascribed with power. It is important for us to consider how this power is perceived and, whenever possible, how we can build opportunities for empowerment into our research design. Examples of some strategies include:

  • Recruiting and meeting in spaces that are culturally acceptable
  • Finding ways to build participant choice into the research process
  • Working with a community advisory group during the research process (explained further in the example box below)
  • Designing informative and educational materials that help to thoroughly explain the research process in meaningful ways
  • Regularly checking with participants for understanding
  • Asking participants what they would like to get out of their participation and what it has been like to participate in our research
  • Determining if there are ways that we can contribute back to communities beyond our research (developing an ongoing commitment to collaboration and reciprocity)

While it may be beyond the scope of a student research project to address all of these considerations, I do think it is important that we start thinking about these more in our research practices. As social work researchers, we should be modeling empowerment practices in the field of social science research, but we often fail to meet this standard.

Example. A community advisory group can be a tremendous asset throughout our research process, but especially in early stages of planning, including recruitment. I was fortunate enough to have a community advisory group for one of the projects I worked on. They were incredibly helpful as I considered different perspectives I needed to include in my study, helping me to think through a respectful way to approach recruitment, and how we might make the research arrangement a bit more reciprocal so community members might benefit as well.

Intersectional identity

As qualitative researchers, we are often not looking to prove a hypothesis or uncover facts. Instead, we are generally seeking to expand our understanding of the breadth and depth of human experience. Breadth is reflected as we seek to uncover variation across participants, and depth captures variation or detail within each participant’s story. Both are important for generating the fullest picture possible for our findings. For example, we might be interested in learning about people’s experience living in an assisted living facility by interviewing residents. We would want to capture a range of different residents’ experiences (breadth) and for each resident, we would seek as much detail as possible (depth). Do note, sometimes our research may only involve one person, such as in a case study. However, in these instances we are usually trying to understand many aspects or dimensions of that single case.

To capture this breadth and depth we need to remember that people are made of multiple stories formed by intersectional identities . This means that our participants never just represent one homogeneous social group. We need to consider the various aspects of our population that will help to give the most complete representation in our sample as we go about recruitment.

Identify a population you are interested in studying. This might be a population you are working with at your field placement (either directly or indirectly), a group you are especially interested in learning more about, or a community you want to serve in the future. As you formulate your question, you may draw your sample directly from clients that are being served, others in their support network, service providers that are providing services, or other stakeholders that might be invested in the well-being of this group or community. Below, list out two populations you are interested in studying and then for each one, think about two groups connected with this population that you might focus your study on.

1a.
1b.
2a.
2b.

Next, think about what kind of information might help you understand this group better. If you had the chance to sit down and talk with them, what kinds of things would you want to ask? What kinds of things would help you understand their perspective or their worldview more clearly? What kinds of things do we need to learn from them and their experiences that could help us to be better social workers? For each of the groups you identified above, write out something you would like to learn from their experience.

1a.
1b.
2a.
2b.

Finally, consider how this group might perceive a request to participate. For the populations and the groups that you have identified, think about the following questions:

  • How have these groups been represented in the news?
  • How have these groups been represented in popular culture and popular media?
  • What historical or contemporary factors might influence these group members’ opinions of research and researchers?
  • In what ways have these groups been oppressed and how might research or academic institutions have contributed to this oppression?

Our impact on the qualitative process

It is important for qualitative researchers to thoughtfully plan for and attempt to capture our own impact on the research process. This influence that we can have on the research process represents what is known as researcher bias. This requires that we consider how we, as human beings, influence the research we conduct. This starts at the very beginning of the research process, including how we go about sampling. Our choices throughout the research process are driven by our unique values, experiences, and existing knowledge of how the world works. To help capture this contribution, qualitative researchers may plan to use tools like a reflexive journal, which is a research journal that helps the researcher to reflect on and consider their thoughts and reactions to the research process and how these may influence or shape a study (there will be more about this tool in Chapter 20 when we discuss the concept of rigor). While this tool is not specific to the sampling process, the next few chapters will suggest reflexive journal questions to help you think through how it might be used as you develop a qualitative proposal.

Example. To help demonstrate the potential for researcher bias, consider a number of students that I work with who are placed in school systems for their field experience and choose to focus their research proposal in this area. Some are interested in understanding why parents or guardians aren’t more involved in their children’s educational experience. While this might be an interesting topic, I would encourage students to consider what kind of biases they might have around this issue.

  • What expectations do they have about parenting?
  • What values do they attach to education and how it should be supported in the home?
  • How has their own upbringing shaped their expectations?
  • What do they know about the families that the school district serves and how did they come by this information?
  • How are these families’ life experiences different from their own?

The answers to these questions may unconsciously shape the early design of the study, including the research question they ask and the sources of data they seek out. For instance, their study may only focus on the behaviors and the inclinations of the families, but do little to investigate the role that the school plays in engagement and other structural barriers that might exist (e.g. language, stigma, accessibility, child-care, financial constraints, etc.).

  • As researchers, we wield (sometimes subtle) power and we need to be conscientious of how we use and distribute this power.
  • Qualitative study findings represent complex human experiences. As good as we may be, we are only going to capture a relatively small window into these experiences (and need to be mindful of this when discussing our findings).

In the early stages of your research process, it is a good idea to start your reflexive journal. Starting a reflexive journal is as easy as opening up a new word-processing document, titling it, and dating your entries chronologically. If you are more tactile-oriented, you can also keep your reflexive journal in a paper-bound journal.

To prompt your initial entry, put your thoughts down in response to the following questions:

  • What led you to be interested in this topic?
  • What experience(s) do you have in this area?
  • What knowledge do you have about this issue and how did you come by this knowledge?
  • In what ways might you be biased about this topic?

Don’t answer this last question too hastily! Our initial reaction is often—”Biased!?! Me—I don’t have a biased bone in my body! I have an open-mind about everything, toward everyone!” After all, much of our social work training directs us towards acceptance and working to understand the perspectives of others. However, WE ALL HAVE BIASES . These are conscious or subconscious preferences that lead us to favor some things over others. These preferences influence the choices we make throughout the research process. The reflexive journal helps us to reflect on these preferences, where they might stem from, and how they might be influencing our research process. For instance, I conduct research in the area of mental health. Before I became a researcher, I was a mental health clinician, and my years as a mental health practitioner created biases for me that influence my approach to research. For instance, I may be biased in perceiving mental health services as being well-intentioned and helpful. However, participants may well have very different perceptions based on their experiences or beliefs (or those of their loved ones).

17.3 Finding the right qualitative data to answer my research question

  • Compare different types of qualitative data
  • Begin to formulate decisions as they build their qualitative research proposal, especially in regard to selecting types of data that can effectively answer their research question

Sampling starts with deciding on the type of data you will be using. Qualitative research may use data from a variety of sources. Sources of qualitative data may come from interviews or focus groups, observations, a review of written documents, administrative data, or other forms of media and performances. While some qualitative studies rely solely on one source of data, others incorporate a variety.

You should now be well acquainted with the term triangulation. When thinking about triangulation in qualitative research, we are often referring to our use of multiple sources of data among those listed above to help strengthen the confidence we have in our findings. Drawing on a journalism metaphor, this allows us to “fact check” our data to help ensure that we are getting the story correct. This can mean that we use one type of data (like interviews), but we intentionally plan to get a diverse range of perspectives from people we know will see things differently. In this case we are using triangulation of perspectives. In addition, we may also use a variety of different types of data, such as interviews, data from case records, and staff meeting minutes, all as data sources in the same study. This reflects triangulation through types of data.

As a student conducting research, you may not always have access to vulnerable groups or communities in need, or it may be unreasonable for you to collect data from them directly due to time, resource, or knowledge constraints. Because of this, as you are reviewing the sections below, think about accessible alternative sources of data that will still allow you to answer your research question practically, and I will provide some examples along the way to get you started. For example, local media coverage might be a means of obtaining data that does not involve directly collecting data from potentially vulnerable participants.


Verbal data

Perhaps the bread and butter of the qualitative researcher, we often rely on what people tell us as a primary source of information for qualitative studies in the form of verbal data. The researcher who schedules interviews with recipients of public assistance to capture their experience after legislation drastically changes requirements for benefits relies on the communication between the researcher and the impacted recipients of public assistance. Focus groups are another frequently used method of gathering verbal data. Focus groups bring together a group of participants to discuss their unique perspectives and explore commonalities on a given topic. One such example is a researcher who brings together a group of child welfare workers who have been in the field for one to two years to ask them questions regarding their preparation, experiences, and perceptions regarding their work.

A benefit of utilizing verbal data is that it offers an opportunity for researchers to hear directly from participants about their experiences, opinions, or understanding of a given topic. Of course, this requires that participants be willing to share this information with a researcher and that the information shared is genuine. If groups of participants are unwilling to participate in sharing verbal data or if participants share information that somehow misrepresents their feelings (perhaps because they feel intimidated by the research process), then our qualitative sample can become biased and lead to inaccurate or partially accurate findings.

As noted above, participant willingness and honesty can present challenges for qualitative researchers. You may face similar challenges as a student gathering verbal data directly from participants who have been personally affected by your research topic. Because of this, you might want to gather verbal data from other sources. Many of the students I work with are placed in schools. It is not feasible for them to interview the youth they work with directly, so frequently they will interview other professionals in the school, such as teachers, counselors, administration, and other staff. You might also consider interviewing other social work students about their perceptions or experiences working with a particular group.

Again, because it may be problematic or unrealistic for you to obtain verbal data directly from vulnerable groups as a student researcher, you might consider gathering verbal data from the following sources:

  • Interviews and focus groups with providers, social work students, faculty, the general public, administrators, local politicians, advocacy groups
  • Public blogs of people invested in your topic
  • Publicly available transcripts from interviews with experts in the area or people reporting experiences in popular media

Make sure to consult with your professor to ensure that what you are planning will be realistic for the purposes of your study.


Observational data

As researchers, we sometimes rely on our own powers of observation to gather data on a particular topic. We may observe a person’s behavior, an interaction, setting, context, and maybe even our own reactions to what we are observing (i.e. what we are thinking or feeling). When observational data is used for quantitative purposes, it involves a count, such as how many times a certain behavior occurs for a child in a classroom. However, when observational data is used for qualitative purposes, it involves the researcher providing a detailed description. For instance, a qualitative researcher may conduct observations of how mothers and children interact in child and adolescent cancer units, and take notes about where exchanges take place, topics of conversation, nonverbal information, and data about the setting itself – what the unit looks like, how it is arranged, the lighting, photos on the wall, etc.

Observational data can provide important contextual information that may not be captured when we rely solely on verbal data. However, using this form of data requires us, as researchers, to correctly observe and interpret what is going on. As we don’t have direct access to what participants may be thinking or feeling to aid us (which can lead us to misinterpret or create a biased representation of what we are observing), our take on a situation may vary drastically from that of another person observing the same thing. For instance, if we observe two people talking and one begins crying, how do we know if these are tears of joy or sorrow? When you observe someone being abrupt in a conversation, I might interpret that as the person being rude while you might perceive that the person is distracted or preoccupied with something. The point is, we can’t know for sure. Perhaps one of the most challenging aspects of gathering observational data is collecting neutral, objective observations that are not laden with our subjective value judgments. Students often find this out in class during one of our activities. For this activity, they have to go out to a public space and write down observations about what they observe. When they bring them back to class and we start discussing them together, we quickly realize how often we make (unfounded) judgments. Frequent examples from our class include determining the race/ethnicity of people they observe or the relationships between people, without any confirmational knowledge. Additionally, they often describe scenarios with adverbs and adjectives that reflect judgments and values they are placing on their data. I’m not sharing this to call them out; in fact, they do a great job with the assignment. I just want to demonstrate that as human beings, we are often less objective than we think we are! These are great examples of researcher bias.

Again, gaining access to observational spaces, especially private ones, might be a challenge for you as a student. As such, you might consider if observing public spaces might be an option. If you do opt for this, make sure you are not violating anyone’s right to privacy. For instance, gathering information in a narcotics anonymous meeting or a religious celebration might be perceived as offensive, invasive or in direct opposition to values (like anonymity) of participants. When making observations in public spaces be careful not to gather any information that might identify specific individuals or organizations. Also, it is important to consider the influence your presence may have on a community, particularly if your observation makes you stand out among those typically present in that setting. Always consider the needs of the individual and the communities in formulating a plan for observing public behavior. Public spaces might include commercial spaces or events open to the public as well as municipal parks. Below we will have an expanded discussion about different varieties of non-probability sampling strategies that apply to qualitative research. Recruiting in public spaces like these may work for strategies such as convenience sampling or quota sampling, but would not be a good choice for snowball sampling or purposive sampling.

As with the cautionary note for student researchers under verbal data, you may experience restricted access to spaces in which you are able to gather observational data. However, if you do determine that observational data might be a good fit for your student proposal, you might consider the following spaces:

  • Shopping malls
  • Public parks or beaches
  • Public meetings or rallies
  • Public transportation

Artifacts (documents & other media)

Existing artifacts can also be very useful for the qualitative researcher. Examples include newspapers, blogs, websites, podcasts, television shows, movies, pictures, video recordings, artwork, and live performances. While many of these sources may provide indirect information on a topic, this information can still be quite valuable in capturing the sentiment of popular culture and thereby help researchers enhance their understanding of (dominant) societal values and opinions. Conversely, researchers can intentionally choose to seek out divergent, unique or controversial perspectives by searching for artifacts that tend to take up positions that differ from the mainstream, such as independent publications and (electronic) newsletters. While we will explore this further below, it is important to understand that data and research, in all its forms, is political. Among many other purposes, it is used to create, critique, and change policy; to engage in activism; to support and refute how organizations function; and to sway public opinion.

When utilizing documents and other media as artifacts, researchers may choose to use an entire source (such as a book or movie), or they may use a segment or portion of that artifact (such as the front-page stories from newspapers, or specific scenes in a television series). Your choice of which artifacts you choose to include will be driven by your question, and remember, you want your sample of artifacts to reflect the diversity of perspectives that may exist in the population you are interested in. For instance, perhaps I am interested in studying how various forms of media portray substance use treatment. I might intentionally include a range of liberal to conservative views that are portrayed across a number of media sources.

As qualitative researchers using artifacts, we often need to do some digging to understand the context of said artifact. We do this because data is almost always affiliated or aligned with some position (again, data is political). To help us consider this, it may be helpful to reflect on the following questions:

  • Who owns the artifact, or where is it housed?
  • What values does the owner (organization or person) hold?
  • How might the position or identity of the owner influence what information is shared or how it is portrayed?
  • What is the purpose of the artifact?
  • Who is the audience for which the artifact is intended?

Answers to questions such as these can help us to better understand and give meaning to the content of the artifacts. Content is the substance of the artifact (e.g. the words, picture, scene), while context is the circumstances surrounding the content. Both work together to provide meaning and further understanding of what can be derived from an artifact. As an example to illustrate this point, let’s say that you are including meeting minutes from an organizing network as a source of data for your study. The narrative description in these minutes will certainly be important; however, they may not tell the whole story. For instance, you might not know from the text that the organization has recently voted in a new president and this has created significant division within the network. Knowing this information might help you to interpret the agenda and the discussion contained in the minutes very differently.

Figure: Content and context shown as concentric circles, with context as the larger circle; their interaction produces meaning.

As student researchers, using documents and other artifacts may be a particularly appealing source of data for your study. This is because this data already exists (you aren’t creating new data) and depending on what you select, it might be relatively easy to access. Examples of utilizing existing artifacts might include studying the cultural context of movie portrayals of Latinx families or analyzing publicly available town hall meeting minutes to explore expressions of social capital. Below is a list of sources of data from documents or other media sources to consider for your student proposal:

  • Movies or TV shows
  • Music or music videos
  • Public blogs
  • Policies or other organizational documents
  • Meeting minutes
  • Comments in online forums
  • Books, newspapers, magazines, or other print/virtual text-based materials
  • Recruitment, training, or educational materials
  • Musical or artistic expressions

Finally, Photovoice is a technique that merges pictures with narrative (word or voice) data that helps interpret the meaning or significance of the visual. Photovoice is often used for qualitative work that is conducted as part of Community Based Participatory Research (CBPR), wherein community members act as both participants and co-researchers. These community members are provided with a means of capturing images that reflect their understanding of some topic, prompt or question, and then they are asked to provide a narrative description or interpretation to help give meaning to the image(s). Both the visual and narrative information are used as qualitative data to include in the study. Dissemination of Photovoice projects often involves a public display of the works, such as through a demonstration or art exhibition to raise awareness or to produce some specific change that is desired by participants. Because this form of study is often intentionally persuasive in nature, we need to recognize that this form of data will be inherently subjective. As a student, it may be particularly challenging to implement a Photovoice project, especially due to its time-intensive nature, as well as the additional commitments of needing to engage, train, and collaborate with community partners.

Table 17.1 Types of qualitative data (verbal, observational, and documents & other media), summarizing the strengths and challenges of each type.

How many kinds of data?

You will need to consider whether you will rely on one kind of data or multiple. While many qualitative studies solely use one type of data, such as interviews or focus groups, others may use multiple sources. The decision to use multiple sources is often made to help strengthen the confidence we have in our findings or to help us to produce a richer, more detailed description of our results. For instance, if we are conducting a case study of what the family experience is for a child with a very rare disorder like Batten Disease, we may use multiple sources of data. These can include observing family and community interactions, conducting interviews with family members and others connected to the family (such as service providers), and examining journal entries families were asked to keep over the course of the study. By collecting data from a variety of sources such as this, we can more broadly represent a range of perspectives when answering our research question, which will hopefully provide a more holistic picture of the family experience. However, if we are trying to examine the decision-making processes of adult protective workers, it may make the most sense to rely on just one type of data, such as interviews with adult protective workers.

  • There are numerous types of qualitative data (verbal, observational, artifacts) that we may be able to access when planning a qualitative study. As we plan, we need to consider the strengths and challenges that each possess and how well each type might answer our research question.
  • The use of multiple types of qualitative data does add complexity to a study, but this complication may well be worth it to help us explore multiple dimensions of our topic and thereby enrich our findings.

Reflexive Journal Entry Prompt

For your next entry, consider responding to the following:

  • What types of data appeal to you?
  • Why do you think you are drawn to them?
  • How well does this type of data “fit” as a means of answering your question? Why?

17.4 How to gather a qualitative sample

  • Compare and contrast various non-probability sampling approaches
  • Select a sampling strategy that ideologically fits the research question and is practical/actionable

Before we launch into how to plan our sample, I’m going to take a brief moment to remind us of the philosophical basis surrounding the purpose of qualitative research—not to punish you, but because it has important implications for sampling.

Nomothetic vs. idiographic

As a quick reminder, as we discussed in Chapter 8, idiographic research aims to develop a rich or deep understanding of the individual or the few. The focus is on capturing the uniqueness of a smaller sample in a comprehensive manner. For example, an idiographic study might be a good approach for a case study examining the experiences of a transgender youth and her family living in a rural Midwestern state. Data for this idiographic study would be collected from a range of sources, including interviews with family members, observations of family interactions at home and in the community, a focus group with the youth and her friend group, another focus group with the mother and her social network, etc. The aim would be to gain a very holistic picture of this family’s experiences.

On the other hand, nomothetic research is invested in trying to uncover what is ‘true’ for many. It seeks to develop a general understanding of a very specific relationship between variables. The aim is to produce generalizable findings, or findings that apply to a large group of people. This is done by gathering a large sample and looking at a limited or restricted number of aspects. A nomothetic study might involve a national survey of health care providers in which thousands of providers are surveyed regarding their current knowledge and competence in treating transgender individuals. It would gather data from a very large number of people, and attempt to highlight some general findings across this population on a very focused topic.

Idiographic and nomothetic research represent two different research categories existing at opposite extremes on a continuum.  Qualitative research generally exists on the idiographic end of this continuum. We are most often seeking to obtain a rich, deep, detailed understanding from a relatively small group of people.

Figure 17.2 Idiographic vs. nomothetic: for idiographic research, a few figures each have many different thought bubbles above them; for nomothetic research, many people share one single thought bubble.

Non-probability sampling

Non-probability sampling refers to sampling techniques for which a person’s (or event’s) likelihood of being selected for membership in the sample is unknown. Because we don’t know the likelihood of selection, we don’t know whether a sample represents a larger population or not. But that’s okay, because representing the population is not the goal of nonprobability samples. That said, the fact that nonprobability samples do not represent a larger population does not mean that they are drawn arbitrarily or without any specific purpose in mind. We typically use nonprobability samples in research projects that are qualitative in nature. We will examine several types of nonprobability samples. These include purposive samples, snowball samples, quota samples, and convenience samples.

Convenience or availability

Convenience sampling, also known as availability sampling, is a nonprobability sampling strategy that is employed by both qualitative and quantitative researchers. To draw a convenience sample, we would simply collect data from those people or other relevant elements to which we have the most convenient access. While convenience samples offer one major benefit—convenience—we should be cautious about generalizing from research that relies on convenience samples because we have no confidence that the sample is representative of a broader population. If you are a social work student who needs to conduct a research project at your field placement setting and you decide to conduct a focus group with the staff at your agency, you are using a convenience sampling approach – you are recruiting participants that are easily accessible to you. In addition, if you elect to analyze existing data that your social work program has collected as part of their graduation exit surveys, you are using data that you readily have access to for your project; again, you have a convenience sample. The vast majority of students I work with on their proposal design rely on convenience data due to time constraints and limited resources.

Purposive sampling

To draw a purposive sample, we begin with specific perspectives or purposive criteria in mind that we want to examine. We would then seek out research participants who cover that full range of perspectives. For example, if you are studying mental health supports on your campus, you may want to be sure to include not only students, but mental health practitioners and student affairs administrators as well. You might also select students who currently use mental health supports, those who dropped out of supports, and those who are waiting to receive supports. The “purposive” part of purposive sampling comes from selecting specific participants on purpose because you already know they have certain characteristics—being an administrator, dropping out of mental health supports, for example—that you need in your sample.

Note that these differ from inclusion criteria, which are the more general requirements a person must meet to be a potential participant in your study, whether or not they are ultimately sampled. For example, one of the inclusion criteria for a study of your campus’ mental health supports might be that participants had to have visited the mental health center in the past year. That differs from purposive sampling. In purposive sampling, you know characteristics of individuals and recruit them because of those characteristics. For example, I might recruit Jane because she stopped seeking supports this month, or because she has worked at the center for many years, and so forth.

Also, it’s important to recognize that purposive sampling requires you to have prior information about your participants before recruiting them, because you need to know their perspectives or experiences before you know whether you want them in your sample. This is a common mistake that many students make. What I often hear is, “I’m using purposive sampling because I’m recruiting people from the health center,” or something like that. That’s not purposive sampling. In most instances they really mean they are going to use convenience sampling: taking whoever they can recruit who fits the inclusion criteria (i.e., has attended the mental health center). Purposive sampling is recruiting specific people because of the various characteristics and perspectives they bring to your sample. Imagine we were creating a focus group. A purposive sample might gather clinicians, patients, administrators, staff, and former patients together so they can talk as a group. Purposive sampling would seek out people who have each of those attributes.
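
If it helps to see the logic of purposive selection laid out step by step, here is a minimal, hypothetical sketch in Python. Every name, role, and list in it is invented purely for illustration; the point is simply that we select people because of characteristics we already know they have.

```python
# A minimal, hypothetical sketch of purposive selection.
# We already know each person's role/characteristics (the whole point of
# purposive sampling), and we pick people *because* of those characteristics.

roster = [
    {"name": "Jane",  "perspective": "former client (dropped out of supports)"},
    {"name": "Luis",  "perspective": "current client"},
    {"name": "Priya", "perspective": "mental health practitioner"},
    {"name": "Dana",  "perspective": "student affairs administrator"},
    {"name": "Sam",   "perspective": "current client"},
]

# The perspectives we decided, on purpose, must be represented in the sample.
purposive_criteria = [
    "current client",
    "former client (dropped out of supports)",
    "mental health practitioner",
    "student affairs administrator",
]

sample = []
for perspective in purposive_criteria:
    # Pick one person known to hold this perspective (first match, for simplicity).
    match = next(p for p in roster if p["perspective"] == perspective)
    sample.append(match)

for person in sample:
    print(person["name"], "-", person["perspective"])
```

Notice that selection starts from the perspectives we need, not from whoever happens to be available, which is exactly what separates purposive sampling from convenience sampling.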

If you are considering using a purposive sampling approach for your research proposal, you will need to determine what your purposive criteria involve. There are a range of different purposive strategies that might be employed, including maximum variation, typical case, extreme case, and political case, and you want to be thoughtful about which one(s) you select and why.

Table 17.2 Various purposive strategy approaches
  • Maximum variation: Case(s) selected to represent a range of very different perspectives on a topic. Example: You interview student leaders from the schools of social work, business, the arts, math & science, education, history & anthropology, and health studies to ensure that you have the perspective of a variety of disciplines.
  • Typical case: Case(s) selected to reflect a commonly held perspective. Example: You interview a child welfare worker specifically because many of their characteristics fit the state statistical profile for providers in that service area.
  • Extreme case: Case(s) selected to represent extreme or underrepresented perspectives. Example: You examine websites devoted to rare cancer survivor support.
  • Political case: Case(s) selected to represent a contemporary politicized issue. Example: You analyze media interviews with Planned Parenthood providers, employees, and clients from 2010 to present.
  • Case(s) selected based on specialized content knowledge or expertise. Example: You are interested in studying resilience in trauma providers, so you research and reach out to a handful of authorities in this area.
  • Case(s) selected based on their representation of a specific theoretical orientation or some aspect of a given theory. Example: You are interested in studying how training methods vary by practitioner according to their theoretical orientation. You specifically reach out to a clinician who identifies as a Cognitive Behavioral clinician, one who identifies as Bowenian, and one who identifies as Structural Family.
  • Case(s) selected based on the likelihood that the case will yield the desired information. Example: You examine a public gaming network forum on social media to see how participants offer support to one another.

It can be a bit tricky to determine how to approach or formulate your purposive cases. Below are a couple of additional resources to explore this strategy further.

For more information on purposive sampling, consult this webpage from Laerd Statistics on purposive sampling and this webpage from the University of Connecticut on education research.

When using snowball sampling , we might know one or two people we’d like to include in our study but then we have to rely on those initial participants to help identify additional participants. Thus, our sample builds and grows as the study continues, much as a snowball builds and becomes larger as it rolls through the snow. Snowball sampling is an especially useful strategy when you wish to study a stigmatized group or behavior. These groups may have limited visibility and accessibility for a variety of reasons, including safety. 

Malebranche and colleagues (2010) [5] were interested in studying sexual health behaviors of Black, bisexual men. Anticipating that this may be a challenging group to recruit, they utilized a snowball sampling approach. They recruited initial contacts through activities such as advertising on websites and distributing fliers strategically (e.g. barbershops, nightclubs). These initial recruits were compensated with $50 and received study information sheets and five contact cards to distribute to people in their social network that fit the study criteria. Eventually the research team was able to recruit a sample of 38 men who fit the study criteria.
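
To make the wave-by-wave mechanics of snowball recruitment concrete, here is a small, hypothetical Python sketch. The referral network, the target sample size, and the participant labels are invented illustration values and are not drawn from the Malebranche et al. study.

```python
# Hypothetical sketch of snowball recruitment: each participant may refer
# a few contacts, and recruitment proceeds wave by wave until the target
# sample size is reached (or referrals run out).

referrals = {          # who each (hypothetical) participant can refer
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["E", "F"],
    "D": [],
    "E": ["G"],
    "F": [],
    "G": [],
}

seeds = ["A"]          # initial contacts recruited through fliers/ads
target_n = 5

sample = []
current_wave = seeds
while current_wave and len(sample) < target_n:
    next_wave = []
    for person in current_wave:
        if person not in sample and len(sample) < target_n:
            sample.append(person)
            next_wave.extend(referrals.get(person, []))
    current_wave = next_wave

print("Recruited:", sample)   # e.g. ['A', 'B', 'C', 'D', 'E']
```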

Snowball sampling may present some ethical quandaries for us. Since we are essentially relying on others to help advertise for us, we are giving up some of our control over the process of recruitment. We may be worried about coercion, or participants putting undue pressure on people they know to take part in the study. To help mitigate this, we would want to make sure that any participant we recruit understands that participation is completely voluntary, and that if they tell others about the study, they make them aware that it is voluntary, too. In addition to coercion, we also want to make sure that people’s privacy is not violated when we take this approach. For this reason, it is good practice when using a snowball approach to provide people with our contact information as the researchers and ask that they get in touch with us, rather than the other way around. This may also help to protect against potential feelings of exploitation or of being taken advantage of. Because we often turn to snowball sampling when our population is difficult to reach or engage, we need to be especially sensitive to why this is. It is often because they have been exploited in the past, and participating in research may feel like an extension of this. To address this, we need to have a very clear and transparent informed consent process and to think about how we can use our research to benefit the people we work with in the most meaningful and tangible ways.

Quota sampling is another nonprobability sampling strategy. This type of sampling is actually employed by both qualitative and quantitative researchers, but because it is a nonprobability method, we’ll discuss it in this section. When conducting quota sampling, we identify categories that are important to our study and for which there is likely to be some variation. Subgroups are created based on each category, and the researcher decides how many people (or whatever element happens to be the focus of the research) to include from each subgroup and collects data from that number for each subgroup. To demonstrate, perhaps we are interested in studying support needs for children in the foster care system. We decide that we want to examine equal numbers (seven each) of children in kinship placements, non-kinship foster placements, group homes, and residential placements. We expect that the experiences and needs across these settings may differ significantly, so we want to have good representation of each one, thus setting a quota of seven for each type of placement.
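
As a rough illustration of the bookkeeping behind quota sampling, the short Python sketch below fills a quota of seven participants for each of the four placement types from the foster care example. The candidate names are hypothetical placeholders, not real cases.

```python
# Hypothetical sketch of quota sampling for the foster care example:
# recruit up to a fixed quota of participants from each placement subgroup.

QUOTA = 7
placement_types = ["kinship", "non-kinship foster", "group home", "residential"]

# Candidate pool: (name, placement type) pairs gathered during recruitment.
# These names and placements are invented for illustration only.
candidates = [
    ("P01", "kinship"), ("P02", "group home"), ("P03", "residential"),
    ("P04", "kinship"), ("P05", "non-kinship foster"),
    # ... and so on as recruitment continues
]

sample = {ptype: [] for ptype in placement_types}
for name, ptype in candidates:
    if ptype in sample and len(sample[ptype]) < QUOTA:
        sample[ptype].append(name)

for ptype in placement_types:
    print(f"{ptype}: {len(sample[ptype])}/{QUOTA} recruited")

# Target sample size = 4 placement types x 7 participants each = 28.
```

Once a subgroup reaches its quota, recruitment for that subgroup stops, which is how the design guarantees equal representation across placements.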

Table 17.3 Non-probability sampling strategies
  • Convenience/availability: You gather data from whatever cases/people/documents happen to be convenient
  • Purposive: You seek out elements that meet specific criteria, representing different perspectives
  • Snowball: You rely on participant referrals to recruit new participants
  • Quota: You select a designated number of cases from specified subgroups

As you continue to plan for your proposal, below you will find some of the strengths and challenges presented by each of these types of sampling.

Table 17.4 Non-probability sampling strategies strengths and challenges
  • Convenience/availability. Strengths: Allows us to draw a sample from participants who are most readily available/accessible. Challenges: Sample may be biased and may represent limited or skewed diversity in characteristics of participants.
  • Purposive. Strengths: Ensures that specific expertise, positions, or experiences are represented in sample participants. Challenges: It may be challenging to define purposive criteria or to locate cases that represent these criteria; restricts our potential sampling pool.
  • Snowball. Strengths: Accesses participant social networks and community knowledge; can be helpful in gaining access to difficult-to-reach populations. Challenges: May be hard to locate an initial small group of participants; concerns over privacy (people might not want to share contacts); process may be slow or drawn out.
  • Quota. Strengths: Helps to ensure specific characteristics are represented and defines the quantity of each. Challenges: Can be challenging to fill quotas, especially for subgroups that might be more difficult to locate or reluctant to participate.

Wait a minute, we need a plan!

Both qualitative and quantitative research should be planned and systematic. We’ve actually covered a lot of ground already, and before we get any further, we need to start thinking about what the plan for your qualitative research proposal will look like. This means that as you develop your research proposal, you need to consider what you will be doing each step of the way: how you will find data, how you will capture it, how you will organize it, and how you will store it. If you have multiple types of data, you need to have a plan in place for each type. The plan that you develop is your data collection protocol. If you have a team of researchers (or are part of a research team), the data collection protocol is an important communication tool, making sure that everyone is clear about what is going on as the research proceeds. This plan is important to help keep you and others involved in your research consistent and accountable. Throughout this chapter and the next (Chapter 18, qualitative data gathering) we will walk through points you will want to include in your data collection protocol. While I’ve spent a fair amount of time talking about the importance of having a plan here, qualitative design often does embrace some degree of flexibility. This flexibility is related to the concept of emergent design that we find in qualitative studies. Emergent design is the idea that some decisions in our design will remain dynamic and fluid as our understanding of the research question evolves. The more we learn about the topic, the more we may refine our approach in order to understand it thoroughly.

A research protocol is a document that not only defines your research project and its aims but also comprehensively plans how you will carry it out. If this sounds like the function of a research proposal, you are right; they are similar. What differentiates a protocol from a proposal is the level of detail. A proposal is more conceptual; a protocol is more practical (right down to the dollars and cents!). A protocol offers explicit instructions for you and your research team, any funders that may be involved in your research, and any oversight bodies that might be responsible for overseeing your study. Not every study requires a research protocol, but what I’m suggesting here is that you consider constructing at least a limited one to help you think through the decisions you will need to make to construct your qualitative study.

Al-Jundi and Sakka (2016) [6] provide the following elements for a research protocol :

  • What is the question? (Hypothesis) What is to be investigated?
  • Why is the study important? (Significance)
  • Where and when will it take place?
  • What is the methodology? (Procedures and methods to be used)
  • How are you going to implement it? (Research design)
  • What is the proposed timetable and budget?
  • What are the resources required (technical, scientific, and financial)?

While your research proposal in its entirety will focus on many of these areas, our attention for developing your qualitative research protocol will home in on the two highlighted above. As we go through these next couple of chapters, there will be a number of exercises that walk you through decision points that will form your qualitative research protocol.

To begin developing your qualitative research protocol:

  • Select the question you have decided is the best to frame your research proposal.
  • Write a brief paragraph about the aim of your study, ending it with the research question you have selected.

Here are a few additional resources on developing a research protocol:

Cameli et al., (2018) How to write a research protocol: Tips and tricks .

Ohio State University, Institutional Review Board (n.d.). Research protocol .

World Health Organization (n.d.). Recommended format for a research protocol .

Decision Point: What types of data will you be using?

  • Why is this a good choice, given your research question?
  • If so, provide support for this decision.

Decision Point: Which non-probability sampling strategy will you employ?

  • Why is this a good fit?
  • What steps might you take to address the challenges associated with this strategy?

Recruiting strategies

Much like quantitative research, recruitment for qualitative studies can take many different approaches. When considering how to draw your qualitative sample, it may be helpful to first consider which of these three general strategies will best fit your research question and general study design: public, targeted, or membership-based. While all will lead to a sample, the process for getting you there will look very different, depending on the strategy you select.

Taking a public approach to recruitment offers you access to the broadest swath of potential participants. With this approach, you are taking advantage of public spaces in an attempt to gain the attention of the general population of people who frequent that space so that they can learn about your study. These spaces can be in-person (e.g. libraries, coffee shops, grocery stores, health care settings, parks) or virtual (e.g. open chat forums, e-bulletin boards, news feeds). Furthermore, a public approach can be static (such as hanging a flier) or dynamic (such as talking to people and directly making requests to participate). While a public approach may offer broad coverage in that it attempts to appeal to an array of people, it may be perceived as impersonal or be easily overlooked among the other announcements that may be featured in public spaces. Public recruitment is most likely to be associated with convenience or quota sampling and is unlikely to be used with purposive or snowball sampling, where we would need some advance knowledge of people and the characteristics they possess.

As an alternative, you may elect to take a targeted approach to recruitment. By targeting a select group, you are restricting your sampling frame to those individuals or groups who are potentially most well-suited to answer your research question. Additionally, you may be targeting specific people to help craft a diverse sample, particularly with respect to personal characteristics and/or opinions.

You can target your recruitment through the use of different strategies. First, you might consider the use of knowledgeable and well-connected community members. These are people who may possess a good amount of social capital in their community, which can aid in recruitment efforts. If you are considering the use of community members in this role, make sure to be thoughtful in your approach, as you are essentially asking them to share some of their social capital with you. This means learning about the community or group, approaching community members with a sense of humility, and making sure to demonstrate transparency and authenticity in your interactions. These community members may also be champions for the topic you are researching. A champion is someone who helps to draw the interest of a particular group of people. The champion often comes from within the group itself. As an example, let’s say you’re interested in studying the experiences of family members who have a loved one struggling with substance use. To aid in your recruitment for this study, you enlist the help of a local person who does a lot of work with Al-Anon, an organization facilitating mutual support groups for individuals and families affected by alcoholism.

A targeted approach can certainly help ensure that we are talking to people who are knowledgeable about the topic we are interested in; however, we still need to be aware of the potential for bias. If we target our recruitment based on connection to a particular person, event, or passion for the topic, these folks may share information that they think will be viewed favorably or that disproportionately reflects a particular perspective. This is because we often spend time with people who are like-minded or share many of our views. A targeted approach may be helpful for any type of non-probability sampling, but can be especially useful for purposive, quota, or snowball sampling, where we are trying to access people or groups of people with specific characteristics or expertise.

Membership-based

Finally, you might consider a membership-based approach. This approach is really a form of targeted recruitment, but it merits some individual attention. When using a membership-based approach, your sampling frame is the membership list of a particular organization or group. As you might have guessed, this organization or group must be well-suited for helping to answer your research question. You will need permission to access the membership, and the person authorized to grant that permission will depend on the organizational structure. When contacting members regarding recruitment, you may consider using directories, newsletters, listservs, or membership meetings. When utilizing a membership-based approach, we often know that members meet specific inclusion criteria we need; however, because they are all associated with that particular group or organization, they may be homogeneous or like-minded in other ways. This may limit the diversity in our sample and is something to be mindful of when interpreting our findings. Membership-based recruiting can be helpful when we have a membership group that fulfills our inclusion criteria. For instance, if you want to conduct research with social workers, you might attempt to recruit through the NASW membership distribution list (but this access will come with stipulations and a price tag). Membership-based recruitment may be helpful for any non-probability sampling approach, provided that the membership criteria and study inclusion criteria are a close fit. Table 17.5 offers some additional considerations for each of these strategies, with examples to help demonstrate sources that might correspond with them.

Table 17.5 Recruitment strategies, strengths, challenges, and examples

  • Public. Strengths: Easier to gain access; exposure to large numbers of people. Challenges: Can be impersonal; difficult to cultivate interest. Examples: Advertising in public events and spaces; accessing materials in local libraries or museums; finding public web-based resources and sources of data (websites, blogs, open forums).

  • Targeted. Strengths: Prior knowledge of potential audience; more focused use of resources. Challenges: May be hard to locate/access target group(s); groups may be suspicious of or resistant to being targeted. Examples: Working with an advocacy group for the issue you are studying to aid recruitment; contacting a local expert (historian) to help you locate relevant documents; advertising in places that your population may frequent.

  • Membership-Based. Strengths: Shared interest (through common membership); potentially existing infrastructure for outreach. Challenges: Organization may be highly sensitive to protecting members; members may be similar in perspectives, limiting diversity of ideas. Examples: Membership newsletters; listservs or Facebook groups; advertising at membership meetings or events.
  • Qualitative research predominantly relies on non-probability sampling techniques. There are a number of these techniques to choose from (convenience/availability, purposive, snowball, quota), each with advantages and limitations to consider. As we consider these, we need to reflect on both our research question and the resources we have available to us in developing a sampling strategy.
  • As we consider where and how we will recruit our sample, there are a range of general approaches, including public, targeted, and membership-based.

Decision Point: How will you recruit or gain access to your sample?

  • If you are recruiting people, how will you identify them? If necessary (and it often is), how will you gain permission to do this?
  • If you are using documents or other artifacts for your study, how will you gain access to these? If necessary (and it often is), how will you gain permission to do this?

17.5 What should my sample look like?

  • Explain key factors that influence the makeup of a qualitative sample
  • Develop and critique a sampling strategy to support their qualitative proposal

Once you have started your recruitment, you also need to know when to stop. Knowing when to stop recruiting for a qualitative research study generally involves a dynamic and reflective process. This means that you will actively be involved in a cycle of recruiting, collecting data, beginning to review your preliminary data, and conducting more recruitment to gather more data. You will continue this process until you have gathered enough data and included sufficient perspectives to answer your research question in a rich and meaningful way.

Figure: a circle divided into three sections (recruiting, gathering data, analyzing data), with an arrow curving from each section to the next, demonstrating the ongoing, iterative nature of qualitative recruitment and analysis.

The sample size of qualitative studies can vary significantly. For instance, case studies may involve only one participant or event, while some studies may involve hundreds of interviews or even thousands of documents. Generally speaking, when compared to quantitative research, qualitative studies have a considerably smaller sample. Your decision regarding sample size should be guided by a few considerations, described below.

Amount of data

When gathering quantitative data, the amount of data we are gathering is often specified at the start (e.g. a fixed number of questions on a survey or a set number of indicators on a tracking form). However, when gathering qualitative data, we are often asking people to expand on and explore their thoughts and reactions to certain things. This can produce A LOT of data. If you have ever had to transcribe an interview (type out the conversation while listening to an audio-recorded interview), you quickly learn that a 15-minute discussion turns into many pages of dialogue. As such, each interview or focus group you conduct represents a multi-page transcript, all of which becomes your data. If you are conducting interviews or focus groups, you will know you have collected enough data from each interaction when you have covered all your questions and allowed the participant(s) to share any and all ideas they have related to the topic. If you are using observational data, you need to spend sufficient time making observations and capturing data to offer a genuine and holistic representation of the thing you are observing (at least to the best of your ability). When using documents and other sources of media, again, you want to ensure that diverse perspectives are represented through your artifact choices so that your data reflect a well-rounded representation of the issue you are studying. For any of these data sources, this involves a judgment call on the researcher’s part. Your judgment should be informed by what you have read in the existing literature and consultation with your professor.

As part of your analysis, you will likely eventually break these larger chunks of data apart into words or small phrases, giving you potentially thousands of pieces of data. If you are relying on documents or other artifacts, the amount of data contained in each of these pieces is determined in advance, as they already exist. However, you will need to determine how many to include. With interviews, focus groups, or other forms of data generation (e.g. taking pictures for a photovoice project), we don’t necessarily know how much data will be generated with each encounter, as it will depend on the questions that are asked, the information that is shared, and how well we capture it.

Type of study

A variety of types of qualitative studies will be discussed in greater detail in Chapter 22 . While you don’t necessarily need to have an extensive understanding of them all at this point in time, it is important that you understand which of the different design types are best for answering certain research questions. For instance, if our question involves understanding some type of experience, that is often best answered by a phenomenological design. Or, if we want to better understand some process, a grounded theory study may be best suited. While there are no hard and fast rules regarding qualitative sample size, each of these different types of designs has different guidelines for what is considered an acceptable or reasonable number to include in your sample. So drawing on the previous examples, your grounded theory study might include 45 participants because you need more people to gain a clearer picture of each step of the process, while your phenomenological study includes 20 because that provides a good representation of the experience you are interested in. Both would be reasonable targets based on the respective study design type. So as you consider your research question and which specific type of qualitative design this leads you to, you will need to do some investigation to see what size samples are recommended for that particular type of qualitative design.

Diversity of perspectives

As you consider your research question, you also may want to think about the potential variation in how your study population might view this topic. If you are conducting a case study of one person, this obviously isn’t a concern, but if you are interested in exploring a range of experiences, you want to plan to intentionally recruit so this level of diversity is reflected in your sample. The level of variation you seek will have direct implications for how big your sample might be. In the example provided above in the section on quota sampling, we wanted to ensure we had equal representation across a host of placement dispositions for children in foster care. This helped us define our target sample size: four placement settings, with a quota of seven participants from each type of setting, for a target sample size of 28.


In Chapter 18, we will be talking about different approaches to data gathering, which may help to dictate the range of perspectives you want to represent. For instance, if you conduct a focus group, you want all of your participants to have some experience with the thing that you are studying, but you hope that their perspectives differ from one another. Furthermore, you may want to avoid placing participants who know each other well in the same focus group (if possible), as this may lead to groupthink or a level of familiarity that doesn’t really encourage differences to be expressed. Ideally, we want to encourage a discussion where a variety of ideas are shared, offering a more complete understanding of how the topic is experienced. This is true in all forms of qualitative data, in that your findings are likely to be more well-rounded and offer a broader understanding of the issue if you recruit a sample with diverse perspectives.

Finally, the concept of saturation has important implications for both qualitative sample size and data analysis. To understand the idea of saturation, it is first important to understand that unlike most quantitative research, with qualitative research we often at least begin the process of data analysis while we are still actively collecting data. This is called an iterative approach to data analysis. So, if you are a qualitative researcher conducting interviews, you may be aiming to complete 30 interviews. After you have completed your first five interviews, you may begin reviewing and coding (a term that refers to labeling the different ideas found in your transcripts) these interviews while you are still conducting more interviews. You go on to review each new interview that you conduct and code it for the ideas that are reflected there. Eventually, you will reach a point where conducting more interviews isn’t producing any new ideas, and this is the point of saturation. Reaching saturation is an indication that we can stop data collection. This may come before or after you hit 30, but as you can see, it is driven by the presence of new ideas or concepts in your interviews, not a specific number.
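
Saturation is ultimately a judgment call rather than a mechanical rule, but the bookkeeping behind it can be sketched. The hypothetical Python example below tracks which codes each new interview adds to the codebook and flags saturation once several consecutive interviews contribute nothing new; the code labels and the stopping threshold are invented for illustration only.

```python
# Hypothetical sketch of monitoring saturation during iterative coding:
# stop recruiting once several consecutive interviews add no new codes.

# Codes identified in each interview transcript, in the order conducted.
# These code labels are invented for illustration only.
coded_interviews = [
    {"stigma", "family support"},
    {"stigma", "cost of care"},
    {"family support", "waitlists"},
    {"cost of care", "stigma"},
    {"waitlists", "family support"},
    {"stigma"},
]

STOP_AFTER = 3            # consecutive interviews with no new codes
codebook = set()
interviews_without_new_codes = 0

for i, codes in enumerate(coded_interviews, start=1):
    new_codes = codes - codebook
    codebook |= new_codes
    interviews_without_new_codes = 0 if new_codes else interviews_without_new_codes + 1
    print(f"Interview {i}: {len(new_codes)} new code(s)")
    if interviews_without_new_codes >= STOP_AFTER:
        print("Saturation reached: stop data collection.")
        break
```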

This chapter represents our transition in the text to a focus on qualitative methods in research. Throughout this chapter we have explored a number of topics, including various types of qualitative data, approaches to qualitative sampling, and some considerations for recruitment and sample composition. It bears repeating that your plan for sampling should be driven by a number of things: your research question, what is feasible for you (especially as a student researcher), and best practices in qualitative research. Finally, in subsequent chapters, we will continue the discussion about reflexivity as it relates to the qualitative research process that we began here.

  • The composition of our qualitative sample comes with some important decisions to consider, including how large should our sample be and what level and type of diversity it should reflect. These decisions are guided by the purposes or aims of our study, as well as access to resources and our population.
  • The concept of saturation is important for qualitative research. It helps us to determine when we have sufficiently collected a range of perspectives on the topic we are studying.

Decision Point(s): What should your sample look like (sample composition)?

  • If so, how many?
  • How was this number determined?
  • OR will you use the concept of saturation to determine when to stop?
  • What supports your decision in regard to the previous question?

This isn’t so much a decision point, but a chance for you to reflect on the choices you’ve made thus far in your protocol with regards to your: (1) ethical responsibility, (2) commitment to cultural humility, and (3) respect for empowerment of individuals and groups as a social work researcher. Think about each of the decisions you’ve made thus far and work across this grid to identify any important considerations that you need to take into account.

You have been prompted to make a number of choices regarding how you will proceed with gathering your qualitative sample. Based on what you have learned and what you are planning, respond to the following questions below.

  • What are the strengths of your sampling plan in respect to being able to answer your qualitative research question?
  • How feasible is it for you, as a student researcher, to be able to carry out your sampling plan?
  • What reservations or questions do you still need to have answered to adequately plan for your sample?
  • What excites you about your proposal thus far?
  • What worries you about your proposal thus far?

Media Attributions

  • book lights © KAROLINA GRABOWSKA is licensed under a CC BY-ND (Attribution NoDerivatives) license
  • women having a conversation © Andrea Piacquadio is licensed under a CC0 (Creative Commons Zero) license
  • magnifying-glass-1020141_960_720 © Peggy Marco is licensed under a CC0 (Creative Commons Zero) license
  • content context © Cory Cummings
  • idiographic vs. nomothetic © Cory Cummings
  • Giant_snowball_Oxford © Kamyar Adl is licensed under a CC BY (Attribution) license
  • target © bavillo13 is licensed under a CC0 (Creative Commons Zero) license
  • qual data recruit process © Cory Cummings
  • 7419840024_8dff7228cb_b © Free Press/Free Press Action Fund's Photostream is licensed under a CC BY-NC-SA (Attribution NonCommercial ShareAlike) license
  • National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. (1979). The Belmont report: Ethical principles and guidelines for the protection of human subjects of research. Retrieved from https://www.hhs.gov/ohrp/regulations-and-policy/belmont-report/read-the-belmont-report/index.html ↵
  • Patel, J., Tinker, A., & Corna, L. (2018). Younger workers’ attitudes and perceptions towards older colleagues.  Working with Older People, 22 (3), 129-138. ↵
  • Veenstra, A. S., Iyer, N., Hossain, M. D., & Park, J. (2014). Time, place, technology: Twitter as an information source in the Wisconsin labor protests. Computers in Human Behavior, 31, 65-72. ↵
  • Ohmer, M. L., & Owens, J. (2013). Using photovoice to empower youth and adults to prevent crime.  Journal of Community Practice, 21 (4), 410-433. ↵
  • Malebranche, D. J., Arriola, K. J., Jenkins, T. R., Dauria, E., & Patel, S. N. (2010). Exploring the “bisexual bridge”: A qualitative study of risk behavior and disclosure of same-sex behavior among Black bisexual men. American Journal of Public Health, 100(1), 159-164. ↵
  • Al-Jundi, A., & Sakka, S. (2016). Protocol writing in clinical research. Journal of Clinical and Diagnostic Research: JCDR, 10(11), ZE10. ↵

Research that involves the use of data that represents human expression through words, pictures, movies, performance and other artifacts.

One of the three ethical principles in the Belmont Report. States that benefits and burdens of research should be distributed fairly.

Case studies are a type of qualitative research design that focus on a defined case and gathers data to provide a very rich, full understanding of that case. It usually involves gathering data from multiple different sources to get a well-rounded case description.

the various aspects or dimensions that come together in forming our identity

The unintended influence that the researcher may have on the research process.

A research journal that helps the researcher to reflect on and consider their thoughts and reactions to the research process and how it may be shaping the study

Rigor is the process through which we demonstrate, to the best of our ability, that our research is empirically sound and reflects a scientific approach to knowledge building.

A form of data gathering where researchers ask individual participants to respond to a series of (mostly open-ended) questions.

A form of data gathering where researchers ask a group of participants to respond to a series of (mostly open-ended) questions.

Observation is a tool for data gathering where researchers rely on their own senses (e.g. sight, sound) to gather information on a topic.

Triangulation of data refers to the use of multiple types, measures or sources of data in a research project to increase the confidence that we have in our findings.

sampling approaches for which a person’s likelihood of being selected for membership in the sample is unknown

A convenience sample is formed by collecting data from those people or other relevant elements to which we have the most convenient access. Essentially, we take who we can get.

A quota sample involves the researcher identifying subgroups within a population that they want to make sure to include in their sample, and then setting a quota or target number of cases to recruit from each of these subgroups.

For a snowball sample, a few initial participants are recruited and then we rely on those initial (and successive) participants to help identify additional people to recruit. We thus rely on participants' connections and knowledge of the population to aid our recruitment.

In a purposive sample, participants are intentionally or hand-selected because of their specific expertise or experience.

Content is the substance of the artifact (e.g. the words, picture, scene). It is what can actually be observed.

Context is the circumstances surrounding an artifact, event, or experience.

Photovoice is a technique that merges pictures with narrative (word or voice data) that helps to interpret the meaning or significance of the visual artifact. It is often used as a tool in CBPR.

A rich, deep, detailed understanding of a unique person, small group, and/or set of circumstances.

Inclusion criteria are general requirements a person must possess to be a part of your sample.

A purposive sampling strategy where you choose cases because they represent a range of very different perspectives on a topic

A purposive sampling strategy where you select cases that represent the most common/ a commonly held perspective.

A purposive sampling strategy that selects a case(s) that represent extreme or underrepresented perspectives. It is a way of intentionally focusing on or representing voices that may not often be heard or given emphasis.

A purposive sampling strategy that focuses on selecting cases that are important in representing a contemporary politicized issue.

A plan that is developed by a researcher, prior to commencing a research project, that details how data will be collected, stored and managed during the research project.

Emergent design is the idea that some decisions in our research design will be dynamic and change as our understanding of the research question evolves over the course of the research process. This is (often) evident in qualitative research, but rare in quantitative research.

approach to recruitment where participants are sought in public spaces

approach to recruitment where participants are sought based on some personal characteristic or group association

approach to recruitment where participants are members of an organization or social group with identified membership

To type out the text of a recorded interview or focus group.

A qualitative research design that aims to capture and describe the lived experience of some event or "phenomenon" for a group of people.

A type of research design that is often used to study a process or identify a theory about how something works.

The point where gathering more data doesn't offer any new ideas or perspectives on the issue you are studying.  Reaching saturation is an indication that we can stop qualitative data collection.

Graduate research methods in social work Copyright © 2021 by Matthew DeCarlo, Cory Cummings, Kate Agnelli is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.

Purposive sampling

Purposive sampling, also known as judgmental, selective or subjective sampling, is a type of non-probability sampling technique. Non-probability sampling focuses on sampling techniques where the units that are investigated are based on the judgement of the researcher [see our articles: Non-probability sampling to learn more about non-probability sampling, and Sampling: The basics, for an introduction to terms such as units, cases and sampling]. There are a number of different types of purposive sampling, each with different goals. This article explains (a) what purposive sampling is, (b) the different types of purposive sampling, (c) how to create a purposive sample, and (d) the broad advantages and disadvantages of purposive sampling.

Purposive sampling explained


Purposive sampling represents a group of different non-probability sampling techniques . Also known as judgmental , selective or subjective sampling, purposive sampling relies on the judgement of the researcher when it comes to selecting the units (e.g., people, cases/organisations, events, pieces of data) that are to be studied. Usually, the sample being investigated is quite small, especially when compared with probability sampling techniques .

Unlike the various sampling techniques that can be used under probability sampling (e.g., simple random sampling, stratified random sampling, etc.), the goal of purposive sampling is not to randomly select units from a population to create a sample with the intention of making generalisations (i.e., statistical inferences ) from that sample to the population of interest [see the article: Probability sampling ]. This is the general intent of research that is guided by a quantitative research design .

The main goal of purposive sampling is to focus on particular characteristics of a population that are of interest, which will best enable you to answer your research questions. The sample being studied is not representative of the population, but for researchers pursuing qualitative or mixed methods research designs, this is not considered to be a weakness. Rather, it is a choice, the purpose of which varies depending on the type of purposive sampling technique that is used. For example, in homogeneous sampling, units are selected based on their having similar characteristics because such characteristics are of particular interest to the researcher. By contrast, critical case sampling is frequently used in exploratory, qualitative research in order to assess whether the phenomenon of interest even exists (amongst other reasons).

During the course of a qualitative or mixed methods research design, more than one type of purposive sampling technique may be used. For example, critical case sampling may be used to investigate whether a phenomenon is worth investigating further, before a maximum variation sampling technique is used to develop a wider picture of the phenomenon. We explain the different goals of these types of purposive sampling technique in the next section.

There are a wide range of purposive sampling techniques that you can use (see Patton, 1990, 2002; Kuzel, 1999, for a complete list). Each of these types of purposive sampling technique is discussed in turn:

  • Maximum variation (heterogeneous) sampling
  • Homogeneous sampling
  • Typical case sampling
  • Extreme (or deviant) case sampling
  • Critical case sampling
  • Total population sampling
  • Expert sampling

Maximum variation sampling, also known as heterogeneous sampling, is a purposive sampling technique used to capture a wide range of perspectives relating to the thing that you are interested in studying; that is, maximum variation sampling is a search for variation in perspectives, ranging from those conditions that are viewed as typical through to those that are more extreme in nature. By conditions, we mean the units (i.e., people, cases/organisations, events, pieces of data) that are of interest to the researcher. These units may exhibit a wide range of attributes, behaviours, experiences, incidents, qualities, situations, and so forth. The basic principle behind maximum variation sampling is to gain greater insight into a phenomenon by looking at it from all angles. This can often help the researcher to identify common themes that are evident across the sample.
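
One simplistic way to operationalize the idea behind maximum variation sampling is a greedy selection that repeatedly picks the candidate who adds the most attribute values not yet represented in the sample. The Python sketch below illustrates this with invented candidates and attributes; in practice, maximum variation sampling is guided by researcher judgment rather than an algorithm.

```python
# Hypothetical greedy sketch of maximum variation sampling: repeatedly pick
# the candidate whose attributes differ most from those already selected.

candidates = [
    {"id": "C1", "role": "student",       "years": "0-2", "setting": "urban"},
    {"id": "C2", "role": "clinician",     "years": "10+", "setting": "rural"},
    {"id": "C3", "role": "administrator", "years": "3-9", "setting": "urban"},
    {"id": "C4", "role": "student",       "years": "0-2", "setting": "rural"},
    {"id": "C5", "role": "clinician",     "years": "3-9", "setting": "suburban"},
]
attributes = ["role", "years", "setting"]
sample_size = 3

def novelty(candidate, selected):
    """Count attribute values not yet represented in the selected sample."""
    seen = {attr: {s[attr] for s in selected} for attr in attributes}
    return sum(candidate[attr] not in seen[attr] for attr in attributes)

selected = []
pool = list(candidates)
while pool and len(selected) < sample_size:
    best = max(pool, key=lambda c: novelty(c, selected))
    selected.append(best)
    pool.remove(best)

print([p["id"] for p in selected])   # a small sample spread across roles and settings
```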

Homogeneous sampling is a purposive sampling technique that aims to achieve a homogeneous sample; that is, a sample whose units (e.g., people, cases, etc.) share the same (or very similar) characteristics or traits (e.g., a group of people that are similar in terms of age, gender, background, occupation, etc.). In this respect, homogeneous sampling is the opposite of maximum variation sampling. A homogeneous sample is often chosen when the research question being addressed is specific to the characteristics of the particular group of interest, which is subsequently examined in detail.

Typical case sampling is a purposive sampling technique used when you are interested in the normality/typicality of the units (e.g., people, cases, events, settings/contexts, places/sites) you are studying, because they are normal/typical. The word typical does not mean that the sample is representative in the sense of probability sampling (i.e., that the sample shares the same/similar characteristics of the population being studied). Rather, the word typical means that the researcher has the ability to compare the findings from a study using typical case sampling with other similar samples (i.e., comparing samples, not generalising a sample to a population). Therefore, with typical case sampling, you cannot use the sample to make generalisations to a population, but the sample could be illustrative of other similar samples. Whilst typical case sampling can be used exclusively, it may also follow another type of purposive sampling technique, such as maximum variation sampling, which can act as an exploratory sampling strategy to identify the typical cases that are subsequently selected.

Extreme (or deviant) case sampling is a type of purposive sampling that is used to focus on cases that are special or unusual , typically in the sense that the cases highlight notable outcomes , failures or successes . These extreme (or deviant) cases are useful because they often provide significant insight into a particular phenomenon, which can act as lessons (or cases of best practice) that guide future research and practice. In some cases, extreme (or deviant) case sampling is thought to reflect the purest form of insight into the phenomenon being studied.

Critical case sampling is a type of purposive sampling technique that is particularly useful in exploratory qualitative research, research with limited resources, as well as research where a single case (or small number of cases) can be decisive in explaining the phenomenon of interest. It is this decisive aspect of critical case sampling that is arguably the most important. To know if a case is decisive, think about the following statements: “If it happens there, it will happen anywhere”; or “if it doesn’t happen there, it won’t happen anywhere”; and “If that group is having problems, then we can be sure all the groups are having problems” (Patton, 2002, p. 237). Whilst such critical cases should not be used to make statistical generalisations, it can be argued that they can help in making logical generalisations. However, such logical generalisations should be made carefully.

Total population sampling is a type of purposive sampling technique where you choose to examine the entire population (i.e., the total population) that has a particular set of characteristics (e.g., specific experience, knowledge, skills, exposure to an event, etc.). In such cases, the entire population is often chosen because the size of the population that has the particular set of characteristics you are interested in is very small. Therefore, if a small number of units (i.e., people, cases/organisations, etc.) were not included in the sample that is investigated, it might be felt that a significant piece of the puzzle was missing [see the article, Total population sampling, to learn more].

Expert sampling is a type of purposive sampling technique that is used when your research needs to glean knowledge from individuals that have particular expertise . This expertise may be required during the exploratory phase of qualitative research, highlighting potential new areas of interest or opening doors to other participants. Alternately, the particular expertise that is being investigated may form the basis of your research, requiring a focus only on individuals with such specific expertise. Expert sampling is particularly useful where there is a lack of empirical evidence in an area and high levels of uncertainty, as well as situations where it may take a long period of time before the findings from research can be uncovered. Therefore, expert sampling is a cornerstone of a research design known as expert elicitation .

Whilst each of the different types of purposive sampling has its own advantages and disadvantages, there are some broad advantages and disadvantages to using purposive sampling, which are discussed below.

Advantages of purposive sampling

There are a wide range of qualitative research designs that researchers can draw on. Achieving the goals of such qualitative research designs requires different types of sampling strategy and sampling technique . One of the major benefits of purposive sampling is the wide range of sampling techniques that can be used across such qualitative research designs; purposive sampling techniques that range from homogeneous sampling through to critical case sampling , expert sampling , and more.

Whilst the various purposive sampling techniques each have different goals, they can provide researchers with the justification to make generalisations from the sample that is being studied, whether such generalisations are theoretical , analytic and/or logical in nature. However, since each of these types of purposive sampling differs in terms of the nature and ability to make generalisations, you should read the articles on each of these purposive sampling techniques to understand their relative advantages.

Qualitative research designs can involve multiple phases, with each phase building on the previous one. In such instances, different types of sampling technique may be required at each phase. Purposive sampling is useful in these instances because it provides a wide range of non-probability sampling techniques for the researcher to draw on. For example, critical case sampling may be used to investigate whether a phenomenon is worth investigating further, before adopting an expert sampling approach to examine specific issues further.

Disadvantages of purposive sampling

Purposive samples, irrespective of the type of purposive sampling used, can be highly prone to researcher bias. The idea that a purposive sample has been created based on the judgement of the researcher is not a good defence when it comes to alleviating possible researcher biases, especially when compared with probability sampling techniques that are designed to reduce such biases. However, this judgemental, subjective component of purposive sampling is only a major disadvantage when such judgements are ill-conceived or poorly considered; that is, where judgements have not been based on clear criteria, whether a theoretical framework, expert elicitation, or some other accepted criteria.

The subjectivity and non-probability based nature of unit selection (i.e., selecting people, cases/organisations, etc.) in purposive sampling means that it can be difficult to defend the representativeness of the sample. In other words, it can be difficult to convince the reader that the judgement you used to select units to study was appropriate. For this reason, it can also be difficult to convince the reader that research using purposive sampling achieved theoretical/analytic/logical generalisation . After all, if different units had been selected, would the results and any generalisations have been the same?

  • Piderman, K.M.; Radecki Breitkopf, C.; Jenkins, S.M.; Lapid, M.I.; Kwete, G.M.; Sytsma, T.T.; Lovejoy, L.A.; Yoder, T.J.; Jatoi, A. The Impact of a Spiritual Legacy Intervention in Patients with Brain Cancers and Other Neurologic Illnesses and Their Support Persons. Psychooncology 2017 , 26 , 346–353. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Piderman, K.M.; Egginton, J.S.; Ingram, C.; Dose, A.M.; Yoder, T.J.; Lovejoy, L.A.; Swanson, S.W.; Hogg, J.T.; Lapid, M.I.; Jatoi, A.; et al. I’m Still Me: Inspiration and Instruction from Individuals with Brain Cancer. J. Health Care Chaplain. 2017 , 23 , 15–33. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Cutillo, A.; Zimmerman, K.; Davies, S.; Madan-Swain, A.; Landier, W.; Arynchyna, A.; Rocque, B.G. Coping Strategies Used by Caregivers of Children with Newly Diagnosed Brain Tumors. J. Neurosurg. Pediatr. 2018 , 23 , 30–39. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Giovagnoli, A.R.; Paterlini, C.; Meneses, R.F.; Martins da Silva, A. Spirituality and Quality of Life in Epilepsy and Other Chronic Neurological Disorders. Epilepsy Behav. 2019 , 93 , 94–101. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Randazzo, D.M.; McSherry, F.; Herndon, J.E.; Affronti, M.L.; Lipp, E.S.; Flahiff, C.; Miller, E.; Woodring, S.; Boulton, S.; Desjardins, A.; et al. Complementary and Integrative Health Interventions and Their Association with Health-Related Quality of Life in the Primary Brain Tumor Population. Complement. Ther. Clin. Pract. 2019 , 36 , 43–48. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Hyer, J.M.; Paredes, A.Z.; Kelley, E.P.; Tsilimigras, D.; Meyer, B.; Newberry, H.; Pawlik, T.M. Characterizing Pastoral Care Utilization by Cancer Patients. Am. J. Hosp. Palliat. Care 2021 , 38 , 758–765. [ Google Scholar ] [ CrossRef ]
  • Randazzo, D.M.; McSherry, F.; Herndon, J.E., II; Affronti, M.L.; Lipp, E.S.; Miller, E.S.; Woodring, S.; Healy, P.; Jackman, J.; Crouch, B.; et al. Spiritual Well-Being and Its Association with Health-Related Quality of Life in Primary Brain Tumor Patients. Neurooncol. Pract. 2021 , 8 , 299–309. [ Google Scholar ] [ CrossRef ]
  • Baksi, A.; Arda Sürücü, H.; Genç, H. Psychological Hardiness and Spirituality in Patients with Primary Brain Tumors: A Comparative Study. J. Relig. Health 2021 , 60 , 2799–2809. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Sprik, P.J.; Tata, B.; Kelly, B.; Fitchett, G. Religious/Spiritual Concerns of Patients with Brain Cancer and Their Caregivers. Ann. Palliat. Med. 2021 , 10 , 964–969. [ Google Scholar ] [ CrossRef ]
  • Applebaum, A.J.; Baser, R.E.; Roberts, K.E.; Lynch, K.; Gebert, R.; Breitbart, W.S.; Diamond, E.L. Meaning-Centered Psychotherapy for Cancer Caregivers: A Pilot Trial among Caregivers of Patients with Glioblastoma Multiforme. Transl. Behav. Med. 2022 , 12 , 841–852. [ Google Scholar ] [ CrossRef ]
  • Maugans, T.A. The SPIRITual History. Arch. Fam. Med. 1996 , 5 , 11–16. [ Google Scholar ] [ CrossRef ]
  • Puchalski, C.; Ferrell, B.; Virani, R.; Otis-Green, S.; Baird, P.; Bull, J.; Chochinov, H.; Handzo, G.; Nelson-Becker, H.; Prince-Paul, M.; et al. Improving the Quality of Spiritual Care as a Dimension of Palliative Care: The Report of the Consensus Conference. J. Palliat. Med. 2009 , 12 , 885–904. [ Google Scholar ] [ CrossRef ]
  • Nolan, S.; Saltmarch, P.; Leget, C. Spiritual Care in Palliative Care: Working towards an EAPC Task Force. Eur. J. Pal. Care 2011 , 18 , 86–89. [ Google Scholar ]
  • Puchalski, C.M.; Vitillo, R.; Hull, S.K.; Reller, N. Improving the Spiritual Dimension of Whole Person Care: Reaching National and International Consensus. J. Palliat. Med. 2014 , 17 , 642–656. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Gholamhosseini, M.; Dehghan, M.; Azzizadeh Forouzi, M.; Mangolian Shahrbabaki, P.; Roy, C. Effectiveness of Spiritual Counseling on the Enhancement of Hope in Iranian Muslim Patients with Myocardial Infarction: A Two-Month Follow-Up. J. Relig. Health 2022 , 61 , 3898–3908. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Kobasa, S.C. Stressful Life Events, Personality, and Health: An Inquiry into Hardiness. J. Pers. Soc. Psychol. 1979 , 37 , 1–11. [ Google Scholar ] [ CrossRef ]
  • Korman, M.B.; Ellis, J.; Moore, J.; Bilodeau, D.; Dulmage, S.; Fitch, M.; Mueller, C.; Sahgal, A.; Moroney, C. Dignity Therapy for Patients with Brain Tumours: Qualitative Reports from Patients, Caregivers and Practitioners. Ann. Palliat. Med. 2021 , 10 , 838–845. [ Google Scholar ] [ CrossRef ]


Search strategy: PubMed/MEDLINE, searched 7 May 2024; 214 records retrieved.
Query: (Spirituality OR Holistic Medicine) AND (Brain Tumors OR Neuro-oncology OR Glioma OR meningioma OR astrocytoma OR glioblastoma OR ependymoma OR schwannoma OR pituitary adenoma OR oligodendroglioma)
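The review reports only the database, query string, search date, and record count; it does not describe how the search was executed. As a minimal, illustrative sketch, the query above could be reproduced programmatically with Biopython's Entrez module. The contact email and the result cap are assumptions for the example, not details from the article.

```python
from Bio import Entrez  # Biopython interface to the NCBI E-utilities

# NCBI asks for a contact address; this one is a placeholder assumption.
Entrez.email = "researcher@example.org"

query = (
    "(Spirituality OR Holistic Medicine) AND "
    "(Brain Tumors OR Neuro-oncology OR Glioma OR meningioma OR astrocytoma OR "
    "glioblastoma OR ependymoma OR schwannoma OR pituitary adenoma OR oligodendroglioma)"
)

# esearch returns matching PubMed IDs; retmax=300 comfortably covers the 214 records reported above.
handle = Entrez.esearch(db="pubmed", term=query, retmax=300)
record = Entrez.read(handle)
handle.close()

print(record["Count"])        # total number of matching records
print(record["IdList"][:5])   # first few PMIDs for screening
```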
Included studies (study; PMID; design; participant group, n):

  • Strang et al. 2001; PMID 11301663; Qualitative Study (Interviews); Patients, 20; Caregivers, 16
  • Brody et al. 2004; PMID 15226285; Case Report; Patient, 1
  • Lipsman et al. 2007; PMID 17996072; Qualitative Study (Interviews); Patients, 7; Caregivers, 22
  • Nixon et al. 2010; PMID 20529167; Qualitative Study (Survey); Patients, 21
  • Zelcer et al. 2010; PMID 20194254; Qualitative Study (Interviews); Caregivers, 25
  • Cavers et al. 2012; PMID 22431898; Prospective Qualitative Study (Interviews); Patients, 26; Caregivers, 23; Hospital Staff, 19
  • Newberry et al. 2013; PMID 23615145; Prospective Qualitative Study (Interviews); Patients, 50; Caregivers, 50
  • Nixon et al. 2013; PMID 23374999; Mixed Methods (Surveys + Thematic Analysis); Hospital Staff, 12
  • Sizoo et al. 2014; PMID 24162875; Retrospective Qualitative Study (Survey); Caregivers, 83
  • Piderman et al. 2015; PMID 24952300; Prospective Qualitative Study (Interviews); Patients, 25
  • Strang et al. 2001; PMID 11762974; Qualitative Study (Interviews); Patients, 20; Caregivers, 16; Hospital Staff, 16
  • Piderman et al. 2017; PMID 26643586; RCT; Patients, 24; Caregivers, 24
  • Piderman et al. 2017; PMID 27398684; Prospective Qualitative Study (Interviews); Patients, 19
  • Cutillo et al. 2018; PMID 30485195; Prospective Qualitative Study (Interviews); Caregivers, 40
  • Giovagnoli et al. 2019; PMID 30851485; Comparative Cohort Study; Patients, 28
  • Randazzo et al. 2019; PMID 31383442; Retrospective Cohort Study; Patients, 845
  • Hyer et al. 2021; PMID 32799646; Retrospective Cohort Study; Patients, 232
  • Randazzo et al. 2021; PMID 34055377; Retrospective Cohort Study; Patients, 606
  • Baksi et al. 2021; PMID 33818705; Prospective Cohort Comparison; Patients, 61; Healthy Subjects, 61
  • Sprik et al. 2021; PMID 32921085; Qualitative Study (Interview); Hospital Staff, 1
  • Appelbaum et al. 2022; PMID 35852487; Mixed-Methods RCT; Caregivers, 60

Mehta, N.H.; Prajapati, M.; Aeleti, R.; Kinariwala, K.; Ohri, K.; McCabe, S.; Buller, Z.; Leskinen, S.; Nawabi, N.L.; Bhatt, V.; et al. The Power of a Belief System: A Systematic Qualitative Synthesis of Spiritual Care for Patients with Brain Tumors. J. Clin. Med. 2024 , 13 , 4871. https://doi.org/10.3390/jcm13164871



  • Open access
  • Published: 07 August 2024

Factors critical for the successful delivery of telehealth to rural populations: a descriptive qualitative study

  • Rebecca Barry, ORCID: orcid.org/0000-0003-2272-4694
  • Elyce Green, ORCID: orcid.org/0000-0002-7291-6419
  • Kristy Robson, ORCID: orcid.org/0000-0002-8046-7940
  • Melissa Nott, ORCID: orcid.org/0000-0001-7088-5826

BMC Health Services Research volume 24, Article number: 908 (2024)

Abstract

Background

The use of telehealth has proliferated to the point of being a common and accepted method of healthcare service delivery. Due to the rapidity of telehealth implementation, the evidence underpinning this approach to healthcare delivery is lagging, particularly when considering the uniqueness of some service users, such as those in rural areas. This research aimed to address the current gap in knowledge related to the factors critical for the successful delivery of telehealth to rural populations.

Methods

This research used a qualitative descriptive design to explore telehealth service provision in rural areas from the perspective of clinicians and describe factors critical to the effective delivery of telehealth in rural contexts. Semi-structured interviews were conducted with clinicians from allied health and nursing backgrounds working in child and family nursing, allied health services, and mental health services. A manifest content analysis was undertaken using the Framework approach.

Results

Sixteen health professionals from nursing, clinical psychology, and social work were interviewed. Participants mostly identified as female (88%) and ranged in age from 26 to 65 years with a mean age of 47 years. Three overarching themes were identified: (1) Navigating the role of telehealth to support rural healthcare; (2) Preparing clinicians to engage in telehealth service delivery; and (3) Appreciating the complexities of telehealth implementation across services and environments.

Conclusions

This research suggests that successful delivery of telehealth to rural populations requires consideration of the context in which telehealth services are being delivered, particularly in rural and remote communities where there are challenges with resourcing and training to support health professionals. Rural populations, like all communities, need choice in healthcare service delivery and models to increase accessibility. Preparation and specific, intentional training for health professionals on how to transition to and maintain telehealth services is a critical factor for delivery of telehealth to rural populations. Future research should further investigate the training and supports required for telehealth service provision, including who, when and what training will equip health professionals with the appropriate skill set to deliver rural telehealth services.


Introduction

Telehealth is a commonly used mode of service delivery in rural health settings because of its ability to augment service delivery across wide geographical areas. During the COVID-19 pandemic, the use of telehealth became widespread as it was rapidly adopted across many new fields of practice to allow healthcare to continue despite requirements for physical distancing. In Australia, the Medicare Benefits Schedule (MBS) lists health services that are subsidised by the federal government. Telehealth items were extensively added to these services as part of the response to COVID-19 [ 1 ]. Although there are no longer requirements for physical distancing in Australia, many health providers have continued to offer services via telehealth, particularly in rural areas [ 2 , 3 ]. For the purpose of this research, telehealth was defined as a consultation with a healthcare provider by phone or video call [ 4 ]. Telehealth service provision in rural areas requires consideration of contextual factors such as access to reliable internet, community members’ means to finance this access [ 5 ], and the requirement for health professionals to function across a broad range of specialty skills. These factors present a case for considering the delivery of telehealth in rural areas as a unique approach, rather than as one portion of the broader use of telehealth.

Research focused on rural telehealth has proliferated alongside the rapid implementation of this service mode. To date, there has been a focus on the impact of telehealth on areas such as client access and outcomes [ 2 ], client and health professional satisfaction with services and technology [ 6 ], direct and indirect costs to the patient (travel cost and time), healthcare service provider staffing, lower onsite healthcare resource utilisation, improved physician recruitment and retention, and improved client access to care and education [ 7 , 8 ]. In terms of service implementation, these elements are important but do not outline the broader implementation factors critical to the success of telehealth delivery in rural areas. One study by Sutarsa et al. explored the implications of telehealth as a replacement for face-to-face services from the perspectives of general practitioners and clients [ 9 ] and articulated that telehealth services are not a like-for-like service compared to face-to-face modes. Research has also highlighted the importance of understanding the experience of telehealth in rural Australia across different population groups, including Aboriginal and Torres Strait Islander peoples, and the need to consider culturally appropriate services [ 10 , 11 , 12 , 13 ].

Research is now required to determine what the critical implementation factors are for telehealth delivery in rural areas. This type of research would move towards answering calls for interdisciplinary, qualitative, place-based research [ 12 ] that explores factors required for the sustainability and usability of telehealth in rural areas. It would also contribute to the currently limited understanding of implementation factors required for telehealth delivery to rural populations [ 14 ]. There is a reasonable expectation that there is consistency in the way health services are delivered, particularly across geographical locations. Due to the rapid implementation of telehealth services, there was limited opportunity to proactively identify factors critical for successful telehealth delivery in rural areas and this has created a lag in policy, process, and training. This research aimed to address this gap in the literature by exploring and describing rural health professionals’ experiences providing telehealth services. For the purpose of this research, rural is inclusive of locations classified as rural or remote (MM3-6) using the Modified Monash Model which considers remoteness and population size in its categorisation [ 15 ].
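To make the MM3-6 inclusion rule concrete, the following minimal sketch lists the Modified Monash categories and the study's rural/remote cut-off. The category labels are paraphrased summaries rather than the classification's official wording, and the helper function is a hypothetical illustration, not part of the study's procedures.

```python
# Paraphrased summary of the Modified Monash Model categories (MM1-MM7);
# see the Department of Health and Aged Care documentation for the formal definitions.
MODIFIED_MONASH = {
    1: "Metropolitan areas",
    2: "Regional centres",
    3: "Large rural towns",
    4: "Medium rural towns",
    5: "Small rural towns",
    6: "Remote communities",
    7: "Very remote communities",
}

def is_rural_for_this_study(mm_category: int) -> bool:
    """Hypothetical helper: this study treats MM3-6 as rural or remote."""
    return 3 <= mm_category <= 6

for category, label in sorted(MODIFIED_MONASH.items()):
    status = "included" if is_rural_for_this_study(category) else "excluded"
    print(f"MM{category} ({label}): {status}")
```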

Methods

This research study adopted a qualitative descriptive design as described by Sandelowski [ 16 ]. The purpose of a descriptive study is to document and describe a phenomenon of interest [ 17 ] and this method is useful when researchers seek to understand who was involved, what occurred, and the location of the phenomena of interest [ 18 ]. The phenomenon of interest for this research was the provision of telehealth services to rural communities by health professionals. In line with this, a purposive sampling technique was used to identify participants who have experience of this phenomenon [ 19 ]. This research is reported in line with the consolidated criteria for reporting qualitative research [ 20 ] to enhance transparency and trustworthiness of the research process and results [ 21 ].

Research aims

This research aimed to:

  1. Explore telehealth service provision in rural areas from the perspective of clinicians.
  2. Describe factors critical to the successful delivery of telehealth in rural contexts.

Participant recruitment and data collection

People eligible to participate in the research were allied health (using the definition provided by Allied Health Professions Australia [ 22 ]) or nursing staff who delivered telehealth services to people living in the geographical area covered by two rural local health districts in New South Wales, Australia (encompassing rural areas MM3-6). Health organisations providing telehealth service delivery in the southwestern and central western regions of New South Wales were identified through the research teams’ networks and invited to be part of the research.

Organisations were intentionally selected so that their telehealth adoption varied, capturing different experiences ranging from newly established (prompted by COVID-19) to well established (> 10 years of telehealth use). Organisations included government, non-government, and not-for-profit health service providers offering child and family nursing, allied health services, and mental health services. Child and family nursing services were delivered by a government health service and a not-for-profit specialist service, providing health professional advice, education, and guidance to families with a baby or toddler. Child and family nurses were in the same geographical region as the families receiving telehealth, and their transition to telehealth services was prompted by the COVID-19 pandemic. The participating allied health service was a large, non-government provider of allied health services to regional New South Wales; allied health professionals were in the same region as the clients receiving telehealth services, and use of telehealth in this organisation had commenced prior to the COVID-19 pandemic. Telehealth mental health services were delivered by an emergency mental health team located at a large regional hospital to clients in another healthcare facility or location where the health professional could not be physically present (typically a lower-acuity health service in a rural location).

Once organisations agreed to disseminate the research invitation, a key contact person employed at each health organisation invited staff to participate via email, and staff were provided with the research team’s contact details in the invitation. All recruitment and consent processes were managed by the research team to minimise the risk of real or perceived coercion between staff and the key contact person, who was often in a supervisory or managerial position within the organisation. Data were collected through semi-structured interviews conducted on an online platform with only the interviewer and participant present. Interviews were conducted during November and December 2021 by a research team member trained in qualitative data collection and were transcribed verbatim by a professional transcribing service. All participants were offered the opportunity to review their transcript and provide feedback; however, none opted to do so. Data saturation was not used as guidance for participant numbers, taking the view of Braun and Clarke [ 23 ] that meaning is generated through the analysis rather than by reaching a point of saturation.

Data analysis

Researchers undertook a manifest content analysis of the data using the Framework approach developed by Ritchie and Spencer [ 24 ]. All four co-authors were involved in the data analysis process. Framework uses five stages of analysis: (1) familiarisation, (2) identifying a thematic framework based on emergent overarching themes, (3) application of the coding framework to the interview transcripts (indexing), (4) reviewing and charting of themes and subthemes, and (5) mapping and interpretation [ 24 , p. 178]. The research team initially analysed a common interview and identified codes and themes, then independently applied these to the remaining interviews. Themes were centrally recorded, reviewed, and discussed by the research team prior to inclusion in the thematic framework. Final themes were confirmed via collaborative discussion and consensus. The iterative process used to review and code data was recorded in an Excel spreadsheet to ensure auditability and credibility and to enhance the trustworthiness of the analysis process.
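As an illustration of the charting stage described above, the short sketch below builds a toy framework matrix in which rows are participants, columns are the final themes, and cells hold indexed summaries. The cell contents and interview references are hypothetical examples rather than data from the study, and pandas is used only for convenience; the authors report using an Excel spreadsheet.

```python
import pandas as pd

# Stage 4 of the Framework approach ("charting"): summarise indexed transcript
# passages into a matrix of participants (rows) by themes (columns).
themes = [
    "Navigating the role of telehealth",
    "Preparing clinicians for telehealth",
    "Complexities of implementation",
]
participants = ["Patrick", "Tracey", "Jade"]  # pseudonyms as used in the results

chart = pd.DataFrame(index=participants, columns=themes)

# Hypothetical charted summaries; in practice each cell points back to indexed transcript text.
chart.loc["Patrick", "Navigating the role of telehealth"] = "Telehealth avoids three-hour trips for face-to-face care (interview 1)"
chart.loc["Jade", "Preparing clinicians for telehealth"] = "Uses ad hoc video consults to gauge how a mother is coping (interview 7)"

print(chart.fillna("-"))
```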

This study was approved by the Greater Western NSW Human Research Ethics Committee and Charles Sturt University Human Research Ethics Committee (approval numbers: 2021/ETH00088 and H21215). All participants provided written consent.

Results

Eighteen health professionals consented to be interviewed. Two were lost to follow-up; therefore, semi-structured interviews were conducted with 16 health professionals, the majority of whom were from the discipline of nursing ( n  = 13, 81.3%). Participant demographics and their pseudonyms are shown in Table  1 .

Participants mostly identified as female ( n  = 14, 88%) and ranged in age from 26 to 65 years with a mean age of 47 years. Participants all delivered services to rural communities in the identified local health districts and resided within the geographical area they serviced. The participants resided in areas classified as MM3-6 but were most likely to reside in an area classified MM3 (81%). Average interview time was 38 min, and all interviews were conducted online via Zoom.
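As a quick consistency check on the reported demographics (not part of the original analysis), the rounded percentages follow directly from the counts:

```python
# 14 of 16 participants identified as female; 13 of 16 were from nursing.
female, nursing, total = 14, 13, 16
print(100 * female / total)   # 87.5  -> reported as 88%
print(100 * nursing / total)  # 81.25 -> reported as 81.3%
```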

Three overarching themes were identified through the analysis of interview transcripts with health professionals. These themes were: (1) Navigating the role of telehealth to support rural healthcare; (2) Preparing clinicians to engage in telehealth service delivery; and (3) Appreciating the complexities of telehealth implementation across services and environments.

Theme 1: navigating the role of telehealth to support rural healthcare

The first theme described clinicians’ experiences of using telehealth to deliver healthcare to rural communities, including perceived benefits and challenges to acceptance, choice, and access. Interview participants identified several factors that impacted on or influenced the way they could deliver telehealth, and these were common across the different organisational structures. Clinicians highlighted the need to consider how to effectively navigate the role of telehealth in supporting their practice, including when it would enhance their practice, and when it might create barriers. The ability to improve rural service provision through greater access was commonly discussed by participants. In terms of factors important for telehealth delivery in rural contexts, the participants demonstrated that knowledge of why and how telehealth was used were important, including the broadened opportunity for healthcare access and an understanding of the benefits and challenges of providing these services.

Access to timely and specialist healthcare for rural communities

Participants described a range of benefits using telehealth to contact small, rural locations and facilitate greater access to services closer to home. This was particularly evident when there was lack of specialist support in these areas. These opportunities meant that rural people could receive timely care that they required, without the burden of travelling significant distances to access health services.

The obvious thing in an area like this, is that years ago, people were being transported three hours just to see us face to face. It’s obviously giving better, more timely access to services. (Patrick)

Participants saw staff access to specialist support as an important aspect of rural healthcare because the lack of staffing and resources within these areas potentially increased the risks for local staff, particularly when managing clients with acute mental illness.

Within the metro areas they’ve got so many staff and so many hospitals and they can manage mental health patients quite well within those facilities, but with us some of these hospitals will have one RN on overnight and it’s just crappy for them, and so having us able to do video link, it kind of takes the pressure off and we’re happy to make the decisions and the risky decisions for what that person needs. (Tracey)

Participants described how the option to use telehealth to provide specialised knowledge and expertise to support local health staff in rural hospitals likely led to more appropriate outcomes for clients wanting to be able to remain in their community. Conversely, Amber described the implications if telehealth was not available.

If there was some reason why the telehealth wasn’t available… quite often, I suppose the general process be down to putting the pressure on the nursing and the medical staff there to make a decision around that person, which is not a fair or appropriate thing for them to do. (Amber)

Benefits and challenges to providing telehealth in rural communities

Complementing the advantage of reduced travel time to access services was the ability for clients to access additional support via telehealth. For example, one participant described how telehealth was useful for troubleshooting clients’ problems rather than waiting for their next scheduled appointment.

If a mum rings you with an issue, you can always say to them “are you happy to jump onto My Virtual Care with me now?” We can do that, do a consult over My Virtual Care. Then I can actually gauge how mum is. (Jade)

While accessibility was a benefit, participants highlighted that rural communities need to be provided with choice rather than it being assumed that telehealth is the preferred option for everyone, as many rural clients want face-to-face services.

They’d all prefer, I think, to be able to see someone in person. I think that’s generally what NSW rural [want] —’cause I’m from country towns as well—there’s no substitute, like I said, for face-to-face assessment. (Adam)

Other, more practical limitations of broad adoption of telehealth raised by the participants included issues with managing technology and variability in internet connectivity.

For many people in the rural areas, it’s still an issue having that regular [internet] connection that works all the time. I think it’s a great option but I still think it’s something that some rural people will always have some challenges with because it’s not—there’s so many black spots and so many issues still with the internet connection in rural areas. Even in town, there’s certain areas that are still having lots of problems. (Chloe)

Participants also identified barriers related to assumptions that all clients will have access to technology and have the necessary data to undertake a telehealth consultation, which wasn’t always the case, particularly with individuals experiencing socioeconomic disadvantage.

A lot of [Aboriginal] families don’t actually have access to telehealth services. Unless they use their phone. If they have the technology on their phones. I found that was a little bit of an issue to try and help those particular clients to get access to the internet, to have enough data on their phone to make that call. There was a lot of issues and a lot of things that we were putting in complaints about as they were going “we’re using up a lot of these peoples’ data and they don’t have internet in their home.” (Evelyn).

Other challenges identified by the participants were related to use of telehealth for clients that required additional support. Many participants talked about the complexities of using an interpreter during a telehealth consultation for culturally and linguistically diverse clients.

Having interpreters, that’s another element that’s really, really difficult because you’re doing video link, but then you’ve also got the phone on speaker and you’re having this three-way conversation. Even that, in itself, that added element on video link is really, really tough. It’s a really long process. (Tracey)

In summary, this theme described some of the benefits and constraints when using telehealth for the delivery of rural health services. The participants demonstrated the importance of understanding the needs and contexts of individual clients, and accounting for this when making decisions to incorporate telehealth into their service provision. Understanding how and why telehealth can be implemented in rural contexts was an important foundation for the delivery of these services.

Theme 2: preparing clinicians to engage in telehealth service delivery

The preparation required for clinicians to engage with telehealth service delivery was highlighted and the participants described the unique set of skills required to effectively build rapport, engage, and carry out assessments with clients. For many participants who had not routinely used telehealth prior to the COVID-19 pandemic, the transition to using telehealth had been rapid. The participants reflected on the implications of rapidly adopting these new practices and the skills they required to effectively deliver care using telehealth. These skills were critical for effective delivery of telehealth to rural communities.

Rapid adoption of new skills and ways of working

The rapid and often unsupported implementation of telehealth in response to the COVID-19 pandemic meant clinicians needed to learn and adapt to telehealth, often with little or no instruction.

We had to do virtual, virtually overnight we were changed to, “Here you go. Do it this way,” without any real education. It was learned as we went because everybody was in the same boat. Everyone was scrabbling to try and work out how to do it. (Chloe)

In addition to telehealth services starting quickly, telehealth provision requires clinicians to use a unique set of skills. Therapeutic interventions and approaches were identified as being more challenging when seeing a client through a screen, compared to being physically present together in a room.

The body language is hidden a little bit when you’re on teleconference, whereas when you’re standing up face to face with someone, or standing side by side, the person can see the whole picture. When you’re on the video link, the patient actually can’t—you both can’t see each other wholly. That’s one big barrier. (Adam)

Participants emphasised the communication skills, such as active listening and attention to body language, required when engaging via telehealth. These skills were seen as integral to building rapport and connection. The importance of language in an environment with limited visibility of body language is further demonstrated by one participant, who described tuning into the timing and flow of the conversation to avoid interrupting and how these skills were pertinent when using telehealth.

In the beginning especially, we might do this thing where I think they’ve finished or there’s a bit of silence, so I go to speak and then they go to speak at the same time, and that’s different because normally in person you can really gauge that quite well if they’ve got more to say. I think those little things mean that you’ve got to work a bit harder and you’ve got to bring those things to the attention of the client often. (Robyn)

Preparing clinicians to engage in telehealth also required skills in sharing clear and consistent information with clients about the process of interacting via telehealth. This included information to reassure the client that the telehealth appointment was private, as well as to prepare them for potential interruptions due to connection issues.

I think being really explicitly clear about the fact that with our setups we have here, no one can dial in, no one else is in my room even watching you. We’re not recording, and there’s a lot of extra information, I think around that we could be doing better in terms of delivering to the person. (Amber)

Becoming accustomed to working through the ‘window’

Telehealth was often described as a window rather than a view of the whole person, which presented limitations for clinicians, such as difficulty seeing nuances of expression. Participants described the difficulties of assessing a client using telehealth when they cannot see the whole picture, including facial expressions, movement, behaviour, interactions with others, dress, and hygiene.

I found it was quite difficult because you couldn’t always see the actual child or the baby, especially if they just had their phone. You couldn’t pick up the body language. You couldn’t always see the facial expressions. You couldn’t see the child and how the child was responding. It did inhibit a lot of that side of our assessing. Quite often you’d have to just write, “Unable to view child.” You might be able to hear them but you couldn’t see them. (Chloe)

Due to the window view, the participants described how they needed to pay even greater attention to eye contact and tone of voice when engaging with clients via telehealth.

I think the eye contact is still a really important thing. Getting the flow of what they’re comfortable with a little bit too. It’s being really careful around the tone of voice as well too, because—again, that’s the same for face-to-face, but be particularly careful of it over telehealth. (Amber)

This theme demonstrates that there are unique and nuanced skills required by clinicians to effectively engage in provision of rural healthcare services via telehealth. Many clinicians described how the rapid uptake of telehealth required them to quickly adapt to providing telehealth services, and they had to modify their approach rather than replicate what they would do in face-to-face contexts. Appreciating the different skill sets required for telehealth practice was perceived as an important element in supporting clinicians to deliver quality healthcare.

Theme 3: appreciating the complexities of telehealth implementation across services and environments

Participants commonly acknowledged that clinicians need to appreciate the many different environments in which telehealth is delivered, as well as the types of consultations being undertaken. This was particularly important when well-resourced large regional settings were engaging with small rural services or when clinicians were undertaking consultations within a client’s home.

Working from a different location and context

One of the factors identified as important for the successful delivery of services via telehealth was an understanding of the location and context that was being linked into. Participants regularly talked about the challenges when undertaking a telehealth consultation with clients at home, which impacted the quality of the consultation as it was easy to “lose focus” (Kelsey) and become distracted.

Instead of just coming in with one child, they had all the kids, all wanting their attention. I also found that babies and kids kept pressing the screen and would actually disconnect us regularly. (Chloe)

For participants located in larger regional locations delivering telehealth services to smaller rural hospitals, it was acknowledged that not all services had equivalent resources, skills, and experience with this type of healthcare approach.

They shouldn’t have to do—they’ve gotta double-click here, login there. They’re relying on speakers that don’t work. Sometimes they can’t get the cameras working. I think telehealth works as long as it’s really user friendly. I think nurses—as a nurse, we’re not supposed to be—I know IT’s in our job criteria, but not to the level where you’ve got to have a degree in technology to use it. (Adam)

Participants also recognised that supporting a client through a telehealth consultation adds workload stress, as rural clinicians are often under pressure from caseloads and juggling multiple other tasks while trying to troubleshoot the technology issues associated with a telehealth consultation.

Most people are like me, not great with computers. Sometimes the nurse has got other things in the Emergency Department she’s trying to juggle. (Eleanor)

Considerations for safety, privacy, and confidentiality

Participants talked about the challenges that arose due to inconsistencies in where and how the telehealth consultation would be conducted. Concerns about online safety and information privacy were identified by participants.

There’s the privacy issue, particularly when we might see someone and they might be in a bed and they’ve got a laptop there, and they’re not given headphones, and we’re blaring through the speaker at them, and someone’s three meters away in another bed. That’s not good. That’s a bit of a problem. (Patrick)

When telehealth was offered as an option to clients at a remote healthcare site, clinicians noted that some clients were not provided with adequate support and were left to undertake the consultation by themselves, which could create safety risks for the client and leave the telehealth clinician unable to control the situation.

There were some issues with patients’ safety though. Where the telehealth was located was just in a standard consult room and there was actually a situation where somebody self-harmed with a needle that was in a used syringe box in that room. Then it was like, you just can’t see high risk—environment. (Eleanor)

Additionally, participants noted that they were often using their own office space to conduct telehealth consultations rather than a clinical room which meant there were other considerations to think about.

Now I always lock my room so nobody can enter. That’s a nice little lesson learnt. I had a consult with a mum and some other clinicians came into my room and I thought “oh my goodness. I forgot to lock.” I’m very mindful now that I lock. (Jade)

This theme highlights the complexities that exist when implementing telehealth across a range of rural healthcare settings and environments. Participants noted that staff located in smaller rural areas had variable skills and experience in using telehealth, which could affect how effective the consultation was. Participants identified the importance of purposely considering the environment in which the telehealth consultation was being held, ensuring that privacy, safety, and distraction concerns have been adequately addressed before the consultation begins. These factors were considered important for the successful implementation of telehealth in rural areas.

Discussion

This study explored telehealth service delivery in various rural health contexts with 16 allied health and nursing clinicians who had provided telehealth services to people living in rural communities prior to and during the COVID-19 pandemic. Reflections gained from clinicians were analysed and reported thematically. The major themes identified were clinicians navigating the role of telehealth to support rural healthcare, the need to prepare clinicians to engage in telehealth service delivery, and appreciating the complexities of telehealth implementation across services and environments.

The utilisation of telehealth for health service delivery has been promoted as a solution to access and equity issues, particularly for rural communities that are often affected by limited health services due to distance and isolation [ 6 ]. This study identified a range of perceived benefits for both clients and clinicians, such as improved access to services across large geographic distances, including specialist care, and reduced travel time to engage with a range of health services. These findings are largely supported by the broader literature, such as the systematic review undertaken by Tsou et al. [ 25 ], which found that telehealth can improve clinical outcomes and the timeliness of access to services, including specialist knowledge. Clinicians in our study also noted the benefits of using telehealth for ad hoc clinical support outside of regular appointment times, which to date has not been commonly reported in the literature as a benefit. Further investigation into this aspect may be warranted.

The findings from this study identify a range of challenges that exist when delivering health services within a virtual context. Participants commonly highlighted that personal preferences for face-to-face sessions could not always be accommodated when implementing telehealth services in rural areas. The perceived technological possibilities for improving access can have unintended consequences for community members and may contribute to a lack of responsiveness to community needs [ 12 ]. It is therefore important to understand the client and their preferences for using telehealth rather than making assumptions about the appropriateness of this type of health service delivery [ 26 ]. As such, telehealth is likely to function best when there is a pre-established relationship between the client and clinician, and when clients have a good knowledge of their personal health and have access to and familiarity with digital technology [ 13 ]. Alternatively, it is appropriate to consider how telehealth can be a supplementary tool rather than a stand-alone service model replacing face-to-face interactions [ 13 ].

As identified in this study, managing technology and internet connectivity are commonly reported issues for rural communities engaging in telehealth services [ 27 , 28 ]. Additionally, within some rural communities with higher socioeconomic disadvantage, limited access to an appropriate level of technology and to the data required to undertake a telehealth consult was a deterrent to engaging with these types of services. Mathew et al. [ 13 ] found that limited bandwidth, which was further compromised by weather conditions, affected video consultations, and that clients without smartphones had difficulty accessing relevant virtual consultation software.

The findings presented here indicate that while telehealth can be a useful model, it may not be suitable for all clients or client groups. For example, the use of interpreters in telehealth to support clients was a key challenge identified in this study. This is supported by Mathew et al. [ 13 ] who identified that language barriers affected the quality of telehealth consultations and accessing appropriate interpreters was often difficult. Consideration of health and digital literacy, access and availability of technology and internet, appropriate client selection, and facilitating client choice are all important drivers to enhance telehealth experiences [ 29 ]. Nelson et al. [ 6 ] acknowledged the barriers that exist with telehealth, suggesting that ‘it is not the groups that have difficulty engaging, it is that telehealth and digital services are hard to engage with’ (p. 8). There is a need for telehealth services to be delivered in a way that is inclusive of different groups, and this becomes more pertinent in rural areas where resources are not the same as metropolitan areas.

The findings of this research highlight the unique set of skills required for health professionals to translate their practice across a virtual medium. The participants described these modifications in relation to communication skills, the ability to build rapport, conduct healthcare assessments, and provide treatment while looking at a ‘window view’ of a person. Several other studies have reported similar skillsets that are required to effectively use telehealth. Uscher-Pines et al. [ 30 ] conducted research on the experiences of psychiatrists moving to telemedicine during the COVID-19 pandemic and noted challenges affecting the quality of provider-patient interactions and difficulty conducting assessment through the window of a screen. Henry et al. [ 31 ] documented a list of interpersonal skills considered essential for the use of telehealth encompassing attributes related to set-up, verbal and non-verbal communication, relationship building, and environmental considerations.

Despite the literature uniformly agreeing that telehealth requires a unique skill set, there is no agreement on how, when, and for whom education related to these skills should be provided. The skills required for health professionals to use telehealth have been treated as an add-on to health practice rather than as a specialty skill set requiring learning and assessment. This is reflected in research such as that by Nelson et al. [ 6 ], who found that 58% of mental health professionals using telehealth in rural areas were not trained to use it. This gap between training and practice is likely to have arisen from the rapid and widespread implementation of telehealth during the COVID-19 pandemic (i.e., the change in MBS item numbers [ 1 ]) but has not been addressed in subsequent years. For practice to remain in step with policy and funding changes, the factors required for successful implementation of telehealth in rural practice must be addressed.

The lack of clarity around who must undertake training in telehealth, and how regularly, presents a challenge for rural health professionals, whose skill set has been described as specialist-generalist and covers a significant breadth of knowledge [ 32 ]. Maintaining knowledge currency across this breadth is integral and requires significant resources (time, travel, money) in an environment where access to education can be limited [ 33 ]. There is risk associated with continually adding skills to the workload of rural health professionals without adequate guidance and provision of time to develop and maintain these skills.

Education to equip rural health professionals with the skills needed to use telehealth effectively in their practice is developing; however, until education requirements are uniformly understood and made accessible, this gap is likely to continue to pose risks for rural health professionals and the community members accessing their services. Major investment in the education of all health professionals in telehealth service delivery, no matter the context, has been identified as critical [ 6 ].

This research highlights that the experience of using telehealth in rural communities is unique, and thus a ‘one size fits all’ approach is not helpful and can overlook the individual needs of a community. Participants described experiences of using telehealth that differed between rural communities, particularly for smaller, more remote locations where resources, staff support, and experience using telehealth were not always equivalent to larger rural locations. Research has indicated the need to invest in resourcing and education to support the expansion of telehealth, noting this is particularly important in rural, regional, and remote areas [ 34 ]. Our study recognises that this is an ongoing need as rural communities continue to have diverse experiences of using telehealth services. Careful consideration of the context of individual rural health services, including community needs, location, and resource availability at both ends of the consultation, is required. Use of telehealth cannot be expected to have the same outcomes in every area. It is imperative that service providers and clinicians delivering telehealth from metropolitan areas to rural communities appreciate and understand the uniqueness of every community, so that their approach is tailored and helps rather than hinders the experience for people in rural communities.

Limitations

There are a number of limitations inherent to the design of this study. Participants were recruited via their workplace, and thus, although steps were taken to ensure they understood the research would not affect their employment, it is possible some employees perceived an association between the research and their employment. Health professionals who had either very positive or very negative experiences with telehealth may have been more likely to participate, as they may have been more inclined to want to discuss their experiences. In addition, only health services that were already connected with the researchers’ networks were invited to participate. Other limitations include the purposive sampling, meaning that the opinions of the participants are not generalisable. The participant group also comprised mostly nursing professionals, whose experiences with telehealth may differ from those of other health disciplines. Finally, it is important to acknowledge that the opinions of the health professionals who participated in the study may not represent or align with the experiences and opinions of service users.

Conclusions

This study illustrates that while telehealth has provided increased access to services for many rural communities, others have experienced barriers related to variability in connectivity and managing technology. The results demonstrated that telehealth may not be the preferred or appropriate option for some individuals in rural communities, and it is important to provide choice. Consideration of the context in which telehealth services are being delivered, particularly in rural and remote communities where there are challenges with resourcing and training to support health professionals, is critical to the success of telehealth service provision. Another critical factor is preparation and specific, intentional training for health professionals on how to transition to, manage, and maintain telehealth services effectively. Telehealth interventions require a unique skill set, and guidance on who should be trained, when, and with what content to equip health professionals to deliver telehealth services is still to be determined.

Data availability

The qualitative data collected for this study was de-identified before analysis. Consent was not obtained to use or publish individual level identified data from the participants and hence cannot be shared publicly. The de-identified data can be obtained from the corresponding author on reasonable request.

Commonwealth of Australia. COVID-19 Temporary MBS Telehealth Services. Department of Health and Aged Care, Australian Government; 2022. https://www.mbsonline.gov.au/internet/mbsonline/publishing.nsf/Content/Factsheet-TempBB

Caffery LA, Muurlink OT, Taylor-Robinson AW. Survival of rural telehealth services post‐pandemic in Australia: a call to retain the gains in the ‘new normal’. Aust J Rural Health. 2022;30(4):544–9.


Shaver J. The state of telehealth before and after the COVID-19 pandemic. Prim Care: Clin Office Pract. 2022;49(4):517–30.


Australian Digital Health Agency. What is telehealth? 2024. https://www.digitalhealth.gov.au/initiatives-and-programs/telehealth

Hirko KA, Kerver JM, Ford S, Szafranski C, Beckett J, Kitchen C, et al. Telehealth in response to the COVID-19 pandemic: implications for rural health disparities. J Am Med Inform Assoc. 2020;27(11):1816–8.

Nelson D, Inghels M, Kenny A, Skinner S, McCranor T, Wyatt S, et al. Mental health professionals and telehealth in a rural setting: a cross sectional survey. BMC Health Serv Res. 2023;23(1):200.

Butzner M, Cuffee Y. Telehealth interventions and outcomes across rural communities in the United States: narrative review. J Med Internet Res. 2021;23(8).

Calleja Z, Job J, Jackson C. Offsite primary care providers using telehealth to support a sustainable workforce in rural and remote general practice: a rapid review of the literature. Aust J Rural Health. 2023;31(1):5–18.


Sutarsa IN, Kasim R, Steward B, Bain-Donohue S, Slimings C, Hall Dykgraaf S, et al. Implications of telehealth services for healthcare delivery and access in rural and remote communities: perceptions of patients and general practitioners. Aust J Prim Health. 2022;28(6):522–8.

Bradford NK, Caffery LJ, Smith AC. Telehealth services in rural and remote Australia: a systematic review of models of care and factors influencing success and sustainability. Rural Remote Health. 2016;16(4):1–23.


Caffery LJ, Bradford NK, Wickramasinghe SI, Hayman N, Smith AC. Outcomes of using telehealth for the provision of healthcare to Aboriginal and Torres Strait Islander people: a systematic review. Aust N Z J Public Health. 2017;41(1):48–53.

Warr D, Luscombe G, Couch D. Hype, evidence gaps and digital divides: Telehealth blind spots in rural Australia. Health. 2023;27(4):588–606.

Mathew S, Fitts MS, Liddle Z, Bourke L, Campbell N, Murakami-Gold L, et al. Telehealth in remote Australia: a supplementary tool or an alternative model of care replacing face-to-face consultations? BMC Health Serv Res. 2023;23(1):1–10.

Campbell J, Theodoros D, Hartley N, Russell T, Gillespie N. Implementation factors are neglected in research investigating telehealth delivery of allied health services to rural children: a scoping review. J Telemedicine Telecare. 2020;26(10):590–606.

Commonwealth of Australia. Modified Monash Model. Department of Health and Aged Care; 2021 [updated 14 December 2021]. https://www.health.gov.au/topics/rural-health-workforce/classifications/mmm

Sandelowski M. Whatever happened to qualitative description? Res Nurs Health. 2000;23(4):334–40.

Marshall C, Rossman GB. Designing qualitative research: Sage; 2014.

Caelli K, Ray L, Mill J. Clear as mud’: toward greater clarity in generic qualitative research. Int J Qualitative Methods. 2003;2(2):1–13.

Tolley EE. Qualitative methods in public health: a field guide for applied research. Second edition. ed. San Francisco, CA: Jossey-Bass & Pfeiffer Imprints, Wiley; 2016.

Tong A, Sainsbury P, Craig J. Consolidated criteria for reporting qualitative research (COREQ): a 32-item checklist for interviews and focus groups. Int J Qual Health Care. 2007;19(6):349–57.

Levitt HM, Motulsky SL, Wertz FJ, Morrow SL, Ponterotto JG. Recommendations for designing and reviewing qualitative research in psychology: promoting methodological integrity. Qualitative Psychol. 2017;4(1):2.

Allied Health Professions Australia. Allied health professions 2024 [ https://ahpa.com.au/allied-health-professions/ .

Braun V, Clarke V. To saturate or not to saturate? Questioning data saturation as a useful concept for thematic analysis and sample-size rationales. Qualitative Res Sport Exerc Health. 2021;13(2):201–16.

Ritchie J, Spencer L. In: Alan B, Burgess RG, editors. Qualitative data analysis for applied policy research. Analyzing qualitative data: Routledge; 1994. pp. 173–94.

Tsou C, Robinson S, Boyd J, Jamieson A, Blakeman R, Yeung J, et al. Effectiveness of telehealth in rural and remote emergency departments: systematic review. J Med Internet Res. 2021;23(11):e30632.

Pullyblank K. A scoping literature review of rural beliefs and attitudes toward telehealth utilization. West J Nurs Res. 2023;45(4):375–84.

Jonnagaddala J, Godinho MA, Liaw S-T. From telehealth to virtual primary care in Australia? A Rapid scoping review. Int J Med Informatics. 2021;151:104470.

Jonasdottir SK, Thordardottir I, Jonsdottir T. Health professionals’ perspective towards challenges and opportunities of telehealth service provision: a scoping review. Int J Med Informatics. 2022;167:104862.

Clay-Williams R, Hibbert P, Carrigan A, Roberts N, Austin E, Fajardo Pulido D, et al. The diversity of providers’ and consumers’ views of virtual versus inpatient care provision: a qualitative study. BMC Health Serv Res. 2023;23(1):724.

Uscher-Pines L, Sousa J, Raja P, Mehrotra A, Barnett ML, Huskamp HA. Suddenly becoming a virtual doctor: experiences of psychiatrists transitioning to telemedicine during the COVID-19 pandemic. Psychiatric Serv. 2020;71(11):1143–50.

Henry BW, Ames LJ, Block DE, Vozenilek JA. Experienced practitioners’ views on interpersonal skills in telehealth delivery. Internet J Allied Health Sci Pract. 2018;16(2):2.

McCullough K, Bayes S, Whitehead L, Williams A, Cope V. Nursing in a different world: remote area nursing as a specialist–generalist practice area. Aust J Rural Health. 2022;30(5):570–81.

Reeve C, Johnston K, Young L. Health profession education in remote or geographically isolated settings: a scoping review. J Med Educ Curric Dev. 2020;7:2382120520943595.

PubMed   PubMed Central   Google Scholar  

Cummings E, Merolli M, Schaper L, editors. Barriers to telehealth uptake in rural, regional, remote Australia: what can be done to expand telehealth access in remote areas. Digital Health: Changing the Way Healthcare is Conceptualised and Delivered: Selected Papers from the 27th Australian National Health Informatics Conference (HIC 2019); 2019: IOS Press.

Download references

Acknowledgements

The authors would like to acknowledge Georgina Luscombe, Julian Grant, Claire Seaman, Jennifer Cox, Sarah Redshaw and Jennifer Schwarz, who contributed to various elements of the project.

The study authors are employed by the Three Rivers Department of Rural Health, which is funded by the Australian Government under the Rural Health Multidisciplinary Training (RHMT) Program.

Author information

Authors and affiliations.

Three Rivers Department of Rural Health, Charles Sturt University, Locked Bag 588, Tooma Way, Wagga Wagga, NSW, 2678, Australia

Rebecca Barry, Elyce Green, Kristy Robson & Melissa Nott


Contributions

RB & EG contributed to the conceptualisation of the study and methodological design. RB & MN collected the research data. RB, EG, MN, KR contributed to analysis and interpretation of the research data. RB, EG, MN, KR drafted the manuscript. All authors provided feedback on the manuscript and approved the final submitted manuscript.

Corresponding author

Correspondence to Rebecca Barry .

Ethics declarations

Ethics approval and consent to participate.

Ethics approvals were obtained from the Greater Western NSW Human Research Ethics Committee and Charles Sturt University Human Research Ethics Committee (approval numbers: 2021/ETH00088 and H21215). Informed written consent was obtained from all participants. All methods were carried out in accordance with the relevant guidelines and regulations.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.


Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/ .


About this article

Cite this article.

Barry, R., Green, E., Robson, K. et al. Factors critical for the successful delivery of telehealth to rural populations: a descriptive qualitative study. BMC Health Serv Res 24 , 908 (2024). https://doi.org/10.1186/s12913-024-11233-3


Received : 19 March 2024

Accepted : 23 June 2024

Published : 07 August 2024

DOI : https://doi.org/10.1186/s12913-024-11233-3


Keywords

  • Service provision
  • Rural health
  • Allied health
  • Rural workforce



“Why me?”: Qualitative research on why patients ask, what they mean, how they answer and what factors and processes are involved

Klitzman, Robert

Patients often ask, “why me?” but questions arise regarding what this statement means, how, when and why patients ask, how they answer and why. Interviews were conducted as part of several qualitative research studies exploring how patients view and cope with various conditions, including HIV, cancer, Huntington’s disease and infertility. A secondary qualitative analysis was performed. Many patients ask, “why me?” but this statement emerges as having varying meanings, and entailing complex psychosocial processes. Patients commonly recognize that this question may lack a clear answer and that asking it is irrational, but they ask nonetheless, given the roles of unknown factors and chance in disease causation, psychological stresses of illness and lack of definitive answers. Patients may focus on different aspects of the question – e.g., on possible causes of illness (Why me? – whether God or randomness is involved) and/or on whether they are being singled out and/or punished (Why me vs. someone else?). Patients frequently undergo dynamic processes, confronting this question at various points, and arriving at different answers, looking for explanations that have narrative coherence for them, and make sense to them emotionally. Social contexts can affect these processes, with friends, family, providers or others rejecting or accepting patients’ responses to this question (e.g., beliefs about whether the patient is being punished and/or these questions are worth asking). Anger, depression, despair and/or resistance to notions about the roles of randomness or chaos can also shape these processes. While prior studies have each operationalized “why me?” in differing ways, focusing on varying aspects of it, the concept emerges here as highly multidimensional, involving complex processes and often affected by social contexts. These data, the first to examine key aspects and meanings of the phrase, “why me?” have critical implications for future practice, research and education.

  • Medical ethics
  • Hospital care



Purposeful Sampling for Qualitative Data Collection and Analysis in Mixed Method Implementation Research

  • Original Article
  • Published: 06 November 2013
  • Volume 42 , pages 533–544, ( 2015 )


  • Lawrence A. Palinkas 1 ,
  • Sarah M. Horwitz 2 ,
  • Carla A. Green 3 ,
  • Jennifer P. Wisdom 4 ,
  • Naihua Duan 5 &
  • Kimberly Hoagwood 2  


Purposeful sampling is widely used in qualitative research for the identification and selection of information-rich cases related to the phenomenon of interest. Although there are several different purposeful sampling strategies, criterion sampling appears to be used most commonly in implementation research. However, combining sampling strategies may be more appropriate to the aims of implementation research and more consistent with recent developments in quantitative methods. This paper reviews the principles and practice of purposeful sampling in implementation research, summarizes types and categories of purposeful sampling strategies and provides a set of recommendations for use of single strategy or multistage strategy designs, particularly for state implementation research.



Acknowledgments

This study was funded through a Grant from the National Institute of Mental Health (P30-MH090322: K. Hoagwood, PI).

Author information

Authors and affiliations.

School of Social Work, University of Southern California, 669 W. 34th Street, Los Angeles, CA, 90089-0411, USA

Lawrence A. Palinkas

Department of Child and Adolescent Psychiatry, New York University, New York, NY, USA

Sarah M. Horwitz & Kimberly Hoagwood

Center for Health Research, Kaiser Permanente Northwest, Portland, OR, USA

Carla A. Green

George Washington University, Washington, DC, USA

Jennifer P. Wisdom

Department of Psychiatry and New York State Neuropsychiatric Institute, Columbia University, New York, NY, USA

Naihua Duan


Corresponding author

Correspondence to Lawrence A. Palinkas .


About this article

Palinkas, L.A., Horwitz, S.M., Green, C.A. et al. Purposeful Sampling for Qualitative Data Collection and Analysis in Mixed Method Implementation Research. Adm Policy Ment Health 42 , 533–544 (2015). https://doi.org/10.1007/s10488-013-0528-y


Published : 06 November 2013

Issue Date : September 2015

DOI : https://doi.org/10.1007/s10488-013-0528-y


Keywords

  • Mental health services
  • Children and adolescents
  • Mixed methods
  • Qualitative methods implementation
  • State systems
Open access

Published: 16 August 2024

Examining the perception of undergraduate health professional students of their learning environment, learning experience and professional identity development: a mixed-methods study

  • Banan Mukhalalati 1 ,
  • Aaliah Aly 1 ,
  • Ola Yakti 1 ,
  • Sara Elshami 1 ,
  • Alaa Daud 2 ,
  • Ahmed Awaisu 1 ,
  • Ahsan Sethi 3 ,
  • Alla El-Awaisi 1 ,
  • Derek Stewart 1 ,
  • Marwan Farouk Abu-Hijleh 4 &
  • Zubin Austin 5  

BMC Medical Education volume 24, Article number: 886 (2024)


Background

The quality of the learning environment significantly impacts student engagement and professional identity formation in health professions education. Despite global recognition of its importance, research on student perceptions of learning environments across different health education programs is scarce. This study aimed to explore how health professional students perceive their learning environment and its influence on their professional identity development.

Methods

An explanatory mixed-methods approach was employed. In the quantitative phase, the Dundee Ready Education Environment Measure [Minimum–Maximum possible scores = 0–200] and Macleod Clark Professional Identity Scale [Minimum–Maximum possible scores = 1–45] were administered to Qatar University-Health students (N = 908), with a minimum required sample size of 271 students. Data were analyzed using SPSS, including descriptive statistics and inferential analysis. In the qualitative phase, seven focus groups (FGs) were conducted online via Microsoft Teams. FGs were guided by a topic guide developed from the quantitative results and the framework proposed by Gruppen et al. (Acad Med 94:969-74, 2019), transcribed verbatim, and thematically analyzed using NVIVO®.

Results

The questionnaire response rate was 57.8% (525 responses out of 908), with a usability rate of 74.3% (390 responses out of 525) after excluding students who only completed the demographic section. The study indicated a “more positive than negative” perception of the learning environment (Median [IQR] = 132 [116–174], Minimum–Maximum obtained scores = 43–185), and a “good” perception of their professional identity (Median [IQR] = 24 [22–27], Minimum–Maximum obtained scores = 3–36). Qualitative data confirmed that the learning environment was supportive in developing competence, interpersonal skills, and professional identity, though opinions on emotional support adequacy were mixed. Key attributes of an ideal learning environment included mentorship programs, a reward system, and measures to address fatigue and boredom.

Conclusions

The learning environment at QU-Health was effective in developing competence and interpersonal skills. Students' perceptions of their learning environment positively correlated with their professional identity. Ideal environments should include mentorship programs, a reward system, and strategies to address fatigue and boredom, emphasizing the need for ongoing improvements in learning environments to enhance student satisfaction, professional identity development, and high-quality patient care.


The learning environment is fundamental to higher education and has a profound impact on student outcomes. As conceptualized by Gruppen et al. [ 1 ], it comprises a complex interplay of physical, social, and virtual factors that shape student engagement, perception, and overall development. Over the last decade, there has been a growing global emphasis on the quality of the learning environment in higher education [ 2 , 3 , 4 ]. This focus stems from the recognition that a well-designed learning environment that includes good facilities, effective teaching methods, strong social interactions, and adherence to cultural and administrative standards can greatly improve student development [ 2 , 5 , 6 , 7 ]. Learning environments impact not only knowledge acquisition and skill development but also value formation and the cultivation of professional attitudes [ 5 ].

Professional identity is defined as the “attitudes, values, knowledge, beliefs, and skills shared with others within a professional group” [ 8 ]. Existing research has identified a significant positive association between the development of professional identity and the quality of the learning environment, an association that is multifaceted and dynamic [ 9 ]. According to Hendelman and Byszewski [ 10 ], a supportive learning environment, characterized by positive role models, effective feedback mechanisms, and opportunities for reflective practice, fosters the development of a strong professional identity among medical students. Similarly, Jarvis-Selinger et al. [ 11 ] argue that a nurturing learning environment facilitates the socialization process that enables students to adopt and integrate the professional behaviors and attitudes expected in their field. Furthermore, Sarraf-Yazdi et al. [ 12 ] highlighted that professional identity formation is a continuous and multifactorial process involving the interplay of individual values, beliefs, and environmental factors, shaped by both clinical and non-clinical experiences within the learning environment [ 12 ].

Various learning theories, such as the Communities of Practice (CoP) theory [ 13 ], emphasize the link between learning environments and learning outcomes, including professional identity development. The CoP theory describes communities of professionals with a shared knowledge interest who learn through regular interaction [ 13 , 14 ]. Within the CoP, students transition from being peripheral observers to central members [ 15 ]. Therefore, the CoP theory suggests that a positive learning environment is crucial for fostering learning, professional identity formation, and a sense of community [ 16 ].

Undoubtedly, health professional education programs (e.g., Medicine, Dental Medicine, Pharmacy, and Health Sciences) play a vital role not only in shaping the knowledge, expertise, and abilities of health professional students but also in equipping them with the necessary competencies for implementing healthcare initiatives and strategies and responding to evolving healthcare demands [ 17 ]. Within the field of health professions education, international organizations like the United Nations Educational, Scientific, and Cultural Organization (UNESCO), European Union (EU), American Council on Education (ACE), and World Federation for Medical Education (WFME) have emphasized the importance of high-quality learning environments in fostering the development of future healthcare professionals and called for considerations of the enhancement of the quality of the learning environment of health profession education programs [ 18 , 19 ]. These environments are pivotal for nurturing both the academic and professional growth necessary to navigate an increasingly globalized healthcare landscape [ 18 , 19 ].

Professional identity development is integral to health professions education; it evolves continuously from the early university years through the later stages of professional life as a healthcare practitioner [ 20 , 21 ]. This ongoing development helps students establish clear professional roles and boundaries, thereby reducing role ambiguity within multidisciplinary teams [ 9 ]. It is expected that as students advance in their professional education, their perception of the quality of the learning environment changes, which influences their learning experiences, the development of their professional identity, and their sense of community [ 22 ]. Cruess et al. [ 23 ] asserted that medical schools foster professional identity through impactful learning experiences, effective role models, clear curricula, and assessments. A well-designed learning environment that incorporates these elements supports medical students' socialization and professional identity formation through structured learning, reflective practices, and constructive feedback in both preclinical and clinical stages [ 23 ].

Despite the recognized importance of the quality of learning environments and their influence on student-related outcomes, this topic has been overlooked regionally and globally [ 24 , 25 , 26 , 27 , 28 , 29 , 30 ]. There is a significant knowledge gap in understanding how different components of the learning environment specifically contribute to professional identity formation. Most existing studies focus on general educational outcomes without exploring the detailed ways in which the learning environment shapes professional attitudes, values, and identity. Moreover, there is a global scarcity of research exploring how students’ perceptions of the quality of the learning environment and professional identity vary across various health profession education programs at different stages of their undergraduate education. This lack of comparative studies makes it challenging to identify best practices that can be adapted across different educational contexts. Furthermore, most research tends to focus on single-discipline studies, neglecting the interdisciplinary nature of modern healthcare education, which is essential for preparing students for collaborative practice in real-world healthcare settings. Considering the complex and demanding nature of health profession education programs and the increased emphasis on the quality of learning environments by accreditation bodies, examining the perceived quality of the educational learning environment by students is crucial [ 19 ]. Understanding students’ perspectives can provide valuable insights into areas needing improvement and highlight successful strategies that enhance both learning environment and experiences and professional identity development.

This research addresses this gap by focusing on the interdisciplinary health profession education programs to understand the impact of the learning environment on the development of the professional identity of students and its overall influence on their learning experiences. The objectives of this study are to 1) examine the perception of health professional students of the quality of their learning environment and their professional identity, 2) identify the association between health professional students’ perception of the quality of their learning environment and the development of their professional identity, and 3) explore the expectations of health professional students of the ideal educational learning environment. This research is essential in providing insights to inform educational practices globally to develop strategies to enhance the quality of health profession education.

Study setting and design

This study was conducted at Qatar University Health (QU Health) Cluster which is an interdisciplinary health profession education program that was introduced as the national provider of higher education in health and medicine in the state of Qatar. QU Health incorporates five colleges: Health Sciences (CHS), Pharmacy (CPH), Medicine (CMED), Dental Medicine (CDEM) and Nursing (CNUR) [ 31 ]. QU Health is dedicated to advancing inter-professional education (IPE) through its comprehensive interdisciplinary programs. By integrating IPE principles into the curriculum and fostering collaboration across various healthcare disciplines, the cluster prepares students to become skilled and collaborative professionals. Its holistic approach to teaching, research, and community engagement not only enhances the educational experience but also addresses local and regional healthcare challenges, thereby making a significant contribution to the advancement of population health in Qatar [ 32 ]. This study was conducted from November 2022 to July 2023. An explanatory sequential mixed methods triangulation approach was used for an in-depth exploration and validation of the quantitative results qualitatively [ 33 , 34 ]. Ethical approval for the study was obtained from the Qatar University Institutional Review Board (approval number: QU-IRB 1734-EA/22).

For the quantitative phase, a questionnaire was administered via SurveyMonkey® incorporating two previously validated questionnaires: the Dundee Ready Educational Environment Measure (DREEM), developed by Roff et al. in 1997 [ 35 ], and the Macleod Clark Professional Identity Scale-9 (MCPIS-9), developed by Adam et al. in 2006 [ 8 ]. Integrating DREEM and MCPIS-9 into a single questionnaire was undertaken to facilitate a comprehensive evaluation of two distinct yet complementary dimensions—namely, the educational environment and professional identity—that collectively influence the learning experience and outcomes of students, as no single instrument effectively assesses both aspects simultaneously [ 36 ]. The survey comprised three sections—Section A: sociodemographic characteristics, Section B: the DREEM scoring scale for assessing the quality of the learning environment, and Section C: the MCPIS-9 scoring scale for assessing professional identity. For the qualitative phase, seven focus groups (FGs) were arranged with a sample of QU-Health students. The qualitative and quantitative data obtained were integrated at the interpretation and reporting level using a narrative, contiguous approach [ 37 , 38 ].

Quantitative phase

Population and sampling.

A total population sampling approach was used: all undergraduate QU-Health students who had declared their majors (i.e., the primary field of study chosen during their academic program) at the time of the study in any of the four participating health colleges (N = 908), namely CPH, CMED, CDEM, and CHS (which includes Human Nutrition [Nut], Biomedical Science [Biomed], Public Health [PH], and Physiotherapy [PS]), were invited to participate. Nursing students were excluded because the college had only been established in 2022; its students were still in their general year and had yet to declare their majors at the time of the study. The minimum sample size required for the study was determined to be 271 students, based on a margin of error of 5%, a confidence level of 95%, and a response distribution of 50%.
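
The reported minimum of 271 is consistent with the standard sample-size formula for estimating a proportion with a finite-population correction. The short Python sketch below reproduces the figure under the stated assumptions (N = 908, 5% margin of error, 95% confidence, 50% response distribution); the function name and the use of Cochran's formula are illustrative assumptions, not details reported by the study.

```python
import math

def min_sample_size(population: int, margin: float = 0.05,
                    z: float = 1.96, p: float = 0.5) -> int:
    """Minimum sample size for a proportion, with finite-population correction."""
    n0 = (z ** 2) * p * (1 - p) / margin ** 2   # infinite-population estimate (~384.2)
    n = n0 / (1 + (n0 - 1) / population)        # correct for the finite population
    return math.ceil(n)

print(min_sample_size(908))  # -> 271, matching the reported minimum sample size
```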

Data collection

Data was collected in a cross-sectional design. After obtaining the approval of the head of each department, contact information for eligible students was extracted from the QU-Health student databases for each college, and invitations were sent via email. The distribution of these invitations was done by the administrators of the respective colleges. The invitation included a link to a self-administered questionnaire on SurveyMonkey® (Survey Monkey Inc., San Mateo, California, USA), along with informed consent information. All 908 students were informed about the study’s purpose, data collection process, anonymity and confidentiality assurance, and the voluntary nature of participation. The participants were sent regular reminders to complete the survey to increase the response rate.

A focused literature review identified the DREEM as the most suitable validated tool for this study. The DREEM is considered the gold standard for assessing undergraduate students' perceptions of their learning environment [ 35 ]. Its validity and reliability have been consistently demonstrated across various settings (i.e., clinical and non-clinical) and health professions (e.g., nursing, medicine, dentistry, and pharmacy), in multiple countries worldwide, including the Gulf Cooperation Council countries [ 24 , 35 , 39 , 40 , 41 , 42 ]. The DREEM is a 50-item inventory divided into 5 subscales and developed to measure the academic climate of educational institutions using a five-point Likert scale from 0 “strongly disagree” to 4 “strongly agree”. The total score ranges from 0 to 200, with higher scores reflecting better perceptions of the learning environment [ 35 , 39 , 43 ]. The total score is interpreted as very poor (0–50), plenty of problems (51–100), more positive than negative (101–150), or excellent (151–200); a small worked example of this banding follows the subscale descriptions below.

The five subscales, with their score ranges and interpretation bands, are as follows:

  • Students' Perception of Learning (SPoL), 12 items, scored 0–48: very poor (0–12), teaching is viewed negatively (13–24), a more positive approach (25–36), teaching is highly thought of (37–48).
  • Students' Perception of Teachers (SPoT), 11 items, scored 0–44: abysmal (0–11), in need of some retraining (12–22), moving in the right direction (23–33), model teachers (34–44).
  • Students' Academic Self-Perception (SASP), 8 items, scored 0–32: feelings of total failure (0–8), many negative aspects (9–16), feeling more on the positive side (17–24), confident (25–32).
  • Students' Perception of Atmosphere (SPoA), 12 items, scored 0–48: a terrible environment (0–12), many issues need to be changed (13–24), a more positive atmosphere (25–36), a good feeling overall (37–48).
  • Students' Social Self-Perception (SSSP), 7 items, scored 0–28: miserable (0–7), not a nice place (8–14), not very bad (15–21), very good socially (22–28).
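
As a minimal worked illustration of the total-score banding described above, the Python sketch below maps a DREEM total (0–200, computed after reverse-coding the negative items) onto the four interpretation bands; it is an assumed helper written for illustration, not code from the study.

```python
def interpret_dreem_total(total: int) -> str:
    """Map a DREEM total score (0-200) to its published interpretation band."""
    if not 0 <= total <= 200:
        raise ValueError("DREEM total must lie between 0 and 200")
    if total <= 50:
        return "very poor"
    if total <= 100:
        return "plenty of problems"
    if total <= 150:
        return "more positive than negative"
    return "excellent"

print(interpret_dreem_total(132))  # cohort median -> "more positive than negative"
```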

Several tools have been developed to explore professional identity in health professions [ 44 ], but there is limited research on their psychometric qualities [ 45 ]. The MCPIS-9 is notable for its robust psychometric validation and was chosen for this study due to its effectiveness in a multidisciplinary context as opposed to other questionnaires that were initially developed for the nursing profession [ 8 , 46 , 47 ]. MCPIS-9 is a validated 9-item instrument, which uses a 5-point Likert response scale, with scores ranging from 1 “strongly disagree” to 5 “strongly agree”. Previous studies that utilized the MCPIS-9 had no universal guidance for interpreting the MCPIS-9 score; however, the higher the score, the stronger the sense of professional identity [ 46 , 48 ].

Data analysis

The quantitative data were analyzed using SPSS software (IBM SPSS Statistics for Windows, version 27.0; IBM Corp., Armonk, NY, USA). The original developers of the DREEM inventory identified nine negatively worded items (items 11, 12, 19, 20, 21, 23, 42, 43, and 46), which were reverse-coded before analysis. Similarly, the original developers of the MCPIS-9 tool identified three negatively worded items (items 3, 4, and 5), which were also reverse-coded. Descriptive and inferential analyses were conducted. Descriptive statistics, including frequencies (%), mean ± SD, and median (IQR), were used to summarize the demographics and the responses to the DREEM and MCPIS-9 scoring scales. In the inferential analysis, Kruskal–Wallis tests were used to test for significant differences in DREEM and MCPIS-9 scores between demographic subgroups with more than two categories, and Mann–Whitney U-tests were used for variables with two categories. Spearman's rank correlation analysis was used to investigate the association between the perceived learning environment and professional identity development. The level of statistical significance was set a priori at p < 0.05. The internal consistency of the DREEM and MCPIS-9 tools was tested against the accepted Cronbach's alpha threshold of 0.7.
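
The scoring and reliability steps described above can be outlined in a few lines of Python. The sketch below assumes respondent-by-item tables named dreem_df (columns q1–q50) and mcpis_df (columns q1–q9), both hypothetical; it reverse-codes the negatively worded items, computes Cronbach's alpha from item and total-score variances, and correlates the two instrument totals with Spearman's rho. It mirrors the reported analysis only in outline and is not the authors' code.

```python
import pandas as pd
from scipy.stats import spearmanr  # scipy.stats also provides kruskal and mannwhitneyu for the subgroup tests

NEGATIVE_DREEM_ITEMS = [11, 12, 19, 20, 21, 23, 42, 43, 46]  # as listed in the text

def reverse_code(df: pd.DataFrame, items, scale_min: int = 0, scale_max: int = 4) -> pd.DataFrame:
    """Reverse-code negatively worded Likert items: x -> (scale_min + scale_max) - x."""
    out = df.copy()
    for i in items:
        out[f"q{i}"] = scale_min + scale_max - out[f"q{i}"]  # hypothetical column naming
    return out

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Hypothetical usage, given dreem_df and mcpis_df loaded from the survey export:
# dreem_df = reverse_code(dreem_df, NEGATIVE_DREEM_ITEMS)                    # DREEM items scored 0-4
# mcpis_df = reverse_code(mcpis_df, [3, 4, 5], scale_min=1, scale_max=5)     # MCPIS-9 items scored 1-5
# print(cronbach_alpha(dreem_df))                                            # reported total-scale alpha: 0.94
# rho, p = spearmanr(dreem_df.sum(axis=1), mcpis_df.sum(axis=1))             # reported rho = 0.442, p < 0.001
```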

Qualitative phase

A purposive sampling approach was employed to select students who were most likely to provide valuable insights into the topic. The inclusion criteria required that participants had declared their major in one of the following programs: CPH, CMED, CDEM, or CHS (Nut, Biomed, PS, or PH). This criterion aimed to ensure that participants had sufficient knowledge and experience related to their chosen fields of study within QU-Health. Students were included if they were available and willing to share their experiences and thoughts; students who did not meet these criteria were excluded from participation. To ensure a representative sample, seven FGs were conducted, one for each health professional education program. After obtaining the approval of the head of each department, participants were recruited by contacting the class representative of each professional year to ask for volunteers. Each FG involved students from different professional years to ensure a diverse representation of experiences and perspectives.

The topic guide (Supplementary Material 1) was developed and conceptualized based on the research objectives, selected results from the quantitative phase, and the Gruppen et al. framework [ 1 ]. FGs were conducted online using Microsoft Teams® through synchronous meetings. Before initiating the FGs, participants were informed of their rights and returned signed consent forms to the researchers. FGs were facilitated by two research assistants (AA and OY), each facilitating separate sessions. The facilitators, who had prior experience conducting FGs and were former pharmacy students from the CPH, were familiar with some of the participants and were therefore able to encourage open discussion, making it easier for students to share their perceptions of the learning environment within the QU Health Cluster. Participants engaged in concurrent discussions were encouraged to use the "raise hand" feature on Microsoft Teams to mimic face-to-face interactions. Each FG lasted 45–60 min, was conducted in English, and was recorded, transcribed verbatim, and double-checked for accuracy. After the seventh FG, the researchers were confident that a saturation point had been reached, with no new ideas emerging, and that further data collection through FGs was unnecessary. Peer and supervisory audits were conducted throughout the research process.

The NVIVO ® software (version 12) was utilized to perform a thematic analysis incorporating both deductive and inductive approaches. The deductive approach involved organizing the data into pre-determined categories based on the Gruppen et al. framework, which outlines key components of the learning environment. This framework enabled a systematic analysis of how each component of the learning environment contributes to students' professional development and highlighted areas for potential improvement. Concurrently, the inductive approach was applied to explore students' perceptions of an ideal learning environment, facilitating the emergence of new themes and insights directly from the data, independent of pre-existing categories. This dual approach provided a comprehensive understanding of the data by validating the existing theory while also exploring new findings [ 49 ]. Two coders were involved in coding the transcripts (AA and BM) and in cases of disagreements between researchers, consensus was achieved through discussion.

The response rate was 57.8% (525 responses out of 908), while the usability rate was 74.3% (390 responses out of 525) after excluding students who only completed the demographic section. The demographic and professional characteristics of the participants are presented in Table 1. Most participants were female (85.1% [n = 332]) and aged 21–23 years (51.7% [n = 201]), and the largest nationality group was Qatari (37.0% [n = 142]). The largest proportions of students were studying at the CHS (36.9% [n = 144]) and in their second professional year (37.4% [n = 146]), and most had yet to be exposed to experiential learning, that is, clinical rotations (70.2% [n = 273]).

Perceptions of students of their learning environment

The overall median DREEM score for study participants indicated that QU Health students perceive their learning environment to be "more positive than negative" (132 [IQR = 116–174]). The reliability analysis for this sample of participants indicated a Cronbach's alpha for the total DREEM score of 0.94, and Cronbach's alpha scores for each domain of the DREEM tool, SPoL, SPoT, SASP, SPoA, and SSSP of 0.85, 0.74, 0.81, 0.85, and 0.65, respectively.

Individual item responses representing each domain of the DREEM tool are presented in Table 2. For Domain I, QU Health students perceived the teaching approach to be "more positive" (32 [IQR = 27–36]). Numerous participants agreed that the teaching was well focused (70.7% [n = 274]), student-focused (66.1% [n = 254]), and aimed to develop students' competencies (72.0% [n = 278]). The analysis of students' perceptions related to Domain II revealed that faculty members were perceived to be "moving in the right direction" (30 [IQR = 26–34]). Most students agreed that faculty members were knowledgeable (90.7% [n = 345]) and provided students with clear examples and constructive feedback (77.6% [n = 294] and 63.8% [n = 224], respectively). Furthermore, the analysis of Domain III demonstrated that QU Health students had a "positive academic self-perception" (22 [IQR = 19–25]). In this regard, most students believed that they were developing their problem-solving skills (78% [n = 292]) and that what they learned was relevant to their professional careers (76% [n = 288]). Furthermore, approximately 80% (n = 306) of students agreed that they had learned empathy in their profession. For Domain IV, students perceived the atmosphere of their learning environment to be "more positive" (32 [IQR = 27–36]). A substantial number of students asserted that there were opportunities for them to develop interpersonal skills (77.7% [n = 293]) and that the atmosphere motivated them as learners (63.0% [n = 235]). Approximately one-third of students believed that the enjoyment did not outweigh the stress of studying (32.3% [n = 174]). Finally, analysis of Domain V indicated that students' social self-perception was "not very bad" (17 [IQR = 14–19]). Most students agreed that they had good friends at their colleges (83% [n = 314]) and that their social lives were good (68% [n = 254]).

Table 3 illustrates the differences in students' perceptions of their overall learning environment according to their demographic and professional characteristics. No significant differences in the perception of the learning environment were noted among the demographic and professional subgroups, except by the health profession program in which students were enrolled (p-value < 0.001), whether they had relatives who were studying or had studied the same profession (p-value < 0.002), and whether they had started their experiential learning (p-value = 0.043). Further analyses comparing the DREEM subscale scores according to demographic and professional characteristics are presented in Supplementary Material 1.

Students’ perceptions of their professional identities

The students provided positive responses relating to their perceptions of their professional identity (24 [IQR = 22–27]). The reliability analysis of this sample indicated a Cronbach's alpha of 0.605. The individual item responses for the MCPIS-9 tool are presented in Table 2. Most students (85% [n = 297]) expressed pleasant feelings about belonging to their own profession, and 81% (n = 280) identified positively with members of their profession. No significant differences were noted in students' perceptions of their professional identity across the selected demographic subgroups, except for whether they had relatives who had studied or were studying the same profession (p-value = 0.027). Students with relatives who were studying or had studied the same profession tended to perceive their professional identity more positively (25 [IQR = 22–27] versus 24 [IQR = 21–26], respectively) (Table 3).

Association between MCPIS-9 and DREEM

Spearman's rank correlation between the DREEM and MCPIS-9 total scores indicated a moderate positive correlation between students' perceptions of their learning environment and their professional identity development (r = 0.442, p-value < 0.001). The DREEM questionnaire, with its 50 items divided into five subscales, comprehensively assessed various dimensions of the learning environment. Each subscale evaluated a distinct aspect of the educational experience, such as the effectiveness of teaching, teacher behavior and attitudes, academic confidence, the overall learning atmosphere, and social integration. The MCPIS-9 questionnaire specifically assessed professional identity through nine items that measure attitudes, values, and self-perceived competence in the professional domain. The positive correlation between the DREEM and MCPIS-9 scores indicates that the more positively students perceive their learning environment, the stronger their sense of professional identity.

Thirty-seven students from the QU Health colleges were interviewed: eleven from CPH, eight from CMED, four from CDEM, and fourteen from CHS (six from Nut, three from PS, three from Biomed, and three from PH). Four conventional themes were generated deductively using Gruppen et al.’s conceptual framework, while one theme was derived through inductive analysis. The themes and sub-themes generated are demonstrated in Table  4 .

Theme 1. The personal component of the learning environment

This theme focused on student interactions and experiences within their learning environment and their impact on perceptions of learning, processes, growth, and professional development.

Sub-theme 1.1. Experiences influencing professional identity formation

Students classified their experiences into positive and negative. Positive experiences included hands-on activities such as on-campus practical courses and pre-clinical activities, which built their confidence and professional identity. In this regard, one student mentioned:

“Practical courses are one of the most important courses to help us develop into pharmacists. They make you feel confident in your knowledge and more willing to share what you know.” [CPH-5]

Many students claimed that interprofessional education (IPE) activities enhanced their self-perception, clarified their roles, and boosted their professional identity and confidence. An interviewee stated:

"I believe that the IPE activity,…., is an opportunity for us to explore our role. It has made me know where my profession stands in the health sector and how we all depend on each other through interprofessional thinking and discussions." [CHS-Nut-32]

However, several participants reported that an extensive workload hindered their professional identity development. A participant stated:

“The excessive workload prevents us from joining activities that would contribute to our professional identity development. Also, it restricts our networking opportunities and makes us always feel burnt out.” [CHS-Nut-31]

Sub-theme 1.2. Strategies used by students to pursue their goals

QU Health students employed various academic and non-academic strategies to achieve their objectives, with many emphasizing list-making and identifying effective study methods as key approaches:

“Documentation. I like to see tasks that I need to do on paper. Also, I like to classify my tasks based on their urgency. I mean, deadlines.” [CHS-Nut-31]
“I always try to be as efficient as possible when studying and this can be by knowing what studying method best suits me.” [CHS-Biomed-35]

Nearly all students agreed that seeking feedback from faculty was crucial for improving their work and performance. In this context, a student said:

“We must take advantage of the provided opportunity to discuss our assignments, projects, and exams, like what we did correctly, and what we did wrongly. They always discuss with us how to improve our work on these things.” [CHS-Nut-32]

Moreover, many students also believed that developing communication skills was vital for achieving their goals, given their future roles in interprofessional teams. A student mentioned:

“Improving your communication skills is a must because inshallah (with God’s will) in the future we will not only work with biomedical scientists, but also with nurses, pharmacists, and doctors. So, you must have good communication abilities.” [CHS-Biomed-34]

Finally, students believe that networking is crucial for achieving their goals because it opens new opportunities for them as stated by a student:

“Networking with different physicians or professors can help you to know about research or training opportunities that you could potentially join.” [CMED-15]

Subtheme 1.3. Students’ mental and physical well-being

Students agreed that while emotional well-being is crucial for good learning experiences and professional identity development, colleges offered insufficient support. An interviewee stated:

“We simply don't have the optimal support we need to take care of our emotional well-being as of now, despite how important it is and how it truly reflects on our learning and professional development” [CDEM-20]

Another student added:

“…being in an optimal mental state provides us with the opportunity to acquire all required skills that would aid in our professional identity development. I mean, interpersonal skills, adaptability, self-reflection” [CPH-9]

Students mentioned some emotional support provided by colleges, such as progress tracking and stress-relief activities. Students said:

“During P2 [professional year 2], I missed a quiz, and I was late for several lectures. Our learning support specialist contacted me … She was like, are you doing fine? I explained everything to her, and she contacted the professors for their consideration and support.” [CPH-7]
“There are important events that are done to make students take a break and recharge, but they are not consistent” [CHS-PS-27]

On the physical well-being front, students felt that their colleges ensured safety, especially in lab settings, with proper protocols to avoid harm. A student mentioned:

“The professors and staff duly ensure our safety, especially during lab work. They make sure that we don't go near any harmful substances and that we abide by the lab safety rules” [CHS-Biomed-35]

Theme 2. Social component of the learning environment

This theme focused on how social interactions shape students’ perceptions of learning environments and learning experiences.

Sub-theme 2.1. Opportunities for community engagement

Participants identified various opportunities for social interactions through curricular and extracurricular activities. Project-based learning (PBL) helped them build connections, improve teamwork, and enhance critical thinking and responsibility, as one student stated:

“I believe that having PBL as a big part of our learning process improves our teamwork and interpersonal skills and makes us take responsibility in learning, thinking critically, and going beyond what we would have received in class to prepare very well and deep into the topic.” [CMED-12]

Extracurricular activities, including campaigns and events, helped students expand their social relationships and manage emotional stress. A student stated:

“I think that the extracurricular activities that we do, like the campaigns or other things that we hold in the college with other students from other colleges, have been helpful for me in developing my personality and widening my social circle. Also, it dilutes the emotional stress we are experiencing in class” [CDEM-22]

Sub-theme 2.2. Opportunities for learner-to-patient interactions

Students noted several approaches their colleges used to enhance patient-centered education and prepare them for real-world patient interactions. These approaches include communication skills classes, simulated patient scenarios, and field trips. Students mentioned:

“We took a class called Foundation of Health, which mainly focused on how to communicate our message to patients to ensure that they were getting optimal care. This course made us appreciate the term ‘patient care’ more.” [CHS-PH-38]
“We began to appreciate patient care when we started to take a professional skills course that entailed the implementation of a simulated patient scenario. We started to realize that communication with patients didn’t go as smoothly as when we did it with a colleague in the classroom.” [CPH-1]
“We went on a field trip to ‘Shafallah Center for Persons with Disability’ and that helped us to realize that there were a variety of patients that we had to care for, and we should be physically and mentally prepared to meet their needs.” [CDEM-21]

Theme 3. Organizational component of the learning environment

This theme explored students' perceptions of how the college administration, policies, culture, coordination, and curriculum design impact their learning experiences.

Sub-theme 3.1. Curriculum and study plan

Students valued clinical placements for their role in preparing them for the workplace and developing professional identity. A student stated:

“Clinical placements are very crucial for our professional identity development; we get the opportunity to be familiarized with and prepared for the work environment.” [CHS-PS-27]

However, students criticized their curriculum for not equipping them with adequate knowledge and skills. For example, a student said:

“… Not having a well-designed curriculum is of concern. We started very late in studying dentistry stuff and that led to us cramming all the necessary information that we should have learned.” [CDEM-20]

Furthermore, students reported that demanding schedules and limited course availability hindered learning and delayed progress:

“Last semester, I had classes from Sunday to Thursday from 8:00 AM till 3:00 PM in the same classroom, back-to-back, without any break. I was unable to focus in the second half of the day.” [CHS-Nut-38]
“Some courses are only offered once a year, and they are sometimes prerequisites for other courses. This can delay our clinical internship or graduation by one year.” [CHS-Biomed-36]

Additionally, the outdated curriculum was seen as misaligned with advancements in artificial intelligence (AI). One student stated:

“… What we learn in our labs is old-fashioned techniques, while Hamad Medical Corporation (HMC) is following a new protocol that uses automation and AI. So, I believe that we need to get on track with HMC as most of us will be working there after graduation.” [CHS-Biomed-35]

Sub-theme 3.2. Organizational climate and policies

Students generally appreciated the positive university climate and effective communication with the college administration, which they felt improved course quality:

“Faculty members and the college administration usually listen to our comments about courses or anything that we want to improve, and by providing a course evaluation at the end of the semester, things get better eventually.” [CPH-2]

Students also valued faculty flexibility in scheduling exams and assignments, and praised the new makeup exam policy for enhancing their focus on learning:

“Faculty members are very lenient with us. If we want to change the date of the exam or the deadline for any assignment, they agree if everyone in the class agrees. They prioritize the quality of our work over just getting an assignment done.” [CHS-PS-37]
“I am happy with the introduction of makeup exams. Now, we are not afraid of failing and losing a whole year because of a course. I believe that this will help us to focus on topics, not just cramming the knowledge to pass.” [CPH-9]

However, students expressed concerns about the lack of communication between colleges and clinical placements and criticized the lengthy approval process for extracurricular activities:

“There is a contract between QU and HMC, but the lack of communication between them puts students in a grey area. I wish there would be better communication between them.” [CMED-15]
“To get a club approved by QU, you must go through various barriers, and it doesn't work every time. A lot of times you won't get approved.” [CMED-14]

Theme 4. Materialistic component of the learning environment

This theme discussed how physical and virtual learning spaces affect students' learning experiences and professional identity.

Sub-theme 4.1. The physical space for learning

Students explained that the interior design of buildings and the fully equipped laboratory facilities in their programs enhanced focus and learning:

“The design has a calming effect, all walls are simple and isolate the noise, the classrooms are big with big windows, so that the sunlight enters easily, and we can see the green grass. This is very important for focusing and optimal learning outcomes.” [CPH-5]
“In our labs, we have beds and all the required machines for physiotherapy exercises and practical training, and we can practice with each other freely.” [CHS-PS-27]

Students from different colleges emphasized the need for dedicated lecture rooms for each batch and highlighted the importance of on-site cafeterias to avoid disruptions during the day:

“We don't have lecture rooms devoted to each batch. Sometimes we don't even find a room to attend lectures and we end up taking the lectures in the lab, which makes it hard for us to focus and study later.” [CDEM-23]
“Not having a cafeteria in this building is a negative point. Sometimes we miss the next lecture or part of it if we go to another building to buy breakfast.” [CHS-Nut-29]

Sub-theme 4.2. The virtual space for online learning

Students appreciated the university library's extensive online resources and free access to platforms like Microsoft Teams and Webex for efficient learning and meetings. They valued recorded lectures for flexible study and appreciated virtual webinars and workshops for global connectivity.

“QU Library provides us with a great diversity and a good number of resources, like journals or books, as well as access medicine, massive open online courses, and other platforms that are very useful for studying.” [CMED-16]
“Having your lectures recorded through virtual platforms made it easier to take notes efficiently and to study at my own pace.” [CHS-PS-38]
"I hold a genuine appreciation for the provided opportunities to register in online conferences. I remember during the COVID-19 pandemic, I got the chance to attend an online workshop. This experience allowed me to connect with so many people from around the world." [CMED-15]

Theme 5. Characteristics of an ideal learning environment

This theme explored students’ perceptions of an ideal learning environment and its impact on their professional development and identity.

Sub-theme 5.1. Active learning and professional development supporting environment

Students highlighted that an ideal learning environment should incorporate active learning methods and a supportive atmosphere. They suggested the use of simulated patients in case-based learning and of game-based learning platforms:

“I think if we have, like in ITQAN [a Clinical Simulation and Innovation Center located on the Hamad Bin Khalifa Medical City (HBKMC) campus of Hamad Medical Corporation (HMC)], simulated patients, I think that will be perfect like in an “Integrated Case-Based Learning” case or professional skills or patient assessment labs where we can go and intervene with simulated patients and see what happens as a consequence. This will facilitate our learning.” [CPH-4]
“I feel that ‘Kahoot’ activities add a lot to the session. We get motivated and excited to solve questions and win. We keep laughing, and I honestly feel that the answers to these questions get stuck in my head.” [CHS-PH-38]

Students also emphasized the need for more research opportunities, better career planning, and equity in the provision of resources and opportunities:

“Students should be provided with more opportunities to do research, publish, and practice.” [CMED-16]
“We need better career planning and workshops or advice regarding what we do after graduation or what opportunities we have.” [CHS-PS-25]
“I think that opportunities are disproportionate, and this is not ideal. I believe all students should have the same access to opportunities like having the chance to participate in conferences and receiving research opportunities, especially if one fulfills the requirements.” [CHS-Biomed-35]

Furthermore, the students proposed the implementation of mentorship programs and a reward system to enable a better learning experience:

“Something that could enable our personal development is a mentorship program, which our college started to implement this year, and I hope they continue to because it’s an attribute of an ideal learning environment.” [CPH-11]
“There has to be some form of reward or acknowledgments to students, especially those who, for example, have papers published or belong to leading clubs, not just those who are, for example, on a dean’s list because education is much more than just academics.” [CHS-PS-26]

Sub-theme 5.2. Supportive physical environment

Participants emphasized that the physical environment of the college significantly influences their learning attitudes. A student said:

“The first thing that we encounter when we arrive at the university is the campus. I mean, our early thoughts toward our learning environment are formed before we even know anything about our faculty members or the provided facilities. So, ideally, it starts here.” [CPH-10]

Therefore, students identified key characteristics of an optimal physical environment: a walkable campus, designated study and social areas, and accessible food and coffee.

“I think that learning in what they refer to as a walkable campus, which entails having the colleges and facilities within walking distance from each other, without restrictions of high temperature and slow transportation, is ideal.” [CPH-8]
“The classrooms and library should be conducive to studying and focusing, and there should also be other places where one can actually socialize and sit with one’s friends.” [CDEM-22]
“It is really important to have a food court or café in each building, as our schedules are already packed, and we have no time to go get anything for nearby buildings.” [CHS-Biomed-34]

Data integration

Table 5 presents the integration of data from the quantitative and qualitative phases, demonstrating how the quantitative findings informed the selection of themes in the qualitative phase and were complemented by the qualitative analysis. The integration of quantitative and qualitative data revealed both convergences and divergences in students' views of their learning environment. Both data sources consistently indicated that the learning environment supported the development of interpersonal skills, fostered strong relationships with faculty, and promoted an active, student-centered learning approach. This environment was credited with enhancing critical thinking, independence, and responsibility, as well as boosting students' confidence and competence through clear role definitions and constructive faculty feedback.

However, discrepancies emerged between the two phases. Quantitative data suggested general satisfaction with timetables and support systems, while qualitative data uncovered significant dissatisfaction. Although quantitative results indicated that students felt well-prepared and able to memorize necessary material, qualitative findings revealed challenges with concentration and focus. Furthermore, while quantitative data showed contentment with institutional support, qualitative responses pointed to shortcomings in emotional and physical support.

Discussion

This study examined the perceptions of QU Health students regarding the quality of their learning environment and the characteristics of an ideal learning environment. Moreover, this study offered insights into the development of professional identity, emphasizing the multifaceted nature of learning environments and their substantial impact on professional identity formation.

Perceptions of the learning environment

The findings revealed predominantly positive perceptions among students regarding the quality of the overall learning environment at QU Health and generally favorable perceptions across all five DREEM subscales, which is consistent with international studies using the DREEM tool [ 43 , 50 , 51 , 52 , 53 , 54 ]. Specifically, participants engaged in experiential learning expressed heightened satisfaction, which aligns with existing research indicating that practical educational approaches enhance student engagement and satisfaction [ 55 , 56 ]. Additionally, despite limited literature on this point, students without relatives in the same profession perceived their learning environment more positively, possibly because they held fewer preconceived expectations. A 2023 systematic review highlighted how students’ expectations influence their satisfaction and academic achievement [ 57 ]. However, specific concerns arose regarding the learning environment, including an overemphasis on factual learning in teaching, student fatigue, and occasional boredom. These issues were closely linked to the overwhelming workload and conventional teaching methods, as identified in the qualitative phase.

Association between learning environment and professional identity

This study uniquely integrated perceptions of the learning environment with insights into professional identity formation in the context of healthcare education, a relatively underexplored area in quantitative studies [ 44 , 58 , 59 , 60 ]. The study demonstrated a positive correlation between students' perceptions of the learning environment (DREEM) and their professional identity development (MCPIS-9), suggesting that a more positive learning environment is associated with enhanced professional identity formation. For example, a supportive and comfortable learning atmosphere (i.e., high SPoA scores) can enhance students' confidence and professional self-perception (i.e., high MCPIS-9 scores). The relationship between these questionnaires is fundamental to this study. The DREEM subscales, particularly Perception of Learning (SPoL) and Academic Self-Perception (SASP), relate to how the learning environment supports or hinders the development of a professional identity, as measured by the MCPIS-9. Furthermore, the Perception of Teachers (SPoT) subscale examines how teacher behaviors and attitudes affect students, which can influence their professional identity development. The Perception of Atmosphere (SPoA) and Social Self-Perception (SSSP) subscales evaluate the broader environment and social interactions, which are crucial for professional identity formation because they foster a sense of community and belonging.
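
The article reports this association as a correlation between per-student DREEM and MCPIS-9 totals but does not reproduce the computation. As a minimal illustrative sketch only (the scores and variable names below are invented, and the excerpt does not specify which correlation test the authors used), a rank-based correlation of this kind could be computed as follows:

```python
# Illustrative sketch only: hypothetical per-student total scores, not study data.
from scipy.stats import spearmanr

dreem_totals = [118, 132, 141, 109, 150, 127, 138, 121]   # total DREEM score per student
mcpis9_totals = [22, 26, 30, 20, 34, 25, 29, 23]          # total MCPIS-9 score per student

# Spearman's rank correlation is one plausible choice for ordinal questionnaire totals.
rho, p_value = spearmanr(dreem_totals, mcpis9_totals)
print(f"Spearman's rho = {rho:.2f} (p = {p_value:.3f})")
```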

Employing a mixed methods approach and analyzing both the questionnaires and the FGs through the framework outlined by Gruppen et al. highlighted key aspects across four dimensions of the learning environment: personal development, the social dimension, the organizational setting, and the materialistic dimension [ 1 ].

First, the study underscored the significance of both personal development and constructive feedback. IPE activities emerged as a key factor promoting professional identity by cultivating collaboration and role identification, which is consistent with Bendowska and Baum's findings [ 61 ]. Similarly, the positive impact of constructive faculty feedback on student learning outcomes aligned with the work of Gan et al., which revealed that feedback from faculty members positively influences course satisfaction and knowledge retention, and that this is usually reflected in course results [ 62 ]. Importantly, the research also emphasized the need for workload management strategies to mitigate negative impacts on student well-being, a crucial factor for academic performance and professional identity development [ 63 , 64 ]. The inclusion of community events and support services could play a significant role in fostering student well-being and reducing stress, as suggested by Hoferichter et al. [ 65 ].

Second, the study further highlighted the importance of the social dimension of the learning environment. Extracurricular activities were identified as opportunities to develop the interpersonal skills essential for professional identity, mirroring the conclusions of Achar Fujii et al., who argued that extracurricular activities develop fundamental skills and attitudes, such as leadership, commitment, and responsibility, that build and refine professional identity and facilitate the learning process [ 66 ]. Furthermore, Magpantay-Monroe et al. concluded that community and social engagement led to professional identity development in nursing students by expanding their knowledge and their communication with other nursing professionals [ 67 ]. PBL activities were another key element that promoted critical thinking, learning, and, ultimately, professional identity development in this study, similar to what was reported by Zhou et al. and Du et al. [ 68 , 69 ].

Third, the organizational setting, particularly the curriculum and clinical experiences, emerged as a crucial factor. Clinical placements and field trips were found to be instrumental in cultivating empathy and professional identity [ 70 , 71 ]. However, maintaining an up-to-date curriculum that reflects advancements in AI in healthcare education is equally important, as highlighted by Randhawa and Jackson in 2019 [ 72 ].

Finally, the study underlined the role of the materialistic dimension of the learning environment. Physical learning environments with natural light and managed noise levels were found to contribute to improved academic performance [ 73 , 74 ]. Additionally, the value of online educational resources, such as online library resources and massive open online courses, as tools that facilitate learning by providing easy access to materials was emphasized, which is consistent with the observations of Haleem et al. [ 75 ].

Collectively, these factors contribute to shaping students' professional identities by helping them appreciate their roles, develop confidence, and understand the interdependence of different health professions. These findings indicate that a supportive and engaging learning environment is crucial for fostering a strong sense of professional identity. Incorporating these student-informed strategies can assist educational institutions in cultivating well-rounded healthcare professionals equipped with the knowledge, skills, and emotional resilience needed to thrive in the dynamic healthcare landscape. Compared to existing quantitative data, this study reported a lower median MCPIS-9 score of 24.0, in contrast to previously reported scores of 39.0, 38.0, and 38.0, respectively [ 76 , 77 , 78 ]. This discrepancy may reflect the fact that the participants were in their second professional year, a stage known for weaker identity development [ 79 ]. Students with relatives in the same profession perceived their identity more positively, which is likely due to role model influences [ 22 ].

Expectations of the ideal educational learning environment

This study also sought to identify the key attributes of an ideal learning environment from the perspective of students at QU Health. The findings revealed a strong emphasis on active learning strategies, aligning with Kolb's experiential learning theory [ 80 ]. This preference suggests a desire to move beyond traditional lecture formats and engage in activities that promote experimentation and reflection, potentially mitigating issues of student boredom. Furthermore, students valued the implementation of simple reward systems such as public recognition, mirroring the positive impact such practices have on academic achievement, as reported by Dannan in 2020 [ 81 ]. The perceived importance of mentorship programs resonates with the work of Guhan et al., who demonstrated improved academic performance, particularly for struggling students [ 82 ]. Finally, the study highlighted the significance of a walkable campus with accessible facilities. This aligns with Rohana et al., who argued that readily available and usable facilities contribute to effective teaching and learning processes, ultimately resulting in improved student outcomes [ 83 ]. By understanding these student perceptions, health professions education programs can inform strategic planning for curricular and extracurricular modifications alongside infrastructural development.

The complementary nature of qualitative and quantitative methods in understanding student experiences

This study underscored the benefits of employing mixed methods to comprehensively explore the interplay between the learning environment and professional identity formation as complex phenomena. The qualitative component provided nuanced insights that complemented the baseline data provided by DREEM and MCPIS-9 questionnaires. While DREEM scores generally indicated positive perceptions, qualitative findings highlighted the significant impact of experiential learning on students' perceptions of the learning environment and professional identity development. Conversely, discrepancies emerged between questionnaire responses and FG interviews, revealing deeper issues such as fatigue and boredom associated with traditional teaching methods and heavy workloads, potentially influenced by cultural factors. In FGs, students revealed cultural pressures to conform and stigma against expressing dissatisfaction, which questionnaire responses may not capture. Qualitative data allowed students to openly discuss culturally sensitive issues, indicating that interviews complement surveys by revealing insights overlooked in quantitative assessments alone. These insights can inform the design of learning environments that support holistic student development. The study also suggested that cultural factors can influence student perceptions and should be considered in educational research and practice.

Application of findings

The findings from this study can be directly applied to inform and enhance educational practices, as well as to influence policy and practice sectors. Educational institutions should prioritize integrating active learning strategies and mentorship programs to combat issues such as student fatigue and boredom. Furthermore, practical opportunities, including experiential learning and IPE activities, should be emphasized to strengthen professional identity and engagement. To address these challenges comprehensively, policymakers should consider developing policies that support effective workload management and community support services, which are essential for improving student well-being and academic performance. Collaboration between educational institutions and practice sectors can greatly improve students' satisfaction with their learning environment and experience. This partnership enhances the relevance and engagement of their education, leading to a stronger professional identity and better preparation for successful careers.

Limitations

As with all research, this study has several limitations. For instance, there was a higher percentage of female participants compared to males; however, this reflects the demographic composition of the QU Health population, which is majority female. Furthermore, the CHS, one of the participating colleges in this study, enrolls only female students. Another limitation is the potentially underpowered statistical comparisons among the sociodemographic characteristics in relation to the total DREEM and MCPIS-9 scores. Thus, the findings of this study should be interpreted with caution.

Conclusion

The findings of this study reveal that QU Health students generally hold a positive view of their learning environment and professional identity, with a significant positive correlation between students’ perceptions of their learning environment and their professional identity. Specifically, students who engaged in experiential learning or enrolled in practical programs rated their learning environment more favorably, and those with relatives in the same profession had a more positive view of their professional identity. The participants of this study also identified several key attributes that contribute to a positive learning environment, including active learning approaches and mentorship programs. Furthermore, addressing issues such as fatigue and boredom is crucial for enhancing student satisfaction and professional development.

To build on these findings, future research should focus on longitudinal studies that monitor changes in students' perceptions over time and identify the long-term impact of implementing the proposed attributes of an ideal learning environment on students' learning and professional identity development. Additionally, exploring the intricate dynamics of learning environments and their impact on professional identity can allow educators to better support students in their professional journey. Future research should also continue to explore these relationships, particularly in diverse cultural settings, in order to develop more inclusive and effective educational strategies. This approach will help ensure that health professional students are well prepared to meet the demands of their profession and provide high-quality care to their patients.

Availability of data and materials

The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.

Abbreviations

United Nations Educational, Scientific, and Cultural Organization

European Union

American Council on Education

World Federation for Medical Education

Communities of Practice

Qatar University Health

College of Health Sciences

College of Pharmacy

College of Medicine

Dental Medicine

College of Nursing

Human Nutrition

Biomedical Science

Public Health

Physiotherapy

Dundee Ready Education Environment Measure

Perception to Learning

Perception to Teachers

Academic Self-Perception

Perception of the Atmosphere

Social Self-Perception

Macleod Clark Professional Identity Scale

Focus Group

InterProfessional Education

Project-Based Learning

Hamad Medical Corporation

Hamad Bin Khalifa Medical City

Artificial Intelligence

Gruppen LD, Irby DM, Durning SJ, Maggio LA. Conceptualizing Learning Environments in the Health Professions. Acad Med. 2019;94(7):969–74.

OECD. Trends Shaping Education 2019. 2019.

Rawas H, Yasmeen N. Perception of nursing students about their educational environment in College of Nursing at King Saud Bin Abdulaziz University for Health Sciences, Saudi Arabia. Med Teach. 2019;41(11):1307–14.

Rusticus SA, Wilson D, Casiro O, Lovato C. Evaluating the Quality of Health Professions Learning Environments: Development and Validation of the Health Education Learning Environment Survey (HELES). Eval Health Prof. 2020;43(3):162–8.

Closs L, Mahat M, Imms W. Learning environments’ influence on students’ learning experience in an Australian Faculty of Business and Economics. Learning Environ Res. 2022;25(1):271–85.

Bakhshialiabad H, Bakhshi G, Hashemi Z, Bakhshi A, Abazari F. Improving students’ learning environment by DREEM: an educational experiment in an Iranian medical sciences university (2011–2016). BMC Med Educ. 2019;19(1):397.

Karani R. Enhancing the Medical School Learning Environment: A Complex Challenge. J Gen Intern Med. 2015;30(9):1235–6.

Adams K, Hean S, Sturgis P, Clark JM. Investigating the factors influencing professional identity of first-year health and social care students. Learn Health Soc Care. 2006;5(2):55–68.

Brown B, Crawford P, Darongkamas J. Blurred roles and permeable boundaries: the experience of multidisciplinary working in community mental health. Health Soc Care Community. 2000;8(6):425–35.

Hendelman W, Byszewski A. Formation of medical student professional identity: categorizing lapses of professionalism, and the learning environment. BMC Med Educ. 2014;14(1):139.

Jarvis-Selinger S, MacNeil KA, Costello GRL, Lee K, Holmes CL. Understanding Professional Identity Formation in Early Clerkship: A Novel Framework. Acad Med. 2019;94(10):1574–80.

Sarraf-Yazdi S, Teo YN, How AEH, Teo YH, Goh S, Kow CS, et al. A Scoping Review of Professional Identity Formation in Undergraduate Medical Education. J Gen Intern Med. 2021;36(11):3511–21.

Lave J, Wenger E. Situated Learning: Legitimate Peripheral Participation (Learning in Doing: Social, Cognitive and Computational Perspectives). Cambridge: Cambridge University Press; 1991. https://www.cambridge.org/highereducation/books/situatedlearning/6915ABD21C8E4619F750A4D4ACA616CD#overview .

Wenger, E. Communities of practice: Learning, meaning and identity. Cambridge: Cambridge University; 1998.

Eberle J, Stegmann K, Fischer F. Legitimate Peripheral Participation in Communities of Practice: Participation Support Structures for Newcomers in Faculty Student Councils. J Learn Sci. 2014;23(2):216–44.

Graven M, Lerman S. Book review: Wenger E. Communities of practice: Learning, meaning and identity. J Math Teacher Educ. 2003;6:185–94.

Brown T, Williams B, Lynch M. The Australian DREEM: evaluating student perceptions of academic learning environments within eight health science courses. Int J Med Educ. 2011;2:94.

International standards in medical education: assessment and accreditation of medical schools'--educational programmes. A WFME position paper. The Executive Council, The World Federation for Medical Education. Med Educ. 1998;32(5):549–58.

Frank JR, Taber S, van Zanten M, Scheele F, Blouin D, on behalf of the International Health Professions Accreditation Outcomes Consortium. The role of accreditation in 21st century health professions education: report of an International Consensus Group. BMC Med Educ. 2020;20(1):305.

Trede F, Macklin R, Bridges D. Professional identity development: A review of the higher education literature. Stud High Educ. 2012;37:365–84.

de Lasson L, Just E, Stegeager N, Malling B. Professional identity formation in the transition from medical school to working life: a qualitative study of group-coaching courses for junior doctors. BMC Med Educ. 2016;16(1):165.

Findyartini A, Greviana N, Felaza E, Faruqi M, Zahratul Afifah T, Auliya FM. Professional identity formation of medical students: A mixed-methods study in a hierarchical and collectivist culture. BMC Med Educ. 2022;22(1):443.

Cruess RL, Cruess SR, Boudreau JD, Snell L, Steinert Y. A schematic representation of the professional identity formation and socialization of medical students and residents: a guide for medical educators. Acad Med. 2015;90(6):718–25.

Prashanth GP, Ismail SK. The Dundee Ready Education Environment Measure: A prospective comparative study of undergraduate medical students’ and interns’ perceptions in Oman. Sultan Qaboos Univ Med J. 2018;18(2):e173–81.

Helou MA, Keiser V, Feldman M, Santen S, Cyrus JW, Ryan MS. Student well-being and the learning environment. Clin Teach. 2019;16(4):362–6.

Brown T, Williams B, McKenna L, Palermo C, McCall L, Roller L, et al. Practice education learning environments: the mismatch between perceived and preferred expectations of undergraduate health science students. Nurse Educ Today. 2011;31(8):e22–8.

Wasson LT, Cusmano A, Meli L, Louh I, Falzon L, Hampsey M, et al. Association Between Learning Environment Interventions and Medical Student Well-being: A Systematic Review. JAMA. 2016;316(21):2237–52.

Aktaş YY, Karabulut N. A Survey on Turkish nursing students’ perception of clinical learning environment and its association with academic motivation and clinical decision making. Nurse Educ Today. 2016;36:124–8.

Enns SC, Perotta B, Paro HB, Gannam S, Peleias M, Mayer FB, et al. Medical Students’ Perception of Their Educational Environment and Quality of Life: Is There a Positive Association? Acad Med. 2016;91(3):409–17.

Rodríguez-García MC, Gutiérrez-Puertas L, Granados-Gámez G, Aguilera-Manrique G, Márquez-Hernández VV. The connection of the clinical learning environment and supervision of nursing students with student satisfaction and future intention to work in clinical placement hospitals. J Clin Nurs. 2021;30(7–8):986–94.

QU Health, Qatar University. QU Health Members. https://www.qu.edu.qa/sites/en_US/health/members2020 . Accessed 11 May 2024.

QU Health, Qatar University. Vision and Mission. https://www.qu.edu.qa/sites/en_US/health/2018 . Accessed 11 May 2024.

Schoonenboom J, Johnson RB. How to Construct a Mixed Methods Research Design. Kolner Z Soz Sozpsychol. 2017;69(Suppl 2):107–31.

Almeida F. Strategies to perform a mixed methods study. Eur J Educ Stud. 2018;5(1):137–51. https://doi.org/10.5281/zenodo.1406214 .

Roff S, McAleer S, Harden RM, Al-Qahtani M, Ahmed AU, Deza H, et al. Development and validation of the Dundee ready education environment measure (DREEM). Med Teach. 1997;19(4):295–9.

Woodside AG. Book Review: Handbook of Research Design and Social Measurement. J Mark Res. 1993;30(2):259–63.

Creswell JW, Poth CN. Qualitative Inquiry and Research Design: Choosing among Five Approaches. 4th ed. Thousand Oaks: SAGE Publications; 2018.

Fetters MD, Curry LA, Creswell JW. Achieving integration in mixed methods designs-principles and practices. Health Serv Res. 2013;48(6 Pt 2):2134–56.

Dunne F, McAleer S, Roff S. Assessment of the undergraduate medical education environment in a large UK medical school. Health Educ J. 2006;65(2):149–58.

Koohpayehzadeh J, Hashemi A, Arabshahi KS, Bigdeli S, Moosavi M, Hatami K, et al. Assessing validity and reliability of Dundee ready educational environment measure (DREEM) in Iran. Med J Islam Repub Iran. 2014;28:60.

Shehnaz SI, Sreedharan J. Students’ perceptions of educational environment in a medical school experiencing curricular transition in United Arab Emirates. Med Teach. 2011;33(1):e37–42.

Zawawi A, Owaiwid L, Alanazi F, Alsogami L, Alageel N, Alassafi M, et al. Using Dundee Ready Educational Environment Measure (DREEM) to evaluate educational environments in Saudi Arabia. Int J Med Develop Countr. 2022;1:1526–33.

McAleer S, Roff S. A practical guide to using the Dundee Ready Education Environment Measure (DREEM). AMEE medical education guide. 2001;23(5):29–33.

Soemantri D, Herrera C, Riquelme A. Measuring the educational environment in health professions studies: a systematic review. Med Teach. 2010;32(12):947–52.

Matthews J, Bialocerkowski A, Molineux M. Professional identity measures for student health professionals–a systematic review of psychometric properties. BMC Med Educ. 2019;19(1):1–10.

Worthington M, Salamonson Y, Weaver R, Cleary M. Predictive validity of the Macleod Clark Professional Identity Scale for undergraduate nursing students. Nurse Educ Today. 2013;33(3):187–91.

Cowin LS, Johnson M, Wilson I, Borgese K. The psychometric properties of five Professional Identity measures in a sample of nursing students. Nurse Educ Today. 2013;33(6):608–13.

Brown R, Condor S, Mathews A, Wade G, Williams J. Explaining intergroup differentiation in an industrial organization. J Occup Psychol. 1986;59(4):273–86.

Proudfoot K. Inductive/Deductive Hybrid Thematic Analysis in Mixed Methods Research. J Mixed Methods Res. 2022;17(3):308–26.

Kossioni A, Varela R, Ekonomu I, Lyrakos G, Dimoliatis I. Students’ perceptions of the educational environment in a Greek Dental School, as measured by DREEM. Eur J Dent Educ. 2012;16(1):e73–8.

Leman M. Construct Validity Assessment of Dundee Ready Educational Environment Measurement (DREEM) in a School of Dentistry. Jurnal Pendidikan Kedokteran Indonesia: The Indonesian Journal of Medical Education. 2017;6:11.

Mohd Said N, Rogayah J, Hafizah A. A study of learning environments in the kulliyyah (faculty) of nursing, international islamic university malaysia. Malays J Med Sci. 2009;16(4):15–24.

Ugusman A, Othman NA, Razak ZNA, Soh MM, Faizul PNK, Ibrahim SF. Assessment of learning environment among the first year Malaysian medical students. Journal of Taibah Univ Med Sci. 2015;10(4):454–60.

Zamzuri A, Ali A, Roff S, McAleer S. Students perceptions of the educational environment at dental training college. Malaysian Dent J. 2004;25:15–26.

Ye J-H, Lee Y-S, He Z. The relationship among expectancy belief, course satisfaction, learning effectiveness, and continuance intention in online courses of vocational-technical teachers college students. Front Psychol. 2022;13: 904319.

Ashby SE, Adler J, Herbert L. An exploratory international study into occupational therapy students’ perceptions of professional identity. Aust Occup Ther J. 2016;63(4):233–43.

Al-Tameemi RAN, Johnson C, Gitay R, Abdel-Salam A-SG, Al Hazaa K, BenSaid A, et al. Determinants of poor academic performance among undergraduate students—A systematic literature review. Int J Educ Res Open. 2023;4:100232.

Adeel M, Chaudhry A, Huh S. Physical therapy students’ perceptions of the educational environment at physical therapy institutes in Pakistan. J Educ Eval Health Prof. 2020;17:7.

Clarke C, Martin M, Sadlo G, de-Visser R. The development of an authentic professional identity on role-emerging placements. Bri J Occupation Ther. 2014;77(5):222–9.

Hunter AB, Laursen SL, Seymour E. Becoming a scientist: The role of undergraduate research in students’ cognitive, personal, and professional development. Sci Educ. 2007;91(1):36–74.

Bendowska A, Baum E. The significance of cooperation in interdisciplinary health care teams as perceived by polish medical students. Int J Environ Res Public Health. 2023;20(2):954.

Gan Z, An Z, Liu F. Teacher feedback practices, student feedback motivation, and feedback behavior: how are they associated with learning outcomes? Front Psychol. 2021;12: 697045.

Sattar K, Yusoff MSB, Arifin WN, Mohd Yasin MA, Mat Nor MZ. A scoping review on the relationship between mental wellbeing and medical professionalism. Med Educ Online. 2023;28(1):2165892.

Yangdon K, Sherab K, Choezom P, Passang S, Deki S. Well-Being and Academic Workload: Perceptions of Science and Technology Students. Educ Res Reviews. 2021;16(11):418–27.

Hoferichter F, Kulakow S, Raufelder D. How teacher and classmate support relate to students’ stress and academic achievement. Front Psychol. 2022;13: 992497.

Achar Fujii RN, Kobayasi R, Claassen Enns S, Zen Tempski P. Medical Students’ Participation in Extracurricular Activities: Motivations, Contributions, and Barriers. A Qualitative Study. Advances in Medical Education and Practice. 2022;13:1133–41. https://doi.org/10.2147/amep.s359047 .

Magpantay-Monroe ER, Koka O-H, Aipa K. Community Engagement Leads to Professional Identity Formation of Nursing Students. Asian/Pacific Island Nurs J. 2020;5(3):181.

Zhou F, Sang A, Zhou Q, Wang QQ, Fan Y, Ma S. The impact of an integrated PBL curriculum on clinical thinking in undergraduate medical students prior to clinical practice. BMC Med Educ. 2023;23(1):460.

Du X, Al Khabuli JOS, Ba Hattab RAS, Daud A, Philip NI, Anweigi L, et al. Development of professional identity among dental students - A qualitative study. J Dent Educ. 2023;87(1):93–100.

Zulu BM, du Plessis E, Koen MP. Experiences of nursing students regarding clinical placement and support in primary healthcare clinics: Strengthening resilience. Health SA Gesondheid. 2021;26:1–11. https://doi.org/10.4102/hsag.v26i0.1615 .

McNally G, Haque E, Sharp S, Thampy H. Teaching empathy to medical students. Clin Teach. 2023;20(1): e13557.

Randhawa GK, Jackson M. The role of artificial intelligence in learning and professional development for healthcare professionals. Healthc Manage Forum. 2019;33(1):19–24.

Cooper AZ, Simpson D, Nordquist J. Optimizing the Physical Clinical Learning Environment for Teaching. J Grad Med Educ. 2020;12(2):221–2.

Gad SE-S, Noor W, Kamar M. How Does The Interior Design of Learning Spaces Impact The Students` Health, Behavior, and Performance? J Eng Res. 2022;6(4):74–87.

Haleem A, Javaid M, Qadri MA, Suman R. Understanding the role of digital technologies in education: A review. Sustain Operation Comput. 2022;3:275–85.

Faihs V, Heininger S, McLennan S, et al. Professional Identity and Motivation for Medical School in First-Year Medical Students: A Cross-sectional Study. Med Sci Educ. 2023;33:431–41. https://doi.org/10.1007/s40670-023-01754-7 .

Johnston T, Bilton N. Investigating paramedic student professional identity. Australasian J Paramed. 2020;17:1–8.

Mumena WA, Alsharif BA, Bakhsh AM, Mahallawi WH. Exploring professional identity and its predictors in health profession students and healthcare practitioners in Saudi Arabia. PLoS ONE. 2024;19(5): e0299356.

Kis V. Quality assurance in tertiary education: Current practices in OECD countries and a literature review on potential effects. Tertiary Review: A contribution to the OECD thematic review of tertiary education. 2005;14(9):1–47.

Kolb D. Experiential learning as the science of learning and development. Englewood Cliffs, NJ: Prentice Hall; 1984.

Dannan A. The Effect of a Simple Reward Model on the Academic Achievement of Syrian Dental Students. International Journal of Educational Research Review. 2020;5(4):308–14.

Guhan N, Krishnan P, Dharshini P, Abraham P, Thomas S. The effect of mentorship program in enhancing the academic performance of first MBBS students. J Adv Med Educ Prof. 2020;8(4):196–9.

Rohana K, Zainal N, Mohd Aminuddin Z, Jusoff K. The Quality of Learning Environment and Academic Performance from a Student’s Perception. Int J Business Manag. 2009;4:171–5.

Acknowledgements

The authors would like to thank all students who participated in this study.

This work was supported by the Qatar University Internal Collaborative Grant: QUCG-CPH-22/23–565.

Author information

Authors and affiliations.

Department of Clinical Pharmacy and Practice, College of Pharmacy, QU Health, Qatar University, P.O. Box 2713, Doha, Qatar

Banan Mukhalalati, Aaliah Aly, Ola Yakti, Sara Elshami, Ahmed Awaisu, Alla El-Awaisi & Derek Stewart

College of Dental Medicine, QU Health, Qatar University, P.O. Box 2713, Doha, Qatar

College of Health Sciences, QU Health, Qatar University, P.O. Box 2713, Doha, Qatar

Ahsan Sethi

College of Medicine, QU Health, Qatar University, P.O. Box 2713, Doha, Qatar

Marwan Farouk Abu-Hijleh

Leslie Dan Faculty of Pharmacy, University of Toronto, Toronto, ON, Canada

Zubin Austin

Contributions

Study conception and design: BM, and SE; data collection: BM, OY, AA, and AD; analysis and interpretation of results: all authors; draft manuscript preparation: all authors. All authors reviewed the results and approved the final version of the manuscript.

Corresponding author

Correspondence to Banan Mukhalalati .

Ethics declarations

Ethics approval and consent to participate.

This study involving human participants was conducted in accordance with the Declaration of Helsinki. Ethical approval for the study was obtained from the Qatar University Institutional Review Board (approval number: QU-IRB 1734-EA/22). All participants provided informed consent prior to participation.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher's note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Supplementary Material 1.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/ .

About this article

Cite this article.

Mukhalalati, B., Aly, A., Yakti, O. et al. Examining the perception of undergraduate health professional students of their learning environment, learning experience and professional identity development: a mixed-methods study. BMC Med Educ 24, 886 (2024). https://doi.org/10.1186/s12909-024-05875-4

Received: 03 July 2024

Accepted: 08 August 2024

Published: 16 August 2024

DOI: https://doi.org/10.1186/s12909-024-05875-4

  • Learning Environment
  • Professional Identity
  • Healthcare Professions Education
  • Gruppen et al. Framework

Patient safety in remote primary care encounters: multimethod qualitative study combining Safety I and Safety II analysis

BMJ Quality & Safety, Volume 33, Issue 9

  • Rebecca Payne 1 ,
  • Aileen Clarke 1 ,
  • Nadia Swann 1 ,
  • Jackie van Dael 1 ,
  • Natassia Brenman 1 ,
  • Rebecca Rosen 2 ,
  • Adam Mackridge 3 ,
  • Lucy Moore 1 ,
  • Asli Kalin 1 ,
  • Emma Ladds 1 ,
  • Nina Hemmings 2 ,
  • Sarah Rybczynska-Bunt 4 ,
  • Stuart Faulkner 1 ,
  • Isabel Hanson 1 ,
  • Sophie Spitters 5 ,
  • Sietse Wieringa 1 , 6 ,
  • Francesca H Dakin 1 ,
  • Sara E Shaw 1 ,
  • Joseph Wherton 1 ,
  • Richard Byng 4 ,
  • Laiba Husain 1 ,
  • Trisha Greenhalgh 1
  • 1 Nuffield Department of Primary Care Health Sciences , University of Oxford , Oxford , UK
  • 2 Nuffield Trust , London , UK
  • 3 Betsi Cadwaladr University Health Board , Bangor , UK
  • 4 Peninsula Schools of Medicine and Dentistry , University of Plymouth , Plymouth , UK
  • 5 Wolfson Institute of Population Health , Queen Mary University of London , London , UK
  • 6 Sustainable Health Unit , University of Oslo , Oslo , Norway
  • Correspondence to Professor Trisha Greenhalgh; trish.greenhalgh{at}phc.ox.ac.uk

Background Triage and clinical consultations increasingly occur remotely. We aimed to learn why safety incidents occur in remote encounters and how to prevent them.

Setting and sample UK primary care. 95 safety incidents (complaints, settled indemnity claims and reports) involving remote interactions. Separately, 12 general practices followed 2021–2023.

Methods Multimethod qualitative study. We explored causes of real safety incidents retrospectively (‘Safety I’ analysis). In a prospective longitudinal study, we used interviews and ethnographic observation to produce individual, organisational and system-level explanations for why safety and near-miss incidents (rarely) occurred and why they did not occur more often (‘Safety II’ analysis). Data were analysed thematically. An interpretive synthesis of why safety incidents occur, and why they do not occur more often, was refined following member checking with safety experts and lived experience experts.

Results Safety incidents were characterised by inappropriate modality, poor rapport building, inadequate information gathering, limited clinical assessment, inappropriate pathway (eg, wrong algorithm) and inadequate attention to social circumstances. These resulted in missed, inaccurate or delayed diagnoses, underestimation of severity or urgency, delayed referral, incorrect or delayed treatment, poor safety netting and inadequate follow-up. Patients with complex pre-existing conditions, cardiac or abdominal emergencies, vague or generalised symptoms, safeguarding issues, failure to respond to previous treatment or difficulty communicating seemed especially vulnerable. General practices were facing resource constraints, understaffing and high demand. Triage and care pathways were complex, hard to navigate and involved multiple staff. In this context, patient safety often depended on individual staff taking initiative, speaking up or personalising solutions.

Conclusion While safety incidents are extremely rare in remote primary care, deaths and serious harms have resulted. We offer suggestions for patient, staff and system-level mitigations.

  • Primary care
  • Diagnostic errors
  • Safety culture
  • Qualitative research
  • Prehospital care

Data availability statement

Data are available upon reasonable request. Details of real safety incidents are not available for patient confidentiality reasons. Requests for data on other aspects of the study from other researchers will be considered.

This is an open access article distributed in accordance with the Creative Commons Attribution 4.0 Unported (CC BY 4.0) license, which permits others to copy, redistribute, remix, transform and build upon this work for any purpose, provided the original work is properly cited, a link to the licence is given, and indication of whether changes were made. See:  https://creativecommons.org/licenses/by/4.0/ .

https://doi.org/10.1136/bmjqs-2023-016674

WHAT IS ALREADY KNOWN ON THIS TOPIC

Safety incidents are extremely rare in primary care but they do happen. Concerns have been raised about the safety of remote triage and remote consultations.

WHAT THIS STUDY ADDS

Rare safety incidents (involving death or serious harm) in remote encounters can be traced back to various clinical, communicative, technical and logistical causes. Telephone and video encounters in general practice are occurring in a high-risk (extremely busy and sometimes understaffed) context in which remote workflows may not be optimised. Front-line staff use creativity and judgement to help make care safer.

HOW THIS STUDY MIGHT AFFECT RESEARCH, PRACTICE OR POLICY

As remote modalities become mainstreamed in primary care, staff should be trained in the upstream causes of safety incidents and how they can be mitigated. The subtle and creative ways in which front-line staff already contribute to safety culture should be recognised and supported.

Introduction

In early 2020, remote triage and remote consultations (together, ‘remote encounters’), in which the patient is in a different physical location from the clinician or support staff member, were rapidly expanded as a safety measure in many countries because they eliminated the risk of transmitting COVID-19. 1–4 But by mid-2021, remote encounters had begun to be depicted as potentially unsafe because they had come to be associated with stories of patient harm, including avoidable deaths and missed cancers. 5–8

Providing triage and clinical care remotely is sometimes depicted as a partial solution to the system pressures facing primary healthcare in many countries, 9–11 including rising levels of need or demand, the ongoing impact of the COVID-19 pandemic and workforce challenges (especially short-term or longer-term understaffing). In this context, remote encounters may be an important component of a mixed-modality health service when used appropriately alongside in-person contacts. 12 13 But this begs the question of what ‘appropriate’ and ‘safe’ use of remote modalities in a primary care context is. Safety incidents (defined as ‘any unintended or unexpected incident which could have, or did, lead to harm for one or more patients receiving healthcare 14 ’) are extremely rare in primary healthcare consultations generally, 15 16 in-hours general practice telephone triage 17 and out-of-hours primary care. 18 But the recent widespread expansion of remote triage and remote consulting in primary care means that a wider range of patients and conditions are managed remotely, making it imperative to re-examine where the risks lie.

Theoretical approaches to safety in healthcare fall broadly into two traditions. 19 ‘Safety I’ studies focus on what went wrong. Incident reports are analysed to identify ‘root causes’ and ‘safety gaps’, and recommendations are made to reduce the chance that further similar incidents will happen in the future. 20 Such studies, undertaken in isolation, tend to lead to a tightening of rules, procedures and protocols. ‘Safety II’ studies focus on why, most of the time, things do not go wrong. Ethnography and other qualitative methods are employed to study how humans respond creatively to unique and unforeseen situations, thereby preventing safety incidents most of the time. 19 Such studies tend to show that actions which achieve safety are highly context specific, may entail judiciously breaking the rules and require human qualities such as courage, initiative and adaptability. 21 Few previous studies have combined both approaches.

In this study, we aimed to use Safety I methods to learn why safety incidents occur (although rarely) in remote primary care encounters and also apply Safety II methods to examine the kinds of creative actions taken by front-line staff that contribute to a safety culture and thereby prevent such incidents.

Study design and origins

A multimethod qualitative study was conducted across the UK, combining incident analysis, longitudinal ethnography and national stakeholder interviews.

The idea for this safety study began during a longitudinal ethnographic study of 12 general practices across England, Scotland and Wales as they introduced (and, in some cases, subsequently withdrew) various remote and digital modalities. Practices were selected for maximum diversity in geographical location, population served and digital maturity and followed from mid-2021 to end 2023 using staff and patient interviews and in-person ethnographic visits. The study protocol, 22 baseline findings 23 and a training needs analysis 24 have been published. To provide context for our ethnography, we interviewed a sample of national stakeholders in remote and digital primary care, including out-of-hours providers running telephone-led services, and held four online multistakeholder workshops, one of which was on the theme of safety, for policymakers, clinicians, patients and other parties. Early data from this detailed qualitative work revealed staff and patient concerns about the safety of remote encounters but no actual examples of harm.

To explore the safety theme further, we decided to take a dual approach. First, following Safety I methodology for the study of rare harms, 20 we set out to identify and analyse a sample of safety incidents involving remote encounters. These were sourced from arm’s-length bodies (NHS England, NHS Resolution, Healthcare Safety Investigation Branch) and providers of healthcare at scale (health boards, integrated care systems and telephone advice services), since our own small sample had not identified any of these rare occurrences. Second, we extended our longitudinal ethnographic design to more explicitly incorporate Safety II methodology, 19 allowing us to examine safety culture and safety practices in our 12 participating general practices, especially the adaptive work done by staff to avert potential safety incidents.

Data sources and management

Table 1 summarises the data sources.


Table 1. Summary of data sources

The Safety I dataset (rows 2-5) consisted of 95 specific incident reports, including complaints submitted to the main arm’s-length NHS body in England, NHS England, between 2020 and 2023 (n=69), closed indemnity claims that had been submitted to a national indemnity body, NHS Resolution, between 2015 and 2023 (n=16), reports from an urgent care telephone service in Wales (NHS 111 Wales) between 2020 and 2023 (n=6) and a report on an investigation of telephone advice during the COVID-19 crisis between 2020 and 2022 7 (n=4). These 95 incidents were organised using Microsoft Excel spreadsheets.

The Safety II dataset (rows 6-10) consisted of extracts from fieldnotes, workshop transcripts and interviews collected over 2 years, stored and coded on NVivo qualitative software. These were identified by searching for text words and codes (e.g. ‘risk’, ‘safety’, ‘incident’) and by asking researchers-in-residence, who were closely familiar with practices, to highlight safety incidents involving harm and examples of safety-conscious work practices. This dataset included over 100 formal interviews and numerous on-the-job interviews with practice staff, plus interviews with a sample of 10 GP (general practitioner) trainers and 10 GP trainees (penultimate row of table 1 ) and with six clinical safety experts identified through purposive sampling from government, arm’s-length bodies and health boards (bottom row of table 1 ).
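To illustrate the kind of keyword screening described above, the sketch below shows how coded extracts might be flagged for safety-related terms once exported from NVivo to a simple text format. It is a hypothetical sketch only: the CSV layout, column names, file name and search terms are assumptions for illustration and do not reflect the study’s actual data-management procedures.

```python
# Hypothetical sketch: flag qualitative extracts mentioning safety-related terms,
# assuming the extracts have been exported as a CSV with 'source', 'practice_id'
# and 'text' columns (an assumed layout, not the study's actual export format).
import csv
import re

SAFETY_TERMS = re.compile(r"\b(risk|safety|incident|harm|near[- ]miss)\b", re.IGNORECASE)

def flag_safety_extracts(path):
    """Return rows whose free-text field mentions any safety-related term."""
    flagged = []
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            if SAFETY_TERMS.search(row["text"]):
                flagged.append(row)
    return flagged

if __name__ == "__main__":
    # 'ethnography_extracts.csv' is an assumed file name for illustration.
    for row in flag_safety_extracts("ethnography_extracts.csv"):
        print(row["source"], row["practice_id"], row["text"][:80])
```

In practice, such automated screening would only supplement, not replace, the judgements of the researchers-in-residence described above, who were asked to highlight relevant incidents and practices from their own knowledge of the sites.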

Data analysis

We analysed incident reports, interview data and ethnographic fieldnotes using thematic analysis as described by Braun and Clarke. 25 These authors define a theme as an important, broad pattern in a set of qualitative data, which can (where necessary) be further refined using coding.

Themes in the incident dataset were identified by five steps. First, two researchers (both medically qualified) read each source repeatedly to gain familiarity. Second, those researchers worked independently using Braun and Clarke’s criterion (‘whether it captures something important in relation to the overall research question’—p 82 25 ) to identify themes. Third, they discussed their initial interpretations with each other and resolved differences through discussion. Fourth, they extracted evidence from the data sources to illustrate and refine each theme. Finally, they presented their list of themes along with illustrative examples to the wider team. Cases used to illustrate themes were systematically fictionalised by changing age, randomly allocating gender and altering clinical details. 26 For example, an acute appendicitis could be changed to acute diverticulitis if the issue was a missed acute abdomen.

These safety themes were then used to sensitise us to seek relevant (confirming and disconfirming) material from our ethnographic and interview datasets. For example, the theme ‘poor communication’ (and subthemes such as ‘failure to seek further clarification’ within this) prompted us to look for examples in our stakeholder interviews of poor communication offered as a cause of safety incidents and examples in our ethnographic notes of good communication (including someone seeking clarification). We used these wider data to add nuance to the initial list of themes.

As a final sense-checking step, the draft findings from this study were shown to each of the six safety experts in our sample and refined in the light of their comments (in some cases, for example, they considered the case to have been overfictionalised, thereby losing key clinical messages; they also gave additional examples to illustrate some of the themes we had identified, which underlined the importance of those themes).

Overview of dataset

The dataset ( table 1 ) consisted of 95 incident reports (see fictionalised examples in box 1 ), plus approximately 400 pages of extracts from interviews, ethnographic fieldnotes and workshop discussions, including situated safety practices (see examples in box 2 ), plus strategic insights relating to policy, organisation and planning of services. Notably, almost all incidents related to telephone calls.

Examples of safety incidents involving death or serious harm in remote encounters

All these cases have been systematically fictionalised as explained in the text.

Case 1 (death)

A woman in her 70s experiencing sudden breathlessness called her GP (general practitioner) surgery. The receptionist answered the phone and informed her that she would place her on the doctor’s list for an emergency call-back. The receptionist was distracted by a patient in the waiting room and did not do so. The patient deteriorated and died at home that afternoon.—NHS Resolution case, pre-2020

Case 2 (death)

An elderly woman contacted her GP after a telephone contact with the out-of-hours service, where constipation had been diagnosed. The GP prescribed laxatives without seeing the patient. The patient self-presented to the emergency department (ED) the following day in obstruction secondary to an incarcerated hernia and died in the operating theatre.—NHS Resolution case, pre-2020

Case 3 (risk to vulnerable patients)

A daughter complained that her elderly father was unable to access his GP surgery as he could not navigate the online triage system. When he phoned the surgery directly, he was directed back to the online system and told to get a relative to complete the form for him.—Complaint to NHS England, 2021

Case 4 (harm)

A woman in her first pregnancy at 28 weeks’ gestation experiencing urinary incontinence called NHS 111. She was taken down a ‘urinary problems’ algorithm pathway. Both the call handler and the subsequent clinician failed to recognise that she had experienced premature rupture of membranes. She later presented to the maternity department in active labour, and the opportunity to give early steroids to the premature infant was missed.—NHS Resolution case, pre-2020

Case 5 (death)

A doctor called about a 16-year-old girl with lethargy, shaking, fever and poor oral intake who had been unwell for 5 days. The doctor spoke to her older sister and advised that the child had likely glandular fever and should rest. When the parents arrived home, they called an ambulance but the child died of sepsis in the ED.—NHS Resolution case, pre-2020

Case 6 (death)

A 40-year-old woman, 6 weeks after caesarean section, contacted her GP due to shortness of breath, increased heart rate and dry cough. She was advised to get a COVID test and to dial 111 if she developed a productive cough, fever or pain. The following day she collapsed and died at home. The postmortem revealed a large pulmonary embolus. On reviewing the case, her GP surgery felt that had she been seen face to face, her oxygen saturations would have been measured and may have led to suspicion of the diagnosis.—NHS Resolution case, 2020

Case 7 (death)

A son complained that his father with diabetes and chronic kidney disease did not receive any in-person appointments over a period of 1 year. His father went on to die following a leg amputation arising from a complication of his diabetes.—Complaint to NHS England, 2021

Case 8 (death)

A 73-year-old diabetic woman with throat pain and fatigue called the surgery. She was diagnosed with a viral illness and given self-care advice. Over the next few days, she developed worsening breathlessness and was advised to do a COVID test and was given a pulse oximeter. She was found dead at home 4 days later. Postmortem found a blocked coronary artery and a large amount of pulmonary oedema. The cause of death was myocardial infarction and heart failure.—NHS Resolution case, pre-2020

Case 9 (harm)

A patient with a history of successfully treated cervical cancer developed vaginal bleeding. A diagnosis of fibroids was made and the patient received routine care by telephone over the next few months until a scan revealed a local recurrence of the original cancer.—Complaint to NHS England, 2020

Case 10 (death)

A 65-year-old female smoker with chronic cough and breathlessness presented to her GP. She was diagnosed with chronic obstructive pulmonary disease (COPD) and monitored via telephone. She did not respond to inhalers or antibiotics but continued to receive telephone monitoring without further investigation. Her symptoms continued to worsen and she called an ambulance. In the ED, she was diagnosed with heart failure and died soon after.—Complaint to NHS England, 2021

Case 11 (harm)

A 30-year-old woman presented with intermittent episodes of severe dysuria over a period of 2 years. She was given repeated courses of antibiotics but no urine was sent for culture and she was not examined. After 4 months of symptoms, she saw a private GP and was diagnosed with genital herpes.—Complaint to NHS England, 2021

Case 12 (harm)

There were repeated telephone consultations about a baby whose parents were concerned that the child went a funny colour when feeding or crying. The 6-week check was done by telephone and at no stage was the child seen in person. Photos were sent in, but the child’s dark skin colour meant that cyanosis was not easily apparent to the reviewing clinician. The child was subsequently admitted by emergency ambulance where a significant congenital cardiac abnormality was found.—Complaint to NHS England, 2020 1

Case 13 (harm)

A 35-year-old woman in her third trimester of pregnancy had a telephone appointment with her GP about a breast lump. She was informed that this was likely due to antenatal breast changes and was not offered an in-person appointment. She attended after delivery and was referred to a breast clinic where a cancer was diagnosed.—Complaint to NHS England, 2020

Case 14 (harm)

A 63-year-old woman with a variety of physical symptoms including diarrhoea, hip girdle pain, palpitations, light-headedness and insomnia called her surgery on multiple occasions. She was told her symptoms were likely due to anxiety, but was diagnosed with stage 4 ovarian cancer and died soon after.—Complaint to NHS England, 2021

Case 15 (death)

A man with COPD with worsening shortness of breath called his GP surgery. The staff asked him if it was an emergency, and when the patient said no, scheduled him for 2 weeks later. The patient died before the appointment.—Complaint to NHS England, 2021

Examples of safety practices

Case 16 (safety incident averted by switching to video call for a sick child)

‘I’ve remembered one father that called up. Really didn’t seem to be too concerned. And was very much under-playing it and then when I did a video call, you know this child… had intercostal recession… looked really, really poorly. And it was quite scary actually that, you know, you’d had the conversation and if you’d just listened to what Dad was saying, actually, you probably wouldn’t be concerned.’—GP (general practitioner) interview 2022

Case 17 (‘red flag’ spotted by support staff member)

A receptionist was processing routine ‘administrative’ encounters sent in by patients using AccuRx (text messaging software). She became concerned about a sick note renewal request from a patient with a mental health condition. The free text included a reference to feeling suicidal, so the receptionist moved the request to the ‘red’ (urgent call-back) list. In interviews with staff, it became apparent that there had recently been heated discussion in the practice about whether support staff were adding ‘too many’ patients to the red list. After discussing cases, the doctors concluded that it should be them, not the support staff, who should absorb the risk in uncertain cases. The receptionist said that they had been told: ‘if in doubt, put it down as urgent and then the duty doctor can make a decision.’—Ethnographic fieldnotes from general practice 2023

Case 18 (‘check-in’ phone call added on busy day)

A duty doctor was working through a very busy Monday morning ‘urgent’ list. One patient had acute abdominal pain, which would normally have triggered an in-person appointment, but there were no slots and hard decisions were being made. This patient had had the pain already for a week, so the doctor judged that the general rule of in-person examination could probably be over-ridden. But instead of simply allocating to a call-back, the doctor asked a support staff member to phone the patient, ask ‘are you OK to wait until tomorrow?’ and offer basic safety-netting advice.—Ethnographic fieldnotes from general practice 2023

Case 19 (receptionist advocating on behalf of ‘angry’ walk-in patient)

A young Afghan man with limited English walked into a GP surgery on a very busy day, ignoring the prevailing policy of ‘total triage’ (make contact by phone or online in the first instance). He indicated that he wanted a same-day in-person appointment for a problem he perceived as urgent. A heated exchange occurred with the first receptionist, and the patient accused her of ‘racism’. A second receptionist of non-white ethnicity herself noted the man’s distress and suspected that there may indeed be an urgent problem. She asked the first receptionist to leave the scene, saying she wanted to ‘have a chat’ with the patient (‘the colour of my skin probably calmed him down more than anything’). Through talking to the patient and looking through his record, she ascertained that he had an acute infection that likely needed prompt attention. She tried to ‘bend the rules’ and persuade the duty doctor to see the patient, conveying the clinical information but deliberately omitting the altercation. But the first receptionist complained to the doctor (‘he called us racists’) and the doctor decided that the patient would not therefore be offered a same-day appointment. The second receptionist challenged the doctor (‘that’s not a reason to block him from getting care’). At this point, the patient cried and the second receptionist also became upset (‘this must be serious, you know’). On this occasion, despite her advocacy the patient was not given an immediate appointment.—Ethnographic fieldnotes from general practice 2022

Case 20 (long-term condition nurse visits ‘unengaged’ patients at home)

An advanced nurse practitioner talks of two older patients, each with a long-term condition, who are ‘unengaged’ and lacking a telephone. In this practice, all long-term condition reviews are routinely done by phone. She reflects that some people ‘choose not to have avenues of communication’ (ie, are deliberately not contactable), and that there may be reasons for this (‘maybe health anxiety or just old’). She has, on occasion, ‘turned up’ unannounced at the patient’s home and asked to come in and do the review, including bloods and other tests. She reflects that while most patients engage well with the service, ‘half my job is these patients who don’t engage very well.’—Ethnographic fieldnotes from digitally advanced general practice 2022

Case 21 (doctor over-riding patient’s request for telephone prescribing)

A GP trainee described a case of a 53-year-old first-generation immigrant from Pakistan, a known smoker with hypertension and diabetes. He had booked a telephone call for vomiting and sinus pain. There was no interpreter available but the man spoke some English. He said he had awoken in the night with pain in his sinuses and vomiting. All he wanted was painkillers for his sinuses. The story did not quite make sense, and the man ‘sounded unwell’. The GP told him he needed to come in and be examined. The patient initially resisted but was persuaded to come in. When the GP went to call him in, the man was visibly unwell and lying down in the waiting room. When seen in person, he admitted to shoulder pain. The GP sent him to accident and emergency (A&E) where a myocardial infarction was diagnosed.—Trainee interview 2023

Below, we describe the main themes that were evident in the safety incidents: a challenging organisational and system context, poor communication compounded by remote modalities, limited clinical information, patient and carer burden and inadequate training. Many safety incidents illustrated multiple themes—for example, poor communication, failures of clinical assessment or judgement, patient complexity and system pressures. In the detailed findings below, we illustrate why safety incidents occasionally occur and why they are usually avoided.

The context for remote consultations: system and operational challenges

Introduction of remote triage and expansion of remote consultations in UK primary care occurred at a time of unprecedented system stress (an understaffed and chronically under-resourced primary care sector, attempting to cope with a pandemic). 23 Many organisations had insufficient telephone lines or call handlers, so patients struggled to access services (eg, half of all calls to the emergency COVID-19 telephone service in March 2020 were never answered 7 ). Most remote consultations were by telephone. 27

Our safety incident dataset included examples of technically complex access routes which patients found difficult or impossible to navigate (case 3 in box 1 ) and which required non-clinical staff to make clinical or clinically related judgements (cases 4 and 15). Our ethnographic dataset contained examples of inflexible application of triage rules (eg, no face-to-face consultation unless the patient had already had a telephone call), though in other practices these rules could be over-ridden by staff using their judgement or asking colleagues. Some practices had a high rate of failed telephone call-backs (patient unobtainable).

High demand, staff shortages and high turnover of clinical and support staff made the context for remote encounters inherently risky. Several incidents were linked to a busy staff member becoming distracted (case 1). Telephone consultations, which tend to be shorter, were sometimes used in the hope of improving efficiency. Some safety incidents suggested perfunctory and transactional telephone consultations, with flawed decisions made on the basis of incomplete information (eg, case 2).

Many practices had shifted—at least to some extent—from a demand-driven system (in which every request for an appointment was met) to a capacity-driven one (in which, if a set capacity was exceeded, patients were advised to seek care elsewhere), though the latter was often used flexibly rather than rigidly with an expectation that some patients would be ‘squeezed in’. In some practices, capacity limits had been introduced to respond to escalation of demand linked to overuse of triage templates (eg, to inquire about minor symptoms).

As a result of task redistribution and new staff roles, a single episode of care for one problem often involved multiple encounters or tasks distributed among clinical and non-clinical staff (often in different locations and sometimes also across in-hours and out-of-hours providers). Capacity constraints in onward services placed pressure on primary care to manage risk in the community, leading in some cases to failure to escalate care appropriately (case 6).

Some safety incidents were linked to organisational routines that had not adapted sufficiently to remote working—for example, a prescription might be issued but (for various reasons) it could not be transmitted electronically to the pharmacy. Certain urgent referrals were delayed if the consultation occurred remotely (a referral for suspected colon cancer, for example, would not be accepted without a faecal immunochemical test).

Training, supervising and inducting staff was more difficult when many were working remotely. If teams saw each other less frequently, relationship-building encounters and ‘corridor’ conversations were reduced, with knock-on impacts for individual and team learning and patient care. Those supervising trainees or allied professionals reported loss of non-verbal cues (eg, more difficult to assess how confident or distressed the trainee was).

Clinical and support staff regularly used initiative and situated judgement to compensate for an overall lack of system resilience ( box 2 ). Many practices had introduced additional safety measures such as lists of patients who, while not obviously urgent, needed timely review by a clinician. Case 17 illustrates how a rule of thumb ‘if in doubt, put it down as urgent’ was introduced and then applied to avert a potentially serious mental health outcome. Case 18 illustrates how, in the context of insufficient in-person slots to accommodate all high-risk cases, a unique safety-netting measure was customised for a patient.

Poor communication is compounded by remote modalities

Because sense data (eg, sight, touch, smell) are missing, 28 remote consultations rely heavily on the history. Many safety incidents were characterised by insufficient or inaccurate information for various reasons. Sometimes (cases 2, 5, 6, 8, 9, 10 and 11), the telephone consultation was too short to do justice to the problem; the clinician asked few or no questions to build rapport, obtain a full history, probe the patient’s answers for additional detail, confirm or exclude associated symptoms and inquire about comorbidities and medication. Video provided some visual cues but these were often limited to head and shoulders, and photographs were sometimes of poor quality.

Cases 2, 4, 5 and 9 illustrate the dangers of relying on information provided by a third party (another staff member or a relative). A key omission (eg, in case 5) was failing to ask why the patient was unable to come to the phone or answer questions directly.

Some remote triage conversations were conducted using an inappropriate algorithm. In case 4, for example, the call handler accepted a pregnant patient’s assumption that leaking fluid was urine when the problem was actually ruptured membranes. The wrong pathway was selected; vital questions remained unasked; and a skewed history was passed to (and accepted by) the clinician. In case 8, the patient’s complaint of ‘throat’ pain was taken literally and led to ‘viral illness’ advice, overlooking a myocardial infarction.

The cases in box 2 illustrate how staff compensated for communication challenges. In case 16, a GP plays a hunch that a father’s account of his child’s asthma may be inaccurate and converts a phone encounter to video, revealing the child’s respiratory distress. In case 19 (an in-person encounter but relevant because the altercation occurs partly because remote triage is the default modality), one receptionist correctly surmises that the patient’s angry demeanour may indicate urgency and uses her initiative and interpersonal skills to obtain additional clinical information. In case 20, a long-term condition nurse develops a labour-intensive workaround to overcome her elderly patients’ ‘lack of engagement’. More generally, we observed numerous examples of staff using both formal tools (eg, see ‘red list’ in case 17) and informal measures (eg, corridor chats) to pass on what they believed to be crucial information.

Remote consulting can provide limited clinical information

Cases 2 and 4–14 all describe serious conditions including congenital cyanotic heart disease, pulmonary oedema, sepsis, cancer and diabetic foot which would likely have been readily diagnosed with an in-person examination. While patients often uploaded still images of skin lesions, these were not always of sufficient quality to make a confident diagnosis.

Several safety incidents involved clinicians assuming that a diagnosis made on a remote consultation was definitive rather than provisional. Especially when subsequent consultations were remote, such errors could become ingrained, leading to diagnostic overshadowing and missed or delayed diagnosis (cases 2, 8, 9, 10, 11 and 13). Patients with pre-existing conditions (especially if multiple or progressive), the very young and the elderly were particularly difficult to assess by telephone (cases 1, 2, 8, 10, 12 and 16). Clinical conditions difficult to assess remotely included possible cardiac pain (case 8), acute abdomen (case 2), breathing difficulties (cases 1, 6 and 10), vague and generalised symptoms (cases 5 and 14) and symptoms which progressed despite treatment (cases 9, 10 and 11). All these categories came up repeatedly in interviews and workshops as clinically risky.

Subtle aspects of the consultation which may have contributed to safety incidents in a telephone consultation included the inability to fully appraise the patient’s overall health and well-being (including indicators relevant to mental health such as affect, eye contact, personal hygiene and evidence of self-harm), general demeanour, level of agitation and concern, and clues such as walking speed and gait (cases 2, 5, 6, 7, 8, 10, 12 and 14). Our interviews included stories of missed cases of new-onset frailty and dementia in elderly patients assessed by telephone.

In most practices we studied, most long-term condition management was undertaken by telephone. This may be appropriate (and indeed welcome) when the patient is well and confident and a physical examination is not needed. But diabetes reviews, for example, require foot examination. Case 7 describes the deterioration and death of a patient with diabetes whose routine check-ups had been entirely by telephone. We also heard stories of delayed diagnosis of new diabetes in children when an initial telephone assessment failed to pick up lethargy, weight loss and smell of ketones, and point-of-care tests of blood or urine were not possible.

Nurses observed that remote consultations limit opportunities for demonstrating or checking the patient’s technique in using a device for monitoring or treating their condition such as an inhaler, oximeter or blood pressure machine.

Safety netting was inadequate in many remote safety incidents, even when provided by a clinician (cases 2, 5, 6, 8, 10, 12 and 13) but especially when conveyed by a non-clinician (case 15). Expert interviewees identified that making life-changing diagnoses remotely and starting patients on long-term medication without an in-person appointment was also risky.

Our ethnographic data showed that various measures were used to compensate for limited clinical information, including converting a phone consultation to video (case 16), asking the patient if they felt they could wait until an in-person slot was available (case 18), visiting the patient at home (case 20) and enacting a ‘if the history doesn’t make sense, bring the patient in for an in-person assessment’ rule of thumb (case 21). Out-of-hours providers added examples of rules of thumb that their services had developed over years of providing remote services, including ‘see a child face-to-face if the parent rings back’, ‘be cautious about third-party histories’, ‘visit a palliative care patient before starting a syringe driver’ and ‘do not assess abdominal pain remotely’.

Remote modalities place additional burdens on patients and carers

Given the greater importance of the history in remote consultations, patients who lacked the ability to communicate and respond in line with clinicians’ expectations were at a significant disadvantage. Several safety incidents were linked to patients’ limited fluency in the language and culture of the clinician or to specific vulnerabilities such as learning disability, cognitive impairment, hearing impairment or neurodiversity. Those with complex medical histories and comorbidities, and those with inadequate technical set-up and skills (case 3), faced additional challenges.

In many practices, in-person appointments were strictly limited according to more or less rigid triage criteria. Some patients were unable to answer the question ‘is this an emergency?’ correctly, leading to their condition being deprioritised (case 15). Some had learnt to ‘game’ the triage system (eg, online templates 29 ) by adapting their story to obtain the in-person appointment they felt they needed. This could create distrust and lead to inaccurate information on the patient record.

Our ethnographic dataset contained many examples of clinical and support staff using initiative to compensate for vulnerable patients’ inability or unwillingness to take on the additional burden of remote modalities (cases 19 and 20 in Box 2 30 31 ).

Training for remote encounters is often inadequate

Safety incidents highlighted various training needs for support staff members (eg, customer care skills, risks of making clinical judgements) and clinicians (eg, limitations of different modalities, risks of diagnostic overshadowing). Whereas out-of-hours providers gave thorough training to novice GPs (covering such things as attentiveness, rapport building, history taking, probing, attending to contextual cues and safety netting) in telephone consultations, 32–34 many in-hours clinicians had never been formally taught to consult by telephone. Case 17 illustrates how on-the-job training based on acknowledgement of contextual pressures and judicious use of rules of thumb may be very effective in averting safety incidents.

Statement of principal findings

An important overall finding from this study is that examples of deaths or serious harms associated with remote encounters in primary care were extremely rare, amounting to fewer than 100 despite an extensive search going back several years.

Analysis of these 95 safety incidents, drawn from multiple complementary sources, along with rich qualitative data from ethnography, interviews and workshops has clarified where the key risks lie in remote primary care. Remote triage and consultations expanded rapidly in the context of the COVID-19 crisis; they were occurring in the context of resource constraints, understaffing and high demand. Triage and care pathways were complex, multilayered and hard to navigate; some involved distributed work among multiple clinical and non-clinical staff. In some cases, multiple remote encounters preceded (and delayed) a needed in-person assessment.

In this high-risk context, safety incidents involving death or serious harm were rare, but those that occurred were characterised by a combination of inappropriate choice of modality, poor rapport building, inadequate information gathering, limited clinical assessment, inappropriate clinical pathway (eg, wrong algorithm) and failure to take account of social circumstances. These led to missed, inaccurate or delayed diagnoses, underestimation of severity or urgency, delayed referral, incorrect or delayed treatment, poor safety netting and inadequate follow-up. Patients with complex or multiple pre-existing conditions, cardiac or abdominal emergencies, vague or generalised symptoms, safeguarding issues and failure to respond to previous treatment, and those who (for any reason) had difficulty communicating, seemed particularly at risk.

Strengths and limitations of the study

The main strength of this study was that it combined the largest Safety I study undertaken to date of safety incidents in remote primary care (using datasets which have not previously been tapped for research), with a large, UK-wide ethnographic Safety II analysis of general practice as well as stakeholder interviews and workshops. Limitations of the safety incident sample (see final column in table 1 ) include that it was skewed towards very rare cases of death and serious harm, with relatively few opportunities for learning that did not result in serious harm. Most sources were retrospective and may have suffered from biases in documentation and recall. We also failed to obtain examples of safeguarding incidents (which would likely turn up in social care audits). While all cases involved a remote modality (or a patient who would not or could not use one), it is impossible to definitively attribute the harm to that modality.

Comparison with existing literature

This study has affirmed previous findings that processes, workflows and training in in-hours general practice have not adapted adequately to the booking, delivery and follow-up of remote consultations. 24 35 36 Safety issues can arise, for example, from how the remote consultation interfaces with other key practice routines (eg, for making urgent referrals for possible cancer). The sheer complexity and fragmentation of much remote and digital work underscores the findings from a systematic review of the importance of relational coordination (defined as ‘a mutually reinforcing process of communicating and relating for the purpose of task integration ’ (p 3) 37 ) and psychological safety (defined as ‘people’s perceptions of the consequences of taking interpersonal risks in a particular context such as a workplace ’ (p 23) 38 ) in building organisational resilience and assuring safety.

The additional workload and complexity associated with running remote appointments alongside in-person ones is cognitively demanding for staff and requires additional skills for which not all are adequately trained. 24 39 40 We have written separately about the loss of traditional continuity of care as primary care services become digitised, 41–43 and about the unmet training needs of both clinical and support staff for managing remote and digital encounters. 24

Our findings also resonate with research showing that remote modalities can interfere with communicative tasks such as rapport building, establishing a therapeutic relationship and identifying non-verbal cues such as tearfulness 35 36 44 ; that remote consultations tend to be shorter and feature less discussion, information gathering and safety netting 45–48 ; and that clinical assessment in remote encounters may be challenging, 27 49 50 especially when physical examination is needed. 35 36 51 These factors may rarely contribute to incorrect or delayed diagnoses, underestimation of the seriousness or urgency of a case, and failure to identify a deteriorating trajectory. 35 36 52–54

Even when systems seem adequate, patients may struggle to navigate them. 23 30 31 This finding aligns with an important recent review of cognitive load theory in the context of remote and digital health services: because such services are more cognitively demanding for patients, they may widen inequities of access. 55 Some patients lack navigating and negotiating skills, access to key technologies 13 36 or confidence in using them. 30 35 The remote encounter may require the patient to have a sophisticated understanding of access and cross-referral pathways, interpret their own symptoms (including making judgements about severity and urgency), obtain and use self-monitoring technologies (such as a blood pressure machine or oximeter) and convey these data in medically meaningful ways (eg, by completing algorithmic triage forms or via a telephone conversation). 30 56 Furthermore, the remote environment may afford fewer opportunities for holistically evaluating, supporting or safeguarding the vulnerable patient, leading to widening inequities. 13 35 57 Previous work has also shown that patients with pre-existing illness, complex comorbidities or high-risk states, 58 59 language non-concordance, 13 35 inability to describe their symptoms (eg, due to autism 60 ), extremes of age 61 and those with low health or system literacy 30 are more difficult to assess remotely.

Lessons for safer care

Many of the contributory factors to safety incidents in remote encounters have been suggested previously, 35 36 and align broadly with factors that explain safety incidents more generally. 53 62 63 This new study has systematically traced how upstream factors may, very rarely, combine to contribute to avoidable human tragedies—and also how primary care teams develop local safety practices and cultures to help avoid them. Our study provides some important messages for practices and policymakers.

First, remote encounters in general practice are mostly occurring in a system designed for in-person encounters, so processes and workflows may work less well.

Second, because the remote encounter depends more on history taking and dialogue, verbal communication is even more mission critical. Working remotely under system pressures and optimising verbal communication should both be priorities for staff training.

Third, the remote environment may increase existing inequities as patients’ various vulnerabilities (eg, extremes of age, poverty, language and literacy barriers, comorbidities) make remote communication and assessment more difficult. Our study has revealed impressive efforts from staff to overcome these inequities on an individual basis; some of these workarounds may become normalised and increase efficiency, but others are labour intensive and not scalable.

A final message from this study is that clinical assessment provides less information when a physical examination (and even a basic visual overview) is not possible. Hence, the remote consultation has a higher degree of inherent uncertainty. Even when processes have been optimised (eg, using high-quality triage to allocate modality), but especially when they have not, diagnoses and assessments of severity or urgency should be treated as more provisional and revisited accordingly. We have given examples in the Results section of how local adaptation and rule breaking bring flexibility into the system and may become normalised over time, leading to the creation of locally understood ‘rules of thumb’ which increase safety.

Overall, these findings underscore the need to share learning and develop guidance about the drivers of risk, how these play out in different kinds of remote encounters and how to develop and strengthen Safety II approaches to mitigate those risks. Table 2 shows proposed mitigations at staff, process and system levels, as well as a preliminary list of suggestions for patients, which could be refined with patient input using codesign methods. 64

Table 2. Reducing safety incidents in remote primary care

Unanswered questions and future research

This study has helped explain where the key risks lie in remote primary care encounters, which in our dataset were almost all by telephone. It has revealed examples of how front-line staff create and maintain a safety culture, thereby helping to prevent such incidents. We suggest four key avenues for further research. First, additional ethnographic studies in general practice might extend these findings and focus on specific subquestions (eg, how practices identify, capture and learn from near-miss incidents). Second, ethnographic studies of out-of-hours services, which are mostly telephone by default, may reveal additional elements of safety culture from which in-hours general practice could learn. Third, the rise in asynchronous e-consultations (in which patients complete an online template and receive a response by email) raises questions about the safety of this new modality which could be explored in mixed-methods studies including quantitative analysis of what kinds of conditions these consultations cover and qualitative analysis of the content and dynamics of the interaction. Finally, our findings suggest that the safety of new clinically related ‘assistant’ roles in general practice should be urgently evaluated, especially when such staff are undertaking remote assessment or remote triage.

Ethics statements

Patient consent for publication.

Not applicable.

Ethics approval

Ethical approval was granted by the East Midlands—Leicester South Research Ethics Committee and UK Health Research Authority (September 2021, 21/EM/0170 and subsequent amendments). Access to the NHS Resolution dataset was obtained by secondment of the RP via honorary employment contract, where she worked with staff to de-identify and fictionalise relevant cases. The Remote by Default 2 study (referenced in main text) was co-designed by patients and lay people; it includes a diverse patient panel. Oversight was provided by an independent external advisory group with a lay chair and patient representation. A person with lived experience of a healthcare safety incident (NS) is a co-author on this paper and provided input to data analysis and writing up, especially the recommendations for patients in table 2 .

Acknowledgments

We thank the participating organisations for cooperating with this study and giving permission to use fictionalised safety incidents. We thank the participants in the ethnographic study (patients, practice staff, policymakers, other informants) who gave generously of their time and members of the study advisory group.

References

  • Healthcare Safety Investigation Branch. NHS 111’s response to callers with COVID-19-related symptoms during the pandemic. 2022. Available: https://www.hsib.org.uk/investigations-and-reports/response-of-nhs-111-to-the-covid-19-pandemic/nhs-111s-response-to-callers-with-covid-19-related-symptoms-during-the-pandemic [Accessed 25 Jun 2023].
  • Remote consultations. n.d. Available: https://www.gmc-uk.org/ethical-guidance/ethical-hub/remote-consultations


Contributors RP led the Safety I analysis with support from AC. The Safety II analysis was part of a wider ethnographic study led by TG and SS, on which all other authors undertook fieldwork and contributed data. TG and RP wrote the paper, with all other authors contributing refinements. All authors checked and approved the final manuscript. RP is guarantor.

Funding Funding was from NIHR HS&DR (grant number 132807) (Remote by Default 2 study) and NIHR School for Primary Care Research (grant number 594) (ModCons study), plus an NIHR In-Practice Fellowship for RP.

Competing interests RP was National Professional Advisor, Care Quality Commission 2017–2022, where her role included investigation of safety issues.

Provenance and peer review Not commissioned; externally peer reviewed.

Linked Articles

  • Editorial Examining telehealth through the Institute of Medicine quality domains: unanswered questions and research agenda Timothy C Guetterman Lorraine R Buis BMJ Quality & Safety 2024; 33 552-555 Published Online First: 09 May 2024. doi: 10.1136/bmjqs-2023-016872


Purposeful sampling for qualitative data collection and analysis in mixed method implementation research

Lawrence A. Palinkas

1 School of Social Work, University of Southern California, Los Angeles, CA 90089-0411

Sarah M. Horwitz

2 Department of Child and Adolescent Psychiatry, New York University, New York, NY

Carla A. Green

3 Center for Health Research, Kaiser Permanente Northwest, Portland, OR

Jennifer P. Wisdom

4 George Washington University, Washington DC

Naihua Duan

5 New York State Neuropsychiatric Institute and Department of Psychiatry, Columbia University, New York, NY

Kimberly Hoagwood

Purposeful sampling is widely used in qualitative research for the identification and selection of information-rich cases related to the phenomenon of interest. Although there are several different purposeful sampling strategies, criterion sampling appears to be used most commonly in implementation research. However, combining sampling strategies may be more appropriate to the aims of implementation research and more consistent with recent developments in quantitative methods. This paper reviews the principles and practice of purposeful sampling in implementation research, summarizes types and categories of purposeful sampling strategies and provides a set of recommendations for use of single strategy or multistage strategy designs, particularly for state implementation research.

Recently there have been several calls for the use of mixed method designs in implementation research ( Proctor et al., 2009 ; Landsverk et al., 2012 ; Palinkas et al. 2011 ; Aarons et al., 2012). This has been precipitated by the realization that the challenges of implementing evidence-based and other innovative practices, treatments, interventions and programs are sufficiently complex that a single methodological approach is often inadequate. This is particularly true of efforts to implement evidence-based practices (EBPs) in statewide systems where relationships among key stakeholders extend both vertically (from state to local organizations) and horizontally (between organizations located in different parts of a state). As in other areas of research, mixed method designs are viewed as preferable in implementation research because they provide a better understanding of research issues than either qualitative or quantitative approaches alone ( Palinkas et al., 2011 ). In such designs, qualitative methods are used to explore and obtain depth of understanding as to the reasons for success or failure to implement evidence-based practice or to identify strategies for facilitating implementation while quantitative methods are used to test and confirm hypotheses based on an existing conceptual model and obtain breadth of understanding of predictors of successful implementation ( Teddlie & Tashakkori, 2003 ).

Sampling strategies for quantitative methods used in mixed methods designs in implementation research are generally well-established and based on probability theory. In contrast, sampling strategies for qualitative methods in implementation studies are less explicit and often less evident. Although the samples for qualitative inquiry are generally assumed to be selected purposefully to yield cases that are “information rich” (Patton, 2001), there are no clear guidelines for conducting purposeful sampling in mixed methods implementation studies, particularly when studies have more than one specific objective. Moreover, it is not entirely clear what forms of purposeful sampling are most appropriate for the challenges of using both quantitative and qualitative methods in the mixed methods designs used in implementation research. Such a consideration requires a determination of the objectives of each methodology and the potential impact of selecting one strategy to achieve one objective on the selection of other strategies to achieve additional objectives.

In this paper, we present different approaches to the use of purposeful sampling strategies in implementation research. We begin with a review of the principles and practice of purposeful sampling in implementation research, a summary of the types and categories of purposeful sampling strategies, and a set of recommendations for matching the appropriate single strategy or multistage strategy to study aims and quantitative method designs.

Principles of Purposeful Sampling

Purposeful sampling is a technique widely used in qualitative research for the identification and selection of information-rich cases for the most effective use of limited resources ( Patton, 2002 ). This involves identifying and selecting individuals or groups of individuals that are especially knowledgeable about or experienced with a phenomenon of interest ( Cresswell & Plano Clark, 2011 ). In addition to knowledge and experience, Bernard (2002) and Spradley (1979) note the importance of availability and willingness to participate, and the ability to communicate experiences and opinions in an articulate, expressive, and reflective manner. In contrast, probabilistic or random sampling is used to ensure the generalizability of findings by minimizing the potential for bias in selection and to control for the potential influence of known and unknown confounders.

As Morse and Niehaus (2009) observe, whether the methodology employed is quantitative or qualitative, sampling methods are intended to maximize efficiency and validity. Nevertheless, sampling must be consistent with the aims and assumptions inherent in the use of either method. Qualitative methods are, for the most part, intended to achieve depth of understanding while quantitative methods are intended to achieve breadth of understanding ( Patton, 2002 ). Qualitative methods place primary emphasis on saturation (i.e., obtaining a comprehensive understanding by continuing to sample until no new substantive information is acquired) ( Miles & Huberman, 1994 ). Quantitative methods place primary emphasis on generalizability (i.e., ensuring that the knowledge gained is representative of the population from which the sample was drawn). Each methodology, in turn, has different expectations and standards for determining the number of participants required to achieve its aims. Quantitative methods rely on established formulae for avoiding Type I and Type II errors, while qualitative methods often rely on precedents for determining the number of participants based on the type of analysis proposed (e.g., 3-6 participants interviewed multiple times in a phenomenological study versus 20-30 participants interviewed once or twice in a grounded theory study), the level of detail required, and the emphasis on homogeneity (requiring smaller samples) versus heterogeneity (requiring larger samples) ( Guest, Bunce & Johnson, 2006 ; Morse & Niehaus, 2009 ; Padgett, 2008 ).
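To make the contrast concrete, the following sketch pairs the kind of established formula used to fix a quantitative sample size (a two-group power calculation) with the precedent-based guidance for qualitative samples summarised above. The effect size, alpha and power values are illustrative assumptions, not recommendations drawn from this paper.

```python
# Illustrative sketch: a quantitative strand fixes sample size from a power formula,
# while qualitative strands follow methodological precedent and saturation.
# All numeric inputs are assumed values for illustration only.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(
    effect_size=0.5,  # assumed medium standardised effect (Cohen's d)
    alpha=0.05,       # Type I error rate
    power=0.80,       # 1 - Type II error rate
)
print(f"Quantitative strand: roughly {n_per_group:.0f} participants per group")

# Qualitative strands rely instead on precedent, e.g. the ranges cited above.
qualitative_precedents = {
    "phenomenological study": "3-6 participants, interviewed multiple times",
    "grounded theory study": "20-30 participants, interviewed once or twice",
}
for design, guidance in qualitative_precedents.items():
    print(f"Qualitative precedent ({design}): {guidance}")
```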

Types of purposeful sampling designs

There exist numerous purposeful sampling designs. Examples include the selection of extreme or deviant (outlier) cases for the purpose of learning from unusual manifestations of phenomena of interest; the selection of cases with maximum variation for the purpose of documenting unique or diverse variations that have emerged in adapting to different conditions, and to identify important common patterns that cut across variations; and the selection of homogeneous cases for the purpose of reducing variation, simplifying analysis, and facilitating group interviewing. A list of some of these strategies and examples of their use in implementation research is provided in Table 1 .

Purposeful sampling strategies in implementation research

Emphasis on similarity

  • Criterion-i. Objective: to identify and select all cases that meet some predetermined criterion of importance. Example: selection of consultant trainers and program leaders at study sites to explore facilitators and barriers to EBP implementation. Considerations: can be used to identify cases from standardized questionnaires for in-depth follow-up.
  • Criterion-e. Objective: to identify and select all cases that exceed or fall outside a specified criterion. Example: selection of directors of agencies that failed to move to the next stage of implementation within the expected period of time.
  • Typical case. Objective: to illustrate or highlight what is typical, normal or average. Example: a child undergoing treatment for trauma. Considerations: the purpose is to describe and illustrate what is typical to those unfamiliar with the setting, not to make generalized statements about the experiences of all participants.
  • Homogeneity. Objective: to describe a particular subgroup in depth, to reduce variation, simplify analysis and facilitate group interviewing. Example: selecting Latino/a directors of mental health services agencies to discuss challenges of implementing evidence-based treatments for mental health problems with Latino/a clients. Considerations: often used for selecting focus group participants.
  • Snowball. Objective: to identify cases of interest by sampling people who know people with generally similar characteristics who, in turn, know people, also with similar characteristics. Example: asking recruited program managers to identify clinicians, administrative support staff, and consumers for project recruitment. Considerations: begins by asking key informants or well-situated people "Who knows a lot about…" (Patton, 2001).

Emphasis on variation

  • Extreme or deviant case. Objective: to illuminate both the unusual and the typical. Example: selecting clinicians from state agencies of mental health with the best and worst performance records or implementation outcomes. Considerations: extreme successes or failures may be discredited as being too extreme or unusual to yield useful information, leading one to select cases that manifest sufficient intensity to illuminate the nature of success or failure, but not in the extreme.
  • Intensity. Objective: same objective as extreme case sampling but with less emphasis on extremes. Example: clinicians providing usual care and clinicians who dropped out of a study prior to consent, to contrast with clinicians who provided the intervention under investigation. Considerations: requires the researcher to do some exploratory work to determine the nature of the variation in the situation under study, then to sample intense examples of the phenomenon of interest.
  • Maximum variation. Objective: to identify important shared patterns that cut across cases and derive their significance from having emerged out of heterogeneity. Example: sampling mental health services programs in urban and rural areas in different parts of the state (north, central, south) to capture maximum variation in location. Considerations: can be used to document unique or diverse variations that have emerged in adapting to different conditions.
  • Critical case. Objective: to permit logical generalization and maximum application of information because, if it is true in this one case, it is likely to be true of all other cases. Example: investigation of a group of agencies that decided to stop using an evidence-based practice to identify reasons for lack of EBP sustainment. Considerations: depends on recognition of the key dimensions that make for a critical case; particularly important when resources may limit the study to only one site (program, community, population).
  • Theory-based. Objective: to find manifestations of a theoretical construct so as to elaborate and examine the construct and its variations. Example: sampling therapists based on academic training to understand the impact of CBT training versus psychodynamic training in graduate school on acceptance of EBPs. Considerations: sample on the basis of potential manifestation or representation of important theoretical constructs, or on the basis of emerging concepts, with the aim of exploring the dimensional range or varied conditions along which the properties of concepts vary.
  • Confirming and disconfirming case. Objective: to confirm the importance and meaning of possible patterns and to check the viability of emergent findings with new data and additional cases. Example: once trends are identified, deliberately seeking examples that run counter to the trend. Considerations: usually employed in later phases of data collection; confirmatory cases are additional examples that fit already emergent patterns and add richness, depth and credibility, while disconfirming cases are a source of rival interpretations as well as a means for placing boundaries around confirmed findings.
  • Stratified purposeful. Objective: to capture major variations rather than to identify a common core, although the latter may emerge in the analysis. Example: combining typical case sampling with maximum variation sampling by taking a stratified purposeful sample of above average, average, and below average cases of health care expenditures for a particular problem. Considerations: this represents less than the full maximum variation sample, but more than simple typical case sampling.
  • Purposeful random. Objective: to increase the credibility of results. Example: selecting for interviews a random sample of providers to describe experiences with EBP implementation. Considerations: not as representative of the population as a probability random sample.

Nonspecific emphasis

  • Opportunistic or emergent. Objective: to take advantage of circumstances, events and opportunities for additional data collection as they arise. Considerations: usually employed when it is impossible to identify the sample, or the population from which a sample should be drawn, at the outset of a study; used primarily in conducting ethnographic fieldwork.
  • Convenience. Objective: to collect information from participants who are easily accessible to the researcher. Example: recruiting providers attending a staff meeting for study participation. Considerations: although commonly used, it is neither purposeful nor strategic.

Embedded in each strategy is the ability to compare and contrast, to identify similarities and differences in the phenomenon of interest. Nevertheless, some of these strategies (e.g., maximum variation sampling, extreme case sampling, intensity sampling, and purposeful random sampling) are used to identify and expand the range of variation or differences, similar to the use of quantitative measures to describe the variability or dispersion of values for a particular variable or variables, while other strategies (e.g., homogeneous sampling, typical case sampling, criterion sampling, and snowball sampling) are used to narrow the range of variation and focus on similarities. The latter are similar to the use of quantitative central tendency measures (e.g., mean, median, and mode). Moreover, certain strategies, like stratified purposeful sampling or opportunistic or emergent sampling, are designed to achieve both goals. As Patton (2002 , p. 240) explains, “the purpose of a stratified purposeful sample is to capture major variations rather than to identify a common core, although the latter may also emerge in the analysis. Each of the strata would constitute a fairly homogeneous sample.”
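
To illustrate the expanding-versus-narrowing distinction, the sketch below applies a criterion rule, a homogeneous rule, an extreme-case rule, and a maximum-variation rule to the same pool of candidates. The candidate list, attribute names, and thresholds are invented solely for illustration; the point is only that the first two rules narrow variation while the last two expand it.

```python
# Hypothetical pool of candidate participants for a qualitative sub-study.
# "Similarity" strategies (criterion, homogeneous) narrow variation;
# "variation" strategies (extreme case, maximum variation) expand it.

candidates = [
    {"id": 1, "role": "clinician",       "region": "urban", "ebp_use_score": 82},
    {"id": 2, "role": "clinician",       "region": "rural", "ebp_use_score": 35},
    {"id": 3, "role": "program_manager", "region": "urban", "ebp_use_score": 64},
    {"id": 4, "role": "clinician",       "region": "urban", "ebp_use_score": 91},
    {"id": 5, "role": "support_staff",   "region": "rural", "ebp_use_score": 12},
    {"id": 6, "role": "program_manager", "region": "rural", "ebp_use_score": 47},
]

# Criterion-i: everyone who meets a predetermined criterion (here, role = clinician).
criterion_i = [c for c in candidates if c["role"] == "clinician"]

# Homogeneous: a single subgroup, to reduce variation (here, urban clinicians only).
homogeneous = [c for c in criterion_i if c["region"] == "urban"]

# Extreme / deviant case: the best and worst on an implementation measure.
by_score = sorted(candidates, key=lambda c: c["ebp_use_score"])
extreme_cases = [by_score[0], by_score[-1]]

# Maximum variation: one case per (role, region) combination that occurs in the pool.
max_variation = {}
for c in candidates:
    max_variation.setdefault((c["role"], c["region"]), c)
max_variation = list(max_variation.values())

for name, sample in [("criterion-i", criterion_i), ("homogeneous", homogeneous),
                     ("extreme case", extreme_cases), ("maximum variation", max_variation)]:
    print(name, [c["id"] for c in sample])
```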

Challenges to use of purposeful sampling

Despite its wide use, there are numerous challenges in identifying and applying the appropriate purposeful sampling strategy in any study. For instance, the range of variation in the sample from which a purposive sample is to be taken is often not known at the outset of a study. Setting as the goal the sampling of information-rich informants who cover the range of variation assumes that one already knows that range. Consequently, an iterative approach of sampling and re-sampling to draw an appropriate sample is usually recommended to make certain that theoretical saturation occurs (Miles & Huberman, 1994). However, that saturation may be determined a priori on the basis of an existing theory or conceptual framework, or it may emerge from the data themselves, as in a grounded theory approach (Glaser & Strauss, 1967). Second, there are a not insignificant number of researchers in the qualitative methods field who resist or refuse systematic sampling of any kind and reject the limiting nature of such realist, systematic, or positivist approaches. This includes critics of interventions and “bottom up” case studies and critiques. However, even those who equate purposeful sampling with systematic sampling must offer a rationale for selecting study participants that is linked with the aims of the investigation (i.e., why recruit these individuals for this particular study? What qualifies them to address the aims of the study?). While systematic sampling may be associated with a post-positivist tradition of qualitative data collection and analysis, such sampling is not inherently limited to such analyses, nor is the need for such sampling limited to post-positivist qualitative approaches (Patton, 2002).

Purposeful Sampling in Implementation Research

Characteristics of implementation research.

In implementation research, quantitative and qualitative methods often play important roles, either simultaneously or sequentially, for the purposes of answering the same question through convergence of results from different sources, answering related questions in a complementary fashion, using one set of methods to expand or explain the results obtained from the other set of methods, using one set of methods to develop questionnaires or conceptual models that inform the use of the other set, and using one set of methods to identify the sample for analysis with the other set of methods (Palinkas et al., 2011). A review of mixed method designs in implementation research conducted by Palinkas and colleagues (2011) revealed seven different sequential and simultaneous structural arrangements, five different functions of mixed methods, and three different ways of linking quantitative and qualitative data together. However, this review did not consider the sampling strategies involved in the types of quantitative and qualitative methods common to implementation research, nor did it consider the consequences of the sampling strategy selected for one method or set of methods for the choice of sampling strategy for the other method or set of methods. For instance, one of the most significant challenges to sampling in sequential mixed method designs lies in the limitations the initial method may place on sampling for the subsequent method. As Morse and Niehaus (2009) observe, when the initial method is qualitative, the sample selected may be too small and lack the randomization necessary to fulfill the assumptions of a subsequent quantitative analysis. On the other hand, when the initial method is quantitative, the sample selected may be too large for each individual to be included in qualitative inquiry and may lack the purposeful selection needed to reduce the sample size to one more appropriate for qualitative research. The fact that potential participants were recruited and selected at random does not necessarily make them information-rich.
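
One way to picture this sequencing problem is a small sketch in which a large quantitative survey sample is purposefully reduced to an information-rich qualitative subsample rather than subsampled at random. The data, the attitude score, the tail cut-offs, and the subsample size below are all assumptions made for illustration; an intensity-style rule (both tails, minus the most extreme cases) is only one of several purposeful reduction rules that could be used.

```python
import random

# Hypothetical quantitative sample: 200 providers with an attitude-toward-EBP score.
random.seed(7)
survey_sample = [{"provider_id": i, "attitude_score": random.gauss(50, 12)}
                 for i in range(200)]

def intensity_subsample(records, score_key, n_per_tail=8):
    """Select information-rich cases from both tails of a quantitative score,
    stopping short of the most extreme values (an intensity-style rule)."""
    ordered = sorted(records, key=lambda r: r[score_key])
    # Skip the two most extreme cases at each end; they may be atypical outliers.
    low_tail = ordered[2:2 + n_per_tail]
    high_tail = ordered[-(2 + n_per_tail):-2]
    return low_tail + high_tail

qual_subsample = intensity_subsample(survey_sample, "attitude_score")
print(f"{len(qual_subsample)} of {len(survey_sample)} respondents selected for interviews")
```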

A re-examination of the 22 studies and an additional 6 studies published since 2009 revealed that only 5 studies (Aarons & Palinkas, 2007; Bachman et al., 2009; Palinkas et al., 2011; Palinkas et al., 2012; Slade et al., 2003) made a specific reference to purposeful sampling. An additional three studies (Henke et al., 2008; Proctor et al., 2007; Swain et al., 2010) did not make explicit reference to purposeful sampling but did provide a rationale for sample selection. The remaining 20 studies provided no description of the sampling strategy used to identify participants for qualitative data collection and analysis; however, a rationale could be inferred from the description of who was recruited and selected for participation. Of the 28 studies, 3 used more than one sampling strategy. Twenty-one of the 28 studies (75%) used some form of criterion sampling. In most instances, the criterion used is related to the individual’s role, either in the research project (i.e., trainer, team leader) or in the agency (program director, clinical supervisor, clinician); in other words, a criterion of inclusion in a certain category (criterion-i), in contrast to cases that fall outside a specific criterion (criterion-e). For instance, in a series of studies based on the National Implementing Evidence-Based Practices Project, data collection included semi-structured interviews with consultant trainers and program leaders at each study site (Brunette et al., 2008; Marshall et al., 2008; Marty et al., 2007; Rapp et al., 2010; Woltmann et al., 2008). Six studies used some form of maximum variation sampling to ensure representativeness and diversity of organizations and individual practitioners. Two studies used intensity sampling to make contrasts. Aarons and Palinkas (2007), for example, purposefully selected 15 child welfare case managers representing those having the most positive and those having the most negative views of SafeCare, an evidence-based prevention intervention, based on results of a web-based quantitative survey asking about the perceived value and usefulness of SafeCare. Kramer and Burns (2008) recruited and interviewed clinicians providing usual care and clinicians who dropped out of a study prior to consent to contrast with clinicians who provided the intervention under investigation. One study (Hoagwood et al., 2007) used a typical case approach to identify participants for a qualitative assessment of the challenges faced in implementing a trauma-focused intervention for youth. One study (Green & Aarons, 2011) used a combined snowball sampling/criterion-i strategy by asking recruited program managers to identify clinicians, administrative support staff, and consumers for project recruitment. County mental health directors, agency directors, and program managers were recruited to represent the policy interests of implementation, while clinicians, administrative support staff, and consumers were recruited to represent the direct practice perspectives of EBP implementation.

Table 2 below provides a description of the use of different purposeful sampling strategies in mixed methods implementation studies. Criterion-i sampling was most frequently used in mixed methods implementation studies that employed a simultaneous design where the qualitative method was secondary to the quantitative method, or studies that employed a simultaneous structure where the qualitative and quantitative methods were assigned equal priority. These mixed method designs were used to complement the depth of understanding afforded by the qualitative methods with the breadth of understanding afforded by the quantitative methods (n = 13), to explain or elaborate upon the findings of one set of methods (usually quantitative) with the findings from the other set of methods (n = 10), or to seek convergence through triangulation of results or quantifying qualitative data (n = 8). The process of mixing methods in the large majority (n = 18) of these studies involved embedding the qualitative study within the larger quantitative study. In one study (Goia & Dziadosz, 2008), criterion sampling was used in a simultaneous design where quantitative and qualitative data were merged together in a complementary fashion, and in two studies (Aarons et al., 2012; Zazelli et al., 2008), quantitative and qualitative data were connected together, one in a sequential design for the purpose of developing a conceptual model (Zazelli et al., 2008) and one in a simultaneous design for the purpose of complementing one another (Aarons et al., 2012). Three of the six studies that used maximum variation sampling used a simultaneous structure with quantitative methods taking priority over qualitative methods and a process of embedding the qualitative methods in a larger quantitative study (Henke et al., 2008; Palinkas et al., 2010; Slade et al., 2008). Two of the six studies used maximum variation sampling in a sequential design (Aarons et al., 2009; Zazelli et al., 2008) and one in a simultaneous design (Henke et al., 2010) for the purpose of development, and three used it in a simultaneous design for complementarity (Bachman et al., 2009; Henke et al., 2008; Palinkas, Ell, Hansen, Cabassa, & Wells, 2011). The two studies relying upon intensity sampling used a simultaneous structure for the purpose of either convergence or expansion, and both studies involved a qualitative study embedded in a larger quantitative study (Aarons & Palinkas, 2007; Kramer & Burns, 2008). The single typical case study involved a simultaneous design where the qualitative study was embedded in a larger quantitative study for the purpose of complementarity (Hoagwood et al., 2007). The criterion/snowball study involved a sequential design where the qualitative study was merged into the quantitative data for the purpose of convergence and conceptual model development (Green & Aarons, 2011). Although not used in any of the 28 implementation studies examined here, another common sequential sampling strategy is using criterion sampling of the larger quantitative sample to produce a second-stage qualitative sample in a manner similar to maximum variation sampling, except that the former narrows the range of variation while the latter expands the range.

Table 2. Purposeful sampling strategies and mixed method designs in implementation research

Single stage sampling (n = 22)

  Criterion (n = 18)
    Structure: Simultaneous (n = 17); Sequential (n = 6)
    Design: Merged (n = 9); Connected (n = 9); Embedded (n = 14)
    Function: Convergence (n = 6); Complementarity (n = 12); Expansion (n = 10); Development (n = 3); Sampling (n = 4)

  Maximum variation (n = 4)
    Structure: Simultaneous (n = 3); Sequential (n = 1)
    Design: Merged (n = 1); Connected (n = 1); Embedded (n = 2)
    Function: Convergence (n = 1); Complementarity (n = 2); Expansion (n = 1); Development (n = 2)

  Intensity (n = 1)
    Structure: Simultaneous; Sequential
    Design: Merged; Connected; Embedded
    Function: Convergence; Complementarity; Expansion; Development

  Typical case (n = 1)
    Structure: Simultaneous
    Design: Embedded
    Function: Complementarity

Multistage sampling (n = 4)

  Criterion/maximum variation (n = 2)
    Structure: Simultaneous; Sequential
    Design: Embedded; Connected
    Function: Complementarity; Development

  Criterion/intensity (n = 1)
    Structure: Simultaneous
    Design: Embedded
    Function: Convergence; Complementarity; Expansion

  Criterion/snowball (n = 1)
    Structure: Sequential
    Design: Connected
    Function: Convergence; Development

Criterion-i sampling as a purposeful sampling strategy shares many characteristics with random probability sampling, despite having different aims and different procedures for identifying and selecting potential participants. In both instances, study participants are drawn from agencies, organizations or systems involved in the implementation process. Individuals are selected based on the assumption that they possess knowledge and experience with the phenomenon of interest (i.e., the implementation of an EBP) and thus will be able to provide information that is both detailed (depth) and generalizable (breadth). Participants for a qualitative study, usually service providers, consumers, agency directors, or state policy-makers, are drawn from the larger sample of participants in the quantitative study. They are selected from the larger sample because they meet the same criteria, in this case, playing a specific role in the organization and/or implementation process. To some extent, they are assumed to be “representative” of that role, although implementation studies rarely explain the rationale for selecting only some and not all of the available role representatives (i.e., recruiting 15 providers from an agency for semi-structured interviews out of an available sample of 25 providers). From the perspective of qualitative methodology, participants who meet or exceed a specific criterion or criteria possess intimate (or, at the very least, greater) knowledge of the phenomenon of interest by virtue of their experience, making them information-rich cases.

However, criterion sampling may not be the most appropriate strategy for implementation research because, by attempting to capture both breadth and depth of understanding, it may actually be inadequate to the task of accomplishing either. Although qualitative methods are often contrasted with quantitative methods on the basis of depth versus breadth, they actually require elements of both in order to provide a comprehensive understanding of the phenomenon of interest. Ideally, the goal of achieving theoretical saturation by providing as much detail as possible involves selection of individuals or cases that can ensure that all aspects of that phenomenon are included in the examination and that any one aspect is thoroughly examined. This goal, therefore, requires an approach that sequentially or simultaneously expands and narrows the field of view. By selecting only individuals who meet a specific criterion defined on the basis of their role in the implementation process, or who have a specific experience (e.g., engaged only in an implementation defined as successful or only in one defined as unsuccessful), one may fail to capture the experiences or activities of other groups playing other roles in the process. For instance, a focus only on practitioners may fail to capture the insights, experiences, and activities of consumers, family members, agency directors, administrative staff, or state policy leaders in the implementation process, thus limiting the breadth of understanding of that process. On the other hand, selecting participants on the basis of whether they were a practitioner, consumer, director, staff member, or any of the above may fail to identify those with the greatest experience, those who are most knowledgeable, or those most able to communicate what they know and have experienced, thus limiting the depth of understanding of the implementation process.

To address the potential limitations of criterion sampling, other purposeful sampling strategies should be considered and possibly adopted in implementation research (Figure 1). For instance, strategies placing greater emphasis on breadth and variation, such as maximum variation, extreme case, and confirming and disconfirming case sampling, are better suited to an examination of differences, while strategies placing greater emphasis on depth and similarity, such as homogeneous, snowball, and typical case sampling, are better suited to an examination of commonalities or similarities, even though both types of sampling strategies include a focus on both differences and similarities. However, alternatives to criterion sampling may be more appropriate for the specific functions of mixed methods. For instance, using qualitative methods for the purpose of complementarity may require a sampling strategy that emphasizes similarity if it is to achieve depth of understanding or to explore and develop hypotheses that complement a quantitative probability sampling strategy aimed at breadth of understanding and hypothesis testing (Kemper et al., 2003). Similarly, mixed methods that address related questions for the purpose of expanding or explaining results or developing new measures or conceptual models may require a purposeful sampling strategy aiming for similarity that complements probability sampling aiming for variation or dispersion. A narrowly focused purposeful sampling strategy for qualitative analysis that “complements” a more broadly focused probability sample for quantitative analysis may help to achieve a balance between increasing inference quality/trustworthiness (internal validity) and generalizability/transferability (external validity). A single method that focuses only on a broad view may sacrifice internal validity for the sake of external validity (Kemper et al., 2003). On the other hand, the aim of convergence (answering the same question with either method) may suggest use of a purposeful sampling strategy that aims for breadth and parallels the quantitative probability sampling strategy.

[Figure 1]

Purposeful and Random Sampling Strategies for Mixed Method Implementation Studies

  • (1) Priority and sequencing of Qualitative (QUAL) and Quantitative (QUAN) can be reversed.
  • (2) Refers to emphasis of sampling strategy.


Furthermore, the specific nature of implementation research suggests that a multistage purposeful sampling strategy be used. Three different multistage sampling strategies are illustrated in Figure 2 below. Several qualitative methodologists recommend sampling for variation (breadth) before sampling for commonalities (depth) (Glaser, 1978; Bernard, 2002) (Multistage I). Also known as a “funnel approach,” this strategy is often recommended when conducting semi-structured interviews (Spradley, 1979) or focus groups (Morgan, 1997). This approach begins with a broad view of the topic and then proceeds to narrow the conversation down to very specific components of the topic. However, as noted earlier, the lack of a clear understanding of the range of variation may require an iterative approach in which each stage of data analysis helps to determine subsequent means of data collection and analysis (Denzin, 1978; Patton, 2001) (Multistage II). Similarly, multistage purposeful sampling designs such as opportunistic or emergent sampling allow the option of adding to a sample to take advantage of unforeseen opportunities after data collection has begun (Patton, 2001, p. 240) (Multistage III). Multistage I models generally involve two stages, while a Multistage II model requires a minimum of three stages, alternating between sampling for variation and sampling for similarity. A Multistage III model begins with sampling for variation and ends with sampling for similarity, but may involve one or more intervening stages of sampling for variation or similarity as the need or opportunity arises.
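
A minimal two-stage “funnel” sketch of the Multistage I idea follows: stage one samples for variation across agency strata, and stage two narrows to a homogeneous subgroup drawn from the stage-one pool. The agencies, strata, and the stage-two criterion are hypothetical and chosen only to show the shape of the procedure.

```python
# Multistage I (funnel): sample broadly for variation, then narrow to similarity.
agencies = [
    {"name": "A", "setting": "urban", "size": "large", "adopted_ebp": True},
    {"name": "B", "setting": "rural", "size": "small", "adopted_ebp": False},
    {"name": "C", "setting": "urban", "size": "small", "adopted_ebp": True},
    {"name": "D", "setting": "rural", "size": "large", "adopted_ebp": True},
    {"name": "E", "setting": "urban", "size": "large", "adopted_ebp": False},
    {"name": "F", "setting": "rural", "size": "small", "adopted_ebp": True},
]

# Stage 1: maximum variation -- one agency per (setting, size) stratum.
stage_one = {}
for agency in agencies:
    stage_one.setdefault((agency["setting"], agency["size"]), agency)
stage_one = list(stage_one.values())

# Stage 2: homogeneity within the stage-one pool -- only agencies that adopted
# the EBP, to examine commonalities in how adoption was achieved.
stage_two = [a for a in stage_one if a["adopted_ebp"]]

print("Stage 1 (variation):", [a["name"] for a in stage_one])
print("Stage 2 (similarity):", [a["name"] for a in stage_two])
```

A Multistage II design would simply alternate between rules of the first and second kind across three or more stages, letting the analysis of each stage shape the next.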

Multistage purposeful sampling is also consistent with the use of hybrid designs to simultaneously examine intervention effectiveness and implementation. An extension of the concept of “practical clinical trials” (Tunis, Stryer, & Clancy, 2003), effectiveness-implementation hybrid designs provide benefits such as more rapid translational gains in clinical intervention uptake, more effective implementation strategies, and more useful information for researchers and decision makers (Curran et al., 2012). Such designs may give equal priority to the testing of clinical treatments and implementation strategies (Hybrid Type 2), or give priority to the testing of treatment effectiveness (Hybrid Type 1) or of the implementation strategy (Hybrid Type 3). Curran and colleagues (2012) suggest that evaluation of the intervention’s effectiveness will require or involve use of quantitative measures, while evaluation of the implementation process will require or involve use of mixed methods. When conducting a Hybrid Type 1 design (a process evaluation of implementation in the context of a clinical effectiveness trial), the qualitative data could be used to inform the findings of the effectiveness trial. Thus, an effectiveness trial that finds substantial variation might purposefully select participants using a broader strategy like sampling for disconfirming cases to account for the variation. For instance, group randomized trials require knowledge of the contexts and circumstances that are similar and different across sites in order to account for inevitable site differences in interventions and to assist local implementations of an intervention (Bloom & Michalopoulos, 2013; Raudenbush & Liu, 2000). Alternatively, a narrow strategy may be used to account for the lack of variation. In either instance, the choice of a purposeful sampling strategy is determined by the outcomes of the quantitative analysis that is based on a probability sampling strategy. In Hybrid Type 2 and Type 3 designs, where the implementation process is given equal or greater priority than the effectiveness trial, the purposeful sampling strategy must be first and foremost consistent with the aims of the implementation study, which may be to understand variation, central tendencies, or both. In all three instances, the sampling strategy employed for the implementation study may vary based on the priority assigned to that study relative to the effectiveness trial. For instance, purposeful sampling for a Hybrid Type 1 design may give higher priority to variation and comparison to understand the parameters of implementation processes or context as a contribution to an understanding of effectiveness outcomes (i.e., using qualitative data to expand upon or explain the results of the effectiveness trial). In effect, these process measures could be seen as modifiers of innovation/EBP outcomes. In contrast, purposeful sampling for a Hybrid Type 3 design may give higher priority to similarity and depth to understand the core features of successful outcomes only.

Finally, multistage sampling strategies may be more consistent with innovations in experimental designs representing alternatives to the classic randomized controlled trial in community-based settings that have greater feasibility, acceptability, and external validity. While RCT designs provide the highest level of evidence, “in many clinical and community settings, and especially in studies with underserved populations and low resource settings, randomization may not be feasible or acceptable” ( Glasgow, et al., 2005 , p. 554). Randomized trials are also “relatively poor in assessing the benefit from complex public health or medical interventions that account for individual preferences for or against certain interventions, differential adherence or attrition, or varying dosage or tailoring of an intervention to individual needs” ( Brown et al., 2009 , p. 2). Several alternatives to the randomized design have been proposed, such as “interrupted time series,” “multiple baseline across settings” or “regression-discontinuity” designs. Optimal designs represent one such alternative to the classic RCT and are addressed in detail by Duan and colleagues (this issue) . Like purposeful sampling, optimal designs are intended to capture information-rich cases, usually identified as individuals most likely to benefit from the experimental intervention. The goal here is not to identify the typical or average patient, but patients who represent one end of the variation in an extreme case, intensity sampling, or criterion sampling strategy. Hence, a sampling strategy that begins by sampling for variation at the first stage and then sampling for homogeneity within a specific parameter of that variation (i.e., one end or the other of the distribution) at the second stage would seem the best approach for identifying an “optimal” sample for the clinical trial.
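To make the “optimal design” selection logic concrete, the sketch below first examines the range of variation in a (hypothetical) predicted-benefit measure and then selects homogeneously within one end of that range. The patients, the benefit scores, and the top-quartile cutoff are assumptions for illustration only; they stand in for whatever benefit criterion an actual optimal design would specify.

```python
# Optimal-design style selection sketch: rather than the typical or average patient,
# select candidates at one end of the variation -- here, those with the highest
# (hypothetical) predicted benefit from the experimental intervention.

patients = [
    {"id": i, "predicted_benefit": b}
    for i, b in enumerate([0.10, 0.45, 0.80, 0.25, 0.65, 0.90, 0.30, 0.70])
]

# Stage 1: examine the range of variation in predicted benefit.
benefits = sorted(p["predicted_benefit"] for p in patients)
print("Range of predicted benefit:", benefits[0], "to", benefits[-1])

# Stage 2: homogeneity within one end of that variation -- the top quartile.
cutoff = benefits[int(0.75 * len(benefits))]
optimal_sample = [p for p in patients if p["predicted_benefit"] >= cutoff]
print("Selected for trial:", [p["id"] for p in optimal_sample])
```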

Another alternative to the classic RCT is the family of adaptive designs proposed by Brown and colleagues (Brown et al., 2006; Brown et al., 2008; Brown et al., 2009). Adaptive designs are a sequence of trials that draw on the results of existing studies to determine the next stage of evaluation research. They use cumulative knowledge of current treatment successes or failures to change qualities of the ongoing trial. An adaptive intervention modifies what an individual subject (or community, for a group-based trial) receives in response to his or her preferences or initial responses to an intervention. Consistent with multistage sampling in qualitative research, the design is somewhat iterative in the sense that information gained from analysis of data collected at the first stage influences the nature of the data collected, and the way they are collected, at subsequent stages (Denzin, 1978). Furthermore, many of these adaptive designs may benefit from a multistage purposeful sampling strategy at early phases of the clinical trial to identify the range of variation and core characteristics of study participants. This information can then be used to identify the optimal dose of treatment, limit sample size, randomize participants into different enrollment procedures, determine who should be eligible for random assignment (as in the optimal design) so as to maximize treatment adherence and minimize dropout, or identify incentives and motives that may encourage participation in the trial itself.

Alternatives to the classic RCT design may also be desirable in studies that adopt a community-based participatory research framework (Minkler & Wallerstein, 2003), considered to be an important tool in conducting implementation research (Palinkas & Soydan, 2012). Such frameworks suggest that identification and recruitment of potential study participants will place greater emphasis on the priorities and “local knowledge” of community partners than on the need to sample for variation or uniformity. In this instance, the first stage of sampling may approximate the strategy of sampling politically important cases (Patton, 2002), followed by other sampling strategies intended to maximize variation in stakeholder opinions or experience.

On the basis of this review, the following recommendations are offered for the use of purposeful sampling in mixed method implementation research. First, many mixed methods studies in health services research and implementation science do not clearly identify or provide a rationale for the sampling procedure for either quantitative or qualitative components of the study ( Wisdom et al., 2011 ), so a primary recommendation is for researchers to clearly describe their sampling strategies and provide the rationale for the strategy.

Second, use of a single stage strategy for purposeful sampling for qualitative portions of a mixed methods implementation study should adhere to the same general principles that govern all forms of sampling, qualitative or quantitative. Kemper and colleagues (2003) identify seven such principles: 1) the sampling strategy should stem logically from the conceptual framework as well as the research questions being addressed by the study; 2) the sample should be able to generate a thorough database on the type of phenomenon under study; 3) the sample should at least allow the possibility of drawing clear inferences and credible explanations from the data; 4) the sampling strategy must be ethical; 5) the sampling plan should be feasible; 6) the sampling plan should allow the researcher to transfer/generalize the conclusions of the study to other settings or populations; and 7) the sampling scheme should be as efficient as practical.

Third, the field of implementation research is at a stage where qualitative methods are intended primarily to explore the barriers and facilitators of EBP implementation and to develop new conceptual models of implementation process and outcomes. This is especially important in state implementation research, where fiscal necessities are driving policy reforms for which knowledge about EBP implementation barriers and facilitators is urgently needed. Thus, a multistage strategy for purposeful sampling should begin with a broad view, emphasizing variation or dispersion, and then move to a narrow view, emphasizing similarity or central tendencies. Such a strategy is necessary for the task of finding the optimal balance between internal and external validity.

Fourth, if we assume that probability sampling will be the preferred strategy for the quantitative components of most implementation research, the selection of a single or multistage purposeful sampling strategy should be based, in part, on how it relates to the probability sample, either for the purpose of answering the same question (in which case a strategy emphasizing variation and dispersion is preferred) or for answering related questions (in which case a strategy emphasizing similarity and central tendencies is preferred).

Fifth, it should be kept in mind that all sampling procedures, whether purposeful or probability, are designed to capture elements of both similarity and difference, of both centrality and dispersion, because both elements are essential to the task of generating new knowledge through the processes of comparison and contrast. Selecting a strategy that gives emphasis to one does not mean that it cannot be used for the other. Having said that, our analysis has assumed at least some degree of concordance between the breadth of understanding associated with quantitative probability sampling and purposeful sampling strategies that emphasize variation on the one hand, and between depth of understanding and purposeful sampling strategies that emphasize similarity on the other. While there may be some merit to that assumption, depth of understanding requires an understanding of both variation and common elements.

Finally, it should also be kept in mind that quantitative data can be generated from a purposeful sampling strategy and qualitative data can be generated from a probability sampling strategy. Each set of data is suited to a specific objective and each must adhere to a specific set of assumptions and requirements. Nevertheless, the promise of mixed methods, like the promise of implementation science, lies in its ability to move beyond the confines of existing methodological approaches and develop innovative solutions to important and complex problems. For states engaged in EBP implementation, the need for these solutions is urgent.

[Figure 2]

Multistage Purposeful Sampling Strategies

Acknowledgments

This study was funded through a grant from the National Institute of Mental Health (P30-MH090322: K. Hoagwood, PI).
