Sago

What We Offer

With a comprehensive suite of qualitative and quantitative capabilities and 55 years of experience in the industry, Sago powers insights through adaptive solutions.

  • Recruitment
  • Communities
  • Methodify® Automated research
  • QualBoard® Digital Discussions
  • QualMeeting® Digital Interviews
  • Global Qualitative
  • Global Quantitative
  • In-Person Facilities
  • Research Consulting
  • Europe Solutions
  • Neuromarketing Tools
  • Trial & Jury Consulting

Who We Serve

Form deeper customer connections and make the process of answering your business questions easier. Sago delivers unparalleled access to the audiences you need through adaptive solutions and a consultative approach.

  • Consumer Packaged Goods
  • Financial Services
  • Media Technology
  • Medical Device Manufacturing
  • Marketing Research

With a 55-year legacy of impact, Sago has proven we have what it takes to be a long-standing industry leader and partner. We continually advance our range of expertise to provide our clients with the highest level of confidence.

  • Global Offices
  • Partnerships & Certifications
  • News & Media
  • Researcher Events


Different Types of Sampling Techniques in Qualitative Research

Key Takeaways:

  • Sampling techniques in qualitative research include purposive, convenience, snowball, and theoretical sampling.
  • Choosing the right sampling technique significantly impacts the accuracy and reliability of the research results.
  • It’s crucial to consider the potential impact on the bias, sample diversity, and generalizability when choosing a sampling technique for your qualitative research.

Qualitative research seeks to understand social phenomena from the perspective of those experiencing them. It involves collecting non-numerical data such as interviews, observations, and written documents to gain insights into human experiences, attitudes, and behaviors. While qualitative research can provide rich and nuanced insights, the accuracy and generalizability of findings depend on the quality of the sampling process. Sampling is a critical component of qualitative research as it involves selecting a group of participants who can provide valuable insights into the research questions.

This article explores different types of sampling techniques used in qualitative research. First, we’ll provide a comprehensive overview of four standard sampling techniques, then compare and contrast them to offer guidance on choosing the most appropriate method for a particular study. Additionally, you’ll find best practices for sampling and learn about the ethical considerations researchers need to weigh when selecting a sample. Overall, this article aims to help researchers conduct effective and high-quality sampling in qualitative research.

In this Article:

  • Purposive Sampling
  • Convenience Sampling
  • Snowball Sampling
  • Theoretical Sampling

  • Factors to Consider When Choosing a Sampling Technique
  • Practical Approaches to Sampling: Recommended Practices
  • Final Thoughts
  • Get Expert Guidance on Your Sample Needs

Want expert input on the best sampling technique for your qualitative research project? Book a consultation for trusted advice.

Request a consultation

4 Types of Sampling Techniques and Their Applications

Sampling is a crucial aspect of qualitative research as it determines the representativeness and credibility of the data collected. Several sampling techniques are used in qualitative research, each with strengths and weaknesses. In this section, let’s explore four standard sampling techniques used in qualitative research: purposive sampling, convenience sampling, snowball sampling, and theoretical sampling. We’ll break down the definition of each technique, when to use it, and its advantages and disadvantages.

1. Purposive Sampling

Purposive sampling, or judgmental sampling, is a non-probability sampling technique commonly used in qualitative research. In purposive sampling, researchers intentionally select participants with specific characteristics or unique experiences related to the research question. The goal is to identify and recruit participants who can provide rich and diverse data to enhance the research findings.

Purposive sampling is used when researchers seek to identify individuals or groups with particular knowledge, skills, or experiences relevant to the research question. For instance, in a study examining the experiences of cancer patients undergoing chemotherapy, purposive sampling may be used to recruit participants who have undergone chemotherapy in the past year. Researchers can better understand the phenomenon under investigation by selecting individuals with relevant backgrounds.
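The selection logic described above can be sketched in code. The following minimal Python example is illustrative only: the participant records and field names are hypothetical. It filters a participant pool down to those who meet the stated criteria, chemotherapy within the past year:

```python
# Hypothetical participant records; field names are invented for illustration.
pool = [
    {"id": 1, "treatment": "chemotherapy", "last_treatment_year": 2023},
    {"id": 2, "treatment": "radiation",    "last_treatment_year": 2023},
    {"id": 3, "treatment": "chemotherapy", "last_treatment_year": 2019},
    {"id": 4, "treatment": "chemotherapy", "last_treatment_year": 2024},
]

def purposive_sample(pool, current_year=2024):
    """Keep only participants who had chemotherapy within the past year."""
    return [
        p for p in pool
        if p["treatment"] == "chemotherapy"
        and current_year - p["last_treatment_year"] <= 1
    ]

selected = purposive_sample(pool)
print([p["id"] for p in selected])  # ids 1 and 4 meet both criteria
```

The key point is that inclusion is driven entirely by explicit, theory-driven criteria rather than by chance, which is why the selection rules must be documented transparently.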

Purposive Sampling: Strengths and Weaknesses

Purposive sampling is a powerful tool for researchers seeking to select participants who can provide valuable insight into their research question. This method is especially useful when studying groups with specific characteristics or experiences, where a random selection of participants would be unlikely to surface the perspectives the study needs.

One of the main advantages of purposive sampling is the ability to improve the quality and accuracy of data collected by selecting participants most relevant to the research question. This approach also enables researchers to collect data from diverse participants with unique perspectives and experiences related to the research question.

However, researchers should also be aware of potential bias when using purposive sampling. The researcher’s judgment may influence the selection of participants, resulting in a biased sample that does not accurately represent the broader population. Another disadvantage is that purposive sampling may not be representative of the more general population, which limits the generalizability of the findings. To guarantee the accuracy and dependability of data obtained through purposive sampling, researchers must provide a clear and transparent justification of their selection criteria and sampling approach. This entails outlining the specific characteristics or experiences required for participants to be included in the study and explaining the rationale behind these criteria. This level of transparency not only helps readers to evaluate the validity of the findings, but also enhances the replicability of the research.

2. Convenience Sampling  

When time and resources are limited, researchers may opt for convenience sampling as a quick and cost-effective way to recruit participants. In this non-probability sampling technique, participants are selected based on their accessibility and willingness to participate rather than their suitability for the research question. Qualitative research often uses this approach to generate various perspectives and experiences.

During the COVID-19 pandemic, convenience sampling was a valuable method for researchers to collect data quickly and efficiently from participants who were easily accessible and willing to participate. For example, in a study examining the experiences of university students during the pandemic, convenience sampling allowed researchers to quickly recruit students who were available and willing to share their experiences. Its use during this period highlights the value of convenience sampling in urgent situations where time and resources are limited.

Convenience Sampling: Strengths and Weaknesses

Convenience sampling offers several advantages to researchers, including its ease of implementation and cost-effectiveness. This technique allows researchers to quickly and efficiently recruit participants without spending time and resources identifying and contacting potential participants. Furthermore, convenience sampling can result in a diverse pool of participants, as individuals from various backgrounds and experiences may be more likely to participate.

While convenience sampling has the advantage of being efficient, researchers need to acknowledge its limitations. One of the primary drawbacks of convenience sampling is that it is susceptible to selection bias. Participants who are more easily accessible may not be representative of the broader population, which can limit the generalizability of the findings. Furthermore, convenience sampling may lead to issues with the reliability of the results, as it may not be possible to replicate the study using the same sample or a similar one.

To mitigate these limitations, researchers should carefully define the population of interest and ensure the sample is drawn from that population. For instance, if a study is investigating the experiences of individuals with a particular medical condition, researchers can recruit participants from specialized clinics or support groups for that condition. Researchers can also use statistical techniques such as stratified sampling or weighting to adjust for potential biases in the sample.
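The stratified adjustment mentioned above can be sketched as follows. This is a minimal Python illustration, not a full survey workflow; the `clinic` field and group sizes are hypothetical. It draws an equal number of participants from each subgroup so no stratum dominates the sample:

```python
import random
from collections import defaultdict

def stratified_sample(population, key, per_stratum, seed=0):
    """Draw an equal number of participants from each subgroup (stratum)."""
    rng = random.Random(seed)  # fixed seed so the draw is reproducible
    strata = defaultdict(list)
    for person in population:
        strata[person[key]].append(person)
    sample = []
    for group in strata.values():
        sample.extend(rng.sample(group, min(per_stratum, len(group))))
    return sample

# Hypothetical population: 10 participants from each of two clinics.
population = (
    [{"id": i, "clinic": "A"} for i in range(10)]
    + [{"id": i + 10, "clinic": "B"} for i in range(10)]
)
sample = stratified_sample(population, key="clinic", per_stratum=3)
print(len(sample))  # 3 from each clinic, 6 total
```

Drawing per-stratum rather than from the pooled list is what guarantees each subgroup is represented, even when one clinic's patients are far easier to reach than the other's.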

3. Snowball Sampling

Snowball sampling, also called referral sampling, is a unique approach researchers use to recruit participants in qualitative research. The technique involves identifying a few initial participants who meet the eligibility criteria and asking them to refer others they know who also fit the requirements. The sample size grows as referrals are added, creating a chain-like structure.

Snowball sampling enables researchers to reach out to individuals who may be hard to locate through traditional sampling methods, such as members of marginalized or hidden communities. For instance, in a study examining the experiences of undocumented immigrants, snowball sampling may be used to identify and recruit participants through referrals from other undocumented immigrants.
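The referral chain described above can be modeled as a breadth-first walk over a referral network. In this minimal Python sketch (the network and participant names are invented), the sample grows outward from a seed participant until a target size is reached:

```python
from collections import deque

# Hypothetical referral network: who each participant is willing to refer.
referrals = {
    "seed1": ["p2", "p3"],
    "p2": ["p4"],
    "p3": ["p4", "p5"],
    "p4": [],
    "p5": ["p6"],
    "p6": [],
}

def snowball_sample(referrals, seeds, max_size):
    """Grow a sample by following referrals until max_size is reached."""
    sample, queue = [], deque(seeds)
    seen = set(seeds)  # avoid recruiting the same person twice
    while queue and len(sample) < max_size:
        person = queue.popleft()
        sample.append(person)
        for friend in referrals.get(person, []):
            if friend not in seen:
                seen.add(friend)
                queue.append(friend)
    return sample

print(snowball_sample(referrals, ["seed1"], max_size=4))
# the seed plus the first wave of referrals
```

The chain-like growth is visible in the result: everyone after the seed entered the sample only through someone else's referral, which is exactly why the technique reaches hidden populations but also why the sample stays confined to connected networks.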

Snowball Sampling: Strengths and Weaknesses

Snowball sampling can produce in-depth and detailed data from participants with common characteristics or experiences. Since referrals are made within a network of individuals who share similarities, researchers can gain deep insights into a specific group’s attitudes, behaviors, and perspectives. On the other hand, because participants are recruited through the connections of existing participants, the sample can lack diversity and may overrepresent a single social network, which limits how far the findings extend beyond that network.

4. Theoretical Sampling

Theoretical sampling is a sophisticated and strategic technique that can help researchers develop more in-depth and nuanced theories from their data. Instead of selecting participants based on convenience or accessibility, researchers using theoretical sampling choose participants based on their potential to contribute to the emerging themes and concepts in the data. This approach allows researchers to refine their research question and theory based on the data they collect rather than forcing their data to fit a preconceived idea.

Theoretical sampling is used when researchers conduct grounded theory research and have developed an initial theory or conceptual framework. In a study examining cancer survivors’ experiences, for example, theoretical sampling may be used to identify and recruit participants who can provide new insights into the coping strategies of survivors.

Theoretical Sampling: Strengths and Weaknesses

One of the significant advantages of theoretical sampling is that it allows researchers to refine their research question and theory based on emerging data. This means the research can be highly targeted and focused, leading to a deeper understanding of the phenomenon being studied. Additionally, theoretical sampling can generate rich and in-depth data, as participants are selected based on their potential to provide new insights into the research question.

The technique’s main weakness mirrors its strength: participants are selected based on their perceived ability to offer new perspectives on the research question. This means specific perspectives or experiences may be overrepresented in the sample, leading to an incomplete understanding of the phenomenon being studied. Additionally, theoretical sampling can be time-consuming and resource-intensive, as researchers must continuously analyze the data and recruit new participants.

To mitigate the potential for bias, researchers can take several steps. One way to reduce bias is to use a diverse team of researchers to analyze the data and make participant selection decisions. Having multiple perspectives and backgrounds can help prevent researchers from unconsciously selecting participants who fit their preconceived notions or biases.

Another solution is reflexive sampling, which involves selecting participants who are aware of the research process and can speak to how their own biases and experiences may influence their perspectives. By including participants who are reflexive about their subjectivity, researchers can generate more nuanced and self-aware findings.

Factors to Consider When Choosing a Sampling Technique

Choosing the proper sampling technique is one of the most critical decisions a researcher makes when conducting a study. The chosen method can significantly impact the accuracy and reliability of the research results.

For instance, purposive sampling provides a more targeted and specific sample, which helps to answer research questions related to that particular population or phenomenon. However, this approach may also introduce bias by limiting the diversity of the sample.

Conversely, convenience sampling may offer a more diverse sample regarding demographics and backgrounds but may also introduce bias by selecting more willing or available participants.

Snowball sampling may help study hard-to-reach populations, but it can also limit the sample’s diversity as participants are selected based on their connections to existing participants.

Theoretical sampling may offer an opportunity to refine the research question and theory based on emerging data, but it can also be time-consuming and resource-intensive.

Additionally, the choice of sampling technique can impact the generalizability of the research findings. Therefore, it’s crucial to consider the potential impact on the bias, sample diversity, and generalizability when choosing a sampling technique. By doing so, researchers can select the most appropriate method for their research question and ensure the validity and reliability of their findings.

Practical Approaches to Sampling: Recommended Practices

Tips for Selecting Participants

When selecting participants for a qualitative research study, it is crucial to consider the research question and the purpose of the study. In addition, researchers should identify the specific characteristics or criteria they seek in their sample and select participants accordingly.

One helpful tip for selecting participants is to use a pre-screening process to ensure potential participants meet the criteria for inclusion in the study. Another technique is using multiple recruitment methods to ensure the sample is diverse and representative of the studied population.

Ensuring Diversity in Samples

Diversity in the sample is important to ensure the study’s findings apply to a wide range of individuals and situations. One way to ensure diversity is to use stratified sampling, which involves dividing the population into subgroups and selecting participants from each subset. This helps establish that the sample is representative of the larger population.

Maintaining Ethical Considerations

When selecting participants for a qualitative research study, it is essential to ensure ethical considerations are taken into account. Researchers must ensure participants are fully informed about the study and provide their voluntary consent to participate. They must also ensure participants understand their rights and that their confidentiality and privacy will be protected.

Final Thoughts

A qualitative research study’s success hinges on the effectiveness of its sampling technique. The choice of sampling technique must be guided by the research question, the population being studied, and the purpose of the study. Whether purposive, convenience, snowball, or theoretical sampling, the primary goal is to ensure the validity and reliability of the study’s findings.

By thoughtfully weighing the pros and cons of each sampling technique, researchers can make informed decisions that lead to more reliable and accurate results. In conclusion, carefully selecting a sampling technique is integral to the success of a qualitative research study, and a thorough understanding of the available options can make all the difference in achieving high-quality research outcomes.

If you’re interested in improving your research and sampling methods, Sago offers a variety of solutions. Our qualitative research platforms, such as QualBoard and QualMeeting, can assist you in conducting research studies with precision and efficiency. Our robust global panel and recruitment options help you reach the right people. We also offer qualitative and quantitative research services to meet your research needs. Contact us today to learn more about how we can help improve your research outcomes.

Find the Right Sample for Your Qualitative Research

Trust our team to recruit the participants you need using the appropriate techniques. Book a consultation with our team to get started.


Chapter 5. Sampling

Introduction

Most Americans will experience unemployment at some point in their lives. Sarah Damaske (2021) was interested in learning about how men and women experience unemployment differently. To answer this question, she interviewed unemployed people. After conducting a “pilot study” with twenty interviewees, she realized she was also interested in finding out how working-class and middle-class persons experienced unemployment differently. She found one hundred persons through local unemployment offices. She purposefully selected a roughly equal number of men and women and working-class and middle-class persons for the study. This would allow her to make the kinds of comparisons she was interested in. She further refined her selection of persons to interview:

I decided that I needed to be able to focus my attention on gender and class; therefore, I interviewed only people born between 1962 and 1987 (ages 28–52, the prime working and child-rearing years), those who worked full-time before their job loss, those who experienced an involuntary job loss during the past year, and those who did not lose a job for cause (e.g., were not fired because of their behavior at work). (244)

The people she ultimately interviewed compose her sample. They represent (“sample”) the larger population of the involuntarily unemployed. This “theoretically informed stratified sampling design” allowed Damaske “to achieve relatively equal distribution of participation across gender and class,” but it came with some limitations. For one, the unemployment centers were located in primarily White areas of the country, so there were very few persons of color interviewed. Qualitative researchers must make these kinds of decisions all the time—who to include and who not to include. There is never an absolutely correct decision, as the choice is linked to the particular research question posed by the particular researcher, although some sampling choices are more compelling than others. In this case, Damaske made the choice to foreground both gender and class rather than compare all middle-class men and women or women of color from different class positions or just talk to White men. She leaves the door open for other researchers to sample differently. Because science is a collective enterprise, it is most likely someone will be inspired to conduct a similar study as Damaske’s but with an entirely different sample.

This chapter is all about sampling. After you have developed a research question and have a general idea of how you will collect data (observations or interviews), how do you go about actually finding people and sites to study? Although there is no “correct number” of people to interview, the sample should follow the research question and research design. You might remember studying sampling in a quantitative research course. Sampling is important here too, but it works a bit differently. Unlike quantitative research, qualitative research involves nonprobability sampling. This chapter explains why this is so and what qualities instead make a good sample for qualitative research.

Quick Terms Refresher

  • The population is the entire group that you want to draw conclusions about.
  • The sample is the specific group of individuals that you will collect data from.
  • Sampling frame is the actual list of individuals that the sample will be drawn from. Ideally, it should include the entire target population (and nobody who is not part of that population).
  • Sample size is how many individuals (or units) are included in your sample.
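The relationships among these four terms can be made concrete with a toy example (all names here are hypothetical). Ideally the sampling frame covers the whole population, and the sample is always drawn from the frame; in this sketch the frame misses one person, a common real-world gap called undercoverage:

```python
# Toy illustration of population, sampling frame, sample, and sample size.
population = {"Ana", "Ben", "Chloe", "Dev", "Ema", "Fay"}  # group we want conclusions about
sampling_frame = ["Ana", "Ben", "Chloe", "Dev", "Ema"]     # the list we can actually draw from
sample = ["Ana", "Dev"]                                     # who we collect data from
sample_size = len(sample)

# The frame sits inside the population (here it misses "Fay" -- undercoverage),
# and the sample is drawn from the frame.
assert set(sampling_frame) <= population
assert set(sample) <= set(sampling_frame)
print(sample_size)  # 2
```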

The “Who” of Your Research Study

After you have turned your general research interest into an actual research question and identified an approach you want to take to answer that question, you will need to specify the people you will be interviewing or observing. In most qualitative research, the objects of your study will indeed be people. In some cases, however, your objects might be content left by people (e.g., diaries, yearbooks, photographs) or documents (official or unofficial) or even institutions (e.g., schools, medical centers) and locations (e.g., nation-states, cities). Chances are, whatever “people, places, or things” are the objects of your study, you will not really be able to talk to, observe, or follow every single individual/object of the entire population of interest. You will need to create a sample of the population. Sampling in qualitative research has different purposes and goals than sampling in quantitative research. Sampling in both allows you to say something of interest about a population without having to include the entire population in your sample.

We begin this chapter with the case of a population of interest composed of actual people. After we have a better understanding of populations and samples that involve real people, we’ll discuss sampling in other types of qualitative research, such as archival research, content analysis, and case studies. We’ll then move to a larger discussion about the difference between sampling in qualitative research generally versus quantitative research, then we’ll move on to the idea of “theoretical” generalizability, and finally, we’ll conclude with some practical tips on the correct “number” to include in one’s sample.

Sampling People

To help think through samples, let’s imagine we want to know more about “vaccine hesitancy.” We’ve all lived through 2020 and 2021, and we know that a sizable number of people in the United States (and elsewhere) were slow to accept vaccines, even when these were freely available. By some accounts, about one-third of Americans initially refused vaccination. Why is this so? Well, as I write this in the summer of 2021, we know that some people actively refused the vaccination, thinking it was harmful or part of a government plot. Others were simply lazy or dismissed the necessity. And still others were worried about harmful side effects. The general population of interest here (all adult Americans who were not vaccinated by August 2021) may be as many as eighty million people. We clearly cannot talk to all of them. So we will have to narrow the number to something manageable. How can we do this?

First, we have to think about our actual research question and the form of research we are conducting. I am going to begin with a quantitative research question. Quantitative research questions tend to be simpler to visualize, at least when we are first starting out doing social science research. So let us say we want to know what percentage of each kind of resistance is out there and how race or class or gender affects vaccine hesitancy. Again, we don’t have the ability to talk to everyone. But harnessing what we know about normal probability distributions (see quantitative methods for more on this), we can find this out through a sample that represents the general population. We can’t really address these particular questions if we only talk to White women who go to college with us. And if you are really trying to generalize the specific findings of your sample to the larger population, you will have to employ probability sampling, a sampling technique in which a researcher sets a few selection criteria and then chooses members of the population at random. Why randomly? If truly random, all the members have an equal opportunity to be a part of the sample, and thus we avoid the problem of having only our friends and neighbors (who may be very different from other people in the population) in the study. Mathematically, there is going to be a certain number that will be large enough to allow us to generalize our particular findings from our sample population to the population at large. It might surprise you how small that number can be. Election polls of no more than one thousand people are routinely used to predict actual election outcomes of millions of people. Below that number, however, you will not be able to make generalizations. Talking to five people at random is simply not enough people to predict a presidential election.
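The equal-opportunity property described above is exactly what a simple random draw provides. A minimal Python sketch (the frame size and poll size are arbitrary) of drawing a simple random sample from a sampling frame:

```python
import random

def simple_random_sample(frame, n, seed=42):
    """Draw n members so every person in the frame has an equal chance.

    A fixed seed is used here only so the example is reproducible.
    """
    return random.Random(seed).sample(frame, n)

frame = list(range(1, 1001))  # a sampling frame of 1,000 people, by ID
poll = simple_random_sample(frame, 10)
print(len(poll), len(set(poll)))  # 10 distinct respondents, no repeats
```

`random.sample` draws without replacement, so no one is polled twice; this is the mechanism that lets a poll of one thousand stand in for millions, provided the frame itself covers the population.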

In order to answer quantitative research questions of causality, one must employ probability sampling. Quantitative researchers try to generalize their findings to a larger population. Samples are designed with that in mind. Qualitative researchers ask very different questions, though. Qualitative research questions are not about “how many” of a certain group do X (in this case, what percentage of the unvaccinated hesitate for concern about safety rather than reject vaccination on political grounds). Qualitative research employs nonprobability sampling. By definition, not everyone has an equal opportunity to be included in the sample. The researcher might select White women they go to college with to provide insight into racial and gender dynamics at play. Whatever is found by doing so will not be generalizable to everyone who has not been vaccinated, or even all White women who have not been vaccinated, or even all White women who have not been vaccinated who are in this particular college. That is not the point of qualitative research at all. This is a really important distinction, so I will repeat in bold: Qualitative researchers are not trying to statistically generalize specific findings to a larger population. They have not failed when their sample cannot be generalized, as that is not the point at all.

In the previous paragraph, I said it would be perfectly acceptable for a qualitative researcher to interview five White women with whom she goes to college about their vaccine hesitancy “to provide insight into racial and gender dynamics at play.” The key word here is “insight.” Rather than use a sample as a stand-in for the general population, as quantitative researchers do, the qualitative researcher uses the sample to gain insight into a process or phenomenon. The qualitative researcher is not going to be content with simply asking each of the women to state her reason for not being vaccinated and then draw conclusions that, because one in five of these women were concerned about their health, one in five of all people were also concerned about their health. That would be, frankly, a very poor study indeed. Rather, the qualitative researcher might sit down with each of the women and conduct a lengthy interview about what the vaccine means to her, why she is hesitant, how she manages her hesitancy (how she explains it to her friends), what she thinks about others who are unvaccinated, what she thinks of those who have been vaccinated, and what she knows or thinks she knows about COVID-19. The researcher might include specific interview questions about the college context, about their status as White women, about the political beliefs they hold about racism in the US, and about how their own political affiliations may or may not provide narrative scripts about “protective whiteness.” There are many interesting things to ask and learn about and many things to discover. Where a quantitative researcher begins with clear parameters to set their population and guide their sample selection process, the qualitative researcher is discovering new parameters, making it impossible to engage in probability sampling.

Looking at it this way, sampling for qualitative researchers needs to be more strategic. More theoretically informed. What persons can be interviewed or observed that would provide maximum insight into what is still unknown? In other words, qualitative researchers think through what cases they could learn the most from, and those are the cases selected to study: “What would be ‘bias’ in statistical sampling, and therefore a weakness, becomes intended focus in qualitative sampling, and therefore a strength. The logic and power of purposeful sampling lie in selecting information-rich cases for study in depth. Information-rich cases are those from which one can learn a great deal about issues of central importance to the purpose of the inquiry, thus the term purposeful sampling” (Patton 2002:230; emphases in the original).

Before selecting your sample, though, it is important to clearly identify the general population of interest. You need to know this before you can determine the sample. In our example case, it is “adult Americans who have not yet been vaccinated.” Depending on the specific qualitative research question, however, it might be “adult Americans who have been vaccinated for political reasons” or even “college students who have not been vaccinated.” What insights are you seeking? Do you want to know how politics is affecting vaccination? Or do you want to understand how people manage being an outlier in a particular setting (unvaccinated where vaccinations are heavily encouraged if not required)? More clearly stated, your population should align with your research question. Think back to the opening story about Damaske’s work studying the unemployed. She drew her sample narrowly to address the particular questions she was interested in pursuing. Knowing your questions or, at a minimum, why you are interested in the topic will allow you to draw the best sample possible to achieve insight.

Once you have your population in mind, how do you go about getting people to agree to be in your sample? In qualitative research, it is permissible to find people by convenience. Just ask for people who fit your sample criteria and see who shows up. Or reach out to friends and colleagues and see if they know anyone that fits. Don’t let the name convenience sampling mislead you; this is not exactly “easy,” and it is certainly a valid form of sampling in qualitative research. The more unknowns you have about what you will find, the more convenience sampling makes sense. If you don’t know how race or class or political affiliation might matter, and your population is unvaccinated college students, you can construct a sample of college students by placing an advertisement in the student paper or posting a flyer on a notice board. Whoever answers is your sample. That is what is meant by a convenience sample. A common variation of convenience sampling is snowball sampling . This is particularly useful if your target population is hard to find. Let’s say you posted a flyer about your study and only two college students responded. You could then ask those two students for referrals. They tell their friends, and those friends tell other friends, and, like a snowball, your sample gets bigger and bigger.

Researcher Note

Gaining Access: When Your Friend Is Your Research Subject

My early experience with qualitative research was rather unique. At that time, I needed to do a project that required me to interview first-generation college students, and my friends, with whom I had been sharing a dorm for two years, fell perfectly into the sample category. Thus, I just asked them and easily “gained my access” to the research subjects; I knew them, we were friends, and I was part of the group. I am an insider. I also thought, “Well, since I am part of the group, I can easily understand their language and norms, I can capture their honesty, read their nonverbal cues well, and will get more information, as they will be more open with me because they trust me.” All in all, easy access with rich information. But, gosh, I did not realize that my status as an insider came with a price! When structuring the interview questions, I began to realize that rather than focusing on the unique experiences of my friends, I mostly based the questions on my own experiences, assuming we had similar if not the same experiences. I began to struggle with my objectivity and even questioned my role; am I doing this as part of the group or as a researcher? I came to know later that my status as an insider or my “positionality” may impact my research. It not only shapes the process of data collection but might heavily influence my interpretation of the data. I came to realize that although my insider status came with a lot of benefits (especially for access), it could also bring some drawbacks.

—Dede Setiono, PhD student focusing on international development and environmental policy, Oregon State University

The more you know about what you might find, the more strategic you can be. If you wanted to compare how politically conservative and politically liberal college students explained their vaccine hesitancy, for example, you might construct a sample purposively, finding an equal number of both types of students so that you can make those comparisons in your analysis. This is what Damaske ( 2021 ) did. You could still use convenience or snowball sampling as a way of recruitment. Post a flyer at the conservative student club and then ask for referrals from the one student who agrees to be interviewed. As with convenience sampling, there are variations of purposive sampling as well as other names used (e.g., judgment, quota, stratified, criterion, theoretical). Try not to get bogged down in the nomenclature; instead, focus on identifying the general population that matches your research question and then using a sampling method that is most likely to provide insight, given the types of questions you have.

There are all kinds of ways of being strategic with sampling in qualitative research. Here are a few of my favorite techniques for maximizing insight:

  • Consider using “extreme” or “deviant” cases. Maybe your college houses a prominent anti-vaxxer who has written about and demonstrated against the college’s policy on vaccines. You could learn a lot from that single case (depending on your research question, of course).
  • Consider “intensity”: people and cases and circumstances where your questions are more likely to feature prominently (but not extremely or deviantly). For example, you could compare those who volunteer at local Republican and Democratic election headquarters during an election season in a study on why party matters. Those who volunteer are more likely to have something to say than those who are more apathetic.
  • Maximize variation, as with the case of “politically liberal” versus “politically conservative,” or include an array of social locations (young vs. old; Northwest vs. Southeast region). This kind of heterogeneity sampling can capture and describe the central themes that cut across the variations: any common patterns that emerge, even in this wildly mismatched sample, are probably important to note!
  • Rather than maximize the variation, you could select a small homogenous sample to describe some particular subgroup in depth. Focus groups are often the best form of data collection for homogeneity sampling.
  • Think about which cases are “critical” or politically important—ones that “if it happens here, it would happen anywhere” or a case that is politically sensitive, as with the single “blue” (Democratic) county in a “red” (Republican) state. In both, you are choosing a site that would yield the most information and have the greatest impact on the development of knowledge.
  • On the other hand, sometimes you want to select the “typical”—the typical college student, for example. You are not trying to generalize from the typical but to illustrate aspects that may be typical of this case or group. When selecting for typicality, be clear with yourself about why the typical matches your research questions (and who might be excluded or marginalized in doing so).
  • Finally, it is often a good idea to look for disconfirming cases : if you are at the stage where you have a hypothesis (of sorts), you might select those who do not fit your hypothesis—you will surely learn something important there. They may be “exceptions that prove the rule” or exceptions that force you to alter your findings in order to make sense of these additional cases.

In addition to all these sampling variations, there is the theoretical approach taken by grounded theorists in which the researcher samples comparative people (or events) on the basis of their potential to represent important theoretical constructs. The sample, one can say, is by definition representative of the phenomenon of interest. It accompanies the constant comparative method of analysis. In the words of the founders of Grounded Theory, “Theoretical sampling is sampling on the basis of the emerging concepts, with the aim being to explore the dimensional range or varied conditions along which the properties of the concepts vary” ( Strauss and Corbin 1998:73 ).

When Your Population is Not Composed of People

I think it is easiest for most people to think of populations and samples in terms of people, but sometimes our units of analysis are not actually people. They could be places or institutions. Even so, you might still want to talk to people or observe the actions of people to understand those places or institutions. Or not! In the case of content analyses (see chapter 17), you won’t even have people involved at all but rather documents or films or photographs or news clippings. Everything we have covered about sampling applies to other units of analysis too. Let’s work through some examples.

Case Studies

When constructing a case study, it is helpful to think of your cases as sample populations in the same way that we considered people above. If, for example, you are comparing campus climates for diversity, your overall population may be “four-year college campuses in the US,” and from there you might decide to study three college campuses as your sample. Which three? Will you use purposeful sampling (perhaps [1] selecting three colleges in Oregon that are different sizes or [2] selecting three colleges across the US located in different political cultures or [3] varying the three colleges by racial makeup of the student body)? Or will you select three colleges at random, out of convenience? There are justifiable reasons for all approaches.

As with people, there are different ways of maximizing insight in your sample selection. Think about the following rationales: typical, diverse, extreme, deviant, influential, crucial, or even embodying a particular “pathway” ( Gerring 2008 ). When choosing a case or particular research site, Rubin ( 2021 ) suggests you bear in mind, first, what you are leaving out by selecting this particular case/site; second, what you might be overemphasizing by studying this case/site and not another; and, finally, whether you truly need to worry about either of those things—“that is, what are the sources of bias and how bad are they for what you are trying to do?” ( 89 ).

Once you have selected your cases, you may still want to include interviews with specific people or observations at particular sites within those cases. Then you go through possible sampling approaches all over again to determine which people will be contacted.

Content: Documents, Narrative Accounts, And So On

Although not often discussed as sampling, your selection of documents and other units to use in various content/historical analyses is subject to similar considerations. When you are asking quantitative-type questions (percentages and proportionalities of a general population), you will want to follow probabilistic sampling. For example, I created a random sample of accounts posted on the website studentloanjustice.org to delineate the types of problems people were having with student debt ( Hurst 2007 ). Even though my data was qualitative (narratives of student debt), I was actually asking a quantitative-type research question, so it was important that my sample was representative of the larger population (debtors who posted on the website). On the other hand, when you are asking qualitative-type questions, the selection process should be very different. In that case, use nonprobabilistic techniques, either convenience (where you are really new to this data and do not have the ability to set comparative criteria or even know what a deviant case would be) or some variant of purposive sampling. Let’s say you were interested in the visual representation of women in media published in the 1950s. You could select a national magazine like Time for a “typical” representation (and for its convenience, as all issues are freely available on the web and easy to search). Or you could compare one magazine known for its feminist content with one known for its antifeminist content. The point is, sample selection is important even when you are not interviewing or observing people.

Goals of Qualitative Sampling versus Goals of Quantitative Sampling

We have already discussed some of the differences in the goals of quantitative and qualitative sampling above, but it is worth further discussion. The quantitative researcher seeks a sample that is representative of the population of interest so that they may properly generalize the results (e.g., if 80 percent of first-gen students in the sample were concerned with costs of college, then we can say there is a strong likelihood that 80 percent of first-gen students nationally are concerned with costs of college). The qualitative researcher does not seek to generalize in this way . They may want a representative sample because they are interested in typical responses or behaviors of the population of interest, but they may very well not want a representative sample at all. They might want an “extreme” or deviant case to highlight what could go wrong with a particular situation, or maybe they want to examine just one case as a way of understanding what elements might be of interest in further research. When thinking of your sample, you will have to know why you are selecting the units, and this relates back to your research question or sets of questions. It has nothing to do with having a representative sample to generalize results. You may be tempted—or it may be suggested to you by a quantitatively minded member of your committee—to create as large and representative a sample as you possibly can to earn credibility from quantitative researchers. Ignore this temptation or suggestion. The only thing you should be considering is what sample will best bring insight into the questions guiding your research. This has implications for the number of people (or units) in your study as well, which is the topic of the next section.

What is the Correct “Number” to Sample?

Because we are not trying to create a generalizable representative sample, the guidelines for the “number” of people to interview or news stories to code are also a bit more nebulous. There are some brilliant insightful studies out there with an n of 1 (meaning one person or one account used as the entire set of data). This is particularly so in the case of autoethnography, a variation of ethnographic research that uses the researcher’s own subject position and experiences as the basis of data collection and analysis. But it is true for all forms of qualitative research. There are no hard-and-fast rules here. The number to include is what is relevant and insightful to your particular study.

That said, humans do not thrive well under such ambiguity, and there are a few helpful suggestions that can be made. First, many qualitative researchers talk about “saturation” as the end point for data collection. You stop adding participants when you are no longer getting any new information (or so very little that the cost of adding another interview subject or spending another day in the field exceeds any likely benefits to the research). The term saturation was first used in this context by Glaser and Strauss ( 1967 ), the founders of Grounded Theory. Here is their explanation: “The criterion for judging when to stop sampling the different groups pertinent to a category is the category’s theoretical saturation . Saturation means that no additional data are being found whereby the sociologist can develop properties of the category. As he [or she] sees similar instances over and over again, the researcher becomes empirically confident that a category is saturated. [They go] out of [their] way to look for groups that stretch diversity of data as far as possible, just to make certain that saturation is based on the widest possible range of data on the category” ( 61 ).

It makes sense that the term was developed by grounded theorists, since this approach is rather more open-ended than other approaches used by qualitative researchers. With so much left open, having a guideline of “stop collecting data when you don’t find anything new” is reasonable. However, saturation can’t help much when first setting out your sample. How do you know how many people to contact to interview? What number will you put down in your institutional review board (IRB) protocol (see chapter 8)? You may guess how many people or units it will take to reach saturation, but there really is no way to know in advance. The best you can do is think about your population and your questions and look at what others have done with similar populations and questions.

Here are some suggestions to use as a starting point: For phenomenological studies, try to interview at least ten people for each major category or group of people. If you are comparing male-identified, female-identified, and gender-neutral college students in a study on gender regimes in social clubs, that means you might want to design a sample of thirty students, ten from each group. This is the minimum suggested number. Damaske’s ( 2021 ) sample of one hundred allows room for up to twenty-five participants in each of four “buckets” (e.g., working-class*female, working-class*male, middle-class*female, middle-class*male). If there is more than one comparative group (e.g., you are comparing students attending three different colleges, and you are comparing White and Black students in each), you can sometimes reduce the number for each group in your sample to five for, in this case, thirty total students. But that is really as low as you will want to go. A lot of people will not trust you with only “five” cases in a bucket. Lareau ( 2021:24 ) advises a minimum of seven or nine for each bucket (or “cell,” in her words). The point is to think about what your analyses might look like and how comfortable you will be with a certain number of persons fitting each category.

Because qualitative research takes so much time and effort, it is rare for a beginning researcher to include more than thirty to fifty people or units in the study. You may not be able to conduct all the comparisons you might want simply because you cannot manage a larger sample. In that case, the limits of who you can reach or what you can include may influence you to rethink an overly complicated original research design. Rather than include students from every racial group on a campus, for example, you might want to sample strategically, choosing the comparisons likely to provide the most contrast (and thus the most insight), possibly excluding majority-race (White) students entirely, and simply using previous literature to fill in gaps in our understanding. For example, one of my former students was interested in discovering how race and class worked at a predominantly White institution (PWI). Due to time constraints, she simplified her study from an original sample frame of middle-class and working-class domestic Black and international African students (four buckets) to a sample frame of domestic Black and international African students (two buckets), allowing the complexities of class to come through individual accounts rather than from part of the sample frame. She wisely decided not to include White students in the sample, as her focus was on how minoritized students navigated the PWI. She was able to successfully complete her project and develop insights from the data with fewer than twenty interviewees. [1]

But what if you had unlimited time and resources? Would it always be better to interview more people or include more accounts, documents, and units of analysis? No! Your sample size should reflect your research question and the goals you have set yourself. Larger numbers can sometimes work against your goals. If, for example, you want to help bring out individual stories of success against the odds, adding more people to the analysis can end up drowning out those individual stories. Sometimes, the perfect size really is one (or three, or five). It really depends on what you are trying to discover and achieve in your study. Furthermore, studies of one hundred or more (people, documents, accounts, etc.) can sometimes be mistaken for quantitative research. Inevitably, the large sample size will push the researcher into simplifying the data numerically. And readers will begin to expect generalizability from such a large sample.

To summarize, “There are no rules for sample size in qualitative inquiry. Sample size depends on what you want to know, the purpose of the inquiry, what’s at stake, what will be useful, what will have credibility, and what can be done with available time and resources” ( Patton 2002:244 ).

How did you find/construct a sample?

Since qualitative researchers work with comparatively small sample sizes, getting your sample right is rather important. Yet it is also difficult to accomplish. For instance, a key question you need to ask yourself is whether you want a homogeneous or heterogeneous sample. In other words, do you want to include people in your study who are by and large the same, or do you want to have diversity in your sample?

For many years, I have studied the experiences of students who were the first in their families to attend university. There is a rather large number of sampling decisions I need to consider before starting the study. (1) Should I only talk to first-in-family students, or should I have a comparison group of students who are not first-in-family? (2) Do I need to strive for a gender distribution that matches undergraduate enrollment patterns? (3) Should I include participants who reflect diversity in gender identity and sexuality? (4) How about racial diversity? First-in-family status is strongly correlated with certain ethnic and racial identities. (5) And how about areas of study?

As you can see, if I wanted to accommodate all these differences and get enough study participants in each category, I would quickly end up with a sample size of hundreds, which is not feasible in most qualitative research. In the end, for me, the most important decision was to maximize the voices of first-in-family students, which meant that I only included them in my sample. As for the other categories, I figured it was going to be hard enough to find first-in-family students, so I started recruiting with an open mind and an understanding that I may have to accept a lack of gender, sexuality, or racial diversity and then not be able to say anything about these issues. But I would definitely be able to speak about the experiences of being first-in-family.

—Wolfgang Lehmann, author of “Habitus Transformation and Hidden Injuries”

Examples of “Sample” Sections in Journal Articles

Think about some of the studies you have read in college, especially those with rich stories and accounts about people’s lives. Do you know how the people were selected to be the focus of those stories? If the account was published by an academic press (e.g., University of California Press or Princeton University Press) or in an academic journal, chances are that the author included a description of their sample selection. You can usually find these in a methodological appendix (book) or a section on “research methods” (article).

Here are two examples from recent books and one example from a recent article:

Example 1 . In It’s Not like I’m Poor: How Working Families Make Ends Meet in a Post-welfare World , the research team employed a mixed methods approach to understand how parents use the earned income tax credit, a refundable tax credit designed to provide relief for low- to moderate-income working people ( Halpern-Meekin et al. 2015 ). At the end of their book, their first appendix is “Introduction to Boston and the Research Project.” After describing the context of the study, they include the following description of their sample selection:

In June 2007, we drew 120 names at random from the roughly 332 surveys we gathered between February and April. Within each racial and ethnic group, we aimed for one-third married couples with children and two-thirds unmarried parents. We sent each of these families a letter informing them of the opportunity to participate in the in-depth portion of our study and then began calling the home and cell phone numbers they provided us on the surveys and knocking on the doors of the addresses they provided.…In the end, we interviewed 115 of the 120 families originally selected for the in-depth interview sample (the remaining five families declined to participate). ( 22 )

Was their sample selection based on convenience or purpose? Why do you think it was important for them to tell you that five families declined to be interviewed? There is actually a trick here, as the names were pulled randomly from a survey whose sample design was probabilistic. Why is this important to know? What can we say about the representativeness or the uniqueness of whatever findings are reported here?
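The mechanics of such a random draw are easy to reproduce. Below is a minimal, purely illustrative sketch in Python; the roster names are hypothetical stand-ins (the research team of course worked from their actual survey records), but it shows how 120 of roughly 332 surveyed families could be selected so that each has an equal chance of inclusion:

```python
import random

# Hypothetical roster standing in for the ~332 completed surveys;
# in the actual study each record would carry the family's contact details.
surveys = [f"family_{i:03d}" for i in range(332)]

rng = random.Random(42)  # fixed seed so the draw can be reproduced
invited = rng.sample(surveys, k=120)  # equal-probability draw, without replacement

print(len(invited))       # 120 families invited to the in-depth interviews
print(len(set(invited)))  # 120 -- no family is drawn twice
```

Because the original survey sample was itself drawn probabilistically, a random draw from it preserves that representativeness, which is the “trick” noted above.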

Example 2 . In When Diversity Drops , Park ( 2013 ) examines the impact of decreasing campus diversity on the lives of college students. She does this through a case study of one student club, the InterVarsity Christian Fellowship (IVCF), at one university (“California University,” a pseudonym). Here is her description:

I supplemented participant observation with individual in-depth interviews with sixty IVCF associates, including thirty-four current students, eight former and current staff members, eleven alumni, and seven regional or national staff members. The racial/ethnic breakdown was twenty-five Asian Americans (41.6 percent), one Armenian (1.6 percent), twelve people who were black (20.0 percent), eight Latino/as (13.3 percent), three South Asian Americans (5.0 percent), and eleven people who were white (18.3 percent). Twenty-nine were men, and thirty-one were women. Looking back, I note that the higher number of Asian Americans reflected both the group’s racial/ethnic composition and my relative ease about approaching them for interviews. ( 156 )

How can you tell this is a convenience sample? What else do you note about the sample selection from this description?

Example 3. The last example is taken from an article published in the journal Research in Higher Education . Published articles tend to be more formal than books, at least when it comes to the presentation of qualitative research. In this article, Lawson ( 2021 ) is seeking to understand why female-identified college students drop out of majors that are dominated by male-identified students (e.g., engineering, computer science, music theory). Here is the entire relevant section of the article:

Method

Participants

Data were collected as part of a larger study designed to better understand the daily experiences of women in MDMs [male-dominated majors].…Participants included 120 students from a midsize, Midwestern University. This sample included 40 women and 40 men from MDMs—defined as any major where at least 2/3 of students are men at both the university and nationally—and 40 women from GNMs—defined as any major where 40–60% of students are women at both the university and nationally.…

Procedure

A multi-faceted approach was used to recruit participants; participants were sent targeted emails (obtained based on participants’ reported gender and major listings), campus-wide emails sent through the University’s Communication Center, flyers, and in-class presentations. Recruitment materials stated that the research focused on the daily experiences of college students, including classroom experiences, stressors, positive experiences, departmental contexts, and career aspirations. Interested participants were directed to email the study coordinator to verify eligibility (at least 18 years old, man/woman in MDM or woman in GNM, access to a smartphone). Sixteen interested individuals were not eligible for the study due to the gender/major combination. ( 482ff .)

What method of sample selection was used by Lawson? Why is it important to define “MDM” at the outset? How does this definition relate to sampling? Why were interested participants directed to the study coordinator to verify eligibility?

Final Words

I have found that students often find it difficult to be specific enough when defining and choosing their sample. It might help to think about your sample design and sample recruitment like a cookbook. You want all the details there so that someone else can pick up your study and conduct it as you intended. That person could be yourself, but this analogy might work better if you have someone else in mind. When I am writing down recipes, I often think of my sister and try to convey the details she would need to duplicate the dish. We share a grandmother whose recipes are full of handwritten notes in the margins, in spidery ink, that tell us what bowl to use when or where things could go wrong. Describe your sample clearly, convey the steps required accurately, and then add any other details that will help keep you on track and remind you why you have chosen to limit possible interviewees to those of a certain age or class or location. Imagine actually going out and getting your sample (making your dish). Do you have all the necessary details to get started?

Table 5.1. Sampling Type and Strategies

Further Readings

Fusch, Patricia I., and Lawrence R. Ness. 2015. “Are We There Yet? Data Saturation in Qualitative Research.” Qualitative Report 20(9):1408–1416.

Saunders, Benjamin, Julius Sim, Tom Kingstone, Shula Baker, Jackie Waterfield, Bernadette Bartlam, Heather Burroughs, and Clare Jinks. 2018. “Saturation in Qualitative Research: Exploring Its Conceptualization and Operationalization.” Quality & Quantity 52(4):1893–1907.

  • Rubin ( 2021 ) suggests a minimum of twenty interviews (but safer with thirty) for an interview-based study and a minimum of three to six months in the field for ethnographic studies. For a content-based study, she suggests between five hundred and one thousand documents, although some will be “very small” ( 243–244 ).

The process of selecting people or other units of analysis to represent a larger population. In quantitative research, this representation is taken quite literally, as statistically representative.  In qualitative research, in contrast, sample selection is often made based on potential to generate insight about a particular topic or phenomenon.

The actual list of individuals that the sample will be drawn from. Ideally, it should include the entire target population (and nobody who is not part of that population).  Sampling frames can differ from the larger population when specific exclusions are inherent, as in the case of pulling names randomly from voter registration rolls where not everyone is a registered voter.  This difference in frame and population can undercut the generalizability of quantitative results.

The specific group of individuals that you will collect data from.  Contrast population.

The large group of interest to the researcher.  Although it will likely be impossible to design a study that incorporates or reaches all members of the population of interest, this should be clearly defined at the outset of a study so that a reasonable sample of the population can be taken.  For example, if one is studying working-class college students, the sample may include twenty such students attending a particular college, while the population is “working-class college students.”  In quantitative research, clearly defining the general population of interest is a necessary step in generalizing results from a sample.  In qualitative research, defining the population is conceptually important for clarity.

A sampling strategy in which the sample is chosen to represent (numerically) the larger population from which it is drawn by random selection.  Each person in the population has an equal chance of making it into the sample.  This is often done through a lottery or other chance mechanisms (e.g., a random selection of every twelfth name on an alphabetical list of voters).  Also known as random sampling .

The selection of research participants or other data sources based on availability or accessibility, in contrast to purposive sampling .

A sample generated non-randomly by asking participants to help recruit more participants, the idea being that a person who fits your sampling criteria probably knows other people with similar criteria.

Broad codes that are assigned to the main issues emerging in the data; identifying themes is often part of initial coding . 

A form of case selection focusing on examples that do not fit the emerging patterns. This allows the researcher to evaluate rival explanations or to define the limitations of their research findings. While disconfirming cases are found (not sought out), researchers should expand their analysis or rethink their theories to include/explain them.

A methodological tradition of inquiry and approach to analyzing qualitative data in which theories emerge from a rigorous and systematic process of induction.  This approach was pioneered by the sociologists Glaser and Strauss (1967).  The elements of theory generated from comparative analysis of data are, first, conceptual categories and their properties and, second, hypotheses or generalized relations among the categories and their properties – “The constant comparing of many groups draws the [researcher’s] attention to their many similarities and differences.  Considering these leads [the researcher] to generate abstract categories and their properties, which, since they emerge from the data, will clearly be important to a theory explaining the kind of behavior under observation.” (36).

The result of probability sampling, in which a sample is chosen to represent (numerically) the larger population from which it is drawn by random selection.  Each person in the population has an equal chance of making it into the random sample.  This is often done through a lottery or other chance mechanism (e.g., the random selection of every twelfth name on an alphabetical list of voters).  This is typically not required in qualitative research but is essential for the generalizability of quantitative research.

A form of case selection or purposeful sampling in which cases that are unusual or special in some way are chosen to highlight processes or to illuminate gaps in our knowledge of a phenomenon.  See also extreme case.

The point at which you can conclude data collection because every person you are interviewing, the interaction you are observing, or content you are analyzing merely confirms what you have already noted.  Achieving saturation is often used as the justification for the final sample size.
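The saturation rule above can be sketched as a simple stopping criterion. This is an illustrative sketch only: the interview codes and the `patience` threshold are invented for the example, and real saturation judgments rest on analytic interpretation rather than mere code counting.

```python
# Hypothetical codes noted in each successive interview (invented data).
interviews = [
    {"cost", "access"},
    {"access", "stigma"},
    {"cost", "trust"},
    {"trust", "access"},   # adds nothing new
    {"stigma", "cost"},    # adds nothing new
]

def collect_until_saturated(interviews, patience=2):
    """Stop once `patience` consecutive interviews add no new codes."""
    seen, streak, used = set(), 0, 0
    for codes in interviews:
        used += 1
        new_codes = codes - seen
        seen |= codes
        streak = 0 if new_codes else streak + 1
        if streak >= patience:
            break
    return used, seen

# The fourth and fifth interviews merely confirm earlier codes, so
# collection stops after the fifth.
used, codes_seen = collect_until_saturated(interviews)
```

Here saturation is declared after two consecutive interviews confirm what was already noted, which then justifies the final sample size of five.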

The accuracy with which results or findings can be transferred to situations or people other than those originally studied.  Qualitative studies generally are unable to use (and are uninterested in) statistical generalizability, where the sample population is said to be able to predict or stand in for a larger population of interest.  Instead, qualitative researchers often discuss “theoretical generalizability,” in which the findings of a particular study can shed light on processes and mechanisms that may be at play in other settings.  See also statistical generalization and theoretical generalization.

A term used by IRBs to denote all materials aimed at recruiting participants into a research study (including printed advertisements, scripts, audio or video tapes, or websites).  Copies of this material are required in research protocols submitted to the IRB.

Introduction to Qualitative Research Methods Copyright © 2023 by Allison Hurst is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License , except where otherwise noted.

Sampling Techniques for Qualitative Research

  • First Online: 27 October 2022


Heather Douglas


This chapter explains how to design suitable sampling strategies for qualitative research. The focus of the chapter is purposive (or theoretical) sampling to produce credible and trustworthy explanations of a phenomenon (a specific aspect of society). A specific research question (RQ) guides the methodology (the study design or approach). It defines the participants, location, and actions to be used to answer the question. Qualitative studies use specific tools and techniques (methods) to sample people, organizations, or whatever else is to be examined. The methodology guides the selection of tools and techniques for sampling, data analysis, quality assurance, and so on. These all vary according to the purpose and design of the study and the RQ. In this chapter, a hypothetical example is used to demonstrate how to apply a sampling strategy in a developing country.



Author information

Authors and Affiliations

The University of Queensland, The Royal Society of Queensland, Activation Australia, Brisbane, Australia

Heather Douglas


Corresponding author

Correspondence to Heather Douglas.

Editor information

Editors and Affiliations

Centre for Family and Child Studies, Research Institute of Humanities and Social Sciences, University of Sharjah, Sharjah, United Arab Emirates

M. Rezaul Islam

Department of Development Studies, University of Dhaka, Dhaka, Bangladesh

Niaz Ahmed Khan

Department of Social Work, School of Humanities, University of Johannesburg, Johannesburg, South Africa

Rajendra Baikady


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this chapter

Douglas, H. (2022). Sampling Techniques for Qualitative Research. In: Islam, M.R., Khan, N.A., Baikady, R. (eds) Principles of Social Research Methodology. Springer, Singapore. https://doi.org/10.1007/978-981-19-5441-2_29


DOI: https://doi.org/10.1007/978-981-19-5441-2_29

Published: 27 October 2022

Publisher: Springer, Singapore

Print ISBN: 978-981-19-5219-7

Online ISBN: 978-981-19-5441-2

eBook Packages: Social Sciences



In This Article: Qualitative, Quantitative, and Mixed Methods Research Sampling Strategies

  • Introduction
  • Sampling Strategies
  • Sample Size
  • Qualitative Design Considerations
  • Discipline Specific and Special Considerations
  • Sampling Strategies Unique to Mixed Methods Designs


Qualitative, Quantitative, and Mixed Methods Research Sampling Strategies

By Timothy C. Guetterman. Last reviewed: 26 February 2020. Last modified: 26 February 2020. DOI: 10.1093/obo/9780199756810-0241

Sampling is a critical, often overlooked aspect of the research process. It shapes the inferences a study can draw, and it is an integral part of methodological guidelines across research traditions. Sampling considerations matter in both quantitative and qualitative research: in each case the researcher must define a target population and draw a sample that will either allow generalization (quantitatively) or permit sufficient depth (qualitatively). Quantitative research is generally concerned with probability-based approaches, while qualitative research typically uses nonprobability, purposeful sampling. Scholars generally focus on two major sampling topics: sampling strategies and sample size. Put simply, researchers must decide whom to include and how many; both concerns are key. Mixed methods studies involve both qualitative and quantitative sampling considerations, as well as unique considerations arising from the relationship between the quantitative and qualitative strands within the study.

Sampling in Qualitative Research

Sampling in qualitative research may be divided into two major areas: overall sampling strategies and issues around sample size. Sampling strategy refers to the process of sampling and how a sampling plan is designed. Qualitative sampling typically follows a nonprobability approach, such as purposive (or purposeful) sampling, in which participants or other units of analysis are selected intentionally for their ability to provide information that addresses the research questions. Sample size refers to how many participants or other units are needed to address the research questions. The methodological literature about sampling tends to fall into these two broad categories, though some articles, chapters, and books cover both concepts, and others connect sampling to the type of qualitative design employed. Researchers might also consider discipline-specific sampling issues, since much research operates within disciplinary views and constraints. Scholars in many disciplines have examined sampling around specific topics, research problems, or disciplines, offering guidance on sampling decisions such as appropriate strategies and sample size.




10.2 Sampling in qualitative research

Learning Objectives

  • Define nonprobability sampling, and describe instances in which a researcher might choose a nonprobability sampling technique
  • Describe the different types of nonprobability samples

Qualitative researchers typically make sampling choices that enable them to achieve a deep understanding of whatever phenomenon it is that they are studying. In this section, we’ll examine the techniques that qualitative researchers typically employ when sampling as well as the various types of samples that qualitative researchers are most likely to use in their work.

Nonprobability sampling

Nonprobability sampling refers to sampling techniques for which a person’s likelihood of being selected for membership in the sample is unknown. Because we don’t know the likelihood of selection, we don’t know with nonprobability samples whether a sample is truly representative of a larger population. But that’s okay. Generalizing to a larger population is not the goal with nonprobability samples or qualitative research. That said, the fact that nonprobability samples do not represent a larger population does not mean that they are drawn arbitrarily or without any specific purpose in mind (that would mean committing one of the errors of informal inquiry discussed in Chapter 1). We’ll take a closer look at the process of selecting research elements when drawing a nonprobability sample. But first, let’s consider why a researcher might choose to use a nonprobability sample.


When are nonprobability samples ideal? One instance might be when we’re starting a big research project. For example, if we’re conducting survey research, we may want to administer a draft of our survey to a few people who seem to resemble the folks we’re interested in studying in order to help work out kinks in the survey. We might also use a nonprobability sample if we’re conducting a pilot study or some exploratory research. This can be a quick way to gather some initial data and help us get some idea of the lay of the land before conducting a more extensive study. From these examples, we can see that nonprobability samples can be useful for setting up, framing, or beginning research, even quantitative research. But it isn’t just early stage research that relies on and benefits from nonprobability sampling techniques. Researchers also use nonprobability samples in full-blown research projects. These projects are usually qualitative in nature, where the researcher’s goal is in-depth, idiographic understanding rather than more general, nomothetic understanding.

Types of nonprobability samples

There are several types of nonprobability samples that researchers use. These include purposive samples, snowball samples, quota samples, and convenience samples. While the latter two strategies may be used by quantitative researchers from time to time, they are more typically employed in qualitative research, and because they are both nonprobability methods, we include them in this section of the chapter.

To draw a purposive sample , a researcher selects participants from their sampling frame because they have characteristics that the researcher desires. A researcher begins with specific characteristics in mind that she wishes to examine and then seeks out research participants who cover that full range of characteristics. For example, if you are studying mental health supports on your campus, you may want to be sure to include not only students, but mental health practitioners and student affairs administrators. You might also select students who currently use mental health supports, those who dropped out of supports, and those who are waiting to receive supports. The purposive part of purposive sampling comes from selecting specific participants on purpose because you already know they have characteristics—being an administrator, dropping out of mental health supports—that you need in your sample.

Note that these are different from inclusion criteria, which are more general requirements a person must possess to be a part of your sample. For example, one of the inclusion criteria for a study of your campus’ mental health supports might be that participants had to have visited the mental health center in the past year. That is different from purposive sampling. In purposive sampling, you know characteristics of individuals and recruit them because of those characteristics. For example, I might recruit Jane because she stopped seeking supports this month, JD because she has worked at the center for many years, and so forth.

Also, it’s important to recognize that purposive sampling requires you to have prior information about your participants before recruiting them because you need to know their perspectives or experiences before you know whether you want them in your sample. This is a common mistake that many students make. What I often hear is, “I’m using purposive sampling because I’m recruiting people from the health center,” or something like that. That’s not purposive sampling. Purposive sampling is recruiting specific people because of the various characteristics and perspectives they bring to your sample. Imagine we were creating a focus group. A purposive sample might gather clinicians, patients, administrators, staff, and former patients together so they can talk as a group. Purposive sampling would seek out people who have each of those attributes.
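The distinction drawn above, recruiting specific people because of characteristics you already know they have, can be sketched as follows. The pool, names, and roles are invented for illustration; the point is that selection is driven by prior knowledge of each person’s attributes, not by mere availability.

```python
# Hypothetical pool with known characteristics (purposive sampling
# requires this prior information about each person).
pool = [
    {"name": "Jane", "role": "former patient"},
    {"name": "JD",   "role": "clinician"},
    {"name": "Ana",  "role": "administrator"},
    {"name": "Sam",  "role": "clinician"},
    {"name": "Lee",  "role": "current patient"},
    {"name": "Pat",  "role": "staff"},
]

# The full range of characteristics the researcher decided on in advance.
wanted_roles = {"clinician", "current patient", "former patient",
                "administrator", "staff"}

# Select one person per desired characteristic so the sample covers
# the full range the researcher set out to examine.
sample = {}
for person in pool:
    role = person["role"]
    if role in wanted_roles and role not in sample:
        sample[role] = person["name"]
```

Contrast this with a convenience sample, which would simply take whoever from the pool was easiest to reach, regardless of role.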

Quota sampling is another nonprobability sampling strategy that takes purposive sampling one step further. When conducting quota sampling, a researcher identifies categories that are important to the study and for which there is likely to be some variation. Subgroups are created based on each category, and the researcher decides how many people to include from each subgroup and collects data from that number for each subgroup. Let’s consider a study of student satisfaction with on-campus housing. Perhaps there are two types of housing on your campus: apartments that include full kitchens and dorm rooms where residents do not cook for themselves and instead eat in a dorm cafeteria. As a researcher, you might wish to understand how satisfaction varies across these two types of housing arrangements. Perhaps you have the time and resources to interview 20 campus residents, so you decide to interview 10 from each housing type. It is possible as well that your review of literature on the topic suggests that campus housing experiences vary by gender. If that is the case, perhaps you’ll decide on four important subgroups: men who live in apartments, women who live in apartments, men who live in dorm rooms, and women who live in dorm rooms. Your quota sample would include five people from each of the four subgroups.
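The four-subgroup housing example above can be sketched as a small quota-filling loop. The respondent pool here is randomly generated for illustration; in a real study the researcher would recruit until each subgroup’s quota of five interviewees is met.

```python
import random
from collections import defaultdict

random.seed(0)  # fixed seed so the illustration is reproducible

# Hypothetical respondent pool tagged with the two quota categories
# from the housing example: housing type and gender.
pool = [{"id": i,
         "housing": random.choice(["apartment", "dorm"]),
         "gender": random.choice(["man", "woman"])}
        for i in range(200)]

QUOTA = 5  # five interviewees per subgroup, 20 in total

sample = defaultdict(list)
for person in pool:
    cell = (person["housing"], person["gender"])
    if len(sample[cell]) < QUOTA:
        sample[cell].append(person["id"])
    if sum(len(ids) for ids in sample.values()) == QUOTA * 4:
        break  # every subgroup's quota is filled
```

Note that filling quotas from whoever comes along, as this loop does, is exactly why quota samples are not statistically representative: within each cell, selection is still nonrandom.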

In 1936, up-and-coming pollster George Gallup made history when he successfully predicted the outcome of the presidential election using quota sampling methods. The leading polling entity at the time, The Literary Digest, predicted that Alfred Landon would beat Franklin Roosevelt in the presidential election by a landslide, but Gallup’s polling disagreed. Gallup successfully predicted Roosevelt’s win and subsequent elections based on quota samples, but in 1948, Gallup incorrectly predicted that Dewey would beat Truman in the US presidential election. [1] Among other problems, the fact that Gallup’s quota categories did not represent those who actually voted (Neuman, 2007) [2] underscores the point that one should avoid attempting to make statistical generalizations from data collected using quota sampling methods. [3] While quota sampling offers the strength of helping the researcher account for potentially relevant variation across study elements, it would be a mistake to think of this strategy as yielding statistically representative findings. For that, you need probability sampling, which we will discuss in the next section.

Qualitative researchers can also use snowball sampling techniques to identify study participants. In snowball sampling, a researcher identifies one or two people she’d like to include in her study but then relies on those initial participants to help identify additional study participants. Thus, the researcher’s sample builds and becomes larger as the study continues, much as a snowball builds and becomes larger as it rolls through the snow. Snowball sampling is an especially useful strategy when a researcher wishes to study a stigmatized group or behavior. For example, a researcher who wanted to study how people with genital herpes cope with their medical condition would be unlikely to find many participants by posting a call for interviewees in the newspaper or making an announcement about the study at some large social gathering. Instead, the researcher might know someone with the condition, interview that person, and ask that person to refer others they may know with the condition, who could then contact the researcher to participate in the study. Having a previous participant vouch for the researcher may help new potential participants feel more comfortable about being included in the study.

a person pictured next to a network of associates and their interrelationships noted through lines connecting the photos

Snowball sampling is sometimes referred to as chain referral sampling. One research participant refers another, and that person refers another, and that person refers another—thus a chain of potential participants is identified. In addition to using this sampling strategy for potentially stigmatized populations, it is also a useful strategy to use when the researcher’s group of interest is likely to be difficult to find, not only because of some stigma associated with the group, but also because the group may be relatively rare. This was the case for Steven Kogan and colleagues (Kogan, Wejnert, Chen, Brody, & Slater, 2011)  [4] who wished to study the sexual behaviors of non-college-bound African American young adults who lived in high-poverty rural areas. The researchers first relied on their own networks to identify study participants, but because members of the study’s target population were not easy to find, access to the networks of initial study participants was very important for identifying additional participants. Initial participants were given coupons to pass on to others they knew who qualified for the study. Participants were given an added incentive for referring eligible study participants; they received $50 for participating in the study and an additional $20 for each person they recruited who also participated in the study. Using this strategy, Kogan and colleagues succeeded in recruiting 292 study participants.
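Chain referral like Kogan and colleagues’ coupon system can be pictured as a breadth-first walk over a referral network. The sketch below uses a toy, hypothetical acquaintance graph; in a real snowball sample the "network" emerges only as participants choose to pass on coupons and their contacts agree to take part:

```python
from collections import deque

# Toy referral network (hypothetical): each participant id maps to the
# acquaintances they could refer into the study.
network = {
    0: [1, 2], 1: [3], 2: [4, 5], 3: [], 4: [6], 5: [], 6: [7], 7: [],
}

def snowball_sample(network, seeds, target_size):
    """Recruit participants via chain referral, starting from the seed
    participants, until the target sample size is reached or referrals
    run out."""
    recruited, queue = set(), deque(seeds)
    while queue and len(recruited) < target_size:
        person = queue.popleft()
        if person in recruited:
            continue
        recruited.add(person)
        queue.extend(network.get(person, []))  # "coupons" passed along
    return recruited

print(sorted(snowball_sample(network, seeds=[0], target_size=5)))
# [0, 1, 2, 3, 4]
```

Note how the final sample depends entirely on the seeds and the shape of the referral chains, which is exactly why snowball samples cannot claim to represent a larger population.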

Finally, convenience sampling is another nonprobability sampling strategy that is employed by both qualitative and quantitative researchers. To draw a convenience sample, a researcher simply collects data from those people or other relevant elements to which she has most convenient access. This method, also sometimes referred to as availability sampling, is most useful in exploratory research or in student projects in which probability sampling is too costly or difficult. If you’ve ever been interviewed by a fellow student for a class project, you have likely been a part of a convenience sample. While convenience samples offer one major benefit—convenience—they do not offer the rigor needed to make conclusions about larger populations. That is the subject of our next section on probability sampling.

Key Takeaways

  • Nonprobability samples might be used when researchers are conducting qualitative (or idiographic) research, exploratory research, student projects, or pilot studies.
  • There are several types of nonprobability samples, including purposive samples, snowball samples, quota samples, and convenience samples.

Glossary

  • Convenience sample: a researcher gathers data from whatever cases happen to be convenient.
  • Nonprobability sampling: sampling techniques for which a person’s likelihood of being selected for membership in the sample is unknown.
  • Purposive sample: a researcher seeks out participants with specific characteristics.
  • Quota sample: a researcher selects cases from within several different subgroups.
  • Snowball sample: a researcher relies on participant referrals to recruit new participants.

Image attributions

business by helpsg CC-0

network by geralt CC-0

  • For more information about the 1948 election and other historically significant dates related to measurement, see the PBS timeline of “The first measured century” at http://www.pbs.org/fmc/timeline/e1948election.htm. ↵
  • Neuman, W. L. (2007). Basics of social research: Qualitative and quantitative approaches (2nd ed.). Boston, MA: Pearson. ↵
  • If you are interested in the history of polling, I recommend reading Fried, A. (2011). Pathways to polling: Crisis, cooperation, and the making of public opinion professions. New York, NY: Routledge. ↵
  • Kogan, S. M., Wejnert, C., Chen, Y., Brody, G. H., & Slater, L. M. (2011). Respondent-driven sampling with hard-to-reach emerging adults: An introduction and case study with rural African Americans. Journal of Adolescent Research, 26, 30–60. ↵

Scientific Inquiry in Social Work Copyright © 2018 by Matthew DeCarlo is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.


7.2 Sampling in Qualitative Research

Learning Objectives

  • Define nonprobability sampling, and describe instances in which a researcher might choose a nonprobability sampling technique.
  • Describe the different types of nonprobability samples.

Qualitative researchers typically make sampling choices that enable them to deepen understanding of whatever phenomenon it is that they are studying. In this section we’ll examine the strategies that qualitative researchers typically employ when sampling as well as the various types of samples that qualitative researchers are most likely to use in their work.

Nonprobability Sampling

Nonprobability sampling refers to sampling techniques for which a person’s (or event’s, or other element’s) likelihood of being selected for membership in the sample is unknown. Because we don’t know the likelihood of selection, we don’t know with nonprobability samples whether a sample represents a larger population or not. But that’s OK, because representing the population is not the goal with nonprobability samples. That said, the fact that nonprobability samples do not represent a larger population does not mean that they are drawn arbitrarily or without any specific purpose in mind (once again, that would mean committing one of the errors of informal inquiry discussed in Chapter 1 "Introduction"). In the following subsection, “Types of Nonprobability Samples,” we’ll take a closer look at the process of selecting research elements (the individual units that are the focus of a researcher’s investigation; in social science these may be people, documents, organizations, groups, beliefs, or behaviors) when drawing a nonprobability sample. But first, let’s consider why a researcher might choose to use a nonprobability sample.

So when are nonprobability samples ideal? One instance might be when we’re designing a research project. For example, if we’re conducting survey research, we may want to administer our survey to a few people who seem to resemble the folks we’re interested in studying in order to help work out kinks in the survey. We might also use a nonprobability sample at the early stages of a research project, if we’re conducting a pilot study or some exploratory research. This can be a quick way to gather some initial data and help us get some idea of the lay of the land before conducting a more extensive study. From these examples, we can see that nonprobability samples can be useful for setting up, framing, or beginning research. But it isn’t just early stage research that relies on and benefits from nonprobability sampling techniques.

Researchers also use nonprobability samples in full-blown research projects. These projects are usually qualitative in nature, where the researcher’s goal is in-depth, idiographic understanding rather than more general, nomothetic understanding. Evaluation researchers whose aim is to describe some very specific small group might use nonprobability sampling techniques, for example. Researchers interested in contributing to our theoretical understanding of some phenomenon might also collect data from nonprobability samples. Maren Klawiter (1999) relied on a nonprobability sample for her study of the role that culture plays in shaping social change. Klawiter conducted participant observation in three very different breast cancer organizations to understand “the bodily dimensions of cultural production and collective action.” Her intensive study of these three organizations allowed Klawiter to deeply understand each organization’s “culture of action” and, subsequently, to critique and contribute to broader theories of social change and social movement organization. Thus researchers interested in contributing to social theories, by either expanding on them, modifying them, or poking holes in their propositions, may use nonprobability sampling techniques to seek out cases that seem anomalous in order to understand how theories can be improved.

In sum, there are a number and variety of instances in which the use of nonprobability samples makes sense. We’ll examine several specific types of nonprobability samples in the next subsection.

Types of Nonprobability Samples

There are several types of nonprobability samples that researchers use. These include purposive samples, snowball samples, quota samples, and convenience samples. While the latter two strategies may be used by quantitative researchers from time to time, they are more typically employed in qualitative research, and because they are both nonprobability methods, we include them in this section of the chapter.

To draw a purposive sample, a researcher begins with specific perspectives in mind that he or she wishes to examine and then seeks out research participants who cover that full range of perspectives. For example, if you are studying students’ satisfaction with their living quarters on campus, you’ll want to be sure to include students who stay in each of the different types or locations of on-campus housing in your study. If you only include students from 1 of 10 dorms on campus, you may miss important details about the experiences of students who live in the 9 dorms you didn’t include in your study. In my own interviews of young people about their workplace sexual harassment experiences, I and my coauthors used a purposive sampling strategy; we used participants’ prior responses on a survey to ensure that we included both men and women in the interviews and that we included participants who’d had a range of harassment experiences, from relatively minor experiences to much more severe harassment.

While purposive sampling is often used when one’s goal is to include participants who represent a broad range of perspectives, purposive sampling may also be used when a researcher wishes to include only people who meet very narrow or specific criteria. For example, in their study of Japanese women’s perceptions of intimate partner violence, Miyoko Nagae and Barbara L. Dancy (2010) limited their study only to participants who had experienced intimate partner violence themselves, were at least 18 years old, had been married and living with their spouse at the time that the violence occurred, were heterosexual, and were willing to be interviewed. In this case, the researchers’ goal was to find participants who had had very specific experiences rather than finding those who had had quite diverse experiences, as in the preceding example. In both cases, the researchers involved shared the goal of understanding the topic at hand in as much depth as possible.
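Narrow inclusion criteria like Nagae and Dancy’s amount to a conjunction of conditions that every participant must satisfy. A minimal sketch, with hypothetical field names standing in for screening-questionnaire answers:

```python
# Hypothetical screening records; every field name here is illustrative only.
candidates = [
    {"id": 1, "experienced_ipv": True,  "age": 34, "married_cohabiting": True,
     "heterosexual": True, "willing_to_interview": True},
    {"id": 2, "experienced_ipv": True,  "age": 17, "married_cohabiting": True,
     "heterosexual": True, "willing_to_interview": True},
    {"id": 3, "experienced_ipv": False, "age": 40, "married_cohabiting": True,
     "heterosexual": True, "willing_to_interview": True},
]

def meets_criteria(c):
    """Every criterion must hold; failing any single one excludes the case."""
    return (c["experienced_ipv"]
            and c["age"] >= 18
            and c["married_cohabiting"]
            and c["heterosexual"]
            and c["willing_to_interview"])

eligible = [c["id"] for c in candidates if meets_criteria(c)]
print(eligible)  # [1]: candidate 2 is under 18, candidate 3 lacks the experience
```

This is the "narrow criteria" flavor of purposive sampling; the "range of perspectives" flavor would instead check that the chosen set collectively covers every perspective of interest.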

Qualitative researchers sometimes rely on snowball sampling techniques to identify study participants. In this case, a researcher might know of one or two people she’d like to include in her study but then relies on those initial participants to help identify additional study participants. Thus the researcher’s sample builds and becomes larger as the study continues, much as a snowball builds and becomes larger as it rolls through the snow.

Snowball sampling is an especially useful strategy when a researcher wishes to study some stigmatized group or behavior. For example, a researcher who wanted to study how people with genital herpes cope with their medical condition would be unlikely to find many participants by posting a call for interviewees in the newspaper or making an announcement about the study at some large social gathering. Instead, the researcher might know someone with the condition, interview that person, and then be referred by the first interviewee to another potential subject. Having a previous participant vouch for the trustworthiness of the researcher may help new potential participants feel more comfortable about being included in the study.

Snowball sampling is sometimes referred to as chain referral sampling. One research participant refers another, and that person refers another, and that person refers another—thus a chain of potential participants is identified. In addition to using this sampling strategy for potentially stigmatized populations, it is also a useful strategy to use when the researcher’s group of interest is likely to be difficult to find, not only because of some stigma associated with the group, but also because the group may be relatively rare. This was the case for Steven M. Kogan and colleagues (Kogan, Wejnert, Chen, Brody, & Slater, 2011), who wished to study the sexual behaviors of non-college-bound African American young adults who lived in high-poverty rural areas. The researchers first relied on their own networks to identify study participants, but because members of the study’s target population were not easy to find, access to the networks of initial study participants was very important for identifying additional participants. Initial participants were given coupons to pass on to others they knew who qualified for the study. Participants were given an added incentive for referring eligible study participants; they received not only $50.00 for participating in the study but also $20.00 for each person they recruited who also participated in the study. Using this strategy, Kogan and colleagues succeeded in recruiting 292 study participants.

Quota sampling is another nonprobability sampling strategy. This type of sampling is actually employed by both qualitative and quantitative researchers, but because it is a nonprobability method, we’ll discuss it in this section. When conducting quota sampling, a researcher identifies categories that are important to the study and for which there is likely to be some variation. Subgroups are created based on each category, and the researcher decides how many people (or documents, or whatever element happens to be the focus of the research) to include from each subgroup and collects data from that number for each subgroup.

Let’s go back to the example we considered previously of student satisfaction with on-campus housing. Perhaps there are two types of housing on your campus: apartments that include full kitchens and dorm rooms where residents do not cook for themselves but eat in a dorm cafeteria. As a researcher, you might wish to understand how satisfaction varies across these two types of housing arrangements. Perhaps you have the time and resources to interview 20 campus residents, so you decide to interview 10 from each housing type. It is possible as well that your review of literature on the topic suggests that campus housing experiences vary by gender. If that is the case, perhaps you’ll decide on four important subgroups: men who live in apartments, women who live in apartments, men who live in dorm rooms, and women who live in dorm rooms. Your quota sample would include five people from each subgroup.

In 1936, up-and-coming pollster George Gallup made history when he successfully predicted the outcome of the presidential election using quota sampling methods. The leading polling entity at the time, The Literary Digest, predicted that Alfred Landon would beat Franklin Roosevelt in the presidential election by a landslide. When Gallup’s prediction that Roosevelt would win turned out to be correct, “the Gallup Poll was suddenly on the map” (Van Allen, 2011). Gallup successfully predicted subsequent elections based on quota samples, but in 1948, Gallup incorrectly predicted that Dewey would beat Truman in the US presidential election (for more information about the 1948 election and other historically significant dates related to measurement, see the PBS timeline of “The first measured century” at http://www.pbs.org/fmc/timeline/e1948election.htm). Among other problems, the fact that Gallup’s quota categories did not represent those who actually voted (Neuman, 2007) underscores the point that one should avoid attempting to make statistical generalizations from data collected using quota sampling methods. If you are interested in the history of polling, I recommend a recent book: Fried, A. (2011). Pathways to polling: Crisis, cooperation, and the making of public opinion professions. New York, NY: Routledge. While quota sampling offers the strength of helping the researcher account for potentially relevant variation across study elements, it would be a mistake to think of this strategy as yielding statistically representative findings.

Finally, convenience sampling is another nonprobability sampling strategy that is employed by both qualitative and quantitative researchers. To draw a convenience sample, a researcher simply collects data from those people or other relevant elements to which he or she has most convenient access. This method, also sometimes referred to as haphazard sampling, is most useful in exploratory research. It is also often used by journalists who need quick and easy access to people from their population of interest. If you’ve ever seen brief interviews of people on the street on the news, you’ve probably seen a haphazard sample being interviewed. While convenience samples offer one major benefit—convenience—we should be cautious about generalizing from research that relies on convenience samples.

Table 7.1 Types of Nonprobability Samples

  • Purposive sample: a researcher seeks out study elements that meet specific criteria the researcher has identified.
  • Snowball sample: a researcher recruits study participants by asking prior participants to refer others.
  • Quota sample: a researcher identifies subgroups within a population of interest and selects a predetermined number of elements from each subgroup.
  • Convenience (haphazard) sample: a researcher gathers data from the elements that happen to be convenient.

Key Takeaways

  • Nonprobability samples might be used when researchers are conducting exploratory research, by evaluation researchers, or by researchers whose aim is to make some theoretical contribution.
  • There are several types of nonprobability samples including purposive samples, snowball samples, quota samples, and convenience samples.
Exercises

  • Imagine you are about to conduct a study of people’s use of the public parks in your hometown. Explain how you could employ each of the nonprobability sampling techniques described previously to recruit a sample for your study.
  • Of the four nonprobability sample types described, which seems strongest to you? Which seems weakest? Explain.


Qualitative Research Sampling Methods: Pros and Cons to Help You Choose


Your choice of sampling strategy can deeply impact your research findings, especially in qualitative studies, where every person counts.

There’s so much written on methods that it can sometimes feel overwhelming when you’re first discovering what’s out there. Even if you’re well into your research career, you may find yourself sticking with the same methodology again and again.

Many researchers focus on quantitative methodology. But they can greatly benefit from knowing qualitative methodology for use in mixed-methods studies and to better understand other studies.

This article aims to help you dive into the most widely recognized qualitative sampling strategies, concisely and objectively.

What you’ll learn in this post

• All the most common types of qualitative research sampling methods.

• When to use each method.

• Pros and cons of each method.

• Specific examples of these qualitative sampling methods in use.

• Where to get your research both critiqued and edited, be it qualitative, quantitative, or mixed methods.

Your first step in choosing a qualitative sampling strategy

So, where do you start when you know you need to do more than grab students walking by your office? One of the first and most important decisions you must make about your sampling strategy is defining a clear sampling frame .

The cases you choose for your sample need to cover the various issues and variables you want to explore in your research. A fundamental aspect of your sample is that it should always contain the cases most likely to provide you with the richest data (Gray, 2004).

Owing to time and expense, qualitative research often works with small samples of people, cases, or phenomena in particular contexts. Therefore, unlike in quantitative research, samples tend to be more purposive (using your judgment) than they are random (Flick, 2009). This post will cover those main purposive sampling strategies.

It’s also important to keep in mind that qualitative samples are sometimes predetermined, what’s known as a priori determination, and other times follow more flexible determination (Flick, 2009).

So this article is organized based on those two parameters: a priori and more flexible determination.

And take note that in certain strategies it’s possible to start with a predetermined sample and end up extending it, or even varying it, for a valid reason.

Qualitative research is much more flexible than quantitative research. You iterate, you run another round, you seek saturation.

OK? Let’s see what’s on the qualitative menu. Hope you find something tasty.

A priori determination

Comprehensive sampling

Comprehensive (or total population) sampling is a strategy that examines every case or instance of a given population that has specific characteristics (e.g., attributes, traits, experience, knowledge) you’re interested in for your study (Gray, 2004).

This sampling strategy is somewhat unusual because it’s often hard to sample the entire population of interest.

When to use it

It’s ideal for studies that focus on a specific organization or people with such specific characteristics that it’s possible to contact the whole population that has them (Gray, 2004).

Basically, two aspects are key to using this method:

  • the population is somewhat small
  • its members have uncommon characteristics

One example would be studying perceptions about leadership within a small company (e.g., 10–30 people), where your sample could easily be every employee within the company.

Pros:

  • Ideal for further analyzing, differentiating, and perhaps testing (Flick, 2009).
  • It can increase confidence in the validity of the results because it covers every case in a given population.
  • Reduced risk of missing valuable insights.

Cons:

  • Only applicable to very specific studies, because it requires the target population to be small and to have uncommon characteristics.
  • Very limited potential for generalizability.

Practical example: Gerhard (as cited in Flick, 2009, p. 117) used this strategy to study the careers of patients with chronic renal failure. The sample was a complete collection of all patients with predetermined characteristics (male, married, age 30–50 years, at the start of treatment at five hospitals in the UK).

Note that for this particular study, sampling was limited to several criteria: a specific sex, disease, marital status, age, region, and a limited period.

These predetermined characteristics were what allowed the researchers to achieve a comprehensive (total population) sample.

Extreme/deviant sampling

Extreme/deviant sampling is intentionally selecting extremes and trying to identify the factors that affect them (Gray, 2004).

It’s usually used to focus on special or uncommon cases such as noteworthy successes or failures. For instance, if you’re conducting a study about a reform program, you can include particularly successful examples and/or cases of big failures – these are two extremes, which is where the “extreme/deviant” name comes from (Flick, 2009).

It’s ideal for studying special/unusual cases in a particular context.

Pros:

  • Allows you to collect focused information on a very particular phenomenon.
  • It’s sometimes regarded as producing the “purest” form of insight into a particular phenomenon.
  • Lets you collect insights from two very distinct perspectives, which helps you understand the phenomenon as a whole.

Cons:

  • The danger of mistakenly generalizing from extreme cases.
  • Risk of selection bias.

Practical example: Perhaps one of the most widely recognized studies that used this sampling method was Peters and Waterman’s In Search of Excellence: Lessons from America’s Best-Run Companies, published in 1982.

The researchers chose 62 companies based on their outstanding (extreme) success in terms of innovation and excellence (see Peters & Waterman [2004]).

Intensity sampling

Intensity sampling fundamentally involves the same logic as extreme/deviant case sampling, but it has less emphasis on the extremes.

Cases chosen for an intensity sample should be information-rich, manifesting the phenomenon intensely but not extremely; therefore capturing more typical cases compared with those at the extremes (Patton, 2002; Gray, 2004; Benoot, Hannes & Bilsen, 2016).

Patton (2002) argues that ideally, you should use this when you already have prior information about the variation of the subject you want to study. Some exploratory research might be needed depending on what you are researching.

Pros:

  • Great for heuristic research/inquiry (Patton, 2002).
  • By choosing intensive cases that aren’t extreme/deviant, you can avoid the distortion that extreme cases sometimes bring (Patton, 2002).

Cons:

  • Requires some prior information and considerable judgment; the researcher must do some exploratory work to grasp the nature of the variation in the situation they are researching (Patton, 2002).
  • Requires extensive knowledge of the phenomenon being studied, so as not to mix cases of sufficient intensity with those at the extremes (Patton, 2002).

Practical example: Researching above-average or below-average students would be a good occasion to use this sampling method, because such students experience the educational system intensely but aren’t extreme cases.
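One way to operationalize “intense but not extreme” is to keep only the cases that fall between two percentile cut-offs, dropping the tails. A minimal sketch, assuming made-up exam scores serve as the intensity measure:

```python
# Hypothetical exam scores, keyed by student (listed in ascending order
# purely for readability).
scores = {f"s{i:02d}": s for i, s in enumerate(
    [35, 41, 48, 52, 55, 58, 61, 64, 67, 70,
     73, 76, 79, 82, 85, 88, 91, 94, 97, 99])}

def intensity_band(scores, lower_pct, upper_pct):
    """Rank cases by score and keep those between two percentile
    cut-offs, excluding the extremes at both ends."""
    ranked = sorted(scores, key=scores.get)
    n = len(ranked)
    lo = round(n * lower_pct)
    hi = round(n * upper_pct)
    return ranked[lo:hi]

# Above-average but not extreme: roughly the 70th to 90th percentile.
print(intensity_band(scores, 0.70, 0.90))  # ['s14', 's15', 's16', 's17']
```

The cut-offs themselves are a judgment call, which mirrors Patton’s point that intensity sampling demands prior knowledge of how the phenomenon varies.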

Maximum variation sampling

The maximum variation sampling strategy aims at capturing and describing a wide range of variations that cut across what you want to research (Patton, 2002; Gray, 2004). How can you proceed to guarantee that you capture a high level of variation?

You can start by setting specific characteristics where you’ll look for variation that the literature (or you) identify as relevant for the phenomenon you’re researching. These may be education level, ethnicity, age, or socioeconomic status.

For small samples, having too much heterogeneity can be a problem because each case may be very different from the other.

But according to Patton (2002), this method might turn that weakness into a strength.

It does so by applying this logic: any common pattern that emerges from this kind of sample is of particular interest and value in capturing the core experiences and central, shared dimensions of a setting or phenomenon.
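That logic, spreading a small sample across as many attribute values as possible, can be approximated with a greedy pick that always adds the case introducing the most not-yet-covered attribute values. The cases and attributes below are hypothetical, loosely inspired by the cancer-experience example that follows:

```python
# Hypothetical cases described by attributes relevant to the study.
cases = [
    {"cancer": "breast",   "stage": "early", "age_band": "40s", "sex": "F"},
    {"cancer": "breast",   "stage": "late",  "age_band": "50s", "sex": "F"},
    {"cancer": "lung",     "stage": "early", "age_band": "60s", "sex": "M"},
    {"cancer": "prostate", "stage": "late",  "age_band": "70s", "sex": "M"},
    {"cancer": "lung",     "stage": "late",  "age_band": "40s", "sex": "F"},
]

def max_variation_sample(cases, k):
    """Greedily choose k cases, each time taking the case that adds the
    most attribute values not yet covered by the sample so far."""
    chosen, covered = [], set()
    remaining = list(cases)
    for _ in range(k):
        best = max(remaining, key=lambda c: len(set(c.items()) - covered))
        chosen.append(best)
        covered |= set(best.items())
        remaining.remove(best)
    return chosen

sample = max_variation_sample(cases, 3)
print({c["cancer"] for c in sample})  # {'breast', 'lung', 'prostate'}
```

A greedy heuristic like this only sketches the idea; in practice researchers usually build a sampling grid of the relevant characteristics and recruit to fill it.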

When to use it: Whenever you want to explore the variation of perceptions/practices concerning a broad phenomenon.

Pros:

  • Allows the researcher to capture all variations of a phenomenon (Patton, 2002; Schreier, 2018).
  • Finds detailed insights about each variation (Patton, 2002; Schreier, 2018).

Cons:

  • In small samples, cases are sometimes so different from one another that no common patterns emerge (Patton, 2002).

Practical example: Ziebland et al. (2004) studied how the internet affects patients’ experiences with cancer. The study used a maximum variation sample to maximize the variety of insights.

The researchers purposively looked for people who differed in the type of cancer they had, stage of cancer, age, and sex.

Homogeneous sampling

The homogeneous sampling strategy can be seen as the exact opposite of maximum variation sampling because it seeks homogeneous groups of people, settings, or contexts to study in depth.

With this kind of sample, using focus group interviewing might prove extremely productive (Gray, 2004).

Use it if your research aims to specifically focus on a group with shared characteristics.

Pros:

  • Produces highly detailed insights regarding a specific group (Patton, 2002).
  • Highly compatible with focus group interviews (Patton, 2002).
  • Can simplify the analysis (Patton, 2002).

Cons:

  • Doesn’t let the researcher capture much variation (Patton, 2002).

Practical example: Nesbitt et al. (2012) studied Canadian adolescent mothers’ perceptions of influences on breastfeeding decisions. The researchers purposefully collected 16 homogeneous cases of adolescent mothers (15–19 years) who lived in the Durham region and had children up to 12 months old.

Other criteria included speaking English fluently and breastfeeding their infant at least once.

By using this method, the researchers aimed to produce an in-depth look at this very specific group.


Theory-based sampling

Theory-based sampling is essentially a more formal, more conceptually oriented type of criterion sampling: cases are chosen on the basis that they represent a theoretical construct (Patton, 2002; Gray, 2004).

The researcher samples incidents, periods of someone’s life, time periods, or people based on the potential manifestation or representation of important theoretical constructs.

Use this one when you want to study a pre-existing theory-derived concept that is of interest to your research.

  • Elaborating on previous theoretical and established concepts can facilitate the analysis.
  • Working on established theoretical concepts allows you to contribute new insights for an established theory.
  • The odds of finding out something entirely “new” are somewhat limited.
  • It might be harder to determine the population of interest: unlike sampling based on people’s observable characteristics, it can be difficult to find the people, programs, organizations, or communities relevant to a specific theoretical construct (Patton, 2002).

Practical example: Buckhold (as cited in Patton [2002, p. 238]) researched people who met specific theory-derived criteria for being “resilient.” She aimed to analyze the resilience of women who were victims of abuse and were able to survive.

Stratified purposive sampling

In stratified purposive sampling, decisions about the sample’s composition are made before data collection.

Schreier (2018) notes that it can be done in four steps:

  • Deciding which factors are known or likely to cause variation in the phenomenon of interest.
  • Selecting from two to a maximum of four factors for constructing a sampling guide.
  • Combining the factors of choice in a cross-table, though when picking more than two factors, it might be impossible to conduct sampling for all factor combinations.
  • Deciding how many units to sample for each cell (factor combination).
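As an illustration only, Schreier’s four steps might be sketched in Python like this; the factor names, levels, and per-cell counts are all hypothetical.

```python
from itertools import product

# Steps 1-2: choose two hypothetical factors thought to drive variation.
factors = {
    "employment": ["employed", "unemployed"],
    "age_band": ["18-34", "35-54", "55+"],
}

# Step 3: cross the factor levels to build the sampling guide (one cell per combination).
cells = list(product(*factors.values()))   # 2 x 3 = 6 cells

# Step 4: decide how many units to recruit for each cell.
units_per_cell = 2
sampling_guide = {cell: units_per_cell for cell in cells}

total_needed = sum(sampling_guide.values())   # 12 participants in all
```

Note how quickly the cross-table grows: adding a third factor with three levels would triple the number of cells, which is exactly why Schreier caps the guide at two to four factors.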

Use this method when you want to explore known factors that influence the phenomenon of your interest.

These factors might be hypothesized in theory while having no empirical data supporting them. You can also propose a factor yourself; by including it in your sampling, you can gauge its importance to the phenomenon you’re researching.

  • Allows you to focus on several known factors that are of interest for your research (Schreier, 2018).
  • Predetermining the composition of your sample can make it easier to find the cases/people/groups to research.
  • Sticking to the predetermined composition means that new factors discovered in your first cases may be left unresearched.
  • Finding cases with the factors of most interest for your research might be challenging.

Practical example: Palacic (2017) examined entrepreneurial leadership and business performance in “gazelles” and “MICE” (business/market terms to describe a type of company). The sample was purposively constituted to contain cases from both types of companies that were involved in three major industrial sectors – manufacturing, sales, and services.

More flexible determination

Theoretical sampling

Theoretical sampling was developed in the context of grounded theory methodology.

Fundamentally, it’s a process of data collection that aims to generate theory. It takes place in a constant interrelation between data collection and data analysis, and it’s guided by the concepts and/or theory emerging from the research process (Gray, 2004; Flick, 2009).

The sample is usually composed of heterogeneous cases that allow comparison of different instantiations (Schreier, 2018).

You can use this when you’re aiming to generate a new theory about a certain phenomenon.

  • May bring more innovation to your research (Schreier, 2018).
  • Your sample is more flexible compared with many other methods because there are no “static” criteria for your sample’s population.
  • Not ideal for inexperienced researchers because generating a new theory is very challenging.
  • Very time-consuming and complex.

Practical example: Glaser and Strauss (as cited in Flick, 2009, pp. 118–119) famously used this method to research awareness of dying in hospitals.

The researchers chose to conduct participant observation in different hospitals to develop a new theory about the way dying in a hospital is organized as a social process.

They built their sample through a step-by-step process while in direct contact with the field. First they studied awareness of dying in conditions that minimized patient awareness (e.g., comatose). Then they moved to situations where staff’s and patients’ awareness was high and death often was quick (e.g., intensive care). Then to situations where staff expectations of terminality were high, but dying tended to be slow (e.g., cancer). And ultimately to situations where death was unforeseen and rapid (e.g., emergency services).

Snowball sampling

Snowball sampling (or chain referral sampling) is a method widely used in qualitative sociological research (Biernacki & Waldorf, 1981; Gray, 2004; Flick, 2009; Heckathorn, 2011). It’s popular because it’s effective at recruiting participants, and it’s premised on the idea that people know people similar to themselves.

Snowballing is especially useful for studying hard-to-reach populations. Snowball sampling has been most applicable in studies that focus on a sensitive issue, often a private matter that requires knowing insiders in order to locate, contact, and receive consent from the true target population (Biernacki & Waldorf, 1981; Heckathorn, 2011).

The researcher forms a study sample through referrals among people who are acquainted with others who have the characteristics of interest for the research. It begins with a convenience sample: a single member of the hard-to-reach population.


After successfully interviewing/communicating with this person, the researcher will ask them to introduce other people with the same characteristics. After acquiring contacts, the research proceeds in the same way (Heckathorn, 2011).
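The referral process described above can be sketched as a simple queue-based loop. Everything here (the names, the referral network, the target size) is invented for illustration; in a real study, each "referral" step is an interview followed by a request for introductions.

```python
from collections import deque

# Hypothetical referral network: who each participant is willing to introduce.
referrals = {
    "seed": ["ana", "ben"],
    "ana": ["carla"],
    "ben": ["carla", "dan"],
    "carla": [],
    "dan": ["eva"],
    "eva": [],
}

def snowball_sample(seed, get_referrals, target_size):
    """Recruit via chain referral until the target size is reached or the chains dry up."""
    recruited, queue = [], deque([seed])
    seen = {seed}
    while queue and len(recruited) < target_size:
        person = queue.popleft()
        recruited.append(person)            # "interview" this participant
        for contact in get_referrals(person):
            if contact not in seen:         # avoid re-recruiting the same person
                seen.add(contact)
                queue.append(contact)
    return recruited

sample = snowball_sample("seed", lambda p: referrals.get(p, []), target_size=4)
```

The `seen` set matters: in real chain referral, different participants often name the same contact, and you only want each person in the sample once.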

As hard-to-reach groups are, well, hard to reach, snowball sampling is effective when you need an inroad and cannot easily recruit and sample.

  • Ideal for studying hard-to-reach groups (Biernacki & Waldorf, 1981; Gray, 2004; Flick, 2009; Heckathorn, 2011).
  • Able to produce highly detailed insights regarding a specific group through the sampling of, in principle, information-rich cases (Patton, 2002).
  • If the researcher is studying a topic that involves moral, legal, or socially sensitive issues (e.g., prostitution, drug addiction) and does not know anyone from this group, it might be hard to start the first “chain” that brings in more recruits.
  • Very limited generalization potential.

Practical example: Cloud and Granfield (1994) used snowball sampling to study drug and alcohol addicts who beat their addictions without resorting to treatment.

Using the snowballing method was fundamental to the authors because they were researching a widely distributed population (unlike those who participate in self-help groups or in treatment), and because the participants did not wish to expose their past as former drug addicts (i.e., sensitive issue).

Convenience sampling

Convenience sampling is a strategy that involves simply choosing cases in a way that is fast and convenient.

It’s probably the most common sampling strategy and, according to Patton (2002), the least desirable because it can’t be regarded as purposeful or strategic.

Many researchers choose this method thinking that their sample size is too small to generalize anyway, so they might as well pick cases that are easy to access and inexpensive to study (Patton, 2002).

This is a very common strategy among master’s students: asking fellow students to be part of the sample for their dissertation. That’s convenience sampling (Schreier, 2018). Also notable is that online surveying makes convenience sampling even simpler by removing geographic limitations.

When you have few resources (mainly time and money) for your qualitative research, this is the go-to method. This is why so many studies are conducted on university students – they’re literally all over the place, whether you’re a student or researcher. As students, they’re also easier to incentivize with small compensation and they often are in the same boat.

  • Saves time, money, and effort (Patton, 2002).
  • Might be optimal for unfinanced and strictly timed qualitative research (often in master’s theses and in many doctoral dissertations).
  • Something of a “bad reputation” (Schreier, 2018).
  • Lowest credibility (Patton, 2002).
  • Might yield information-poor cases (Patton, 2002).

Practical example: Augusto and Simões (2017) used a convenience sampling strategy to capture perceptions and prevention strategies on Facebook surveillance.

As the original fieldwork was part of a master’s dissertation, convenience sampling was chosen because of the main author’s limited time and resources. This is in no way to discredit the study and findings – it was simply the most feasible way to get the research done.

Confirming and disconfirming cases

Confirming and disconfirming cases is frequently a second-stage sampling strategy.

Cases are chosen on the premise that they can confirm or disconfirm emerging patterns from the first stage of sampling (Gray, 2004).

After an exploratory process, one might consider testing ideas, confirming the importance and/or meaning of eventual patterns, and ultimately the viability of the findings through collecting new data and/or sampling additional cases (Patton, 2002).

As the name indicates, generally, it’s ideal for testing emergent findings from your data.

  • Strengthens emergent findings.
  • Allows you to identify possible “exceptions that prove the rule” or exceptions that might disconfirm a finding (Patton, 2002).
  • Usually requires a “first stage” of sampling.
  • While definitely useful, one can argue that quantitative research is better suited to testing certain findings.

Practical example: If you were researching students’ motives for applying to college, and in the first interviews you found that the interviewees’ main reason for pursuing their education was to avoid a routine day job, this might be a good sampling method to use. The analysis, however, would have to carefully examine trends and check for outliers.

So, how’s your research going?

Here’s hoping you find the right qualitative sampling method(s) that work for you. Putting this together was a lesson for me as well.

And when you’re ready for a professional edit or scientific review, check out Edanz’s author-guidance services, which have been leading the way since 1995. Good luck with your research!

This is a guest post from Adam Goulston, PsyD, MBA, MS, MISD, ELS. Adam runs the science marketing firm Scize and has worked as an in-house Senior Language Editor, as well as a manuscript editor, with Edanz.

Augusto, F. R., & Simões, M. J. (2017). To see and be seen, to know and be known: Perceptions and prevention strategies on Facebook surveillance. Social Science Information, 56(4), 596–618. https://doi.org/10.1177/0539018417734974

Benoot, C., Hannes, K., & Bilsen, J. (2016). The use of purposeful sampling in a qualitative evidence synthesis: A worked example on sexual adjustment to a cancer trajectory. BMC Medical Research Methodology, 16(21), 1–12. https://doi.org/10.1186/s12874-016-0114-6

Biernacki, P., & Waldorf, D. (1981). Snowball sampling: Problems and techniques of chain referral sampling. Sociological Methods & Research, 10(2), 141–163.

Cloud, W., & Granfield, R. (1994). Terminating addiction naturally: Post-addict identity and the avoidance of treatment. Clinical Sociology Review, 12(1), 159–174.

Flick, U. (2009). An Introduction to Qualitative Research (4th ed.). London: Sage Publications.

Gray, D. E. (2004). Doing Research in the Real World . London: Sage Publications, Inc.

Heckathorn, D. D. (2011). Comment: Snowball versus respondent-driven sampling. Sociological Methodology, 41(1), 355–366. https://doi.org/10.1111/j.1467-9531.2011.01244.x

Nesbitt, S. A., Campbell, K. A., Jack, S. M., Robinson, H., Piehl, K., & Bogdan, J. C. (2012). Canadian adolescent mothers’ perceptions of influences on breastfeeding decisions: a qualitative descriptive study, 1–14.

Palacic, R. (2017). The phenomenon of entrepreneurial leadership in gazelles and mice: A qualitative study from Bosnia and Herzegovina. World Review of Entrepreneurship, Management and Sustainable Development, 13(2/3).

Patton, M. Q. (2002). Qualitative Research & Evaluation Methods (3rd ed.). California: Sage Publications, Inc.

Peters, T. J., & Waterman, R. (2004). In Search of Excellence: Lessons from America’s Best-Run Companies . New York: First Harper Business Essentials.

Schreier, M. (2018). Sampling and generalization. In U. Flick (Ed.), The SAGE Handbook of Qualitative Data Collection (pp. 84–98). London: Sage Publications.

Ziebland, S., Chapple, A., Dumelow, C., Evans, J., Prinjha, S., & Rozmovits, L. (2004). How the internet affects patients’ experience of cancer: A qualitative study. The BMJ, 328(7434).


Purposeful sampling for qualitative data collection and analysis in mixed method implementation research

Lawrence A. Palinkas

1 School of Social Work, University of Southern California, Los Angeles, CA 90089-0411

Sarah M. Horwitz

2 Department of Child and Adolescent Psychiatry, New York University, New York, NY

Carla A. Green

3 Center for Health Research, Kaiser Permanente Northwest, Portland, OR

Jennifer P. Wisdom

4 George Washington University, Washington DC

Naihua Duan

5 New York State Neuropsychiatric Institute and Department of Psychiatry, Columbia University, New York, NY

Kimberly Hoagwood

Purposeful sampling is widely used in qualitative research for the identification and selection of information-rich cases related to the phenomenon of interest. Although there are several different purposeful sampling strategies, criterion sampling appears to be used most commonly in implementation research. However, combining sampling strategies may be more appropriate to the aims of implementation research and more consistent with recent developments in quantitative methods. This paper reviews the principles and practice of purposeful sampling in implementation research, summarizes types and categories of purposeful sampling strategies and provides a set of recommendations for use of single strategy or multistage strategy designs, particularly for state implementation research.

Recently there have been several calls for the use of mixed method designs in implementation research ( Proctor et al., 2009 ; Landsverk et al., 2012 ; Palinkas et al. 2011 ; Aarons et al., 2012). This has been precipitated by the realization that the challenges of implementing evidence-based and other innovative practices, treatments, interventions and programs are sufficiently complex that a single methodological approach is often inadequate. This is particularly true of efforts to implement evidence-based practices (EBPs) in statewide systems where relationships among key stakeholders extend both vertically (from state to local organizations) and horizontally (between organizations located in different parts of a state). As in other areas of research, mixed method designs are viewed as preferable in implementation research because they provide a better understanding of research issues than either qualitative or quantitative approaches alone ( Palinkas et al., 2011 ). In such designs, qualitative methods are used to explore and obtain depth of understanding as to the reasons for success or failure to implement evidence-based practice or to identify strategies for facilitating implementation while quantitative methods are used to test and confirm hypotheses based on an existing conceptual model and obtain breadth of understanding of predictors of successful implementation ( Teddlie & Tashakkori, 2003 ).

Sampling strategies for quantitative methods used in mixed methods designs in implementation research are generally well-established and based on probability theory. In contrast, sampling strategies for qualitative methods in implementation studies are less explicit and often less evident. Although the samples for qualitative inquiry are generally assumed to be selected purposefully to yield cases that are “information rich” (Patton, 2001), there are no clear guidelines for conducting purposeful sampling in mixed methods implementation studies, particularly when studies have more than one specific objective. Moreover, it is not entirely clear what forms of purposeful sampling are most appropriate for the challenges of using both quantitative and qualitative methods in the mixed methods designs used in implementation research. Such a consideration requires a determination of the objectives of each methodology and the potential impact of selecting one strategy to achieve one objective on the selection of other strategies to achieve additional objectives.

In this paper, we present different approaches to the use of purposeful sampling strategies in implementation research. We begin with a review of the principles and practice of purposeful sampling in implementation research, a summary of the types and categories of purposeful sampling strategies, and a set of recommendations for matching the appropriate single strategy or multistage strategy to study aims and quantitative method designs.

Principles of Purposeful Sampling

Purposeful sampling is a technique widely used in qualitative research for the identification and selection of information-rich cases for the most effective use of limited resources ( Patton, 2002 ). This involves identifying and selecting individuals or groups of individuals that are especially knowledgeable about or experienced with a phenomenon of interest ( Cresswell & Plano Clark, 2011 ). In addition to knowledge and experience, Bernard (2002) and Spradley (1979) note the importance of availability and willingness to participate, and the ability to communicate experiences and opinions in an articulate, expressive, and reflective manner. In contrast, probabilistic or random sampling is used to ensure the generalizability of findings by minimizing the potential for bias in selection and to control for the potential influence of known and unknown confounders.

As Morse and Niehaus (2009) observe, whether the methodology employed is quantitative or qualitative, sampling methods are intended to maximize efficiency and validity. Nevertheless, sampling must be consistent with the aims and assumptions inherent in the use of either method. Qualitative methods are, for the most part, intended to achieve depth of understanding while quantitative methods are intended to achieve breadth of understanding ( Patton, 2002 ). Qualitative methods place primary emphasis on saturation (i.e., obtaining a comprehensive understanding by continuing to sample until no new substantive information is acquired) ( Miles & Huberman, 1994 ). Quantitative methods place primary emphasis on generalizability (i.e., ensuring that the knowledge gained is representative of the population from which the sample was drawn). Each methodology, in turn, has different expectations and standards for determining the number of participants required to achieve its aims. Quantitative methods rely on established formulae for avoiding Type I and Type II errors, while qualitative methods often rely on precedents for determining number of participants based on type of analysis proposed (e.g., 3-6 participants interviewed multiple times in a phenomenological study versus 20-30 participants interviewed once or twice in a grounded theory study), level of detail required, and emphasis of homogeneity (requiring smaller samples) versus heterogeneity (requiring larger samples) ( Guest, Bunce & Johnson., 2006 ; Morse & Niehaus, 2009 ; Padgett, 2008 ).

Types of purposeful sampling designs

There exist numerous purposeful sampling designs. Examples include the selection of extreme or deviant (outlier) cases for the purpose of learning from unusual manifestations of the phenomenon of interest; the selection of cases with maximum variation for the purpose of documenting unique or diverse variations that have emerged in adapting to different conditions, and to identify important common patterns that cut across variations; and the selection of homogeneous cases for the purpose of reducing variation, simplifying analysis, and facilitating group interviewing. A list of some of these strategies and examples of their use in implementation research is provided in Table 1.

Purposeful sampling strategies in implementation research

Embedded in each strategy is the ability to compare and contrast, to identify similarities and differences in the phenomenon of interest. Nevertheless, some of these strategies (e.g., maximum variation sampling, extreme case sampling, intensity sampling, and purposeful random sampling) are used to identify and expand the range of variation or differences, similar to the use of quantitative measures to describe the variability or dispersion of values for a particular variable or variables, while other strategies (e.g., homogeneous sampling, typical case sampling, criterion sampling, and snowball sampling) are used to narrow the range of variation and focus on similarities. The latter are similar to the use of quantitative central tendency measures (e.g., mean, median, and mode). Moreover, certain strategies, like stratified purposeful sampling or opportunistic or emergent sampling, are designed to achieve both goals. As Patton (2002 , p. 240) explains, “the purpose of a stratified purposeful sample is to capture major variations rather than to identify a common core, although the latter may also emerge in the analysis. Each of the strata would constitute a fairly homogeneous sample.”

Challenges to use of purposeful sampling

Despite its wide use, there are numerous challenges in identifying and applying the appropriate purposeful sampling strategy in any study. For instance, the range of variation in the population from which a purposive sample is to be taken is often not really known at the outset of a study. To set as the goal the sampling of information-rich informants that cover the range of variation assumes one knows that range of variation. Consequently, an iterative approach of sampling and re-sampling to draw an appropriate sample is usually recommended to make certain that theoretical saturation occurs (Miles & Huberman, 1994). However, that saturation may be determined a priori on the basis of an existing theory or conceptual framework, or it may emerge from the data themselves, as in a grounded theory approach (Glaser & Strauss, 1967). Second, a not insignificant number of researchers in the qualitative methods field resist or refuse systematic sampling of any kind and reject the limiting nature of such realist, systematic, or positivist approaches. This includes critics of interventions and “bottom up” case studies and critiques. However, even those who equate purposeful sampling with systematic sampling must offer a rationale for selecting study participants that is linked with the aims of the investigation (i.e., why recruit these individuals for this particular study? What qualifies them to address the aims of the study?). While systematic sampling may be associated with a post-positivist tradition of qualitative data collection and analysis, such sampling is not inherently limited to such analyses, and the need for such sampling is not inherently limited to post-positivist qualitative approaches (Patton, 2002).

Purposeful Sampling in Implementation Research

Characteristics of implementation research.

In implementation research, quantitative and qualitative methods often play important roles, either simultaneously or sequentially, for the purpose of answering the same question through convergence of results from different sources, answering related questions in a complementary fashion, using one set of methods to expand or explain the results obtained from use of the other set of methods, using one set of methods to develop questionnaires or conceptual models that inform the use of the other set, and using one set of methods to identify the sample for analysis using the other set of methods ( Palinkas et al., 2011 ). A review of mixed method designs in implementation research conducted by Palinkas and colleagues (2011) revealed seven different sequential and simultaneous structural arrangements, five different functions of mixed methods, and three different ways of linking quantitative and qualitative data together. However, this review did not consider the sampling strategies involved in the types of quantitative and qualitative methods common to implementation research, nor did it consider the consequences of the sampling strategy selected for one method or set of methods on the choice of sampling strategy for the other method or set of methods. For instance, one of the most significant challenges to sampling in sequential mixed method designs lies in the limitations the initial method may place on sampling for the subsequent method. As Morse and Neihaus (2009) observe, when the initial method is qualitative, the sample selected may be too small and lack randomization necessary to fulfill the assumptions for a subsequent quantitative analysis. On the other hand, when the initial method is quantitative, the sample selected may be too large for each individual to be included in qualitative inquiry and lack purposeful selection to reduce the sample size to one more appropriate for qualitative research. 
The fact that potential participants were recruited and selected at random does not necessarily make them information rich.

A re-examination of the 22 studies and an additional 6 studies published since 2009 revealed that only 5 studies (Aarons & Palinkas, 2007; Bachman et al., 2009; Palinkas et al., 2011; Palinkas et al., 2012; Slade et al., 2003) made a specific reference to purposeful sampling. An additional three studies (Henke et al., 2008; Proctor et al., 2007; Swain et al., 2010) did not make explicit reference to purposeful sampling but did provide a rationale for sample selection. The remaining 20 studies provided no description of the sampling strategy used to identify participants for qualitative data collection and analysis; however, a rationale could be inferred based on a description of who were recruited and selected for participation. Of the 28 studies, 3 used more than one sampling strategy. Twenty-one of the 28 studies (75%) used some form of criterion sampling. In most instances, the criterion used is related to the individual’s role, either in the research project (i.e., trainer, team leader), or the agency (program director, clinical supervisor, clinician); in other words, a criterion of inclusion in a certain category (criterion-i), in contrast to cases that are external to a specific criterion (criterion-e). For instance, in a series of studies based on the National Implementing Evidence-Based Practices Project, data collection included semi-structured interviews with consultant trainers and program leaders at each study site (Brunette et al., 2008; Marshall et al., 2008; Marty et al., 2007; Rapp et al., 2010; Woltmann et al., 2008). Six studies used some form of maximum variation sampling to ensure representativeness and diversity of organizations and individual practitioners. Two studies used intensity sampling to make contrasts.
Aarons and Palinkas (2007), for example, purposefully selected 15 child welfare case managers representing those having the most positive and those having the most negative views of SafeCare, an evidence-based prevention intervention, based on results of a web-based quantitative survey asking about the perceived value and usefulness of SafeCare. Kramer and Burns (2008) recruited and interviewed clinicians providing usual care and clinicians who dropped out of a study prior to consent to contrast with clinicians who provided the intervention under investigation. One study (Hoagwood et al., 2007) used a typical case approach to identify participants for a qualitative assessment of the challenges faced in implementing a trauma-focused intervention for youth. One study (Green & Aarons, 2011) used a combined snowball sampling/criterion-i strategy by asking recruited program managers to identify clinicians, administrative support staff, and consumers for project recruitment. County mental health directors, agency directors, and program managers were recruited to represent the policy interests of implementation, while clinicians, administrative support staff, and consumers were recruited to represent the direct practice perspectives of EBP implementation.

Table 2 below provides a description of the use of different purposeful sampling strategies in mixed methods implementation studies. Criterion-i sampling was most frequently used in mixed methods implementation studies that employed a simultaneous design where the qualitative method was secondary to the quantitative method or studies that employed a simultaneous structure where the qualitative and quantitative methods were assigned equal priority. These mixed method designs were used to complement the depth of understanding afforded by the qualitative methods with the breadth of understanding afforded by the quantitative methods (n = 13), to explain or elaborate upon the findings of one set of methods (usually quantitative) with the findings from the other set of methods (n = 10), or to seek convergence through triangulation of results or quantifying qualitative data (n = 8). The process of mixing methods in the large majority (n = 18) of these studies involved embedding the qualitative study within the larger quantitative study. In one study (Goia & Dziadosz, 2008), criterion sampling was used in a simultaneous design where quantitative and qualitative data were merged together in a complementary fashion, and in two studies (Aarons et al., 2012; Zazelli et al., 2008 ), quantitative and qualitative data were connected together, one in sequential design for the purpose of developing a conceptual model ( Zazelli et al., 2008 ), and one in a simultaneous design for the purpose of complementing one another (Aarons et al., 2012). Three of the six studies that used maximum variation sampling used a simultaneous structure with quantitative methods taking priority over qualitative methods and a process of embedding the qualitative methods in a larger quantitative study ( Henke et al., 2008 ; Palinkas et al., 2010; Slade et al., 2008 ). 
Two of the six studies used maximum variation sampling in a sequential design (Aarons et al., 2009; Zazzali et al., 2008) and one in a simultaneous design (Henke et al., 2010) for the purpose of development, and three used it in a simultaneous design for complementarity (Bachman et al., 2009; Henke et al., 2008; Palinkas, Ell, Hansen, Cabassa, & Wells, 2011). The two studies relying upon intensity sampling used a simultaneous structure for the purpose of either convergence or expansion, and both involved a qualitative study embedded in a larger quantitative study (Aarons & Palinkas, 2007; Kramer & Burns, 2008). The single typical case study involved a simultaneous design where the qualitative study was embedded in a larger quantitative study for the purpose of complementarity (Hoagwood et al., 2007). The snowball/maximum variation study involved a sequential design where the qualitative study was merged into the quantitative data for the purpose of convergence and conceptual model development (Green & Aarons, 2011). Although not used in any of the 28 implementation studies examined here, another common sequential sampling strategy is to apply criterion sampling to the larger quantitative sample to produce a second-stage qualitative sample in a manner similar to maximum variation sampling, except that the former narrows the range of variation while the latter expands it.

Purposeful sampling strategies and mixed method designs in implementation research

Criterion-i sampling as a purposeful sampling strategy shares many characteristics with random probability sampling, despite having different aims and different procedures for identifying and selecting potential participants. In both instances, study participants are drawn from agencies, organizations or systems involved in the implementation process. Individuals are selected based on the assumption that they possess knowledge and experience with the phenomenon of interest (i.e., the implementation of an EBP) and thus will be able to provide information that is both detailed (depth) and generalizable (breadth). Participants for a qualitative study, usually service providers, consumers, agency directors, or state policy-makers, are drawn from the larger sample of participants in the quantitative study. They are selected from the larger sample because they meet the same criteria, in this case, playing a specific role in the organization and/or implementation process. To some extent, they are assumed to be “representative” of that role, although implementation studies rarely explain the rationale for selecting only some and not all of the available role representatives (e.g., recruiting 15 providers from an agency for semi-structured interviews out of an available sample of 25 providers). From the perspective of qualitative methodology, participants who meet or exceed a specific criterion or criteria possess intimate (or, at the very least, greater) knowledge of the phenomenon of interest by virtue of their experience, making them information-rich cases.
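
The mechanics of drawing a criterion-i sample from a larger quantitative sample can be sketched in a few lines. The code below is a minimal, hypothetical illustration, not any study's actual procedure: the roles, the experience threshold, and the participant pool are all invented for the example.

```python
import random

rng = random.Random(42)

# Hypothetical pool standing in for the larger quantitative sample:
# each record carries the participant's role and years of experience.
pool = [
    {"id": i,
     "role": rng.choice(["provider", "consumer", "director", "policymaker"]),
     "years_experience": rng.randint(0, 20)}
    for i in range(200)
]

def criterion_sample(pool, role, min_years, k, seed=0):
    """Select up to k participants who meet every criterion.

    Participants meeting the criteria are treated as information-rich
    cases; drawing k of them at random mirrors the common practice of
    interviewing only a subset of eligible role representatives.
    """
    eligible = [p for p in pool
                if p["role"] == role and p["years_experience"] >= min_years]
    return random.Random(seed).sample(eligible, min(k, len(eligible)))

providers = criterion_sample(pool, "provider", min_years=2, k=15)
```

Note that the choice of k (here, 15 of the eligible providers) is exactly the detail the text observes implementation studies rarely justify.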

However, criterion sampling may not be the most appropriate strategy for implementation research because by attempting to capture both breadth and depth of understanding, it may actually be inadequate to the task of accomplishing either. Although qualitative methods are often contrasted with quantitative methods on the basis of depth versus breadth, they actually require elements of both in order to provide a comprehensive understanding of the phenomenon of interest. Ideally, the goal of achieving theoretical saturation by providing as much detail as possible involves selection of individuals or cases that can ensure all aspects of that phenomenon are included in the examination and that any one aspect is thoroughly examined. This goal, therefore, requires an approach that sequentially or simultaneously expands and narrows the field of view. By selecting only individuals who meet a specific criterion defined on the basis of their role in the implementation process or who have a specific experience (e.g., engaged only in an implementation defined as successful or only in one defined as unsuccessful), one may fail to capture the experiences or activities of other groups playing other roles in the process. For instance, a focus only on practitioners may fail to capture the insights, experiences, and activities of consumers, family members, agency directors, administrative staff, or state policy leaders in the implementation process, thus limiting the breadth of understanding of that process. On the other hand, selecting participants on the basis of whether they were a practitioner, consumer, director, or staff member may fail to identify those with the greatest experience, the most knowledge, or the greatest ability to communicate what they know and/or have experienced, thus limiting the depth of understanding of the implementation process.

To address the potential limitations of criterion sampling, other purposeful sampling strategies should be considered and possibly adopted in implementation research (Figure 1). For instance, strategies placing greater emphasis on breadth and variation such as maximum variation, extreme case, and confirming and disconfirming case sampling are better suited for an examination of differences, while strategies placing greater emphasis on depth and similarity such as homogeneous, snowball, and typical case sampling are better suited for an examination of commonalities or similarities, even though both types of sampling strategies include a focus on both differences and similarities. Alternatives to criterion sampling may be more appropriate to the specific functions of mixed methods, however. For instance, using qualitative methods for the purpose of complementarity may require that a sampling strategy emphasize similarity if it is to achieve depth of understanding or explore and develop hypotheses that complement a quantitative probability sampling strategy achieving breadth of understanding and testing hypotheses (Kemper et al., 2003). Similarly, mixed methods that address related questions for the purpose of expanding or explaining results or developing new measures or conceptual models may require a purposeful sampling strategy aiming for similarity that complements probability sampling aiming for variation or dispersion. A narrowly focused purposeful sampling strategy for qualitative analysis that “complements” a broader focused probability sample for quantitative analysis may help to achieve a balance between increasing inference quality/trustworthiness (internal validity) and generalizability/transferability (external validity). A single method that focuses only on a broad view may gain external validity at the expense of internal validity (Kemper et al., 2003).
On the other hand, the aim of convergence (answering the same question with either method) may suggest use of a purposeful sampling strategy that aims for breadth that parallels the quantitative probability sampling strategy.

[Figure 1]

Purposeful and Random Sampling Strategies for Mixed Method Implementation Studies

  • (1) Priority and sequencing of Qualitative (QUAL) and Quantitative (QUAN) can be reversed.
  • (2) Refers to emphasis of sampling strategy.


Furthermore, the specific nature of implementation research suggests that a multistage purposeful sampling strategy be used. Three different multistage sampling strategies are illustrated in Figure 1 below. Several qualitative methodologists recommend sampling for variation (breadth) before sampling for commonalities (depth) (Glaser, 1978; Bernard, 2002) (Multistage I). Also known as a “funnel approach,” this strategy is often recommended when conducting semi-structured interviews (Spradley, 1979) or focus groups (Morgan, 1997). This approach begins with a broad view of the topic and then proceeds to narrow down the conversation to very specific components of the topic. However, as noted earlier, the lack of a clear understanding of the nature of the range may require an iterative approach in which each stage of data analysis helps to determine subsequent means of data collection and analysis (Denzin, 1978; Patton, 2001) (Multistage II). Similarly, multistage purposeful sampling designs such as opportunistic or emergent sampling allow the option of adding to a sample to take advantage of unforeseen opportunities after data collection has been initiated (Patton, 2001, p. 240) (Multistage III). Multistage I models generally involve two stages, while a Multistage II model requires a minimum of three stages, alternating from sampling for variation to sampling for similarity. A Multistage III model begins with sampling for variation and ends with sampling for similarity, but may involve one or more intervening stages of sampling for variation or similarity as the need or opportunity arises.
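
The funnel logic of a Multistage I design can be made concrete with a short sketch. Everything here is hypothetical (the roles, sites, and stratum size are invented); the point is only the shape of the procedure: sample for variation across all strata first, then narrow to a single stratum to sample for similarity.

```python
import random
from collections import defaultdict

rng = random.Random(7)

# Hypothetical pool of implementation stakeholders across roles and sites.
pool = [{"id": i,
         "role": rng.choice(["provider", "consumer", "director", "staff"]),
         "site": rng.choice(["A", "B", "C"])}
        for i in range(120)]

def stage1_maximum_variation(pool, per_stratum=3):
    """Stage 1 (breadth): a few participants from every role-by-site stratum."""
    strata = defaultdict(list)
    for p in pool:
        strata[(p["role"], p["site"])].append(p)
    sample = []
    for members in strata.values():
        sample.extend(rng.sample(members, min(per_stratum, len(members))))
    return sample

def stage2_homogeneous(stage1_sample, role):
    """Stage 2 (depth): narrow the funnel to one information-rich role."""
    return [p for p in stage1_sample if p["role"] == role]

broad = stage1_maximum_variation(pool)          # sampling for variation
narrow = stage2_homogeneous(broad, "provider")  # sampling for similarity
```

A Multistage II or III design would simply repeat these two steps, letting the analysis of each stage's data decide which stratum (or which new opportunity) the next stage pursues.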

Multistage purposeful sampling is also consistent with the use of hybrid designs to simultaneously examine intervention effectiveness and implementation. An extension of the concept of “practical clinical trials” (Tunis, Stryer, & Clancy, 2003), effectiveness-implementation hybrid designs provide benefits such as more rapid translational gains in clinical intervention uptake, more effective implementation strategies, and more useful information for researchers and decision makers (Curran et al., 2012). Such designs may give equal priority to the testing of clinical treatments and implementation strategies (Hybrid Type 2) or give priority to the testing of treatment effectiveness (Hybrid Type 1) or implementation strategy (Hybrid Type 3). Curran and colleagues (2012) suggest that evaluation of an intervention’s effectiveness will require or involve use of quantitative measures while evaluation of the implementation process will require or involve use of mixed methods. When conducting a Hybrid Type 1 design (conducting a process evaluation of implementation in the context of a clinical effectiveness trial), the qualitative data could be used to inform the findings of the effectiveness trial. Thus, an effectiveness trial that finds substantial variation might purposefully select participants using a broader strategy like sampling for disconfirming cases to account for the variation. For instance, group randomized trials require knowledge of the contexts and circumstances that are similar and different across sites to account for inevitable site differences in interventions and to assist local implementations of an intervention (Bloom & Michalopoulos, 2013; Raudenbush & Liu, 2000). Alternatively, a narrow strategy may be used to account for the lack of variation. In either instance, the choice of a purposeful sampling strategy is determined by the outcomes of the quantitative analysis that is based on a probability sampling strategy.
In Hybrid Type 2 and Type 3 designs, where the implementation process is given equal or greater priority than the effectiveness trial, the purposeful sampling strategy must be first and foremost consistent with the aims of the implementation study, which may be to understand variation, central tendencies, or both. In all three instances, the sampling strategy employed for the implementation study may vary based on the priority assigned to that study relative to the effectiveness trial. For instance, purposeful sampling for a Hybrid Type 1 design may give higher priority to variation and comparison to understand the parameters of implementation processes or context as a contribution to an understanding of effectiveness outcomes (i.e., using qualitative data to expand upon or explain the results of the effectiveness trial). In effect, these process measures could be seen as modifiers of innovation/EBP outcome. In contrast, purposeful sampling for a Hybrid Type 3 design may give higher priority to similarity and depth to understand the core features of successful outcomes only.

Finally, multistage sampling strategies may be more consistent with innovations in experimental designs representing alternatives to the classic randomized controlled trial in community-based settings that have greater feasibility, acceptability, and external validity. While RCT designs provide the highest level of evidence, “in many clinical and community settings, and especially in studies with underserved populations and low resource settings, randomization may not be feasible or acceptable” ( Glasgow, et al., 2005 , p. 554). Randomized trials are also “relatively poor in assessing the benefit from complex public health or medical interventions that account for individual preferences for or against certain interventions, differential adherence or attrition, or varying dosage or tailoring of an intervention to individual needs” ( Brown et al., 2009 , p. 2). Several alternatives to the randomized design have been proposed, such as “interrupted time series,” “multiple baseline across settings” or “regression-discontinuity” designs. Optimal designs represent one such alternative to the classic RCT and are addressed in detail by Duan and colleagues (this issue) . Like purposeful sampling, optimal designs are intended to capture information-rich cases, usually identified as individuals most likely to benefit from the experimental intervention. The goal here is not to identify the typical or average patient, but patients who represent one end of the variation in an extreme case, intensity sampling, or criterion sampling strategy. Hence, a sampling strategy that begins by sampling for variation at the first stage and then sampling for homogeneity within a specific parameter of that variation (i.e., one end or the other of the distribution) at the second stage would seem the best approach for identifying an “optimal” sample for the clinical trial.

Adaptive designs, proposed by Brown and colleagues (Brown et al., 2006; Brown et al., 2008; Brown et al., 2009), are another alternative to the classic RCT. Adaptive designs are a sequence of trials that draw on the results of existing studies to determine the next stage of evaluation research. They use cumulative knowledge of current treatment successes or failures to change qualities of the ongoing trial. An adaptive intervention modifies what an individual subject (or community for a group-based trial) receives in response to his or her preferences or initial responses to an intervention. Consistent with multistage sampling in qualitative research, the design is somewhat iterative in nature in the sense that information gained from analysis of data collected at the first stage influences the nature of the data collected, and the way they are collected, at subsequent stages (Denzin, 1978). Furthermore, many of these adaptive designs may benefit from a multistage purposeful sampling strategy at early phases of the clinical trial to identify the range of variation and core characteristics of study participants. This information can then be used for the purposes of identifying optimal dose of treatment, limiting sample size, randomizing participants into different enrollment procedures, determining who should be eligible for random assignment (as in the optimal design) to maximize treatment adherence and minimize dropout, or identifying incentives and motives that may be used to encourage participation in the trial itself.

Alternatives to the classic RCT design may also be desirable in studies that adopt a community-based participatory research framework (Minkler & Wallerstein, 2003), considered to be an important tool in conducting implementation research (Palinkas & Soydan, 2012). Such frameworks suggest that identification and recruitment of potential study participants will place greater emphasis on the priorities and “local knowledge” of community partners than on the need to sample for variation or uniformity. In this instance, the first stage of sampling may approximate the strategy of sampling politically important cases (Patton, 2002), followed by other sampling strategies intended to maximize variation in stakeholder opinions or experience.

On the basis of this review, the following recommendations are offered for the use of purposeful sampling in mixed method implementation research. First, many mixed methods studies in health services research and implementation science do not clearly identify or provide a rationale for the sampling procedure for either quantitative or qualitative components of the study ( Wisdom et al., 2011 ), so a primary recommendation is for researchers to clearly describe their sampling strategies and provide the rationale for the strategy.

Second, use of a single stage strategy for purposeful sampling for qualitative portions of a mixed methods implementation study should adhere to the same general principles that govern all forms of sampling, qualitative or quantitative. Kemper and colleagues (2003) identify seven such principles: 1) the sampling strategy should stem logically from the conceptual framework as well as the research questions being addressed by the study; 2) the sample should be able to generate a thorough database on the type of phenomenon under study; 3) the sample should at least allow the possibility of drawing clear inferences and credible explanations from the data; 4) the sampling strategy must be ethical; 5) the sampling plan should be feasible; 6) the sampling plan should allow the researcher to transfer/generalize the conclusions of the study to other settings or populations; and 7) the sampling scheme should be as efficient as practical.

Third, the field of implementation research is at a stage where qualitative methods are intended primarily to explore the barriers and facilitators of EBP implementation and to develop new conceptual models of implementation process and outcomes. This is especially important in state implementation research, where fiscal necessities are driving policy reforms for which knowledge about EBP implementation barriers and facilitators is urgently needed. Thus, a multistage strategy for purposeful sampling should begin first with a broader view with an emphasis on variation or dispersion and move to a narrower view with an emphasis on similarity or central tendencies. Such a strategy is necessary for the task of finding the optimal balance between internal and external validity.

Fourth, if we assume that probability sampling will be the preferred strategy for the quantitative components of most implementation research, the selection of a single or multistage purposeful sampling strategy should be based, in part, on how it relates to the probability sample, either for the purpose of answering the same question (in which case a strategy emphasizing variation and dispersion is preferred) or for answering related questions (in which case a strategy emphasizing similarity and central tendencies is preferred).

Fifth, it should be kept in mind that all sampling procedures, whether purposeful or probability, are designed to capture elements of both similarity and difference, of both centrality and dispersion, because both elements are essential to the task of generating new knowledge through the processes of comparison and contrast. Selecting a strategy that gives emphasis to one does not mean that it cannot be used for the other. Having said that, our analysis has assumed at least some degree of concordance between the breadth of understanding associated with quantitative probability sampling and purposeful sampling strategies that emphasize variation on the one hand, and between depth of understanding and purposeful sampling strategies that emphasize similarity on the other. While there may be some merit to that assumption, depth of understanding requires an understanding of both variation and common elements.

Finally, it should also be kept in mind that quantitative data can be generated from a purposeful sampling strategy and qualitative data can be generated from a probability sampling strategy. Each set of data is suited to a specific objective and each must adhere to a specific set of assumptions and requirements. Nevertheless, the promise of mixed methods, like the promise of implementation science, lies in its ability to move beyond the confines of existing methodological approaches and develop innovative solutions to important and complex problems. For states engaged in EBP implementation, the need for these solutions is urgent.


Multistage Purposeful Sampling Strategies

Acknowledgments

This study was funded through a grant from the National Institute of Mental Health (P30-MH090322: K. Hoagwood, PI).


Writing Survey Questions

Perhaps the most important part of the survey process is the creation of questions that accurately measure the opinions, experiences and behaviors of the public. Accurate random sampling will be wasted if the information gathered is built on a shaky foundation of ambiguous or biased questions. Creating good measures involves both writing good questions and organizing them to form the questionnaire.

Questionnaire design is a multistage process that requires attention to many details at once. Designing the questionnaire is complicated because surveys can ask about topics in varying degrees of detail, questions can be asked in different ways, and questions asked earlier in a survey may influence how people respond to later questions. Researchers are also often interested in measuring change over time and therefore must be attentive to how opinions or behaviors have been measured in prior surveys.

Surveyors may conduct pilot tests or focus groups in the early stages of questionnaire development in order to better understand how people think about an issue or comprehend a question. Pretesting a survey is an essential step in the questionnaire design process to evaluate how people respond to the overall questionnaire and specific questions, especially when questions are being introduced for the first time.

For many years, surveyors approached questionnaire design as an art, but substantial research over the past forty years has demonstrated that there is a lot of science involved in crafting a good survey questionnaire. Here, we discuss the pitfalls and best practices of designing questionnaires.

Question development

There are several steps involved in developing a survey questionnaire. The first is identifying what topics will be covered in the survey. For Pew Research Center surveys, this involves thinking about what is happening in our nation and the world and what will be relevant to the public, policymakers and the media. We also track opinion on a variety of issues over time, updating these trend questions on a regular basis to better understand whether people’s opinions are changing.

At Pew Research Center, questionnaire development is a collaborative and iterative process where staff meet to discuss drafts of the questionnaire several times over the course of its development. We frequently test new survey questions ahead of time through qualitative research methods such as focus groups, cognitive interviews, pretesting (often using an online, opt-in sample), or a combination of these approaches. Researchers use insights from this testing to refine questions before they are asked in a production survey, such as on the ATP.

Measuring change over time

Many surveyors want to track changes over time in people’s attitudes, opinions and behaviors. To measure change, questions are asked at two or more points in time. A cross-sectional design surveys different people in the same population at multiple points in time. A panel, such as the ATP, surveys the same people over time. However, it is common for the set of people in survey panels to change over time as new panelists are added and some prior panelists drop out. Many of the questions in Pew Research Center surveys have been asked in prior polls. Asking the same questions at different points in time allows us to report on changes in the overall views of the general public (or a subset of the public, such as registered voters, men or Black Americans), or what we call “trending the data”.

When measuring change over time, it is important to use the same question wording and to be sensitive to where the question is asked in the questionnaire to maintain a similar context as when the question was asked previously (see question wording and question order for further information). All of our survey reports include a topline questionnaire that provides the exact question wording and sequencing, along with results from the current survey and previous surveys in which we asked the question.

The Center’s transition from conducting U.S. surveys by live telephone interviewing to an online panel (around 2014 to 2020) complicated some opinion trends, but not others. Trend questions that ask about sensitive topics (e.g., personal finances or attending religious services) or that elicited volunteered answers over the phone (e.g., “neither” or “don’t know”) tended to show larger differences than other trends when shifting from phone polls to the online ATP. The Center adopted several strategies for coping with changes to data trends that may be related to this change in methodology. If there is evidence suggesting that a change in a trend stems from switching from phone to online measurement, Center reports flag that possibility for readers to try to head off confusion or erroneous conclusions.

Open- and closed-ended questions

One of the most significant decisions that can affect how people answer questions is whether the question is posed as an open-ended question, where respondents provide a response in their own words, or a closed-ended question, where they are asked to choose from a list of answer choices.

For example, in a poll conducted after the 2008 presidential election, people responded very differently to two versions of the question: “What one issue mattered most to you in deciding how you voted for president?” One was closed-ended and the other open-ended. In the closed-ended version, respondents were provided five options and could volunteer an option not on the list.

When explicitly offered the economy as a response, more than half of respondents (58%) chose this answer; only 35% of those who responded to the open-ended version volunteered the economy. Moreover, among those asked the closed-ended version, fewer than one-in-ten (8%) provided a response other than the five they were read. By contrast, fully 43% of those asked the open-ended version provided a response not listed in the closed-ended version of the question. All of the other issues were chosen at least slightly more often when explicitly offered in the closed-ended version than in the open-ended version. (Also see “High Marks for the Campaign, a High Bar for Obama” for more information.)


Researchers will sometimes conduct a pilot study using open-ended questions to discover which answers are most common. They will then develop closed-ended questions based on that pilot study that include the most common responses as answer choices. In this way, the questions may better reflect what the public is thinking and how they view a particular issue, or may bring to light certain issues the researchers were not aware of.

When asking closed-ended questions, the choice of options provided, how each option is described, the number of response options offered, and the order in which options are read can all influence how people respond. One example of the impact of how categories are defined can be found in a Pew Research Center poll conducted in January 2002. When half of the sample was asked whether it was “more important for President Bush to focus on domestic policy or foreign policy,” 52% chose domestic policy while only 34% said foreign policy. When the category “foreign policy” was narrowed to a specific aspect – “the war on terrorism” – far more people chose it; only 33% chose domestic policy while 52% chose the war on terrorism.

In most circumstances, the number of answer choices should be kept to a relatively small number – just four or perhaps five at most – especially in telephone surveys. Psychological research indicates that people have a hard time keeping more than this number of choices in mind at one time. When the question is asking about an objective fact and/or demographics, such as the religious affiliation of the respondent, more categories can be used. In fact, they are encouraged to ensure inclusivity. For example, Pew Research Center’s standard religion questions include more than 12 different categories, beginning with the most common affiliations (Protestant and Catholic). Most respondents have no trouble with this question because they can expect to see their religious group within that list in a self-administered survey.

In addition to the number and choice of response options offered, the order of answer categories can influence how people respond to closed-ended questions. Research suggests that in telephone surveys respondents more frequently choose items heard later in a list (a “recency effect”), and in self-administered surveys, they tend to choose items at the top of the list (a “primacy” effect).

Because of concerns about the effects of category order on responses to closed-ended questions, many sets of response options in Pew Research Center’s surveys are programmed to be randomized to ensure that the options are not asked in the same order for each respondent. Rotating or randomizing means that questions or items in a list are not asked in the same order to each respondent. Answers to questions are sometimes affected by questions that precede them. By presenting questions in a different order to each respondent, we ensure that each question gets asked in the same context as every other question the same number of times (e.g., first, last or any position in between). This does not eliminate the potential impact of previous questions on the current question, but it does ensure that this bias is spread randomly across all of the questions or items in the list. For instance, in the example discussed above about what issue mattered most in people’s vote, the order of the five issues in the closed-ended version of the question was randomized so that no one issue appeared early or late in the list for all respondents.
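
In code, per-respondent randomization amounts to an independent shuffle of the answer choices for each respondent. The sketch below uses an invented issue list (the actual five issues from the 2008 question are not reproduced here) and seeds the shuffle with a respondent id so each respondent's order is reproducible; production survey software handles this internally.

```python
import random

# Hypothetical answer choices for a closed-ended "most important issue" item.
OPTIONS = ["the economy", "health care", "terrorism",
           "energy policy", "foreign affairs"]

def randomized_options(options, respondent_id):
    """Return the answer choices in a per-respondent random order.

    Seeding with the respondent id keeps each individual's order stable
    across retrieval while spreading order effects randomly over the sample.
    """
    shuffled = list(options)
    random.Random(respondent_id).shuffle(shuffled)
    return shuffled
```

Because each respondent gets an independent ordering, any primacy or recency bias is distributed evenly across all options rather than eliminated.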

Questions with ordinal response categories – those with an underlying order (e.g., excellent, good, only fair, poor OR very favorable, mostly favorable, mostly unfavorable, very unfavorable) – are generally not randomized because the order of the categories conveys important information to help respondents answer the question. Generally, these types of scales should be presented in order so respondents can easily place their responses along the continuum, but the order can be reversed for some respondents. For example, in one of Pew Research Center’s questions about abortion, half of the sample is asked whether abortion should be “legal in all cases, legal in most cases, illegal in most cases, illegal in all cases,” while the other half of the sample is asked the same question with the response categories read in reverse order, starting with “illegal in all cases.” Again, reversing the order does not eliminate the recency effect but distributes it randomly across the population.
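
Direction reversal for ordinal scales differs from full randomization in that the continuum is preserved. A minimal sketch, using the abortion item's four categories and assigning each respondent's scale direction from a seeded random draw (a stand-in for however the half-sample split is actually implemented):

```python
import random

SCALE = ["legal in all cases", "legal in most cases",
         "illegal in most cases", "illegal in all cases"]

def scale_for(respondent_id):
    """Present the ordinal scale forward for a random half of respondents
    and reversed for the other half.

    The categories are never shuffled; only the direction of the
    continuum flips, so recency effects are distributed across both ends.
    """
    forward = random.Random(respondent_id).random() < 0.5
    return list(SCALE) if forward else list(reversed(SCALE))
```

Either direction keeps adjacent categories adjacent, which is what lets respondents place themselves along the continuum.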

Question wording

The choice of words and phrases in a question is critical in expressing the meaning and intent of the question to the respondent and ensuring that all respondents interpret the question the same way. Even small wording differences can substantially affect the answers people provide.


An example of a wording difference that had a significant impact on responses comes from a January 2003 Pew Research Center survey. When people were asked whether they would “favor or oppose taking military action in Iraq to end Saddam Hussein’s rule,” 68% said they favored military action while 25% said they opposed military action. However, when asked whether they would “favor or oppose taking military action in Iraq to end Saddam Hussein’s rule even if it meant that U.S. forces might suffer thousands of casualties,” responses were dramatically different; only 43% said they favored military action, while 48% said they opposed it. The introduction of U.S. casualties altered the context of the question and influenced whether people favored or opposed military action in Iraq.

There has been a substantial amount of research to gauge the impact of different ways of asking questions and how to minimize differences in the way respondents interpret what is being asked. The issues related to question wording are more numerous than can be treated adequately in this short space, but below are a few of the important things to consider:

First, it is important to ask questions that are clear and specific and that each respondent will be able to answer. If a question is open-ended, it should be evident to respondents that they can answer in their own words and what type of response they should provide (an issue or problem, a month, number of days, etc.). Closed-ended questions should include all reasonable responses (i.e., the list of options is exhaustive), and the response categories should not overlap (i.e., response options should be mutually exclusive). Further, it is important to discern when it is best to use forced-choice closed-ended questions (often denoted with a radio button in online surveys) versus “select-all-that-apply” lists (or check-all boxes). A 2019 Center study found that forced-choice questions tend to yield more accurate responses, especially for sensitive questions. Based on that research, the Center generally avoids using select-all-that-apply questions.

It is also important to ask only one question at a time. Questions that ask respondents to evaluate more than one concept (known as double-barreled questions) – such as “How much confidence do you have in President Obama to handle domestic and foreign policy?” – are difficult for respondents to answer and often lead to responses that are difficult to interpret. In this example, it would be more effective to ask two separate questions, one about domestic policy and another about foreign policy.

In general, questions that use simple and concrete language are more easily understood by respondents. It is especially important to consider the education level of the survey population when thinking about how easy it will be for respondents to interpret and answer a question. Double negatives (e.g., “Do you favor or oppose not allowing gays and lesbians to legally marry?”) or unfamiliar abbreviations or jargon (e.g., ANWR instead of Arctic National Wildlife Refuge) can result in respondent confusion and should be avoided.

Similarly, it is important to consider whether certain words may be viewed as biased or potentially offensive to some respondents, as well as the emotional reaction that some words may provoke. For example, in a 2005 Pew Research Center survey, 51% of respondents said they favored “making it legal for doctors to give terminally ill patients the means to end their lives,” but only 44% said they favored “making it legal for doctors to assist terminally ill patients in committing suicide.” Although both versions of the question are asking about the same thing, the reaction of respondents was different. In another example, respondents have reacted differently to questions using the word “welfare” as opposed to the more generic “assistance to the poor.” Several experiments have shown that there is much greater public support for expanding “assistance to the poor” than for expanding “welfare.”

We often write two versions of a question and ask half of the survey sample one version of the question and the other half the second version. Thus, we say we have two  forms  of the questionnaire. Respondents are assigned randomly to receive either form, so we can assume that the two groups of respondents are essentially identical. On questions where two versions are used, significant differences in the answers between the two forms tell us that the difference is a result of the way we worded the two versions.
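The split-form design described above can be sketched as (hypothetical Python helper names; random assignment to forms is the key property):

```python
import random

def split_ballot(respondent_ids, rng=random):
    """Randomly assign each respondent to one of two question wordings.
    Random assignment makes the two groups comparable in expectation, so
    a significant difference in answers can be attributed to the wording."""
    return {rid: ("form_a" if rng.random() < 0.5 else "form_b")
            for rid in respondent_ids}

def percent_favor(responses):
    """Share (in percent) of 'favor' answers among 'favor'/'oppose' strings."""
    return 100.0 * sum(r == "favor" for r in responses) / len(responses)
```

Comparing `percent_favor` between the two forms then estimates the wording effect itself, since nothing else systematically distinguishes the groups.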


One of the most common formats used in survey questions is the “agree-disagree” format. In this type of question, respondents are asked whether they agree or disagree with a particular statement. Research has shown that, compared with the better educated and better informed, less educated and less informed respondents have a greater tendency to agree with such statements. This is sometimes called an “acquiescence bias” (since some kinds of respondents are more likely to acquiesce to the assertion than are others). This behavior is even more pronounced when an interviewer is present than when the survey is self-administered. A better practice is to offer respondents a choice between alternative statements. A Pew Research Center experiment with one of its routinely asked values questions illustrates the difference that question format can make. Not only does the forced-choice format yield a very different result overall from the agree-disagree format, but the pattern of answers between respondents with more or less formal education also tends to be very different.

One other challenge in developing questionnaires is what is called “social desirability bias.” People have a natural tendency to want to be accepted and liked, and this may lead people to provide inaccurate answers to questions that deal with sensitive subjects. Research has shown that respondents understate alcohol and drug use, tax evasion and racial bias. They also may overstate church attendance, charitable contributions and the likelihood that they will vote in an election. Researchers attempt to account for this potential bias in crafting questions about these topics. For instance, when Pew Research Center surveys ask about past voting behavior, it is important to note that circumstances may have prevented the respondent from voting: “In the 2012 presidential election between Barack Obama and Mitt Romney, did things come up that kept you from voting, or did you happen to vote?” The choice of response options can also make it easier for people to be honest. For example, a question about church attendance might include three of six response options that indicate infrequent attendance. Research has also shown that social desirability bias can be greater when an interviewer is present (e.g., telephone and face-to-face surveys) than when respondents complete the survey themselves (e.g., paper and web surveys).

Lastly, because slight modifications in question wording can affect responses, identical question wording should be used when the intention is to compare results to those from earlier surveys. Similarly, because question wording and responses can vary based on the mode used to survey respondents, researchers should carefully evaluate the likely effects on trend measurements if a different survey mode will be used to assess change in opinion over time.

Question order

Once the survey questions are developed, particular attention should be paid to how they are ordered in the questionnaire. Surveyors must be attentive to how questions early in a questionnaire may have unintended effects on how respondents answer subsequent questions. Researchers have demonstrated that the order in which questions are asked can influence how people respond; earlier questions can unintentionally provide context for the questions that follow (these effects are called “order effects”).

One kind of order effect can be seen in responses to open-ended questions. Pew Research Center surveys generally ask open-ended questions about national problems, opinions about leaders and similar topics near the beginning of the questionnaire. If closed-ended questions that relate to the topic are placed before the open-ended question, respondents are much more likely to mention concepts or considerations raised in those earlier questions when responding to the open-ended question.

For closed-ended opinion questions, there are two main types of order effects: contrast effects (where the order results in greater differences in responses) and assimilation effects (where responses are more similar as a result of their order).


An example of a contrast effect can be seen in a Pew Research Center poll conducted in October 2003, a dozen years before same-sex marriage was legalized in the U.S. That poll found that people were more likely to favor allowing gays and lesbians to enter into legal agreements that give them the same rights as married couples when this question was asked after one about whether they favored or opposed allowing gays and lesbians to marry (45% favored legal agreements when asked after the marriage question, but 37% favored legal agreements without the immediate preceding context of a question about same-sex marriage). Responses to the question about same-sex marriage, meanwhile, were not significantly affected by its placement before or after the legal agreements question.


Another experiment embedded in a December 2008 Pew Research Center poll also resulted in a contrast effect. When people were asked “All in all, are you satisfied or dissatisfied with the way things are going in this country today?” immediately after having been asked “Do you approve or disapprove of the way George W. Bush is handling his job as president?”, 88% said they were dissatisfied, compared with only 78% without the context of the prior question.

Responses to presidential approval remained relatively unchanged whether national satisfaction was asked before or after it. A similar finding occurred in December 2004 when both satisfaction and presidential approval were much higher (57% were dissatisfied when Bush approval was asked first vs. 51% when general satisfaction was asked first).

Several studies also have shown that asking a more specific question before a more general question (e.g., asking about happiness with one’s marriage before asking about one’s overall happiness) can result in a contrast effect. Although some exceptions have been found, people tend to avoid redundancy by excluding the more specific question from the general rating.

Assimilation effects occur when responses to two questions are more consistent or closer together because of their placement in the questionnaire. We found an example of an assimilation effect in a Pew Research Center poll conducted in November 2008 when we asked whether Republican leaders should work with Obama or stand up to him on important issues and whether Democratic leaders should work with Republican leaders or stand up to them on important issues. People were more likely to say that Republican leaders should work with Obama when the question was preceded by the one asking what Democratic leaders should do in working with Republican leaders (81% vs. 66%). However, when people were first asked about Republican leaders working with Obama, fewer said that Democratic leaders should work with Republican leaders (71% vs. 82%).

The order in which questions are asked is of particular importance when tracking trends over time. As a result, care should be taken to ensure that the context is similar each time a question is asked. Modifying the context of the question could call into question any observed changes over time (see measuring change over time for more information).

A questionnaire, like a conversation, should be grouped by topic and unfold in a logical order. It is often helpful to begin the survey with simple questions that respondents will find interesting and engaging. Throughout the survey, an effort should be made to keep the survey interesting and not overburden respondents with several difficult questions right after one another. Demographic questions such as income, education or age should not be asked near the beginning of a survey unless they are needed to determine eligibility for the survey or for routing respondents through particular sections of the questionnaire. Even then, it is best to precede such items with more interesting and engaging questions. One virtue of survey panels like the ATP is that demographic questions usually only need to be asked once a year, not in each survey.



ABOUT PEW RESEARCH CENTER  Pew Research Center is a nonpartisan fact tank that informs the public about the issues, attitudes and trends shaping the world. It conducts public opinion polling, demographic research, media content analysis and other empirical social science research. Pew Research Center does not take policy positions. It is a subsidiary of  The Pew Charitable Trusts .

Copyright 2024 Pew Research Center

