
Research Methods: Quantitative, Qualitative, and More (Overview)

  • Quantitative Research
  • Qualitative Research
  • Data Science Methods (Machine Learning, AI, Big Data)
  • Text Mining and Computational Text Analysis
  • Evidence Synthesis/Systematic Reviews
  • Get Data, Get Help!

About Research Methods

This guide provides an overview of research methods, how to choose and use them, and the support and resources available at UC Berkeley.

As Patten and Newhart note in the book Understanding Research Methods, "Research methods are the building blocks of the scientific enterprise. They are the 'how' for building systematic knowledge. The accumulation of knowledge through research is by its nature a collective endeavor. Each well-designed study provides evidence that may support, amend, refute, or deepen the understanding of existing knowledge...Decisions are important throughout the practice of research and are designed to help researchers collect evidence that includes the full spectrum of the phenomenon under study, to maintain logical rules, and to mitigate or account for possible sources of bias. In many ways, learning research methods is learning how to see and make these decisions."

The choice of methods varies by discipline, by the kind of phenomenon being studied and the data being used to study it, by the technology available, and more. This guide is an introduction, but if you don't see what you need here, contact your subject librarian and/or check whether there is a library research guide that answers your question.

Suggestions for changes and additions to this guide are welcome! 

START HERE: SAGE Research Methods

Without question, the most comprehensive resource available from the library is SAGE Research Methods. See the online guide to this one-stop collection; some helpful links are below:

  • SAGE Research Methods
  • Little Green Books (Quantitative Methods)
  • Little Blue Books (Qualitative Methods)
  • Dictionaries and Encyclopedias
  • Case studies of real research projects
  • Sample datasets for hands-on practice
  • Streaming video: see methods come to life
  • Methodspace: a community for researchers
  • SAGE Research Methods Course Mapping

Library Data Services at UC Berkeley

Library Data Services Program and Digital Scholarship Services

The LDSP offers a variety of services and tools! From this link, check out pages for each of the following topics: discovering data, managing data, collecting data, GIS data, text data mining, publishing data, digital scholarship, open science, and the Research Data Management Program.

Be sure also to check out the visual guide to where to seek assistance on campus with any research question you may have!

Library GIS Services

Other Data Services at Berkeley

  • D-Lab: supports Berkeley faculty, staff, and graduate students with research in data-intensive social science, including a wide range of training and workshop offerings.
  • Dryad: a simple self-service tool for researchers to use in publishing their datasets. It provides tools for the effective publication of and access to research data.
  • Geospatial Innovation Facility (GIF): provides leadership and training across a broad array of integrated mapping technologies on campus.
  • Research Data Management: a UC Berkeley guide and consulting service for research data management issues.

General Research Methods Resources

Here are some general resources for assistance:

  • Assistance from ICPSR (must create an account to access): Getting Help with Data, and Resources for Students
  • Wiley Stats Ref for background information on statistics topics
  • Survey Documentation and Analysis (SDA): a program for easy web-based analysis of survey data

Consultants

  • D-Lab/Data Science Discovery Consultants: request help with your research project from peer consultants.
  • Research data management (RDM) consulting: meet with RDM consultants before designing the data security, storage, and sharing aspects of your qualitative project.
  • Statistics Department Consulting Services: advanced graduate students, under faculty supervision, are available to consult during specified hours in the Fall and Spring semesters.

Related Resources

  • IRB/CPHS: qualitative research projects with human subjects often require an ethics review.
  • OURS (Office of Undergraduate Research and Scholarships): supports undergraduates who want to embark on research projects and assistantships. In particular, check out their "Getting Started in Research" workshops.
  • Sponsored Projects: works with researchers applying for major external grants.
  • Last Updated: Apr 3, 2023 3:14 PM
  • URL: https://guides.lib.berkeley.edu/researchmethods

Research Methodology – Types, Examples and Writing Guide

Research Methodology

Definition:

Research methodology refers to the systematic and scientific approach used to conduct research, investigate problems, and gather data and information for a specific purpose. It involves the techniques and procedures used to identify, collect, analyze, and interpret data to answer research questions or solve research problems. It also encompasses the philosophical and theoretical frameworks that guide the research process.

Structure of Research Methodology

Research methodology formats can vary depending on the specific requirements of the research project, but the following is a basic example of a structure for a research methodology section:

I. Introduction

  • Provide an overview of the research problem and the need for a research methodology section
  • Outline the main research questions and objectives

II. Research Design

  • Explain the research design chosen and why it is appropriate for the research question(s) and objectives
  • Discuss any alternative research designs considered and why they were not chosen
  • Describe the research setting and participants (if applicable)

III. Data Collection Methods

  • Describe the methods used to collect data (e.g., surveys, interviews, observations)
  • Explain how the data collection methods were chosen and why they are appropriate for the research question(s) and objectives
  • Detail any procedures or instruments used for data collection

IV. Data Analysis Methods

  • Describe the methods used to analyze the data (e.g., statistical analysis, content analysis)
  • Explain how the data analysis methods were chosen and why they are appropriate for the research question(s) and objectives
  • Detail any procedures or software used for data analysis

V. Ethical Considerations

  • Discuss any ethical issues that may arise from the research and how they were addressed
  • Explain how informed consent was obtained (if applicable)
  • Detail any measures taken to ensure confidentiality and anonymity

VI. Limitations

  • Identify any potential limitations of the research methodology and how they may impact the results and conclusions

VII. Conclusion

  • Summarize the key aspects of the research methodology section
  • Explain how the research methodology addresses the research question(s) and objectives

Research Methodology Types

Types of Research Methodology are as follows:

Quantitative Research Methodology

This is a research methodology that involves the collection and analysis of numerical data using statistical methods. This type of research is often used to study cause-and-effect relationships and to make predictions.

Qualitative Research Methodology

This is a research methodology that involves the collection and analysis of non-numerical data such as words, images, and observations. This type of research is often used to explore complex phenomena, to gain an in-depth understanding of a particular topic, and to generate hypotheses.

Mixed-Methods Research Methodology

This is a research methodology that combines elements of both quantitative and qualitative research. This approach can be particularly useful for studies that aim to explore complex phenomena and to provide a more comprehensive understanding of a particular topic.

Case Study Research Methodology

This is a research methodology that involves in-depth examination of a single case or a small number of cases. Case studies are often used in psychology, sociology, and anthropology to gain a detailed understanding of a particular individual or group.

Action Research Methodology

This is a research methodology that involves a collaborative process between researchers and practitioners to identify and solve real-world problems. Action research is often used in education, healthcare, and social work.

Experimental Research Methodology

This is a research methodology that involves the manipulation of one or more independent variables to observe their effects on a dependent variable. Experimental research is often used to study cause-and-effect relationships and to make predictions.

Survey Research Methodology

This is a research methodology that involves the collection of data from a sample of individuals using questionnaires or interviews. Survey research is often used to study attitudes, opinions, and behaviors.

Grounded Theory Research Methodology

This is a research methodology that involves the development of theories based on the data collected during the research process. Grounded theory is often used in sociology and anthropology to generate theories about social phenomena.

Research Methodology Example

An example of a research methodology section could be the following:

Research Methodology for Investigating the Effectiveness of Cognitive Behavioral Therapy in Reducing Symptoms of Depression in Adults

Introduction:

The aim of this research is to investigate the effectiveness of cognitive-behavioral therapy (CBT) in reducing symptoms of depression in adults. To achieve this objective, a randomized controlled trial (RCT) will be conducted using a mixed-methods approach.

Research Design:

The study will follow a pre-test and post-test design with two groups: an experimental group receiving CBT and a control group receiving no intervention. The study will also include a qualitative component, in which semi-structured interviews will be conducted with a subset of participants to explore their experiences of receiving CBT.

Participants:

Participants will be recruited from community mental health clinics in the local area. The sample will consist of 100 adults aged 18-65 years old who meet the diagnostic criteria for major depressive disorder. Participants will be randomly assigned to either the experimental group or the control group.

Intervention:

The experimental group will receive 12 weekly sessions of CBT, each lasting 60 minutes. The intervention will be delivered by licensed mental health professionals who have been trained in CBT. The control group will receive no intervention during the study period.

Data Collection:

Quantitative data will be collected through the use of standardized measures such as the Beck Depression Inventory-II (BDI-II) and the Generalized Anxiety Disorder-7 (GAD-7). Data will be collected at baseline, immediately after the intervention, and at a 3-month follow-up. Qualitative data will be collected through semi-structured interviews with a subset of participants from the experimental group. The interviews will be conducted at the end of the intervention period, and will explore participants’ experiences of receiving CBT.

Data Analysis:

Quantitative data will be analyzed using descriptive statistics, t-tests, and mixed-model analyses of variance (ANOVA) to assess the effectiveness of the intervention. Qualitative data will be analyzed using thematic analysis to identify common themes and patterns in participants’ experiences of receiving CBT.
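
To make the quantitative step more concrete, here is a minimal sketch (not part of the study protocol) of the kind of two-sample comparison described above, using hypothetical BDI-II change scores and only the Python standard library. In practice a statistics package (e.g. R, SPSS, or SciPy) would compute the p-value and the mixed-model ANOVA as well:

```python
import math
import statistics

def pooled_two_sample_t(group_a, group_b):
    """Classic pooled-variance two-sample t statistic.

    Assumes roughly equal group variances; with unequal variances a
    Welch correction would be used instead.
    """
    n_a, n_b = len(group_a), len(group_b)
    mean_a, mean_b = statistics.mean(group_a), statistics.mean(group_b)
    var_a, var_b = statistics.variance(group_a), statistics.variance(group_b)
    pooled_var = ((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2)
    se = math.sqrt(pooled_var * (1 / n_a + 1 / n_b))
    return (mean_a - mean_b) / se

# Hypothetical BDI-II reductions (baseline minus post-test) for a CBT
# group and a control group -- illustrative numbers only.
cbt_change = [12, 9, 15, 11, 14, 10, 13, 8]
control_change = [4, 6, 2, 5, 3, 7, 4, 5]

t_stat = pooled_two_sample_t(cbt_change, control_change)
print(round(t_stat, 2))  # -> 6.76 (large positive t: bigger reduction under CBT)
```

A large t statistic like this would then be compared against the t distribution with n_a + n_b - 2 degrees of freedom to obtain the p-value reported in the results.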

Ethical Considerations:

This study will comply with ethical guidelines for research involving human subjects. Participants will provide informed consent before participating in the study, and their privacy and confidentiality will be protected throughout the study. Any adverse events or reactions will be reported and managed appropriately.

Data Management:

All data collected will be kept confidential and stored securely using password-protected databases. Identifying information will be removed from qualitative data transcripts to ensure participants’ anonymity.

Limitations:

One potential limitation of this study is that it only focuses on one type of psychotherapy, CBT, and may not generalize to other types of therapy or interventions. Another limitation is that the study will only include participants from community mental health clinics, which may not be representative of the general population.

Conclusion:

This research aims to investigate the effectiveness of CBT in reducing symptoms of depression in adults. By using a randomized controlled trial and a mixed-methods approach, the study will provide valuable insights into the mechanisms underlying the relationship between CBT and depression. The results of this study will have important implications for the development of effective treatments for depression in clinical settings.

How to Write Research Methodology

Writing a research methodology involves explaining the methods and techniques you used to conduct research, collect data, and analyze results. It’s an essential section of any research paper or thesis, as it helps readers understand the validity and reliability of your findings. Here are the steps to write a research methodology:

  • Start by explaining your research question: Begin the methodology section by restating your research question and explaining why it’s important. This helps readers understand the purpose of your research and the rationale behind your methods.
  • Describe your research design: Explain the overall approach you used to conduct research. This could be a qualitative or quantitative research design, experimental or non-experimental, case study or survey, etc. Discuss the advantages and limitations of the chosen design.
  • Discuss your sample: Describe the participants or subjects you included in your study. Include details such as their demographics, sampling method, sample size, and any exclusion criteria used.
  • Describe your data collection methods: Explain how you collected data from your participants. This could include surveys, interviews, observations, questionnaires, or experiments. Include details on how you obtained informed consent, how you administered the tools, and how you minimized the risk of bias.
  • Explain your data analysis techniques: Describe the methods you used to analyze the data you collected. This could include statistical analysis, content analysis, thematic analysis, or discourse analysis. Explain how you dealt with missing data, outliers, and any other issues that arose during the analysis.
  • Discuss the validity and reliability of your research: Explain how you ensured the validity and reliability of your study. This could include measures such as triangulation, member checking, peer review, or inter-coder reliability.
  • Acknowledge any limitations of your research: Discuss any limitations of your study, including any potential threats to validity or generalizability. This helps readers understand the scope of your findings and how they might apply to other contexts.
  • Provide a summary: End the methodology section by summarizing the methods and techniques you used to conduct your research. This provides a clear overview of your research methodology and helps readers understand the process you followed to arrive at your findings.

When to Write Research Methodology

Research methodology is typically written after the research proposal has been approved and before the actual research is conducted. It should be written prior to data collection and analysis, as it provides a clear roadmap for the research project.

The research methodology is an important section of any research paper or thesis, as it describes the methods and procedures that will be used to conduct the research. It should include details about the research design, data collection methods, data analysis techniques, and any ethical considerations.

The methodology should be written in a clear and concise manner, and it should be based on established research practices and standards. It is important to provide enough detail so that the reader can understand how the research was conducted and evaluate the validity of the results.

Applications of Research Methodology

Here are some of the applications of research methodology:

  • To identify the research problem: Research methodology is used to identify the research problem, which is the first step in conducting any research.
  • To design the research: Research methodology helps in designing the research by selecting the appropriate research method, research design, and sampling technique.
  • To collect data: Research methodology provides a systematic approach to collect data from primary and secondary sources.
  • To analyze data: Research methodology helps in analyzing the collected data using various statistical and non-statistical techniques.
  • To test hypotheses: Research methodology provides a framework for testing hypotheses and drawing conclusions based on the analysis of data.
  • To generalize findings: Research methodology helps in generalizing the findings of the research to the target population.
  • To develop theories: Research methodology is used to develop new theories and modify existing theories based on the findings of the research.
  • To evaluate programs and policies: Research methodology is used to evaluate the effectiveness of programs and policies by collecting and analyzing data.
  • To improve decision-making: Research methodology helps in making informed decisions by providing reliable and valid data.

Purpose of Research Methodology

Research methodology serves several important purposes, including:

  • To guide the research process: Research methodology provides a systematic framework for conducting research. It helps researchers to plan their research, define their research questions, and select appropriate methods and techniques for collecting and analyzing data.
  • To ensure research quality: Research methodology helps researchers to ensure that their research is rigorous, reliable, and valid. It provides guidelines for minimizing bias and error in data collection and analysis, and for ensuring that research findings are accurate and trustworthy.
  • To replicate research: Research methodology provides a clear and detailed account of the research process, making it possible for other researchers to replicate the study and verify its findings.
  • To advance knowledge: Research methodology enables researchers to generate new knowledge and to contribute to the body of knowledge in their field. It provides a means for testing hypotheses, exploring new ideas, and discovering new insights.
  • To inform decision-making: Research methodology provides evidence-based information that can inform policy and decision-making in a variety of fields, including medicine, public health, education, and business.

Advantages of Research Methodology

Research methodology has several advantages that make it a valuable tool for conducting research in various fields. Here are some of the key advantages of research methodology:

  • Systematic and structured approach: Research methodology provides a systematic and structured approach to conducting research, which ensures that the research is conducted in a rigorous and comprehensive manner.
  • Objectivity: Research methodology aims to ensure objectivity in the research process, which means that the research findings are based on evidence and not influenced by personal bias or subjective opinions.
  • Replicability: Research methodology ensures that research can be replicated by other researchers, which is essential for validating research findings and ensuring their accuracy.
  • Reliability: Research methodology aims to ensure that the research findings are reliable, which means that they are consistent and can be depended upon.
  • Validity: Research methodology ensures that the research findings are valid, which means that they accurately reflect the research question or hypothesis being tested.
  • Efficiency: Research methodology provides a structured and efficient way of conducting research, which helps to save time and resources.
  • Flexibility: Research methodology allows researchers to choose the most appropriate research methods and techniques based on the research question, data availability, and other relevant factors.
  • Scope for innovation: Research methodology provides scope for innovation and creativity in designing research studies and developing new research techniques.


About the author

Muhammad Hassan, Researcher, Academic Writer, Web Developer


Grad Coach

How To Choose Your Research Methodology

Qualitative vs Quantitative vs Mixed Methods

By: Derek Jansen (MBA). Expert Reviewed By: Dr Eunice Rautenbach | June 2021

Without a doubt, one of the most common questions we receive at Grad Coach is “ How do I choose the right methodology for my research? ”. It’s easy to see why – with so many options on the research design table, it’s easy to get intimidated, especially with all the complex lingo!

In this post, we’ll explain the three overarching types of research – qualitative, quantitative and mixed methods – and how you can go about choosing the best methodological approach for your research.

Overview: Choosing Your Methodology

  • Understanding the options: qualitative research, quantitative research, mixed methods-based research
  • Choosing a research methodology: nature of the research, research area norms, practicalities


1. Understanding the options

Before we jump into the question of how to choose a research methodology, it’s useful to take a step back to understand the three overarching types of research – qualitative , quantitative and mixed methods -based research. Each of these options takes a different methodological approach.

Qualitative research utilises data that is not numbers-based. In other words, qualitative research focuses on words , descriptions , concepts or ideas – while quantitative research makes use of numbers and statistics. Qualitative research investigates the “softer side” of things to explore and describe, while quantitative research focuses on the “hard numbers”, to measure differences between variables and the relationships between them.

Importantly, qualitative research methods are typically used to explore and gain a deeper understanding of the complexity of a situation – to draw a rich picture . In contrast to this, quantitative methods are usually used to confirm or test hypotheses . In other words, they have distinctly different purposes. The table below highlights a few of the key differences between qualitative and quantitative research – you can learn more about the differences here.

Qualitative research:

  • Uses an inductive approach
  • Is used to build theories
  • Takes a subjective approach
  • Adopts an open and flexible approach
  • The researcher is close to the respondents
  • Interviews and focus groups are oftentimes used to collect word-based data
  • Generally draws on small sample sizes
  • Uses qualitative data analysis techniques (e.g. content analysis, thematic analysis, etc.)

Quantitative research:

  • Uses a deductive approach
  • Is used to test theories
  • Takes an objective approach
  • Adopts a closed, highly planned approach
  • The researcher is disconnected from respondents
  • Surveys or laboratory equipment are often used to collect number-based data
  • Generally requires large sample sizes
  • Uses statistical analysis techniques to make sense of the data

Mixed methods -based research, as you’d expect, attempts to bring these two types of research together, drawing on both qualitative and quantitative data. Quite often, mixed methods-based studies will use qualitative research to explore a situation and develop a potential model of understanding (this is called a conceptual framework), and then go on to use quantitative methods to test that model empirically.

In other words, while qualitative and quantitative methods (and the philosophies that underpin them) are completely different, they are not at odds with each other. It’s not a competition of qualitative vs quantitative. On the contrary, they can be used together to develop a high-quality piece of research. Of course, this is easier said than done, so we usually recommend that first-time researchers stick to a single approach , unless the nature of their study truly warrants a mixed-methods approach.

The key takeaway here, and the reason we started by looking at the three options, is that it’s important to understand that each methodological approach has a different purpose – for example, to explore and understand situations (qualitative), to test and measure (quantitative) or to do both. They’re not simply alternative tools for the same job. 

Right – now that we’ve got that out of the way, let’s look at how you can go about choosing the right methodology for your research.

Methodology choices in research

2. How to choose a research methodology

To choose the right research methodology for your dissertation or thesis, you need to consider three important factors . Based on these three factors, you can decide on your overarching approach – qualitative, quantitative or mixed methods. Once you’ve made that decision, you can flesh out the finer details of your methodology, such as the sampling , data collection methods and analysis techniques (we discuss these separately in other posts ).

The three factors you need to consider are:

  • The nature of your research aims, objectives and research questions
  • The methodological approaches taken in the existing literature
  • Practicalities and constraints

Let’s take a look at each of these.

Factor #1: The nature of your research

As I mentioned earlier, each type of research (and therefore, research methodology), whether qualitative, quantitative or mixed, has a different purpose and helps solve a different type of question. So, it’s logical that the key deciding factor in terms of which research methodology you adopt is the nature of your research aims, objectives and research questions .

But, what types of research exist?

Broadly speaking, research can fall into one of three categories:

  • Exploratory – getting a better understanding of an issue and potentially developing a theory regarding it
  • Confirmatory – confirming a potential theory or hypothesis by testing it empirically
  • A mix of both – building a potential theory or hypothesis and then testing it

As a rule of thumb, exploratory research tends to adopt a qualitative approach , whereas confirmatory research tends to use quantitative methods . This isn’t set in stone, but it’s a very useful heuristic. Naturally then, research that combines a mix of both, or is seeking to develop a theory from the ground up and then test that theory, would utilize a mixed-methods approach.

Exploratory vs confirmatory research

Let’s look at an example in action.

If your research aims were to understand the perspectives of war veterans regarding certain political matters, you’d likely adopt a qualitative methodology, making use of interviews to collect data and one or more qualitative data analysis methods to make sense of the data.

If, on the other hand, your research aims involved testing a set of hypotheses regarding the link between political leaning and income levels, you’d likely adopt a quantitative methodology, using numbers-based data from a survey to measure the links between variables and/or constructs .

So, the first (and most important) thing you need to consider when deciding which methodological approach to use for your research project is the nature of your research aims, objectives and research questions. Specifically, you need to assess whether your research leans in an exploratory or confirmatory direction or involves a mix of both.

The importance of achieving solid alignment between these three factors and your methodology can’t be overstated. If they’re misaligned, you’re going to be forcing a square peg into a round hole. In other words, you’ll be using the wrong tool for the job, and your research will become a disjointed mess.

If your research is a mix of both exploratory and confirmatory, but you have a tight word count limit, you may need to consider trimming down the scope a little and focusing on one or the other. One methodology executed well has a far better chance of earning marks than a poorly executed mixed methods approach. So, don’t try to be a hero, unless there is a very strong underpinning logic.


Factor #2: The disciplinary norms

Choosing the right methodology for your research also involves looking at the approaches used by other researchers in the field, and studies with similar research aims and objectives to yours. Oftentimes, within a discipline, there is a common methodological approach (or set of approaches) used in studies. While this doesn’t mean you should follow the herd “just because”, you should at least consider these approaches and evaluate their merit within your context.

A major benefit of reviewing the research methodologies used by similar studies in your field is that you can often piggyback on the data collection techniques that other (more experienced) researchers have developed. For example, if you’re undertaking a quantitative study, you can often find tried and tested survey scales with high Cronbach’s alphas. These are usually included in the appendices of journal articles, so you don’t even have to contact the original authors. By using these, you’ll save a lot of time and ensure that your study stands on the proverbial “shoulders of giants” by using high-quality measurement instruments .
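
As a rough illustration of the kind of reliability check behind a "high Cronbach's alpha", here is a minimal sketch in plain Python. The function name and the item scores are hypothetical; in practice a statistics package would be used:

```python
import statistics

def cronbach_alpha(items):
    """Cronbach's alpha for a list of k item-score lists.

    Each element of `items` holds one item's scores from the same set
    of respondents (all lists the same length). Population variance is
    used consistently for both the items and the respondent totals.
    """
    k = len(items)
    n = len(items[0])
    item_var_sum = sum(statistics.pvariance(item) for item in items)
    totals = [sum(item[i] for item in items) for i in range(n)]
    total_var = statistics.pvariance(totals)
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

# Three perfectly correlated (here, identical) items give alpha = 1.0;
# real survey scales typically aim for alpha of roughly 0.7 or above.
scores = [5, 4, 3, 5, 2, 4]
print(round(cronbach_alpha([scores, scores, scores]), 6))  # -> 1.0
```

Less correlated items pull alpha below 1, which is why reusing a scale already shown to have a high alpha saves validation work.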

Of course, when reviewing existing literature, keep point #1 front of mind. In other words, your methodology needs to align with your research aims, objectives and questions. Don’t fall into the trap of adopting the methodological “norm” of other studies just because it’s popular. Only adopt that which is relevant to your research.

Factor #3: Practicalities

When choosing a research methodology, there will always be a tension between doing what’s theoretically best (i.e., the most scientifically rigorous research design ) and doing what’s practical , given your constraints . This is the nature of doing research and there are always trade-offs, as with anything else.

But what constraints, you ask?

When you’re evaluating your methodological options, you need to consider the following constraints:

  • Data access
  • Equipment and software
  • Time
  • Money
  • Your knowledge and skills

Let’s look at each of these.

Constraint #1: Data access

The first practical constraint you need to consider is your access to data. If you’re going to be undertaking primary research, you need to think critically about the sample of respondents you realistically have access to. For example, if you plan to use in-person interviews, you need to ask yourself how many people you’ll need to interview, whether they’ll be agreeable to being interviewed, where they’re located, and so on.

If you want to undertake a quantitative approach using surveys to collect data, you’ll need to consider how many responses you’ll require for your results to be statistically meaningful. For many statistical tests, a sample of a few hundred respondents is typically needed to draw convincing conclusions.
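As a rough illustration of where a “few hundred respondents” comes from, here’s a sketch of the standard sample-size formula for estimating a proportion, assuming the conventional 95% confidence level and a ±5% margin of error (the number you actually need depends on your specific tests and population):

```python
import math

def sample_size_for_proportion(confidence_z: float = 1.96,  # z-score for 95% confidence
                               p: float = 0.5,              # worst-case assumed proportion
                               margin_of_error: float = 0.05) -> int:
    """n = z^2 * p * (1 - p) / E^2, rounded up to the next whole respondent."""
    n = (confidence_z ** 2) * p * (1 - p) / (margin_of_error ** 2)
    return math.ceil(n)

print(sample_size_for_proportion())  # → 385 respondents for ±5% at 95% confidence
```

Tightening the margin of error to ±3% pushes the requirement past a thousand respondents, which is why data access and budget matter so much at this stage.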

So, think carefully about what data you’ll need access to, how much data you’ll need and how you’ll collect it. The last thing you want is to spend a huge amount of time on your research only to find that you can’t get access to the required data.

Constraint #2: Time

The next constraint is time. If you’re undertaking research as part of a PhD, you may have a fairly open-ended time limit, but this is unlikely to be the case for undergrad and Masters-level projects. So, pay attention to your timeline, as the data collection and analysis components of different methodologies have a major impact on time requirements . Also, keep in mind that these stages of the research often take a lot longer than originally anticipated.

Another practical implication of time limits is that they will directly impact which time horizon you can use – i.e. longitudinal vs cross-sectional. For example, if you’ve got a 6-month limit for your entire research project, it’s quite unlikely that you’ll be able to adopt a longitudinal time horizon.

Constraint #3: Money

As with so many things, money is another important constraint you’ll need to consider when deciding on your research methodology. While some research designs will cost next to nothing to execute, others may require a substantial budget.

Some of the costs that may arise include:

  • Software costs – e.g. survey hosting services, analysis software, etc.
  • Promotion costs – e.g. advertising a survey to attract respondents
  • Incentive costs – e.g. providing a prize or cash payment incentive to attract respondents
  • Equipment rental costs – e.g. recording equipment, lab equipment, etc.
  • Travel costs
  • Food & beverages

These are just a handful of costs that can creep into your research budget. Like most projects, the actual costs tend to be higher than the estimates, so be sure to err on the conservative side and expect the unexpected. It’s critically important that you’re honest with yourself about these costs, or you could end up getting stuck midway through your project because you’ve run out of money.


Constraint #4: Equipment & software

Another practical consideration is the hardware and/or software you’ll need in order to undertake your research. Of course, this variable will depend on the type of data you’re collecting and analysing. For example, you may need lab equipment to analyse substances, or you may need specific analysis software to analyse statistical data. So, be sure to think about what hardware and/or software you’ll need for each potential methodological approach, and whether you have access to these.

Constraint #5: Your knowledge and skillset

The final practical constraint is a big one. Naturally, the research process involves a lot of learning and development along the way, so you will accrue knowledge and skills as you progress. However, when considering your methodological options, you should still consider your current position on the ladder.

Some of the questions you should ask yourself are:

  • Am I more of a “numbers person” or a “words person”?
  • How much do I know about the analysis methods I’ll potentially use (e.g. statistical analysis)?
  • How much do I know about the software and/or hardware that I’ll potentially use?
  • How excited am I to learn new research skills and gain new knowledge?
  • How much time do I have to learn the things I need to learn?

Answering these questions honestly will provide you with another set of criteria against which you can evaluate the research methodology options you’ve shortlisted.

So, as you can see, there is a wide range of practicalities and constraints that you need to take into account when you’re deciding on a research methodology. These practicalities create a tension between the “ideal” methodology and the methodology that you can realistically pull off. This is perfectly normal, and it’s your job to find the option that presents the best set of trade-offs.

Recap: Choosing a methodology

In this post, we’ve discussed how to go about choosing a research methodology. The three major deciding factors we looked at were:

  • The nature of your research aims (exploratory, confirmatory, or a combination of the two)
  • Methodological norms in your research area
  • Practicalities (data access, time, money, equipment and software, and your own knowledge and skillset)

If you have any questions, feel free to leave a comment below. If you’d like a helping hand with your research methodology, check out our 1-on-1 research coaching service, or book a free consultation with a friendly Grad Coach.


Research Methods | Definition, Types, Examples

Research methods are specific procedures for collecting and analysing data. Developing your research methods is an integral part of your research design . When planning your methods, there are two key decisions you will make.

First, decide how you will collect data . Your methods depend on what type of data you need to answer your research question :

  • Qualitative vs quantitative : Will your data take the form of words or numbers?
  • Primary vs secondary : Will you collect original data yourself, or will you use data that have already been collected by someone else?
  • Descriptive vs experimental : Will you take measurements of something as it is, or will you perform an experiment?

Second, decide how you will analyse the data .

  • For quantitative data, you can use statistical analysis methods to test relationships between variables.
  • For qualitative data, you can use methods such as thematic analysis to interpret patterns and meanings in the data.

Table of contents

  • Methods for collecting data
  • Examples of data collection methods
  • Methods for analysing data
  • Examples of data analysis methods
  • Frequently asked questions about methodology

Data are the information that you collect for the purposes of answering your research question . The type of data you need depends on the aims of your research.

Qualitative vs quantitative data

Your choice of qualitative or quantitative data collection depends on the type of knowledge you want to develop.

For questions about ideas, experiences and meanings, or to study something that can’t be described numerically, collect qualitative data .

If you want to develop a more mechanistic understanding of a topic, or your research involves hypothesis testing , collect quantitative data .

You can also take a mixed methods approach, where you use both qualitative and quantitative research methods.

Primary vs secondary data

Primary data are any original information that you collect for the purposes of answering your research question (e.g. through surveys, observations and experiments). Secondary data are information that has already been collected by other researchers (e.g. in a government census or previous scientific studies).

If you are exploring a novel research question, you’ll probably need to collect primary data. But if you want to synthesise existing knowledge, analyse historical trends, or identify patterns on a large scale, secondary data might be a better choice.

Descriptive vs experimental data

In descriptive research , you collect data about your study subject without intervening. The validity of your research will depend on your sampling method .

In experimental research , you systematically intervene in a process and measure the outcome. The validity of your research will depend on your experimental design .

To conduct an experiment, you need to be able to vary your independent variable , precisely measure your dependent variable, and control for confounding variables . If it’s practically and ethically possible, this method is the best choice for answering questions about cause and effect.


Your data analysis methods will depend on the type of data you collect and how you prepare them for analysis.

Data can often be analysed both quantitatively and qualitatively. For example, survey responses could be analysed qualitatively by studying the meanings of responses or quantitatively by studying the frequencies of responses.
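For instance, the quantitative side of that survey analysis might be a simple frequency count. Here’s a minimal sketch in Python, assuming the open-ended answers have already been coded into hypothetical categories:

```python
from collections import Counter

# Hypothetical survey answers, already coded into sentiment categories
responses = ["positive", "negative", "positive", "neutral", "positive", "negative"]

frequencies = Counter(responses)
print(frequencies.most_common())  # → [('positive', 3), ('negative', 2), ('neutral', 1)]
```

The qualitative analysis of the same data would instead examine what respondents meant by each answer, rather than how often each category occurred.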

Qualitative analysis methods

Qualitative analysis is used to understand words, ideas, and experiences. You can use it to interpret data that were collected:

  • From open-ended survey and interview questions, literature reviews, case studies, and other sources that use text rather than numbers.
  • Using non-probability sampling methods .

Qualitative analysis tends to be quite flexible and relies on the researcher’s judgement, so you have to reflect carefully on your choices and assumptions.

Quantitative analysis methods

Quantitative analysis uses numbers and statistics to understand frequencies, averages and correlations (in descriptive studies) or cause-and-effect relationships (in experiments).

You can use quantitative analysis to interpret data that were collected either:

  • During an experiment.
  • Using probability sampling methods .

Because the data are collected and analysed in a statistically valid way, the results of quantitative analysis can be easily standardised and shared among researchers.

Quantitative research deals with numbers and statistics, while qualitative research deals with words and meanings.

Quantitative methods allow you to test a hypothesis by systematically collecting and analysing data, while qualitative methods allow you to explore ideas and experiences in depth.

In mixed methods research , you use both qualitative and quantitative data collection and analysis methods to answer your research question .

A sample is a subset of individuals from a larger population. Sampling means selecting the group that you will actually collect data from in your research.

For example, if you are researching the opinions of students in your university, you could survey a sample of 100 students.

Statistical sampling allows you to test a hypothesis about the characteristics of a population. There are various sampling methods you can use to ensure that your sample is representative of the population as a whole.
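As a minimal sketch of one such method, simple random sampling gives every member of the population an equal chance of selection. The population size and seed below are made up for illustration:

```python
import random

# Hypothetical population of 10,000 student IDs
population = list(range(10_000))

rng = random.Random(42)  # fixed seed so the draw is reproducible
sample = rng.sample(population, k=100)  # draws without replacement; equal chance for each ID

print(len(sample), len(set(sample)))  # 100 distinct students
```

Because the draw is random rather than convenience-based, statistics computed on the sample (means, proportions) are unbiased estimates of the corresponding population values.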

The research methods you use depend on the type of data you need to answer your research question .

  • If you want to measure something or test a hypothesis , use quantitative methods . If you want to explore ideas, thoughts, and meanings, use qualitative methods .
  • If you want to analyse a large amount of readily available data, use secondary data. If you want data specific to your purposes with control over how they are generated, collect primary data.
  • If you want to establish cause-and-effect relationships between variables , use experimental methods. If you want to understand the characteristics of a research subject, use descriptive methods.

Methodology refers to the overarching strategy and rationale of your research project . It involves studying the methods used in your field and the theories or principles behind them, in order to develop an approach that matches your objectives.

Methods are the specific tools and procedures you use to collect and analyse data (e.g. experiments, surveys, and statistical tests).

In shorter scientific papers, where the aim is to report the findings of a specific study, you might simply describe what you did in a methods section .

In a longer or more complex research project, such as a thesis or dissertation , you will probably include a methodology section , where you explain your approach to answering the research questions and cite relevant sources to support your choice of methods.


Research Methodology: Overview of Research Methodology

  • Overview of Research Methodology
  • General Encyclopedias on Research Methodology
  • General Handbooks on Research Methodology
  • Focus Groups
  • Case Studies
  • Cost Benefit Analysis
  • Participatory Action Research
  • Archival Research
  • Data Analysis

Research Methods Overview

If you are planning to do research - whether you are doing a student research project,  IQP,  MQP, GPS project, thesis, or dissertation, you need to use valid approaches and tools to set up your study, gather your data, and make sense of your findings. This research methods guide will help you choose a methodology and launch into your research project. 

Data collection and data analysis are research methods that can be applied across many disciplines. Broadly, research is divided into qualitative and quantitative approaches. This guide focuses on the most popular methods, including:

focus groups

case studies

We are happy to answer questions about research methods and assist with choosing a method that is right for your research, in person or online. Below is a video on how to book a research consultation.

"How-To": Booking a Research Consultation


Research Design vs Research Method

What is the difference between Research Design and Research Method?

Research design is a plan to answer your research question.  A research method is a strategy used to implement that plan.  Research design and methods are different but closely related, because good research design ensures that the data you obtain will help you answer your research question more effectively.

Which research method should I choose?

It depends on your research goal and on what subjects (and who) you want to study.  Let's say you are interested in studying what makes people happy, or why some students are more conscious about recycling on campus.  To answer these questions, you need to make a decision about how to collect your data.  The most frequently used methods include:

  • Observation / Participant Observation
  • Interviews
  • Surveys
  • Experiments
  • Secondary Data Analysis / Archival Study
  • Mixed Methods (combination of some of the above)

One particular method could be better suited to your research goal than others, because the data you collect from different methods will be different in quality and quantity.   For instance, surveys are usually designed to produce relatively short answers, rather than the extensive responses expected in qualitative interviews.

What other factors should I consider when choosing one method over another?

Time for data collection and analysis is something you want to consider.  Observation and interview methods (the so-called qualitative approaches) help you collect richer information, but they take time.  Using a survey helps you collect more data quickly, yet it may lack detail.  So, you will need to consider the time you have for research and the balance between the strengths and weaknesses associated with each method (e.g., qualitative vs. quantitative).

Research Data Management

Research Data Management (RDM) refers to how you are going to keep and share your data over a longer time frame, such as after you graduate. It is defined as the organization, documentation, storage, and preservation of the data resulting from the research process, where data can be broadly defined as the outcome of experiments or observations that validate research findings. Data can take a variety of forms, including numerical output (quantitative data), qualitative data, documentation, images, audio, and video.

"Research Design"  by  George C Gordon Library  is licensed under  CC BY 4.0  / A derivative from the  original work

  • Last Updated: Jul 31, 2023 3:07 PM
  • URL: https://libguides.wpi.edu/researchmethod


Research Methods: What are research methods?

  • What are research methods?
  • Searching specific databases

What are research methods?

Research methods are the strategies, processes or techniques utilized in the collection of data or evidence for analysis in order to uncover new information or create better understanding of a topic.

There are different types of research methods which use different tools for data collection.

Types of research

  • Qualitative Research
  • Quantitative Research
  • Mixed Methods Research

Qualitative Research gathers data about lived experiences, emotions or behaviours, and the meanings individuals attach to them. It assists in enabling researchers to gain a better understanding of complex concepts, social interactions or cultural phenomena. This type of research is useful in the exploration of how or why things have occurred, interpreting events and describing actions.

Quantitative Research gathers numerical data which can be ranked, measured or categorised through statistical analysis. It assists with uncovering patterns or relationships, and for making generalisations. This type of research is useful for finding out how many, how much, how often, or to what extent.

Mixed Methods Research integrates both Qualitative and Quantitative Research. It provides a holistic approach, combining and analysing the statistical data with deeper contextualised insights. Using Mixed Methods also enables triangulation, or verification, of the data from two or more sources.

Finding Mixed Methods research in the Databases 

“mixed model*” OR “mixed design*” OR “multiple method*” OR multimethod* OR triangulat*

Data collection tools

SAGE Research Methods

  • SAGE Research Methods Online: a research methods tool to help researchers gather full-text resources, design research projects, understand a particular method and write up their research. Includes access to collections of video, business cases and eBooks.


  • Last Updated: Apr 5, 2024 2:16 PM
  • URL: https://libguides.newcastle.edu.au/researchmethods


15 Types of Research Methods


Research methods refer to the strategies, tools, and techniques used to gather and analyze data in a structured way in order to answer a research question or investigate a hypothesis (Hammond & Wellington, 2020).

Generally, we place research methods into two categories: quantitative and qualitative. Each has its own strengths and weaknesses, which we can summarize as:

  • Quantitative research can achieve generalizability through scrupulous statistical analysis applied to large sample sizes.
  • Qualitative research achieves deep, detailed, and nuanced accounts of specific case studies, which are not generalizable.

Some researchers, with the aim of making the most of both quantitative and qualitative research, employ mixed methods, whereby they will apply both types of research methods in the one study, such as by conducting a statistical survey alongside in-depth interviews to add context to the quantitative findings.

Below, I’ll outline 15 common research methods, and include pros, cons, and examples of each .

Types of Research Methods

Research methods can be broadly categorized into two types: quantitative and qualitative.

  • Quantitative methods involve systematic empirical investigation of observable phenomena via statistical, mathematical, or computational techniques (Schweigert, 2021). The strengths of this approach include its ability to produce reliable results that can be generalized to a larger population, although it can lack depth and detail.
  • Qualitative methods encompass techniques that are designed to provide a deep understanding of a complex issue, often in a specific context, through collection of non-numerical data (Tracy, 2019). This approach often provides rich, detailed insights but can be time-consuming and its findings may not be generalizable.

These can be further broken down into a range of specific research methods and designs.

Combining the two approaches above, mixed methods research blends elements of both qualitative and quantitative methods, providing a comprehensive understanding of the research problem. We can further break these down into:

  • Sequential Explanatory Design (QUAN→QUAL): This methodology involves conducting quantitative analysis first, then supplementing it with a qualitative study.
  • Sequential Exploratory Design (QUAL→QUAN): This methodology goes in the other direction, starting with qualitative analysis and ending with quantitative analysis.

Let’s explore some methods and designs from both quantitative and qualitative traditions, starting with qualitative research methods.

Qualitative Research Methods

Qualitative research methods allow for the exploration of phenomena in their natural settings, providing detailed, descriptive responses and insights into individuals’ experiences and perceptions (Howitt, 2019).

These methods are useful when a detailed understanding of a phenomenon is sought.

1. Ethnographic Research

Ethnographic research emerged out of anthropological research, where anthropologists would enter into a setting for a sustained period of time, getting to know a cultural group and taking detailed observations.

Ethnographers would sometimes even act as participants in the group or culture, which many scholars argue is a weakness because it is a step away from achieving objectivity (Stokes & Wall, 2017).

In fact, at its most extreme, ethnographers even conduct research on themselves, in a fascinating methodology called autoethnography .

The purpose is to understand the culture, social structure, and the behaviors of the group under study. It is often useful when researchers seek to understand shared cultural meanings and practices in their natural settings.

However, it can be time-consuming and may reflect researcher biases due to the immersion approach.

Example of Ethnography

Liquidated: An Ethnography of Wall Street  by Karen Ho involves an anthropologist who embeds herself with Wall Street firms to study the culture of Wall Street bankers and how this culture affects the broader economy and world.

2. Phenomenological Research

Phenomenological research is a qualitative method focused on the study of individual experiences from the participant’s perspective (Tracy, 2019).

It focuses specifically on people’s experiences in relation to a specific social phenomenon ( see here for examples of social phenomena ).

This method is valuable when the goal is to understand how individuals perceive, experience, and make meaning of particular phenomena. However, because it is subjective and dependent on participants’ self-reports, findings may not be generalizable, and are highly reliant on self-reported ‘thoughts and feelings’.

Example of Phenomenological Research

A phenomenological approach to experiences with technology  by Sebnem Cilesiz represents a good starting-point for formulating a phenomenological study. With its focus on the ‘essence of experience’, this piece presents methodological, reliability, validity, and data analysis techniques that phenomenologists use to explain how people experience technology in their everyday lives.

3. Historical Research

Historical research is a qualitative method involving the examination of past events to draw conclusions about the present or make predictions about the future (Stokes & Wall, 2017).

As you might expect, it’s common in the research branches of history departments in universities.

This approach is useful in studies that seek to understand the past to interpret present events or trends. However, it relies heavily on the availability and reliability of source materials, which may be limited.

Common data sources include cultural artifacts from both material and non-material culture , which are then examined, compared, contrasted, and contextualized to test hypotheses and generate theories.

Example of Historical Research

A historical research example might be a study examining the evolution of gender roles over the last century. This research might involve the analysis of historical newspapers, advertisements, letters, and company documents, as well as sociocultural contexts.

4. Content Analysis

Content analysis is a research method that involves systematic and objective coding and interpreting of text or media to identify patterns, themes, ideologies, or biases (Schweigert, 2021).

Content analysis is useful for analyzing communication patterns, helping to reveal how texts such as newspapers, films, political speeches, and other types of ‘content’ contain narratives and biases.

However, interpretations can be very subjective, which often requires scholars to engage in practices such as cross-comparing their coding with peers or external researchers.

Content analysis can be further broken down into other specific methodologies such as semiotic analysis, multimodal analysis, and discourse analysis.
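To make the coding step concrete, here is a minimal sketch of keyword-based coding. The codebook and headlines are invented for illustration; real coding schemes are far richer and are usually validated by cross-comparing multiple coders, as noted above.

```python
from collections import Counter

# Hypothetical codebook: each theme maps to trigger keywords.
CODEBOOK = {
    "conflict": {"war", "clash", "attack"},
    "economy": {"markets", "jobs", "trade"},
}

# Hypothetical corpus of headlines to be coded.
headlines = [
    "Trade talks stall as markets slide",
    "Clash erupts over new jobs bill",
    "Attack on trade deal dominates debate",
]

def code_text(text, codebook):
    """Assign each theme whose keywords appear in the text."""
    words = set(text.lower().replace(",", "").split())
    return [theme for theme, keywords in codebook.items() if words & keywords]

theme_counts = Counter()
for h in headlines:
    theme_counts.update(code_text(h, CODEBOOK))

print(theme_counts)  # economy appears in all three, conflict in two
```

In practice the frequency counts are only the starting point; the analyst then interprets what the recurring themes reveal about narratives or bias in the corpus.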

Example of Content Analysis

How is Islam Portrayed in Western Media?  by Poorebrahim and Zarei (2013) employs a type of content analysis called critical discourse analysis (common in poststructuralist and critical theory research ). The study combs through a corpus of western media texts to explore the language forms used in relation to Islam and Muslims, finding that both are heavily stereotyped, which may reflect anti-Islam bias or a failure to understand the Islamic world.

5. Grounded Theory Research

Grounded theory involves developing a theory  during and after  data collection rather than beforehand.

This is in contrast to most academic research studies, which start with a hypothesis or theory and then test it through a study, often framed as a null hypothesis (no effect, disproving the theory) and an alternative hypothesis (supporting the theory).

Grounded theory is useful because it keeps an open mind about what the data might reveal. However, it can be time-consuming and requires rigorous data analysis (Tracy, 2019).

Grounded Theory Example

Developing a Leadership Identity   by Komives et al (2005) employs a grounded theory approach to develop a thesis based on the data rather than testing a hypothesis. The researchers studied the leadership identity of 13 college students taking on leadership roles. Based on their interviews, the researchers theorized that the students’ leadership identities shifted from a hierarchical view of leadership to one that embraced leadership as a collaborative concept.

6. Action Research

Action research is an approach which aims to solve real-world problems and bring about change within a setting. The study is designed to solve a specific problem – or in other words, to take action (Patten, 2017).

This approach can involve mixed methods, but it is generally qualitative because it usually centres on a specific case in which the researcher works, e.g. a teacher studying their own classroom practice to seek ways they can improve.

Action research is very common in fields like education and nursing, where practitioners identify areas for improvement and then implement a study in order to find paths forward.

Action Research Example

Using Digital Sandbox Gaming to Improve Creativity Within Boys’ Writing   by Ellison and Drew was a research study one of my research students completed in his own classroom under my supervision. He implemented a digital game-based approach to literacy teaching with boys and interviewed his students to see if the use of games as stimuli for storytelling helped draw them into the learning experience.

7. Natural Observational Research

Observational research can also be quantitative (see: experimental research), but in naturalistic settings for the social sciences, researchers tend to employ qualitative data collection methods like interviews and field notes to observe people in their day-to-day environments.

This approach involves the observation and detailed recording of behaviors in their natural settings (Howitt, 2019). It can provide rich, in-depth information, but the researcher’s presence might influence behavior.

While observational research has some overlaps with ethnography (especially in regard to data collection techniques), it tends not to be as sustained as ethnography, e.g. a researcher might do 5 observations, every second Monday, as opposed to being embedded in an environment.

Observational Research Example

A researcher might use qualitative observational research to study the behaviors and interactions of children at a playground. The researcher would document the behaviors observed, such as the types of games played, levels of cooperation , and instances of conflict.

8. Case Study Research

Case study research is a qualitative method that involves a deep and thorough investigation of a single individual, group, or event in order to explore facets of that phenomenon that cannot be captured using other methods (Stokes & Wall, 2017).

Case study research is especially valuable in providing contextualized insights into specific issues, facilitating the application of abstract theories to real-world situations (Patten, 2017).

However, findings from a case study may not be generalizable due to the specific context and the limited number of cases studied (Walliman, 2021).

See More: Case Study Advantages and Disadvantages

Example of a Case Study

Scholars conduct a detailed exploration of the implementation of a new teaching method within a classroom setting. The study focuses on how the teacher and students adapt to the new method, the challenges encountered, and the outcomes on student performance and engagement. While the study provides specific and detailed insights of the teaching method in that classroom, it cannot be generalized to other classrooms, as statistical significance has not been established through this qualitative approach.

Quantitative Research Methods

Quantitative research methods involve the systematic empirical investigation of observable phenomena via statistical, mathematical, or computational techniques (Pajo, 2022). The focus is on gathering numerical data and generalizing it across groups of people or to explain a particular phenomenon.

9. Experimental Research

Experimental research is a quantitative method where researchers manipulate one variable to determine its effect on another (Walliman, 2021).

This is common, for example, in high-school science labs, where students are asked to introduce a variable into a setting in order to examine its effect.

This type of research is useful in situations where researchers want to determine causal relationships between variables. However, experimental conditions may not reflect real-world conditions.

Example of Experimental Research

A researcher may conduct an experiment to determine the effects of a new educational approach on student learning outcomes. Students would be randomly assigned to either the control group (traditional teaching method) or the experimental group (new educational approach).
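The random assignment in this example can be sketched in a few lines. The student roster and group sizes below are hypothetical:

```python
import random

# Hypothetical roster of 20 students.
students = [f"student_{i}" for i in range(20)]

random.seed(42)           # fixed seed so the allocation is reproducible
random.shuffle(students)  # random order removes selection bias

control = students[:10]       # traditional teaching method
experimental = students[10:]  # new educational approach

assert len(control) == len(experimental) == 10
assert not set(control) & set(experimental)  # no student in both groups
```

Because assignment is random, pre-existing differences between students should, on average, be spread evenly across the two groups, which is what licenses a causal interpretation of any difference in outcomes.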

10. Surveys and Questionnaires

Surveys and questionnaires are quantitative methods that involve asking research participants structured and predefined questions to collect data about their attitudes, beliefs, behaviors, or characteristics (Patten, 2017).

Surveys are beneficial for collecting data from large samples, but they depend heavily on the honesty and accuracy of respondents.

They tend to be seen as more authoritative than their qualitative counterparts, semi-structured interviews, because the data is quantifiable (e.g. a questionnaire where information is presented on a scale from 1 to 10 allows researchers to calculate and compare means and variances across sub-populations in the study).
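As a minimal sketch of that kind of sub-population comparison, the offices and 1-10 ratings below are hypothetical:

```python
from statistics import mean, stdev

# Hypothetical satisfaction ratings (1-10 scale), grouped by office.
responses = {
    "Berlin": [7, 8, 6, 9, 7],
    "Tokyo":  [5, 6, 5, 7, 6],
}

# Mean and sample standard deviation per office.
summary = {
    office: {"mean": mean(scores), "sd": round(stdev(scores), 2)}
    for office, scores in responses.items()
}
print(summary)
```

Comparing the means shows which office reports higher satisfaction, while the standard deviations show how much opinion varies within each office.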

Example of a Survey Study

A company might use a survey to gather data about employee job satisfaction across its offices worldwide. Employees would be asked to rate various aspects of their job satisfaction on a Likert scale. While this method provides a broad overview, it may lack the depth of understanding possible with other methods (Stokes & Wall, 2017).

11. Longitudinal Studies

Longitudinal studies involve repeated observations of the same variables over extended periods (Howitt, 2019). These studies are valuable for tracking development and change but can be costly and time-consuming.

With multiple data points collected over extended periods, it’s possible to examine continuous change in things like population dynamics or consumer behavior in detail.

[Image Descriptor: A visual representation of a longitudinal study, showing that data is collected from one sample over time so researchers can examine how variables change]

Perhaps the most relatable example of a longitudinal study is a national census, which is taken on the same day every few years, to gather comparative demographic data that can show how a nation is changing over time.

While longitudinal studies are commonly quantitative, qualitative examples also exist, such as the famous 7 Up study from the UK, which has followed 14 individuals every seven years to explore their development over their lives.

Example of a Longitudinal Study

A national census, taken every few years, uses surveys to develop longitudinal data, which is then compared and analyzed to present accurate trends over time. Trends a census can reveal include changes in religiosity, values and attitudes on social issues, and much more.

12. Cross-Sectional Studies

Cross-sectional studies are a quantitative research method that involves analyzing data from a population at a specific point in time (Patten, 2017). They provide a snapshot of a situation but cannot determine causality.

This design is used to measure and compare the prevalence of certain characteristics or outcomes in different groups within the sampled population.

[Image Descriptor: A visual representation of a cross-sectional study, showing that data is collected at a single point in time and groups can be compared within the sample]

The major advantage of cross-sectional design is its ability to measure a wide range of variables simultaneously without needing to follow up with participants over time.

However, cross-sectional studies do have limitations. This design can show associations or correlations between variables, but it cannot establish cause-and-effect relationships, temporal sequence, or changes and trends over time.

Example of a Cross-Sectional Study

Our longitudinal study example of a national census also happens to contain cross-sectional design. One census is cross-sectional, displaying only data from one point in time. But when a census is taken once every few years, it becomes longitudinal, and so long as the data collection technique remains unchanged, identification of changes will be achievable, adding another time dimension on top of a basic cross-sectional study.

13. Correlational Research

Correlational research is a quantitative method that seeks to determine if and to what degree a relationship exists between two or more quantifiable variables (Schweigert, 2021).

This approach provides a fast and easy way to form initial hypotheses based on positive or  negative correlation trends  observed within a dataset.

While correlational research can reveal relationships between variables, it cannot establish causality.

Methods used for data analysis may include statistical correlations such as Pearson’s or Spearman’s.
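As a sketch, Pearson’s r can be computed directly from its definition (covariance divided by the product of the standard deviations). The study-hours and GPA figures below are invented for illustration:

```python
from statistics import mean

def pearson_r(x, y):
    """Pearson's r: covariance of x and y divided by the
    product of their standard deviations."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical data: weekly study hours vs end-of-semester GPA.
hours = [2, 5, 8, 11, 14]
gpa = [2.1, 2.6, 3.0, 3.4, 3.7]

r = pearson_r(hours, gpa)
print(round(r, 3))  # close to +1: a strong positive correlation
```

A value near +1 indicates a strong positive relationship, near -1 a strong negative one, and near 0 no linear relationship. Crucially, even r = 0.99 says nothing about whether studying causes higher GPAs.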

Example of Correlational Research

A team of researchers is interested in studying the relationship between the amount of time students spend studying and their academic performance. They gather data from a high school, measuring the number of hours each student studies per week and their grade point averages (GPAs) at the end of the semester. Upon analyzing the data, they find a positive correlation, suggesting that students who spend more time studying tend to have higher GPAs.

14. Quasi-Experimental Design Research

Quasi-experimental design research is a quantitative research method that is similar to experimental design but lacks the element of random assignment to treatment or control.

Instead, quasi-experimental designs typically rely on certain other methods to control for extraneous variables.

The term ‘quasi-experimental’ implies that the experiment resembles a true experiment, but it is not exactly the same because it doesn’t meet all the criteria for a ‘true’ experiment, specifically in terms of control and random assignment.

Quasi-experimental design is useful when researchers want to study a causal hypothesis or relationship, but practical or ethical considerations prevent them from manipulating variables and randomly assigning participants to conditions.

Example of Quasi-Experimental Design

A researcher wants to study the impact of a new math tutoring program on student performance. However, ethical and practical constraints prevent random assignment to the “tutoring” and “no tutoring” groups. Instead, the researcher compares students who chose to receive tutoring (experimental group) to similar students who did not choose to receive tutoring (control group), controlling for other variables like grade level and previous math performance.
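One simple way to "control for other variables" in this design is exact matching, sketched below. The student records are hypothetical, and real quasi-experimental studies typically use richer techniques such as propensity-score matching:

```python
# Hypothetical records: tutored students and a pool of untutored students.
tutored = [
    {"id": 1, "grade": 9,  "prior": "low"},
    {"id": 2, "grade": 10, "prior": "high"},
]
not_tutored = [
    {"id": 3, "grade": 9,  "prior": "low"},
    {"id": 4, "grade": 10, "prior": "high"},
    {"id": 5, "grade": 9,  "prior": "high"},
]

def match(treated, pool):
    """Pair each tutored student with an untutored student who has
    the same grade level and prior performance band."""
    pairs, used = [], set()
    for t in treated:
        for c in pool:
            if c["id"] not in used and (c["grade"], c["prior"]) == (t["grade"], t["prior"]):
                pairs.append((t["id"], c["id"]))
                used.add(c["id"])
                break
    return pairs

print(match(tutored, not_tutored))  # [(1, 3), (2, 4)]
```

Comparing outcomes within matched pairs reduces, but never fully eliminates, the influence of the extraneous variables used for matching.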

Related: Examples and Types of Random Assignment in Research

15. Meta-Analysis Research

Meta-analysis statistically combines the results of multiple studies on a specific topic to yield a more precise estimate of the effect size. It’s the gold standard of secondary research .

Meta-analysis is particularly useful when there are numerous studies on a topic, and there is a need to integrate the findings to draw more reliable conclusions.

Some meta-analyses can identify flaws or gaps in a corpus of research, which can make them highly influential in academic debates despite involving no primary data collection.

However, they tend only to be feasible when there is a sizable corpus of high-quality and reliable studies into a phenomenon.
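To show what "statistically combining results" means, here is a minimal sketch of fixed-effect inverse-variance pooling, the simplest meta-analytic model (a random-effects model adds a between-study variance term). The effect sizes and standard errors below are invented:

```python
import math

# Hypothetical studies: standardized effect size d and its standard error.
studies = [
    {"d": 0.40, "se": 0.10},
    {"d": 0.25, "se": 0.15},
    {"d": 0.55, "se": 0.20},
]

# Each study is weighted by the inverse of its variance (w_i = 1/SE_i^2),
# so more precise studies count for more in the pooled estimate.
weights = [1 / s["se"] ** 2 for s in studies]
pooled = sum(w * s["d"] for w, s in zip(weights, studies)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(round(pooled, 3), round(pooled_se, 3))
```

Note how the pooled standard error is smaller than any single study’s, which is exactly the "more precise estimate of the effect size" that meta-analysis promises.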

Example of a Meta-Analysis

The power of feedback revisited (Wisniewski, Zierer & Hattie, 2020) is a meta-analysis that examines 435 empirical studies on the effects of feedback on student learning. The authors use a random-effects model to ascertain whether there is a clear effect size across the literature, finding that feedback tends to impact cognitive and motor skill outcomes but has less of an effect on motivational and behavioral outcomes.

Choosing a research method requires a lot of consideration regarding what you want to achieve, your research paradigm, and the methodology that is most valuable for what you are studying. There are multiple types of research methods, many of which I haven’t been able to present here. Generally, it’s recommended that you work with an experienced researcher or research supervisor to identify a suitable research method for your study at hand.

Hammond, M., & Wellington, J. (2020). Research methods: The key concepts . New York: Routledge.

Howitt, D. (2019). Introduction to qualitative research methods in psychology . London: Pearson UK.

Pajo, B. (2022). Introduction to research methods: A hands-on approach . New York: Sage Publications.

Patten, M. L. (2017). Understanding research methods: An overview of the essentials . New York: Sage

Schweigert, W. A. (2021). Research methods in psychology: A handbook . Los Angeles: Waveland Press.

Stokes, P., & Wall, T. (2017). Research methods . New York: Bloomsbury Publishing.

Tracy, S. J. (2019). Qualitative research methods: Collecting evidence, crafting analysis, communicating impact . London: John Wiley & Sons.

Walliman, N. (2021). Research methods: The basics. London: Routledge.

Chris Drew (PhD)

Dr. Chris Drew is the founder of the Helpful Professor. He holds a PhD in education and has published over 20 articles in scholarly journals. He is the former editor of the Journal of Learning Development in Higher Education. [Image Descriptor: Photo of Chris]



Choosing the Right Research Methodology: A Guide for Researchers


Choosing an optimal research methodology is crucial for the success of any research project. The methodology you select will determine the type of data you collect, how you collect it, and how you analyse it. Understanding the different types of research methods available along with their strengths and weaknesses, is thus imperative to make an informed decision.

Understanding different research methods:

There are several research methods available depending on the type of study you are conducting, i.e., whether it is laboratory-based, clinical, epidemiological, or survey based . Some common methodologies include qualitative research, quantitative research, experimental research, survey-based research, and action research. Each method can be opted for and modified, depending on the type of research hypotheses and objectives.

Qualitative vs quantitative research:

When deciding on a research methodology, one of the key factors to consider is whether your research will be qualitative or quantitative. Qualitative research is used to understand people’s experiences, concepts, thoughts, or behaviours. Quantitative research, by contrast, deals with numbers, graphs, and charts, and is used to test or confirm hypotheses, assumptions, and theories.

Qualitative research methodology:

Qualitative research is often used to examine issues that are not well understood, and to gather additional insights on these topics. Qualitative research methods include open-ended survey questions, observations of behaviours described through words, and reviews of literature that has explored similar theories and ideas. These methods are used to understand how language is used in real-world situations, identify common themes or overarching ideas, and describe and interpret various texts. Data analysis for qualitative research typically includes discourse analysis, thematic analysis, and textual analysis. 

Quantitative research methodology:

The goal of quantitative research is to test hypotheses, confirm assumptions and theories, and determine cause-and-effect relationships. Quantitative research methods include experiments, close-ended survey questions, and countable and numbered observations. Data analysis for quantitative research relies heavily on statistical methods.

Analysing qualitative vs quantitative data:

The methods used for data analysis also differ for qualitative and quantitative research. As mentioned earlier, quantitative data is generally analysed using statistical methods and does not leave much room for speculation. It is more structured and follows a predetermined plan. In quantitative research, the researcher starts with a hypothesis and uses statistical methods to test it. Contrarily, methods used for qualitative data analysis can identify patterns and themes within the data, rather than provide statistical measures of the data. It is an iterative process, where the researcher goes back and forth trying to gauge the larger implications of the data through different perspectives and revising the analysis if required.

When to use qualitative vs quantitative research:

The choice between qualitative and quantitative research will depend on the gap that the research project aims to address, and specific objectives of the study. If the goal is to establish facts about a subject or topic, quantitative research is an appropriate choice. However, if the goal is to understand people’s experiences or perspectives, qualitative research may be more suitable. 

Conclusion:

In conclusion, an understanding of the different research methods available, their applicability, advantages, and disadvantages is essential for making an informed decision on the best methodology for your project. If you need any additional guidance on which research methodology to opt for, you can head over to Elsevier Author Services (EAS). EAS experts will guide you throughout the process and help you choose the perfect methodology for your research goals.


Research Methods Guide: Research Design & Method



FAQ: Research Design & Method

What is the difference between Research Design and Research Method?

Research design is a plan to answer your research question.  A research method is a strategy used to implement that plan.  Research design and methods are different but closely related, because good research design ensures that the data you obtain will help you answer your research question more effectively.

Which research method should I choose ?

It depends on your research goal.  It depends on what subjects (and who) you want to study.  Let's say you are interested in studying what makes people happy, or why some students are more conscious about recycling on campus.  To answer these questions, you need to make a decision about how to collect your data.  Most frequently used methods include:

  • Observation / Participant Observation
  • Focus Groups
  • Experiments
  • Secondary Data Analysis / Archival Study
  • Mixed Methods (combination of some of the above)

One particular method could be better suited to your research goal than others, because the data you collect from different methods will be different in quality and quantity.   For instance, surveys are usually designed to produce relatively short answers, rather than the extensive responses expected in qualitative interviews.

What other factors should I consider when choosing one method over another?

Time for data collection and analysis is something you want to consider.  An observation or interview method, so-called qualitative approach, helps you collect richer information, but it takes time.  Using a survey helps you collect more data quickly, yet it may lack details.  So, you will need to consider the time you have for research and the balance between strengths and weaknesses associated with each method (e.g., qualitative vs. quantitative).

  • Last Updated: Aug 21, 2023 10:42 AM

Research Methods In Psychology

Saul Mcleod, PhD

Editor-in-Chief for Simply Psychology

BSc (Hons) Psychology, MRes, PhD, University of Manchester

Saul Mcleod, PhD., is a qualified psychology teacher with over 18 years of experience in further and higher education. He has been published in peer-reviewed journals, including the Journal of Clinical Psychology.


Olivia Guy-Evans, MSc

Associate Editor for Simply Psychology

BSc (Hons) Psychology, MSc Psychology of Education

Olivia Guy-Evans is a writer and associate editor for Simply Psychology. She has previously worked in healthcare and educational sectors.

Research methods in psychology are systematic procedures used to observe, describe, predict, and explain behavior and mental processes. They include experiments, surveys, case studies, and naturalistic observations, ensuring data collection is objective and reliable to understand and explain psychological phenomena.


Hypotheses are statements about the prediction of the results, that can be verified or disproved by some investigation.

There are four types of hypotheses :
  • Null hypotheses (H0) – these predict that no difference will be found in the results between the conditions. Typically these are written ‘There will be no difference…’
  • Alternative hypotheses (Ha or H1) – these predict that there will be a significant difference in the results between the two conditions. This is also known as the experimental hypothesis.
  • One-tailed (directional) hypotheses – these state the specific direction the researcher expects the results to move in, e.g. higher, lower, more, less. In a correlation study, the predicted direction of the correlation can be either positive or negative.
  • Two-tailed (non-directional) hypotheses – these state that a difference will be found between the conditions of the independent variable but do not state the direction of the difference or relationship. Typically these are written ‘There will be a difference…’

All research has an alternative hypothesis (either a one-tailed or two-tailed) and a corresponding null hypothesis.

Once the research is conducted and results are found, psychologists must accept one hypothesis and reject the other. 

So, if a difference is found, the Psychologist would accept the alternative hypothesis and reject the null.  The opposite applies if no difference is found.
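The accept/reject decision can be illustrated with a two-sample t-test computed from first principles. The scores are invented, and 2.101 is the standard two-tailed critical value for df = 18 at alpha = .05:

```python
from statistics import mean, variance

# Hypothetical scores from two experimental conditions (10 each).
group_a = [12, 14, 11, 15, 13, 12, 14, 16, 13, 15]
group_b = [10, 11, 9, 12, 10, 11, 10, 12, 9, 11]

n1, n2 = len(group_a), len(group_b)

# Pooled-variance t statistic for independent samples.
pooled_var = ((n1 - 1) * variance(group_a) + (n2 - 1) * variance(group_b)) / (n1 + n2 - 2)
t = (mean(group_a) - mean(group_b)) / (pooled_var * (1 / n1 + 1 / n2)) ** 0.5

# Two-tailed critical value for df = 18 at alpha = .05.
if abs(t) > 2.101:
    decision = "reject H0, accept H1"  # a significant difference was found
else:
    decision = "retain H0"             # no significant difference
print(round(t, 2), decision)
```

Here the observed t exceeds the critical value, so the psychologist would accept the alternative hypothesis and reject the null, exactly as described above.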

Sampling techniques

Sampling is the process of selecting a representative group from the population under study.

[Image Descriptor: A sample selected from a target population]

A sample is the participants you select from a target population (the group you are interested in) to make generalizations about.

Representative means the extent to which a sample mirrors a researcher’s target population and reflects its characteristics.

Generalisability means the extent to which research findings can be applied to the larger population from which the sample was drawn.

  • Volunteer sample : where participants pick themselves through newspaper adverts, noticeboards or online.
  • Opportunity sampling : also known as convenience sampling , uses people who are available at the time the study is carried out and willing to take part. It is based on convenience.
  • Random sampling : when every person in the target population has an equal chance of being selected. An example of random sampling would be picking names out of a hat.
  • Systematic sampling : when a system is used to select participants. Picking every Nth person from all possible participants. N = the number of people in the research population / the number of people needed for the sample.
  • Stratified sampling : when you identify the subgroups and select participants in proportion to their occurrences.
  • Snowball sampling : when researchers find a few participants, and then ask them to find participants themselves and so on.
  • Quota sampling : when researchers will be told to ensure the sample fits certain quotas, for example they might be told to find 90 participants, with 30 of them being unemployed.
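Three of these techniques can be sketched in code. The population of 100 IDs and the staff/manager split are hypothetical:

```python
import random

random.seed(7)                     # fixed seed for reproducibility
population = list(range(1, 101))   # hypothetical population of 100 people

# Random sampling: every person has an equal chance of selection.
random_sample = random.sample(population, 10)

# Systematic sampling: every Nth person, where
# N = population size / sample size (here 100 / 10 = 10).
n = len(population) // 10
systematic_sample = population[::n]

# Stratified sampling: select from each subgroup in proportion to its
# share of the population (hypothetically 60 staff and 40 managers,
# for a 10% sample: 6 staff and 4 managers).
strata = {"staff": population[:60], "managers": population[60:]}
stratified_sample = [
    person
    for name, group in strata.items()
    for person in random.sample(group, len(group) * 10 // 100)
]

assert len(random_sample) == len(systematic_sample) == len(stratified_sample) == 10
```

All three yield samples of the same size, but they differ in how well they represent the target population: stratified sampling guarantees the subgroup proportions, whereas simple random sampling only matches them on average.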

Experiments always have an independent and dependent variable .

  • The independent variable is the one the experimenter manipulates (the thing that changes between the conditions the participants are placed into). It is assumed to have a direct effect on the dependent variable.
  • The dependent variable is the thing being measured, or the results of the experiment.


Operationalization of variables means making them measurable/quantifiable. We must use operationalization to ensure that variables are in a form that can be easily tested.

For instance, we can’t really measure ‘happiness’, but we can measure how many times a person smiles within a two-hour period. 

By operationalizing variables, we make it easy for someone else to replicate our research. Remember, this is important because we can check if our findings are reliable.

Extraneous variables are all variables other than the independent variable that could affect the results of the experiment.

It can be a natural characteristic of the participant, such as intelligence levels, gender, or age for example, or it could be a situational feature of the environment such as lighting or noise.

Demand characteristics are a type of extraneous variable that arises when participants work out the aims of the research study and begin to behave in line with what they think is expected.

For example, in Milgram’s research , critics argued that participants worked out that the shocks were not real and they administered them as they thought this was what was required of them. 

Extraneous variables must be controlled so that they do not affect (confound) the results.

Randomly allocating participants to their conditions or using a matched pairs experimental design can help to reduce participant variables. 

Situational variables are controlled by using standardized procedures, ensuring every participant in a given condition is treated in the same way.

Experimental Design

Experimental design refers to how participants are allocated to each condition of the independent variable, such as a control or experimental group.
  • Independent design ( between-groups design ): each participant is selected for only one group. With the independent design, the most common way of deciding which participants go into which group is by means of randomization. 
  • Matched participants design : each participant is selected for only one group, but the participants in the two groups are matched for some relevant factor or factors (e.g. ability; sex; age).
  • Repeated measures design ( within groups) : each participant appears in both groups, so that there are exactly the same participants in each group.
  • The main problem with the repeated measures design is that there may well be order effects. Their experiences during the experiment may change the participants in various ways.
  • They may perform better when they appear in the second group because they have gained useful information about the experiment or about the task. On the other hand, they may perform less well on the second occasion because of tiredness or boredom.
  • Counterbalancing is the best way of preventing order effects from disrupting the findings of an experiment, and involves ensuring that each condition is equally likely to be used first and second by the participants.

If we wish to compare two groups with respect to a given independent variable, it is essential to make sure that the two groups do not differ in any other important way. 
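The counterbalancing described above can be sketched as follows. Condition and participant names are hypothetical:

```python
from itertools import permutations

# Two conditions in a repeated measures design.
conditions = ["A", "B"]
orders = list(permutations(conditions))  # [('A', 'B'), ('B', 'A')]

# Cycle participants through the possible orders so that each
# condition is equally likely to be experienced first or second,
# cancelling out order effects across the group.
participants = ["p1", "p2", "p3", "p4"]
schedule = {p: orders[i % len(orders)] for i, p in enumerate(participants)}
print(schedule)
```

With four participants, two experience condition A first and two experience B first, so practice and fatigue effects are balanced across the two conditions rather than eliminated for any individual.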

Experimental Methods

All experimental methods involve an IV (independent variable) and a DV (dependent variable).

  • Field experiments are conducted in the everyday (natural) environment of the participants. The experimenter still manipulates the IV, but in a real-life setting. It may be possible to control extraneous variables, though such control is more difficult than in a lab experiment.
  • Natural experiments are when a naturally occurring IV is investigated that isn’t deliberately manipulated, it exists anyway. Participants are not randomly allocated, and the natural event may only occur rarely.

Case Studies

Case studies are in-depth investigations of a person, group, event, or community. They use information from a range of sources, such as the person concerned and also their family and friends.

Many techniques may be used such as interviews, psychological tests, observations and experiments. Case studies are generally longitudinal: in other words, they follow the individual or group over an extended period of time. 

Case studies are widely used in psychology, and among the best-known are those carried out by Sigmund Freud. He conducted very detailed investigations into the private lives of his patients in an attempt to both understand and help them overcome their illnesses.

Case studies provide rich qualitative data and have high levels of ecological validity. However, it is difficult to generalize from individual cases as each one has unique characteristics.

Correlational Studies

Correlation means association; it is a measure of the extent to which two variables are related. One of the variables can be regarded as the predictor variable with the other one as the outcome variable.

Correlational studies typically involve obtaining two different measures from a group of participants, and then assessing the degree of association between the measures. 

The predictor variable can be seen as occurring before the outcome variable in some sense. It is called the predictor variable, because it forms the basis for predicting the value of the outcome variable.

Relationships between variables can be displayed on a graph or as a numerical score called a correlation coefficient.

[Figure: scatter plots illustrating positive, negative, and no correlation]

  • If an increase in one variable tends to be associated with an increase in the other, then this is known as a positive correlation .
  • If an increase in one variable tends to be associated with a decrease in the other, then this is known as a negative correlation .
  • A zero correlation occurs when there is no relationship between variables.

After looking at the scattergraph, if we want to be sure that a significant relationship does exist between the two variables, a statistical test of correlation can be conducted, such as Spearman’s rho.

The test will give us a score, called a correlation coefficient. This is a value between -1 and +1, and the closer the absolute value is to 1, the stronger the relationship between the variables. The coefficient can be positive (e.g., +0.63) or negative (e.g., -0.63).
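
As an illustration (a sketch, not part of the original guide), Spearman's rho can be computed by ranking each variable and applying the formula rho = 1 - 6*Σd²/(n(n²-1)), which holds when there are no tied ranks:

```python
def ranks(values):
    """Rank each value (1 = smallest); assumes no ties, for simplicity."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman_rho(x, y):
    """Spearman's rho via 1 - 6*sum(d^2) / (n*(n^2 - 1)), valid with no ties."""
    n = len(x)
    d2 = sum((rx - ry) ** 2 for rx, ry in zip(ranks(x), ranks(y)))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

spearman_rho([1, 2, 3, 4, 5], [2, 1, 4, 3, 5])  # → 0.8, a strong positive correlation
```

In practice, library routines (e.g., SciPy's `spearmanr`) also handle tied ranks and report a p-value; the hand-rolled version above is only meant to make the formula concrete.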

[Figure: scatter plots illustrating strong, weak, and perfect positive correlation; strong, weak, and perfect negative correlation; and no correlation]

A correlation between variables, however, does not automatically mean that the change in one variable is the cause of the change in the values of the other variable. A correlation only shows if there is a relationship between variables.

Correlation does not prove causation, as a third variable may be involved.


Interview Methods

Interviews are commonly divided into two types: structured and unstructured.

In a structured interview, a fixed, predetermined set of questions is put to every participant in the same order and in the same way.

Responses are recorded on a questionnaire, and the researcher presets the order and wording of questions, and sometimes the range of alternative answers.

The interviewer stays within their role and maintains social distance from the interviewee.

In an unstructured interview, there are no set questions; participants can raise whatever topics they feel are relevant, in their own way, and the interviewer poses follow-up questions in response to their answers.

Unstructured interviews are most useful in qualitative research to analyze attitudes and values.

Though they rarely provide a valid basis for generalization, their main advantage is that they enable the researcher to probe social actors’ subjective point of view. 

Questionnaire Method

Questionnaires can be thought of as a kind of written interview. They can be carried out face to face, by telephone, or post.

The choice of questions is important because of the need to avoid bias or ambiguity in the questions, ‘leading’ the respondent or causing offense.

  • Open questions are designed to encourage a full, meaningful answer using the subject’s own knowledge and feelings. They provide insights into feelings, opinions, and understanding. Example: “How do you feel about that situation?”
  • Closed questions can be answered with a simple “yes” or “no” or specific information, limiting the depth of response. They are useful for gathering specific facts or confirming details. Example: “Do you feel anxious in crowds?”

Other practical advantages of questionnaires are that they are cheaper than face-to-face interviews and can be used to contact many respondents scattered over a wide area relatively quickly.

Observations

There are different types of observation methods :
  • Covert observation is where the researcher doesn’t tell the participants they are being observed until after the study is complete. This method raises ethical problems concerning deception and informed consent.
  • Overt observation is where a researcher tells the participants they are being observed and what they are being observed for.
  • Controlled : behavior is observed under controlled laboratory conditions (e.g., Bandura’s Bobo doll study).
  • Natural : Here, spontaneous behavior is recorded in a natural setting.
  • Participant : Here, the observer has direct contact with the group of people they are observing. The researcher becomes a member of the group they are researching.  
  • Non-participant (aka “fly on the wall”): The researcher does not have direct contact with the people being observed; participants’ behavior is observed from a distance.

Pilot Study

A pilot study is a small-scale preliminary study conducted to evaluate the feasibility of the key steps in a future, full-scale project.

A pilot study is an initial run-through of the procedures to be used in an investigation; it involves selecting a few people and trying out the study on them. It is possible to save time, and in some cases, money, by identifying any flaws in the procedures designed by the researcher.

A pilot study can help the researcher spot any ambiguities or confusion in the information given to participants, or problems with the task devised.

Sometimes the task is too hard, and the researcher may get a floor effect, because none of the participants can score at all or can complete the task – all performances are low.

The opposite effect is a ceiling effect, when the task is so easy that all achieve virtually full marks or top performances and are “hitting the ceiling”.

Research Design

In cross-sectional research, a researcher compares multiple segments of the population at the same time.

Sometimes, we want to see how people change over time, as in studies of human development and lifespan. Longitudinal research is a research design in which data-gathering is administered repeatedly over an extended period of time.

In cohort studies , the participants must share a common factor or characteristic such as age, demographic, or occupation. A cohort study is a type of longitudinal study in which researchers monitor and observe a chosen population over an extended period.

Triangulation means using more than one research method to improve the study’s validity.

Reliability

Reliability is a measure of consistency: if a particular measurement is repeated and the same result is obtained, it is described as reliable.

  • Test-retest reliability :  assessing the same person on two different occasions which shows the extent to which the test produces the same answers.
  • Inter-observer reliability : the extent to which there is an agreement between two or more observers.
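
Both forms of reliability are commonly quantified with a correlation coefficient between the two sets of scores. A minimal sketch (the scores below are invented for illustration) using Pearson's r:

```python
def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical scores for five people tested on two occasions:
occasion_1 = [12, 15, 11, 18, 14]
occasion_2 = [13, 14, 10, 19, 15]
r = pearson_r(occasion_1, occasion_2)  # ≈ 0.95: high test-retest reliability
```

The same calculation applied to two observers' ratings of the same behavior gives a simple index of inter-observer reliability.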

Meta-Analysis

A meta-analysis is a form of systematic review: the researcher identifies an aim, searches for research studies that have addressed similar aims/hypotheses, and then statistically combines their results.

This is done by looking through various databases, and then decisions are made about what studies are to be included/excluded.

Strengths: Increases the validity of the conclusions, as they are based on a wider range of studies.

Weaknesses: Research designs in studies can vary, so they are not truly comparable.

Peer Review

A researcher submits an article to a journal. The choice of the journal may be determined by the journal’s audience or prestige.

The journal selects two or more appropriate experts (psychologists working in a similar field) to peer review the article without payment. The peer reviewers assess: the methods and design used, the originality and validity of the findings, and the article’s content, structure, and language.

Feedback from the reviewers determines whether the article is accepted. The article may be: accepted as it is, accepted with revisions, sent back to the author to revise and re-submit, or rejected without the possibility of re-submission.

The editor makes the final decision whether to accept or reject the research report based on the reviewers’ comments/recommendations.

Peer review is important because it prevents faulty data from entering the public domain, it provides a way of checking the validity of findings and the quality of the methodology, and it is used to assess the research rating of university departments.

Peer review may be an ideal, whereas in practice there are lots of problems. For example, it slows publication down and may prevent unusual, new work from being published. Some reviewers might use it as an opportunity to prevent competing researchers from publishing work.

Some people doubt whether peer review can really prevent the publication of fraudulent research.

The advent of the internet means that far more research and academic comment is being published without official peer review than before, though systems are evolving online in which everyone has a chance to offer an opinion and police the quality of research.

Types of Data

  • Quantitative data is numerical data e.g. reaction time or number of mistakes. It represents how much, how long, or how many there are of something. A tally of behavioral categories and closed questions in a questionnaire collect quantitative data.
  • Qualitative data is virtually any type of information that can be observed and recorded that is not numerical in nature and can be in the form of written or verbal communication. Open questions in questionnaires and accounts from observational studies collect qualitative data.
  • Primary data is first-hand data collected for the purpose of the investigation.
  • Secondary data is information that has been collected by someone other than the person who is conducting the research e.g. taken from journals, books or articles.

Validity means how well a piece of research actually measures what it sets out to, or how well it reflects the reality it claims to represent.

Validity is whether the observed effect is genuine and represents what is actually out there in the world.

  • Concurrent validity is the extent to which a psychological measure relates to an existing similar measure and obtains close results. For example, a new intelligence test compared to an established test.
  • Face validity : whether the test appears, ‘on the face of it’, to measure what it is supposed to measure. This is assessed by ‘eyeballing’ the measure or by passing it to an expert to check.
  • Ecological validity is the extent to which findings from a research study can be generalized to other settings / real life.
  • Temporal validity is the extent to which findings from a research study can be generalized to other historical times.

Features of Science

  • Paradigm – A set of shared assumptions and agreed methods within a scientific discipline.
  • Paradigm shift – The result of the scientific revolution: a significant change in the dominant unifying theory within a scientific discipline.
  • Objectivity – When all sources of personal bias are minimised so as not to distort or influence the research process.
  • Empirical method – Scientific approaches that are based on the gathering of evidence through direct observation and experience.
  • Replicability – The extent to which scientific procedures and findings can be repeated by other researchers.
  • Falsifiability – The principle that a theory cannot be considered scientific unless it admits the possibility of being proved untrue.

Statistical Testing

A significant result is one where there is a low probability that chance factors were responsible for any observed difference, correlation, or association in the variables tested.

If our test is significant, we can reject our null hypothesis and accept our alternative hypothesis.

If our test is not significant, we retain (fail to reject) our null hypothesis and reject our alternative hypothesis. A null hypothesis is a statement of no effect.

In Psychology, we use p < 0.05 (as it strikes a balance between making a type I and II error) but p < 0.01 is used in tests that could cause harm like introducing a new drug.

A type I error is when the null hypothesis is rejected when it should have been accepted (happens when a lenient significance level is used, an error of optimism).

A type II error is when the null hypothesis is accepted when it should have been rejected (happens when a stringent significance level is used, an error of pessimism).
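
To make these ideas concrete, here is a hypothetical worked example (not from the original text): testing whether a coin is biased towards heads. Under the null hypothesis of a fair coin, the one-tailed p-value is the exact probability of observing at least the obtained number of heads:

```python
from math import comb

def binomial_p_value(heads, flips):
    """One-tailed exact p-value: probability of `heads` or more heads in
    `flips` tosses of a fair coin (the null hypothesis)."""
    return sum(comb(flips, k) for k in range(heads, flips + 1)) / 2 ** flips

p = binomial_p_value(9, 10)  # ≈ 0.0107: 9 or more heads in 10 flips
significant = p < 0.05       # True: reject the null hypothesis at the 5% level
```

Note that at the stricter p < 0.01 level this same result would be non-significant (0.0107 > 0.01), illustrating the trade-off: a stringent level reduces Type I errors but risks more Type II errors.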

Ethical Issues

  • Informed consent means participants are given enough information to make an informed judgment about whether to take part. However, fully informing them may cause them to guess the aims of the study and change their behavior.
  • To deal with this, researchers can gain presumptive consent, or ask participants to formally indicate their agreement to participate; however, this may invalidate the purpose of the study, and it is not guaranteed that participants fully understand what they are agreeing to.
  • Deception should only be used when it is approved by an ethics committee, as it involves deliberately misleading or withholding information. Participants should be fully debriefed after the study but debriefing can’t turn the clock back.
  • All participants should be informed at the beginning that they have the right to withdraw if they ever feel distressed or uncomfortable.
  • However, the right to withdraw can cause bias: the participants who stay may be more obedient, and some may not withdraw because they were given incentives or feel they would be spoiling the study. Researchers can also offer the right to withdraw data after participation.
  • Participants should all have protection from harm . The researcher should avoid risks greater than those experienced in everyday life and they should stop the study if any harm is suspected. However, the harm may not be apparent at the time of the study.
  • Confidentiality concerns the communication of personal information. Researchers should not record any names but use numbers or false names, though full anonymity may not be possible, as it is sometimes possible to work out who the participants were.


Pfeiffer Library

Research Methodologies


What are research methodologies?



According to Dawson (2019), a research methodology is the primary principle that will guide your research. It becomes the general approach in conducting research on your topic and determines what research method you will use. A research methodology is different from a research method because research methods are the tools you use to gather your data (Dawson, 2019). You must consider several issues when it comes to selecting the most appropriate methodology for your topic. Issues might include research limitations and ethical dilemmas that might impact the quality of your research. Descriptions of each type of methodology are included below.

Quantitative research methodologies are meant to create numeric statistics by using survey research to gather data (Dawson, 2019). This approach tends to reach a larger number of people in a shorter amount of time. According to Labaree (2020), there are three parts that make up a quantitative research methodology:

  • Sample population
  • How you will collect your data (this is the research method)
  • How you will analyze your data

Once you decide on a methodology, you can consider the method to which you will apply your methodology.

Qualitative research methodologies examine the behaviors, opinions, and experiences of individuals through methods of examination (Dawson, 2019). This type of approach typically requires fewer participants, but more time with each participant. It gives research subjects the opportunity to provide their own opinion on a certain topic.

Examples of Qualitative Research Methodologies

  • Action research:  This is when the researcher works with a group of people to improve something in a certain environment.  It is a common approach for research in organizational management, community development, education, and agriculture (Dawson, 2019).
  • Ethnography:  The process of organizing and describing cultural behaviors (Dawson, 2019).  Researchers may immerse themselves in another culture to get an "inside look" at the group they are studying.  It is often a time-consuming process because the researcher will do this for a long period of time.  This can also be called "participant observation" (Dawson, 2019).
  • Feminist research:  The goal of this methodology is to study topics that have been dominated by male test subjects.  It aims to study females and compare the results to previous studies that used male participants (Dawson, 2019).
  • Grounded theory:  The process of developing a theory to describe a phenomenon strictly through the data results collected in a study.  It is different from other research methodologies where the researcher attempts to prove a hypothesis that they create before collecting data.  Popular research methods for this approach include focus groups and interviews (Dawson, 2019).

A mixed methodology allows you to implement the strengths of both qualitative and quantitative research methods.  In some cases, you may find that your research project would benefit from this.  This approach is beneficial because it allows each methodology to counteract the weaknesses of the other (Dawson, 2019).  You should consider this option carefully, as it can make your research complicated if not planned correctly.

What should you do to decide on a research methodology?  The most logical way to determine your methodology is to decide whether you plan on conducting quantitative or qualitative research.  You also have the option to implement a mixed methods approach.  Looking back on Dawson's (2019) five "W's" on the previous page may help you with this process.  You should also look for key words that indicate a specific type of research methodology in your hypothesis or proposal.  Some words may lean more towards one methodology over another.

Quantitative Research Key Words

  • How satisfied

Qualitative Research Key Words

  • Experiences
  • Thoughts/Think
  • Relationship
  • Last Updated: Aug 2, 2022 2:36 PM
  • URL: https://library.tiffin.edu/researchmethodologies


Logo of springeropen

Language: English | German

How to Construct a Mixed Methods Research Design

Judith Schoonenboom

1 Institut für Bildungswissenschaft, Universität Wien, Sensengasse 3a, 1090 Wien, Austria

R. Burke Johnson

2 Department of Professional Studies, University of South Alabama, UCOM 3700, 36688-0002 Mobile, AL USA

This article provides researchers with knowledge of how to design a high quality mixed methods research study. To design a mixed study, researchers must understand and carefully consider each of the dimensions of mixed methods design, and always keep an eye on the issue of validity. We explain the seven major design dimensions: purpose, theoretical drive, timing (simultaneity and dependency), point of integration, typological versus interactive design approaches, planned versus emergent design, and design complexity. There also are multiple secondary dimensions that need to be considered during the design process. We explain ten secondary dimensions of design to be considered for each research study. We also provide two case studies showing how the mixed designs were constructed.


What is a mixed methods design?

This article addresses the process of selecting and constructing mixed methods research (MMR) designs. The word “design” has at least two distinct meanings in mixed methods research (Maxwell 2013 ). One meaning focuses on the process of design; in this meaning, design is often used as a verb. Someone can be engaged in designing a study (in German: “eine Studie konzipieren” or “eine Studie designen”). Another meaning is that of a product, namely the result of designing. The result of designing as a verb is a mixed methods design as a noun (in German: “das Forschungsdesign” or “Design”), as it has, for example, been described in a journal article. In mixed methods design, both meanings are relevant. To obtain a strong design as a product, one needs to carefully consider a number of rules for designing as an activity. Obeying these rules is not a guarantee of a strong design, but it does contribute to it. A mixed methods design is characterized by the combination of at least one qualitative and one quantitative research component. For the purpose of this article, we use the following definition of mixed methods research (Johnson et al. 2007 , p. 123):

Mixed methods research is the type of research in which a researcher or team of researchers combines elements of qualitative and quantitative research approaches (e. g., use of qualitative and quantitative viewpoints, data collection, analysis, inference techniques) for the broad purposes of breadth and depth of understanding and corroboration.

Mixed methods research (“Mixed Methods” or “MM”) is the sibling of multimethod research (“Methodenkombination”) in which either solely multiple qualitative approaches or solely multiple quantitative approaches are combined.

In a commonly used mixed methods notation system (Morse 1991 ), the components are indicated as qual and quan (or QUAL and QUAN to emphasize primacy), respectively, for qualitative and quantitative research. As discussed below, plus (+) signs refer to concurrent implementation of components (“gleichzeitige Durchführung der Teilstudien” or “paralleles Mixed Methods-Design”) and arrows (→) refer to sequential implementation (“Sequenzielle Durchführung der Teilstudien” or “sequenzielles Mixed Methods-Design”) of components. Note that each research tradition receives an equal number of letters (four) in its abbreviation for equity. In this article, this notation system is used in some depth.

A mixed methods design as a product has several primary characteristics that should be considered during the design process. As shown in Table  1 , the following primary design “dimensions” are emphasized in this article: purpose of mixing, theoretical drive, timing, point of integration, typological use, and degree of complexity. These characteristics are discussed below. We also provide some secondary dimensions to consider when constructing a mixed methods design (Johnson and Christensen 2017 ).

List of Primary and Secondary Design Dimensions

On the basis of these dimensions, mixed methods designs can be classified into a mixed methods typology or taxonomy. In the mixed methods literature, various typologies of mixed methods designs have been proposed (for an overview see Creswell and Plano Clark 2011 , p. 69–72).

The overall goal of mixed methods research, of combining qualitative and quantitative research components, is to expand and strengthen a study’s conclusions and, therefore, contribute to the published literature. In all studies, the use of mixed methods should contribute to answering one’s research questions.

Ultimately, mixed methods research is about heightened knowledge and validity. The design as a product should be of sufficient quality to achieve multiple validities legitimation (Johnson and Christensen 2017 ; Onwuegbuzie and Johnson 2006 ), which refers to the mixed methods research study meeting the relevant combination or set of quantitative, qualitative, and mixed methods validities in each research study.

Given this goal of answering the research question(s) with validity, a researcher can nevertheless have various reasons or purposes for wanting to strengthen the research study and its conclusions. Following is the first design dimension for one to consider when designing a study: Given the research question(s), what is the purpose of the mixed methods study?

A popular classification of purposes of mixed methods research was first introduced in 1989 by Greene, Caracelli, and Graham, based on an analysis of published mixed methods studies. This classification is still in use (Greene 2007 ). Greene et al. ( 1989 , p. 259) distinguished the following five purposes for mixing in mixed methods research:

1. Triangulation seeks convergence, corroboration, correspondence of results from different methods;
2. Complementarity seeks elaboration, enhancement, illustration, clarification of the results from one method with the results from the other method;
3. Development seeks to use the results from one method to help develop or inform the other method, where development is broadly construed to include sampling and implementation, as well as measurement decisions;
4. Initiation seeks the discovery of paradox and contradiction, new perspectives of frameworks, the recasting of questions or results from one method with questions or results from the other method;
5. Expansion seeks to extend the breadth and range of inquiry by using different methods for different inquiry components.

In the past 28 years, this classification has been supplemented by several others. On the basis of a review of the reasons for combining qualitative and quantitative research mentioned by the authors of mixed methods studies, Bryman ( 2006 ) formulated a list of more concrete rationales for performing mixed methods research (see Appendix). Bryman’s classification breaks down Greene et al.’s ( 1989 ) categories into several aspects, and he adds a number of additional aspects, such as the following:

(a) Credibility – refers to suggestions that employing both approaches enhances the integrity of findings.
(b) Context – refers to cases in which the combination is justified in terms of qualitative research providing contextual understanding coupled with either generalizable, externally valid findings or broad relationships among variables uncovered through a survey.
(c) Illustration – refers to the use of qualitative data to illustrate quantitative findings, often referred to as putting “meat on the bones” of “dry” quantitative findings.
(d) Utility or improving the usefulness of findings – refers to a suggestion, which is more likely to be prominent among articles with an applied focus, that combining the two approaches will be more useful to practitioners and others.
(e) Confirm and discover – this entails using qualitative data to generate hypotheses and using quantitative research to test them within a single project.
(f) Diversity of views – this includes two slightly different rationales – namely, combining researchers’ and participants’ perspectives through quantitative and qualitative research respectively, and uncovering relationships between variables through quantitative research while also revealing meanings among research participants through qualitative research. (Bryman, p. 106)

Views can be diverse (f) in various ways. Some examples of mixed methods design that include a diversity of views are:

  • Iteratively/sequentially connecting local/idiographic knowledge with national/general/nomothetic knowledge;
  • Learning from different perspectives on teams and in the field and literature;
  • Achieving multiple participation, social justice, and action;
  • Determining what works for whom and the relevance/importance of context;
  • Producing interdisciplinary substantive theory, including/comparing multiple perspectives and data regarding a phenomenon;
  • Juxtaposition-dialogue/comparison-synthesis;
  • Breaking down binaries/dualisms (some of both);
  • Explaining interaction between/among natural and human systems;
  • Explaining complexity.

The number of possible purposes for mixing is very large and is increasing; hence, it is not possible to provide an exhaustive list. Greene et al.’s ( 1989 ) purposes, Bryman’s ( 2006 ) rationales, and our examples of a diversity of views were formulated as classifications on the basis of examination of many existing research studies. They indicate how the qualitative and quantitative research components of a study relate to each other. These purposes can be used post hoc to classify research or a priori in the design of a new study. When designing a mixed methods study, it is sometimes helpful to list the purpose in the title of the study design.

The key point of this section is for the researcher to begin a study with at least one research question and then carefully consider what the purposes for mixing are. One can use mixed methods to examine different aspects of a single research question, or one can use separate but related qualitative and quantitative research questions. In all cases, the mixing of methods, methodologies, and/or paradigms will help answer the research questions and make improvements over a more basic study design. Fuller and richer information will be obtained in the mixed methods study.

Theoretical drive

In addition to a mixing purpose, a mixed methods research study might have an overall “theoretical drive” (Morse and Niehaus 2009 ). When designing a mixed methods study, it is occasionally helpful to list the theoretical drive in the title of the study design. An investigation, in Morse and Niehaus’s ( 2009 ) view, is focused primarily on either exploration-and-description or on testing-and-prediction. In the first case, the theoretical drive is called “inductive” or “qualitative”; in the second case, it is called “deductive” or “quantitative”. In the case of mixed methods, the component that corresponds to the theoretical drive is referred to as the “core” component (“Kernkomponente”), and the other component is called the “supplemental” component (“ergänzende Komponente”). In Morse’s notation system, the core component is written in capitals and the supplemental component is written in lowercase letters. For example, in a QUAL → quan design, more weight is attached to the data coming from the core qualitative component. Due to the decisive character of the core component, the core component must be able to stand on its own, and should be implemented rigorously. The supplemental component does not have to stand on its own.

Although this distinction is useful in some circumstances, we do not advise applying it to every mixed methods design. First, Morse and Niehaus contend that the supplemental component can be done “less rigorously” but do not explain which aspects of rigor can be dropped. In addition, the idea of decreased rigor conflicts with one key theme of the present article, namely that mixed methods designs should always meet the criterion of multiple validities legitimation (Onwuegbuzie and Johnson 2006).

The idea of theoretical drive as explicated by Morse and Niehaus has been criticized. For example, we view a theoretical drive as a feature not of a whole study, but of a research question, or, more precisely, of an interpretation of a research question. For example, if one study includes multiple research questions, it might include several theoretical drives (Schoonenboom 2016 ).

Another criticism of Morse and Niehaus’ conceptualization of theoretical drive is that it does not allow for equal-status mixed methods research (“Mixed Methods Forschung, bei der qualitative und quantitative Methoden die gleiche Bedeutung haben” or “gleichrangige Mixed Methods-Designs”), in which the qualitative and quantitative components are of equal value and weight; the same criticism applies to Morgan’s (2014) set of designs. We agree with Greene (2015) that mixed methods research can be integrated at the levels of method, methodology, and paradigm. In this view, equal-status mixed methods research designs are possible, and they result when the qualitative and quantitative components, approaches, and thinking are of equal value, take control over the research process in alternation, are in constant interaction, and produce outcomes that are integrated during and at the end of the research process. Therefore, equal-status mixed methods research (which we often advocate) is also called “interactive mixed methods research”.

Mixed methods research can have three different drives, as formulated by Johnson et al. ( 2007 , p. 123):

Qualitative dominant [or qualitatively driven] mixed methods research is the type of mixed research in which one relies on a qualitative, constructivist-poststructuralist-critical view of the research process, while concurrently recognizing that the addition of quantitative data and approaches are likely to benefit most research projects. Quantitative dominant [or quantitatively driven] mixed methods research is the type of mixed research in which one relies on a quantitative, postpositivist view of the research process, while concurrently recognizing that the addition of qualitative data and approaches are likely to benefit most research projects. (p. 124) The area around the center of the [qualitative-quantitative] continuum, equal status , is the home for the person that self-identifies as a mixed methods researcher. This researcher takes as his or her starting point the logic and philosophy of mixed methods research. These mixed methods researchers are likely to believe that qualitative and quantitative data and approaches will add insights as one considers most, if not all, research questions.

We leave it to the reader to decide if he or she desires to conduct a qualitatively driven study, a quantitatively driven study, or an equal-status/“interactive” study. According to the philosophies of pragmatism (Johnson and Onwuegbuzie 2004 ) and dialectical pluralism (Johnson 2017 ), interactive mixed methods research is very much a possibility. By successfully conducting an equal-status study, the pragmatist researcher shows that paradigms can be mixed or combined, and that the incompatibility thesis does not always apply to research practice. Equal status research is most easily conducted when a research team is composed of qualitative, quantitative, and mixed researchers, interacts continually, and conducts a study to address one superordinate goal.

Timing: simultaneity and dependence

Another important distinction when designing a mixed methods study relates to the timing of the two (or more) components. When designing a mixed methods study, it is usually helpful to include the word “concurrent” (“parallel”) or “sequential” (“sequenziell”) in the title of the study design; a complex design can be partially concurrent and partially sequential. Timing has two aspects: simultaneity and dependence (Guest 2013 ).

Simultaneity (“Simultanität”) forms the basis of the distinction between concurrent and sequential designs. In a  sequential design , the quantitative component precedes the qualitative component, or vice versa. In a  concurrent design , both components are executed (almost) simultaneously. In the notation of Morse ( 1991 ), concurrence is indicated by a “+” between components (e. g., QUAL + quan), while sequentiality is indicated with a “→” (QUAL → quan). Note that the use of capital letters for one component and lowercase letters for another component in the same design suggests that one component is primary and the other is secondary or supplemental.
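This notation is regular enough to be processed mechanically. As a minimal illustration (ours, not part of Morse’s system; the function name is hypothetical), a few lines of Python can read a two-component design string and report its timing and core component:

```python
# Hypothetical sketch of Morse's (1991) design notation.
# Capitals mark the core component; "+" means concurrent, an arrow means sequential.

def parse_design(design: str) -> dict:
    """Parse a two-component Morse design string such as 'QUAL + quan'."""
    if "+" in design:
        timing, sep = "concurrent", "+"
    elif "->" in design or "→" in design:
        timing = "sequential"
        sep = "->" if "->" in design else "→"
    else:
        raise ValueError("expected '+' or an arrow between components")
    first, second = (part.strip() for part in design.split(sep))
    # The core component is the one written entirely in capital letters.
    core = [c for c in (first, second) if c.isupper()]
    return {
        "components": (first, second),
        "timing": timing,
        # None signals equal status (both capitals) or multimethod (neither)
        "core": core[0] if len(core) == 1 else None,
    }

print(parse_design("QUAL + quan"))
```

Both the ASCII arrow `->` and the arrow character `→` are accepted here purely for convenience; the sketch handles only two components, whereas the notation itself scales to more.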

Some designs are sequential by nature. For example, in a  conversion design, qualitative categories and themes might be first obtained by collection and analysis of qualitative data, and then subsequently quantitized (Teddlie and Tashakkori 2009 ). Likewise, with Greene et al.’s ( 1989 ) initiation purpose, the initiation strand follows the unexpected results that it is supposed to explain. In other cases, the researcher has a choice. It is possible, e. g., to collect interview data and survey data of one inquiry simultaneously; in that case, the research activities would be concurrent. It is also possible to conduct the interviews after the survey data have been collected (or vice versa); in that case, research activities are performed sequentially. Similarly, a study with the purpose of expansion can be designed in which data on an effect and the intervention process are collected simultaneously, or they can be collected sequentially.

A second aspect of timing is dependence (“Abhängigkeit”). We call two research components dependent if the implementation of the second component depends on the results of data analysis in the first component. Two research components are independent if their implementation does not depend on the results of data analysis in the other component. Often, a researcher has a choice to perform data analysis independently or not. A researcher could analyze the interview data and questionnaire data of one inquiry independently; in that case, the research activities would be independent. It is also possible to let the interview questions depend upon the outcomes of the analysis of the questionnaire data (or vice versa); in that case, the research activities are performed dependently. Similarly, the empirical outcome/effect and process in a study with the purpose of expansion might be investigated independently, or the process study might take the effect/outcome as given (dependent).

In the mixed methods literature, the distinction between sequential and concurrent usually refers to the combination of concurrent/independent and sequential/dependent, and to the combination of data collection and data analysis. It is said that in a concurrent design, the data collection and data analysis of both components occur (almost) simultaneously and independently, while in a sequential design, the data collection and data analysis of one component take place after those of the other component and depend on its outcomes.

In our opinion, simultaneity and dependence are two separate dimensions. Simultaneity indicates whether data collection is done concurrently or sequentially. Dependence indicates whether the implementation of one component depends upon the results of data analysis of the other component. As we will see in the example case studies, a concurrent design could include dependent data analysis, and a sequential design could include independent data analysis. It is conceivable, for example, that one simultaneously conducts interviews and collects questionnaire data (concurrent), while allowing the analysis focus of the interviews to depend on what emerges from the survey data (dependent).

Dependent research activities involve a redirection of subsequent inquiry: using the outcomes of the first research component, the researcher decides what to do in the second component. When this is the case, the research activities involved are said to be sequential-dependent, and any component preceded by another component should appropriately build on the previous component (see sequential validity legitimation ; Johnson and Christensen 2017; Onwuegbuzie and Johnson 2006).

It is under the purposive discretion of the researcher to determine whether a concurrent-dependent design, a concurrent-independent design, a sequential-dependent design, or a sequential-independent design is needed to answer a particular research question or set of research questions in a given situation.
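Because simultaneity and dependence are independent dimensions, the four design types just named follow mechanically from two yes/no choices. The following sketch (our illustration; the class name is hypothetical) makes the 2 × 2 structure explicit:

```python
# Hypothetical sketch: timing as two separate dimensions (cf. Guest 2013).
from dataclasses import dataclass

@dataclass
class Timing:
    concurrent: bool  # simultaneity: are the components executed (almost) at once?
    dependent: bool   # dependence: does one component's implementation depend
                      # on the other component's data analysis?

    def label(self) -> str:
        sim = "concurrent" if self.concurrent else "sequential"
        dep = "dependent" if self.dependent else "independent"
        return f"{sim}-{dep}"

# All four combinations are legitimate designs:
for c in (True, False):
    for d in (True, False):
        print(Timing(c, d).label())
```

The point of separating the two booleans is exactly the argument made above: a concurrent design can still be dependent, and a sequential design can still be independent.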

Point of integration

Each true mixed methods study has at least one “point of integration” – called the “point of interface” by Morse and Niehaus ( 2009 ) and Guest ( 2013 ) – at which the qualitative and quantitative components are brought together. Having one or more points of integration is the distinguishing feature of a design based on multiple components. It is at this point that the components are “mixed”, hence the label “mixed methods designs”. The term “mixing” is misleading, however, as the components are not simply mixed but have to be integrated very carefully.

Determining where the point of integration will be, and how the results will be integrated, is an important, if not the most important, decision in the design of mixed methods research. Morse and Niehaus ( 2009 ) identify two possible points of integration: the results point of integration and the analytical point of integration.

Most commonly, integration takes place in the results point of integration . At some point in writing down the results of the first component, the results of the second component are added and integrated. A  joint display (listing the qualitative and quantitative findings and an integrative statement) might be used to facilitate this process.

In the case of an analytical point of integration , a first analytical stage of a qualitative component is followed by a second analytical stage, in which the topics identified in the first stage are quantitized. The results of the qualitative component thus become quantitative before the results of the analytical phase as a whole are written down; qualitizing, the converse strategy, is also possible.
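Quantitizing can be illustrated with a small, hypothetical example: coded interview themes are converted into counts that could subsequently enter a quantitative analysis. The data and names below are invented purely for illustration:

```python
# Hypothetical sketch of quantitizing: qualitative codes become counts.
from collections import Counter

# Coded interview segments per participant (invented data).
codes = {
    "P1": ["trust", "workload", "trust"],
    "P2": ["workload"],
    "P3": ["trust", "autonomy"],
}

# Quantitize: how many participants mention each theme at least once?
theme_prevalence = Counter(
    theme for themes in codes.values() for theme in set(themes)
)
print(theme_prevalence)  # trust: 2, workload: 2, autonomy: 1
```

The resulting counts could, for instance, be correlated with survey scores for the same participants, which is one concrete way an analytical point of integration connects the two components.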

Other authors assume more than two possible points of integration. Teddlie and Tashakkori ( 2009 ) distinguish four different stages of an investigation: the conceptualization stage, the experiential (methodological) stage (data collection), the experiential (analytical) stage (data analysis), and the inferential stage. According to these authors, mixing is possible in all four stages, and thus all four stages are potential points of integration.

However, the four possible points of integration used by Teddlie and Tashakkori ( 2009 ) are still too coarse to distinguish some types of mixing. Mixing in the experiential stage can take many different forms, for example the use of cognitive interviews to improve a questionnaire, or selecting people for an interview on the basis of the results of a questionnaire. Extending the definition by Guest ( 2013 ), we define the point of integration as “any point in a study where two or more research components are mixed or connected in some way”. The points of integration in these two examples can then be defined more accurately as “instrument development” and “development of the sample”.

It is at the point of integration that qualitative and quantitative components are integrated. Some primary ways that the components can be connected to each other are as follows:

(1) merging the two data sets, (2) connecting from the analysis of one set of data to the collection of a second set of data, (3) embedding of one form of data within a larger design or procedure, and (4) using a framework (theoretical or program) to bind together the data sets (Creswell and Plano Clark 2011 , p. 76).

More generally, one can consider mixing at any or all of the following research components: purposes, research questions, theoretical drive, methods, methodology, paradigm, data, analysis, and results. One can also include mixing views of different researchers, participants, or stakeholders. The creativity of the mixed methods researcher designing a study is extensive.

Substantively, it can be useful to think of integration or mixing as comparing and bringing together two (or more) components on the basis of one or more of the purposes set out in the first section of this article. For example, it is possible to use qualitative data to illustrate a quantitative effect, or to determine whether the qualitative and the quantitative component yield convergent results ( triangulation ). An integrated result could also consist of a combination of a quantitatively established effect and a qualitative description of the underlying process . In the case of development, integration consists of an adjustment of an instrument, model, or interpretation (often a quantitative one) based on qualitative assessments by members of the target group.

A special case is the integration of divergent results. The power of mixed methods research is its ability to deal with diversity and divergence. In the literature, we find two kinds of strategies for dealing with divergent results. A first set of strategies takes the detected divergence as the starting point for further analysis, with the aim of resolving the divergence. One possibility is to carry out further research (Cook 1985 ; Greene and Hall 2010 ). Further research is not always necessary, however. One can also look for a more comprehensive theory that is able to account for both the results of the first component and the deviating results of the second component. This is a form of abduction (Erzberger and Prein 1997 ).

A fruitful starting point in trying to resolve divergence through abduction is to determine which component has resulted in a finding that is somehow expected, logical, and/or in line with existing research. The results of this research component, called the “sense” (“Lesart”), are subsequently compared to the results of the other component, called the “anti-sense” (“alternative Lesart”), which are considered dissonant, unexpected, and/or contrary to what had been found in the literature. The aim is to develop an overall explanation that fits both the sense and the anti-sense (Bazeley and Kemp 2012 ; Mendlinger and Cwikel 2008 ). Finally, a reanalysis of the data can sometimes lead to resolving divergence (Creswell and Plano Clark 2011 ).

Alternatively, one can question the existence of the encountered divergence. In this regard, Mathison ( 1988 ) recommends determining whether deviating results shown by the data can be explained by knowledge about the research and/or knowledge of the social world. Differences between results from different data sources could also be the result of properties of the methods involved, rather than reflect differences in reality (Yanchar and Williams 2006 ). In general, the conclusions of the individual components can be subjected to an inference quality audit (Teddlie and Tashakkori 2009 ), in which the researcher investigates the strength of each of the divergent conclusions. We recommend that researchers first determine whether there is “real” divergence, using the strategies mentioned in this paragraph. Next, an attempt can be made to resolve cases of true divergence, using one or more of the methods mentioned in the previous paragraphs.

Design typology utilization

As already mentioned in Sect. 1, mixed methods designs can be classified into a mixed methods typology or taxonomy. A typology serves several purposes, including the following: guiding practice, legitimizing the field, generating new possibilities, and serving as a useful pedagogical tool (Teddlie and Tashakkori 2009 ). Note, however, that not all types of typologies are equally suitable for all purposes. For generating new possibilities, one will need a more exhaustive typology, while a useful pedagogical tool might be better served by a non-exhaustive overview of the most common mixed methods designs. Although some of the current MM design typologies include more designs than others, none of the current typologies is fully exhaustive. When designing a mixed methods study, it is often useful to borrow its name from an existing typology, or to construct a clear and descriptive name of your own when your design modifies one or more of the existing designs.

Various typologies of mixed methods designs have been proposed. Creswell and Plano Clark’s ( 2011 ) typology of some “commonly used designs” includes six “major mixed methods designs”. Our summary of these designs runs as follows:

  • Convergent parallel design (“paralleles Design”) (the quantitative and qualitative strands of the research are performed independently, and their results are brought together in the overall interpretation),
  • Explanatory sequential design (“explanatives Design”) (a first phase of quantitative data collection and analysis is followed by the collection of qualitative data, which are used to explain the initial quantitative results),
  • Exploratory sequential design (“exploratives Design”) (a first phase of qualitative data collection and analysis is followed by the collection of quantitative data to test or generalize the initial qualitative results),
  • Embedded design (“Einbettungs-Design”) (in a traditional qualitative or quantitative design, a strand of the other type is added to enhance the overall design),
  • Transformative design (“politisch-transformatives Design”) (a transformative theoretical framework, e. g. feminism or critical race theory, shapes the interaction, priority, timing and mixing of the qualitative and quantitative strand),
  • Multiphase design (“Mehrphasen-Design”) (more than two phases or both sequential and concurrent strands are combined over a period of time within a program of study addressing an overall program objective).

Most of their designs presuppose a specific juxtaposition of the qualitative and quantitative components. Note that the last design is a complex type that is required in many mixed methods studies.

The following are our adapted definitions of Teddlie and Tashakkori’s ( 2009 ) five sets of mixed methods research designs (adapted from Teddlie and Tashakkori 2009 , p. 151):

  • Parallel mixed designs (“paralleles Mixed-Methods-Design”) – In these designs, one has two or more parallel quantitative and qualitative strands, either with some minimal time lapse or simultaneously; the strand results are integrated into meta-inferences after the separate analyses are conducted; related QUAN and QUAL research questions are answered, or aspects of the same mixed research question are addressed.
  • Sequential mixed designs (“sequenzielles Mixed-Methods-Design”) – In these designs, QUAL and QUAN strands occur across chronological phases, and the procedures/questions of the later strand emerge from, depend on, or build on the previous strand; the research questions are interrelated and sometimes evolve during the study.
  • Conversion mixed designs (“Transfer-Design” or “Konversionsdesign”) – In these parallel designs, mixing occurs when one type of data is transformed into the other type and then analyzed, and the additional findings are added to the results; this design answers related aspects of the same research question.
  • Multilevel mixed designs (“Mehrebenen-Mixed-Methods-Design”) – In these parallel or sequential designs, mixing occurs across multiple levels of analysis, as QUAN and QUAL data are analyzed and integrated to answer related aspects of the same research question or related questions.
  • Fully integrated mixed designs (“voll integriertes Mixed-Methods-Design”) – In these designs, mixing occurs in an interactive manner at all stages of the study. At each stage, one approach affects the formulation of the other, and multiple types of implementation processes can occur. For example, rather than including integration only at the findings/results stage, or only across phases in a sequential design, mixing might occur at the conceptualization stage, the methodological stage, the analysis stage, and the inferential stage.

We recommend adding to Teddlie and Tashakkori’s typology a sixth design type, specifically, a  “hybrid” design type to include complex combinations of two or more of the other design types. We expect that many published MM designs will fall into the hybrid design type.

Morse and Niehaus ( 2009 ) listed eight mixed methods designs in their book (and suggested that authors create more complex combinations when needed). Our shorthand labels and descriptions (adapted from Morse and Niehaus 2009 , p. 25) run as follows:

  • QUAL + quan (inductive-simultaneous design, where the core component is qualitative and the supplemental component is quantitative)
  • QUAL → quan (inductive-sequential design, where the core component is qualitative and the supplemental component is quantitative)
  • QUAN + qual (deductive-simultaneous design, where the core component is quantitative and the supplemental component is qualitative)
  • QUAN → qual (deductive-sequential design, where the core component is quantitative and the supplemental component is qualitative)
  • QUAL + qual (inductive-simultaneous design, where both components are qualitative; this is a multimethod design rather than a mixed methods design)
  • QUAL → qual (inductive-sequential design, where both components are qualitative; this is a multimethod design rather than a mixed methods design)
  • QUAN + quan (deductive-simultaneous design, where both components are quantitative; this is a multimethod design rather than a mixed methods design)
  • QUAN → quan (deductive-sequential design, where both components are quantitative; this is a multimethod design rather than a mixed methods design).

Notice that Morse and Niehaus ( 2009 ) included four mixed methods designs (the first four designs shown above) and four multimethod designs (the second set of four) in their typology. The reader can, therefore, see that the design notation also works quite well for multimethod research designs. Notably absent from Morse and Niehaus’s book are equal-status or interactive designs. In addition, they assume that the core component should always be performed either concurrently with or before the supplemental component.

Johnson, Christensen, and Onwuegbuzie constructed a set of mixed methods designs without these limitations. The resulting mixed methods design matrix (see Johnson and Christensen 2017 , p. 478) contains nine designs, which we can label as follows (adapted from Johnson and Christensen 2017 , p. 478):

  • QUAL + QUAN (equal-status concurrent design),
  • QUAL + quan (qualitatively driven concurrent design),
  • QUAN + qual (quantitatively driven concurrent design),
  • QUAL → QUAN (equal-status sequential design),
  • QUAN → QUAL (equal-status sequential design),
  • QUAL → quan (qualitatively driven sequential design),
  • qual → QUAN (quantitatively driven sequential design),
  • QUAN → qual (quantitatively driven sequential design), and
  • quan → QUAL (qualitatively driven sequential design).
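The regularity of this matrix can be shown by generating it: each sequential design follows from choosing which component comes first and which component(s) form the core, while concurrent designs collapse the two orderings. The following sketch (our illustration; all names are hypothetical, and an ASCII arrow stands in for “→”) reproduces the nine designs:

```python
# Hypothetical sketch: generating the nine-design matrix from two choices -
# the order of the components and which of them form the core (capitals).

def sequential_designs():
    designs = []
    for a, b in (("qual", "quan"), ("quan", "qual")):
        for core in ("first", "second", "both"):
            x = a.upper() if core in ("first", "both") else a
            y = b.upper() if core in ("second", "both") else b
            designs.append(f"{x} -> {y}")
    return designs  # six sequential designs

def concurrent_designs():
    # In concurrent designs the order of notation is irrelevant,
    # so the three drives yield three designs, not six.
    return ["QUAL + QUAN", "QUAL + quan", "QUAN + qual"]

all_designs = concurrent_designs() + sequential_designs()
print(len(all_designs))  # 9
```

Generating the designs this way makes visible why there are exactly nine: three drives for the single concurrent ordering, plus three drives for each of the two sequential orderings.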

The above set of nine designs assumes only one qualitative and one quantitative component. This simplifying assumption can be relaxed in practice, allowing the reader to construct more complex designs. The Morse notation system is very powerful: for example, it can express a three-stage equal-status concurrent-sequential design by combining concurrent (“+”) and sequential (“→”) phases.

The key point here is that the Morse notation provides researchers with a powerful language for depicting and communicating the design constructed for a specific research study.

When designing a mixed methods study, it is sometimes helpful to include the mixing purpose (or characteristic on one of the other dimensions shown in Table  1 ) in the title of the study design (e. g., an explanatory sequential MM design, an exploratory-confirmatory MM design, a developmental MM design). Much more important, however, than a design name is for the author to provide an accurate description of what was done in the research study, so the reader will know exactly how the study was conducted. A design classification label can never replace such a description.

The common complexity of mixed methods designs poses a problem for the above typologies. The typologies were designed to classify whole mixed methods studies, yet they are based largely on a classification of simple designs. In practice, many, if not most, designs are complex. Complex designs are sometimes labeled “complex design”, “multiphase design”, “fully integrated design”, “hybrid design”, and the like. Because complex designs occur very often in practice, the above typologies cannot classify a large part of existing mixed methods research any further than by labeling it “complex”, which in itself is not very informative about the particular design. This problem does not fully apply to Morse’s notation system, which can be used to symbolize some more complex designs.

Something similar applies to the classification of the purposes of mixed methods research. The classifications of purposes mentioned in the “Purpose” section, again, are basically meant for classifying whole mixed methods studies. In practice, however, a single study often serves more than one purpose (Schoonenboom et al. 2017 ). The more purposes included in one study, the more difficult it becomes to select a design on the basis of the purpose of the investigation, as advised by Greene ( 2007 ). Of all the purposes involved, which one should be the primary basis for the design? Or should the design be based upon all the purposes included? And if so, how? For more information on how to articulate design complexity based on multiple purposes of mixing, see Schoonenboom et al. ( 2017 ).

It should be clear to the reader that, although much progress has been made in the area of mixed methods design typologies, no single typology yet lists the set of possible mixed methods designs comprehensively. This is why we emphasize in this article the importance of learning to build on simple designs and to construct one’s own design for one’s research questions. This will often result in a combination or “hybrid” design that goes beyond the basic designs found in typologies, and in a methodology section that provides much more information than a design name.

Typological versus interactive approaches to design

In the introduction, we made a distinction between design as a product and design as a process. Related to this, two different approaches to design can be distinguished: typological/taxonomic approaches (“systematische Ansätze”), such as those in the previous section, and interactive approaches (“interaktive Ansätze”) (the latter were called “dynamic” approaches by Creswell and Plano Clark 2011 ). Whereas typological/taxonomic approaches view designs as a sort of mold, in which the inquiry can be fit, interactive approaches (Maxwell 2013 ) view design as a process, in which a certain design-as-a-product might be the outcome of the process, but not its input.

The most frequently mentioned interactive approach to mixed methods research is that of Maxwell and Loomis ( 2003 ). Maxwell and Loomis distinguish the following components of a design: goals, conceptual framework, research question, methods, and validity. They argue convincingly that the most important task of the researcher is to deliver, as the end product of the design process, a design in which these five components fit together properly. During the design process, the researcher works alternately on the individual components, and as a result their initial fit, if it existed, tends to get lost. The researcher should therefore regularly check during the research and continuing design process whether the components still fit together and, if not, should adapt one component or another to restore the fit. In an interactive approach, unlike the typological approach, design is viewed as an interactive process in which the components are continually compared to each other during the research study and adapted to each other.

Typological and interactive approaches to mixed methods research have been presented as mutually exclusive alternatives. In our view, however, they are not mutually exclusive. The interactive approach of Maxwell is a very powerful tool for conducting research, yet it is not specific to mixed methods research. Maxwell’s interactive approach emphasizes that the researcher should keep and monitor a close fit between the five components of research design. However, it does not indicate how one should combine qualitative and quantitative subcomponents within one of Maxwell’s five components (e. g., how one should combine a qualitative and a quantitative method, or a qualitative and a quantitative research question). Essential elements of the design process, such as timing and the point of integration, are not covered by Maxwell’s approach. This is not a shortcoming of Maxwell’s approach, but it indicates that supporting the design of mixed methods research requires more than Maxwell’s model currently has to offer.

Some authors state that design typologies are particularly useful for beginning researchers, while interactive approaches are suited for experienced researchers (Creswell and Plano Clark 2011 ). However, like an experienced researcher, a research novice needs to align the components of his or her design properly with each other, and, like a beginning researcher, an advanced researcher should indicate how the qualitative and quantitative components are combined with each other. This makes an interactive approach desirable for beginning researchers as well.

We see two merits of the typological/taxonomic approach. We agree with Greene ( 2007 ), who states that the value of the typological approach mainly lies in the different dimensions of mixed methods that result from its classifications. In this article, the primary dimensions include purpose, theoretical drive, timing, point of integration, typological vs. interactive approaches, planned vs. emergent designs, and complexity (also see the secondary dimensions in Table  1 ). Unfortunately, not all of these dimensions are reflected in any single design typology reviewed here. A second merit of the typological approach is its provision of common mixed methods research designs – common ways in which qualitative and quantitative research can be combined – as is done, for example, in the major designs of Creswell and Plano Clark ( 2011 ). Contrary to other authors, however, we do not consider these designs a feature of a whole study but rather, in line with Guest ( 2013 ), a feature of one part of a design in which one qualitative and one quantitative component are combined. Although one study could have only one purpose, one point of integration, et cetera, we believe that combining “designs” is the rule and not the exception. Therefore, complex designs need to be constructed and modified as needed, and during the writing phase the design should be described in detail and perhaps given a creative and descriptive name.

Planned versus emergent designs

A mixed methods design can be thought out in advance, but it can also arise during the conduct of the study; the latter is called an “emergent” design (Creswell and Plano Clark 2011). Emergent designs arise, for example, when the researcher discovers during the study that one of the components is inadequate (Morse and Niehaus 2009). Adding a component of the other type can sometimes remedy such an inadequacy. Some designs contain an emergent component by their nature. Initiation, for example, is the further exploration of unexpected outcomes. Unexpected outcomes are by definition not foreseen and therefore cannot be included in the design in advance.

The question arises whether researchers should plan all these decisions beforehand, or whether they can make them during, and depending on the course of, the research process. The answer to this question is twofold. On the one hand, a researcher should decide beforehand which research components to include in the design, such that the conclusion that will be drawn will be robust. On the other hand, developments during research execution will sometimes prompt the researcher to decide to add additional components. In general, the advice is to be prepared for the unexpected. When one is able to plan for emergence, one should not refrain from doing so.

Dimension of complexity

Next, mixed methods designs are characterized by their complexity. In the literature, simple and complex designs are distinguished in various ways. A common distinction is between simple studies with a single point of integration and complex studies with multiple points of integration (Guest 2013). When designing a mixed methods study, it can be useful to mention in the title whether the design is simple or complex. The primary message of this section is as follows: it is the responsibility of the researcher to create more complex designs when needed to answer his or her research question(s).

Teddlie and Tashakkori’s (2009) multilevel mixed designs and fully integrated mixed designs are both complex designs, but for different reasons. A multilevel mixed design is more complex ontologically, because it involves multiple levels of reality. For example, data might be collected both at the levels of schools and students, neighborhoods and households, companies and employees, communities and inhabitants, or medical practices and patients (Yin 2013). Integration of these data involves not only the integration of qualitative and quantitative data, but also the integration of data originating from different sources and existing at different levels. Little if any published research has discussed the possible ways of integrating data obtained in a multilevel mixed design (see Schoonenboom 2016). This is an area in need of additional research.

The fully-integrated mixed design is more complex because it contains multiple points of integration. As formulated by Teddlie and Tashakkori ( 2009 , p. 151):

In these designs, mixing occurs in an interactive manner at all stages of the study. At each stage, one approach affects the formulation of the other, and multiple types of implementation processes can occur.

Complexity, then, not only depends on the number of components, but also on the extent to which they depend on each other (e. g., “one approach affects the formulation of the other”).

Many of our design dimensions ultimately refer to different ways in which the qualitative and quantitative research components are interdependent. Different purposes of mixing ultimately differ in the way one component relates to, and depends upon, the other component. For example, these purposes include dependencies, such as “x illustrates y” and “x explains y”. Dependencies in the implementation of x and y occur to the extent that the design of y depends on the results of x (sequentiality). The theoretical drive creates dependencies, because the supplemental component y is performed and interpreted within the context and the theoretical drive of core component x. As a general rule in designing mixed methods research, one should examine and plan carefully the ways in which and the extent to which the various components depend on each other.

The dependence among components, which may or may not be present, has been summarized by Greene (2007). It is seen in the distinction between component designs, in which the components are independent of each other, and integrated designs, in which the components are interdependent. Of these two design categories, integrated designs are the more complex.

Secondary design considerations

The primary design dimensions explained above have been the focus of this article. There are also a number of secondary considerations for researchers to think about when designing their studies (Johnson and Christensen 2017). Below we list some secondary design issues and questions that should be thoughtfully considered during the construction of a strong mixed methods research design.

  • Phenomenon: Will the study address (a) the same part or different parts of one phenomenon, (b) different phenomena, or (c) the phenomenon/phenomena from different perspectives? Is the phenomenon (a) expected to be unique (e. g., a historical event, a particular group), (b) expected to be part of a more regular and predictable phenomenon, or (c) a complex mixture of these?
  • Social scientific theory: Will the study generate a new substantive theory, test an already constructed theory, or achieve both in a sequential arrangement? Or is the researcher not interested in substantive theory based on empirical data?
  • Ideological drive: Will the study have an explicitly articulated ideological drive (e. g., feminism, critical race paradigm, transformative paradigm)?
  • Combination of sampling methods: What specific quantitative sampling method(s) will be used? What specific qualitative sampling methods(s) will be used? How will these be combined or related?
  • Degree to which the research participants will be similar or different: For example, selecting participants or stakeholders with known differences of perspective yields a set of participants that is quite heterogeneous.
  • Degree to which the researchers on the research team will be similar or different: For example, an experiment conducted by one researcher would be high on similarity, but the use of a heterogeneous and participatory research team would include many differences.
  • Implementation setting: Will the phenomenon be studied naturalistically, experimentally, or through a combination of these?
  • Degree to which the methods are similar or different: For example, a structured interview and a questionnaire are fairly similar, but administration of a standardized test and participant observation in the field are quite different.
  • Validity criteria and strategies: What validity criteria and strategies will be used to address the defensibility of the study and the conclusions that will be drawn from it (see Chapter 11 in Johnson and Christensen 2017 )?
  • Full study: Will there be essentially one research study or more than one? How will the research report be structured?

Two case studies

The above design dimensions are now illustrated by examples. A nice collection of examples of mixed methods studies can be found in Hesse-Biber ( 2010 ), from which the following examples are taken. The description of the first case example is shown in Box 1.

Box 1

Summary of Roth ( 2006 ), research regarding the gender-wage gap within Wall Street securities firms. Adapted from Hesse-Biber ( 2010 , pp. 457–458)

Louise Marie Roth’s research, Selling Women Short: Gender and Money on Wall Street ( 2006 ), tackles gender inequality in the workplace. She was interested in understanding the gender-wage gap among highly performing Wall Street MBAs, who on the surface appeared to have the same “human capital” qualifications and were placed in high-ranking Wall Street securities firms as their first jobs. In addition, Roth wanted to understand the “structural factors” within the workplace setting that may contribute to the gender-wage gap and its persistence over time. […] Roth conducted semistructured interviews, nesting quantitative closed-ended questions into primarily qualitative in-depth interviews […] In analyzing the quantitative data from her sample, she statistically considered all those factors that might legitimately account for gendered differences such as number of hours worked, any human capital differences, and so on. Her analysis of the quantitative data revealed the presence of a significant gender gap in wages that remained unexplained after controlling for any legitimate factors that might otherwise make a difference. […] Quantitative findings showed the extent of the wage gap while providing numerical understanding of the disparity but did not provide her with an understanding of the specific processes within the workplace that might have contributed to the gender gap in wages. […] Her respondents’ lived experiences over time revealed the hidden inner structures of the workplace that consist of discriminatory organizational practices with regard to decision making in performance evaluations that are tightly tied to wage increases and promotion.

This example nicely illustrates the distinction we made between simultaneity and dependency. On the two aspects of the timing dimension, this study was a concurrent-dependent design answering a set of related research questions. The data collection in this example was conducted simultaneously, and was thus concurrent – the quantitative closed-ended questions were embedded into the qualitative in-depth interviews. In contrast, the analysis was dependent, as explained in the next paragraph.

One of the purposes of this study was explanation: the qualitative data were used to understand the processes underlying the quantitative outcomes. It is therefore an explanatory design, and might be labelled an “explanatory concurrent design”. Conceptually, explanatory designs are often dependent: the qualitative component is used to explain and clarify the outcomes of the quantitative component. In that sense, the qualitative analysis in the case study took the outcomes of the quantitative component (“the existence of the gender-wage gap” and “numerical understanding of the disparity”) and aimed at providing an explanation for that result of the quantitative data analysis by relating it to the contextual circumstances in which the quantitative outcomes were produced. This purpose of mixing corresponds to Bryman’s (2006) “contextual understanding”. On the other primary dimensions, (a) the design was ongoing over a three-year period but was not emergent, (b) the point of integration was results, and (c) the design was not complex with respect to the point of integration, as it had only one point of integration. Yet it was complex in the sense of involving multiple levels; both the level of the individual and that of the organization were included. In the notation of Johnson and Christensen (2017), this was a QUAL + quan design (qualitatively driven, explanatory, and concurrent). If we give this study design a name, perhaps it should focus on what was done in the study: “explaining an effect from the process by which it is produced”. Having said this, the name “explanatory concurrent design” could also be used.

The description of the second case example is shown in Box 2.

Box 2

Summary of McMahon’s ( 2007 ) explorative study of the meaning, role, and salience of rape myths within the subculture of college student athletes. Adapted from Hesse-Biber ( 2010 , pp. 461–462)

Sarah McMahon ( 2007 ) wanted to explore the subculture of college student athletes and specifically the meaning, role, and salience of rape myths within that culture. […] While she was looking for confirmation between the quantitative ([structured] survey) and qualitative (focus groups and individual interviews) findings, she entered this study skeptical of whether or not her quantitative and qualitative findings would mesh with one another. McMahon […] first administered a survey [instrument] to 205 sophomore and junior student athletes at one Northeast public university. […] The quantitative data revealed a very low acceptance of rape myths among this student population but revealed a higher acceptance of violence among men and individuals who did not know a survivor of sexual assault. In the second qualitative (QUAL) phase, “focus groups were conducted as semi-structured interviews” and facilitated by someone of the same gender as the participants (p. 360). […] She followed this up with a third qualitative component (QUAL), individual interviews, which were conducted to elaborate on themes discovered in the focus groups and determine any differences in students’ responses between situations (i. e., group setting vs. individual). The interview guide was designed specifically to address focus group topics that needed “more in-depth exploration” or clarification (p. 361). The qualitative findings from the focus groups and individual qualitative interviews revealed “subtle yet pervasive rape myths” that fell into four major themes: “the misunderstanding of consent, the belief in ‘accidental’ and fabricated rape, the contention that some women provoke rape, and the invulnerability of female athletes” (p. 363). She found that the survey’s finding of a “low acceptance of rape myths … was contradicted by the findings of the focus groups and individual interviews, which indicated the presence of subtle rape myths” (p. 362).

On the timing dimension, this is an example of a sequential-independent design. It is sequential, because the qualitative focus groups were conducted after the survey was administered. The analysis of the quantitative and qualitative data was independent: both were analyzed independently, to see whether they yielded the same results (which they did not). The purpose, therefore, was triangulation. On the other primary dimensions, (a) the design was planned, (b) the point of integration was results, and (c) the design was not complex, as it had only one point of integration and involved only the level of the individual. The author called this a “sequential explanatory” design. We doubt, however, whether this is the most appropriate label, because the qualitative component did not provide an explanation for quantitative results that were taken as given. On the contrary, the qualitative results contradicted the quantitative results. Thus, a “sequential-independent” design, a “sequential-triangulation” design, or a “sequential-comparative” design would probably be a better name.

Notice further that the second case study had the same point of integration as the first case study. The two components were brought together in the results. Thus, although the case studies are very dissimilar in many respects, this does not become visible in their point of integration. It can therefore be helpful to determine whether their point of extension is different. A point of extension is the point in the research process at which the second (or later) component comes into play. In the first case study, two related but different research questions were answered, namely the quantitative question “How large is the gender-wage gap among highly performing Wall Street MBAs after controlling for any legitimate factors that might otherwise make a difference?”, and the qualitative research question “How do structural factors within the workplace setting contribute to the gender-wage gap and its persistence over time?” This case study contains one qualitative research question and one quantitative research question. Therefore, the point of extension is the research question. In the second case study, both components answered the same research question. They differed in their data collection (and subsequently in their data analysis): qualitative focus groups and individual interviews versus a quantitative questionnaire. In this case study, the point of extension was data collection. Thus, the point of extension can be used to distinguish between the two case studies.

Summary and conclusions

The purpose of this article is to help researchers understand how to design a mixed methods research study. Perhaps the simplest approach to design is to consult a single book and select one of the few designs included in it. We believe that is useful only as a starting point. Here we have shown that one often needs to construct a research design to fit one’s unique research situation and questions.

First, we showed that there are many purposes for which qualitative and quantitative methods, methodologies, and paradigms can be mixed. The purpose must be determined in interaction with the research questions. Including the purpose in the design name can sometimes provide readers with useful information about the study design, as in, e. g., an “explanatory sequential design” or an “exploratory-confirmatory design”.

The second dimension is theoretical drive in the sense that Morse and Niehaus ( 2009 ) use this term. That is, will the study have an inductive or a deductive drive, or, we added, a combination of these. Related to this idea is whether one will conduct a qualitatively driven, a quantitatively driven, or an equal-status mixed methods study. This language is sometimes included in the design name to communicate this characteristic of the study design (e. g., a “quantitatively driven sequential mixed methods design”).

The third dimension is timing , which has two aspects: simultaneity and dependence. Simultaneity refers to whether the components are implemented concurrently, sequentially, or in a combination of these in a multiphase design. Simultaneity is commonly used in the naming of a mixed methods design because it communicates key information. The second aspect of timing, dependence , refers to whether a later component depends on the results of an earlier component (e. g., did phase two specifically build on phase one in the research study?). The fourth design dimension is the point of integration, which is where the qualitative and quantitative components are brought together and integrated. This is an essential dimension, but it usually does not need to be incorporated into the design name.

The fifth design dimension is that of typological vs. interactive design approaches . That is, will one select a design from a typology or use a more interactive approach to construct one’s own design? There are many typologies of designs currently in the literature. Our recommendation is that readers examine multiple design typologies to better understand the design process in mixed methods research and to understand what designs have been identified as popular in the field. However, when a design that would follow from one’s research questions is not available, the researcher can and should (a) combine designs into new designs or (b) simply construct a new and unique design. One can go a long way in depicting a complex design with Morse’s ( 1991 ) notation when used to its full potential. We also recommend that researchers understand the process approach to design from Maxwell and Loomis ( 2003 ), and realize that research design is a process and it needs, oftentimes, to be flexible and interactive.

The sixth design dimension or consideration is whether a design will be fully specified during the planning of the research study or if the design (or part of the design) will be allowed to emerge during the research process, or a combination of these. The seventh design dimension is called complexity . One sort of complexity mentioned was multilevel designs, but there are many complexities that can enter designs. The key point is that good research often requires the use of complex designs to answer one’s research questions. This is not something to avoid. It is the responsibility of the researcher to learn how to construct and describe and name mixed methods research designs. Always remember that designs should follow from one’s research questions and purposes, rather than questions and purposes following from a few currently named designs.

In addition to the seven primary design dimensions or considerations, we provided a set of additional or secondary dimensions/considerations or questions to ask when constructing a mixed methods study design. Our purpose throughout this article has been to show what factors must be considered to design a high-quality mixed methods research study. The more one knows and thinks about the primary and secondary dimensions of mixed methods design, the better equipped one will be to pursue mixed methods research.

Acknowledgments

Open access funding provided by University of Vienna.

Biographies

Judith Schoonenboom, 1965, Dr., Professor of Empirical Pedagogy at the University of Vienna, Austria. Research Areas: Mixed Methods Design, Philosophy of Mixed Methods Research, Innovation in Higher Education, Design and Evaluation of Intervention Studies, Educational Technology. Publications: Mixed methods in early childhood education. In: M. Fleer & B. v. Oers (Eds.), International handbook on early childhood education (Vol. 1). Dordrecht, The Netherlands: Springer 2017; The multilevel mixed intact group analysis: A mixed method to seek, detect, describe and explain differences between intact groups. Journal of Mixed Methods Research 10, 2016; The realist survey: How respondents’ voices can be used to test and revise correlational models. Journal of Mixed Methods Research 2015. Advance online publication.

R. Burke Johnson, 1957, PhD, Professor of Professional Studies at the University of South Alabama, Mobile, Alabama, USA. Research Areas: Methods of Social Research, Program Evaluation, Quantitative, Qualitative and Mixed Methods, Philosophy of Social Science. Publications: Research methods, design and analysis. Boston, MA 2014 (with L. Christensen and L. Turner); Educational research: Quantitative, qualitative and mixed approaches. Los Angeles, CA 2017 (with L. Christensen); The Oxford handbook of multimethod and mixed methods research inquiry. New York, NY 2015 (with S. Hesse-Biber).

Bryman’s ( 2006 ) scheme of rationales for combining quantitative and qualitative research 1

  • Triangulation or greater validity – refers to the traditional view that quantitative and qualitative research might be combined to triangulate findings in order that they may be mutually corroborated. If the term was used as a synonym for integrating quantitative and qualitative research, it was not coded as triangulation.
  • Offset – refers to the suggestion that the research methods associated with both quantitative and qualitative research have their own strengths and weaknesses so that combining them allows the researcher to offset their weaknesses to draw on the strengths of both.
  • Completeness – refers to the notion that the researcher can bring together a more comprehensive account of the area of enquiry in which he or she is interested if both quantitative and qualitative research are employed.
  • Process – quantitative research provides an account of structures in social life, whereas qualitative research provides a sense of process.
  • Different research questions – this is the argument that quantitative and qualitative research can each answer different research questions but this item was coded only if authors explicitly stated that they were doing this.
  • Explanation – one is used to help explain findings generated by the other.
  • Unexpected results – refers to the suggestion that quantitative and qualitative research can be fruitfully combined when one generates surprising results that can be understood by employing the other.
  • Instrument development – refers to contexts in which qualitative research is employed to develop questionnaire and scale items – for example, so that better wording or more comprehensive closed answers can be generated.
  • Sampling – refers to situations in which one approach is used to facilitate the sampling of respondents or cases.
  • Credibility – refers to suggestions that employing both approaches enhances the integrity of findings.
  • Context – refers to cases in which the combination is rationalized in terms of qualitative research providing contextual understanding coupled with either generalizable, externally valid findings or broad relationships among variables uncovered through a survey.
  • Illustration – refers to the use of qualitative data to illustrate quantitative findings, often referred to as putting “meat on the bones” of “dry” quantitative findings.
  • Utility or improving the usefulness of findings – refers to a suggestion, which is more likely to be prominent among articles with an applied focus, that combining the two approaches will be more useful to practitioners and others.
  • Confirm and discover – this entails using qualitative data to generate hypotheses and using quantitative research to test them within a single project.
  • Diversity of views – this includes two slightly different rationales – namely, combining researchers’ and participants’ perspectives through quantitative and qualitative research respectively, and uncovering relationships between variables through quantitative research while also revealing meanings among research participants through qualitative research.
  • Enhancement or building upon quantitative/qualitative findings – this entails a reference to making more of or augmenting either quantitative or qualitative findings by gathering data using a qualitative or quantitative research approach.
  • Other/unclear.
  • Not stated.

1 Reprinted with permission from “Integrating quantitative and qualitative research: How is it done?” by Alan Bryman ( 2006 ), Qualitative Research, 6, pp. 105–107.

Contributor Information

Judith Schoonenboom, Email: [email protected] .

R. Burke Johnson, Email: bjohnson@southalabama.edu .

  • Bazeley P, Kemp L. Mosaics, triangles, and DNA: Metaphors for integrated analysis in mixed methods research. Journal of Mixed Methods Research. 2012;6:55–72. doi: 10.1177/1558689811419514.
  • Bryman A. Integrating quantitative and qualitative research: How is it done? Qualitative Research. 2006;6:97–113. doi: 10.1177/1468794106058877.
  • Cook TD. Postpositivist critical multiplism. In: Shotland RL, Mark MM, editors. Social science and social policy. Beverly Hills: SAGE; 1985. pp. 21–62.
  • Creswell JW, Plano Clark VL. Designing and conducting mixed methods research. 2nd ed. Los Angeles: SAGE; 2011.
  • Erzberger C, Prein G. Triangulation: Validity and empirically-based hypothesis construction. Quality and Quantity. 1997;31:141–154. doi: 10.1023/A:1004249313062.
  • Greene JC. Mixed methods in social inquiry. San Francisco: Jossey-Bass; 2007.
  • Greene JC. Preserving distinctions within the multimethod and mixed methods research merger. In: Hesse-Biber S, Johnson RB, editors. The Oxford handbook of multimethod and mixed methods research inquiry. New York: Oxford University Press; 2015.
  • Greene JC, Caracelli VJ, Graham WF. Toward a conceptual framework for mixed-method evaluation designs. Educational Evaluation and Policy Analysis. 1989;11:255–274. doi: 10.3102/01623737011003255.
  • Greene JC, Hall JN. Dialectics and pragmatism. In: Tashakkori A, Teddlie C, editors. SAGE handbook of mixed methods in social & behavioral research. 2nd ed. Los Angeles: SAGE; 2010. pp. 119–167.
  • Guest G. Describing mixed methods research: An alternative to typologies. Journal of Mixed Methods Research. 2013;7:141–151. doi: 10.1177/1558689812461179.
  • Hesse-Biber S. Qualitative approaches to mixed methods practice. Qualitative Inquiry. 2010;16:455–468. doi: 10.1177/1077800410364611.
  • Johnson RB. Dialectical pluralism: A metaparadigm whose time has come. Journal of Mixed Methods Research. 2017;11:156–173. doi: 10.1177/1558689815607692.
  • Johnson RB, Christensen LB. Educational research: Quantitative, qualitative, and mixed approaches. 6th ed. Los Angeles: SAGE; 2017.
  • Johnson RB, Onwuegbuzie AJ. Mixed methods research: A research paradigm whose time has come. Educational Researcher. 2004;33(7):14–26. doi: 10.3102/0013189X033007014.
  • Johnson RB, Onwuegbuzie AJ, Turner LA. Toward a definition of mixed methods research. Journal of Mixed Methods Research. 2007;1:112–133. doi: 10.1177/1558689806298224.
  • Mathison S. Why triangulate? Educational Researcher. 1988;17:13–17. doi: 10.3102/0013189X017002013.
  • Maxwell JA. Qualitative research design: An interactive approach. 3rd ed. Los Angeles: SAGE; 2013.
  • Maxwell JA, Loomis DM. Mixed methods design: An alternative approach. In: Tashakkori A, Teddlie C, editors. Handbook of mixed methods in social & behavioral research. Thousand Oaks: SAGE; 2003. pp. 241–271.
  • McMahon S. Understanding community-specific rape myths: Exploring student athlete culture. Affilia. 2007;22:357–370. doi: 10.1177/0886109907306331.
  • Mendlinger S, Cwikel J. Spiraling between qualitative and quantitative data on women’s health behaviors: A double helix model for mixed methods. Qualitative Health Research. 2008;18:280–293. doi: 10.1177/1049732307312392.
  • Morgan DL. Integrating qualitative and quantitative methods: A pragmatic approach. Los Angeles: SAGE; 2014.
  • Morse JM. Approaches to qualitative-quantitative methodological triangulation. Nursing Research. 1991;40:120–123. doi: 10.1097/00006199-199103000-00014.
  • Morse JM, Niehaus L. Mixed method design: Principles and procedures. Walnut Creek: Left Coast Press; 2009.
  • Onwuegbuzie AJ, Johnson RB. The “validity” issue in mixed research. Research in the Schools. 2006;13:48–63.
  • Roth LM. Selling women short: Gender and money on Wall Street. Princeton: Princeton University Press; 2006.
  • Schoonenboom J. The multilevel mixed intact group analysis: A mixed method to seek, detect, describe and explain differences between intact groups. Journal of Mixed Methods Research. 2016;10:129–146. doi: 10.1177/1558689814536283.
  • Schoonenboom J, Johnson RB, Froehlich DE. Combining multiple purposes of mixing within a mixed methods research design. International Journal of Multiple Research Approaches. 2017, in press.
  • Teddlie C, Tashakkori A. Foundations of mixed methods research: Integrating quantitative and qualitative approaches in the social and behavioral sciences. Los Angeles: SAGE; 2009.
  • Yanchar SC, Williams DD. Reconsidering the compatibility thesis and eclecticism: Five proposed guidelines for method use. Educational Researcher. 2006;35(9):3–12. doi: 10.3102/0013189X035009003.
  • Yin RK. Case study research: Design and methods. 5th ed. Los Angeles: SAGE; 2013.

April 16, 2024


New guidelines reflect growing use of AI in health care research

by NDORMS, University of Oxford


The widespread use of artificial intelligence (AI) in medical decision-making tools has led to an update of the TRIPOD guidelines for reporting clinical prediction models. The new TRIPOD+AI guidelines are launched in the BMJ today.

The TRIPOD guidelines (which stands for Transparent Reporting of a Multivariable Prediction Model for Individual Prognosis Or Diagnosis) were developed in 2015 to improve the reporting of tools that aid the diagnosis and prognosis used by doctors. Their widespread uptake by medical practitioners, who use such models to estimate the probability that a specific condition is present or may occur in the future, has helped improve the transparency and accuracy of decision-making and significantly improved patient care.

But research methods have moved on since 2015, and we are witnessing an acceleration of studies that are developing prediction models using AI, specifically machine learning methods. Transparency is one of the six core principles underpinning the WHO guidance on ethics and governance of artificial intelligence for health. TRIPOD+AI has therefore been developed to provide a framework and set of reporting standards to boost reporting of studies developing and evaluating AI prediction models regardless of the modeling approach.

The TRIPOD+AI guidelines were developed by a consortium of international investigators, led by researchers from the University of Oxford alongside researchers from other leading institutions across the world, health care professionals, industry, regulators, and journal editors. The development of the new guidance was informed by research highlighting poor and incomplete reporting of AI studies, a Delphi survey, and an online consensus meeting.

Gary Collins, Professor of Medical Statistics at the Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences (NDORMS), University of Oxford, and lead researcher in TRIPOD, says, "There is enormous potential for artificial intelligence to improve health care from earlier diagnosis of patients with lung cancer to identifying people at increased risk of heart attacks. We're only just starting to see how this technology can be used to improve patient outcomes.

"Deciding whether to adopt these tools is predicated on transparent reporting. Transparency enables errors to be identified, facilitates appraisal of methods and ensures effective oversight and regulation. Transparency can also create more trust and influence patient and public acceptability of the use of prediction models in health care."

The TRIPOD+AI statement consists of a 27-item checklist that supersedes TRIPOD 2015. The checklist details reporting recommendations for each item and is designed to help researchers, peer reviewers, editors, policymakers and patients understand and evaluate the quality of the study methods and findings of AI-driven research.

A key change in TRIPOD+AI has been an increased emphasis on trustworthiness and fairness. Prof. Carl Moons, UMC Utrecht said, "While these are not new concepts in prediction modeling, AI has drawn more attention to these as reporting issues. A reason for this is that many AI algorithms are developed on very specific data sets that are sometimes not even from studies or could simply be drawn from the internet.

"We also don't know which groups or subgroups were included. So to ensure that studies do not discriminate against any particular group or create inequalities in health care provision, and to ensure decision-makers can trust the source of the data, these factors become more important."

Dr. Xiaoxuan Liu and Prof Alastair Denniston, Directors of the NIHR Incubator for Regulatory Science in AI & Digital Health care and co-authors of TRIPOD+AI, explained, "Many of the most important applications of AI in medicine are based on prediction models. We were delighted to support the development of TRIPOD+AI which is designed to improve the quality of evidence in this important area of AI research."

TRIPOD 2015 helped change the landscape of clinical research reporting, bringing minimum reporting standards to prediction models. The original guidelines have been cited over 7,500 times, featured in multiple journals' instructions to authors, and been included in WHO and NICE briefing documents.

"I hope the TRIPOD+AI will lead to a marked improvement in reporting, reduce waste from incompletely reported research and enable stakeholders to arrive at an informed judgment based on full details on the potential of the AI technology to improve patient care and outcomes that cut through the hype in AI-driven health care innovations," concluded Collins.



Published on 16.4.2024 in Vol 26 (2024)

User-Centered Development of a Patient Decision Aid for Choice of Early Abortion Method: Multi-Cycle Mixed Methods Study

Authors of this article:


Original Paper

  • Kate J Wahl 1 , MSc   ; 
  • Melissa Brooks 2 , MD   ; 
  • Logan Trenaman 3 , PhD   ; 
  • Kirsten Desjardins-Lorimer 4 , MD   ; 
  • Carolyn M Bell 4 , MD   ; 
  • Nazgul Chokmorova 4 , MD   ; 
  • Romy Segall 2 , BSc, MD   ; 
  • Janelle Syring 4 , MD   ; 
  • Aleyah Williams 1 , MPH   ; 
  • Linda C Li 5 , PhD   ; 
  • Wendy V Norman 4, 6 * , MD, MHSc   ; 
  • Sarah Munro 1, 3 * , PhD  

1 Department of Obstetrics and Gynecology, University of British Columbia, Vancouver, BC, Canada

2 Department of Obstetrics and Gynecology, Dalhousie University, Halifax, NS, Canada

3 Department of Health Systems and Population Health, School of Public Health, University of Washington, Seattle, WA, United States

4 Department of Family Practice, University of British Columbia, Vancouver, BC, Canada

5 Department of Physical Therapy, University of British Columbia, Vancouver, BC, Canada

6 Department of Public Health, Environments and Society, Faculty of Public Health and Policy, London School of Hygiene & Tropical Medicine, London, United Kingdom

*these authors contributed equally

Corresponding Author:

Kate J Wahl, MSc

Department of Obstetrics and Gynecology

University of British Columbia

4500 Oak Street

Vancouver, BC, V6H 3N1

Phone: 1 4165231923

Email: [email protected]

Background: People seeking abortion in early pregnancy have the choice between medication and procedural options for care. The choice is preference-sensitive—there is no clinically superior option and the choice depends on what matters most to the individual patient. Patient decision aids (PtDAs) are shared decision-making tools that support people in making informed, values-aligned health care choices.

Objective: We aimed to develop and evaluate the usability of a web-based PtDA for the Canadian context, where abortion care is publicly funded and available without legal restriction.

Methods: We used a systematic, user-centered design approach guided by principles of integrated knowledge translation. We first developed a prototype using available evidence for abortion seekers’ decisional needs and the risks, benefits, and consequences of each option. We then refined the prototype through think-aloud interviews with participants at risk of unintended pregnancy (“patient” participants). Interviews were audio-recorded and documented through field notes. Finally, we conducted a web-based survey of patients and health care professionals involved with abortion care, which included the System Usability Scale. We used content analysis to identify usability issues described in the field notes and open-ended survey questions, and descriptive statistics to summarize participant characteristics and close-ended survey responses.

Results: A total of 61 individuals participated in this study, 11 of whom took part in think-aloud interviews. Overall, the response to the PtDA was positive; however, the content analysis identified issues related to the design, language, and information about the process and experience of obtaining abortion care. In response, we adapted the PtDA into an interactive website and revised it to include consistent and plain language, additional information (eg, pain experience narratives), and links to additional resources on how to find an abortion health care professional. In total, 25 patients and 25 health care professionals completed the survey. The mean System Usability Scale score met the threshold for good usability among both patient and health care professional participants. Most participants felt that the PtDA was user-friendly (patients: n=25, 100%; health care professionals: n=22, 88%), was not missing information (patients: n=21, 84%; health care professionals: n=18, 72%), and that it was appropriate for patients to complete the PtDA before a consultation (patients: n=23, 92%; health care professionals: n=23, 92%). Open-ended responses focused on improving usability by reducing the length of the PtDA and making the website more mobile-friendly.

Conclusions: We systematically designed the PtDA to address an unmet need to support informed, values-aligned decision-making about the method of abortion. The design process responded to a need identified by potential users and addressed unique sensitivities related to reproductive health decision-making.

Introduction

One in 3 pregnancy-capable people in Canada will have an abortion in their lifetime, and most will seek care early in pregnancy [ 1 ]. Medication abortion (using the gold-standard mifepristone/misoprostol regimen) and procedural abortion are common, safe, and effective options for abortion care in the first trimester [ 2 , 3 ]. The choice between using medications and presenting to a facility for a procedure is a preference-sensitive decision; there is no clinically superior option and the choice depends on what matters most to the individual patient regarding the respective treatments and the features of those options [ 4 - 6 ].

The choice of method of abortion can involve a process of shared decision-making, in which the patient and health care professional share the best available evidence about options, and the patient is supported to consider those options and clarify an informed preference [ 7 ]. There are many types of interventions available to support shared decision-making, including interventions targeting health care professionals (eg, educational materials, meetings, outreach visits, audit and feedback, and reminders) and patients (eg, patient decision aids [PtDA], appointment preparation packages, empowerment sessions, printed materials, and shared decision-making education) [ 8 ]. Of these interventions, PtDAs are well-suited to address challenges to shared decision-making about the method of abortion, including limited patient knowledge, public misinformation about options, poor access to health care professionals with sufficient expertise, and apprehension about abortion counseling [ 9 ].

PtDAs are widely used interventions that support people in making informed, deliberate health care choices by explicitly describing the health problem and decision, providing information about each option, and clarifying patient values [ 10 ]. The results of the 2023 Cochrane systematic review of 209 randomized controlled trials indicate that, compared to usual care (eg, information pamphlets or webpages), the use of PtDAs results in increases in patient knowledge, expectations of benefits and harms, clarity about what matters most to them, and participation in making a decision [ 11 ]. Of the studies included in the systematic review, 1 tested the effect of a PtDA leaflet for method of abortion and found that patients eligible for both medication and procedural abortion who received the PtDA were more knowledgeable, and had lower risk perceptions and decisional conflict than those who were in the control group [ 12 ]. However, that PtDA was developed 20 years ago in the UK health system and was not publicly available. A recent environmental scan of PtDAs for a method of abortion found that other available options meet few of the criteria set by the International Patient Decision Aid Standards (IPDAS) collaboration and do not include language and content optimized for end users [ 9 , 13 ].

Consequently, no PtDAs for method of abortion were available in Canada at the time of this study. This was a critical gap for both patients and health care professionals as, in 2017, mifepristone/misoprostol medication abortion came to the market, offering a new method of choice for people seeking abortion in the first trimester [ 14 ]. Unlike in most jurisdictions, medication abortion in Canada is typically prescribed in primary care and dispensed in community pharmacies. Offering a PtDA in preparation for a brief primary care consultation allows the person seeking abortion more time to digest new information, consider their preferences, be ready to discuss their options, and make a quality decision.

In this context, we identified a need for a high-quality and publicly available PtDA to support people in making an informed choice about the method of abortion that reflects what is most important to them. Concurrently, our team was working in collaboration with knowledge users (health care professionals, patients, and health system decision makers) who were part of a larger project to investigate the implementation of mifepristone in Canada [ 15 , 16 ]. We, therefore, aimed to develop and evaluate the usability of a web-based PtDA for the Canadian context, where abortion care is publicly funded and available without legal restriction.

Study Design

We performed a mixed methods user-centered development and evaluation study informed by principles of integrated knowledge translation. Integrated knowledge translation is an approach to collaborative research in which researchers and knowledge users work together to identify a problem, conduct research as equal partners to address that problem, and coproduce research products that aim to impact health service delivery [ 17 ]. We selected this approach to increase the likelihood that our final PtDA would be relevant, usable, and used by patients and health care professionals in Canada [ 17 ]. The need for a PtDA was identified through engagement with health care professionals. In 2017, they highlighted the need for patients to be supported in choosing between procedural care—which historically represented more than 90% of abortions in Canada [ 18 ]—and the newly available medication option [ 19 , 20 ]. This need was reaffirmed in 2022 by the Canadian federal health agency, Health Canada, which circulated a request for proposals to generate “evidence-based, culturally-relevant information aimed at supporting people in their reproductive decision-making and in accessing abortion services as needed” [ 21 ].

We operationalized integrated knowledge translation principles in a user-centered design process. User-centered design “grounds the characteristics of an innovation in information about the individuals who use that innovation, with a goal of maximizing ‘usability in context’” [ 22 ]. In PtDA development, user-centered design involves iteratively understanding users, developing and refining a prototype, and observing user interaction with the prototype [ 23 , 24 ]. Like integrated knowledge translation, this approach is predicated on the assumption that involving users throughout the process increases the relevance of the PtDA and the likelihood of successful implementation [ 24 ].

Our design process included the following steps ( Figure 1 ): identification of evidence about abortion patients’ decisional needs and the attributes of medication and procedural abortion that matter most from a patient perspective; development of a paper-based prototype; usability testing via think-aloud interviews with potential end users; refinement of the PtDA prototype into an interactive website; usability testing via a survey with potential end users and abortion health care professionals; and final revisions before launching the PtDA for real-world testing. Our systematic process was informed by user-centered methods for PtDA development [ 23 , 24 ], guidance from the IPDAS collaboration [ 25 - 27 ], and the Standards for Universal Reporting of Patient Decision Aid Evaluation checklist [ 10 ].

Figure 1. Steps of the user-centered design process.

Our multidisciplinary team included experts in shared decision-making (SM and LT), a PhD student in patient-oriented knowledge translation (KJW), experts in integrated knowledge translation with health care professionals and policy makers (WVN and SM), clinical experts in abortion counseling and care (WVN and MB), a medical undergraduate student (RS), a research project coordinator (AW), and family medicine residents (KD-L, CMB, NC, and JS) who had an interest in abortion care. Additionally, a panel of experts external to the development process reviewed the PtDA for clinical accuracy following each revision of the prototype. These experts included coauthors of the national Society of Obstetricians and Gynaecologists of Canada (SOGC) clinical practice guidelines for abortion care in Canada. They were invited to this project because of their knowledge of first-trimester abortion care as well as their ability to support the implementation of the PtDA in guidelines and routine clinical practice.

Ethical Considerations

The research was approved by the University of British Columbia Children’s and Women’s Research Ethics Board (H16-01006) and the Nova Scotia Health Research Ethics Board (1027637). In each round of testing, participants received a CAD $20 (US $14.75) Amazon gift card by email for their participation.

Preliminary Work: Identification of Evidence

We identified the decisional needs of people seeking early abortion care using a 2018 systematic review of reasons for choosing an abortion method [ 28 ], an additional search that identified 1 study conducted in Canada following the 2017 availability of mifepristone/misoprostol medication abortion [ 29 ], and the SOGC clinical practice guidelines [ 2 , 3 ]. The review identified several key factors that matter most for patient choice of early abortion method: perceived simplicity and “naturalness,” fear of complications or bleeding, fear of anesthesia or surgery, timing of the procedure, and chance of sedation. The additional Canadian study found that the time required to complete the abortion and side effects were important factors. According to the SOGC clinical practice guidelines, the key information to communicate to the patient includes gestational age limits and the risk of complications with increasing gestational age [ 2 , 3 ]. The guidelines also indicate that wait times, travel times, and cost considerations may be important in a person’s choice of abortion method and should be addressed [ 2 , 3 ].

We compiled a long list of attributes for our expert panel and then consolidated and refined the attribute list through each stage of the prototype evaluation. For evidence of how these factors differed for medication and procedural abortion, we drew primarily from the SOGC clinical practice guidelines for abortion [ 2 , 3 ]. For cost considerations, we described the range of federal, provincial, and population-specific programs that provide free coverage of abortion care for people in Canada.

Step 1: Developing the Prototype

Our goal was to produce an interactive, web-based PtDA that would be widely accessible to people seeking an abortion in Canada by leveraging the widespread use of digital health information, especially among reproductive-aged people [ 30 ]. Our first prototype was based on a previously identified paper-based question-and-answer comparison grid that presented evidence-based information about the medication and procedural options [ 9 , 31 ]. We calculated readability by inputting the plain text of the paper-based prototype into a Simple Measure of Gobbledygook (SMOG) Index calculator [ 32 ].
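
The SMOG grade itself comes from a published formula: grade = 1.0430 × √(polysyllables × 30 / sentences) + 3.1291. The sketch below implements that formula; the text-parsing helpers (regex sentence splitting and a vowel-group syllable heuristic) are our own rough approximations, not the calculator the authors used.

```python
import math
import re

def smog_grade(polysyllable_count: int, sentence_count: int) -> float:
    """SMOG grading formula (McLaughlin, 1969), normalized to 30 sentences."""
    return 1.0430 * math.sqrt(polysyllable_count * (30 / sentence_count)) + 3.1291

def count_syllables(word: str) -> int:
    # Crude heuristic: count runs of vowels as syllables (at least 1 per word).
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def smog_from_text(text: str) -> float:
    # Split sentences on terminal punctuation; count words of 3+ syllables.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    polysyllables = sum(1 for w in words if count_syllables(w) >= 3)
    return smog_grade(polysyllables, len(sentences))
```

For example, 25 polysyllabic words in a 30-sentence sample gives 1.0430 × √25 + 3.1291 ≈ 8.3, comparable to the grade level reported for the prototype.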

We made 2 intentional deviations from common practices in PtDA development [ 33 ]. First, we did not include an “opt-out” or “do nothing” option, which would describe the natural course of pregnancy. We chose to exclude this option to ensure clarity for users regarding the decision point; specifically, our decision point of interest was the method of abortion, not the choice to terminate or continue a pregnancy. Second, we characterized attributes of the options as key points rather than positive and negative features to avoid imposing value judgments onto subjective features (eg, having the abortion take place at home may be beneficial for some people but may be a deterrent for others).

Step 2: Usability Testing of the Prototype

We first conducted usability testing involving think-aloud interviews with patient participants to assess the paper-based prototype. Inclusion criteria included people aged 18-49 years assigned female at birth who resided in Canada and could speak and read English. In January 2020, we recruited participants for the first round of think-aloud interviews [ 34 ] via email and poster advertising circulated to (1) a network of parent research advisors who were convened to guide a broader program of research about pregnancy and childbirth in British Columbia, Canada, and (2) a clinic providing surgical abortion care in Nova Scotia, Canada, as well as snowball sampling with participants. We purposively sought to advertise this study with these populations to ensure variation in age, ethnicity, level of education, parity, and abortion experience. Interested individuals reviewed the study information form and provided consent to participate before scheduling an interview. The interviewer asked participants to think aloud as they navigated the prototype, for example, describing what they liked or disliked, missing information, or lack of clarity. The interviewer noted the participant’s feedback on a copy of the prototype during the interview. Finally, the participant responded to questions adapted from the System Usability Scale [ 35 ], a measure designed to collect subjective ratings of a product’s usability, and completed a brief demographic questionnaire. The interviews were conducted via videoconferencing and were audio recorded. We deidentified the qualitative data and assigned each participant a unique identifier. Then, the interviewer listened to the recording and revised their field notes with additional information, including relevant quotes.

For the analysis of think-aloud interviews, we used inductive content analysis to describe the usability and acceptability of different elements of the PtDA [ 36 ]. Three family medicine residents (KD-L, CMB, and NC), under guidance from a senior coauthor (SM), completed open coding to develop a list of initial categories, which we grouped under higher-order headings. We then organized these results in a table to illustrate usability issues (categories), illustrative participant quotes, and modifications to make. We then used the results of the interviews to adapt the prototype into a web-based format, which we tested via further think-aloud interviews and a survey with people capable of becoming pregnant and health care professionals involved with abortion care.

Step 3: Usability Testing of the Website

For the web-based format, we used DecideApp PtDA open-source software, which provides a sustainable solution to the problems of low quality and high maintenance costs faced by web-based PtDAs by allowing developers to host, maintain, and update their tools at no cost. This software has been user-tested and can be accessed by phone, tablet, or computer [ 37 , 38 ]. It organizes a PtDA into 6 sections: Introduction, About Me, My Values, My Choice, Review, and Next Steps. In the My Values section, an interactive values clarification exercise allows users to rank and make trade-offs between attributes of the options. The final pages provide an opportunity for users to make a choice, complete a knowledge self-assessment, and consider the next steps to access their chosen method.
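
To illustrate the kind of rank-and-trade-off logic a values clarification exercise can use, here is a hypothetical rank-sum weighting sketch. The attribute names, weighting scheme, and scoring below are invented for illustration only and are not the DecideApp software's actual algorithm.

```python
# Hypothetical values clarification sketch: convert a user's ranking of
# attributes into weights, then score each option by how well it satisfies
# each attribute. Illustrative only; not DecideApp's scoring logic.

def weights_from_ranking(ranked_attributes):
    """Rank-sum weights: the top-ranked attribute gets the largest weight,
    and all weights sum to 1."""
    n = len(ranked_attributes)
    total = n * (n + 1) / 2
    return {attr: (n - i) / total for i, attr in enumerate(ranked_attributes)}

def option_scores(ranked_attributes, ratings):
    """Weighted sum of 0-1 ratings of how well each option meets each attribute."""
    weights = weights_from_ranking(ranked_attributes)
    return {
        option: sum(weights[a] * option_ratings[a] for a in weights)
        for option, option_ratings in ratings.items()
    }
```

With 3 ranked attributes, the weights are 3/6, 2/6, and 1/6, so an option that fully satisfies only the top-ranked attribute scores 0.5.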

From July to August 2020, we recruited patient and health care professional participants using Twitter and the email list of the Canadian Abortion Providers Support platform, respectively. Participants received an email with a link to the PtDA and were redirected to the survey once they had navigated through the PtDA. As above, inclusion criteria included people aged 18-49 years assigned female at birth who resided in Canada. Among health care professionals, we included eligible prescribers who may not have previously engaged in abortion care (family physicians, residents, nurse practitioners, and midwives), as well as allied health professionals and stakeholders who provide or support abortion care and who practiced in Canada. All participants had to speak and read English.

The survey included 3 sections: usability, implementation, and participant characteristics. The usability section consisted of the System Usability Scale [ 35 ], and purpose-built questions about what participants liked and disliked about the PtDA. The implementation section included open- and close-ended questions about how the PtDA compares to other resources and when it could be implemented in the care pathway. Patient participants also completed the Control Preference Scale, a validated measure used to determine their preferred role in decision-making (active, collaborative, or passive) [ 39 ]. Data on participant characteristics included gender, abortion experience (patient participants), and abortion practice (health care professional participants). We deidentified the qualitative data and assigned each participant a unique identifier. For the analysis of survey data, we characterized close-ended responses using descriptive statistics, and, following the analysis procedures described in Step 2 in the Methods section, used inductive content analysis of open-ended responses to generate categories associated with usability and implementation [ 36 ]. In 2021, we made minor revisions to the website based on the results of usability testing and published the PtDA for use in routine clinical care.
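
The System Usability Scale has a standard scoring procedure (Brooke, 1996): odd-numbered (positively worded) items contribute rating minus 1, even-numbered (negatively worded) items contribute 5 minus rating, and the sum is scaled by 2.5 to a 0-100 range. A minimal sketch (the function name and input validation are ours):

```python
def sus_score(responses):
    """Standard System Usability Scale scoring.

    responses: ten Likert ratings in questionnaire order, each from
    1 (strongly disagree) to 5 (strongly agree). Odd-numbered items are
    positively worded and contribute (rating - 1); even-numbered items are
    negatively worded and contribute (5 - rating). The sum of item scores
    (0-40) is multiplied by 2.5 to give a 0-100 score."""
    if len(responses) != 10 or any(not 1 <= r <= 5 for r in responses):
        raise ValueError("SUS requires ten responses on a 1-5 scale")
    item_scores = [r - 1 if i % 2 == 0 else 5 - r for i, r in enumerate(responses)]
    return sum(item_scores) * 2.5
```

A uniformly neutral respondent (all 3s) scores 50; published benchmarks commonly treat scores around 68 as average, with higher cut-offs often labeled "good." The specific threshold the authors applied is not stated in this excerpt.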

In the following sections, we outline the results of the development process including the results of the think-aloud interviews and survey, as well as the final decision aid prototype.

Our initial prototype, a paper-based question-and-answer comparison grid, presented evidence-based information comparing medication and procedural abortion. The first version of the prototype also included a second medication abortion regimen involving off-label use of methotrexate; however, we removed this option after review by the clinical expert panel, who advised us that this regimen is used very infrequently in Canada compared with the gold-standard medication abortion option, mifepristone. Other changes at this stage involved clarifying the scope of practice (health care professionals other than gynecologists can perform a procedural abortion), abortion practice (gestational age limit and how the medication is taken), the abortion experience (what to expect in terms of bleeding), and risk (removing information about second- and third-trimester abortion). The updated prototype was finalized by a scientist (SM) and trainee (KJW) with expertise in PtDA development. The prototype (see Multimedia Appendix 1 ) was ultimately 4 pages long and described 18 attributes of each option framed as Frequently Asked Questions, including abortion eligibility (How far along in pregnancy can I be?), duration (How long does it take?), and side effects (How much will I bleed?). The SMOG grade level was 8.4.

Participant Characteristics

We included 11 participants in think-aloud interviews between January and July 2020, including 7 recruited through a parent research advisory network and 4 individuals who had recently attended an abortion clinic. The mean interview duration was 36 minutes (SD 6 minutes). The participants ranged in age from 31 to 37 years. All had been pregnant and 8 out of 11 (73%) participants had a personal experience of abortion (4 participants who had recently attended an abortion clinic and 4 participants from the parent research advisory who disclosed their experience during the interview). The characteristics of the sample are reported in Table 1 .

Overall, participants had a positive view of the paper-based, comparison grid PtDA. In total, 1 participant who had recently sought an abortion said, “I think this is great and super helpful. It would’ve been awesome to have had access to this right away … I don’t think there’s really anything missing from here that I was Googling about” (DA010). The only participant who expressed antichoice views indicated that the PtDA would be helpful to someone seeking to terminate a pregnancy (DA001). Another participant said, “[The PtDA] is not biased, it’s not like you’re going to die. It’s a fact, you know the facts and then you decide whether you want it or not. A lot of people feel it’s so shameful and judgmental, but this is very straightforward. I like it.” (DA002). Several participants stated they felt more informed and knowledgeable about the options.

In response to questions adapted from the System Usability Scale, all 11 participants agreed that the PtDA was easy to use, that most people could learn to use it quickly, and that they felt very confident using the prototype, and disagreed that it was awkward to use. In total, 8 (73%) participants agreed with the statement that the components of the PtDA were well-integrated. A majority of participants disagreed with the statements that the website was unnecessarily complex (n=8, 73%), that they would need the support of an expert to use it (n=8, 73%), that it was too inconsistent (n=9, 82%), and that they would need to learn a lot before using it (n=8, 73%). Further, 2 (18%) participants agreed with the statements that the PtDA was unnecessarily complex and that they would need to learn a lot before using it. Furthermore, 1 (9%) participant agreed with the statement that the PtDA was too inconsistent.

Through inductive analysis of think-aloud interviews, we identified 4 key usability categories: design, language, process, and experience.

Participants liked the side-by-side comparison layout, appreciated the summary of key points to remember, and said that overall, the presented information was clear. For example, 1 participant reflected, “I think it’s very clear ... it’s very simplistic, people will understand the left-hand column is for medical abortion and the right-hand column is for surgical.” (DA005) Some participants raised concerns about the aesthetics of the PtDA, difficulties recalling the headers across multiple pages, and the overall length of the PtDA.

Participants sought to clarify language at several points in the PtDA. Common feedback was that the gestational age limit for the medication and the procedure should be clarified. Participants also pointed out inconsistent use of language (eg, doctor and health care professional) and medical jargon.

Several participants were surprised to learn that family doctors could provide abortion care. Others noted that information about the duration—including travel time—and number of appointments for both medication and procedural abortion could be improved. In addition to clarifying the abortion process, several participants suggested including additional information and resources to help identify an abortion health care professional, understand when to seek help for abortion-related complications, and access emotional support. It was also important to participants that financial impacts (eg, hospital parking and menstrual pads) were included for each option.

Participants provided insight into the description of the physical, psychological, and other consequences associated with the abortion medication and procedure. Participants who had experienced both types of abortion care felt that the description of pain that “may be worse than a period” was inaccurate. Other participants indicated that information about perceived and real risks was distressing or felt out of place, such as correcting myths about future fertility or breast cancer. Some participants indicated that patient stories would be valuable, saying, for example, “I think what might be nice to help with the decision-making process is reading stories of people’s experiences” (DA006).

Modifications Made

Changes made based on these findings are described in Table 2 . Key user-centered modifications included transitioning to a web-based format with a consistent color scheme, clarifying who the PtDA is for (for typical pregnancies up to 10 weeks), adding information about telemedicine to reflect guidelines for the provision of abortion during pandemics, and developing brief first-person qualitative descriptions of the pain intensity for each option.

Through analysis of the interviews and consultation with our panel of clinical experts, we also identified that, among the 18 initial attributes in our prototype, 7 had the most relative importance to patients in choosing between medication and procedural abortion. These attributes also represented important differences between each option which forced participants to consider the trade-offs they were willing to make. Thus we moved all other potential attributes into an information section (My Options) that supported the user to gain knowledge before clarifying what mattered most to them by considering the differences between options (My Values).

a PtDA: patient decision aid.

b SOGC: Society of Obstetricians and Gynaecologists of Canada.

Description of the PtDA

As shown in Figure 2 , the revised version of the PtDA resulting from our systematic process is an interactive website. Initially, the title was My Body, My Choice ; however, this was changed to avoid association with antivaccine campaigns that co-opted this reproductive rights slogan. The new title, It’s My Choice or C’est Mon Choix , was selected for its easy use in English and French. The PtDA leads the user through 6 sections:

  • The Introduction section provides the user with information about the decision and the PtDA, as well as grids comparing positive and negative features of the abortion pill and procedure, including their chance of benefits (eg, effectiveness), harms (eg, complications), and other relevant factors (eg, number of appointments and cost).
  • The About Me section asks the user to identify any contraindications to the methods. It then prompts users to consider their privacy needs and gives examples of how this relates to each option (eg, the abortion pill can be explained to others as a miscarriage; procedural care can be completed quickly).
  • The My Values section includes a values clarification exercise, in which the user selects and weights (on a 0-100 scale) the relative importance of at least 3 of 7 decisional attributes: avoiding pain, avoiding bleeding, having the abortion at home, having an experience that feels like a miscarriage, having fewer appointments, needing less time off for recovery, and having a companion during the abortion.
  • The My Choice section highlights 1 option, based on the attribute weights the user assigned in the My Values section. For instance, if a user strongly preferred to avoid bleeding and have fewer appointments, the software would suggest that a procedural abortion would be a better match. For a user who preferred having the abortion at home and having a companion present, the software would suggest that a medication abortion would be a better match. The user selects the option they prefer.
  • The Review section asks the user to complete the 4-item SURE (Sure of Myself, Understand Information, Risk-Benefit Ratio, Encouragement) screening test [ 41 ], and advises them to talk with an expert if they answer “no” to any of the questions. This section also includes information phone lines to ensure that users can seek confidential, accurate, and nonjudgmental support.
  • Lastly, in the Next Steps section, users see a summary of their choice and the features that matter most to them, instructions for how to save the results, keep the results private, and find an abortion health care professional. Each section of the PtDA includes a “Leave” button in case users need to navigate away from the website quickly.
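The My Values weighting and My Choice matching described above can be sketched as follows. This is a hypothetical illustration only: the attribute-to-option mapping and the scoring rule are assumptions for the sketch, not the PtDA software's actual implementation.

```python
# Hypothetical sketch of the My Values / My Choice matching logic.
# Which option each attribute favors is an illustrative assumption.
FAVORS = {
    "avoid_pain": "procedural",
    "avoid_bleeding": "procedural",
    "fewer_appointments": "procedural",
    "less_recovery_time": "procedural",
    "at_home": "medication",
    "feels_like_miscarriage": "medication",
    "companion_present": "medication",
}

def suggest_option(weights: dict) -> str:
    """Given user-assigned weights (0-100) for at least 3 of the 7
    attributes, sum the weight favoring each option and return the
    better match (tie handling here is also an assumption)."""
    if len(weights) < 3:
        raise ValueError("select at least 3 of the 7 attributes")
    totals = {"medication": 0, "procedural": 0}
    for attribute, weight in weights.items():
        totals[FAVORS[attribute]] += weight
    return max(totals, key=totals.get)

# A user who strongly prefers avoiding bleeding and fewer appointments:
suggest_option({"avoid_bleeding": 90, "fewer_appointments": 80, "at_home": 20})
# A user who prefers being at home with a companion present:
suggest_option({"at_home": 85, "companion_present": 70, "avoid_pain": 30})
```

The two example calls mirror the scenarios in the My Choice description: the first would suggest a procedural abortion, the second a medication abortion; as in the PtDA, the suggestion is only a prompt, and the user selects the option they prefer.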

We calculated readability by inputting the plain text of the web-based PtDA into a SMOG Index calculator [ 32 ], which assessed the reading level as grade 9.2.
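The SMOG Index behind this readability check follows McLaughlin's published formula, which counts words of three or more syllables (polysyllables) across a sample of sentences, conventionally 30. A minimal sketch, with illustrative counts that are not the PtDA's actual text statistics:

```python
import math

def smog_grade(polysyllable_count: int, sentence_count: int) -> float:
    """SMOG grade (McLaughlin, 1969): estimated US school grade level
    from the number of 3+ syllable words in a sentence sample,
    normalized to a 30-sentence sample."""
    return 3.1291 + 1.0430 * math.sqrt(polysyllable_count * (30 / sentence_count))

# Illustrative counts only:
round(smog_grade(30, 30), 1)  # a 30-sentence sample with 30 polysyllables → 8.8
```

In practice the hard part is syllable counting, which is why tools such as the calculator cited above are typically used rather than a hand-rolled counter.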

To ensure users’ trust in the information as accurate and unbiased, we provided a data declaration on the landing page: “the clinical information presented in this decision aid comes from Society of Obstetricians and Gynaecologists best practice guidelines.” On the landing page, we also specified: “This website was developed by researchers at the University of British Columbia and Dalhousie University. This tool is not supported or connected to any pharmaceutical company.”

Survey Results

A total of 50 participants, including 25 patients and 25 health care professionals, reviewed the PtDA website and completed the survey between January and March 2021. The majority of patient (n=23, 92%) and health care professional (n=23, 92%) participants identified as cisgender women. In total, 4 (16%) patient participants reported one or more previous abortions in various clinical settings. More than half (n=16, 64%) of health care professionals offered care in private medical offices, with other locations including sexual health clinics, community health centers, and youth clinics. Many health care professionals were family physicians (n=11, 44%), and other common types were nurse practitioners (n=7, 28%) and midwives (n=3, 12%). The mean proportion of each health care professional’s clinical practice devoted to abortion care was 18% (SD 13%). Most health care professional respondents (n=18, 72%) were involved with the provision of medication, but not procedural, abortion care. The characteristics of patient and health care professional participants are reported in Table 3 .

a In total, 4 participants reported a history of abortion care, representing 6 abortion procedures.

b Not available.

The mean System Usability Score met the threshold for good usability among both patient (mean 85.7, SD 8.6) and health care professional (mean 80, SD 12) participants, although some health care professionals agreed with the statement, “I found the website to be unnecessarily complex,” (see Multimedia Appendix 3 for the full distribution of responses from patient and health care professionals). All 25 patients and 22 out of 25 (88%) health care professional respondents indicated that the user-friendliness of the PtDA was good or the best imaginable. When asked what they liked most about the PtDA, both participant groups described the ease of use, comparison of options, and the explicit values clarification exercise. When asked what they liked least about the PtDA, several health care professionals and some patients pointed out that it was difficult to use on a cell phone. A summary of usability results is presented in Table 4 .
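The System Usability Scale scores reported here are conventionally computed by rescaling ten 1-5 Likert responses to a 0-100 range. A minimal sketch of the standard scoring procedure (Brooke's SUS):

```python
def sus_score(responses: list) -> float:
    """Standard System Usability Scale score (0-100) from ten 1-5
    Likert responses. Odd-numbered items are positively worded and
    contribute (response - 1); even-numbered items are negatively
    worded and contribute (5 - response). The sum is scaled by 2.5."""
    if len(responses) != 10:
        raise ValueError("SUS has exactly 10 items")
    total = sum(
        (r - 1) if i % 2 == 0 else (5 - r)  # i is 0-based, so even i = odd item
        for i, r in enumerate(responses)
    )
    return total * 2.5

# Agreeing (4) with positive items and disagreeing (2) with negative ones:
sus_score([4, 2, 4, 2, 4, 2, 4, 2, 4, 2])  # → 75.0
```

By Bangor and colleagues' benchmarks [ 35 ], mean scores in the range observed here (80-86) fall in the "good" to "excellent" band.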

In total, 21 (84%) patients and 18 (72%) health care professionals felt that the PtDA was not missing any information needed to decide about the method of abortion in early pregnancy. While acknowledging that it is “hard to balance being easy to read/understand while including enough accurate clinical information,” several health care professionals and some patients indicated that the PtDA was too long and repetitive. Among the 4 (16%) patient participants who felt information was missing, the most common suggestion was a tool for locating an abortion health care professional. The 7 (28%) health care professionals who felt information was missing primarily made suggestions about the medical information included in the PtDA (eg, listing midwives as health care professionals with abortion care in scope of practice and the appropriateness of gender-inclusive terminology) and the accessibility of information for various language and cultural groups.

a Not available.

Implementation

Participants viewed the PtDA as a positive addition to current resources. Patients with a history of abortion care described looking for the information on the internet and speaking with friends, family members, and health care professionals. Compared with these sources of information, many patients liked the credibility and anonymity of the PtDA, whereas some disliked that it was less personal than a conversation. Further, 18 (72%) health care professional participants said that the PtDA would add to or replace the resources they currently use in practice. Compared with these other resources, health care professionals liked that the PtDA could be explored by patients independently and that it would support them in thinking about the option that was best for them. The disadvantages of the PtDA compared with existing resources were the length—which health care professionals felt would make it difficult to use in a clinical interaction—and the lack of localized information. In total, 23 (92%) patient participants and 23 (92%) health care professional participants felt that they would use the PtDA before a consultation.

Principal Results

We designed a web-based, interactive PtDA for the choice of method of abortion in early pregnancy [ 42 ], taking a user-centered approach that involved usability testing with 36 patients and 25 health care professionals. Both patient and health care professional participants indicated that the PtDA had good usability and would be a valuable resource for decision-making. This PtDA fills a critical need to support the autonomy of patients and shared decision-making with their health care professional related to the preference-sensitive choice of method of abortion.

Comparison With Prior Work

A 2017 systematic review and environmental scan found that existing PtDAs for the method of abortion are of suboptimal quality [ 9 ]. Of the 50 PtDAs identified, all but 1 were created by groups without expertise in decision aid design (eg, abortion services, reproductive health organizations, and consumer health information organizations); the development process for the single exception, a UK-based pamphlet-style PtDA, was not reported. The remaining PtDAs were noninteractive websites, smartphone apps, and PDFs that were not tested with users. The authors found that the information about methods of abortion was presented in a disorganized, inconsistent, and unequal way. Subsequent work has found that existing PtDAs emphasize medical (versus social, emotional, and practical) attributes, do not include values clarification, and can be biased to persuade users of a certain method [ 13 ].

To address some of the challenges identified in the literature, we systematically structured and designed elements of the PtDA following newly proposed IPDAS criteria (eg, showing positive and negative features with equal detail) [ 33 ]. We included an explicit values-clarification exercise, which a recent meta-analysis found to decrease decisional conflict and values-incongruent choices [ 43 ].

We based the decision aid on comprehensive and up-to-date scientific evidence related to the effectiveness and safety of medication abortion and procedural abortion; however, less evidence was available for nonmedical attributes. For example, many existing PtDAs incorrectly frame privacy as a “factual advantage” of medication abortion [ 13 ]. To address this, we included privacy in the About Me section as something that means “different things to different people.” Similarly, evidence suggests that patients who do not feel appropriately informed about the pain associated with their method of abortion are less satisfied with their choice [ 44 , 45 ]; and the degree of pain experienced varies across options and among individuals. Following the suggestion of patient participants to include stories and recognizing that evidence for the inclusion of narratives in PtDAs is emerging [ 46 ], we elected to develop brief first-person qualitative descriptions of the pain experience. The inclusion of narratives in PtDAs may be effective in supporting patients to avoid surprise and regret, to minimize affective forecasting errors, and to “visualize” their health condition or treatment experience [ 46 ]. Guided by the narrative immersion model, our goal was to provide a “real-world preview” of the pain experience [ 47 ].

In addition to integrating user perspectives on the optimal tone, content, and format of the PtDA, user testing provided evidence to inform the future implementation of the PtDA. A clear barrier to the completion of the PtDA during the clinical encounter from the health care professional perspective was its length, supporting the finding of a recent rapid realist review, which theorized that health care professionals are less likely to use long or otherwise complex PtDAs that are difficult to integrate into routine practice [ 48 ]. However, 46 out of 50 (92%) participants endorsed the use of the PtDA by the patient alone before the initial consultation, which was aligned with patient participants’ preference to take an active role in making the final decision about their method of abortion as well as the best practice of early, pre-encounter distribution of PtDAs [ 48 ].

A unique feature of this PtDA was that it resulted from a broader program of integrated knowledge translation designed to support access to medication abortion once mifepristone became available in Canada in 2017. Guided by the principle that including knowledge users in research yields results that are more relevant and useful [ 49 ], we developed the PtDA in response to a knowledge user need, involved health care professional users as partners in our research process, including as coauthors, and integrated feedback from the expert panel. This parallels a theory of PtDA implementation that proposes that early involvement of health care professionals in PtDA development “creates a sense of ownership, increases buy-in, helps to legitimize content, and ensures the PtDA (content and delivery) is consistent with current practice” thereby increasing the likelihood of PtDA integration into routine clinical settings [ 48 ].

Viewed through an integrated knowledge translation lens, our findings point toward future areas of work to support access to abortion in Canada. Several patient participants indicated a need for tools to identify health care professionals who offer abortion care. Some shared that their primary health care professionals did not offer medication abortion despite it being within their scope of practice, and instead referred them to an abortion clinic for counseling and care. We addressed this challenge in the PtDA by including links to available resources, such as confidential phone lines that link patients to health care professionals in their region. On the website, we also indicated that patient users could ask their primary care providers whether they provide abortion care; however, we acknowledge that this may place the patient in a vulnerable position if their health care professional is uncomfortable with, or unable to, provide this service for any reason. Future work should investigate opportunities to shorten the pathway to this time-sensitive care, including how to support patients who use the decision aid to act on their informed preference for the method of abortion. This work may involve developing a tool for patients to talk to their primary care provider about prescribing medication abortion.

Strengths and Limitations

Several factors affect the interpretation of our work. Although potential patient users participated in the iterative development process, the patient perspective was not represented on a formal advisory panel in the same way that the health care professional experts were. Participant characteristics collected for the think-aloud interviews demonstrated that our patient sample did not include people with lower educational attainment, for whom the grade level and length of the PtDA could present a barrier [ 50 ]. Any transfer of the PtDA to jurisdictions outside Canada must consider how legal, regulatory, and other contextual factors affect the choice of the method of abortion. Since this study was completed, we have explored additional strategies to address these concerns, including additional user testing with people from equity-deserving groups, drop-down menus to adjust the level of detail, further plain language editing, and videos illustrating core content. Since the focus of this study was usability, we did not assess PtDA effectiveness, including impact on knowledge, decisional conflict, choice predisposition and decision, or concordance; however, a randomized controlled trial currently underway will measure the impact of the PtDA on these outcomes in a clinical setting. Finally, our integrated knowledge translation approach added to the robustness of our study by ensuring that health care professionals and patients were equal partners in the research process. One impact of this partnered approach is that our team has received funding support from Health Canada to implement the website on a national scale for people across Canada considering their abortion options [ 51 ].

Conclusions

The PtDA provides people choosing a method of early abortion and their health care professionals with a resource to understand methods of abortion available in the Canadian context and support to make a values-aligned choice. We designed the PtDA using a systematic approach that included both patient and health care professional participants to help ensure its relevance and usability. Our future work will seek to evaluate the implementation of the PtDA in clinical settings, create alternate formats to enhance accessibility, and develop a sustainable update policy. We will also continue to advance access to abortion care in Canada with our broader integrated knowledge translation program of research.

Acknowledgments

The authors thank the participants for contributing their time and expertise to the design of this tool. Family medicine residents CMB, NC, KD-L, and JS were supported by Sue Harris grants, Department of Family Practice, University of British Columbia. KJW was supported by the Vanier Scholar Award (2020-23). SM was supported by a Michael Smith Health Research BC Scholar Award (18270). WVN was supported by a Canadian Institutes of Health Research and Public Health Agency of Canada Chair in Applied Public Health Research (2014-2024, CPP-329455-107837). All grants underwent external peer review for scientific quality. The funders played no role in the design of this study, data collection, analysis, interpretation, or preparation of this paper.

Data Availability

Our ethics approval specifies that the primary data are not available.

Authors' Contributions

KJW, SM, and MB conceived of and designed this study. CMB, NC, and KD-L led interview data collection, analysis, and interpretation with input from SM. RS and JS led survey data collection, analysis, and interpretation with input from SM and MB. AW, LCL, and WVN contributed to the synthesis and interpretation of results. KJW, SM, and LT wrote the first draft of this paper, and all authors contributed to this paper’s revisions and approved the final version.

Conflicts of Interest

None declared.

Patient decision aid prototype.

Raw data for pain narratives.

Full distribution of System Usability Scale scores for patients and providers.

  • Norman WV. Induced abortion in Canada 1974-2005: trends over the first generation with legal access. Contraception. 2012;85(2):185-191. [ CrossRef ] [ Medline ]
  • Costescu D, Guilbert E, Bernardin J, Black A, Dunn S, Fitzsimmons B, et al. Medical abortion. J Obstet Gynaecol Can. 2016;38(4):366-389. [ CrossRef ] [ Medline ]
  • Costescu D, Guilbert É. No. 360-induced abortion: surgical abortion and second trimester medical methods. J Obstet Gynaecol Can. 2018;40(6):750-783. [ CrossRef ] [ Medline ]
  • Wennberg JE. Unwarranted variations in healthcare delivery: implications for academic medical centres. BMJ. 2002;325(7370):961-964. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Elwyn G, Frosch D, Rollnick S. Dual equipoise shared decision making: definitions for decision and behaviour support interventions. Implement Sci. 2009;4:75. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Sepucha KR, Mulley AG. A practical approach to measuring the quality of preference-sensitive decisions. In: Edwards A, Elwyn G, editors. Shared Decision-Making in Health Care: Achieving Evidence-based Patient Choice. Oxford. Oxford University Press; 2009;151-156.
  • Elwyn G, Laitner S, Coulter A, Walker E, Watson P, Thomson R. Implementing shared decision making in the NHS. BMJ. 2010;341:c5146. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Légaré F, Adekpedjou R, Stacey D, Turcotte S, Kryworuchko J, Graham ID, et al. Interventions for increasing the use of shared decision making by healthcare professionals. Cochrane Database Syst Rev. 2018;7(7):CD006732. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Donnelly KZ, Elwyn G, Thompson R. Quantity over quality-findings from a systematic review and environmental scan of patient decision aids on early abortion methods. Health Expect. 2018;21(1):316-326. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Sepucha KR, Abhyankar P, Hoffman AS, Bekker HL, LeBlanc A, Levin CA, et al. Standards for Universal Reporting of Patient Decision Aid Evaluation studies: the development of SUNDAE checklist. BMJ Qual Saf. 2018;27(5):380-388. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Stacey D, Lewis KB, Smith M, Carley M, Volk R, Douglas EE, et al. Decision aids for people facing health treatment or screening decisions. Cochrane Database Syst Rev. 2024;1(1):CD001431. [ CrossRef ] [ Medline ]
  • Wong SSM, Thornton JG, Gbolade B, Bekker HL. A randomised controlled trial of a decision-aid leaflet to facilitate women's choice between pregnancy termination methods. BJOG. 2006;113(6):688-694. [ CrossRef ] [ Medline ]
  • Donnelly KZ, Elwyn G, Theiler R, Thompson R. Promoting or undermining quality decision making? A qualitative content analysis of patient decision aids comparing surgical and medication abortion. Womens Health Issues. 2019;29(5):414-423. [ CrossRef ] [ Medline ]
  • Grant K. Long-awaited abortion pill Mifegymiso makes Canadian debut. The Globe and Mail. 2017. URL: https://www.theglobeandmail.com/news/national/long-awaited-abortion-pill-mifegymiso-rolls-out-in-canada/article33695167/ [accessed 2023-04-03]
  • Norman WV, Munro S, Brooks M, Devane C, Guilbert E, Renner R, et al. Could implementation of mifepristone address Canada's urban-rural abortion access disparity: a mixed-methods implementation study protocol. BMJ Open. 2019;9(4):e028443. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Munro SB, Dunn S, Guilbert ER, Norman WV. Advancing reproductive health through policy-engaged research in abortion care. Semin Reprod Med. 2022;40(5-06):268-276. [ CrossRef ] [ Medline ]
  • Dunn SI, Bhati DK, Reszel J, Kothari A, McCutcheon C, Graham ID. Understanding how and under what circumstances integrated knowledge translation works for people engaged in collaborative research: metasynthesis of IKTRN casebooks. JBI Evid Implement. 2023;21(3):277-293. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Norman WV, Guilbert ER, Okpaleke C, Hayden AS, Lichtenberg ES, Paul M, et al. Abortion health services in Canada: results of a 2012 national survey. Can Fam Physician. 2016;62(4):e209-e217. [ FREE Full text ] [ Medline ]
  • Munro S, Guilbert E, Wagner MS, Wilcox ES, Devane C, Dunn S, et al. Perspectives among Canadian physicians on factors influencing implementation of mifepristone medical abortion: a national qualitative study. Ann Fam Med. 2020;18(5):413-421. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Munro S, Wahl K, Soon JA, Guilbert E, Wilcox ES, Leduc-Robert G, et al. Pharmacist dispensing of the abortion pill in Canada: diffusion of innovation meets integrated knowledge translation. Implement Sci. 2021;16(1):76. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Health care policy and strategies program. Call for proposals: funding opportunities for sexual and reproductive health. Health Canada. URL: https://www.canada.ca/en/health-canada/programs/health-care-policy-strategies-program.html [accessed 2024-03-14]
  • Dopp AR, Parisi KE, Munson SA, Lyon AR. A glossary of user-centered design strategies for implementation experts. Transl Behav Med. 2019;9(6):1057-1064. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Vaisson G, Provencher T, Dugas M, Trottier ME, Dansokho SC, Colquhoun H, et al. User involvement in the design and development of patient decision aids and other personal health tools: a systematic review. Med Decis Making. 2021;41(3):261-274. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Witteman HO, Maki KG, Vaisson G, Finderup J, Lewis KB, Steffensen KD, et al. Systematic development of patient decision aids: an update from the IPDAS collaboration. Med Decis Making. 2021;41(7):736-754. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Joseph-Williams N, Newcombe R, Politi M, Durand MA, Sivell S, Stacey D, et al. Toward minimum standards for certifying patient decision aids: a modified Delphi consensus process. Med Decis Making. 2014;34(6):699-710. [ CrossRef ] [ Medline ]

Edited by T Leung; submitted 07.05.23; peer-reviewed by G Sebastian, R French, B Zikmund-Fisher; comments to author 11.01.24; revised version received 23.02.24; accepted 25.02.24; published 16.04.24.

©Kate J Wahl, Melissa Brooks, Logan Trenaman, Kirsten Desjardins-Lorimer, Carolyn M Bell, Nazgul Chokmorova, Romy Segall, Janelle Syring, Aleyah Williams, Linda C Li, Wendy V Norman, Sarah Munro. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 16.04.2024.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.

What Is Qualitative Research? | Methods & Examples

Published on June 19, 2020 by Pritha Bhandari. Revised on June 22, 2023.

Qualitative research involves collecting and analyzing non-numerical data (e.g., text, video, or audio) to understand concepts, opinions, or experiences. It can be used to gather in-depth insights into a problem or generate new ideas for research.

Qualitative research is the opposite of quantitative research, which involves collecting and analyzing numerical data for statistical analysis.

Qualitative research is commonly used in the humanities and social sciences, in subjects such as anthropology, sociology, education, health sciences, history, etc.

Examples of qualitative research questions include:

  • How does social media shape body image in teenagers?
  • How do children and adults interpret healthy eating in the UK?
  • What factors influence employee retention in a large organization?
  • How is anxiety experienced around the world?
  • How can teachers integrate social issues into science curriculums?

Table of contents

  • Approaches to qualitative research
  • Qualitative research methods
  • Qualitative data analysis
  • Advantages of qualitative research
  • Disadvantages of qualitative research
  • Other interesting articles
  • Frequently asked questions about qualitative research

Qualitative research is used to understand how people experience the world. While there are many approaches to qualitative research, they tend to be flexible and focus on retaining rich meaning when interpreting data.

Common approaches include grounded theory, ethnography, action research, phenomenological research, and narrative research. They share some similarities, but emphasize different aims and perspectives.

Note that qualitative research is at risk for certain research biases, including the Hawthorne effect, observer bias, recall bias, and social desirability bias. While not always totally avoidable, awareness of potential biases as you collect and analyze your data can prevent them from impacting your work too much.


Each of the research approaches involves using one or more data collection methods. These are some of the most common qualitative methods:

  • Observations: recording what you have seen, heard, or encountered in detailed field notes.
  • Interviews:  personally asking people questions in one-on-one conversations.
  • Focus groups: asking questions and generating discussion among a group of people.
  • Surveys: distributing questionnaires with open-ended questions.
  • Secondary research: collecting existing data in the form of texts, images, audio or video recordings, etc.
For example, in a study of company culture, you might combine several of these methods:

  • You take field notes with observations and reflect on your own experiences of the company culture.
  • You distribute open-ended surveys to employees across all the company’s offices by email to find out if the culture varies across locations.
  • You conduct in-depth interviews with employees in your office to learn about their experiences and perspectives in greater detail.

Qualitative researchers often consider themselves “instruments” in research because all observations, interpretations and analyses are filtered through their own personal lens.

For this reason, when writing up your methodology for qualitative research, it’s important to reflect on your approach and to thoroughly explain the choices you made in collecting and analyzing the data.

Qualitative data can take the form of texts, photos, videos and audio. For example, you might be working with interview transcripts, survey responses, fieldnotes, or recordings from natural settings.

Most types of qualitative data analysis share the same five steps:

  • Prepare and organize your data. This may mean transcribing interviews or typing up fieldnotes.
  • Review and explore your data. Examine the data for patterns or repeated ideas that emerge.
  • Develop a data coding system. Based on your initial ideas, establish a set of codes that you can apply to categorize your data.
  • Assign codes to the data. For example, in qualitative survey analysis, this may mean going through each participant’s responses and tagging them with codes in a spreadsheet. As you go through your data, you can create new codes to add to your system if necessary.
  • Identify recurring themes. Link codes together into cohesive, overarching themes.
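As an illustration, steps 3 to 5 above can be sketched programmatically. This is a minimal sketch, not a real analysis workflow: the codebook, the example survey responses, and the keyword-matching shortcut are all hypothetical, and in practice codes are assigned by careful manual reading rather than string matching.

```python
# Sketch of steps 3-5: apply a codebook to open-ended survey
# responses, then group codes into overarching themes.
# Codebook, responses, and theme map are hypothetical examples.
from collections import Counter

codebook = {            # step 3: keyword -> code
    "flexible hours": "work_life_balance",
    "remote": "work_life_balance",
    "manager": "leadership",
    "promotion": "career_growth",
}

themes = {              # step 5: code -> theme
    "work_life_balance": "Autonomy",
    "leadership": "Support",
    "career_growth": "Support",
}

responses = [
    "I value the flexible hours and remote options.",
    "My manager rarely gives feedback.",
    "No clear path to promotion here.",
]

# Step 4: assign codes (here by naive keyword matching).
coded = []
for text in responses:
    codes = {code for kw, code in codebook.items() if kw in text.lower()}
    coded.append((text, codes))

# Step 5: link codes to themes and count occurrences.
theme_counts = Counter(themes[c] for _, codes in coded for c in codes)
print(theme_counts)  # -> Counter({'Support': 2, 'Autonomy': 1})
```

In a real project the spreadsheet tagging described above plays the role of `coded`, and new codes are added to the codebook as they emerge during reading.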

There are several specific approaches to analyzing qualitative data. Although these methods share similar processes, they emphasize different concepts.

Qualitative research often tries to preserve the voice and perspective of participants and can be adjusted as new research questions arise. Qualitative research is good for:

  • Flexibility

The data collection and analysis process can be adapted as new ideas or patterns emerge. They are not rigidly decided beforehand.

  • Natural settings

Data collection occurs in real-world contexts or in naturalistic ways.

  • Meaningful insights

Detailed descriptions of people’s experiences, feelings and perceptions can be used in designing, testing or improving systems or products.

  • Generation of new ideas

Open-ended responses mean that researchers can uncover novel problems or opportunities that they wouldn’t have thought of otherwise.


Researchers must consider practical and theoretical limitations in analyzing and interpreting their data. Qualitative research suffers from:

  • Unreliability

The real-world setting often makes qualitative research unreliable because of uncontrolled factors that affect the data.

  • Subjectivity

Due to the researcher’s primary role in analyzing and interpreting data, qualitative research cannot be replicated. The researcher decides what is important and what is irrelevant in data analysis, so interpretations of the same data can vary greatly.

  • Limited generalizability

Small samples are often used to gather detailed data about specific contexts. Despite rigorous analysis procedures, it is difficult to draw generalizable conclusions because the data may be biased and unrepresentative of the wider population.

  • Labor-intensive

Although software can be used to manage and record large amounts of text, data analysis often has to be checked or performed manually.

If you want to know more about statistics, methodology, or research bias, make sure to check out some of our other articles with explanations and examples.

  • Chi square goodness of fit test
  • Degrees of freedom
  • Null hypothesis
  • Discourse analysis
  • Control groups
  • Mixed methods research
  • Non-probability sampling
  • Quantitative research
  • Inclusion and exclusion criteria

Research bias

  • Rosenthal effect
  • Implicit bias
  • Cognitive bias
  • Selection bias
  • Negativity bias
  • Status quo bias

Quantitative research deals with numbers and statistics, while qualitative research deals with words and meanings.

Quantitative methods allow you to systematically measure variables and test hypotheses . Qualitative methods allow you to explore concepts and experiences in more detail.

There are five common approaches to qualitative research:

  • Grounded theory involves collecting data in order to develop new theories.
  • Ethnography involves immersing yourself in a group or organization to understand its culture.
  • Narrative research involves interpreting stories to understand how people make sense of their experiences and perceptions.
  • Phenomenological research involves investigating phenomena through people’s lived experiences.
  • Action research links theory and practice in several cycles to drive innovative changes.

Data collection is the systematic process by which observations or measurements are gathered in research. It is used in many different contexts by academics, governments, businesses, and other organizations.

There are various approaches to qualitative data analysis, but they all share five steps in common:

  • Prepare and organize your data.
  • Review and explore your data.
  • Develop a data coding system.
  • Assign codes to the data.
  • Identify recurring themes.

The specifics of each step depend on the focus of the analysis. Some common approaches include textual analysis, thematic analysis, and discourse analysis.


ORIGINAL RESEARCH article

Developing Key Indicators for Sustainable Food System: A Comprehensive Application of Stakeholder Consultations and Delphi Method (Provisionally Accepted)

  • 1 Institute for Population and Social Research, Mahidol University, Thailand

The final, formatted version of the article will be published soon.

The overall status of the food system in Thailand is currently unknown. Although several national and international reports describe the Thai food system, they are neither accurate nor relevant enough to inform policy. This study aims to develop indicators that measure the sustainability of Thailand's food system. We adopted the seven-dimensional metrics proposed by Gustafson to facilitate comparative analysis of food systems, namely (1) food nutrient adequacy; (2) ecosystem stability; (3) food availability and affordability; (4) sociocultural well-being; (5) food safety; (6) resilience; and (7) waste and loss reduction. Three rounds of the Delphi method were convened, in which 48 Thai stakeholders recruited from government, NGOs, and academia assessed the proposed indicators using the Item Objective Congruence (IOC) index, a procedure used in test development to evaluate content validity at the item development stage. In each round, the average IOC for each item was considered, together with stakeholders' comments, in deciding whether to retain, remove, or add indicators. Stakeholders were contacted by mail and email so that they could complete their assessments independently. A total of 88 indicators entered the first round of Delphi assessment and 73 the second, resulting in 62 final indicators after the third round. These 62 indicators, comprising 190 sub-indicators, are too numerous for policy use. As indicator development is ongoing, the 62 indicators will be field-tested in different settings to assess data feasibility; the final prioritized set will then be submitted for policy decisions to support regular national monitoring and inform policy toward a sustainable food system in Thailand.
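The IOC screen described in the abstract reduces to a simple average. Each expert rates an item +1 (clearly congruent with the objective), 0 (unsure), or -1 (clearly incongruent), and IOC is the mean rating across the panel. A minimal sketch follows; the 0.5 retention cutoff is a commonly cited convention in the IOC literature, not necessarily the threshold this study used, and the panel ratings are hypothetical.

```python
# Sketch of Item Objective Congruence (IOC) scoring.
# Ratings are in {-1, 0, +1}; IOC is their mean.
def ioc(ratings):
    """Mean of expert ratings for one candidate indicator."""
    return sum(ratings) / len(ratings)

def decision(ratings, cutoff=0.5):
    """Retain the item if its IOC meets the cutoff (0.5 is a
    common convention, assumed here, not stated in the study)."""
    return "retain" if ioc(ratings) >= cutoff else "revise or remove"

panel = [1, 1, 0, 1, -1, 1]  # hypothetical ratings from 6 experts
print(round(ioc(panel), 2), decision(panel))  # -> 0.5 retain
```

In the study's workflow, this score is computed per item in each Delphi round and weighed alongside the stakeholders' free-text comments before an indicator is retained or dropped.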

Keywords: Sustainable food system, indicator, Food security, resilience, Agriculture, Delphi method

Received: 08 Jan 2024; Accepted: 17 Apr 2024.

Copyright: © 2024 Rittirong, Chuenglertsiri, Nitnara and Phulkerd. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

* Correspondence: Dr. Jongjit Rittirong, Institute for Population and Social Research, Mahidol University, Salaya, Thailand


April 17, 2024


Novel method proposed to design high-efficiency guest components for ternary organic solar cells

by Chen Na, Chinese Academy of Sciences


A research group led by Prof. Ge Ziyi at the Ningbo Institute of Materials Technology and Engineering of the Chinese Academy of Sciences has proposed a key strategy for optimizing guest components to minimize non-radiative voltage losses and thus achieve high-efficiency ternary organic solar cells (OSCs).

This work was published in Advanced Materials.

OSCs have attracted considerable attention in the field of organic electronic devices due to their light weight, good mechanical flexibility, and transparency. The ternary strategy, in which a guest component is introduced into a host binary system, is considered one of the most effective and facile ways to achieve OSCs with excellent power conversion efficiencies (PCEs).

Various efficient guest components have been developed for binary host systems, but there is still no effective way to predict the effectiveness of guest components in improving device efficiency.

Using density functional theory calculations, the researchers designed and synthesized three asymmetrical non-fullerene acceptors (ANFs) with different electrostatic potential (ESP) distributions, namely ANF-1, ANF-2, and ANF-3.

These three ANFs were used as guest acceptors in the well-known host system of D18:N3. Then the effects of the introduction of the guest component on the photovoltaic properties were carefully investigated.

The experimental results showed that a large ESP difference between the guest acceptor and the host system leads to strong intermolecular interactions and high miscibility of the two molecules, which improves the luminescence efficiency of the blend film and the electroluminescence external quantum efficiency (EQE_EL) of the device, thus reducing the voltage loss of ternary OSCs.

Compared with ANF-1 and ANF-2-based ternary films, the ANF-3-based ternary film exhibited a larger ESP difference between the guest acceptor and the host acceptor, thus exhibiting higher and more balanced mobility, and smaller phase separation size, contributing to a more effective charge transport.

As a result, the D18:N3:ANF-3 ternary OSC achieved the highest PCE of 18.93% with a low voltage loss of 0.236 eV.

This work provides an important guideline for designing high-efficiency guest acceptors to further improve the open-circuit voltage in the ternary OSCs: increase the ESP difference between the guest acceptor and the host acceptor.

Furthermore, this rule has been verified in other reported high-efficiency ternary OSCs, demonstrating its excellent generality.

This study may shed light on the design and development of single-junction OSCs with PCE above 20%.





COMMENTS

  1. Research Methods

    The research methods you use depend on the type of data you need to answer your research question. If you want to measure something or test a hypothesis, use quantitative methods. If you want to explore ideas, thoughts and meanings, use qualitative methods. If you want to analyze a large amount of readily-available data, use secondary data.

  2. Research Methods--Quantitative, Qualitative, and More: Overview

    About Research Methods. This guide provides an overview of research methods, how to choose and use them, and supports and resources at UC Berkeley. As Patten and Newhart note in the book Understanding Research Methods, "Research methods are the building blocks of the scientific enterprise. They are the "how" for building systematic knowledge.

  3. Research Methods

    Quantitative research methods are used to collect and analyze numerical data. This type of research is useful when the objective is to test a hypothesis, determine cause-and-effect relationships, and measure the prevalence of certain phenomena. Quantitative research methods include surveys, experiments, and secondary data analysis.

  4. Research Methodology

    The research methodology is an important section of any research paper or thesis, as it describes the methods and procedures that will be used to conduct the research. It should include details about the research design, data collection methods, data analysis techniques, and any ethical considerations.

  5. What Is a Research Methodology?

    Step 1: Explain your methodological approach. Step 2: Describe your data collection methods. Step 3: Describe your analysis method. Step 4: Evaluate and justify the methodological choices you made. Tips for writing a strong methodology chapter. Other interesting articles.

  6. What Is Research Methodology? Definition + Examples

    As we mentioned, research methodology refers to the collection of practical decisions regarding what data you'll collect, from who, how you'll collect it and how you'll analyse it. Research design, on the other hand, is more about the overall strategy you'll adopt in your study. For example, whether you'll use an experimental design ...

  7. How To Choose The Right Research Methodology

    Mixed methods-based research, as you'd expect, attempts to bring these two types of research together, drawing on both qualitative and quantitative data.Quite often, mixed methods-based studies will use qualitative research to explore a situation and develop a potential model of understanding (this is called a conceptual framework), and then go on to use quantitative methods to test that ...

  8. Types of Research Designs Compared

    The words you use to describe your research depend on your discipline and field. In general, though, the form your research design takes will be shaped by: ... But the type of research is only the first step: next, you have to make more concrete decisions about your research methods and the details of the study. Read more about creating a ...

  9. Research Methods

    The research methods you use depend on the type of data you need to answer your research question. If you want to measure something or test a hypothesis, use quantitative methods. If you want to explore ideas, thoughts, and meanings, use qualitative methods. If you want to analyse a large amount of readily available data, use secondary data.

  10. Overview of Research Methodology

    This research methods guide will help you choose a methodology and launch into your research project. Data collection and data analysis are research methods that can be applied to many disciplines. There is Qualitative research and Quantitative Research. The focus of this guide, includes most popular methods including: surveys.

  11. Research Methods: What are research methods?

    What are research methods. Research methods are the strategies, processes or techniques utilized in the collection of data or evidence for analysis in order to uncover new information or create better understanding of a topic. There are different types of research methods which use different tools for data collection.

  12. What are research methods?

    Research methods are different from research methodologies because they are the ways in which you will collect the data for your research project. The best method for your project largely depends on your topic, the type of data you will need, and the people or items from which you will be collecting data. The following boxes below contain a ...

  13. 15 Types of Research Methods (2024)

    Research methods refer to the strategies, tools, and techniques used to gather and analyze data in a structured way in order to answer a research question or investigate a hypothesis (Hammond & Wellington, 2020). Generally, we place research methods into two categories: quantitative and qualitative. Each has its own strengths and weaknesses ...

  14. Types of Research Methods (With Best Practices and Examples)

    Professionals use research methods while studying medicine, human behavior and other scholarly topics. There are two main categories of research methods: qualitative research methods and quantitative research methods. Quantitative research methods involve using numbers to measure data. Researchers can use statistical analysis to find ...

  15. Choosing the Right Research Methodology: A Guide

    These methods are used to understand how language is used in real-world situations, identify common themes or overarching ideas, and describe and interpret various texts. Data analysis for qualitative research typically includes discourse analysis, thematic analysis, and textual analysis. Quantitative research methodology:

  16. Research Methods Guide: Research Design & Method

    Most frequently used methods include: Observation / Participant Observation. Surveys. Interviews. Focus Groups. Experiments. Secondary Data Analysis / Archival Study. Mixed Methods (combination of some of the above) One particular method could be better suited to your research goal than others, because the data you collect from different ...

  17. Research Design

    Quantitative research. Quantitative research is expressed in numbers and graphs. It is used to test or confirm theories and assumptions. This type of research can be used to establish generalizable facts about a topic. Common quantitative methods include experiments, observations recorded as numbers, and surveys with closed-ended questions.

  18. A tutorial on methodological studies: the what, when, how and why

    The methods used in many methodological studies have been borrowed from systematic and scoping reviews. This practice has influenced the direction of the field, with many methodological studies including searches of electronic databases, screening of records, duplicate data extraction and assessments of risk of bias in the included studies.

  19. Research Methods In Psychology

    Olivia Guy-Evans, MSc. Research methods in psychology are systematic procedures used to observe, describe, predict, and explain behavior and mental processes. They include experiments, surveys, case studies, and naturalistic observations, ensuring data collection is objective and reliable to understand and explain psychological phenomena.

  20. What are research methodologies?

    According to Dawson (2019),a research methodology is the primary principle that will guide your research. It becomes the general approach in conducting research on your topic and determines what research method you will use. A research methodology is different from a research method because research methods are the tools you use to gather your ...

  21. How to Construct a Mixed Methods Research Design

    Mixed methods research is the type of research in which a researcher or team of researchers combines elements of qualitative and quantitative research approaches (e.g., use of qualitative and quantitative viewpoints, data collection, analysis, inference techniques) for the broad purposes of breadth and depth of understanding and corroboration. ...

  22. Twelve tips for using rapid research methods in health professions

    Rapid research methods have been increasingly used in healthcare, especially for qualitative research studies and literature reviews. An essential aspect of using rapid research methods is pragmatism, in which there is a balance between the constraints of the short time frame (typically less than 3 months), the available resources, and the ...

  23. New guidelines reflect growing use of AI in health care research

    But research methods have moved on since 2015, and we are witnessing an acceleration of studies that are developing prediction models using AI, specifically machine learning methods.

  24. UF Health researchers find new method to use MRIs to delve into secrets

    On a team led by UF Institute on Aging researcher Yenisel Cruz-Almeida, Ph.D., Valdes-Hernandez used a publicly available AI model developed elsewhere that was trained on 14,000 brain MRIs to predict brain age. UF Health researchers then "retrained" the model using AI techniques by having the model analyze an additional 6,281 non-research ...

  25. Journal of Medical Internet Research

    Background: The COVID-19 pandemic has had a significant global impact, with millions of cases and deaths. Research highlights the persistence of symptoms over time (post-COVID-19 condition), a situation of particular concern in children and young people with symptoms. Social media such as Twitter (subsequently rebranded as X) could provide valuable information on the impact of the post ...

  26. Journal of Medical Internet Research

    Methods: We used a systematic, user-centered design approach guided by principles of integrated knowledge translation. We first developed a prototype using available evidence for abortion seekers' decisional needs and the risks, benefits, and consequences of each option. ...

  27. What Is Qualitative Research?

    Qualitative research involves collecting and analyzing non-numerical data (e.g., text, video, or audio) to understand concepts, opinions, or experiences. It can be used to gather in-depth insights into a problem or generate new ideas for research. Qualitative research is the opposite of quantitative research, which involves collecting and ...

  28. Frontiers

    The overall status of the food system in Thailand is currently unknown. Although several national and international reports describe Thailand's food system, they are not accurate or relevant enough to inform policy. This study aims to develop indicators which measure Thailand's sustainable food system. We adopted seven-dimensional metrics proposed by Gustafson to facilitate a comparative analysis of ...

  29. Malnutrition enteropathy in Zambian and Zimbabwean children ...

    Per-protocol analysis used ANCOVA, adjusted for baseline biomarker value, sex, oedema, HIV status, diarrhoea, weight-for-length Z-score, and study site, with pre-specified significance of P < 0.10.
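An ANCOVA of the kind described above (an end-of-study outcome modelled on treatment, adjusted for the baseline value and pre-specified covariates) can be sketched with `statsmodels`. The variable names and the simulated data below are illustrative assumptions, not the study's actual data or model specification:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated data; column names loosely mirror the covariates listed in the
# abstract (baseline biomarker, sex, oedema, HIV status, diarrhoea,
# weight-for-length Z-score, study site). Values are random, not study data.
rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "biomarker_end": rng.normal(10, 2, n),
    "biomarker_baseline": rng.normal(9, 2, n),
    "sex": rng.choice(["M", "F"], n),
    "oedema": rng.choice([0, 1], n),
    "hiv": rng.choice([0, 1], n),
    "diarrhoea": rng.choice([0, 1], n),
    "wlz": rng.normal(-2, 1, n),
    "site": rng.choice(["Zambia", "Zimbabwe"], n),
    "treatment": rng.choice(["intervention", "control"], n),
})

# ANCOVA: ordinary least squares with the baseline biomarker and the other
# pre-specified covariates entered alongside the treatment term.
model = smf.ols(
    "biomarker_end ~ treatment + biomarker_baseline + sex + oedema"
    " + hiv + diarrhoea + wlz + site",
    data=df,
).fit()

# The abstract pre-specifies significance at P < 0.10 for the treatment effect.
p_treatment = model.pvalues["treatment[T.intervention]"]
significant = p_treatment < 0.10
```

This mirrors the common trial convention that ANCOVA is simply a linear model with the baseline measurement included as a covariate; the P < 0.10 threshold is taken from the abstract, not a general recommendation.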

  30. Novel method proposed to design high-efficiency guest components for

    A research group led by Prof. Ge Ziyi at the Ningbo Institute of Materials Technology and Engineering of the Chinese Academy of Sciences has proposed a key strategy for optimizing guest components to minimize non-radiative voltage losses and thus achieve high-efficiency ternary organic solar cells (OSCs). This work was published in Advanced ...