Systematic Reviews

  • Levels of Evidence
  • Evidence Pyramid
  • Joanna Briggs Institute

The evidence pyramid is often used to illustrate the development of evidence. At the base of the pyramid are animal research and laboratory studies, where ideas are first developed. As you progress up the pyramid, the volume of available information decreases, but its relevance to the clinical setting increases.

Meta Analysis  – systematic review that uses quantitative methods to synthesize and summarize the results.

Systematic Review  – summary of the medical literature that uses explicit methods to perform a comprehensive literature search and critical appraisal of individual studies and that uses appropriate statistical techniques to combine these valid studies.

Randomized Controlled Trial – Participants are randomly allocated into an experimental group or a control group and followed over time for the variables/outcomes of interest.

Cohort Study – Involves identification of two groups (cohorts) of patients, one which received the exposure of interest, and one which did not, and following these cohorts forward for the outcome of interest.

Case Control Study – study which involves identifying patients who have the outcome of interest (cases) and patients without the same outcome (controls), and looking back to see if they had the exposure of interest.

Case Series   – report on a series of patients with an outcome of interest. No control group is involved.

  • Levels of Evidence from The Centre for Evidence-Based Medicine
  • The JBI Model of Evidence Based Healthcare
  • How to Use the Evidence: Assessment and Application of Scientific Evidence From the National Health and Medical Research Council (NHMRC) of Australia. Book must be downloaded; not available to read online.

When searching for evidence to answer clinical questions, aim to identify the highest level of available evidence. Evidence hierarchies can help you strategically identify which resources to use for finding evidence, as well as which search results are most likely to be "best".                                             

Hierarchy of Evidence diagram (text-based version below).

Image source: Evidence-Based Practice: Study Design from Duke University Medical Center Library & Archives. This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

The hierarchy of evidence (also known as the evidence-based pyramid) is depicted as a triangular representation of the levels of evidence with the strongest evidence at the top which progresses down through evidence with decreasing strength. At the top of the pyramid are research syntheses, such as Meta-Analyses and Systematic Reviews, the strongest forms of evidence. Below research syntheses are primary research studies progressing from experimental studies, such as Randomized Controlled Trials, to observational studies, such as Cohort Studies, Case-Control Studies, Cross-Sectional Studies, Case Series, and Case Reports. Non-Human Animal Studies and Laboratory Studies occupy the lowest level of evidence at the base of the pyramid.

  • Finding Evidence-Based Answers to Clinical Questions – Quickly & Effectively A tip sheet from the health sciences librarians at UC Davis Libraries to help you get started with selecting resources for finding evidence, based on type of question.

Systematic Reviews: Levels of evidence and study design

Levels of evidence.

"Levels of Evidence" tables have been developed which outline and grade the best evidence. However, the review question will determine the choice of study design.

Secondary sources provide analysis, synthesis, interpretation and evaluation of primary works. Secondary sources are not evidence, but rather provide a commentary on and discussion of evidence. e.g. systematic review

Primary sources contain the original data and analysis from research studies. No outside evaluation or interpretation is provided. An example of a primary literature source is a peer-reviewed research article. Other primary sources include preprints, theses, reports and conference proceedings.

Levels of evidence for primary sources fall into the following broad categories of study designs   (listed from highest to lowest):

  • Experimental: RCTs (Randomised Controlled Trials)
  • Quasi-experimental studies (Non-randomised control studies, Before-and-after study, Interrupted time series)
  • Observational studies (Cohort study, Case-control study, Case series) 

Based on information from Centre for Reviews and Dissemination. (2009). Systematic reviews: CRD's guidance for undertaking reviews in health care. Retrieved from http://www.york.ac.uk/inst/crd/index_guidance.htm

Hierarchy of Evidence Pyramid

"Levels of Evidence" are often represented in as a pyramid, with the highest level of evidence at the top:


Types of Study Design

The following definitions are adapted from the Glossary in "Systematic reviews: CRD's guidance for undertaking reviews in health care", Centre for Reviews and Dissemination, University of York:

  • Systematic Review The application of strategies that limit bias in the assembly, critical appraisal, and synthesis of all relevant studies on a specific topic and research question. 
  • Meta-analysis A systematic review which uses quantitative methods to summarise the results
  • Randomized controlled clinical trial (RCT) A group of patients is randomised into an experimental group and a control group. These groups are followed up for the variables/outcomes of interest.
  • Cohort study Involves the identification of two groups (cohorts) of patients, one which did receive the exposure of interest, and one which did not, and following these cohorts forward for the outcome of interest.
  • Case-control study Involves identifying patients who have the outcome of interest (cases) and control patients without the same outcome, and looking to see if they had the exposure of interest.
  • Critically appraised topic A short summary of an article from the literature, created to answer a specific clinical question.


Systematic Review Process: best practices

Levels of evidence.


Hierarchy of evidence pyramid


The pyramidal shape qualitatively integrates the amount of evidence generally available from each type of study design and the strength of evidence expected from each design. Study designs in ascending levels of the pyramid generally exhibit increased quality of evidence and reduced risk of bias.


Scholarly publications

The Joanna Briggs Institute Reviewers’ Manual 2015 Methodology for JBI Scoping Reviews

Clarifying differences between review designs and methods

Systematic review or scoping review? Guidance for authors when choosing between a systematic or scoping review approach

A scoping review of scoping reviews: advancing the approach and enhancing the consistency


Levels of Evidence


Resources That Rate The Evidence

  • ACP Smart Medicine
  • Agency for Healthcare Research and Quality
  • Clinical Evidence
  • Cochrane Library
  • Health Services/Technology Assessment Texts (HSTAT)
  • PDQ® Cancer Information Summaries from NCI
  • Trip Database

Critically Appraised Individual Articles

  • Evidence-Based Complementary and Alternative Medicine
  • Evidence-Based Dentistry
  • Evidence-Based Nursing
  • Journal of Evidence-Based Dental Practice

Grades of Recommendation


Filtered evidence:

  • Level I: Evidence from a systematic review of all relevant randomized controlled trials.
  • Level II: Evidence from a meta-analysis of all relevant randomized controlled trials.
  • Level III: Evidence from evidence summaries developed from systematic reviews
  • Level IV: Evidence from guidelines developed from systematic reviews
  • Level V: Evidence from meta-syntheses of a group of descriptive or qualitative studies
  • Level VI: Evidence from evidence summaries of individual studies
  • Level VII: Evidence from one properly designed randomized controlled trial

Unfiltered evidence:

  • Level VIII: Evidence from nonrandomized controlled clinical trials, nonrandomized clinical trials, cohort studies, case series, case reports, and individual qualitative studies.
  • Level IX: Evidence from opinion of authorities and/or reports of expert committees

Two things to remember:

1. Studies in which randomization occurs represent a higher level of evidence than those in which subject selection is not random.

2. Controlled studies carry a higher level of evidence than those in which control groups are not used.

Strength of Recommendation Taxonomy (SORT)

  • SORT The American Academy of Family Physicians uses the Strength of Recommendation Taxonomy (SORT) to label key recommendations in clinical review articles. In general, only key recommendations are given a Strength-of-Recommendation grade. Grades are assigned on the basis of the quality and consistency of available evidence.

Except where otherwise noted, this work by SBU Libraries is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License .


What Level of Evidence Is a Systematic Review?


Given the explosion of medical literature, and the need to use high-quality evidence to inform healthcare decision-making, a hierarchical system was developed to classify available evidence. This hierarchy, known as the levels of evidence, is the cornerstone of evidence-based medicine. In this article, we will look at levels of evidence in further detail, and see where systematic reviews stand in this hierarchy.

What Are The Levels Of Evidence?

The Levels of Evidence, also known as the hierarchy of evidence, is a heuristic used to rank the relative strength of results obtained from scientific research. Because of the vast amount of available research, the evidence-based medicine movement organizes and assesses this volume and diversity of evidence with ‘evidence hierarchies’. In 2014, Jacob Stegenga defined an evidence hierarchy as ‘a rank ordering of methods according to the potential for that method to suffer from systematic bias’ [1]. The rank is usually determined by one or more parameters of the study design. The elements ordered in evidence hierarchies are usually different kinds of methods, and the property on which the ordering is based is taken to be the internal validity of a method relative to hypotheses about the efficacy of the tested medical intervention [2]. Therefore, at the top of the hierarchy are studies with the highest internal validity, or lowest risk of bias, relative to the tested intervention’s hypothesized efficacy.

The need for a hierarchical system to classify the available evidence was pointed out by Archibald Cochrane with the publication of ‘Effectiveness and Efficiency’ in 1972, in which he argued that decisions about medical treatment should be based on a systematic review of clinical evidence [3]. In 1979, the Canadian Task Force on the Periodic Health Examination published the first-ever ranking of medical evidence, proposing four quality levels [4]. These levels were used to assign an alphabetical grade to the strength of recommendations or interventions. The levels of evidence were further described and expanded by Sackett in 1989. Both systems placed randomized controlled trials (RCTs) at the highest level and case series or expert opinions at the lowest level, based on the probability of bias.

Gordon Guyatt, the Canadian physician who coined the term ‘evidence-based medicine’ in 1991, proposed another approach to classifying the strength of recommendations. In the ‘Users’ Guides to the Medical Literature’, Guyatt expanded the existing categorization to account for new systematic procedures for combining results from different studies [5]. Referencing Guyatt’s paper, Trisha Greenhalgh summarized the revised hierarchy as follows [6]:

  • Systematic reviews and Meta-analyses
  • Randomized controlled trials with definitive results (confidence intervals that do not overlap the threshold of clinically significant effect)
  • Randomized controlled trials with non-definitive results (a point estimate that suggests a clinically significant effect but with confidence intervals overlapping the threshold for this effect)
  • Cohort studies
  • Case-control studies
  • Cross-sectional surveys
  • Case reports


Where Do Systematic Reviews Stand in the Level of Evidence Hierarchy?

In most evidence hierarchies, well-conducted systematic reviews and meta-analyses sit at the top. Systematic reviews that include meta-analyses of methodologically sound RCTs with consistent results are considered the highest level of evidence [5]. This is because systematic reviews not only collate all the available evidence from a variety of sources but also critically appraise the quality of the evidence collected. The level of evidence attributed to a systematic review with meta-analysis also reflects its methodological design, which is focused on minimizing bias. Bias (of which there are many types) can confound the outcomes of a study such that it over- or underestimates the true treatment effect. Therefore, systematic reviews of randomized controlled trials, which are designed specifically to minimize bias from confounding factors, have become the highest level of evidence. However, the position of systematic reviews at the top is not absolute. For example:

  • The process of conducting a systematic review is rigorous, and it is estimated to take between 6 and 18 months to complete depending on the topic. The results generated by a systematic review could therefore be superseded by new evidence.
  • The results of a large, well-conducted randomized controlled trial (RCT) could be more convincing than a systematic review of smaller, inefficient RCTs.

Evidence-based medicine relies on ‘the best available evidence’, and to fully understand this one needs a clear knowledge of the hierarchy of evidence and how it can be used to formulate a grade of recommendation. It is critical to establish which evidence is the most authoritative for a particular subject. The levels of evidence provide a guide, and researchers need to be careful when interpreting their results. They specify a hierarchical order for different research designs based on their internal validity (What Are the Levels of Evidence? – Center for Evidence-Based Management).

  • Stegenga J (October 2014). “Down with the hierarchies”. Topoi. 33 (2): 313–22. doi:10.1007/s11245-013-9189-4. S2CID 109929514.
  • Borgerson K. Valuing evidence: bias and the evidence hierarchy of evidence-based medicine. Perspect Biol Med. 2009 Spring;52(2):218-33. doi: 10.1353/pbm.0.0086. PMID: 19395821.
  • Stavrou, A.; Challoumas, D.; Dimitrakakis, G. “Archibald Cochrane (1909–1988): the father of evidence-based medicine.” Interactive Cardiovascular and Thoracic Surgery, 18(1) (2014): 121-124.
  • Spitzer, W. et al. The periodic health examination. Canadian Task Force on the Periodic Health Examination. (1979). Canadian Medical Association journal, 121(9), 1193–1254.
  • Guyatt, G. H.; Sackett, D. L.; Sinclair, J. C.; Hayward, R.; Cook, D. J.; Cook, R. J. “Users’ guides to the medical literature IX. A method for grading health care recommendations.” JAMA, 274 (22) (1995): 1800-1804.
  • Greenhalgh T. How to read a paper. Getting your bearings (deciding what the paper is about). BMJ. 1997;315(7102):243-246. doi:10.1136/bmj.315.7102.243
  • What Are the Levels of Evidence? – Center for Evidence Based Management. cebma.org/faq/what-are-the-levels-of-evidence/.


Evidence-Based Research: Levels of Evidence Pyramid

Introduction.

One way to organize the different types of evidence involved in evidence-based practice research is the levels of evidence pyramid. The pyramid includes a variety of evidence types and levels.

  • systematic reviews
  • critically-appraised topics
  • critically-appraised individual articles
  • randomized controlled trials
  • cohort studies
  • case-controlled studies, case series, and case reports
  • Background information, expert opinion

Levels of evidence pyramid

The levels of evidence pyramid provides a way to visualize both the quality of evidence and the amount of evidence available. For example, systematic reviews are at the top of the pyramid, meaning they are both the highest level of evidence and the least common. As you go down the pyramid, the amount of evidence will increase as the quality of the evidence decreases.

[Levels of Evidence Pyramid diagram. EBM Pyramid and EBM Page Generator, copyright 2006 Trustees of Dartmouth College and Yale University. All Rights Reserved. Produced by Jan Glover, David Izzo, Karen Odato and Lei Wang.]

Filtered Resources

Filtered resources appraise the quality of studies and often make recommendations for practice. The main types of filtered resources in evidence-based practice are systematic reviews, critically-appraised topics, and critically-appraised individual articles.

See the Systematic reviews, Critically-appraised topics, and Critically-appraised individual articles sections below for links to resources where you can find each of these types of filtered information.

Systematic reviews

Authors of a systematic review ask a specific clinical question, perform a comprehensive literature review, eliminate the poorly done studies, and attempt to make practice recommendations based on the well-done studies. Systematic reviews include only experimental, or quantitative, studies, and often include only randomized controlled trials.

You can find systematic reviews in these filtered databases:

  • Cochrane Database of Systematic Reviews Cochrane systematic reviews are considered the gold standard for systematic reviews. This database contains both systematic reviews and review protocols. To find only systematic reviews, select Cochrane Reviews in the Document Type box.
  • JBI EBP Database (formerly Joanna Briggs Institute EBP Database) This database includes systematic reviews, evidence summaries, and best practice information sheets. To find only systematic reviews, click on Limits and then select Systematic Reviews in the Publication Types box. To see how to use the limit and find full text, please see our Joanna Briggs Institute Search Help page .

Open Access databases provide unrestricted access to and use of peer-reviewed and non peer-reviewed journal articles, books, dissertations, and more.

You can also find systematic reviews in this unfiltered database:


To learn more about finding systematic reviews, please see our guide:

  • Filtered Resources: Systematic Reviews

Critically-appraised topics

Authors of critically-appraised topics evaluate and synthesize multiple research studies. Critically-appraised topics are like short systematic reviews focused on a particular topic.

You can find critically-appraised topics in these resources:

  • Annual Reviews This collection offers comprehensive, timely collections of critical reviews written by leading scientists. To find reviews on your topic, use the search box in the upper-right corner.
  • Guideline Central This free database offers quick-reference guideline summaries organized by a new non-profit initiative that aims to fill the gap left by the sudden closure of AHRQ’s National Guideline Clearinghouse (NGC).
  • JBI EBP Database (formerly Joanna Briggs Institute EBP Database) To find critically-appraised topics in JBI, click on Limits and then select Evidence Summaries from the Publication Types box. To see how to use the limit and find full text, please see our Joanna Briggs Institute Search Help page .
  • National Institute for Health and Care Excellence (NICE) Evidence-based recommendations for health and care in England.
  • Filtered Resources: Critically-Appraised Topics

Critically-appraised individual articles

Authors of critically-appraised individual articles evaluate and synopsize individual research studies.

You can find critically-appraised individual articles in these resources:

  • EvidenceAlerts Quality articles from over 120 clinical journals are selected by research staff and then rated for clinical relevance and interest by an international group of physicians. Note: You must create a free account to search EvidenceAlerts.
  • ACP Journal Club This journal publishes reviews of research on the care of adults and adolescents. You can either browse this journal or use the Search within this publication feature.
  • Evidence-Based Nursing This journal reviews research studies that are relevant to best nursing practice. You can either browse individual issues or use the search box in the upper-right corner.

To learn more about finding critically-appraised individual articles, please see our guide:

  • Filtered Resources: Critically-Appraised Individual Articles

Unfiltered resources

You may not always be able to find information on your topic in the filtered literature. When this happens, you'll need to search the primary or unfiltered literature. Keep in mind that with unfiltered resources, you take on the role of reviewing what you find to make sure it is valid and reliable.

Note: You can also find systematic reviews and other filtered resources in these unfiltered databases.

The Levels of Evidence Pyramid includes unfiltered study types in this order of evidence, from higher to lower: randomized controlled trials, cohort studies, case-controlled studies, case series, and case reports.

You can search for each of these types of evidence in the following databases:

TRIP database

Background information & expert opinion.

Background information and expert opinions are not necessarily backed by research studies. They include point-of-care resources, textbooks, conference proceedings, etc.

  • Family Physicians Inquiries Network: Clinical Inquiries Provide the ideal answers to clinical questions using a structured search, critical appraisal, authoritative recommendations, clinical perspective, and rigorous peer review. Clinical Inquiries deliver best evidence for point-of-care use.
  • Harrison, T. R., & Fauci, A. S. (2009). Harrison's Manual of Medicine . New York: McGraw-Hill Professional. Contains the clinical portions of Harrison's Principles of Internal Medicine .
  • Lippincott manual of nursing practice (8th ed.). (2006). Philadelphia, PA: Lippincott Williams & Wilkins. Provides background information on clinical nursing practice.
  • Medscape: Drugs & Diseases An open-access, point-of-care medical reference that includes clinical information from top physicians and pharmacists in the United States and worldwide.
  • Virginia Henderson Global Nursing e-Repository An open-access repository that contains works by nurses and is sponsored by Sigma Theta Tau International, the Honor Society of Nursing. Note: This resource contains both expert opinion and evidence-based practice articles.


Nursing-Johns Hopkins Evidence-Based Practice Model

JHEBP Model for Levels of Evidence: Overview


Evidence-Based Practice (EBP) uses a rating system to appraise evidence (usually a research study published as a journal article). The level of evidence corresponds to the research study design. Scientific research is considered to be the strongest form of evidence and recommendations from the strongest form of evidence will most likely lead to the best practices. The strength of evidence can vary from study to study based on the methods used and the quality of reporting by the researchers. You will want to seek the highest level of evidence available on your topic (Dang et al., 2022, p. 130).

The Johns Hopkins EBP model uses three ratings for the level of scientific research evidence:

  • true experimental (level I)
  • quasi-experimental (level II)
  • nonexperimental (level III) 

The level determination is based on the research meeting the study design requirements  (Dang et al., 2022, p. 146-7).

You will use the Research Evidence Appraisal Tool (Appendix E) along with the Evidence Level and Quality Guide (Appendix D) to analyze and appraise research studies. (Tools linked below.)

Nonresearch evidence is covered in Levels IV and V.

  • Evidence Level and Quality Guide (Appendix D)
  • Research Evidence Appraisal Tool (Appendix E)

Level I: Experimental study

  • Randomized controlled trial (RCT)
  • Systematic review of RCTs, with or without meta-analysis

Level II: Quasi-experimental study

  • Systematic review of a combination of RCTs and quasi-experimental studies, or quasi-experimental studies only, with or without meta-analysis

Level III: Non-experimental study

  • Systematic review of a combination of RCTs, quasi-experimental and non-experimental studies, or non-experimental studies only, with or without meta-analysis
  • Qualitative study or systematic review, with or without meta-analysis

Level IV: Opinion of respected authorities and/or nationally recognized expert committees/consensus panels based on scientific evidence

  • Clinical practice guidelines
  • Consensus panels

Level V: Based on experiential and non-research evidence

  • Literature reviews
  • Quality improvement, program, or financial evaluation
  • Case reports
  • Opinion of nationally recognized expert(s) based on experiential evidence

These flow charts can also help you determine the level of evidence through a series of questions.

[Flow chart: deciding the level of evidence for a single quantitative research study using the JHEBP model]

[Flow chart: determining the level of evidence for summaries/reviews using the JHEBP model]

These charts are a part of the Research Evidence Appraisal Tool (Appendix E) document.

Dang, D., Dearholt, S., Bissett, K., Ascenzi, J., & Whalen, M. (2022). Johns Hopkins evidence-based practice for nurses and healthcare professionals: Model and guidelines (4th ed.). Sigma Theta Tau International.


Evidence-Based Practice Toolkit (Winona State University, Darrell W. Krueger Library)


Levels of Evidence / Evidence Hierarchy


Levels of evidence (sometimes called hierarchy of evidence) are assigned to studies based on the research design, quality of the study, and applicability to patient care. Higher levels of evidence have less risk of bias.

Levels of Evidence (Melnyk & Fineout-Overholt 2023)

Adapted from Melnyk, B. M., & Fineout-Overholt, E. (2023). Evidence-based practice in nursing & healthcare: A guide to best practice (5th ed.). Wolters Kluwer.

Levels of Evidence (LoBiondo-Wood & Haber 2022)

Adapted from LoBiondo-Wood, G. & Haber, J. (2022). Nursing research: Methods and critical appraisal for evidence-based practice (10th ed.). Elsevier.

Evidence Pyramid

" Evidence Pyramid " is a product of Tufts University and is licensed under BY-NC-SA license 4.0

Tufts' "Evidence Pyramid" is based in part on the  Oxford Centre for Evidence-Based Medicine: Levels of Evidence (2009)


  • Oxford Centre for Evidence Based Medicine Glossary

Different types of clinical questions are best answered by different types of research studies.  You might not always find the highest level of evidence (i.e., systematic review or meta-analysis) to answer your question. When this happens, work your way down to the next highest level of evidence.

This table suggests study designs best suited to answer each type of clinical question.


Nursing - Systematic Reviews: Levels of Evidence


"How would I use the 6S Model while taking care of a patient?" .cls-1{fill:#fff;stroke:#79a13f;stroke-miterlimit:10;stroke-width:5px;}.cls-2{fill:#79a13f;} The 6S Model is designed to work from the top down, starting with Systems - also referred to as computerized decision support systems (CDSSs). DiCenso et al. describes that, “an evidence-based clinical information system integrates and concisely summarizes all relevant and important research evidence about a clinical problem, is updated as new research evidence becomes available, and automatically links (through an electronic medical record) a specific patient’s circumstances to the relevant information” (2009). Systematic reviews lead up to this type of bio-available level of evidence.

Polit–Beck Evidence Hierarchy / Levels of Evidence Scale for Therapy Questions

"Figure 2.2 [in context of book] shows our eight-level evidence hierarchy for Therapy/intervention questions. This hierarchy ranks sources of evidence with respect the readiness of an intervention to be put to use in practice" (Polit & Beck, 2021, p. 28). Levels are ranked on risk of bias - level one being the least bias, level eight being the most biased. There are several types of levels of evidence scales designed for answering different questions. "An evidence hierarchy for Prognosis questions, for example, is different from the hierarchy for Therapy questions" (p. 29).

Advantages of Levels of Evidence Scales

"Through controls imposed by manipulation, comparison, and randomization, alternative explanations can be discredited. It is because of this strength that meta-analyses of RCTs, which integrate evidence from multiple experiments, are at the pinnacle of the evidence hierarchies for Therapy questions" (p. 188).

"Tip: Traditional evidence hierarchies or level of evidence scales (e.g., Figure 2.2), rank evidence sources almost exclusively based on the risk of internal validity threats" (p. 217).

Systematic reviews can provide researchers with knowledge of what prior evidence shows. This can help clarify the established efficacy of a treatment without unnecessary, and thus unethical, further research. Greenhalgh (2019) illustrates this by citing Dean Fergusson and colleagues' (2005) systematic review on a clinical surgical topic (p. 128).

Limits of Levels of Evidence Scales

Regarding the importance of real-world clinical practice settings, and the conflicting tradeoffs between internal and external validity, Polit and Beck (2021) write, "the first (and most prevalent) approach is to emphasize one and sacrifice another. Most often, it is external validity that is sacrificed. For example, external validity is not even considered in ranking evidence in level of evidence scales" (p. 221). ... From an EBP perspective, it is important to remember that drawing inferences about causal relationships relies not only on how high up on the evidence hierarchy a study is (Figure 2.2), but also, for any given level of the hierarchy, how successful the researcher was in managing study validity and balancing competing validity demands" (p. 222).

Polit and Beck, citing Levin (2014), note that an evidence hierarchy "is not meant to provide a quality rating for evidence retrieved in the search for an answer" (p. 6); the Oxford Centre for Evidence-Based Medicine concurs that evidence scales are "NOT intended to provide you with a definitive judgment about the quality of the evidence. There will inevitably be cases where 'lower-level' evidence...will provide stronger evidence than a 'higher level' study" (Howick et al., 2011, p. 2) (Polit & Beck, 2021, p. 30).

Level of evidence (e.g., Figure 2.2) + Quality of evidence = Strength of evidence.

The 6S Model of Levels of Evidence

"The 6S hierarchy does not imply a gradient of evidence in terms of quality , but rather in terms of ease in retrieving relevant evidence to address a clinical question. At all levels, the evidence should be assessed for quality and relevance" (Polit & Beck, 2021, p. 24, Tip box).

The 6S Pyramid proposes a structure of quantitative evidence where articles that include pre-appraised and pre-synthesized studies are located at the top of the hierarchy (McMaster U., n.d.).

It can help to consider the level of evidence that a document represents; for example, a scientific article that summarizes and analyzes many similar articles may provide more insight than the conclusion of a single research article. This is not to say that summaries cannot be flawed, nor does it suggest that rare case studies should be ignored. The aim of health research is the well-being of all people; therefore, it is important to use current evidence in light of patient preferences negotiated with clinical expertise.

Other Gradings in Levels of Evidence

While it is accepted that the strongest evidence is derived from meta-analyses, various evidence grading systems exist. For example, the Johns Hopkins Nursing Evidence-Based Practice model ranks evidence from Level I to Level V, as follows (Seben et al., 2010):

  • Level I: Meta-analysis of randomized clinical trials (RCTs); experimental studies; RCTs
  • Level II: Quasi-experimental studies
  • Level III: Non-experimental or qualitative studies
  • Level IV: Opinions of nationally recognized experts based on research evidence or an expert consensus panel
  • Level V: Opinions of individual experts based on non-research evidence (e.g., case studies, literature reviews, organizational experience, and personal experience)

The American Association of Critical-Care Nurses (AACN) evidence level system, updated in 2009, ranks evidence as follows (Armola et al., 2009):

  • Level A: Meta-analysis of multiple controlled studies or meta-synthesis of qualitative studies with results that consistently support a specific action, intervention, or treatment
  • Level B: Well-designed, controlled randomized or non-randomized studies with results that consistently support a specific action, intervention, or treatment
  • Level C: Qualitative, descriptive, or correlational studies, integrative or systematic reviews, or RCTs with inconsistent results
  • Level D: Peer-reviewed professional organizational standards, with clinical studies to support recommendations
  • Level E: Theory-based evidence from expert opinion or multiple case reports
  • Level M: Manufacturers' recommendations

(Schub et al., 2017)

EBM Pyramid and EBM Page Generator

Unfiltered are resources that are primary sources describing original research. Randomized controlled trials, cohort studies, case-controlled studies, and case series/reports are considered unfiltered information.

Filtered are resources that are secondary sources which summarize and analyze the available evidence. They evaluate the quality of individual studies and often provide recommendations for practice. Systematic reviews, critically-appraised topics, and critically-appraised individual articles are considered filtered information.

Armola, R. R., Bourgault, A. M., Halm, M. A., Board, R. M., Bucher, L., Harrington, L., ... Medina, J. (2009). AACN levels of evidence. What's new? Critical Care Nurse, 29(4), 70-73. doi:10.4037/ccn2009969

DiCenso, A., Bayley, L., & Haynes, R. B. (2009). Accessing pre-appraised evidence: Fine-tuning the 5S model into a 6S model. BMJ Evidence-Based Nursing, 12(4). https://ebn.bmj.com/content/12/4/99.2.short

Fergusson, D., Glass, K. C., Hutton, B., & Shapiro, S. (2005). Randomized controlled trials of Aprotinin in cardiac surgery: Could clinical equipoise have stopped the bleeding? Clinical Trials, 2(3), 218-232.

Glover, J., Izzo, D., Odato, K., & Wang, L. (2008). Evidence-based mental health resources. EBM Pyramid and EBM Page Generator. Copyright 2008. All Rights Reserved. Retrieved April 28, 2020, from https://web.archive.org/web/20200219181415/http://www.dartmouth.edu/~biomed/resources.htmld/guides/ebm_psych_resources.html (Note: document removed from host; the original webpage was retrieved on 2/10/21 via the WayBack Machine of the Internet Archive from http://www.dartmouth.edu/~biomed/resources.htmld/guides/ebm_psych_resources.html)

Greenhalgh, T. (2019). How to read a paper: The basics of evidence-based medicine and healthcare (6th ed.). Wiley Blackwell.

Haynes, R. B. (2001). Of studies, syntheses, synopses, and systems: The "4S" evolution of services for finding current best evidence. BMJ Evidence-Based Medicine, 6(2), 36-38.

Haynes, R. B. (2006). Of studies, syntheses, synopses, summaries, and systems: The "5S" evolution of information services for evidence-based healthcare decisions. BMJ Evidence-Based Medicine, 11(6), 162-164.

McMaster University. (n.d.). 6S Search Pyramid Tool. https://www.nccmt.ca/capacity-development/6s-search-pyramid

Polit, D., & Beck, C. (2019). Nursing research: Generating and assessing evidence for nursing practice. Wolters Kluwer Health.

Schub, E., Walsh, K., & Pravikoff, D. (Ed.). (2017). Evidence-based nursing practice: Implementing [Skill set]. Nursing Reference Center Plus.

Seben, S., March, K. S., & Pugh, L. C. (2010). Evidence-based practice: The forum approach. American Nurse Today, 5(11), 32-34.

  • Systematic Review from the Encyclopedia of Nursing Research by Cheryl Holly Systematic reviews provide reliable evidential summaries of past research for the busy practitioner. By pooling results from multiple studies, findings are based on multiple populations, conditions, and circumstances. The pooled results of many small and large studies have more precise, powerful, and convincing conclusions (Holly, Salmond, & Saimbert, 2016) [ references in article ]. This scholarly synthesis of research findings and other evidence forms the foundation for evidence-based practice allowing the practitioner to make up-to-date decisions.

Standards & Guides

  • Cochrane Handbook for Systematic Reviews of Interventions The Cochrane Handbook for Systematic Reviews of Interventions is the official guide that describes in detail the process of preparing and maintaining Cochrane systematic reviews on the effects of healthcare interventions.
  • Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) PRISMA is an evidence-based minimum set of items for reporting in systematic reviews and meta-analyses. PRISMA focuses on the reporting of reviews evaluating randomized trials, but can also be used as a basis for reporting systematic reviews of other types of research, particularly evaluations of interventions.
  • Systematic Reviews by The Centre for Reviews and Dissemination "The guidance has been written for those with an understanding of health research but who are new to systematic reviews; those with some experience but who want to learn more; and for commissioners. We hope that experienced systematic reviewers will also find this guidance of value; for example when planning a review in an area that is unfamiliar or with an expanded scope. This guidance might also be useful to those who need to evaluate the quality of systematic reviews, including, for example, anyone with responsibility for implementing systematic review findings" (CRD, 2009, p. vi, "Who should use this guide")

  • Carrying out systematic literature reviews: An introduction by Alan Davies Systematic reviews provide a synthesis of evidence for a specific topic of interest, summarising the results of multiple studies to aid in clinical decisions and resource allocation. They remain among the best forms of evidence, and reduce the bias inherent in other methods. A solid understanding of the systematic review process can be of benefit to nurses that carry out such reviews, and for those who make decisions based on them. An overview of the main steps involved in carrying out a systematic review is presented, including some of the common tools and frameworks utilised in this area. This should provide a good starting point for those that are considering embarking on such work, and to aid readers of such reviews in their understanding of the main review components, in order to appraise the quality of a review that may be used to inform subsequent clinical decision making (Davies, 2019, Abstract)
  • Papers that summarize other papers (systematic reviews and meta-analyses) by Trisha Greenhalgh ... a systematic review is an overview of primary studies that: contains a statement of objectives, sources and methods; has been conducted in a way that is explicit, transparent and reproducible (Figure 9.1) [ Table found in book chapter ]. The most enduring and reliable systematic reviews, notably those undertaken by the Cochrane Collaboration (discussed later in this chapter), are regularly updated to incorporate new evidence (Greenhalgh, 2020, p. 117, Chapter 9).
  • A PRISMA assessment of the reporting quality of systematic reviews of nursing published in the Cochrane Library and paper-based journals by Juxia Zhang et al. The Preferred Reporting Items for Systematic Reviews and Meta-Analysis (PRISMA) was released as a standard for reporting systematic reviews (SRs). However, not all SRs adhere completely to this standard. This study aimed to evaluate the reporting quality of SRs published in the Cochrane Library and paper-based journals (Zhang et al., 2019, Abstract).

Cochrane [Username]. (2016, Jan 27). What are systematic reviews? YouTube. https://www.youtube.com/watch?v=egJlW4vkb1Y

Davies, A. (2019). Carrying out systematic literature reviews: An introduction. British Journal of Nursing, 28(15), 1008–1014. https://doi-org.ezproxy.simmons.edu/10.12968/bjon.2019.28.15.1008

Greenhalgh, T. (2019). Papers that summarize other papers (systematic reviews and meta-analyses). In How to read a paper: The basics of evidence-based medicine and healthcare (6th ed., pp. 117-136). Wiley Blackwell.

Holly, C. (2017). Systematic review. In J. Fitzpatrick (Ed.), Encyclopedia of nursing research (4th ed.). Springer Publishing Company. Credo Reference.

Zhang, J., Han, L., Shields, L., Tian, J., & Wang, J. (2019). A PRISMA assessment of the reporting quality of systematic reviews of nursing published in the Cochrane Library and paper-based journals. Medicine, 98(49), e18099. https://doi.org/10.1097/MD.0000000000018099


MSU Libraries


Nursing Literature and Other Types of Reviews


Levels of Evidence


Levels of evidence (sometimes called hierarchy of evidence) are assigned to studies based on the methodological quality of their design, validity, and applicability to patient care. These decisions give the grade (or strength) of recommendation. Just because something is lower on the pyramid doesn't mean that the study itself is lower quality; it just means that the methods used may not be as clinically rigorous as those at higher levels of the pyramid. In nursing, the system for assigning levels of evidence is often from Melnyk & Fineout-Overholt's 2011 book, Evidence-based Practice in Nursing and Healthcare: A Guide to Best Practice. The Levels of Evidence below are adapted from Melnyk & Fineout-Overholt's (2011) model.


Melnyk & Fineout-Overholt (2011)

  • Meta-Analysis: A systematic review that uses quantitative methods to summarize the results. (Level 1)
  • Systematic Review: A comprehensive review in which the authors have systematically searched for, appraised, and summarized all of the medical literature for a specific topic. (Level 1)
  • Randomized Controlled Trials: RCTs randomly assign patients to an experimental group or a control group. These groups are followed up for the variables/outcomes of interest. Examples of RCTs are clinical trials that compare the effects of drugs, surgical techniques, medical devices, diagnostic procedures, diets or other medical treatments. (Can be Level 2 or Level 4, depending on how expansive the study is.)
  • Non-Randomized Controlled Trials: A clinical trial in which the participants are not assigned by chance to different treatment groups. Participants may choose which group they want to be in, or they may be assigned to the groups by the researchers.
  • Cohort Study: Identifies two groups (cohorts) of patients, one which did receive the exposure of interest and one which did not, and follows these cohorts forward for the outcome of interest. (Level 5)
  • Case-Control Study: Involves identifying patients who have the outcome of interest (cases) and control patients without the same outcome, and looking to see if they had the exposure of interest.
  • Background Information/Expert Opinion: Handbooks, encyclopedias, and textbooks often provide a good foundation or introduction and often include generalized information about a condition. While background information presents a convenient summary, it often takes about three years for this type of literature to be published. (Level 7)


Explanation

"Determining what constitutes the best evidence requires an ability to identify, critique and categorize literature, placing it into a so-called hierarchy of evidence or, rank-order, with randomized controlled trials (RCT's) and meta-analyses of RCT's at the top and uncontrolled studies or opinion at the bottom. This is a necessary first step as the ability to infer a recommendation or establish a grade of recommendation for a treatment or intervention is directly related to the quality of evidence that is available for review. These steps then provide the basis for the development of clinical practice guidelines, to not replace clinical decision making but augment it. There have been a number of systems developed to try to categorize studies into their respective levels of evidence."

Source: Petrisor, B. A., Keating, J., & Schemitsch, E. (2006). Grading the evidence: Levels of evidence and grades of recommendation. Injury, 37(4), 321-327. ISSN 0020-1383. https://doi.org/10.1016/j.injury.2006.02.001

[Evidence hierarchy table. Source: Merlin, T., Weston, A., & Tooher, R. (2009). Extending an evidence hierarchy to include topics other than treatment: Revising the Australian 'levels of evidence'. BMC Medical Research Methodology, 9, 34. doi:10.1186/1471-2288-9-34]

Multimedia Resources

A general overview of the concept of levels of evidence and how it is applied in the medical field. (Source: https://youtu.be/OaOzXEWIXY4)

Provides a more in-depth look at the different levels of evidence as reported in the Johns Hopkins hierarchy. (Source: https://youtu.be/u_-lxyFtlN8)

Relevant Articles

  • An Analysis of References Used for the Orthopaedic In-Training Examination: What are Their Levels of Evidence and Journal Impact Factors?
  • Extending An Evidence Hierarchy To Include Topics Other Than Treatment: Revising The Australian 'Levels Of Evidence'
  • Grading The Evidence: Levels Of Evidence And Grades Of Recommendation
  • Levels Of Evidence: A Comparison Between Top Medical Journals And General Pediatric Journals
  • Levels of Evidence in the Clinical Sports Medicine Literature: Are We Getting Better Over Time?
  • Levels Of Evidence Ratings In The Urological Literature: An Assessment Of Interobserver Agreement
  • A Nurses’ Guide To The Hierarchy Of Research Designs And Evidence
  • What Are The Levels Of Evidence On Which We Base Decisions For Surgical Management Of Lower Extremity Bone Tumors?

Relevant Websites

  • Determining The Level Of Evidence: Experimental Research Appraisal - Nursing2020
  • Evidence-Based Nursing Research Guide: Evidence Levels & Types - DePaul University Library
  • Evidence-Based Practice - Levels of Evidence - Nurse.com
  • Grading Levels Of Evidence - Clinical Information Access Portal
  • The Levels Of Evidence And Their Role In Evidence-Based Medicine - National Library Of Medicine
  • Levels Of Evidence In Research - Elsevier
  • Levels of Evidence in Research: Examples, Hierarchies & Practice - Research.Com
  • Nursing - Evidence-Based Practice: Levels of Evidence - Simmons University Library
  • Open access
  • Published: 26 March 2024

Barriers and enablers to the implementation of patient-reported outcome and experience measures (PROMs/PREMs): protocol for an umbrella review

  • Guillaume Fontaine (ORCID: orcid.org/0000-0002-7806-814X)
  • Marie-Eve Poitras
  • Maxime Sasseville
  • Marie-Pascale Pomey
  • Jérôme Ouellet
  • Lydia Ould Brahim
  • Sydney Wasserman
  • Frédéric Bergeron
  • Sylvie D. Lambert

Systematic Reviews, volume 13, Article number: 96 (2024)


Patient-reported outcome and experience measures (PROMs and PREMs, respectively) are evidence-based, standardized questionnaires that can be used to capture patients’ perspectives of their health and health care. While substantial investments have been made in the implementation of PROMs and PREMs, their use remains fragmented and limited in many settings. Analysis of multi-level barriers and enablers to the implementation of PROMs and PREMs has been hampered by the lack of use of state-of-the-art implementation science frameworks. This umbrella review aims to consolidate available evidence from existing quantitative, qualitative, and mixed-methods systematic and scoping reviews covering factors that influence the implementation of PROMs and PREMs in healthcare settings.

An umbrella review of systematic and scoping reviews will be conducted following the guidelines of the Joanna Briggs Institute (JBI). Qualitative, quantitative, and mixed methods reviews of studies focusing on the implementation of PROMs and/or PREMs in all healthcare settings will be considered for inclusion. Eight bibliographical databases will be searched. All review steps will be conducted by two reviewers independently. Included reviews will be appraised and data will be extracted in four steps: (1) assessing the methodological quality of reviews using the JBI Critical Appraisal Checklist; (2) extracting data from included reviews; (3) theory-based coding of barriers and enablers using the Consolidated Framework for Implementation Research (CFIR) 2.0; and (4) identifying the barriers and enablers best supported by reviews using the Grading of Recommendations Assessment, Development and Evaluation-Confidence in the Evidence from Reviews of Qualitative research (GRADE-CERQual) approach. Findings will be presented in diagrammatic and tabular forms in a manner that aligns with the objective and scope of this umbrella review, along with a narrative summary.

This umbrella review of quantitative, qualitative, and mixed-methods systematic and scoping reviews will inform policymakers, researchers, managers, and clinicians regarding which factors hamper or enable the adoption and sustained use of PROMs and PREMs in healthcare settings, and the level of confidence in the evidence supporting these factors. Findings will orient the selection and adaptation of implementation strategies tailored to the factors identified.

Systematic review registration

PROSPERO CRD42023421845.

Capturing patients’ perspectives of their health and healthcare needs using standardized patient-reported outcome and experience measures (referred to herein as PROMs and PREMs, respectively) has been the focus of over 40 years of research [ 1 , 2 ]. PROMs/PREMs are standardized, validated questionnaires (generic or disease-specific); PROMs are completed by patients about their health, functioning, and quality of life, whereas PREMs are focused on patients’ experiences whilst receiving care [ 1 ]. PROMs/PREMs are associated with a robust evidence-base across multiple illnesses; they can increase charting of patients’ needs [ 3 ], and improve patient-clinician communication [ 3 , 4 , 5 ], which in turn can lead to improved symptom management [ 4 , 5 , 6 ], thereby improving patients’ quality of life, reducing health care utilization [ 5 ], and increasing survival rates [ 7 ].

Multipurpose applications of PROMs/PREMs have led to substantial investments in their implementation. In the USA, PROMs are part of payer mandates; in the United Kingdom, they are used for benchmarking and included in a national registry; and Denmark has embedded them across healthcare sectors [ 8 , 9 , 10 , 11 ]. In Canada, the Canadian Institute for Health Information (CIHI) has advocated for a standardized core set of PROMs [ 12 ], and the Canadian Partnership Against Cancer (CPAC) recently spearheaded PROM implementation in oncology in 10 provinces/territories. In 2017, the Organisation for Economic Co-operation and Development (OECD) launched the Patient-Reported Indicators Surveys (PaRIS) to build international capacity for PROMs/PREMs in primary care [ 13 ]. Yet, in many countries across the globe, their use remains fragmented, characterized by broad swaths of pre-implementation, pilots, and full implementation in narrow domains [ 12 , 14 , 15 ]. PROM/PREM implementation remains driven by silos of local healthcare networks [ 16 ].

Barriers and enablers to the implementation of PROMs/PREMs exist at the patient level (e.g., low health literacy), [ 17 ] clinician level (e.g., obtaining PROM/PREM results from external digital platforms) [ 17 , 18 , 19 ], service level (e.g., lack of integration in clinics’ workflow) [ 17 , 20 ] and organizational/system-level (e.g., organizational policies conflicting with PROM implementation goals) [ 21 ]. Foster and colleagues [ 22 ] conducted an umbrella review on the barriers and facilitators to implementing PROMs in healthcare settings. The umbrella review identified a number of bidirectional factors arising at different stages that can impact the implementation of PROMs; these factors were related to the implementation process, the organization, and healthcare providers [ 22 ]. However, the umbrella review focused solely on PROMs, excluding PREMs, and the theory-based analysis of implementation factors was limited. Another ongoing umbrella review is restricted to investigating barriers and enablers at the healthcare provider level, omitting the multilevel changes required for successful PROM/PREM implementation [ 23 ].

State-of-the-art approaches from implementation science can support the identification of multilevel factors influencing the implementation of PROMs and PREMs in different healthcare settings [ 24 , 25 , 26 ]. The second version of the Consolidated Framework for Implementation Research (CFIR 2.0) can guide the exploration of determinants influencing the implementation of PROMs and PREMs [ 27 ]. The CFIR is a meta-theoretical framework providing a repository of standardized implementation-related constructs at the individual, organizational, and external levels that can be applied across the spectrum of implementation research [ 27 ]. CFIR 2.0 includes five domains pertaining to the characteristics of the innovation targeted for implementation, the implementation process, the individuals involved in the implementation, the inner setting, and the outer setting [ 27 ]. Using an implementation framework to identify the multilevel factors influencing the implementation of PROMs/PREMs is critical to select and tailor implementation strategies to address barriers [ 28 , 29 , 30 , 31 ]. Implementation strategies are the “how”, the specific means or methods for promoting the adoption of evidence-based innovations (e.g., role revisions, audit and feedback) [ 32 ]. Selecting and adapting implementation strategies to facilitate the implementation of PROMs/PREMs can be time-consuming, as there are at least 73 discrete implementation strategies to choose from [ 33 ]. Thus, a detailed understanding of the barriers to PROM/PREM implementation can inform and streamline the selection and adaptation of implementation strategies, saving financial, human, and material resources [ 24 , 25 , 26 , 32 , 34 ].

Review objective and questions

In this umbrella review, we aim to consolidate available evidence from existing quantitative, qualitative, and mixed-methods systematic and scoping reviews covering factors that influence the implementation of PROMs and PREMs in healthcare settings.

We will address the following questions:

What are the factors that hinder or enable the implementation of PROMs and PREMs in healthcare settings, and what is the level of confidence in the evidence supporting these factors?

What are the similarities and differences in barriers and enablers across settings and geographical regions?

What are the similarities and differences in the perceptions of barriers and enablers between patients, clinicians, managers, and decision-makers?

What are the implementation theories, models, and frameworks that have been used to guide research in this field?

Review design and registration

An umbrella review of systematic and scoping reviews will be conducted following the guidelines of the Joanna Briggs Institute (JBI) [ 35 , 36 ]. The umbrella review is a form of evidence synthesis that aims to address the challenge of collating, assessing, and synthesizing evidence from multiple reviews on a specific topic [ 35 ]. This protocol was registered on PROSPERO (CRD42023421845) and is presented according to the Preferred Reporting Items for Systematic Review and Meta-Analysis Protocols (PRISMA-P) guidelines (see Supplementary material  1 ) [ 37 ]. We will use the Preferred Reporting Items for Overviews of Reviews (PRIOR) guidelines [ 38 ] and the PRISMA guidelines [ 39 ] to report results (e.g., flowchart, search process).

Eligibility criteria

The eligibility criteria were developed following discussions among the project team including researchers with experience in the implementation of PROMs and PREMs in different fields (e.g., cancer care, primary care) and implementation science. These criteria were refined after being piloted on a set of studies. The final eligibility criteria for the review are detailed in Table  1 . We will consider for inclusion all qualitative, quantitative, and mixed methods reviews of studies focusing on the implementation of PROMs or PREMs in any healthcare setting.

Information sources

Searches will be conducted in eight databases: CINAHL, via EBSCOhost (1980 to present); Cochrane Database of Systematic Reviews; Evidence-Based Medicine Reviews; EMBASE, via Ovid SP (1947 to present); ERIC, via Ovid SP (1966 to present); PsycINFO, via APA PsycNet (1967 to present); PubMed (including MEDLINE), via NCBI (1946 to present); Web of Science, via Clarivate Analytics (1900 to present). CINAHL is a leading database for nursing and allied health literature. The Cochrane Database of Systematic Reviews and Evidence-Based Medicine Reviews are essential for accessing high-quality systematic reviews and meta-analyses. EMBASE is a biomedical and pharmacological database offering extensive coverage of drug research, pharmacology, and medical devices, complementing PubMed. ERIC provides valuable insights from educational research that are relevant to our study given the intersection of healthcare and education in PROMs and PREMs. PsycINFO is crucial for accessing research on the psychological aspects of PROMs and PREMs. PubMed, encompassing MEDLINE, is a primary resource for biomedical literature. Web of Science offers a broad and diverse range of scientific literature providing interdisciplinary coverage. We will use additional strategies to complement our exploration including examining references cited in eligible articles, searching for authors who have published extensively in the field, and conducting backward/forward citation searches of related systematic reviews and influential articles.

Search strategy

A comprehensive search strategy was developed iteratively by the review team in collaboration with an experienced librarian with a Master of Science in Information (FB). First, an initial limited search of MEDLINE and CINAHL will be undertaken to identify reviews on PROM/PREM implementation. The text words contained in the titles and abstracts, and the index terms used to describe these reviews, will be analyzed and applied to a modified search strategy (as needed). We adapted elements from the search strategies of two recent reviews in the field of PROM/PREM implementation [ 22 , 23 ] to fit our objectives. The search strategy for PubMed is presented in Supplementary material 2. The search strategy will be tailored for each information source. The complete search strategy for each database will be made available for transparency and reproducibility in the final manuscript.
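
Keeping each database-specific adaptation in a single structure makes the appendix easy to regenerate as the strategy evolves. A minimal Python sketch is shown below; the query fragments and file name are illustrative placeholders, not the actual strategy from Supplementary material 2.

```python
# Illustrative sketch only: the query fragments below are simplified placeholders.
# PubMed field tags ([tiab], [sb]) and the CINAHL MH/TI syntax are standard, but the
# terms are not the protocol's strategy.
SEARCH_STRATEGIES = {
    "PubMed (NCBI, 1946-present)":
        '("patient reported outcome*"[tiab] OR "patient reported experience*"[tiab]) '
        'AND (implement*[tiab] OR adopt*[tiab] OR barrier*[tiab]) AND systematic[sb]',
    "CINAHL (EBSCOhost, 1980-present)":
        '(MH "Patient Reported Outcome Measures") AND TI (implement* OR barrier* OR facilitator*)',
}

def export_strategies(strategies: dict[str, str], path: str) -> None:
    """Write each database-specific strategy to a plain-text appendix file."""
    with open(path, "w", encoding="utf-8") as fh:
        for database, query in strategies.items():
            fh.write(f"{database}\n{query}\n\n")

export_strategies(SEARCH_STRATEGIES, "search_strategies.txt")
```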

Selection process

All identified citations will be collated and uploaded into the Covidence systematic review software (Veritas Health Innovation, Melbourne, Australia), and duplicates removed. Following training on 50 titles, titles will be screened by two independent reviewers for assessment against the inclusion criteria for the review. Multiple rounds of calibration might be needed. Once titles have been screened, retained abstracts will be reviewed, preferably by the same two reviewers. However, inter-rater reliability will be re-established on 50 abstracts to re-calibrate (as needed). Lastly, the full texts of retained abstracts will be located and assessed in detail against the inclusion criteria by two independent reviewers. Reasons for excluding articles from full-text review onwards will be recorded in the PRIOR flow diagram (PRISMA-like flowchart) [ 38 ]. Any disagreements that arise between the reviewers at each stage of the selection process will be resolved through discussion, or with an additional reviewer. More specifically, throughout the project, weekly team meetings will be held and will provide the opportunity for the team to discuss and resolve any disagreement that arises during the different stages, from study selection to data extraction.
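
Calibration of this kind is often summarized with a chance-corrected agreement statistic. The sketch below is an assumed workflow, not something specified in the protocol: it computes Cohen's kappa on a hypothetical batch of screening decisions exported from Covidence.

```python
def cohens_kappa(rater_a: list[str], rater_b: list[str]) -> float:
    """Cohen's kappa for two raters screening the same items ('include'/'exclude')."""
    assert len(rater_a) == len(rater_b) and rater_a, "need paired decisions"
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    categories = set(rater_a) | set(rater_b)
    expected = sum((rater_a.count(c) / n) * (rater_b.count(c) / n) for c in categories)
    return 1.0 if expected == 1 else (observed - expected) / (1 - expected)

# Hypothetical calibration batch of 10 titles
reviewer_1 = ["include", "exclude", "exclude", "include", "exclude",
              "include", "exclude", "exclude", "include", "exclude"]
reviewer_2 = ["include", "exclude", "include", "include", "exclude",
              "include", "exclude", "exclude", "exclude", "exclude"]
print(f"Cohen's kappa: {cohens_kappa(reviewer_1, reviewer_2):.2f}")
```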

Quality appraisal and data extraction

As presented in Fig.  1 , included reviews will be appraised and data will be extracted and analyzed in four steps using validated tools and methodologies [ 27 , 36 , 40 ]. All four steps will be conducted by two reviewers independently, and a third will be involved in case of disagreement. More reviewers may be needed depending on the number of reviews included.

figure 1

Tools/methodology applied in each phase of the umbrella review. Figure adapted from Boudewijns and colleagues [ 41 ] with permission. CFIR 2.0 = Consolidated Framework for Implementation Research, version 2 [ 27 ]. GRADE–CERQual = Grading of Recommendations Assessment Development and Evaluation–Confidence in the Evidence from Reviews of Qualitative Research [ 42 ]. JBI = Joanna Briggs Institute [ 36 ]

Step 1—assessing the quality of included reviews

In the first step, two reviewers will independently assess the methodological quality of the reviews using the JBI Critical Appraisal Checklist for Systematic Reviews and Research Syntheses, presented in Supplementary material 3. We have selected this checklist for its comprehensiveness, its applicability to different types of knowledge syntheses, and its ease of use, requiring minimal training for reviewers to apply it. The checklist consists of 11 questions. It evaluates whether the review question is clearly and explicitly stated, whether the inclusion criteria were appropriate for that question, and whether the search strategy and sources were suitable and adequate for capturing relevant studies. It also assesses the appropriateness of the criteria used for appraising studies, as well as whether the critical appraisal was conducted independently by two or more reviewers. The checklist further examines whether there were methods in place to minimize errors during data extraction, whether the methods used to combine studies were appropriate, and whether the likelihood of publication bias was assessed. Additionally, it verifies whether the recommendations for policy and/or practice are supported by the reported data and whether the directives for new research are appropriate. Each question can be answered as “yes”, “no”, or “unclear”; “not applicable” (NA) is also provided as an option and may be appropriate in rare instances. The results of the quality appraisal will provide the basis for assessing confidence in the evidence in step four. Any disagreements that arise between the reviewers will be resolved through discussion, with a third reviewer, or at team meetings.
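
For bookkeeping, each review's 11 answers can be stored and summarized, for example as the share of applicable items answered "yes"; the sketch below uses hypothetical answers, and the JBI checklist itself does not prescribe a numeric score.

```python
from collections import Counter

# Hypothetical answers to the 11 JBI checklist questions for two included reviews
appraisal = {
    "Review A": ["yes"] * 9 + ["unclear", "yes"],
    "Review B": ["yes", "yes", "unclear", "yes", "no", "yes", "yes", "yes", "NA", "yes", "yes"],
}

for review, answers in appraisal.items():
    assert len(answers) == 11, "the JBI checklist for reviews has 11 questions"
    counts = Counter(answers)
    applicable = sum(v for answer, v in counts.items() if answer != "NA")
    share_yes = counts["yes"] / applicable if applicable else 0.0
    print(f"{review}: {dict(counts)} -> {share_yes:.0%} 'yes' of applicable items")
```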

Step 2—extracting data from included reviews

For the second step, we have developed a modified version of the JBI Data Extraction Form for Umbrella Reviews, presented in Supplementary material 3. We will pilot our data extraction form on two of the included reviews, and it will be revised for clarity, as needed. Subsequently, two independent reviewers will conduct all extraction for each review independently. We will collect the following data: (a) authors and date; (b) country; (c) review aims and objectives; (d) focus of the review; (e) context; (f) population; (g) eligibility criteria; (h) review type and methodology; (i) data sources; (j) dates of search; (k) number of included studies; (l) characteristics of included studies (including study type and critical appraisal score); (m) implementation framework guiding analysis; (n) implementation strategies discussed; (o) results and significance; and (p) conclusions. Barriers and enablers will be extracted separately in step 3. Any disagreements that arise between the reviewers will be resolved through discussion, with a third reviewer, or at team meetings.
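
One convenient way to keep the extraction form machine-readable is a single record type whose fields mirror items (a) to (p) above. The sketch below uses placeholder values; the actual form is the modified JBI template in Supplementary material 3.

```python
from dataclasses import dataclass

@dataclass
class ReviewExtraction:
    """One record per included review; fields mirror items (a)-(p) of the extraction form."""
    authors_and_date: str
    country: str
    aims_objectives: str
    focus: str
    context: str
    population: str
    eligibility_criteria: str
    review_type_methodology: str
    data_sources: str
    dates_of_search: str
    n_included_studies: int
    included_study_characteristics: str
    implementation_framework: str
    implementation_strategies: str
    results_significance: str
    conclusions: str

# Placeholder record (values are illustrative, not extracted data)
example = ReviewExtraction(
    authors_and_date="Example Review, 2022",
    country="Canada",
    aims_objectives="Identify barriers to PROM use in oncology",
    focus="PROMs",
    context="Outpatient cancer care",
    population="Clinicians and patients",
    eligibility_criteria="Qualitative and mixed-methods studies",
    review_type_methodology="Scoping review",
    data_sources="MEDLINE, CINAHL, EMBASE",
    dates_of_search="Inception to 2021",
    n_included_studies=24,
    included_study_characteristics="Mostly qualitative; moderate quality",
    implementation_framework="None reported",
    implementation_strategies="Training, workflow redesign",
    results_significance="Workflow integration was the dominant barrier",
    conclusions="Implementation requires multilevel support",
)
print(example.authors_and_date, "-", example.review_type_methodology)
```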

Step 3—theory-based coding of barriers and enablers

In the third step, we will use the second version of the Consolidated Framework for Implementation Research (CFIR) [ 27 ] to guide our proposed exploration of determinants influencing the implementation of PROMs and PREMs (see Fig.  2 ). The CFIR is a meta-theoretical framework providing a repository of standardized implementation-related constructs at the individual, organizational, and external levels that can be applied across the spectrum of implementation research. CFIR contains 48 constructs and 19 subconstructs representing determinants of implementation across five domains: Innovation (i.e., PROMs and PREMs), Outer Setting (e.g., national policy context), Inner Setting (e.g., work infrastructure), Individuals (e.g., healthcare professional motivation) and Implementation Process (e.g., assessing context) [ 27 ]. To ensure that coding remains grounded in the chosen theoretical framework, we have developed a codebook based on the second version of the CFIR, presented in Supplementary material 3 . Furthermore, an initial training session and regular touchpoints will be held to discuss coding procedures among the team members involved.
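
The codebook can be represented as a simple domain-to-construct mapping against which coded segments are validated and tallied. The sketch below uses a small illustrative subset of CFIR 2.0 construct names and hypothetical coded segments; the full codebook in Supplementary material 3 covers all constructs and subconstructs.

```python
from collections import defaultdict

# Illustrative subset of CFIR 2.0 domains and constructs (not the full codebook)
CFIR_CODEBOOK = {
    "Innovation": ["Innovation Complexity", "Innovation Adaptability"],
    "Outer Setting": ["Policies & Laws", "Financing"],
    "Inner Setting": ["Work Infrastructure", "Compatibility"],
    "Individuals": ["Capability", "Motivation"],
    "Implementation Process": ["Assessing Context", "Planning"],
}

# Hypothetical coded segments: (review, domain, construct, "barrier" or "enabler")
coded_segments = [
    ("Review A", "Inner Setting", "Compatibility", "barrier"),
    ("Review A", "Individuals", "Motivation", "enabler"),
    ("Review B", "Inner Setting", "Compatibility", "barrier"),
]

tally = defaultdict(set)
for review, domain, construct, direction in coded_segments:
    assert construct in CFIR_CODEBOOK[domain], "code must exist in the codebook"
    tally[(domain, construct, direction)].add(review)

for (domain, construct, direction), reviews in sorted(tally.items()):
    print(f"{domain} / {construct} [{direction}]: reported in {len(reviews)} review(s)")
```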

figure 2

The second version of the Consolidated Framework for Implementation Research and its five domains: innovation, outer setting, inner setting, individuals, and implementation process [ 27 , 43 ]

To code factors influencing the implementation of PROMs and PREMs using the CFIR, we will upload all PDFs of the included reviews and their appendices in the NVivo qualitative data analysis software (QSR International, Burlington, USA). All reviews will be independently coded by two reviewers. Any disagreements that arise between the reviewers will be resolved through discussion, or with a third reviewer.

Step 4—identifying the barriers and enablers best supported by the reviews

In the fourth and final step, we will use the Grading of Recommendations Assessment, Development, and Evaluation-Confidence in the Evidence from Reviews of Qualitative research (GRADE-CERQual) approach to assess the level of confidence in the barriers and enablers to PROM/PREM implementation identified in step 3 (see Supplementary material 3). This process will identify which barriers and enablers are best supported by the evidence in the included reviews. GRADE-CERQual includes four domains: (a) methodological limitations, (b) coherence, (c) adequacy of data, and (d) relevance (see Table 2). For each review finding, we will assign a score per domain from one point (substantial concerns) to four points (no concerns to very minor concerns). The score for the methodological limitations of the review will be assigned based on the JBI Critical Appraisal (step 1). The score for coherence will be assigned based on the presence of contradictory findings as well as ambiguous/incomplete data for that finding in the umbrella review. The score for adequacy of data will be assigned based on the richness of the data supporting the umbrella review finding. Finally, the score for relevance will be assigned based on how well the included reviews supporting a specific barrier or enabler to the implementation of PROMs/PREMs are applicable to the umbrella review context. This will allow us to identify which factors are supported by evidence with the highest level of confidence, and their corresponding level of evidence. A calibration exercise will be conducted on three systematic reviews with team members involved in this stage of the umbrella review, and adjustments to procedures will be discussed in team meetings.
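
Because each finding receives one score per CERQual domain, a simple data structure is enough to track the assessments and sort findings by how well supported they are. In the sketch below the findings, the scores, and the "weakest domain" summary rule are all assumptions for illustration; CERQual itself expresses overall confidence as a qualitative judgement (high, moderate, low, very low).

```python
CERQUAL_DOMAINS = ("methodological_limitations", "coherence", "adequacy", "relevance")

# Hypothetical findings with illustrative domain scores
# (1 = substantial concerns ... 4 = no or very minor concerns)
findings = {
    "Lack of integration in clinic workflow (barrier)":
        {"methodological_limitations": 3, "coherence": 4, "adequacy": 4, "relevance": 4},
    "Clinician buy-in (enabler)":
        {"methodological_limitations": 3, "coherence": 3, "adequacy": 2, "relevance": 4},
}

def weakest_domain(scores: dict[str, int]) -> int:
    """Conservative summary: overall confidence is limited by the lowest-scoring domain."""
    return min(scores[d] for d in CERQUAL_DOMAINS)

for finding, scores in sorted(findings.items(), key=lambda kv: -weakest_domain(kv[1])):
    print(f"{finding}: weakest domain score = {weakest_domain(scores)}")
```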

The data synthesis plan for the umbrella review has been meticulously designed to present extracted data in a format that is both informative and accessible, aiding in decision-making and providing a clear overview of the synthesized evidence.

Data extracted from the included systematic reviews will be organized into diagrams and tables, ensuring the presentation is closely aligned with our objectives and scope. These will categorize the distribution of reviews in several ways: by the year or period of publication, country of origin, target population, context, type of review, and various implementation factors. This stratification will allow for an at-a-glance understanding of the breadth and focus of the existing literature. To further assist in the application of the findings, a Summary of Qualitative Findings (SoQF) table will be constructed. This table will list each barrier and enabler identified within the systematic reviews and provide an overall confidence assessment for each finding. The confidence assessment will be based on the methodological soundness and relevance of the evidence supporting each identified barrier or enabler. Importantly, the SoQF table will include explanations for these assessments, making the basis for each judgement transparent [ 42 ]. Additionally, a CERQual Evidence Profile will be prepared, offering a detailed look at the reviewers’ judgements concerning each component of the CERQual approach. These components contribute to the overall confidence in the evidence for each identified barrier or enabler. The CERQual Evidence Profile will serve as a comprehensive record of the quality and applicability of the evidence [ 42 ].
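
A cross-tabulation like the one described above can be produced directly from the extraction records; the sketch below uses pandas and entirely hypothetical review characteristics.

```python
import pandas as pd

# Hypothetical characteristics of four included reviews
reviews = pd.DataFrame({
    "review": ["Review A", "Review B", "Review C", "Review D"],
    "period": ["2015-2019", "2020-2024", "2020-2024", "2020-2024"],
    "review_type": ["Systematic review", "Scoping review", "Systematic review", "Mixed-methods review"],
    "country": ["United Kingdom", "Canada", "Australia", "Netherlands"],
})

# Distribution of reviews by publication period and review type, with row/column totals
print(pd.crosstab(reviews["period"], reviews["review_type"], margins=True))
```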

Finally, we will conduct a narrative synthesis accompanying the tabular and diagrammatic presentations, summarizing the findings and discussing their implications concerning the review’s objectives and questions. This narrative will interpret the significance of the barriers and enablers identified, explaining how the synthesized evidence fits into the existing knowledge base and pointing out potential directions for future research or policy formulation.

This protocol outlines an umbrella review aiming to consolidate available evidence on the implementation of PROMs and PREMs in healthcare settings. Through our synthesis of quantitative, qualitative, and mixed-methods systematic and scoping reviews, we will answer two key questions: which factors hinder or enable the adoption and sustained use of PROMs and PREMs in healthcare settings, and what is the level of confidence in the evidence supporting these factors? Our findings will indicate which factors can influence the adoption of PROMs and PREMs, including clinician buy-in, patient engagement, and organizational support. Furthermore, our review will provide key insights regarding how barriers and enablers to PROM/PREM implementation differ across settings and how perceptions around their implementation differ between patients, clinicians, managers, and decision-makers. The consideration of different healthcare settings and the inclusion of studies from different geographical regions and healthcare systems will provide a global perspective, essential for understanding how context-specific factors might influence the generalizability of findings.

Strengths of this umbrella review include the use of a state-of-the-art implementation framework (CFIR 2.0) to identify, categorize, and synthesize multilevel factors influencing the implementation of PROMs/PREMs, and the use of the GRADE-CERQual approach to identify the level of confidence in the evidence supporting these factors. Using CFIR 2.0 will address a key limitation of current research in the field, since reviews and primary research are often focused on provider- and patient-level barriers and enablers, omitting organizational- and system-level factors affecting PROM/PREM implementation. This umbrella review will expose knowledge gaps to orient further research to improve our understanding of the complex factors at play in the adoption and sustained use of PROMs and PREMs in healthcare settings. Importantly, using CFIR 2.0 will allow the mapping of the identified barriers and enablers to relevant implementation strategy taxonomies, such as the Expert Recommendations for Implementing Change (ERIC) Taxonomy [ 34 ]. This is crucial for designing tailored implementation strategies, as it can ensure that the chosen approaches to support implementation are directly aligned with the specific barriers and enablers to the uptake of PROMs and PREMs.

Umbrella reviews also have limitations, including being restricted to evidence already captured in systematic reviews and other knowledge syntheses; additional primary studies are likely to have been published since the included reviews were completed. These more recent empirical studies will not be captured, but we will minimize this risk by updating the search at least once before the completion of the umbrella review. A second key challenge in umbrella reviews is overlap between primary studies, as many studies will have been included in several systematic reviews on the same topic. To address this issue, we will prepare a matrix of primary studies included in the systematic reviews to gain insight into double counting of primary studies.
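
The overlap matrix lends itself to a compact representation, and a summary statistic such as the corrected covered area (CCA) proposed by Pieper and colleagues can quantify double counting. Note that using CCA is an illustration rather than something prescribed by the protocol, and the studies below are hypothetical.

```python
# Hypothetical inclusion matrix: primary study -> reviews that include it
inclusion = {
    "Smith 2016":  {"Review A", "Review B"},
    "Lee 2018":    {"Review A"},
    "Garcia 2020": {"Review A", "Review B", "Review C"},
    "Chen 2021":   {"Review C"},
}

reviews = sorted({rev for revs in inclusion.values() for rev in revs})
r, c = len(inclusion), len(reviews)                    # unique primary studies, reviews
n_inclusions = sum(len(revs) for revs in inclusion.values())

# Corrected covered area: (N - r) / (r * (c - 1)); higher values mean more overlap
cca = (n_inclusions - r) / (r * (c - 1))
print(f"{r} primary studies across {c} reviews; CCA = {cca:.1%}")

# Simple text rendering of the matrix for the appendix
for study, revs in inclusion.items():
    row = " ".join("x" if rev in revs else "." for rev in reviews)
    print(f"{study:<12} {row}")
```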

We will maintain an audit trail to document amendments to this umbrella review protocol and report these in both the PROSPERO register and subsequent publications. Findings will be disseminated through publications in peer-reviewed journals in the fields of implementation science, medicine, and health services and policy research. We will also disseminate results through relevant conferences and social media using different strategies (e.g., graphical abstract). Furthermore, we will leverage existing connections between SDL and decision-makers at the provincial and national levels in Canada to disseminate the findings of the review to a wider audience (e.g., the Director of the Quebec Cancerology Program, the Canadian Association of Psychosocial Oncology).

Availability of data and materials

Data sharing is not applicable to this article as no datasets were generated or analyzed for the purposes of this publication.

Abbreviations

  • CERQual: Confidence in the Evidence from Reviews of Qualitative research
  • CFIR: Consolidated Framework for Implementation Research
  • CIHI: Canadian Institute for Health Information
  • CPAC: Canadian Partnership Against Cancer
  • ERIC: Expert Recommendations for Implementing Change
  • GRADE: Grading of Recommendations Assessment, Development and Evaluation
  • JBI: Joanna Briggs Institute
  • OECD: Organisation for Economic Co-operation and Development
  • PaRIS: Patient-Reported Indicators Surveys
  • PREM: Patient-reported experience measure
  • PRIOR: Preferred Reporting Items for Overviews of Reviews
  • PRISMA: Preferred Reporting Items for Systematic Review and Meta-Analysis
  • PRISMA-P: Preferred Reporting Items for Systematic Review and Meta-Analysis Protocols
  • PROM: Patient-reported outcome measure

Kingsley C, Patel S. Patient-reported outcome measures and patient-reported experience measures. BJA Education. 2017;16:137–44.

Jamieson Gilmore K, Corazza I, Coletta L, Allin S. The uses of patient reported experience measures in health systems: a systematic narrative review. Health Policy. 2022.

Gibbons C, Porter I, Gonçalves-Bradley DC, et al. Routine provision of feedback from patient-reported outcome measurements to healthcare providers and patients in clinical practice. Cochrane Database Syst Rev. 2021;10:CD011589.

Howell DMS, Wilkinson K, et al. Patient-reported outcomes in routine cancer clinical practice: a scoping review of use, impact on health outcomes, and implementation factors. Ann Oncol. 2015;26:1846–58.

Kotronoulas GKN, Maguire R, et al. What is the value of the routine use of patient-reported outcome measures toward improvement of patient outcomes, processes of care, and health service outcomes in cancer care? A systematic review of controlled trials. J Clin Oncol. 2014;32:1480–501.

Chen J, Ou L, Hollis SJ. A systematic review of the impact of routine collection of patient reported outcome measures on patients, providers and health organisations in an oncologic setting. BMC Health Serv Res. 2013;13:211.

Basch E. Symptom monitoring With patient-reported outcomes during routine cancer treatment: A randomized controlled trial. J Clin Oncol. 2016;34:2198–2198.

Forcino RCMM, Engel JA, O’Malley AJ, Elwyn G. Routine patient-reported experience measurement of shared decision-making in the USA: a qualitative study of the current state according to frontrunners. BMJ Open. 2020;10: e037087.

Timmins N. NHS goes to the PROMS. BMJ. 2008;336:1464–5.

Mjåset C. Value-based health care in four different health care systems. NEJM Catalyst. 2020.

Sekretariatet P. PRO – patient reported outcome. https://pro-danmark.dk/da/proenglish .

Terner MLK, Chow C, Webster G. Advancing PROMs for health system use in Canada and beyond. J Patient Rep Outcomes. 2021;5:94.

Slawomirski L, van den Berg M, Karmakar-Hore S. Patient-Reported indicator survey (Paris): aligning practice and policy for better health outcomes. World Med J. 2018;64(3):8–14.

Ahmed SBL, Bartlett SJ, et al. A catalyst for transforming health systems and person-centred care: Canadian national position statement on patient-reported outcomes. Curr Oncol. 2020;27:90–9.

Pross C, Geissler A, Busse R. Measuring, reporting, and rewarding quality of care in 5 nations: 5 policy levers to enhance hospital quality accountability. Milbank Q. 2017;95(1):136–83.

Ernst SCK, Steinbeck V, Busse R, Pross C. Toward system-wide implementation of patient-reported outcome measures: a framework for countries, states, and regions. Value in Health. 2022;25(9):1539–47.

Nguyen HBP, Dhillon H, Sundaresan P. A review of the barriers to using Patient-Reported Outcomes (PROs) and Patient-Reported Outcome Measures (PROMs) in routine cancer care. J Med Radiation Sci. 2021;68:186–95.

Davis SAM, Smith M, et al. Paving the way for electronic patient-centered measurement in team-based primary care: integrated knowledge translation approach. JMIR Form Res. 2022;6: e33584.

Bull CTH, Watson D, Callander EJ. Selecting and implementing patient-reported outcome and experience measures to assess health system performance. JAMA Health Forum. 2022;3: e220326.

Schepers SAHL, Zadeh S, Grootenhuis MA, Wiener L. Healthcare professionals’ preferences and perceived barriers for routine assessment of patient-reported outcomes in pediatric oncology practice: moving toward international processes of change. Pediatr Blood Cancer. 2016;63:2181–8.

Glenwright BG, Simmich J, Cottrell M, et al. Facilitators and barriers to implementing electronic patient-reported outcome and experience measures in a health care setting: a systematic review. J Patient Rep Outcomes. 2023;7:13. https://doi.org/10.1186/s41687-023-00554-2

Foster A, Croot L, Brazier J, Harris J, O’Cathain A. The facilitators and barriers to implementing patient reported outcome measures in organisations delivering health related services: a systematic review of reviews. J Patient Rep Outcomes. 2018;2(1):1–16.

Wolff AC, Dresselhuis A, Hejazi S, et al. Healthcare provider characteristics that influence the implementation of individual-level patient-centered outcome measure (PROM) and patient-reported experience measure (PREM) data across practice settings: a protocol for a mixed methods systematic review with a narrative synthesis. Syst Rev. 2021;10:169. https://doi.org/10.1186/s13643-021-01725-2

Grimshaw JM, Eccles MP, Lavis JN, Hill SJ, Squires JE. Knowledge translation of research findings. Implement Sci. 2012;7(1):50. https://doi.org/10.1186/1748-5908-7-50 .

French SD, Green SE, O’Connor DA, et al. Developing theory-informed behaviour change interventions to implement evidence into practice: a systematic approach using the Theoretical Domains Framework. Implement Sci. 2012;7(1):38. https://doi.org/10.1186/1748-5908-7-38 .

Wolfenden L, Foy R, Presseau J, Grimshaw JM, Ivers NM, Powell BJ, et al. Designing and undertaking randomised implementation trials: guide for researchers. BMJ. 2021;372:m3721. https://doi.org/10.1136/bmj.m3721

Damschroder LJ, Reardon CM, Widerquist MAO, et al. The updated Consolidated Framework for Implementation Research based on user feedback. Implement Sci. 2022;17:75. https://doi.org/10.1186/s13012-022-01245-0

Bradshaw ASM, Mulderrig M, et al. Implementing person-centred outcome measures in palliative care: An exploratory qualitative study using Normalisation Process Theory to understand processes and context. Palliat Med. 2021;35:397–407.

Stover AMHL, van Oers HA, Greenhalgh J, Potter CM. Using an implementation science approach to implement and evaluate patient-reported outcome measures (PROM) initiatives in routine care settings. Qual Life Res. 2021;30:3015–33.

Manalili KSM. Using implementation science to inform the integration of electronic patient-reported experience measures (ePREMs) into healthcare quality improvement: description of a theory-based application in primary care. Qual Life Res. 2021;30:3073–84.

Patey AM, Fontaine G, Francis JJ, McCleary N, Presseau J, Grimshaw JM. Healthcare professional behaviour: health impact, prevalence of evidence-based behaviours, correlates and interventions. Psychol Health. 2022:766–794. https://doi.org/10.1080/08870446.2022.2100887

Proctor EK, Powell BJ, McMillen JC. Implementation strategies: recommendations for specifying and reporting. Implement Sci. 2013;8(1):1–11. https://doi.org/10.1186/1748-5908-8-139 .

Powell BJ, Waltz TJ, Chinman MJ, et al. A refined compilation of implementation strategies: results from the Expert Recommendations for Implementing Change (ERIC) project. Implement Sci. 2015;10:21. https://doi.org/10.1186/s13012-015-0209-1

Waltz TJ, Powell BJ, Matthieu MM, et al. Use of concept mapping to characterize relationships among implementation strategies and assess their feasibility and importance: results from the Expert Recommendations for Implementing Change (ERIC) study. Implement Sci. 2015;10:109. https://doi.org/10.1186/s13012-015-0295-0 .

Aromataris E, Munn Z. Chapter 11: Umbrella Reviews. In: Aromataris E, Munn Z, editors. Joanna Briggs Institute Reviewer's Manual. The Joanna Briggs Institute; 2020.

Aromataris E, Fernandez R, Godfrey C, Holly C, Kahlil H, Tungpunkom P. Summarizing systematic reviews: methodological development, conduct and reporting of an Umbrella review approach. Int J Evid Based Healthc. 2015;13(3):132–40.

Moher D, Shamseer L, Clarke M, et al. Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015 statement. Syst Rev. 2015;4(1). https://doi.org/10.1186/2046-4053-4-1

Gates MGA, Pieper D, Fernandes RM, Tricco AC, Moher D, et al. Reporting guideline for overviews of reviews of healthcare interventions: development of the PRIOR statement. BMJ. 2022;378: e070849. https://doi.org/10.1136/bmj-2022-070849 .

Page MJ, McKenzie JE, Bossuyt PM, Boutron I, Hoffmann TC, Mulrow CD, Moher D, et al. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. Int J Surg. 2021;88.

Dixon-Woods M, Agarwal S, Young B, Jones D, Sutton A. Integrative approaches to qualitative and quantitative evidence. Health Development Agency; 2004.

Boudewijns EA, Trucchi M, van der Kleij RM, Vermond D, Hoffman CM, Chavannes NH, Brakema EA, et al. Facilitators and barriers to the implementation of improved solid fuel cookstoves and clean fuels in low-income and middle-income countries: an umbrella review. Lancet Planet Health. 2022.

Lewin SGC, Munthe-Kaas H, et al. Using qualitative evidence in decision making for health and social interventions: an approach to assess confidence in findings from qualitative evidence syntheses (GRADE-CERQual). PLoS Med. 2015;12(10):e1001895. https://doi.org/10.1371/journal.pmed.1001895 .

The Centre for Implementation. The Consolidated Framework for Implementation Research (CFIR) 2.0. Adapted from "The updated Consolidated Framework for Implementation Research based on user feedback," by Damschroder, L.J., Reardon, C.M., Widerquist, M.A.O. et al., 2022, Implementation Sci 17, 75. Image copyright 2022 by The Center for Implementation. https://thecenterforimplementation.com/toolbox/cfir

Lewin S, Booth A, Glenton C, et al. Applying GRADE-CERQual to qualitative evidence synthesis findings: introduction to the series. Implementation Sci 2018;13(Suppl 1):2. https://doi.org/10.1186/s13012-017-0688-3 .

Download references

Acknowledgements

We wish to acknowledge the involvement of a patient-partner on the RRISIQ grant supporting this project (Lisa Marcovici). LM will provide feedback and guidance on the findings of the umbrella review, orienting the interpretation of findings and the next steps of this project.

We wish to acknowledge funding from the Quebec Network on Nursing Intervention Research/Réseau de recherche en intervention en sciences infirmières du Québec (RRISIQ), a research network funded by the Fonds de recherche du Québec en Santé (FRQ-S). Funders had no role in the development of this protocol.

Author information

Authors and Affiliations

Ingram School of Nursing, Faculty of Medicine and Health Sciences, McGill University, 680 Rue Sherbrooke O #1800, Montréal, QC, H3A 2M7, Canada

Guillaume Fontaine, Lydia Ould Brahim, Sydney Wasserman & Sylvie D. Lambert

Centre for Clinical Epidemiology, Lady Davis Institute for Medical Research, Sir Mortimer B. Davis Jewish General Hospital, CIUSSS West-Central Montreal, 3755 Chem. de la Côte-Sainte-Catherine, Montréal, QC, H3T 1E2, Canada

Guillaume Fontaine

Department of Family Medicine and Emergency Medicine, Faculty of Medicine and Health Sciences, Université de Sherbrooke, 3001 12 Ave N Building X1, Sherbrooke, QC, J1H 5N4, Canada

Marie-Eve Poitras

Centre Intégré Universitaire de Santé Et de Services Sociaux (CIUSSS) du Saguenay-Lac-Saint-Jean du Québec, 930 Rue Jacques-Cartier E, Chicoutimi, QC, G7H 7K9, Canada

Faculty of Nursing, Université Laval, 1050 Av. de La Médecine, Québec, QC, G1V 0A6, Canada

Maxime Sasseville

Centre de Recherche en Santé Durable VITAM, CIUSSS de La Capitale-Nationale, 2480, Chemin de La Canardière, Quebec City, QC, G1J 2G1, Canada

Faculty of Medicine & School of Public Health, Université de Montréal, Pavillon Roger-Gaudry, 2900 Edouard Montpetit Blvd, Montreal, QC, H3T 1J4, Canada

Marie-Pascale Pomey

Centre de Recherche du Centre Hospitalier de l'Université de Montréal (CR-CHUM), 900 Saint Denis St., Montreal, QC, H2X 0A9, Canada

Direction of Nursing, CIUSSS de L’Ouest de L’Île-de-Montréal, 3830, Avenue Lacombe, Montreal, QC, H3T 1M5, Canada

Jérôme Ouellet

Université Laval Library, Pavillon Alexandre-Vachon, 1045 Avenue de La Médecine, Québec, QC, G1V 0A6, Canada

Frédéric Bergeron

St. Mary’s Research Centre, CIUSSS de L’Ouest de L’Île-de-Montréal, 3777 Jean Brillant St, Montreal, QC, H3T 0A2, Canada

Sylvie D. Lambert

Contributions

GF and SDL conceptualized the study. GF, MEP, MS, MP, and SDL developed the study methods. GF drafted the manuscript, with critical revisions and additions by SDL. All authors provided intellectual content and reviewed and edited the manuscript. GF is the guarantor of this review. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Guillaume Fontaine .

Ethics declarations

Ethics approval and consent to participate.

Not applicable.

Consent for publication

Competing interests.

The authors declare that they have no competing interests.

Additional information

Publisher’s note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Supplementary Material 1.

Supplementary Material 2.

Supplementary Material 3.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article.

Fontaine, G., Poitras, M.E., Sasseville, M. et al. Barriers and enablers to the implementation of patient-reported outcome and experience measures (PROMs/PREMs): protocol for an umbrella review. Syst Rev 13, 96 (2024). https://doi.org/10.1186/s13643-024-02512-5

Received : 24 May 2023

Accepted : 13 March 2024

Published : 26 March 2024

DOI : https://doi.org/10.1186/s13643-024-02512-5


  • Patient-reported outcome measures
  • Patient-reported experience measures
  • Implementation science
  • Umbrella review
  • Systematic review
  • Overview of reviews
  • Facilitators


medRxiv

Top-funded digital health companies offering lifestyle interventions for dementia prevention: Company overview and evidence analysis

  • Authors include Rasita Vinay (corresponding author), Tobias Kowatsch, and Marcia Nißen

Background and objective: Dementia prevention has been recognized as a top priority by public health authorities due to the lack of disease-modifying treatments. In this regard, digital dementia-preventive lifestyle services (DDLS) emerge as potentially pivotal services, aiming to address modifiable risk factors on a large scale. This study aims to identify the top-funded companies offering DDLS globally and evaluate their clinical evidence to gain insights into the current state of the global service landscape.

Methods: A systematic screening of two financial databases (Pitchbook and Crunchbase) was conducted. Corresponding published clinical evidence was collected through a systematic literature review and analyzed regarding study purpose, results, quality of results, and level of clinical evidence.

Findings: The ten top-funded companies offering DDLS received total funding of EUR 128.52 million, of which three companies collected more than 75%. Clinical evidence was limited due to only nine eligible publications, small clinical subject groups, the absence of longitudinal study designs, and no direct evidence of dementia prevention.

Conclusion: The study highlights the need for a more rigorous evaluation of DDLS effectiveness in today’s market. It serves as a starting point for further research in digital dementia prevention.

Competing Interest Statement

RV, PH, TK, and MN are affiliated with the Centre for Digital Health Interventions (CDHI), a joint initiative of the Institute for Implementation Science in Health Care, University of Zurich, the Department of Management, Technology, and Economics at the ETH Zurich, and the institute of Technology Management and School of Medicine at the University of St. Gallen. CDHI is funded in part by CSS, a Swiss health insurer, Mavie Next, an Austrian healthcare provider and MTIP, a Swiss investor company. TK is also a co-founder of Pathmate Technologies, a university spin-off company that creates and delivers digital clinical pathways. However, neither CSS nor Pathmate Technologies, Mavie Next, or MTIP was involved in this research. All other authors declare no conflict of interest.

Funding Statement

This study did not receive any funding


Data Availability

All data produced in the present work are contained in the manuscript


Subject Area

  • Public and Global Health
  • Systematic Review
  • Open access
  • Published: 28 March 2024

Thoracolumbar Interfascial Plane (TLIP) block verses other paraspinal fascial plane blocks and local infiltration for enhanced pain control after spine surgery: a systematic review

  • Tarika D. Patel 1 ,
  • Meagan N. McNicholas 1 ,
  • Peyton A. Paschell 1 ,
  • Paul M. Arnold 2 &
  • Cheng-ting Lee 3  

BMC Anesthesiology volume 24, Article number: 122 (2024)


Spinal surgeries are accompanied by excessive pain due to extensive dissection and muscle retraction during the procedure. Thoracolumbar interfascial plane (TLIP) blocks for spinal surgeries are a recent addition to regional anesthesia to improve postoperative pain management. When performing a classical TLIP (cTLIP) block, anesthetics are injected between the muscle (m.) multifidus and m. longissimus. During a modified TLIP (mTLIP) block, anesthetics are injected between the m. longissimus and m. iliocostalis instead. Our systematic review provides a comprehensive evaluation of the effectiveness of TLIP blocks in improving postoperative outcomes in spinal surgery through an analysis of randomized controlled trials (RCTs).

We conducted a systematic review based on the PRISMA guidelines using the PubMed and Scopus databases. Inclusion criteria required studies to be RCTs in English that used TLIP blocks during spinal surgery and reported both outcome measures. Outcome data included postoperative opioid consumption and pain.

A total of 17 RCTs were included. The use of a TLIP block significantly decreases postoperative opioid use and pain compared to general anesthesia (GA) plus 0.9% saline, with no increase in complications. There were mixed outcomes when compared against wound infiltration with local anesthesia. When compared with erector spinae plane blocks (ESPB), TLIP blocks often decreased analgesic use; however, this did not always translate to decreased pain. The cTLIP and mTLIP block methods had comparable postoperative outcomes, but the mTLIP block had a significantly higher percentage of one-time block success.

The accumulation of the current literature demonstrates that TLIP blocks are superior to non-block procedures in terms of analgesia requirements and reported pain throughout the hospitalization of patients who underwent spinal surgery. The varying levels of success seen with wound infiltration and ESPB could be due to the nature of the different spinal procedures. For example, studies that found TLIP blocks superior included fusion surgeries, which are more invasive procedures resulting in greater postoperative pain than discectomies.

The results of our systematic review include moderate-quality evidence showing that TLIP blocks provide effective pain control after spinal surgery. Although the application of mTLIP blocks is more often successful on the first attempt, more studies are needed to confirm the superiority of mTLIP over cTLIP blocks. Additionally, further high-quality research is needed to verify the potential benefit of TLIP blocks as a common practice for spinal surgeries.

Introduction

Spinal surgeries are often accompanied by excessive pain due to extensive tissue dissection and muscle retraction during the procedure [ 1 , 2 ]. Effective pain control is a crucial aspect of patient comfort and a pivotal determinant of overall surgical outcomes. Regional anesthesia techniques have gained prominence in the quest for optimal analgesia, with thoracolumbar interfascial plane (TLIP) blocks emerging as a noteworthy option.

Opioids are commonly used for post-spinal surgery pain management [ 3 , 4 ]. While opioids provide effective analgesia, their use is associated with reoperations and can lead to undesired outcomes such as long-term dependence, nausea, vomiting, and respiratory depression. Multimodal analgesic regimens combine two or more analgesics or techniques to reduce the dose of each individual drug, supporting the goal of reducing opioid use while providing adequate pain control [ 5 , 6 ]. While there is no single optimal analgesic combination, unless there are patient-specific contraindications, all patients should receive a combination of acetaminophen and nonsteroidal anti-inflammatory drugs (NSAIDs) perioperatively or intraoperatively, continued postoperatively as scheduled dosing. Furthermore, all patients should receive surgical site infiltration and/or regional anesthesia (interfascial plane or peripheral nerve block). While opioid use should be reduced, the role of opioid-free analgesia remains controversial. In the acute postoperative period, opioids should be administered only as a rescue agent. Intravenous (IV) analgesics should be limited, with the goal of transitioning patients to oral medications and not impeding ambulation and rehabilitation [ 7 ]. Therefore, a multimodal pain regimen is key to improving patient outcomes and reducing total opioid consumption.

In 2015, Hand et al. [ 8 ] introduced the classical TLIP (cTLIP) block, which targets the dorsal rami of the thoracolumbar nerves. This is a relatively recent addition to regional anesthesia techniques for spinal surgeries. It involves the precise administration of local anesthetics between the multifidus and longissimus paraspinal muscles at the level of the third lumbar vertebra, typically under ultrasound guidance. It is often difficult to delineate the two muscles; however, lumbar extension can help improve visualization of the intended injection site. This technique is designed to selectively target the sensory innervation of the thoracolumbar region, potentially offering a valuable alternative to systemic opioids. To address the challenges and difficulties seen with the cTLIP block, Ahiskalioglu et al. [ 9 ], in 2017, introduced the modified TLIP (mTLIP) block, in which anesthetics are instead injected between the longissimus and iliocostalis muscles. The erector spinae plane block (ESPB) is similar to the TLIP block; however, an ESPB targets both the ventral and dorsal rami of the thoracic and abdominal spinal nerves by injecting anesthetics between the erector spinae muscle and the transverse processes of the vertebrae. By targeting only the dorsal rami of spinal nerves, the TLIP block provides more focused dermatomal coverage for the back muscles, which could lead to better-controlled postoperative pain [ 10 ].

Our systematic review endeavors to provide a comprehensive evaluation of the effectiveness of TLIP blocks in improving postoperative outcomes in spinal surgery. The primary objectives encompass a multifaceted exploration of the impact of TLIP blocks for patients undergoing lumbar spinal surgery, focusing on postoperative pain control, opioid consumption, and the incidence of complications. We aim to provide a nuanced understanding of how TLIP blocks fare in comparison to other anesthesia modalities commonly employed in spinal surgery through a meticulous analysis of randomized controlled trials.

We conducted a systematic review based on the Preferred Reporting items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. PubMed, Scopus, and clinicaltrials.gov were the databases used. The search strategy was focused on “thoracolumbar interfascial plane blocks” for “spine” surgeries. Multiple search phrases and keywords were used to limit bias and capture missed studies that may not have shown up using a single search. The snowball method was used to collect references from other systematic reviews for potentially relevant articles that were missed with the initial search. At the start, all abstracts were read in their entirety for initial screening. The full text of studies with potential for final inclusion were evaluated for eligibility based on inclusion and exclusion criteria. Each article was reviewed by two independent researchers to determine inclusion based on our pre-determined criteria, then was confirmed by a third reviewer.

Inclusion criteria required studies to be randomized controlled trials (RCTs) in the English language that evaluated the impact of TLIP blocks during spinal surgery on postoperative pain and analgesia. Cohort studies were not included because most had been retracted, and case studies provided minimal quantifiable outcome measurements. Inclusion criteria also required the use of TLIP blocks for any type of spine-based surgery and the reporting of standardized outcome measures of both postoperative analgesic use and pain. Studies that were not randomized controlled trials in human patients, did not report any outcome data, or involved surgery beyond the spine were excluded.

We collected data regarding age range, total number of participants, type of surgery, treatment characteristics, and type of anesthesia mixture. Outcome data include intraoperative and postoperative opioid consumption, time to postoperative analgesia, and postoperative pain. Complications were also collected for each study. Continuous variable data were reported as mean ± standard deviation or median (interquartile range). Categorical variable data were reported as frequencies with percentages. Associations were reported with statistical significance at a p-value < 0.05. Studies were grouped based on the type of control TLIP blocks were compared against.

The critical appraisal of included studies was evaluated using the JBI assessment tool for risk of bias for randomized controlled trials [ 11 ]. This tool includes 11 items that assess a variety of biases, such as selection, performance, and measurement bias. Each item can receive an answer of yes, no, unclear, or not applicable. Studies with a higher number of “yes” answers have a lower risk of bias, and those with a higher number of “no” answers have a higher risk of bias.
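
A rough tally of this kind can be scripted to keep the appraisal reproducible; in the sketch below the cut-offs and the answer sets are assumptions for illustration, since the review reports a qualitative judgement rather than a numeric threshold.

```python
def classify_risk(answers: list[str]) -> str:
    """More 'yes' answers suggest lower risk of bias; cut-offs here are illustrative only."""
    assert len(answers) == 11, "the JBI RCT checklist has 11 items"
    yes, no = answers.count("yes"), answers.count("no")
    if yes >= 10 and no == 0:
        return "low"
    if no >= 3:
        return "high"
    return "moderate"

# Hypothetical answer sets for two trials
trials = {
    "Trial A": ["yes"] * 11,
    "Trial B": ["yes"] * 6 + ["no"] * 3 + ["unclear"] * 2,
}
for trial, answers in trials.items():
    print(f"{trial}: {classify_risk(answers)} risk of bias")
```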

A total of 17 RCTs were included in this study, with the age of patients ranging from 18 to 74 years (Fig. 1). Risk of bias was moderate to high, given that for most studies there were several criteria whose fulfilment was in doubt (Table 1). Only two studies, Chen et al. [ 12 ] and Ahiskalioglu et al. [ 13 ], met all criteria, indicating a low risk of bias. Additionally, only three studies met the criterion that those delivering the treatment were blind to treatment assignment (question 5).

Figure 1. PRISMA flow diagram for study selection

The types of surgery performed, most often at the lumbar level, included discectomies, fusions, and decompression/stabilization procedures. TLIP blocks were performed after induction of general anesthesia (GA) by either the modified or the classical method. TLIP blocks were most often compared to GA plus 0.9% saline ( n  = 5), wound infiltration ( n  = 4), ESPB ( n  = 4), quadratus lumborum block (QLB) ( n  = 1), or epidural analgesia ( n  = 1). Two studies compared the two modes of TLIP block, classical and modified (Table  2 ).

The most common anesthetic regimen was bilateral injection of 20 mL of 0.25% bupivacaine ( n  = 10). Other regimens included bilateral injections of 30 mL of 0.25% bupivacaine ( n  = 1), 30 mL of 0.375% ropivacaine ( n  = 2), and 20 mL of 0.2% ropivacaine ( n  = 1), along with a mixture of bupivacaine and lidocaine ( n  = 2) and a mixture of ropivacaine and lignocaine ( n  = 1).

The two main outcomes analyzed were postoperative pain and opioid consumption. Pain intensity was reported using the visual analog scale (VAS) or the numeric rating scale (NRS), both of which use a 0 to 10 scale. Analgesic consumption included the amount of opioid use, time to first analgesia, percentage of patients requiring rescue analgesia, and frequency of patient-controlled analgesia (PCA) use. The most frequently reported complications of the blocks were nausea and vomiting. The rate of complications was low and did not differ significantly between treatment groups in most studies. Ahiskalioglu et al. [ 13 ], Ciftci et al. [ 10 ] and Ekinci et al. [ 26 ] were the only studies that reported a significant decrease in nausea with the TLIP block.

Overall, the use of a TLIP block for spinal surgery significantly decreased postoperative opioid use and pain compared with general anesthesia (GA) plus 0.9% saline alone, with no increase in complications. The time before analgesia was first requested was significantly longer for patients who received a TLIP block.

When TLIP blocks were compared against wound infiltration of local anesthesia, two studies, Ince et al. [ 19 ] and Bicak et al. [ 27 ], found wound infiltration was as effective as a TLIP block for postoperative pain relief. On the other hand, Ekinci et al. [ 26 ] and Pavithran et al. [ 15 ] found TLIP blocks to be superior.

There appeared to be varying levels of success when TLIP blocks were compared with ESPB. Kumar et al. [ 14 ] found that patients who received ESPBs reported significantly lower total opioid consumption and less pain for up to 24 h. Ciftci et al. [ 10 ], however, saw no difference in analgesic efficacy between the ESPB and TLIP block groups, although both blocks significantly decreased postoperative opioid use compared with no block. Similarly, Tantri et al. [ 17 ] saw no difference in postoperative pain control between the two block groups, although the TLIP block provided a longer duration of analgesia, as shown by a significantly longer time to first analgesic request.

TLIP blocks were also compared against a posterior QLB and against epidural analgesia. TLIP provided superior analgesia, with quality of recovery (QoR-40) scores, Kaplan–Meier survival analysis, and postoperative pain control all favoring patients who received TLIP blocks.

When the two methods of TLIP block were compared against each other, there was no significant difference in postoperative pain or opioid use. However, Ciftci et al. [ 21 ] showed that the mTLIP method had a significantly higher first-attempt block success rate (90% vs. 40% with the cTLIP block).

Conventional spinal surgeries often involve extensive dissection of subcutaneous tissue, bone, and ligaments, resulting in a high degree of postoperative pain and strikingly high use of opioid analgesics [ 1 , 28 ]. The long-term consequences of postoperative opioid analgesia, including dependence and addiction, are well documented and are a feared sequela of prescribing these medications. One study demonstrated opioid overuse in spine surgery, with postsurgical opioid dependence rising from 0% to nearly 48% among patients who underwent surgical fusion for degenerative scoliosis between the early 2000s and the mid-2010s [ 28 ]. Effective pain control is thus an important aspect of postoperative care, supporting the clinical value of our study. The use of TLIP blocks during spine surgery may provide better postsurgical pain control and could decrease the incidence of chronic pain. However, current studies only evaluate the effect of TLIP blocks on pain during the first few days after surgery; further research with longer follow-up is needed to better evaluate their effect on chronic pain over the weeks to months after surgery.

The use of regional anesthesia is supported by Enhanced Recovery After Surgery (ERAS) protocols, which aim to minimize opioid consumption. One novel technique is the TLIP block, first introduced by Hand et al. [ 8 ]. TLIP blocks target the dorsal rami of the thoracolumbar nerves as they pass through the paraspinal muscles. The TLIP block is analogous to the transversus abdominis plane (TAP) block used for abdominal procedures, in which the ventral rami of the thoracolumbar nerves are targeted instead. Given the success of TAP blocks in providing analgesia, TLIP blocks were hypothesized to provide a similar benefit for spinal surgery. The current literature, taken together, demonstrates that TLIP blocks are superior to no-block approaches in terms of analgesic requirements (total opioid use and time to analgesia) and reported pain throughout hospitalization in patients who underwent spinal surgery.

Hand et al. [ 8 ] developed what is now known as the cTLIP block, in which the needle is inserted at a 30-degree angle to the skin between the multifidus and longissimus muscles and advanced in a lateral-to-medial direction. Ahiskalioglu et al. [ 9 ] modified the TLIP block by injecting anesthetic at a 15-degree angle in a medial-to-lateral direction, between the longissimus and iliocostalis muscles. The advantages of the mTLIP block are the elimination of the risk of inadvertent neuraxial injection and an increased block success rate, as the longissimus is more easily distinguished from the iliocostalis than from the multifidus. Two studies directly comparing the two methods demonstrated similar postoperative analgesic effects; however, the block success rate was significantly higher with the modified version, supporting the conclusions of Ahiskalioglu et al. [ 9 ]. Given the limited number of reports comparing the two methods, more RCTs are needed to further validate the mTLIP block and its advantages. It is also worth noting a proposal to standardize the nomenclature of paraspinal interfascial plane (PIP) blocks, given the new variations on the original TLIP block described by Hand et al. [ 8 ]. A complication is that the paraspinal muscles of the cervical, thoracic, and lumbar regions all have different anatomy, so a dorsal ramus block technique is specific to each region [ 29 ]. Naming PIP blocks after the target muscle fascia could offer more clarity: the TLIP block would encompass the thoracic multifidus plane (TMP) and lumbar multifidus plane (LMP) blocks, while the mTLIP block would encompass the thoracic longissimus plane (TLP) and lumbar longissimus plane (LLP) blocks.

The clinical efficacy of wound infiltration with local anesthetics is questionable, given the varying levels of success seen in studies. A systematic review [ 30 ] found only a few RCTs showing a modest reduction in pain intensity, mainly immediately after the operation, and a minor reduction in opioid use with local anesthetic wound infiltration for lumbar spine surgery. Reports among RCTs comparing wound infiltration against TLIP blocks were mixed, and the varying levels of success may be due in part to the nature of the surgery. Ince et al. [ 19 ] and Bicak et al. [ 27 ] saw no difference in postoperative analgesic use, which may be because discectomies are less invasive than spinal fusion surgery. The studies that found TLIP blocks superior to wound infiltration included patients who underwent lumbar fusion surgery, providing further support for this interpretation.

ESPBs are another type of fascial plane block in which anesthetic is injected between the erector spinae muscles and the thoracic transverse processes, blocking the dorsal and ventral rami of the thoracic and abdominal spinal nerves [ 31 ]. An RCT by Avis et al. [ 32 ] found that a lumbar ESPB combined with an ERAS program did not decrease opioid use compared with saline after major spine surgery. Furthermore, quality of life at 3 months was similar between the control and treatment groups, further suggesting a limited benefit of the block. On the other hand, several systematic reviews [ 33 , 34 , 35 ] found that ESPBs decreased postoperative pain and opioid consumption in patients undergoing spinal surgery; however, much of this evidence is of low quality and insufficient to support the widespread use of ESPBs for spine surgery. Results regarding the efficacy of TLIP blocks relative to ESPBs were mixed. All but one report found a clear decrease in analgesic consumption with TLIP blocks; however, this did not always translate into a decrease in pain intensity or a difference in complication rates. The slight benefit of the TLIP block may stem from its ability to provide more focused analgesia than an ESPB [ 36 ]. While the evidence shows that fascial plane blocks improve outcomes after spine surgery, it is difficult to conclude which block is superior given the limited reports available. The decision to perform one technique over the other may be based on physician and institutional preference and expertise [ 37 ].

It is important to note that while some studies show a statistically significant decrease in pain with TLIP blocks, this change in pain perception does not appear to be clinically significant. A study by Smith et al. [ 38 ] examined the magnitude of pain reduction that is meaningful to patients with acute or chronic pain: a reduction in pain intensity of 10–20% is “minimally important”, ≥ 30% is “moderately important”, and ≥ 50% is “substantially important” to patients. In our review, the mean difference in VAS/NRS pain scores across studies was rarely greater than one point and never greater than two. Thus, pain is reduced by only about 10–20%, which is unlikely to provide patients with a meaningful improvement in pain control. The true value of TLIP blocks for spine surgery is therefore likely the reduction in analgesic and opioid consumption.
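As a rough sketch of the arithmetic behind this interpretation (the baseline scores below are assumed for illustration; only the size of the mean differences reflects the reviewed studies):

```python
# Percent pain reduction implied by a given mean difference in VAS/NRS score.
# Baseline scores are hypothetical; mean differences reflect the ranges noted above.
def percent_reduction(mean_difference, baseline_score):
    return 100.0 * mean_difference / baseline_score

for baseline in (5.0, 7.0):          # assumed typical early postoperative scores
    for md in (0.5, 1.0, 2.0):       # mean differences seen across studies
        print(f"baseline {baseline}: {md}-point drop = "
              f"{percent_reduction(md, baseline):.0f}% reduction")
# A drop of about one point on these baselines falls in the 10-20%
# "minimally important" band described by Smith et al.
```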

Our review includes 17 RCTs and provides an update to previous systematic reviews that included studies since retracted or removed from publication. Such studies were excluded from this report to increase the strength and validity of our findings. In general, our results are consistent with previous conclusions, with some differences. Both prior meta-analyses, by Ye et al. [ 39 ] and Long et al. [ 40 ], found TLIP blocks to markedly reduce opioid use and to provide effective pain control compared with no block or a sham block. However, neither included studies comparing TLIP blocks against other types of paraspinal blocks. Second, while the number of reports is limited, neither review addressed studies comparing the modified and classic versions of the TLIP block. Lastly, while in our review the superiority of TLIP blocks over wound infiltration appeared to depend on the type of spinal surgery, Ye et al. found TLIP blocks to be superior overall.

Limitations

Our review has inevitable limitations. First, there is a lack of homogeneity across studies. The heterogeneity stems from differences in subject characteristics, anesthetic agents and protocols, postoperative analgesic protocols, and type of surgery. Different spinal surgeries with varying levels of invasiveness make comparison between studies more difficult, as less invasive procedures are expected to result in less postoperative pain than their more invasive counterparts. Additionally, slight variations in the formulation of the anesthetic provided and the mode of delivery may have resulted in differences in effectiveness that we were unable to account for. Furthermore, outcome measures were reported in different metrics, and some variables, such as the need for rescue analgesia, QoR-40 score, and Bruggemann comfort scale score, were reported only sparsely across studies. Lastly, few studies compared TLIP blocks against wound infiltration or other paraspinal blocks, or compared the two modes of TLIP block. Overall, the risk of bias among studies was moderate, which lowers the overall quality of the evidence and the confidence in our conclusions.

The results of our systematic review provide evidence of the effectiveness of TLIP blocks in improving postoperative pain control. TLIP blocks were associated with improved outcomes after surgery, including lower pain scores and decreased analgesic requirements, compared with no block and, in some studies, wound infiltration. However, when comparing ESPB and TLIP blocks, it remains difficult to determine the preferred nerve block for spinal surgery. mTLIP blocks appear to be superior to cTLIP blocks, but further research is needed to verify this.

Availability of data and materials

All data generated or analyzed during this study are included in this published article [and its supplementary information files].

Bajwa SJS, Haldar R. Pain management following spinal surgeries: An appraisal of the available options. J Craniovertebr Junction Spine. 2015;6(3):105–10. https://doi.org/10.4103/0974-8237.161589 .


Prabhakar NK, Chadwick AL, Nwaneshiudu C, et al. Management of Postoperative Pain in Patients Following Spine Surgery: A Narrative Review. Int J Gen Med. 2022;15:4535–49. https://doi.org/10.2147/IJGM.S292698 .


Warner NS, Habermann EB, Hooten WM, et al. Association Between Spine Surgery and Availability of Opioid Medication. JAMA Netw Open. 2020;3(6):e208974. https://doi.org/10.1001/jamanetworkopen.2020.8974 .

Yerneni K, Nichols N, Abecassis ZA, Karras CL, Tan LA. Preoperative Opioid Use and Clinical Outcomes in Spine Surgery: A Systematic Review. Neurosurg. 2020;86(6):E490–507. https://doi.org/10.1093/neuros/nyaa050 .


Nicholas TA, Robinson R. Multimodal Analgesia in the Era of the Opioid Epidemic. Surg Clin North Am. 2022;102(1):105–15. https://doi.org/10.1016/j.suc.2021.09.003 .


O’Neill A, Lirk P. Multimodal Analgesia. Anesthesiol Clin. 2022;40(3):455–68. https://doi.org/10.1016/j.anclin.2022.04.002 .

Joshi GP. Rational Multimodal Analgesia for Perioperative Pain Management. Curr Pain Headache Rep. 2023;27(8):227–37. https://doi.org/10.1007/s11916-023-01137-y .

Hand WR, Taylor JM, Harvey NR, et al. Thoracolumbar interfascial plane (TLIP) block: a pilot study in volunteers. Can J Anesth/J Can Anesth. 2015;62(11):1196–200. https://doi.org/10.1007/s12630-015-0431-y .

Ahiskalioglu A, Alici HA, Selvitopi K, Yayik AM. Ultrasonography-guided modified thoracolumbar interfascial plane block: a new approach. Can J Anesth/J Can Anesth. 2017;64(7):775–6. https://doi.org/10.1007/s12630-017-0851-y .

Ciftci B, Ekinci M, Celik EC, Yayik AM, Aydin ME, Ahiskalioglu A. Ultrasound-Guided Erector Spinae Plane Block versus Modified-Thoracolumbar Interfascial Plane Block for Lumbar Discectomy Surgery: A Randomized, Controlled Study. World Neurosurg. 2020;144:e849–55. https://doi.org/10.1016/j.wneu.2020.09.077 .

Barker TH, Stone JC, Sears K, et al. The revised JBI critical appraisal tool for the assessment of risk of bias for randomized controlled trials. JBI Evidence Synthesis. 2023;21(3):494–506. https://doi.org/10.11124/JBIES-22-00430 .

Chen K, Wang L, Ning M, Dou L, Li W, Li Y. Evaluation of ultrasound-guided lateral thoracolumbar interfascial plane block for postoperative analgesia in lumbar spine fusion surgery: a prospective, randomized, and controlled clinical trial. PeerJ. 2019;7:e7967. https://doi.org/10.7717/peerj.7967 .

Ahiskalioglu A, Yayik AM, Doymus O, et al. Efficacy of ultrasound-guided modified thoracolumbar interfascial plane block for postoperative analgesia after spinal surgery: a randomized-controlled trial. Can J Anaesth. 2018;65(5):603–4. https://doi.org/10.1007/s12630-018-1051-0 .

Kumar A, Sinha C, Kumar A, et al. Modified thoracolumbar Interfascial Plane Block Versus Erector Spinae Plane Block in Patients Undergoing Spine Surgeries: A Randomized Controlled Trial. J Neurosurg Anesthesiol. Published online January 9, 2023. https://doi.org/10.1097/ANA.0000000000000900

Pavithran P, Sudarshan PK, Eliyas S, Sekhar B, Kaniachallil K. Comparison of thoracolumbar interfascial plane block with local anaesthetic infiltration in lumbar spine surgeries - A prospective double-blinded randomised controlled trial. Indian J Anaesth. 2022;66(6):436–41. https://doi.org/10.4103/ija.ija_1054_21 .

Eltaher E, Nasr N, Abuelnaga ME, Elgawish Y. Effect of Ultrasound-Guided Thoracolumbar Interfascial Plane Block on the Analgesic Requirements in Patients Undergoing Lumbar Spine Surgery Under General Anesthesia: A Randomized Controlled Trial. J Pain Res. 2021;14:3465–74. https://doi.org/10.2147/JPR.S329158 .

Tantri AR, Rahmi R, Marsaban AHM, Satoto D, Rahyussalim AJ, Sukmono RB. Comparison of postoperative IL-6 and IL-10 levels following Erector Spinae Plane Block (ESPB) and classical Thoracolumbar Interfascial Plane (TLIP) block in a posterior lumbar decompression and stabilization procedure: a randomized controlled trial. BMC Anesthesiol. 2023;23(1):13. https://doi.org/10.1186/s12871-023-01973-w .

Tantri AR, Sukmono RB, LumbanTobing SDA, Natali C. Comparing the Effect of Classical and Modified Thoracolumbar Interfascial Plane Block on Postoperative Pain and IL-6 Level in Posterior Lumbar Decompression and Stabilization Surgery. Anesth Pain Med. 2022;12(2):e122174. https://doi.org/10.5812/aapm-122174 .

Ince I, Atalay C, Ozmen O, et al. Comparison of ultrasound-guided thoracolumbar interfascial plane block versus wound infiltration for postoperative analgesia after single-level discectomy. J Clin Anesth. 2019;56:113–4. https://doi.org/10.1016/j.jclinane.2019.01.017 .

Ammar MA, Taeimah M. Evaluation of thoracolumbar interfascial plane block for postoperative analgesia after herniated lumbar disc surgery: A randomized clinical trial. Saudi J Anaesth. 2018;12(4):559–64. https://doi.org/10.4103/sja.SJA_177_18 .

Çiftçi B, Ekinci M. A prospective and randomized trial comparing modified and classical techniques of ultrasound-guided thoracolumbar interfascial plane block. Agri. 2020;32(4):186–92. https://doi.org/10.14744/agri.2020.72325 .

Alver S, Ciftci B, Celik EC, et al. Postoperative recovery scores and pain management: a comparison of modified thoracolumbar interfascial plane block and quadratus lumborum block for lumbar disc herniation. Eur Spine J. Published online June 14, 2023. https://doi.org/10.1007/s00586-023-07812-3

Ozmen O, Ince I, Aksoy M, Dostbil A, Atalay C, Kasali K. The Effect of the Modified Thoracolumbar Interfacial Nerve Plane Block on Postoperative Analgesia and Healing Quality in Patients Undergoing Lumbar Disk Surgery: A Prospective, Randomized Study. Medeni Med J. 2019;34(4):340–5. https://doi.org/10.5222/MMJ.2019.36776 .

Wang L, Wu Y, Dou L, Chen K, Liu Y, Li Y. Comparison of Two Ultrasound-guided Plane Blocks for Pain and Postoperative Opioid Requirement in Lumbar Spine Fusion Surgery: A Prospective, Randomized, and Controlled Clinical Trial. Pain Ther. 2021;10(2):1331–41. https://doi.org/10.1007/s40122-021-00295-4 .

Çelik EC, Ekinci M, Yayik AM, Ahiskalioglu A, Aydin ME, Karaavci NC. Modified thoracolumbar interfascial plane block versus epidural analgesia at closure for lumbar discectomy: a randomized prospective study. APIC. 2020;24(6):588–95. https://doi.org/10.35975/apic.v24i6.1396 .

Ekinci M, Çiftçi B, Çelik E, Yayik A, Tahta A, Atalay Y. A Comparison of Ultrasound-Guided Modified-Thoracolumbar Interfascial Plane Block and Wound Infiltration for Postoperative Pain Management in Lumbar Spinal Surgery Patients. Agri. Published online 2019. https://doi.org/10.14744/agri.2019.97759

Bicak M, Salik F, Aktas U, Akelma H, AktizBicak E, Kaya S. Comparison of thoracolumbar interfascial plane block with the application of local anesthesia in the management of postoperative pain in patients with lumbar disk surgery. Turkish Neurosurg. Published online 2021. https://doi.org/10.5137/1019-5149.JTN.33017-20.2 .

Berardino K, Carroll AH, Kaneb A, Civilette MD, Sherman WF, Kaye AD. An Update on Postoperative Opioid Use and Alternative Pain Control Following Spine Surgery. Orthop Rev. 2021;13(2). https://doi.org/10.52965/001c.24978 .

Xu JL, Tseng V. Proposal to standardize the nomenclature for paraspinal interfascial plane blocks. Reg Anesth Pain Med. Published online June 19, 2019:rapm-2019–100696. https://doi.org/10.1136/rapm-2019-100696

Kjærgaard M, Møiniche S, Olsen KS. Wound infiltration with local anesthetics for post-operative pain relief in lumbar spine surgery: a systematic review. Acta Anaesthesiol Scand. 2012;56(3):282–90. https://doi.org/10.1111/j.1399-6576.2011.02629.x .


Jain K, Jaiswal V, Puri A. Erector spinae plane block: Relatively new block on horizon with a wide spectrum of application - A case series. Indian J Anaesth. 2018;62(10):809–13. https://doi.org/10.4103/ija.IJA_263_18 .

Avis G, Gricourt Y, Vialatte PB, et al. Analgesic efficacy of erector spinae plane blocks for lumbar spine surgery: a randomized double-blind controlled clinical trial. Reg Anesth Pain Med. 2022;47(10):610–6. https://doi.org/10.1136/rapm-2022-103737 .

Liang X, Zhou W, Fan Y. Erector spinae plane block for spinal surgery: a systematic review and meta-analysis. Korean J Pain. 2021;34(4):487–500. https://doi.org/10.3344/kjp.2021.34.4.487 .

Qiu Y, Zhang TJ, Hua Z. Erector Spinae Plane Block for Lumbar Spinal Surgery: A Systematic Review. J Pain Res. 2020;13:1611–9. https://doi.org/10.2147/JPR.S256205 .

Elias E, Nasser Z, Elias C, et al. Erector Spinae Blocks for Spine Surgery: Fact or Fad? Systematic Review of Randomized Controlled Trials. World Neurosurg. 2022;158:106–12. https://doi.org/10.1016/j.wneu.2021.11.005 .

Hamilton DL. Does Thoracolumbar Interfascial Plane Block Provide More Focused Analgesia Than Erector Spinae Plane Block in Lumbar Spine Surgery? J Neurosurg Anesthesiol. 2021;33(1):92–3. https://doi.org/10.1097/ANA.0000000000000643 .

McCracken S, Lauzadis J, Soffin EM. Ultrasound-guided fascial plane blocks for spine surgery. Curr Opin Anaesthesiol. 2022;35(5):626–33. https://doi.org/10.1097/ACO.0000000000001182 .

Smith SM, Dworkin RH, Turk DC, et al. Interpretation of chronic pain clinical trial outcomes: IMMPACT recommended considerations. Pain. 2020;161(11):2446–61. https://doi.org/10.1097/j.pain.0000000000001952 .

Ye Y, Bi Y, Ma J, Liu B. Thoracolumbar interfascial plane block for postoperative analgesia in spine surgery: A systematic review and meta-analysis. PLoS ONE. 2021;16(5):e0251980. https://doi.org/10.1371/journal.pone.0251980 .

Long G, Liu C, Liang T, Zhan X. The efficacy of thoracolumbar interfascial plane block for lumbar spinal surgeries: a systematic review and meta-analysis. J Orthop Surg Res. 2023;18(1):318. https://doi.org/10.1186/s13018-023-03798-2 .


Acknowledgements

Not applicable.

Author information

Authors and Affiliations

Carle Illinois College of Medicine, Champaign, IL, USA

Tarika D. Patel, Meagan N. McNicholas & Peyton A. Paschell

Carle Neuroscience Institute, Carle Foundation Hospital, Urbana, IL, USA

Paul M. Arnold

Department of Anesthesiology, Carle Foundation Hospital, Urbana, Illinois, USA

Cheng-ting Lee


Contributions

TP extracted, analyzed and interpreted the data, and was a major contributor in writing and editing the manuscript. MM extracted and analyzed the data, and wrote several sections of the manuscript. PP extracted and analyzed the data, and wrote several sections of the manuscript. PA edited the manuscript. CTL reviewed the manuscript. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Tarika D. Patel .

Ethics declarations

Ethics approval and consent to participate, consent for publication, competing interests.

The authors declare that they have no competing interests.

Additional information

Publisher’s note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

Reprints and permissions

About this article

Cite this article.

Patel, T.D., McNicholas, M.N., Paschell, P.A. et al. Thoracolumbar Interfascial Plane (TLIP) block verses other paraspinal fascial plane blocks and local infiltration for enhanced pain control after spine surgery: a systematic review. BMC Anesthesiol 24 , 122 (2024). https://doi.org/10.1186/s12871-024-02500-1


Received : 21 January 2024

Accepted : 15 March 2024

Published : 28 March 2024

DOI : https://doi.org/10.1186/s12871-024-02500-1


  • Thoracolumbar interfascial plane block
  • Spine surgery
  • Pain management
  • Systematic review

BMC Anesthesiology

ISSN: 1471-2253


Indian J Dermatol. 2014 Mar–Apr;59(2).

Understanding and Evaluating Systematic Reviews and Meta-analyses

Michael Bigby

From the Department of Dermatology, Harvard Medical School, Beth Israel Deaconess Medical Center, Boston, MA 02215, USA

A systematic review is a summary of existing evidence that answers a specific clinical question, contains a thorough, unbiased search of the relevant literature, explicit criteria for assessing studies and structured presentation of the results. A systematic review that incorporates quantitative pooling of similar studies to produce an overall summary of treatment effects is a meta-analysis. A systematic review should have clear, focused clinical objectives containing four elements expressed through the acronym PICO (Patient, group of patients, or problem, an Intervention, a Comparison intervention and specific Outcomes). Explicit and thorough search of the literature is a pre-requisite of any good systematic review. Reviews should have pre-defined explicit criteria for what studies would be included and the analysis should include only those studies that fit the inclusion criteria. The quality (risk of bias) of the primary studies should be critically appraised. Particularly the role of publication and language bias should be acknowledged and addressed by the review, whenever possible. Structured reporting of the results with quantitative pooling of the data must be attempted, whenever appropriate. The review should include interpretation of the data, including implications for clinical practice and further research. Overall, the current quality of reporting of systematic reviews remains highly variable.

Introduction

A systematic review is a summary of existing evidence that answers a specific clinical question, contains a thorough, unbiased search of the relevant literature, explicit criteria for assessing studies and structured presentation of the results. A systematic review can be distinguished from a narrative review because it will have explicitly stated objectives (the focused clinical question), materials (the relevant medical literature) and methods (the way in which studies are assessed and summarized).[ 1 , 2 ] A systematic review that incorporates quantitative pooling of similar studies to produce an overall summary of treatment effects is a meta-analysis.[ 1 , 2 ] Meta-analysis may allow recognition of important treatment effects by combining the results of small trials that individually might lack the power to consistently demonstrate differences among treatments.[ 1 ]

With over 200 speciality dermatology journals being published, the amount of data published just in the dermatologic literature exceeds our ability to read it.[ 3 ] Therefore, keeping up with the literature by reading journals is an impossible task. Systematic reviews provide a solution to handle information overload for practicing physicians.

Criteria for reporting systematic reviews have been developed by a consensus panel first published as Quality of Reporting of Meta-analyses (QUOROM) and later refined as Preferred Reporting Items for Systematic Reviews and Meta-analyses (PRISMA).[ 4 , 5 ] This detailed, 27-item checklist contains items that should be included and reported in high quality systematic reviews and meta-analyses. The methods for understanding and appraising systematic reviews and meta-analyses presented in this paper are a subset of the PRISMA criteria.

The items that are the essential features of a systematic review include having clear objectives, explicit criteria for study selection, an assessment of the quality of included studies, criteria for which studies can be combined, appropriate analysis and presentation of results and practical conclusions that are based on the evidence evaluated [ Table 1 ]. Meta-analysis is only appropriate if the included studies are conceptually similar. Meta-analyses should only be conducted after a systematic review.[ 1 , 6 ]

Criteria for evaluating a systematic review or the meta-analysis


A Systematic Review Should Have Clear, Focused Clinical Objectives

A focused clinical question for a systematic review should contain the same four elements used to formulate well-built clinical questions for individual studies, namely a Patient, group of patients, or problem, an Intervention, a Comparison intervention and specific Outcomes.[ 7 ] These features can be remembered by the acronym PICO. The interventions and comparison interventions should be adequately described so that what was done can be reproduced in future studies and in practice. For diseases with established effective treatments, comparisons of new treatments or regimens to established treatments provide the most useful information. The outcomes reported should be those that are most relevant to physicians and patients.[ 1 ]

Explicit and Thorough Search of the Literature

A key question to ask of a systematic review is: “Is it unlikely that important, relevant studies were missed?” A sound systematic review can be performed only if most or all of the available data are examined. An explicit and thorough search of the literature should be performed. It should include searching several electronic bibliographic databases, including the Cochrane Controlled Trials Registry (part of the Cochrane Library), Medline, Embase and Literatura Latino Americana em Ciências da Saúde. Bibliographies of retrieved studies, review articles and textbooks should be examined for studies fitting the inclusion criteria. There should be no language restrictions. Additional sources of data include scrutiny of citation lists in retrieved articles, hand-searching for conference reports, prospective trial registers (e.g., clinicaltrials.gov for the USA and clinicaltrialsregister.eu for the European Union) and contacting key researchers, authors and drug companies.[ 1 , 8 ]

Reviews should have Pre-defined Explicit Criteria for what Studies would be Included and the Analysis should Include Only those Studies that Fit the Inclusion Criteria

The overwhelming majority of systematic reviews involve therapy. Randomized, controlled clinical trials should therefore be used for systematic reviews of therapy if they are available, because they are generally less susceptible to selection and information bias in comparison with other study designs.[ 1 , 9 ]

Systematic reviews of diagnostic studies and harmful effects of interventions are increasingly being performed and published. Ideally, diagnostic studies included in systematic reviews should be cohort studies of representative populations. The studies should include a criterion (gold) standard test used to establish a diagnosis that is applied uniformly and blinded to the results of the test(s) being studied.[ 1 , 9 ]

Randomized controlled trials can be included in systematic reviews of studies of adverse effects of interventions if the events are common. For rare adverse effects, case-control studies, post-marketing surveillance studies and case reports are more appropriate.[ 1 , 9 ]

The Quality (Risk of Bias) of the Primary Studies should be Critically Appraised

The risk of bias of included therapeutic trials is assessed using the criteria that are used to evaluate individual randomized controlled clinical trials. The quality criteria commonly used include concealed, random allocation; groups similar in terms of known prognostic factors; equal treatment of groups; blinding of patients, researchers and analyzers of the data to treatment allocation and accounting for all patients entered into the trial when analyzing the results (intention-to-treat design).[ 1 ] Absence of these items has been demonstrated to increase the risk of bias of systematic reviews and to exaggerate the treatment effects in individual studies.[ 10 ]

Structured Reporting of the Results with Quantitative Pooling of the Data, if Appropriate

Systematic reviews that contain studies that have results that are similar in magnitude and direction provide results that are most likely to be true and useful. It may be impossible to draw firm conclusions from systematic reviews in which studies have results of widely different magnitude and direction.[ 1 , 9 ]

Meta-analysis should only be performed to synthesize results from different trials if the trials have conceptual homogeneity.[ 1 , 6 , 9 ] The trials must involve similar patient populations, have used similar treatments and have measured results in a similar fashion at a similar point in time.

Once conceptual homogeneity is established and the decision to combine results is made, there are two main statistical methods by which results are combined: random-effects models (e.g., DerSimonian and Laird) and fixed-effects models (e.g., Peto or Mantel-Haenszel).[ 11 ] Random-effects models assume that the results of the different studies may come from different populations with varying responses to treatment. Fixed-effects models assume that each trial represents a random sample of a single population with a single response to treatment [ Figure 1 ]. In general, random-effects models are more conservative (i.e., random-effects models are less likely to show statistically significant results than fixed-effects models). When the combined studies have statistical homogeneity (i.e., when the studies are reasonably similar in direction, magnitude and variability), random-effects and fixed-effects models give similar results.
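To make the distinction concrete, the following is a minimal sketch of both pooling approaches applied to hypothetical log odds ratios: inverse-variance weighting for the fixed-effect model and the DerSimonian–Laird estimate of between-study variance for the random-effects model. The numbers are invented for illustration and come from no particular study.

```python
import math

# Hypothetical per-study log odds ratios and their standard errors.
log_or = [-0.45, -0.30, -0.60, -0.10, -0.52]
se     = [0.20, 0.25, 0.30, 0.22, 0.28]

# Fixed-effect model: inverse-variance weights.
w_fixed = [1 / s ** 2 for s in se]
pooled_fixed = sum(w * y for w, y in zip(w_fixed, log_or)) / sum(w_fixed)

# Random-effects model: DerSimonian-Laird estimate of between-study variance tau^2.
q = sum(w * (y - pooled_fixed) ** 2 for w, y in zip(w_fixed, log_or))
df = len(log_or) - 1
c = sum(w_fixed) - sum(w ** 2 for w in w_fixed) / sum(w_fixed)
tau2 = max(0.0, (q - df) / c)

w_random = [1 / (s ** 2 + tau2) for s in se]
pooled_random = sum(w * y for w, y in zip(w_random, log_or)) / sum(w_random)

print(f"fixed-effect pooled OR:   {math.exp(pooled_fixed):.2f}")
print(f"random-effects pooled OR: {math.exp(pooled_random):.2f}")
```

Because the random-effects weights include the between-study variance, the pooled random-effects estimate carries more uncertainty, which is why it is described above as the more conservative choice.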


Fixed-effects models (a) assume that each trial represents a random sample (colored curves) of a single population with a single response to treatment. Random-effects models (b) assume that the different trials’ results (colored curves) may come from different populations with varying responses to treatment.

The point estimates and confidence intervals of the individual trials and the synthesis of all trials in meta-analysis are typically displayed graphically in a forest plot [ Figure 2 ].[ 12 ] Results are most commonly expressed as the odds ratio (OR) of the treatment effect (i.e., the odds of achieving a good outcome in the treated group divided by the odds of achieving a good result in the control group) but can be expressed as risk differences (i.e., difference in response rate) or relative risk (probability of achieving a good outcome in the treated group divided by the probability in the control group). An OR of 1 (null) indicates no difference between treatment and control and is usually represented by a vertical line passing through 1 on the x-axis. An OR of greater or less than 1 implies that the treatment is superior or inferior to the control respectively.
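As a small illustration of how an individual trial's OR and its confidence interval are derived, the sketch below uses a hypothetical 2 × 2 table and the usual large-sample standard error of the log odds ratio.

```python
import math

# Hypothetical 2x2 table for one trial: good vs. poor outcome in each group.
good_treated, poor_treated = 30, 20
good_control, poor_control = 18, 32

odds_ratio = (good_treated / poor_treated) / (good_control / poor_control)
se_log_or = math.sqrt(1 / good_treated + 1 / poor_treated +
                      1 / good_control + 1 / poor_control)
ci_low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

print(f"OR = {odds_ratio:.2f}, 95% CI {ci_low:.2f} to {ci_high:.2f}")
# If the interval excludes 1 (the null), the trial is significant at about p = 0.05.
```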


Annotated results of a meta-analysis of six studies, using random-effects models reported as odds ratios using MIX version 1.7 (Bax L, Yu LM, Ikeda N, Tsuruta H, Moons KGM. Development and validation of MIX: comprehensive free software for meta-analysis of causal research data. BMC Med Res Methodol http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1626481/ ). The central graph is a typical forest plot.

The point estimate of an individual trial is indicated by a square whose size is proportional to the size of the trial (i.e., the number of patients analyzed). The precision of the trial is represented by the 95% confidence interval, which appears in forest plots as the brackets surrounding the point estimate. If the 95% confidence interval (brackets) does not cross null (an OR of 1), then the individual trial is statistically significant at the P = 0.05 level.[ 12 ] The summary value for all trials is shown graphically as a parallelogram whose size is proportional to the total number of patients analyzed from all trials. The lateral tips of the parallelogram represent the 95% confidence interval and, if they do not cross null (an OR of 1), the summary value of the meta-analysis is statistically significant at the P = 0.05 level. ORs can be converted to risk differences and numbers needed to treat (NNTs) if the event rate in the control group is known [ Table 2 ].[ 13 , 14 ]
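The conversion referred to in Table 2 can be sketched directly from the definitions of odds and odds ratios; the odds ratio and control event rates below are hypothetical.

```python
# Convert an odds ratio to a risk difference and NNT for a given control event rate.
def nnt_from_or(odds_ratio, control_event_rate):
    control_odds = control_event_rate / (1 - control_event_rate)
    treated_odds = odds_ratio * control_odds
    treated_event_rate = treated_odds / (1 + treated_odds)
    risk_difference = treated_event_rate - control_event_rate
    return 1 / abs(risk_difference)

for cer in (0.1, 0.3, 0.5):   # hypothetical control (baseline) event rates
    print(f"OR 2.0 at a control event rate of {cer:.0%}: NNT ~ {nnt_from_or(2.0, cer):.0f}")
```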

Deriving numbers needed to treat from a treatment's odds ratio and the observed or expected event rates of untreated groups or individuals


The difference in response rate and its reciprocal, the NNT, are the most easily understood measures of the magnitude of the treatment effect.[ 1 , 9 ] The NNT represents the number of patients one would need to treat in order to achieve one additional cure. Whereas the interpretation of NNT might be straightforward within one trial, interpretation of NNT requires some caution within a systematic review, as this statistic is highly sensitive to baseline event rates.[ 1 ]

For example, if a treatment A is 30% more effective than treatment B for clearing psoriasis and 50% of people on treatment B are cleared with therapy, then 65% will clear with treatment A. These results correspond to a rate difference of 15% (65-50) and an NNT of 7 (1/0.15). This difference sounds quite worthwhile clinically. However if the baseline clearance rate for treatment B in another trial or setting is only 30%, the rate difference will be only 9% and the NNT now becomes 11 and if the baseline clearance rate is 10%, then the NNT for treatment A will be 33, which is perhaps less worthwhile.[ 1 ]
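The arithmetic of this example can be reproduced directly; the only inputs are the 30% relative improvement and the assumed baseline clearance rates.

```python
# Reproduces the example above: the NNT for treatment A depends heavily on the
# baseline clearance rate achieved with treatment B.
relative_improvement = 0.30

for baseline in (0.50, 0.30, 0.10):
    treated = baseline * (1 + relative_improvement)
    rate_difference = treated - baseline
    nnt = 1 / rate_difference
    print(f"baseline {baseline:.0%}: rate difference {rate_difference:.0%}, NNT ~ {nnt:.0f}")
# baseline 50%: rate difference 15%, NNT ~ 7
# baseline 30%: rate difference 9%,  NNT ~ 11
# baseline 10%: rate difference 3%,  NNT ~ 33
```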

Therefore, NNT summary measures within a systematic review should be interpreted with caution because “control” or baseline event rates usually differ considerably between studies.[ 1 , 15 ] Instead, a range of NNTs for a range of plausible control event rates that occur in different clinical settings should be given, along with their 95% confidence intervals.[ 1 , 16 ]

The data used in a meta-analysis can be tested for statistical heterogeneity. Methods to test for statistical heterogeneity include the χ2 test and the I2 statistic.[ 11 , 17 ] Tests for statistical heterogeneity are typically of low power, and hence failing to detect statistical heterogeneity does not guarantee clinical homogeneity. When there is evidence of heterogeneity, reasons for heterogeneity between studies – such as different disease subgroups, intervention dosage, or study quality – should be sought.[ 11 , 17 ] Detecting the source of heterogeneity generally requires sub-group analysis, which is only possible when data from many or large trials are available.[ 1 , 9 ]
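A minimal sketch of these heterogeneity statistics, using the same hypothetical effect estimates as in the pooling example above:

```python
# Cochran's Q (the chi-square heterogeneity statistic) and I^2 for a set of
# hypothetical log odds ratios.
log_or = [-0.45, -0.30, -0.60, -0.10, -0.52]
se     = [0.20, 0.25, 0.30, 0.22, 0.28]

weights = [1 / s ** 2 for s in se]
pooled = sum(w * y for w, y in zip(weights, log_or)) / sum(weights)

q = sum(w * (y - pooled) ** 2 for w, y in zip(weights, log_or))
df = len(log_or) - 1
i_squared = max(0.0, (q - df) / q) * 100   # I^2 as a percentage

print(f"Q = {q:.2f} on {df} degrees of freedom, I^2 = {i_squared:.0f}%")
```

A large Q relative to its degrees of freedom, or a high I², would prompt the search for sources of heterogeneity described above.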

In some systematic reviews in which a large number of trials have been performed, it is possible to evaluate whether certain subgroups (e.g. children versus adults) are more likely to benefit than others. Subgroup analysis is rarely possible in dermatology, because few trials are available. Subgroup analyses should always be pre-specified in a systematic review protocol in order to avoid spurious post hoc claims.[ 1 , 9 ]

The Importance of Publication Bias

Publication bias is the tendency for studies that show positive effects to be more likely to be published and easier to find.[ 1 , 18 ] It results from allowing factors other than the quality of the study to influence its acceptability for publication. Factors such as the sample size, the direction and statistical significance of findings, or the investigators’ perception of whether the findings are “interesting” are related to the likelihood of publication.[ 1 , 19 , 20 ] Negative studies with small sample sizes are less likely to be published.[ 1 , 19 , 20 ]

For many diseases, the studies published are dominated by drug company-sponsored trials of new, expensive treatments. Such studies are almost always “positive.”[ 1 , 21 , 22 ] This bias in publication can result in data-driven systematic reviews that draw more attention to those medicines. Systematic reviews that have been sponsored directly or indirectly by industry are also prone to bias through over-inclusion of unpublished “positive” studies that are kept “on file” by that company and by not including or not finishing registered trials whose results are negative.[ 1 , 23 ] The creation of study registers (e.g. http://clinicaltrials.gov ) and advance publication of research designs have been proposed as ways to prevent publication bias.[ 1 , 24 , 25 ] Many dermatology journals now require all their published trials to have been registered beforehand, but this policy is not well policed.[ 1 ]

Language bias is the tendency for studies that are “positive” to be published in an English-language journal and be more quickly found than inconclusive or negative studies.[ 1 , 26 ] A thorough systematic review should therefore not restrict itself to journals published in English.[ 1 ]

Publication bias can be detected by using a simple graphical test (the funnel plot), by calculating the fail-safe N, by Begg's rank correlation method, by the Egger regression method, and by other approaches.[ 1 , 9 , 11 , 27 , 28 ] These techniques are of limited value when fewer than 10 randomized controlled trials are included. Testing for publication bias is often not possible in systematic reviews of skin diseases, due to the limited number and sizes of trials.[ 1 , 9 ]
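For illustration, the Egger regression approach mentioned above amounts to a simple linear regression of each study's standardized effect on its precision; the effect sizes below are hypothetical, and, as noted, the test carries little weight with so few trials.

```python
# Sketch of the Egger regression test for funnel-plot asymmetry:
# regress each study's standardized effect (effect / SE) on its precision (1 / SE);
# an intercept far from zero suggests small-study or publication bias.
log_or = [-0.45, -0.30, -0.60, -0.10, -0.52, -0.75]
se     = [0.20, 0.25, 0.30, 0.22, 0.28, 0.40]

y = [eff / s for eff, s in zip(log_or, se)]   # standardized effects
x = [1 / s for s in se]                        # precisions

n = len(x)
mean_x, mean_y = sum(x) / n, sum(y) / n
slope = (sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
         / sum((xi - mean_x) ** 2 for xi in x))
intercept = mean_y - slope * mean_x

print(f"Egger intercept = {intercept:.2f} (values far from zero suggest asymmetry)")
```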

Question-driven systematic reviews answer the clinical questions of most concern to practitioners. In many cases, studies that are of most relevance to doctors and patients have not been done in the field of dermatology, due to inadequate sources of independent funding.[ 1 , 9 ]

The Quality of Reporting of Systematic Reviews

The quality of reporting of systematic reviews is highly variable.[ 1 ] One cross-sectional study of 300 systematic reviews published in Medline showed that over 90% were reported in specialty journals. Funding sources were not reported in 40% of reviews. Only two-thirds reported the range of years that the literature was searched for trials. Around a third of reviews failed to provide a quality assessment of the included studies and only half of the reviews included the term “systematic review” or “meta-analysis” in the title.[ 1 , 29 ]

The Review should Include Interpretation of the Data, Including Implications for Clinical Practice and Further Research

The conclusions in the discussion section of a systematic review should closely reflect the data that have been presented within that review. Clinical recommendations can be made when conclusive evidence is found, analyzed and presented. The authors should make it clear which of the treatment recommendations are based on the review data and which reflect their own judgments.[ 1 , 9 ]

Many reviews in dermatology, however, find little evidence to address the questions posed. The review may still be of value even if it lacks conclusive evidence, especially if the question addressed is an important one.[ 1 , 30 ] For example, the systematic review may provide the authors with the opportunity to call for primary research in an area and to make recommendations on study design and outcomes that might help future researchers.[ 1 , 31 ]

Source of Support: Nil

Conflict of Interest: Nil.

Clinical and economic burden of acute otitis media caused by Streptococcus pneumoniae in European children, after widespread use of PCVs-A systematic literature review of published evidence

Affiliations.

  • 1 Quantify Research, Stockholm, Sweden.
  • 2 Center for Observational and Real-World Evidence, MSD, Madrid, Spain.
  • 3 Center for Observational and Real-World Evidence, MSD, Athens, Greece.
  • PMID: 38564583
  • DOI: 10.1371/journal.pone.0297098

Background: Acute otitis media (AOM) is a common childhood disease frequently caused by Streptococcus pneumoniae. Pneumococcal conjugate vaccines (PCV7, PCV10, PCV13) can reduce the risk of AOM but may also shift AOM etiology and serotype distribution. The aim of this study was to review estimates from published literature of the burden of AOM in Europe after widespread use of PCVs over the past 10 years, focusing on incidence, etiology, serotype distribution and antibiotic resistance of Streptococcus pneumoniae, and economic burden.

Methods: This systematic review included published literature from 31 European countries, for children aged ≤5 years, published after 2011. Searches were conducted using PubMed, Embase, Google, and three disease conference websites. Risk of bias was assessed with ISPOR-AMCP-NPC, ECOBIAS or ROBIS, depending on the type of study.

Results: In total, 107 relevant records were identified, which revealed wide variation in study methodology and reporting, thus limiting comparisons across outcomes. No homogeneous trends were identified in incidence rates across countries, or in detection of S. pneumoniae as a cause of AOM over time. There were indications of a reduction in hospitalization rates (decreases of 24.5–38.8 percentage points, depending on country, PCV type and time since PCV introduction) and in antibiotic resistance (decreases of 14–24%, depending on country) following the widespread use of PCVs over time. The last two trends imply a potential decrease in economic burden, though this could not be confirmed with the identified cost data. There was also evidence of a shift in serotype distribution towards non-vaccine serotypes in all of the countries where non-PCV serotype data were available, as well as limited data suggesting increased antibiotic resistance within non-vaccine serotypes.

Conclusions: Though some factors point to a reduction in AOM burden in Europe, the burden still remains high, residual burden from uncovered serotypes is present and it is difficult to provide comprehensive, accurate and up-to-date estimates of said burden from the published literature. This could be improved by standardised methodology, reporting and wider use of surveillance systems.

Copyright: © 2024 Ricci Conesa et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.


COMMENTS

  1. Research Guides: Systematic Reviews: Levels of Evidence

    The hierarchy of evidence (also known as the evidence-based pyramid) is depicted as a triangular representation of the levels of evidence with the strongest evidence at the top which progresses down through evidence with decreasing strength. At the top of the pyramid are research syntheses, such as Meta-Analyses and Systematic Reviews, the ...

  2. The Levels of Evidence and their role in Evidence-Based Medicine

    None of the studies reached statistical significance. Therefore, higher level evidence from cohort studies does not provide evidence of any risk of lymphoma. Finally, a systematic review was performed that combined the evidence from the retrospective cohorts. 27 The results found an overall standardized incidence ratio of 0.89 (95% CI 0.67-1. ...

  3. Systematic Reviews: Levels of evidence and study design

    Secondary sources are not evidence, but rather provide a commentary on and discussion of evidence. e.g. systematic review. Primary sources contain the original data and analysis from research studies. No outside evaluation or interpretation is provided. An example of a primary literature source is a peer-reviewed research article. Other primary ...

  4. Levels of evidence

    Understand the different levels of evidence. Meta Analysis - systematic review that uses quantitative methods to synthesize and summarize the results. Systematic Review - summary of the medical literature that uses explicit methods to perform a comprehensive literature search and critical appraisal of individual studies and that uses ...

  5. Levels of Evidence

    Level I: Evidence from a systematic review of all relevant randomized controlled trials. Level II: Evidence from a meta-analysis of all relevant randomized controlled trials. Level III: Evidence from evidence summaries developed from systematic reviews. Level IV: Evidence from guidelines developed from systematic reviews.

  6. Systematic Reviews and Meta-analysis: Understanding the Best Evidence

    A systematic review is a summary of the medical literature that uses explicit and reproducible methods to systematically search, critically appraise, and synthesize on a specific issue. ... and present balanced important summary of findings with due consideration of any flaws in the evidence. Systematic review and meta-analysis is a way of ...

  7. Introduction to systematic review and meta-analysis

    A systematic review collects all possible studies related to a given topic and design, ... (RCTs), which have a high level of evidence (Fig. 1). Since 1999, various papers have presented guidelines for reporting meta-analyses of RCTs. ... When performing a systematic literature review or meta-analysis, if the quality of studies is not properly ...

  8. How to Do a Systematic Review: A Best Practice Guide for Conducting and

    Systematic reviews are characterized by a methodical and replicable methodology and presentation. They involve a comprehensive search to locate all relevant published and unpublished work on a subject; a systematic integration of search results; and a critique of the extent, nature, and quality of evidence in relation to a particular research question.

  9. Guidance on Conducting a Systematic Literature Review

    Literature reviews establish the foundation of academic inquires. However, in the planning field, we lack rigorous systematic reviews. In this article, through a systematic search on the methodology of literature review, we categorize a typology of literature reviews, discuss steps in conducting a systematic literature review, and provide suggestions on how to enhance rigor in literature ...

  10. What Level of Evidence Is a Systematic Review

    In most evidence hierarchies, well-conducted systematic reviews and meta-analyses are at the top. As such, in the hierarchy of evidence, systematic reviews including meta-analysis of methodologically sound RCTs with consistent results, are considered the highest level of evidence [5]. This is due to the fact that systematic reviews not only ...

  11. PDF Evidence Pyramid

    Level 1: Systematic Reviews & Meta-analysis of RCTs; Evidence-based Clinical Practice Guidelines. Level 2: One or more RCTs. Level 3: Controlled Trials (no randomization) Level 4: Case-control or Cohort study. Level 5: Systematic Review of Descriptive and Qualitative studies. Level 6: Single Descriptive or Qualitative Study.

  12. Evidence-Based Research: Levels of Evidence Pyramid

    One way to organize the different types of evidence involved in evidence-based practice research is the levels of evidence pyramid. The pyramid includes a variety of evidence types and levels. Filtered resources: pre-evaluated in some way. systematic reviews. critically-appraised topics. critically-appraised individual articles.

  13. Levels of Evidence

    When searching for evidence-based information, one should select the highest level of evidence possible--systematic reviews or meta-analysis. Systematic reviews, meta-analysis, and critically-appraised topics/articles have all gone through an evaluation process: they have been "filtered".

  14. Levels of Evidence

    Qualitative study or systematic review, with or without meta-analysis. Level IV Opinion of respected authorities and/or nationally recognized expert committees/consensus panels based on scientific evidence. Includes: Clinical practice guidelines. Consensus panels. Level V Based on experiential and non-research evidence. Includes: Literature reviews

  15. Easy guide to conducting a systematic review

    A systematic review is a type of study that synthesises research that has been conducted on a particular topic. Systematic reviews are considered to provide the highest level of evidence on the hierarchy of evidence pyramid. Systematic reviews are conducted following rigorous research methodology. To minimise bias, systematic reviews utilise a ...

  16. Reviews: From Systematic to Narrative: Introduction

    Most reviews fall into the following types: literature review, narrative review, integrative review, evidenced based review, meta-analysis and systematic review. ... Presented below are three pyramids that show the different levels of evidence sources and explanations of each level. Of course, the best are at the top and as the pyramids ...

  17. Levels of Evidence

    Evidence from well-designed case-control or cohort studies.
    Level 5. Evidence from systematic reviews of descriptive and qualitative studies (meta-synthesis).
    Level 6. Evidence from a single descriptive or qualitative study, EBP, EBQI, and QI projects.
    Level 7. Evidence from the opinion of authorities and/or reports of expert committees, reports ...

  18. LibGuides: Nursing

    Level C: Qualitative, descriptive, or correlational studies, integrative or systematic reviews, or RCTs with inconsistent results.
    Level D: Peer-reviewed professional organizational standards, with clinical studies to support recommendations.
    Level E: Theory-based evidence from expert opinion or multiple case reports.

  19. Levels of Evidence

    Meta-Analysis: A systematic review that uses quantitative methods to summarize the results. (Level 1)
    Systematic Review: A comprehensive review in which the authors have systematically searched for, appraised, and summarized all of the medical literature on a specific topic. (Level 1)
    Randomized Controlled Trials: RCTs randomly allocate patients to an experimental group and a control group.

  20. Definition of levels of evidence (LoE) and overall strength of evidence

    Level of evidence ratings for Cochrane reviews and other systematic reviews are assigned a baseline score of HIGH if RCTs were used and LOW if observational studies were used. The rating can then be upgraded or downgraded based on adherence to the core criteria for methods and for the qualitative and quantitative analyses of systematic reviews (there is a ...
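
    The baseline-then-adjust logic described above can be summarized in a few lines of code. The sketch below is purely illustrative: it assumes the familiar four-category GRADE-style scale (High, Moderate, Low, Very low), and the function name and parameters are hypothetical rather than taken from the cited resource.

```python
# Illustrative sketch of a baseline-then-adjust evidence rating, loosely modelled
# on the approach described above; hypothetical helper, not code from any cited source.
GRADES = ["Very low", "Low", "Moderate", "High"]  # ordered weakest -> strongest

def rate_evidence(based_on_rcts: bool, upgrades: int = 0, downgrades: int = 0) -> str:
    """Start at HIGH for RCT-based reviews and LOW for observational ones,
    then move one step up or down per upgrade/downgrade criterion met."""
    start = GRADES.index("High") if based_on_rcts else GRADES.index("Low")
    position = max(0, min(start + upgrades - downgrades, len(GRADES) - 1))  # clamp to the scale
    return GRADES[position]

# Example: an RCT-based review downgraded once (e.g., for inconsistent results) -> "Moderate"
print(rate_evidence(based_on_rcts=True, downgrades=1))
```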

  21. Comparing Integrative and Systematic Literature Reviews

    A systematic literature review is commonly used in social sciences and organization studies as it is characterized by "being methodical, comprehensive, transparent, and replicable" (Siddaway et al., 2019, p. 751) so that bias can be minimized (Briner & Walshe, 2014). Conducting systematic reviews means applying the same level of rigor to the process of reviewing the literature as applied to ...

  22. Research Guides: Evidence-Based Practice: Levels of Evidence

    References cited on this guide include: Keating, J., & Schemitsch, E. (2006). Grading the evidence: Levels of evidence and grades of recommendation. Injury, 37(4), 321-327 (ISSN 0020-1383); and Levels of Evidence Ratings in the Urological Literature: An Assessment of Interobserver Agreement.

  23. Systematic review and evidence gap assessment of the clinical, quality

    This systematic literature review (SLR) characterized the clinical, health-related quality of life (HRQoL), and economic burden associated with α-thalassemia and assessed evidence gaps, based on available literature over the past decade to reflect contemporary understanding of the disease during a period when conventional management options ...

  24. Barriers and enablers to the implementation of patient-reported outcome

    Review design and registration. An umbrella review of systematic and scoping reviews will be conducted following the guidelines of the Joanna Briggs Institute (JBI) [35, 36]. The umbrella review is a form of evidence synthesis that aims to address the challenge of collating, assessing, and synthesizing evidence from multiple reviews on a specific topic.

  25. Effect of level of sedation on outcomes in critically ill adult

    In this systematic review with meta-analyses and trial sequential analysis of data from 15 randomised clinical trials and 4352 participants, providing moderate-level evidence, we showed that level of sedation did not seem to affect the risk of death in critically ill adults, based on studies conducted up to 13 June 2023.

  26. Top-funded digital health companies offering lifestyle interventions

    Corresponding published clinical evidence was collected through a systematic literature review and analyzed regarding study purpose, results, quality of results, and level of clinical evidence. Findings: The ten top-funded companies offering DDLS received a total funding of EUR 128.52 million, of which three companies collected more than 75%.

  27. A slight degree of osteoarthritis appears to be present after anterior

    Purpose. The aim of the present systematic review was to quantitatively synthesize the best literature evidence regarding osteoarthritis developing after anterior cruciate ligament reconstruction (ACLR), including only studies with a follow-up duration of at least 20 years.

  28. Thoracolumbar Interfascial Plane (TLIP) block versus other paraspinal

    The results of our systematic review provide evidence of the effectiveness of TLIP blocks in improving postoperative pain control. TLIP blocks were associated with improved outcomes after surgery, including lower pain scores and decreased analgesic requirements, compared with patients who received either no block or wound infiltration alone.

  29. Understanding and Evaluating Systematic Reviews and Meta-analyses

    A systematic review is a summary of existing evidence that answers a specific clinical question, contains a thorough, unbiased search of the relevant literature, uses explicit criteria for assessing studies, and presents the results in a structured way.

  30. Clinical and economic burden of acute otitis media caused by ...

    Methods: This systematic review included literature from 31 European countries, for children aged ≤5 years, published after 2011. Searches were conducted using PubMed, Embase, Google, and three disease conference websites.