
Chapter V: Overviews of Reviews

Michelle Pollock, Ricardo M Fernandes, Lorne A Becker, Dawid Pieper, Lisa Hartling

Key Points:

  • Cochrane Overviews of Reviews (Overviews) use explicit and systematic methods to search for and identify multiple systematic reviews on related research questions in the same topic area for the purpose of extracting and analysing their results across important outcomes.
  • Overviews are similar to reviews of interventions, but the unit of searching, inclusion and data analysis is the systematic review rather than the primary study.
  • Overviews can describe the current body of systematic review evidence on a topic of interest, or they can address a new review question that wasn’t a focus in the included systematic reviews.
  • Overviews can present outcome data exactly as they appear in the included systematic reviews, or they can re-analyse the systematic review outcome data in a way that differs from the analyses conducted in the systematic reviews.
  • Prior to conducting an Overview, authors should ensure that the Overview format is the best fit for their review question and that they are prepared to address diverse methodological challenges they are likely to encounter.

This chapter should be cited as: Pollock M, Fernandes RM, Becker LA, Pieper D, Hartling L. Chapter V: Overviews of Reviews. In: Higgins JPT, Thomas J, Chandler J, Cumpston M, Li T, Page MJ, Welch VA (editors). Cochrane Handbook for Systematic Reviews of Interventions version 6.4 (updated August 2023). Cochrane, 2023. Available from www.training.cochrane.org/handbook .

V.1 Introduction

Systematic reviews became commonplace partly because of the rapidly increasing number of primary research studies. In turn, the rapidly increasing number of systematic reviews has led many to perform reviews of these reviews, variously known as ‘overviews’, ‘umbrella reviews’, ‘reviews of reviews’ and ‘meta-reviews’, and attempts have been made to formalize the methodology for these pieces of work. Overviews are an increasingly popular form of evidence synthesis, as they aim to provide ‘user-friendly’ summaries of the breadth of research relevant to a decision without decision makers needing to assimilate the results of multiple systematic reviews themselves (Hartling et al 2012). Overviews are often broader in scope than any individual systematic review, meaning that they can examine a broad range of treatment options in ways that align with the choices decision makers often face. Compared with the time and resources required to address similar questions through a synthesis of primary studies, Overviews can also be conducted more quickly (Caird et al 2015).

In this chapter we describe the particular type of review of reviews that appears in the Cochrane Database of Systematic Reviews (CDSR): the Cochrane Overview. The chapter begins by discussing the definition and characteristics of Cochrane Overviews. It then presents information designed to help Cochrane authors determine whether the Overview format is a good fit for their research question and the nature of the available research evidence. The bulk of the chapter provides methodological guidance for conducting each stage of the Overview process. We conclude by discussing format and reporting guidelines for Cochrane Overviews, and guidance for updating Overviews.

V.2 What is a Cochrane Overview of Reviews?

V.2.1 Definition of a Cochrane Overview

Cochrane Overviews of Reviews (Cochrane Overviews) use explicit and systematic methods to search for and identify multiple systematic reviews on related research questions in the same topic area for the purpose of extracting and analysing their results across important outcomes. Thus, the unit of searching, inclusion and data analysis is the systematic review. Cochrane Overviews are typically conducted to answer questions related to the prevention or treatment of various disorders (i.e. questions about healthcare interventions). They can search for and include Cochrane Reviews of interventions and systematic reviews published outside of Cochrane (i.e. non-Cochrane systematic reviews). The target audience for Cochrane Overviews is healthcare decision makers; this includes healthcare providers, policy makers, researchers, funding agencies, informed patients and caregivers, and/or other informed consumers (Cochrane Editorial Unit 2015).

V.2.2 Components of a Cochrane Overview

Cochrane Overviews should contain five components (modified from Pollock et al (2016)).

  • They should contain a clearly formulated objective designed to answer a specific research question, typically about a healthcare intervention.
  • They should intend to search for and include only systematic reviews (with or without meta-analyses).
  • They should use explicit and reproducible methods to identify multiple systematic reviews that meet the Overview’s inclusion criteria and assess the quality/risk of bias of these systematic reviews.
  • They should intend to collect, analyse and present the following data from included systematic reviews: descriptive characteristics of the systematic reviews and their included primary studies; risk of bias of primary studies; quantitative outcome data (i.e. narratively reported study-level data and/or meta-analysed data); and certainty of evidence for pre-defined, clinically important outcomes (i.e. GRADE assessments).
  • They should discuss findings as they relate to the purpose, objective(s) and specific research question(s) of the overview, including: a summary of main results, overall completeness and applicability of evidence, quality of evidence, potential biases in the overview process, and agreements and/or disagreements with other studies and/or reviews.

See Section V.4 for additional detail about each of these components.

V.2.3 Types of research questions addressed by a Cochrane Overview

Cochrane Overviews often address research questions that are broader in scope than those examined in individual systematic reviews. Cochrane Overviews can address five different types of questions related to healthcare interventions. Specifically, they can summarize evidence from two or more systematic reviews:

  • of different interventions for the same condition or population;
  • that address different approaches to applying the same intervention for the same condition or population;
  • of the same intervention for different conditions or populations;
  • about adverse effects of an intervention for one or more conditions or populations; or
  • of the same intervention for the same condition or population, where different outcomes or time points are addressed in different systematic reviews.

Table V.2.a gives examples of, and additional information about, these five types of questions. Note that a Cochrane Overview may restrict its attention to a subset of the evidence included in the systematic reviews identified. For example, an Overview question may be restricted to children only, and some relevant systematic reviews may include primary studies conducted in both children and adults. In this case, the Overview authors may choose to assess each systematic review’s primary studies against the Overview’s inclusion criteria and include only those primary studies (or subsets of studies) that were conducted in children.

Table V.2.a Types of research questions about healthcare interventions that are suitable for publication as a Cochrane Overview*

Type of question: Examine evidence from two or more systematic reviews of different interventions for the same condition or population.†

Examples: Pain management for women in labour: an overview of systematic reviews (Jones 2012); An overview of reviews evaluating the effectiveness of financial incentives in changing healthcare professional behaviours and patient outcomes (Flodgren et al 2011); Interventions for fatigue and weight loss in adults with advanced progressive illness (Payne et al 2012).

Notes: This is the most common type of question addressed by Cochrane Overviews.

Type of question: Examine evidence from two or more systematic reviews that address different approaches to applying the same intervention for the same condition or population.

Example: Sumatriptan (all routes of administration) for acute migraine attacks in adults - overview of Cochrane reviews (Derry et al 2014).

Notes: This question is suitable for publication as a Cochrane Overview. This type of question may be most applicable to drug interventions, where differences in dosage, timing, frequency, route of administration, duration, or number of courses administered are addressed in separate systematic reviews.

Type of question: Examine evidence from two or more systematic reviews of the same intervention for different conditions or populations.

Examples: Interventions to improve safe and effective medicines use by consumers: an overview of systematic reviews (Ryan et al 2014); Neuraxial blockade for the prevention of postoperative mortality and major morbidity: an overview of Cochrane systematic reviews (Guay et al 2014).

Notes: This question is suitable for publication as a Cochrane Overview. This type of question examines the efficacy and/or safety of the same or similar interventions across different conditions or populations.

Type of question: Examine evidence about adverse effects from two or more systematic reviews of the use of an intervention for one or more conditions or populations.

Examples: Safety of regular formoterol or salmeterol in children with asthma: an overview of Cochrane reviews (Cates et al 2012); Adverse events associated with single-dose oral analgesics for acute postoperative pain in adults - an overview of Cochrane reviews (Moore et al 2014).

Notes: This question is uncommon but suitable for publication as a Cochrane Overview. This type of question may help identify and characterize the occurrence of rare events.

Type of question: Examine evidence from two or more systematic reviews of the same intervention for the same condition or population, where different outcomes or time points are addressed in different systematic reviews.

Example: The CDSR does not currently contain an example of this type of Overview.

Notes: Cochrane Reviews of interventions should include all outcomes that are important to decision makers. However, different outcomes may sometimes be reported in different systematic reviews. Thus, this type of question is uncommon but suitable for publication as a Cochrane Overview.

* Overview authors may find other uses for Overviews that are different from those described above.

† Authors must be careful to avoid making inappropriate ‘informal’ indirect comparisons across the different interventions (see Section V.4.1).

V.3 When should a Cochrane Overview of Reviews be conducted?

V.3.1 When not to conduct a Cochrane Overview

There are several instances where authors should not conduct a Cochrane Overview. Overviews do not aim to:

  • repeat or update the searches or eligibility assessment of the included systematic reviews;
  • conduct a study-level search for primary studies not included in any systematic review;
  • conduct a new systematic review within the Overview;
  • use systematic reviews as a starting point to locate relevant studies with the intent of then extracting and analysing data from the primary studies (this would be considered a systematic review, or an update of a systematic review, and not an Overview);
  • search for and include narrative reviews, textbook chapters, government reports, clinical practice guidelines, or any other summary reports that do not meet their pre-defined definition of a systematic review;
  • extract and present just the conclusions of the included systematic reviews (instead, actual outcome data – narratively reported study-level data and/or meta-analysed data – should be extracted and analysed, and Overview authors are encouraged to interpret these outcome data themselves, in light of the Overview’s research questions and objectives);
  • present detailed outcome data for primary studies not included in any included systematic review; or
  • conduct network meta-analyses (see Section V.3.2 ).

V.3.2 Choosing between a Cochrane Overview and a Cochrane Review of interventions

The primary reason for conducting Cochrane Overviews is that using systematic reviews as the unit of searching, inclusion, and data analysis allows authors to address research questions that are broader in scope than those examined in individual systematic reviews (also see Section V.2.3 ). However, some research questions that can be addressed by conducting an Overview may also be addressed by conducting a systematic review of primary studies. Reviewing the primary study literature may be preferred in these cases because more information will likely be available. However, the resources required to conduct a full systematic review of all relevant primary studies may not always be available, especially when time is short and the research questions are broad. Thus, a second reason for conducting a Cochrane Overview is that they may be associated with time and resource savings, since the component systematic reviews have already been conducted. A third reason for conducting a Cochrane Overview is in cases where it is important to understand the diversity present in the extant systematic review literature.

Alternatively, it is preferable to conduct a Cochrane Review of interventions if authors anticipate the need to conduct searches for primary studies (i.e. many relevant primary studies are not included in systematic reviews) or to extract data directly from primary studies (i.e. the anticipated analyses cannot be conducted on the basis of information provided in the systematic reviews). Using primary studies as the unit of searching, inclusion and data analysis allows authors to extract all data of interest directly from the primary studies and to report these data in a standardized way. It is also preferable to conduct a Cochrane Review of interventions if authors wish to conduct network meta-analyses, which allow authors to rank order interventions and determine which work ‘best’. The rationale is explained in detail in Chapter 11 .

In order to decide whether or not conducting a Cochrane Overview is appropriate for the research question(s) of interest, authors of Cochrane Overviews will require some knowledge of the existing systematic reviews. Therefore, authors should conduct a preliminary search of the Cochrane Database of Systematic Reviews (CDSR) to gain a general idea of the amount and nature of the available Cochrane evidence. Authors with content expertise may already possess this knowledge. Overview authors should recognize that there will be some heterogeneity in the included systematic reviews and should consider whether or not the extent and nature of the heterogeneity precludes the utility of the Overview. Authors may find it helpful to consider whether:

  • the systematic reviews are, or are likely to be, sufficiently up-to-date;
  • the systematic reviews are, or are likely to be, sufficiently homogeneous in terms of their populations, interventions, comparators, and/or outcome measures (i.e. such that it would make sense from the end-user’s perspective that the individual systematic reviews were presented in a single product);
  • the systematic reviews are, or are likely to be, sufficiently homogeneous in terms of what and how outcome data are presented (such that they provide a useful resource for healthcare decision making);
  • the amount and type of outcome data presented is, or is likely to be, sufficient to inform the Overview’s research question and/or objectives; and
  • the systematic reviews are, or are likely to be, of sufficiently low risk of bias or high methodological quality (i.e. authors should have reasonable confidence that results can be believed or that estimates of effect are near the true values for outcomes, see Chapter 7, Section 7.1.1 ).

V.4 Methods for conducting a Cochrane Overview of Reviews

Overview methods evolved from systematic review methods, which have well-established standards of conduct to ensure rigour, validity and reliability of results. However, because the unit of searching, inclusion and data extraction is the systematic review (and not the primary study), methods for conducting Overviews and systematic reviews necessarily differ. The key differences between the methods used to conduct these two types of knowledge syntheses are summarized in Table V.4.a . Methods for conducting Cochrane Overviews are described in detail in the sections below. When conducting an Overview, it is highly desirable that screening and inclusion, methodological quality/risk of bias assessments, and data extraction be conducted independently by two reviewers, with a process in place for resolving discrepancies. This is in line with the current methodological expectations for Cochrane Reviews of interventions (see Chapter 5, Section 5.6 ). All methods for conducting the Overview should be considered in advance and detailed in a protocol.

Table V.4.a Comparison of methods between Cochrane Overviews of Reviews and Cochrane Reviews of interventions

 

Objective

  • Cochrane Reviews of interventions: To summarize evidence from primary studies examining effects of interventions.
  • Cochrane Overviews of Reviews: To summarize evidence from multiple systematic reviews examining effects of interventions.

Selection criteria

  • Cochrane Reviews of interventions: Describe clinical and methodological inclusion and exclusion criteria. The study design of interest is the primary study (e.g. randomized trial).
  • Cochrane Overviews of Reviews: Describe clinical and methodological inclusion and exclusion criteria. The study design of interest is the systematic review.

Search

  • Cochrane Reviews of interventions: Comprehensive search for relevant primary studies.
  • Cochrane Overviews of Reviews: Comprehensive search for relevant systematic reviews.

Inclusion

  • Cochrane Reviews of interventions: Include all primary studies that fulfil eligibility criteria.
  • Cochrane Overviews of Reviews: Include all systematic reviews that fulfil eligibility criteria.

Assessment of methodological quality/risk of bias*

  • Cochrane Reviews of interventions: Assess risk of bias of included primary studies.
  • Cochrane Overviews of Reviews: Assess methodological quality/risk of bias of included systematic reviews. Also report risk of bias assessments for primary studies contained within included systematic reviews.

Data collection

  • Cochrane Reviews of interventions: From included primary studies.
  • Cochrane Overviews of Reviews: From included systematic reviews.

Analysis

  • Cochrane Reviews of interventions: Synthesize results across included primary studies for each important outcome using meta-analyses, network meta-analyses, and/or narrative summaries.
  • Cochrane Overviews of Reviews: Summarize and/or re-analyse outcome data that are contained within included systematic reviews.

Certainty of evidence (e.g. GRADE)

  • Cochrane Reviews of interventions: Assess certainty of evidence across analyses of primary studies for each important outcome.
  • Cochrane Overviews of Reviews: Report the assessments presented in included systematic reviews, if possible. Otherwise, consider assessing certainty of evidence using data reported in the systematic reviews.

* Methodological quality refers to critical appraisal of a study or systematic review and the extent to which study authors conducted and reported their research to the highest possible standard. Bias refers to systematic deviation of results or inferences from the truth. These deviations can occur as a result of flaws in design, conduct, analysis, and/or reporting. It is not always possible to know whether an estimate is biased even if there is a flaw in the study; further, it is difficult to quantify and at times to predict the direction of bias. For these reasons, reviewers refer to ‘risk of bias’ (Chapter 7, Section 7.2).

V.4.1 A note regarding important methodological limitations of Cochrane Overviews

Although Overviews often present evidence from two or more systematic reviews of different interventions for the same condition or population, they should rarely be used to draw inferences about the comparative effectiveness of multiple interventions . This means that they should not directly compare interventions that have been examined in different systematic reviews with the intent of determining which intervention works ‘best’ or which intervention is ‘safest’. For example, imagine an Overview that includes two systematic reviews. Systematic review 1 includes studies comparing intervention A with intervention B, and finds that A is more effective than B. Systematic review 2 includes studies comparing intervention B with intervention C, and finds that B is more effective than C. It would be tempting for the Overview authors to conclude that A was more effective than C. However, this would require an indirect comparison, a statistical procedure that compares two interventions (i.e. A vs. C) via a common comparator (i.e. B) despite the fact that the two interventions have never been compared directly against each other within a primary study (Glenny et al 2005).

We discourage indirect comparisons in Overviews. This is especially relevant for authors conducting Overviews that examine multiple interventions for the same condition or population; it is also relevant for authors regardless of whether the systematic reviews included in the Overview present their data using meta-analysis or simple narrative summaries of results. The reason is that the assumption underlying indirect comparison – the transitivity assumption – can rarely be assessed using only the information provided in the systematic reviews (see Section V.3.2 ).

Overviews that examine multiple interventions for the same condition or population will often juxtapose data from different systematic reviews. Sometimes, these data appear in the same table or figure. Overviews that present data in this way can inadvertently encourage readers to make their own indirect comparisons. In cases where Overviews may facilitate inappropriate informal indirect comparisons, Overview authors must avoid ‘comparing’ across systematic reviews. This can be achieved in the following ways:

  • Use properly worded research question(s) and objectives (e.g. ‘Which interventions are effective in treating disorder X?’ as opposed to ‘Which intervention works best for treating disorder X?’).
  • Interpret results and conclusions appropriately (e.g. ‘Compared to placebo, interventions A and D seem to be effective in treating disorder X, while interventions B and C do not seem to be effective’).
  • Provide a clear explanation of the dangers associated with informal indirect comparisons to readers (e.g. ‘It may be tempting to conclude that intervention A is more effective than intervention C since the effect estimate for A versus placebo was twice as large as that for C versus placebo; however, the studies assessing both interventions differed in a number of ways, and we strongly urge readers against making this type of inappropriate informal indirect comparison’). Similar caveats can also be provided in data tables and figures.

V.4.2 Defining the research question(s)

Overview authors should begin by clearly defining the scope of the Overview. Overviews are typically broader in scope than reviews of interventions, but their research question(s) should still be specific, focused, and well-defined. An Overview’s research question should include a clear description of the populations, interventions, comparators, outcome measures, time periods, and settings. For Overviews that examine different interventions for the same condition or population , the primary objective of the Overview may be stated in the following form: ‘To summarize systematic reviews that assess the effects of [interventions or comparisons] for [health problem] for/in [types of people, disease or problem, and setting]’.

Because Overviews are typically broad in scope, it may be necessary to restrict the research question(s) if there is substantial variation in the questions posed by the different systematic reviews. For example, authors may wish to restrict to a single disorder (instead of multiple disorders) or to specific participant characteristics (such as a specific age group, disease severity, setting, or type of co-morbidity). When deciding whether and how to restrict the scope, authors must keep in mind the perspective of the decision maker to ensure that the research question(s) remain clinically appropriate and useful. There should be adequate justification for any restrictions.

Overviews are constrained by the eligibility criteria of their included systematic reviews. It is therefore possible that Overview authors will need to modify or refine their research question(s) (and perhaps also their methodology) as their knowledge of the underlying systematic reviews evolves. Authors should avoid introducing bias when making post-hoc modifications, and all modifications should be documented with a rationale (see Chapter 1 ).

V.4.3 Developing criteria for including systematic reviews

The research question(s) specified in Section V.4.2 should be used to directly inform the inclusion criteria. The inclusion criteria should include a clear description of all relevant characteristics (i.e. populations, interventions, comparators, outcome measures, time periods, settings) as well as information about the study design that will be included (i.e. systematic reviews). Chapter 3 provides useful advice about developing criteria for including studies. Though it is written for authors of reviews of interventions, much of the guidance is relevant to Overview authors as well.

The following three considerations also apply when including systematic reviews:

First, Overview authors must clearly specify the criteria they will use to determine whether publications are considered ‘systematic reviews’. Chapter 1, Section 1.1 provides a definition of a systematic review; however, Overview authors will need to add specific criteria to this definition to guide inclusion decisions (e.g. what counts as an ‘explicit, reproducible methodology’, a comprehensive search, or acceptable methods for assessing the validity of included studies). While Cochrane Reviews of interventions adhere to the Cochrane definition of a systematic review, non-Cochrane publications vary in their use of the term ‘systematic review’. Not every non-Cochrane publication labelled as a ‘systematic review’ will meet a given definition of a systematic review, while some publications that are not labelled as ‘systematic reviews’ might meet it. Therefore, pre-established criteria should take priority when making decisions around inclusion.

Second, Overview authors must consider whether to include systematic reviews of randomized controlled trials only, or also systematic reviews that include other study designs such as observational studies. Current guidance does not recommend combining data from randomized trials and observational studies (Shea et al 2017); therefore, if Overview authors intend to analyse data from different study designs separately, they will only be able to do so if the data in the systematic reviews are also presented (or available) separately.

Third, Overview authors are likely to encounter groups of two or more systematic reviews that examine the same intervention for the same disorder and that include some of the same primary studies. Authors must consider in advance whether and how to include these ‘overlapping reviews’ in the Overview. This consideration is described in detail in Section V.4.4 , as it has methodological implications for all subsequent stages of the Overview process.

V.4.4 Managing overlapping systematic reviews

As the number of published systematic reviews increases (Page et al 2016), it is becoming common for Overview authors to identify two or more relevant systematic reviews that address the same (or very similar) research questions, and that include many (but not all) of the same underlying primary studies. There are two main challenges associated with including these overlapping reviews in Overviews (Thomson et al 2010, Smith et al 2011, Cooper and Koenka 2012, Baker et al 2014, Conn and Coon Sells 2014, Pieper et al 2014, Caird et al 2015, Biondi-Zoccai 2016, Pollock et al 2016, Ballard and Montgomery 2017, Pollock et al 2017a, Pollock et al 2019b):

First, including overlapping reviews may introduce bias by including the same primary study’s outcome data in an Overview multiple times because the study was included in multiple systematic reviews. If the Overview authors intend to summarize outcome data (see Section V.4.12), double-counting outcome data will give data from some primary studies too much influence. If the Overview authors intend to re-analyse outcome data (see Section V.4.12), double-counting outcome data gives data from some primary studies too much statistical weight and produces overly precise estimates of intervention effect.
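To see why double-counting matters statistically, consider the following minimal sketch of inverse-variance pooling (the function name and numbers are purely illustrative and not drawn from any review): counting a study twice adds its weight twice, so the pooled standard error becomes artificially small.

```python
import math

def pooled_fixed_effect(estimates, standard_errors):
    """Inverse-variance fixed-effect pooling of study-level effect estimates."""
    weights = [1 / se ** 2 for se in standard_errors]
    pooled = sum(w * est for w, est in zip(weights, estimates)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))
    return pooled, pooled_se

# Three distinct trials (hypothetical mean differences), each counted once:
est, se = pooled_fixed_effect([0.30, 0.20, 0.40], [0.10, 0.15, 0.12])

# The same trials, but the first is double-counted because it appears in two
# overlapping reviews: the pooled standard error shrinks artificially,
# producing a spuriously precise (and shifted) pooled estimate.
est_dup, se_dup = pooled_fixed_effect([0.30, 0.30, 0.20, 0.40], [0.10, 0.10, 0.15, 0.12])
assert se_dup < se
```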

Second, Overviews that contain overlapping reviews are complex. All stages of the Overview process will necessarily become more time- and resource-intensive as Overview authors determine how to search for, identify, include, assess the quality of, extract data from, and analyse and report the results of overlapping reviews in a systematic and transparent way. This is especially true when the overlapping reviews are of variable conduct, quality, and reporting, or when they have discordant results and/or conclusions.

To date, Overview authors have used several approaches, described below, to manage overlapping reviews. The most appropriate approach may depend on the purpose of the Overview and on the method of data analysis (see Section V.4.12 ). For example, if the purpose is to answer a new review question about a subpopulation of the participants included in the existing systematic reviews, authors may wish to re-extract and re-analyse outcome data from a set of non-overlapping reviews. However, if the purpose is to present and describe the current body of systematic review evidence on a topic, it may be appropriate to include the results of all relevant systematic reviews, regardless of topic overlap.

Figure V.4.a contains an evidence-based decision tool to help authors determine whether and how to include overlapping reviews in an Overview (modified from Pollock et al (2019b)). The main decision points, inclusion decisions, and considerations are summarized below. See Pollock et al (2019b) and Pollock et al (2019a) for full details. Note that the decision tool is based on the assumption that Overview authors are motivated to avoid double-counting primary study outcome data.

Decision point 1: Do Cochrane reviews of interventions likely examine all relevant intervention comparisons and available data? If the relevant Cochrane reviews of interventions are deemed comprehensive, it may be possible to avoid the issue of overlapping reviews altogether by including only Cochrane Reviews of interventions . This is because Cochrane attempts to avoid duplication of effort by publishing only one review of interventions on any given topic, whereas multiple non-Cochrane systematic reviews may exist. This may be desirable as Cochrane Reviews of interventions are more likely to: be up-to-date (Shojania 2007); be of higher methodological quality (Pollock et al 2017b); assess and report the risk of bias of their included primary studies (Hopewell et al 2013); assess and report the certainty of evidence for important outcomes (Akl et al 2015); and have more standardized conduct and reporting (Peters et al 2015). However, Cochrane Reviews of interventions are also fewer in number than non-Cochrane systematic reviews, and they often include less diverse study designs and fewer primary studies and interventions (Page et al 2016). As such, they may not provide comprehensive coverage of the topic area in question (Page et al 2016). If Overview authors are unsure whether the Cochrane reviews of interventions are comprehensive, they may opt to search for and identify Cochrane and/or non-Cochrane systematic reviews (see Sections V.4.5 and V.4.6 for guidance) and reassess.

Decision points 2 and 3: Do the included Cochrane and non-Cochrane systematic reviews overlap? If Overview authors suspect that the Cochrane Reviews of interventions are not comprehensive, an appropriate next step is to search for and identify non-Cochrane systematic reviews and assess whether the included systematic reviews contain overlapping primary studies. If there is no overlap, authors can include all relevant Cochrane and non-Cochrane systematic reviews without concern for double-counting primary study outcome data. However, this situation is likely to be rare (Pollock et al 2019a). If Overview authors are unsure whether or how much overlap exists between the Cochrane and non-Cochrane systematic reviews, they may opt to assess primary study overlap (see Section V.4.7 for guidance) and reassess.

Decision point 4: Are authors prepared and able to avoid double-counting outcome data from overlapping reviews, by ensuring that each primary study’s outcome data are extracted from overlapping reviews only once? If there is overlap between the relevant systematic reviews, authors can include all relevant Cochrane and non-Cochrane systematic reviews and take care to avoid double-counting outcome data from overlapping primary studies. This is the only way to ensure that all outcome data from all relevant systematic reviews are included in the Overview. However, as described above, this inclusion decision is time-intensive and methodologically complex. Alternatively, authors who are not prepared or able to avoid double-counting outcome data from overlapping reviews, but who still wish to include non-Cochrane systematic reviews in the Overview, may choose to avoid including overlapping reviews by using pre-defined criteria to prioritize specific systematic reviews for inclusion when faced with multiple overlapping reviews. Authors can achieve this by including all non-overlapping reviews, and selecting the Cochrane, most recent, highest quality, “most relevant”, or “most comprehensive” systematic review for groups of overlapping reviews. This inclusion decision may represent a trade-off between the above-mentioned inclusion decisions by maximizing the amount of outcome data included in the Overview while also avoiding potential challenges related to overlapping reviews.

As previously mentioned, authors who are unable to avoid double-counting outcome data for methodological or logistical reasons may still opt to include all relevant Cochrane and non-Cochrane systematic reviews in the Overview. In these cases, authors should provide methodological justification, assess and document the extent of the primary study overlap (see Section V.4.7 ), and discuss the potential limitations of this approach.

In summary, the potential inclusion decisions are to:

  • include only Cochrane reviews of interventions (to avoid double-counting outcome data);
  • include all Cochrane and non-Cochrane systematic reviews (and avoid double-counting outcome data);
  • include all Cochrane and non-Cochrane systematic reviews (regardless of double-counting outcome data);
  • include all non-overlapping systematic reviews, and for groups of overlapping reviews include the Cochrane, most recent, highest quality, “most relevant”, or “most comprehensive” systematic review (to avoid double-counting outcome data).
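For authors who find it helpful to see the decision points expressed procedurally, the sketch below (with hypothetical function and argument names) mirrors a simplified reading of the decision tool in Figure V.4.a; it omits the option of including all reviews regardless of double-counting, which requires explicit justification, and it is no substitute for the considerations discussed above.

```python
def overview_inclusion_decision(cochrane_reviews_comprehensive: bool,
                                reviews_overlap: bool,
                                can_avoid_double_counting: bool) -> str:
    """Simplified reading of Figure V.4.a (decision points 1-4)."""
    # Decision point 1: Cochrane reviews cover all relevant comparisons and data.
    if cochrane_reviews_comprehensive:
        return "Include only Cochrane Reviews of interventions."
    # Decision points 2 and 3: no primary study overlap between included reviews.
    if not reviews_overlap:
        return "Include all Cochrane and non-Cochrane systematic reviews (no double-counting arises)."
    # Decision point 4: authors can extract each primary study's data only once.
    if can_avoid_double_counting:
        return ("Include all Cochrane and non-Cochrane systematic reviews, "
                "extracting each primary study's outcome data only once.")
    return ("Include all non-overlapping reviews and, for each group of overlapping reviews, "
            "the Cochrane, most recent, highest quality, 'most relevant' or "
            "'most comprehensive' review (criteria pre-specified).")
```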

Authors wishing to exclude poorly conducted systematic reviews from an Overview may also opt to use results of quality/risk of bias assessments as an exclusion criterion before applying one of the above sets of inclusion criteria (Pollock et al 2017b). Guidance for assessing the methodological quality/risk of bias of systematic reviews can be found in Section V.4.9 .

Figure V.4.a Decision tool to help researchers make inclusion decisions in Overviews. Modified from Pollock et al (2019b) licensed under CC BY 4.0 .

overview of reviews methodology

* See Section V.4.7 for guidance on assessing primary study overlap. †  Researchers should operationalize the criteria used to define “most recent”, “highest quality”, “most relevant” or “most comprehensive”.

V.4.5 Searching for systematic reviews

Once Overview authors have developed a protocol, including defining the research question, developing criteria for including systematic reviews, and considering how they will address issues related to overlapping systematic reviews, the next step is to conduct a literature search that is comprehensive and reproducible. Note that authors may have already conducted the literature search if they wished to use this information to help inform their decision about how to address overlapping reviews in their Overview (see ‘Decision point 1’ of the decision tool presented in Section V.4.4 ). Though written for authors of reviews of interventions, much of the guidance on conducting literature searches provided in Chapter 4 is relevant to Overview authors as well. Notable differences are discussed below.

Overviews that include only Cochrane Reviews of interventions need only search the CDSR. If non-Cochrane systematic reviews will be included in the Overview, additional databases and systematic review repositories will need to be searched (Aromataris et al 2015, Caird et al 2015, Biondi-Zoccai 2016, Pollock et al 2016, Pollock et al 2017a). In general, MEDLINE/PubMed and Embase index most systematic reviews (Hartling et al 2016). Authors may also search additional regional and subject-specific databases (e.g. LILACS, CINAHL, PsycINFO) and systematic review repositories such as Epistemonikos and KSR Evidence.

Many databases that contain non-Cochrane systematic reviews index a wide variety of study designs, including, but not limited to, systematic reviews. Authors should therefore attempt as much as possible to restrict their searches to capture systematic reviews while simultaneously minimizing the capture of non-systematic review publications (Smith et al 2011, Cooper and Koenka 2012, Aromataris et al 2015, Biondi-Zoccai 2016, Pollock et al 2016, Pollock et al 2017a). Authors can do this by using search terms and MeSH headings specific to the systematic review study design (e.g. ‘systematic review’, ‘meta-analysis’) and by using validated systematic review search filters. A list of validated search filters is available here .
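By way of illustration only, the sketch below shows how such terms might be combined in a PubMed query via the NCBI E-utilities; the query string is a deliberately simple assumption rather than a validated filter, and real strategies should be developed with an information specialist and translated for each database.

```python
import requests

# Illustrative PubMed query combining a topic with systematic-review-specific
# terms. This is NOT a validated search filter; replace it with a filter
# appropriate to the database being searched.
topic = '"labour pain"[Title/Abstract]'
sr_terms = ('("systematic review"[Publication Type] OR '
            '"meta-analysis"[Publication Type] OR '
            'systematic review*[Title/Abstract] OR '
            'meta-analys*[Title/Abstract])')

response = requests.get(
    "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi",
    params={"db": "pubmed", "term": f"{topic} AND {sr_terms}",
            "retmax": 200, "retmode": "json"},
    timeout=30,
)
result = response.json()["esearchresult"]
print(result["count"], "records; first PMIDs:", result["idlist"][:5])
```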

V.4.6 Selecting systematic reviews for inclusion

V.4.6.1 Identifying systematic reviews that meet the inclusion criteria

Each document retrieved by the literature search must be assessed to see whether it meets the eligibility criteria of the Overview. Note that authors may have already selected systematic reviews for inclusion if they wished to use this information to help inform their decision about how to address overlapping reviews in their Overview (see ‘Decision point 1’ of the decision tool presented in Section V.4.4 ). Chapter 4, Section 4.6 describes the key steps involved in the inclusion process. Though it is written for authors of reviews of interventions, much of the guidance is relevant to Overview authors as well. Notable differences are discussed below.

There are two considerations related to assessing Cochrane Reviews of interventions for inclusion in Overviews. First, the search of the CDSR may retrieve Protocols. Second, there may be times when a review of interventions is not sufficiently up-to-date. In both of these cases, Overview authors should contact Cochrane Community Support ([email protected]) and/or author team(s) to ask whether the relevant reviews of interventions are close to completion or in the process of being updated. If so, it may be possible to obtain pre-publication versions of the new or updated reviews of interventions, which can then be assessed for inclusion in the Overview. Authors should include any outstanding Protocols in the reference list of the Overview under the heading ‘Characteristics of reviews awaiting assessment’ (see Section V.5 ). When assessing non-Cochrane systematic reviews for inclusion, Overview authors must adhere to their pre-specified definition of a ‘systematic review’ (see Section V.4.3 ).

In cases where the Overview’s scope is narrower than the scope of one or more of the relevant systematic reviews, it is possible that only a subset of primary studies contained within the systematic reviews will meet the Overview’s eligibility criteria. Thus, the primary studies, as reported within the included systematic reviews, should be assessed for inclusion against the Overview’s inclusion criteria. Only the subset of primary studies that fulfil the Overview’s inclusion criteria should be included in the Overview. For example, Cates et al (2012) conducted an Overview examining safety of regular formoterol or salmeterol in children, but many relevant systematic reviews contained primary studies that were conducted in adults. Therefore, within the included systematic reviews, the authors only included those primary studies conducted in children.

V.4.6.2 Conducting supplemental searches for primary studies

Occasionally, after identifying all systematic reviews that meet the inclusion criteria, important gaps in coverage will remain (e.g. an important intervention may not be examined in any included systematic review, or a systematic review on an important intervention may be out-of-date). In rare cases, authors may consider conducting a supplemental search for primary studies that can overcome the deficiency in the included systematic reviews. However, authors considering this option should re-consider the appropriateness of the Overview format due to the additional complexities involved when working with both systematic reviews and primary studies within the same Overview. As stated in Section V.3.1 , Overviews should not conduct study level searches or new systematic reviews within an Overview, so doing this would be at variance with standard methodological expectations of this review format. Additionally, there is no existing guidance on how to incorporate additional primary studies into Overviews appropriately.

V.4.7 Assessing primary study overlap within the included systematic reviews

An important step once authors have their final list of included systematic reviews is to map out which primary studies are included in which systematic reviews. Note that authors may have already assessed primary overlap within the included systematic reviews if they wished to use this information to help inform their decision about how to address overlapping systematic reviews in their Overview (see ‘Decision point 2’ of the decision tool presented in Section V.4.4 ).

At a minimum, authors may find it useful to create a citation matrix similar to Table V.4.b to visually demonstrate the amount of overlap. Authors should also narratively describe the number and size of the overlapping primary studies, and the amount of weight they contribute to the analyses. Authors may also wish to calculate the ‘corrected covered area’, which provides a numerical measure of the extent of primary study overlap between the systematic reviews. Pieper et al (2014) provides detailed instructions for creating citation matrices, describing overlap, and calculating the corrected covered area. If the included systematic reviews contain multiple intervention comparisons, Overview authors may wish to assess the amount of primary study overlap separately for each comparison. Information on the extent and nature of the primary study overlap should be clearly reported in the published Overview, especially for Overviews that are unable to avoid double-counting primary study data for methodological or logistical reasons.
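As a minimal sketch of the calculation (assuming a binary citation matrix laid out as in Table V.4.b below, with hypothetical data), the corrected covered area can be computed as (N − r) / (rc − r), where N is the total number of inclusions counting duplicates, r the number of unique primary studies and c the number of systematic reviews:

```python
def corrected_covered_area(citation_matrix):
    """Corrected covered area (CCA) as described by Pieper et al (2014).

    citation_matrix: rows = primary studies, columns = systematic reviews;
    a truthy cell means that review includes that primary study.
    """
    r = len(citation_matrix)                       # number of unique primary studies
    c = len(citation_matrix[0])                    # number of systematic reviews
    n = sum(bool(cell) for row in citation_matrix for cell in row)  # total inclusions
    return (n - r) / (r * c - r)

# Hypothetical example: three primary studies across two overlapping reviews.
matrix = [
    [1, 1],   # study 1 is included in both reviews
    [1, 0],   # study 2 only in review 1
    [0, 1],   # study 3 only in review 2
]
cca = corrected_covered_area(matrix)   # (4 - 3) / (3*2 - 3) = 0.33, i.e. 33%
# Pieper et al (2014) suggest interpreting roughly 0-5% as slight, 6-10% as
# moderate, 11-15% as high and >15% as very high overlap; check the paper.
```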

When mapping the extent of overlap, note that the overlapping primary studies may be easily identifiable across systematic reviews because the references are the same. However, overlapping primary studies may not be easily identifiable across systematic reviews if different references are cited in different systematic reviews to describe different aspects of the same primary study (e.g. different subgroups, comparisons, outcomes, and/or time points).

Table V.4.b Template for a table mapping the primary studies contained within included systematic reviews*

                   | Systematic review 1 | Systematic review 2 | [...] | Systematic review ‘Y’
Primary study 1    |                     |                     |       |
Primary study 2    |                     |                     |       |
Primary study 3    |                     |                     |       |
[...]              |                     |                     |       |
Primary study ‘X’  |                     |                     |       |

* Place an ‘X’, ‘Yes’, ‘Included’, or similar note in relevant cells to indicate which systematic reviews include which primary studies.

V.4.8 Collecting, analysing, and presenting data from included systematic reviews: An introduction

Several types of data must be extracted from the systematic reviews included in an Overview, including: data to inform risk of bias assessment of systematic reviews (and their included primary studies); descriptive characteristics of systematic reviews (and their included primary studies); quantitative outcome data; and certainty of evidence for important outcomes (Pollock et al 2016, Ballard and Montgomery 2017). It is highly desirable that methodological quality/risk of bias assessments and data extraction be conducted independently by two reviewers, with a process in place for resolving discrepancies, using piloted forms (see Chapter 5, Section 5.6 ).

Overview authors, especially those including non-Cochrane systematic reviews, should consider in advance how they will proceed if data they are interested in extracting are missing from, inadequately reported in, or reported differently across, systematic reviews (Pollock et al 2016, Ballard and Montgomery 2017). Authors might simply note the gap in coverage in their Overview and state that certain data were not available in the systematic reviews. Alternatively, they might choose to extract the missing data directly from the underlying primary studies. Referring back to underlying primary studies can enhance the comprehensiveness and rigour of the Overview, but will also require additional time and resources. If authors find they are extracting a large amount of data from primary studies, they should re-consider the appropriateness of the Overview format and may consider conducting a systematic review instead.

The next sections contain methodological guidance for collecting, analysing, and presenting data from included systematic reviews.

V.4.9 Assessing methodological quality/risk of bias of included systematic reviews

Overview authors can use one of three tools to assess the methodological quality or risk of bias of systematic reviews included in Overviews. Methodological quality refers to critical appraisal of a systematic review and the extent to which authors conducted and reported their research to the highest possible standard. Bias refers to systematic deviation of results or inferences from the truth. These deviations can occur as a result of flaws in design, conduct, analysis, and/or reporting. It is not always possible to know whether an estimate is biased even if there is a flaw in the study; further, it is difficult to quantify and at times to predict the direction of bias. For these reasons, reviewers refer to ‘risk of bias’ ( Chapter 7, Section 7.2 ). Note that authors may have already assessed methodological quality/risk of bias of included systematic reviews if they wished to use this information to help inform their decision about how to address overlapping systematic reviews in their Overview (see ‘Decision point 4’ of the decision tool presented in Section V.4.4 ).

The AMSTAR tool (Shea et al 2007) was designed to assess methodological quality of systematic reviews of randomized controlled trials, and to date has been the most commonly used tool in Overviews (Hartling et al 2012, Pieper et al 2012, Pollock et al 2016). It was intended to be “a practical critical appraisal tool for use by health professionals and policy makers who do not necessarily have advanced training in epidemiology, to enable them to carry out rapid and reproducible assessments of the quality of conduct of systematic reviews of randomised controlled trials of interventions” (Shea et al 2007). Researchers wishing to use this tool can refer to Pollock et al (2017b) for empirical evidence and recommendations on using AMSTAR in Overviews.

The AMSTAR 2 tool (Shea et al 2017) is an updated version of the original AMSTAR tool. It can be used to assess the methodological quality of systematic reviews that include both randomized and non-randomized studies of healthcare interventions. AMSTAR 2 should assist in identifying high quality systematic reviews (Shea et al 2017) and includes the following critical domains: protocol registered before start of review; adequacy of literature search; justification for excluded studies; risk of bias for included studies; appropriateness of meta-analytic methods; consideration of risk of bias when interpreting results; and assessing presence and likely impact of publication bias (Shea et al 2017). The tool provides guidance to rate the overall confidence in the results of a review (high, moderate, low or critically low, depending on the number of critical flaws and/or non-critical weaknesses). Detailed guidance on using the AMSTAR 2 tool is available here. Given that AMSTAR 2 was developed to improve upon and clarify the original tool, it may be preferred for use in future Overviews.
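As a rough aide-memoire only, the overall confidence rating can be thought of as a function of the number of critical flaws and non-critical weaknesses; the sketch below paraphrases our reading of the scheme in Shea et al (2017), and authors should follow the published guidance rather than this simplification.

```python
def amstar2_overall_rating(critical_flaws: int, noncritical_weaknesses: int) -> str:
    """Approximate AMSTAR 2 overall confidence rating (paraphrased from Shea et al 2017)."""
    if critical_flaws > 1:
        return "critically low"    # more than one critical flaw
    if critical_flaws == 1:
        return "low"               # one critical flaw, with or without weaknesses
    if noncritical_weaknesses > 1:
        return "moderate"          # no critical flaws, more than one weakness
    return "high"                  # no critical flaws, at most one weakness
```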

Lastly, the recently developed ROBIS tool (Whiting et al 2016) can be used by authors wishing to assess risk of bias of systematic reviews in Overviews. ROBIS was designed to be used for systematic reviews within healthcare settings that address questions related to interventions, diagnosis, prognosis and aetiology (Whiting et al 2016). The tool involves three phases: 1) assessing relevance (which is considered optional but may be used to assist with selecting systematic reviews for inclusion; see Section V.4.6 ); 2) identifying concerns with the systematic review process; and 3) judging overall risk of bias for the systematic review (low, high, unclear). The second phase includes four domains which may be sources of bias in the systematic review process: study eligibility criteria, identification and selection of studies, data collection and study appraisal, and synthesis and findings. The tool is available on the ROBIS website . This website also contains pre-formatted data extraction forms and data presentation tables.

We cannot currently recommend one tool over another due to a lack of empirical evidence on this topic. However, regardless of which tool is used, Overview authors should include a table that provides a breakdown of how each systematic review was rated on each question of the tool, the rationale behind the assessments, and an overall rating for each systematic review (if appropriate). Authors can then use the results of the quality/risk of bias assessments to help contextualize the Overview’s evidence base (e.g. by assessing whether and to what extent the methods of the included systematic reviews may have affected the Overview’s comprehensiveness and results).

V.4.10 Collecting and presenting data on risk of bias of primary studies contained within included systematic reviews

When conducting an Overview, authors should extract and report the domain-specific and/or overall quality/risk of bias assessments for the relevant primary studies contained within each included systematic review. Chapter 7 and Chapter 8 provide a comprehensive discussion of approaches to assessing risk of bias, the Cochrane ‘Risk of bias’ tool, risk of bias domains, and how to summarize and present risk of bias assessments in a review of interventions. The key risk of bias domains cover bias arising from the randomization process, bias due to deviations from intended interventions, bias due to missing outcome data, bias in measurement of the outcome, and bias in selection of the reported result. Other chapters in the Handbook provide information on risk of bias assessments and critical appraisal of evidence from other study designs (e.g. non-randomized studies) and type of data (e.g. qualitative research).

Ideally, authors should extract the assessments that are presented in each included systematic review (i.e. they should not repeat or update the risk of bias assessments that have already been conducted by systematic review authors). They can then present the assessments in narrative and/or tabular summaries (Bialy et al 2011, Foisy et al 2011a). However, it is possible that different systematic reviews, especially non-Cochrane systematic reviews, may have used different tools, or different parts of tools, to assess methodological quality/risk of bias. In these situations, authors should extract the disparate quality/risk of bias assessments to the best of their ability, despite the variability across systematic reviews. Authors then have two options (Cooper and Koenka 2012, Conn and Coon Sells 2014, Caird et al 2015, Biondi-Zoccai 2016, Pollock et al 2016, Ballard and Montgomery 2017). They can provide narrative and/or tabular summaries of the assessments (Bialy et al 2011, Foisy et al 2011a). Or, they can supplement the existing assessments by referring to the original primary studies and extracting data pertaining to the missing quality/risk of bias domains (Foisy et al 2011b, Pollock et al 2017c).

V.4.11 Collecting and presenting data on descriptive characteristics of included systematic reviews (and their primary studies)

Overview authors must extract information about the descriptive characteristics of each systematic review included in the Overview. As a starting point, for each systematic review, it may be useful to extract the information listed in Box V.4.a (Thomson et al 2010, Smith et al 2011, Conn and Coon Sells 2014, Aromataris et al 2015, Biondi-Zoccai 2016, Pollock et al 2016, Pollock et al 2017a). This information can then be reported in a ‘Characteristics of included reviews’ table (Foisy et al 2011a, Jones 2012). Additional descriptive data may need to be extracted, depending upon the specific requirements or objectives of the Overview. Authors should also note in the text any discrepancies between the outcomes included in the systematic reviews and those pre-specified in the Overview.

Box V.4.a Descriptive characteristics of systematic reviews (and their primary studies) that Overview authors may wish to extract from included systematic reviews

V.4.12 Collecting, analysing, and presenting quantitative outcome data

There are two main ways to analyse outcome data in an Overview (modified from Pollock et al (2016) and Ballard and Montgomery (2017)). Summarizing outcome data involves presenting data in the Overview exactly as they are presented in the included systematic reviews; this applies both to narratively reported study-level data and to meta-analysed data. Re-analysing outcome data involves extracting outcome data from the included systematic reviews, analysing the data in a way that differs from the analyses conducted in the systematic reviews, and presenting the re-analysed data in the Overview. The most appropriate method of data analysis will likely depend upon the purpose of the Overview, the specific topic area, and the characteristics of the included systematic reviews. For example, if the purpose is to answer a new review question about a subpopulation of the participants included in the existing systematic reviews, authors may wish to extract outcome data for only those participants of interest and re-analyse the data. However, if the purpose is to present and describe the current body of systematic review evidence on a topic, it may be appropriate to include the results of all relevant systematic reviews as they were presented in the underlying systematic reviews. Both methods of data analysis can be used regardless of whether the Overview includes Cochrane and/or non-Cochrane systematic reviews; however, authors may find that they encounter more issues when re-analysing outcome data from non-Cochrane systematic reviews. Both methods are discussed below. For clarity, the methods are presented as distinct approaches to analysing outcome data, though in reality these two approaches lie on a continuum.

V.4.12.1 Summarizing outcome data

Summarizing outcome data provides readers with a map of the available evidence by presenting individual narrative summaries of the data contained within each included systematic review (including effect estimates and 95% confidence intervals). The purpose is to describe and summarize a group of related systematic reviews (and their outcome data) so that readers are presented with the content and results of the systematic reviews. The purpose may also be to identify and describe the interventions, comparators, outcomes and/or results among related systematic reviews.

When summarizing outcome data, data should be extracted as they were reported in the underlying systematic reviews and then reformatted and presented in text, tables and/or figures, as appropriate. The effect estimates, 95% confidence intervals, and measures of heterogeneity (if studies are pooled) should all be extracted. Overview authors should rely on the analyses reported in the included systematic reviews as much as possible. There should be limited re-analysis or re-synthesis of outcome data (see Section V.4.12.2 ).

Examples of Overviews that summarized outcome data are Farquhar et al (2015) and Welsh et al (2015).

V.4.12.2 Re-analysing outcome data

Re-analysing outcome data involves extracting relevant outcome data from included systematic reviews and re-analysing these data (e.g. using meta-analysis) in a way that differs from the original analyses conducted in the systematic reviews. Overview authors may choose to re-analyse outcome data for several reasons. First, if the objective of the Overview is to answer a different clinical question, authors may select and re-analyse only the data specific to that question (e.g. effect of interventions in children, but not adults). Second, if most, but not all, of the systematic reviews have analysed specific populations or subgroups, Overview authors may apply these analyses to the remainder of the systematic reviews so that consistent information is reported across the systematic review topics. Third, Overview authors may choose to re-analyse data if different summary measures or models were used across the included systematic reviews, as this can allow authors to present results in a consistent fashion across the systematic review topics (e.g. present all estimates as relative or absolute). Lastly, Overview authors may choose to meta-analyse data that were not previously meta-analysed in a systematic review. Care should be taken in the last two instances, as systematic review authors have likely selected their approach to analysis based on approved methods and in-depth knowledge of individual studies. Overview authors should understand the reasons behind the systematic review authors’ choice of analytic methods when determining whether their desired methods of re-analysing outcome data are appropriate.

Overview authors who re-analyse outcome data should use the standard meta-analytic principles described in Chapter 10 . Note that authors wishing to re-analyse outcome data may only be able to do so if the clinical parameters and statistical aspects of the included systematic reviews are sufficiently reported. When conducting this type of analysis, authors should try as much as possible to present re-analysed outcomes in a standardized way (e.g. using fixed or random effects modelling and using a consistent measure of effect for each outcome). Overview authors must also guard against making inappropriate informal indirect comparisons about the comparative effectiveness of two or more interventions (see Section V.4.1 ). Authors with access to the CDSR can download Review Manager files for included Cochrane Reviews of interventions to help expedite data extraction.
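For illustration only, the sketch below re-analyses study-level estimates extracted from a hypothetical included review using a simple DerSimonian and Laird random-effects model; the input values are invented, and a real re-analysis should follow Chapter 10 and respect the analytic choices made by the original review authors.

```python
# Hedged sketch: a simple DerSimonian-Laird random-effects re-analysis of
# study-level effect estimates extracted from an included review. This is not
# RevMan or Cochrane software output; the numbers are invented for illustration.
import math

def dersimonian_laird(estimates, ses):
    """Pool log effect estimates (e.g. log risk ratios) with their SEs."""
    k = len(estimates)
    v = [se ** 2 for se in ses]
    w = [1.0 / vi for vi in v]                        # fixed-effect weights
    fixed = sum(wi * yi for wi, yi in zip(w, estimates)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, estimates))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)                # between-study variance
    w_star = [1.0 / (vi + tau2) for vi in v]          # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_star, estimates)) / sum(w_star)
    se_pooled = math.sqrt(1.0 / sum(w_star))
    return pooled, pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled, tau2

# Invented log risk ratios and standard errors from three trials in a review:
log_rr = [math.log(0.75), math.log(0.90), math.log(0.60)]
se = [0.20, 0.15, 0.25]
est, lo, hi, tau2 = dersimonian_laird(log_rr, se)
print(f"Pooled RR {math.exp(est):.2f} "
      f"(95% CI {math.exp(lo):.2f} to {math.exp(hi):.2f}), tau^2 = {tau2:.3f}")
```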

Examples of Overviews that re-analysed outcome data are Bialy et al (2011), Cates et al (2012), Cates et al (2014), Pollock et al (2017c).

More detail on re-analysing outcome data can be found in Thomson et al (2010), Cooper and Koenka (2012), Pollock et al (2016), Ballard and Montgomery (2017), Pollock et al (2017a).

V.4.12.3 Presenting outcome data

Overview authors can present their summarized or re-analysed outcome data narratively and in results tables. There is no specific format for the tables, but authors should follow the principles for displaying outcome data outlined in Chapter 14 . Overview authors could:

  • Present narrative summaries, with or without corresponding tables, of the outcome data contained within the systematic reviews. For example, Overview authors could present each outcome measure in turn across systematic reviews (Brown and Farquhar 2014, Farquhar et al 2015, Welsh et al 2015), or they could present the results from each systematic review in turn (Jones 2012, Hindocha et al 2015). Overview authors could also present groups of similar systematic reviews and/or outcome measures together (Bialy et al 2011, Foisy et al 2011a, Payne et al 2012, Pollock et al 2017c); this may allow authors to group similar populations, interventions, or outcome measures together, while still presenting outcome data sequentially.
  • Organize results into categories (e.g. ‘clinically important’ or ‘not clinically important’; or ‘effective interventions’, ‘promising interventions’, ‘ineffective interventions’, ‘probably ineffective interventions’ and ‘no conclusions possible’), avoiding the categorization of results into statistically significant vs not significant categories, and use these data to provide a map of the available evidence (Flodgren et al 2011, Worswick et al 2013, Farquhar et al 2015).
  • Present a new conceptual framework, or modify an existing framework. For example, authors could present a grid of interventions versus outcomes; they could then indicate how many primary studies and subjects contribute outcome data, and the direction of effect for each outcome (Flodgren et al 2011). Authors could also map their included systematic reviews to specific taxonomies of interventions and describe the effectiveness of each category of interventions (Ryan et al 2014). Any frameworks used to present outcome data should be specified a priori at the protocol stage, or indicated as post hoc in the report.

Additional suggestions for presenting outcome data, with examples, are provided in Ryan et al (2009), Smith et al (2011), Thomson et al (2013), Biondi-Zoccai (2016), Pollock et al (2017a).
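To make the ‘grid of interventions versus outcomes’ idea described above concrete, the following is a minimal, hypothetical sketch of an evidence map in which each cell records how many primary studies contribute data and the direction of effect; all intervention names, outcomes and counts are invented.

```python
# Hedged sketch of an interventions-by-outcomes evidence map: each cell records
# how many primary studies contribute data and the direction of effect.
grid = {
    ("Intervention A", "Mortality"): {"studies": 4, "direction": "benefit"},
    ("Intervention A", "Adverse events"): {"studies": 2, "direction": "no difference"},
    ("Intervention B", "Mortality"): {"studies": 1, "direction": "possible harm"},
}

interventions = sorted({i for i, _ in grid})
outcomes = sorted({o for _, o in grid})

# Print a simple text matrix; empty cells flag gaps in the evidence map.
header = "Intervention".ljust(16) + "".join(o.ljust(26) for o in outcomes)
print(header)
for i in interventions:
    row = i.ljust(16)
    for o in outcomes:
        cell = grid.get((i, o))
        row += (f"{cell['studies']} studies; {cell['direction']}"
                if cell else "no data").ljust(26)
    print(row)
```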

Table V.4.c contains a template for a ‘Summary of findings’ table that authors may wish to use. The table layout and terminology are explained in Chapter 14 , and assessing certainty of evidence using the GRADE tool is explained in Section V.4.13 . When creating these tables, authors should also include references where appropriate to indicate which outcome data come from which systematic reviews. When creating ‘Summary of findings’ tables, we caution Overview authors against selectively reporting only statistically significant outcomes. Also note that Overview authors who choose to juxtapose data from different systematic reviews in a single table or figure may be inviting readers to make their own informal indirect comparisons; tables of this sort should only be used if Overview authors: avoid ‘comparing’ across systematic reviews, appropriately interpret results, and describe the caveats to readers (see Section V.4.1 ).

Table V.4.c Template for a ‘Summary of findings’ table

Outcome #1
  Intervention and comparator 1
  Intervention and comparator 2
  […]
  Intervention and comparator ‘X’

Outcome #2
  Intervention and comparator 1
  Intervention and comparator 2
  […]
  Intervention and comparator ‘X’

Outcome ‘X’
  Intervention and comparator 1
  Intervention and comparator 2
  […]
  Intervention and comparator ‘X’

(In the template, each intervention and comparator row is left blank for Overview authors to complete with the corresponding outcome data and references.)

V.4.13 Assessing certainty of evidence of quantitative outcome data using the GRADE tool

Similar to Cochrane reviews of interventions, Cochrane Overviews should use the GRADE tool (Guyatt et al 2008) to assess and report the certainty of evidence (i.e. the confidence we have in the effect estimate) for each pre-defined, clinically important outcome of interest in the Overview. If possible, Overview authors should extract and report the GRADE assessments presented in the included systematic reviews. However, there may be caveats involved, especially when non-Cochrane systematic reviews are included in Overviews. For example, some systematic reviews may not contain GRADE assessments, may contain limited GRADE assessments, may present aggregated (instead of individual) assessments, or may use tools other than GRADE to assess certainty of evidence. Further, if Overviews re-extract and re-analyse outcome data from systematic reviews, the GRADE assessments in the systematic reviews may no longer be relevant. In these cases, Overview authors must determine whether they will need to conduct GRADE assessments themselves using the information reported in the systematic reviews (Biondi-Zoccai 2016, Pollock et al 2016). See Meader et al (2014) for tips on assessing GRADE in systematic reviews.
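The sketch below illustrates, in a deliberately simplified way, the basic bookkeeping behind a GRADE assessment (randomized evidence starts at ‘high’ certainty and is downgraded one level for each domain with a serious concern). It omits two-level downgrades, the upgrading of non-randomized evidence and the judgement involved in real assessments, and is offered only as an orientation, not as a substitute for the GRADE guidance.

```python
# Simplified, hedged sketch of GRADE bookkeeping for Overview authors re-doing
# assessments from information reported in the included reviews. Real GRADE
# judgements involve far more nuance than this illustration.
LEVELS = ["very low", "low", "moderate", "high"]
DOMAINS = ["risk of bias", "inconsistency", "indirectness",
           "imprecision", "publication bias"]

def grade_certainty(serious_concerns):
    """Downgrade one level for each domain flagged as a serious concern."""
    level = len(LEVELS) - 1                      # randomized trials start 'high'
    for domain in DOMAINS:
        if serious_concerns.get(domain, False):
            level = max(0, level - 1)
    return LEVELS[level]

# Illustrative example: serious concerns for risk of bias and imprecision.
print(grade_certainty({"risk of bias": True, "imprecision": True}))  # prints 'low'
```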

V.5 Format and reporting guidelines for Cochrane Overviews of Reviews

As the format and reporting guidelines for Cochrane Overviews (and protocols) are similar to those for Cochrane reviews of interventions (and protocols), Overview authors can refer to Chapter III for general guidance on reporting. However, authors should remain mindful that Cochrane Overviews will have certain unique reporting requirements. For example: titles should contain the phrase ‘an Overview of Reviews’; titles should state whether Cochrane reviews of interventions and/or non-Cochrane systematic reviews are included; relevant section headings should refer to ‘reviews’ instead of ‘studies’; and there should be separate subheadings discussing the methodological quality of included systematic reviews and that of their underlying primary studies. The sections of a Cochrane Overview and protocol are listed in Box V.5.a and Box V.5.b .

Further, Overviews will have unique limitations that should be mentioned in the Discussion. As with Cochrane Reviews of interventions, authors should comment on factors both within and outside their control, including: whether all relevant systematic reviews were identified and included in the Overview; any gaps in the coverage of existing reviews (and potential priority areas for new systematic reviews); whether all relevant data could be obtained (and the implications of missing data); and whether the methods used (for example, searching, study selection, and data collection and analysis at both the systematic review (see Chapter 4, Section 4.5 ) and Overview levels) could have introduced bias.

Box V.5.a Sections of a protocol for a Cochrane Overview of Reviews

Authors

Contact person

Dates

Background

Objectives

Methods:

Criteria for selecting reviews for inclusion:*

Types of reviews*

Types of participants

Types of interventions

Types of outcome measures

Search methods for identification of reviews*

Data collection and analysis

Quality of included reviews*

Risk of bias of primary studies included in reviews*

Quality of evidence in included reviews*

Additional information:

Acknowledgements

Contributions of authors

Declarations of interest

Sources of support

Registration and protocol

Data, code and other materials

What’s new

History

Published notes

References:

Search strategies

Other supplementary materials

* Note that these headers refer to ‘systematic reviews’ instead of ‘primary studies’.

Box V.5.b Sections of a Cochrane Overview of Reviews

Authors

Contact person

Dates

Background

Objectives

Methods

Main results

Authors’ conclusions

Funding

Registration

Plain language title

Summary text

Background

Objectives

Methods:

Criteria for selecting reviews for inclusion:*

Types of reviews*

Types of participants

Types of interventions

Types of outcome measures

Search methods for identification of reviews*

Data collection and analysis

Quality of included reviews*

Risk of bias of primary studies included in reviews*

Quality of evidence in included reviews*

Results:

Description of included reviews*

Methodological quality of included reviews:*

Quality of included reviews*

Risk of bias of primary studies included in reviews*

Effects of interventions

Discussion

Summary of main results

Overall completeness and applicability of evidence

Quality of the evidence

Potential biases in the overview process

Agreements and disagreements with other studies and/or reviews

Authors’ conclusions:

Implications for practice

Implications for research

Additional information:

Acknowledgements

Contributions of authors

Declarations of interest

Sources of support

Registration and protocol

Data, code and other materials

What’s new

History

Published notes

References:

Other published versions of this review

Figures: 

Tables:

Search strategies

Characteristics of included reviews

Characteristics of excluded reviews

Characteristics of reviews awaiting assessment

Characteristics of ongoing reviews

Data and Analyses

Data package

Other supplementary materials

V.6 Updating a Cochrane Overview

Regular updating of Cochrane Overviews is very important and follows the same process as updating Cochrane Reviews of interventions (see Chapter IV ). In many cases, only minor changes to the Cochrane Overview will be required. However, when new eligible systematic reviews are published, or when the results of any of the included Cochrane Reviews of interventions change, the Overview will require more extensive revisions.

V.7 Chapter information

Authors: Michelle Pollock, Ricardo M Fernandes, Lorne A Becker, Dawid Pieper, and Lisa Hartling.

Acknowledgements : This chapter builds on a previous version, which was authored by Lorne A Becker and Andy Oxman. Methods for Cochrane Overviews were originally developed by the Umbrella Reviews Methods Group (now called the Comparing Multiple Interventions Methods Group). We thank Tianjing Li, Andrew Booth, Miranda Cumpston, Gerald Gartlehner, Julian Higgins, and Penny Whiting for providing feedback on the current version of the chapter.

Funding: This work was supported in part by the Canadian Institutes of Health Research, including an operating grant and new investigator salary award.

V.8 References

Akl E, Carrasco-Labra A, Brignardello-Petersen R, Neumann I, Johnston BC, Sun X, Briel M, Busse JW, Ebrahim S, Granados CE, Iorio A, Irfan A, Garcia LM, Mustafa RA, Ramirez-Morera A, Selva A, Sola I, Sanabria AJ, Tikkinen KAO, Vandvik PO, Vernooij RWM, Zazueta OE, Zhou Q, Guyatt GH, Alonso-Coello P. Reporting, handling and assessing the risk of bias associated with missing participant data in systematic reviews: a methodological survey. BMJ Open 2015; 5 : 8.

Aromataris E, Fernandez R, Godfrey CM, Holly C, Khalil H, Tungpunkom P. Summarizing systematic reviews: methodological development, conduct and reporting of an umbrella review approach. International Journal of Evidence-Based Healthcare 2015; 13 : 8.

Baker P, Costello J, Dobbins M, Waters E. The benefits and challenges of conducting an overview of systematic reviews in public health: a focus on physical activity. Journal of Public Health 2014; 36 : 4.

Ballard M, Montgomery P. Risk of bias in overviews of reviews: a scoping review of methodological guidance and four-item checklist. Research Synthesis Methods 2017; 8 : 16.

Bialy L, Foisy M, Smith M, Fernandes R. The Cochrane Library and the treatment of bronchiolitis in children: an overview of reviews. Evidence-Based Child Health: A Cochrane Review Journal 2011; 6 : 17.

Biondi-Zoccai G. Umbrella Reviews: Evidence Synthesis with Overviews of Reviews and Meta-Epidemiologic Studies. Switzerland: Springer International Publishing; 2016.

Brown J, Farquhar C. Endometriosis: an overview of Cochrane reviews. Cochrane Database of Systematic Reviews 2014; 3 : CD009590.

Caird J, Sutcliffe K, Kwan I, Dickson K, Thomas J. Mediating policy-relevant evidence at speed: are systematic reviews of systematic reviews a useful approach? Evidence and Policy 2015; 11 : 16.

Cates C, Wieland L, Oleszczuk M, Kew K. Safety of regular formoterol or salmeterol in children with asthma: an overview of Cochrane reviews. Cochrane Database of Systematic Reviews 2012; 10 : CD010005.

Cates C, Wieland L, Oleszczuk M, Kew K. Safety of regular formoterol or salmeterol in adults with asthma: an overview of Cochrane reviews. Cochrane Database of Systematic Reviews 2014; 2 : CD010314.

Cochrane Editorial Unit. Cochrane Editorial and Publishing Policy Resource. 2015. http://community.cochrane.org/editorial-and-publishing-policy-resource

Conn VS, Coon Sells TG. WJNR welcomes umbrella reviews. Western Journal of Nursing Research 2014; 36 : 147.

Cooper H, Koenka AC. The overview of reviews: unique challenges and opportunities when research syntheses are the principal elements of new integrative scholarship. American Psychologist 2012; 67 : 16.

Derry C, Derry S, Moore R. Sumatriptan (all routes of administration) for acute migraine attacks in adults - overview of Cochrane reviews. Cochrane Database of Systematic Reviews 2014; 5 : CD009108.

Farquhar C, Rishworth J, Brown J, Nelen W, Marjoribanks J. Assisted reproductive technology: an overview of Cochrane reviews. Cochrane Database of Systematic Reviews 2015; 7 : CD010537.

Flodgren G, Eccles M, Shepperd S, Scott A, Parmelli E, Beyer F. An overview of reviews evaluating the effectiveness of financial incentives in changing healthcare professional behaviours and patient outcomes. Cochrane Database of Systematic Reviews 2011; 7 : CD009255.

Foisy M, Boyle R, Chalmers J, Simpson E, Williams H. The prevention of eczema in infants and children: an overview of Cochrane and non-Cochrane reviews. Evidence-Based Child Health: A Cochrane Review Journal 2011a; 6 : 1322.

Foisy M, Ali S, Geist R, Weinstein M, Michail S, Thakkar K. The Cochrane Library and the treatment of chronic abdominal pain in children and adolescents: an overview of reviews. Evidence-Based Child Health: A Cochrane Review Journal 2011b; 6 : 1027.

Glenny A, Altman D, Song F, Sakarovitch C, Deeks J, D'Amico R, Bradburn M, Eastwood A. Indirect comparisons of competing interventions. Health Technology Assessment 2005; 9 : 134, iii-iv.

Guay J, Choi P, Suresh S, Albert N, Kopp S, Pace N. Neuraxial blockade for the prevention of postoperative mortality and major morbidity: an overview of Cochrane systematic reviews. Cochrane Database of Systematic Reviews 2014; 1 : CD010108.

Guyatt G, Oxman A, Vist G, Kunz R, Falck-Ytter Y, Alonso-Coello P, Schünemann H. GRADE: an emerging consensus on rating quality of evidence and strength of recommendations. BMJ 2008; 336 : 3.

Hartling L, Chisholm A, Thomson D, Dryden D. A descriptive analysis of overviews of reviews published between 2000 and 2011. Plos One 2012; 7 : e49667.

Hartling L, Featherstone R, Nuspl M, Shave K, Dryden D, Vandermeer B. The contribution of databases to the results of systematic reviews: a cross-sectional study. BMC Medical Research Methodology 2016; 16 : 13.

Hindocha A, Beere L, Dias S, Watson A, Ahmad G. Adhesion prevention agents for gynaecological surgery: an overview of Cochrane reviews. Cochrane Database of Systematic Reviews 2015; 1 : CD011254.

Hopewell S, Boutron I, Altman D, Ravaud P. Incorporation of assessments of risk of bias of primary studies in systematic reviews of randomised trials: a cross-sectional study. BMJ Open 2013; 3 : 8.

Jones L. Pain management for women in labour: an overview of systematic reviews. Journal of Evidence-Based Medicine 2012; 5 : 2.

Meader N, King K, Llewellyn A, Norman G, Brown J, Rodgers M, Moe-Byrne T, Higgins J, Sowden A, Stewart G. A checklist designed to aid consistency and reproducibility of GRADE assessments: development and pilot validation. Systematic Reviews 2014; 3 : 82.

Moore R, Derry S, Aldington D, Wiffen P. Adverse events associated with single dose oral analgesics for acute postoperative pain in adults-an overview of Cochrane reviews. Cochrane Database of Systematic Reviews 2014; 10 : CD011407.

Page M, Shamseer L, Altman D, Tetzlaff J, Sampson M, Tricco A, Catalá-López F, Li L, Reid E, Sarkis-Onofre R. Epidemiology and reporting characteristics of systematic reviews of biomedical research: a cross-sectional study. PLoS Medicine 2016; 13 : 30.

Payne C, Martin S, Wiffen P. Interventions for fatigue and weight loss in adults with advanced progressive illness. Cochrane Database of Systematic Reviews 2012; 3 : CD008427.

Peters J, Hooft L, Grolman W, Stegeman I. Reporting quality of systematic reviews and meta-analyses of otorhinolaryngologic articles based on the PRISMA Statement. Plos One 2015; 10 : 11.

Pieper D, Buechter R, Jerinic P, Eikermann M. Overviews of reviews often have limited rigor: a systematic review. Journal of Clinical Epidemiology 2012; 65 : 6.

Pieper D, Antoine S, Mathes T, Neugebauer E, Eikermann M. Systematic review finds overlapping reviews were not mentioned in every other overview. Journal of Clinical Epidemiology 2014; 67 : 7.

Pollock A, Campbell P, Brunton G, Hunt H, Estcourt L. Selecting and implementing overview methods: implications from five exemplar overviews. Systematic Reviews 2017a; 6 : 145.

Pollock M, Fernandes R, Becker L, Featherstone R, Hartling L. What guidance is available for researchers conducting overviews of reviews of healthcare interventions? A scoping review and qualitative metasummary. Systematic Reviews 2016; 5 : 15.

Pollock M, Fernandes R, Hartling L. Evaluation of AMSTAR to assess the methodological quality of systematic reviews in overviews of reviews of healthcare interventions. BMC Medical Research Methodology 2017b; 17 : 13.

Pollock M, Sinha I, Hartling L, Rowe B, Schreiber S, Fernandes R. Inhaled short-acting bronchodilators for managing emergency childhood asthma: an overview of reviews. Allergy 2017c; 72 : 17.

Pollock M, Fernandes RM, Newton AS, Scott SD, Hartling L. The impact of different inclusion decisions on the comprehensiveness and complexity of overviews of reviews of healthcare interventions. Systematic Reviews 2019a; 8 : 18.

Pollock M, Fernandes RM, Newton AS, Scott SD, Hartling L. A decision tool to help researchers make decisions about including systematic reviews in overviews of reviews of healthcare interventions. Systematic Reviews 2019b; 8 : 29.

Ryan R, Kaufman C, Hill S. Building blocks for meta-synthesis: data integration tables for summarising, mapping, and synthesising evidence on interventions for communicating with health consumers. BMC Medical Research Methodology 2009; 9 : 11.

Ryan R, Santesso N, Lowe D, Hill S, Grimshaw J, Prictor M, Kaufman C, Cowie G, Taylor M. Interventions to improve safe and effective medicines use by consumers: an overview of systematic reviews. Cochrane Database of Systematic Reviews 2014; 4 : CD007768.

Shea B, Grimshaw J, Wells G, Boers M, Andersson N, Hamel C, Porter A, Tugwell P, Moher D, Bouter L. Development of AMSTAR: a measurement tool to assess the methodological quality of systematic reviews. BMC Medical Research Methodology 2007; 7 : 10.

Shea BJ, Reeves BC, Wells G, Thuku M, Hamel C, Moran J, Moher D, Tugwell P, Welch V, Kristjansson E, Henry DA. AMSTAR 2: a critical appraisal tool for systematic reviews that include randomised or non-randomised studies of healthcare interventions, or both. BMJ 2017; 358 : j4008.

Shojania K. Updating systematic reviews. Rockville, MD: Agency for Healthcare Research and Quality, U.S. Dept. of Health and Human Services, Public Health Service; 2007.

Smith V, Devane D, Begley C, Clarke M. Methodology in conducting a systematic review of systematic reviews of healthcare interventions. BMC Medical Research Methodology 2011; 11 : 15.

Thomson D, Russell K, Becker L, Klassen T, Hartling L. The evolution of a new publication type: steps and challenges of producing overviews of reviews. Research Synthesis Methods 2010; 1 : 13.

Thomson D, Foisy M, Oleszczuk M, Wingert A, Chisholm A, Hartling L. Overview of reviews in child health: evidence synthesis and the knowledge base for a specific population. Evidence-Based Child Health: A Cochrane Review Journal 2013; 8 : 7.

Welsh E, Evans D, Fowler S, Spencer S. Interventions for bronchiectasis: an overview of Cochrane systematic reviews. Cochrane Database of Systematic Reviews 2015; 7 : CD010337.

Whiting P, Savović J, Higgins J, Caldwell D, Reeves B, Shea B, Davies P, Kleijnen J, Churchill R. ROBIS: a new tool to assess risk of bias in systematic reviews was developed. Journal of Clinical Epidemiology 2016; 69 : 9.

Worswick J, Wayne S, Bennett R, Fiander M, Mayhew A, Weir M, Sullivan K, Grimshaw J. Improving quality of care for persons with diabetes: an overview of systematic reviews-what does the evidence tell us? Systematic Reviews 2013; 2 : 26.


Cochrane Methods Comparing Multiple Interventions

Overviews of reviews.

Cochrane Overviews of Reviews (Overviews) use explicit and systematic methods to search for and identify multiple systematic reviews on a similar topic for the purpose of extracting and analyzing their results across important outcomes. Thus, the unit of searching, inclusion and data analysis is the systematic review rather than the primary study. Overviews can search for and include both Cochrane and non-Cochrane systematic reviews. 

Overviews can present outcome data exactly as they appear in the included systematic reviews, or they can re-analyze the outcome data. However, we recommend against undertaking network meta-analyses in the context of an Overview, because the types of analysis that can be performed, and the inferences that can be drawn, are very different when study-level data are not sought.

The table below presents the differences between an Overview and a standard intervention review when addressing multiple interventions for the same condition. 

Aim

  • Cochrane Overview: Collate multiple systematic reviews about the effectiveness of interventions for the same condition to extract and analyse their results across important outcomes. Overviews may also be used for other purposes (see Handbook chapter).
  • Cochrane intervention review with network meta-analysis: Re-analyse data from randomized trials of multiple interventions for the same condition to make inferences about their comparative effectiveness or safety.

Focus of search strategy

  • Cochrane Overview: Systematic reviews
  • Cochrane intervention review with network meta-analysis: Primary studies such as individual randomized trials

Focus of statistical synthesis

  • Cochrane Overview: Review data
  • Cochrane intervention review with network meta-analysis: Study data

Focus of data collection

  • Cochrane Overview: Summary estimates based on existing meta-analyses from the included reviews
  • Cochrane intervention review with network meta-analysis: Estimates from primary studies


Systematic Review | Definition, Example & Guide

Published on June 15, 2022 by Shaun Turney . Revised on November 20, 2023.

A systematic review is a type of review that uses repeatable methods to find, select, and synthesize all available evidence. It answers a clearly formulated research question and explicitly states the methods used to arrive at the answer.

The example used throughout this article is a systematic review by Boyle and colleagues, who answered the question “What is the effectiveness of probiotics in reducing eczema symptoms and improving quality of life in patients with eczema?”

In this context, a probiotic is a health product that contains live microorganisms and is taken by mouth. Eczema is a common skin condition that causes red, itchy skin.

Table of contents

  • What is a systematic review?
  • Systematic review vs. meta-analysis
  • Systematic review vs. literature review
  • Systematic review vs. scoping review
  • When to conduct a systematic review
  • Pros and cons of systematic reviews
  • Step-by-step example of a systematic review
  • Frequently asked questions about systematic reviews

A review is an overview of the research that’s already been completed on a topic.

What makes a systematic review different from other types of reviews is that the research methods are designed to reduce bias . The methods are repeatable, and the approach is formal and systematic:

  • Formulate a research question
  • Develop a protocol
  • Search for all relevant studies
  • Apply the selection criteria
  • Extract the data
  • Synthesize the data
  • Write and publish a report

Although multiple sets of guidelines exist, the Cochrane Handbook for Systematic Reviews of Interventions is among the most widely used. It provides detailed guidelines on how to complete each step of the systematic review process.

Systematic reviews are most commonly used in medical and public health research, but they can also be found in other disciplines.

Systematic reviews typically answer their research question by synthesizing all available evidence and evaluating the quality of the evidence. Synthesizing means bringing together different information to tell a single, cohesive story. The synthesis can be narrative ( qualitative ), quantitative , or both.


Systematic reviews often quantitatively synthesize the evidence using a meta-analysis . A meta-analysis is a statistical analysis, not a type of review.

A meta-analysis is a technique to synthesize results from multiple studies. It’s a statistical analysis that combines the results of two or more studies, usually to estimate an effect size .
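As a generic illustration of how such a combination works (standard inverse-variance, fixed-effect pooling; not a formula taken from the example review), each study estimate is weighted by the inverse of its variance:

```latex
% Inverse-variance (fixed-effect) pooling of k study estimates \hat{\theta}_i.
\hat{\theta} = \frac{\sum_{i=1}^{k} w_i \,\hat{\theta}_i}{\sum_{i=1}^{k} w_i},
\qquad
w_i = \frac{1}{\operatorname{SE}(\hat{\theta}_i)^{2}},
\qquad
\operatorname{SE}(\hat{\theta}) = \frac{1}{\sqrt{\sum_{i=1}^{k} w_i}}
```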

A literature review is a type of review that uses a less systematic and formal approach than a systematic review. Typically, an expert in a topic will qualitatively summarize and evaluate previous work, without using a formal, explicit method.

Although literature reviews are often less time-consuming and can be insightful or helpful, they have a higher risk of bias and are less transparent than systematic reviews.

Similar to a systematic review, a scoping review is a type of review that tries to minimize bias by using transparent and repeatable methods.

However, a scoping review isn’t a type of systematic review. The most important difference is the goal: rather than answering a specific question, a scoping review explores a topic. The researcher tries to identify the main concepts, theories, and evidence, as well as gaps in the current research.

Sometimes scoping reviews are an exploratory preparation step for a systematic review, and sometimes they are a standalone project.

A systematic review is a good choice of review if you want to answer a question about the effectiveness of an intervention , such as a medical treatment.

To conduct a systematic review, you’ll need the following:

  • A precise question , usually about the effectiveness of an intervention. The question needs to be about a topic that’s previously been studied by multiple researchers. If there’s no previous research, there’s nothing to review.
  • If you’re doing a systematic review on your own (e.g., for a research paper or thesis ), you should take appropriate measures to ensure the validity and reliability of your research.
  • Access to databases and journal archives. Often, your educational institution provides you with access.
  • Time. A professional systematic review is a time-consuming process: it will take the lead author about six months of full-time work. If you’re a student, you should narrow the scope of your systematic review and stick to a tight schedule.
  • Bibliographic, word-processing, spreadsheet, and statistical software . For example, you could use EndNote, Microsoft Word, Excel, and SPSS.

A systematic review has many pros .

  • They minimize research bias by considering all available evidence and evaluating each study for bias.
  • Their methods are transparent , so they can be scrutinized by others.
  • They’re thorough : they summarize all available evidence.
  • They can be replicated and updated by others.

Systematic reviews also have a few cons .

  • They’re time-consuming .
  • They’re narrow in scope : they only answer the precise research question.

The 7 steps for conducting a systematic review are explained with an example.

Step 1: Formulate a research question

Formulating the research question is probably the most important step of a systematic review. A clear research question will:

  • Allow you to more effectively communicate your research to other researchers and practitioners
  • Guide your decisions as you plan and conduct your systematic review

A good research question for a systematic review has four components, which you can remember with the acronym PICO :

  • Population(s) or problem(s)
  • Intervention(s)
  • Comparison(s)
  • Outcome(s)

You can rearrange these four components to write your research question:

  • What is the effectiveness of I versus C for O in P ?

Sometimes, you may want to include a fifth component, the type of study design. In this case, the acronym is PICOT:

  • Type of study design(s)

In the example review, the question included:

  • The population of patients with eczema
  • The intervention of probiotics
  • In comparison to no treatment, placebo , or non-probiotic treatment
  • The outcome of changes in participant-, parent-, and doctor-rated symptoms of eczema and quality of life
  • Randomized controlled trials, a type of study design

Their research question was:

  • What is the effectiveness of probiotics versus no treatment, a placebo, or a non-probiotic treatment for reducing eczema symptoms and improving quality of life in patients with eczema?

Step 2: Develop a protocol

A protocol is a document that contains your research plan for the systematic review. This is an important step because having a plan allows you to work more efficiently and reduces bias.

Your protocol should include the following components:

  • Background information : Provide the context of the research question, including why it’s important.
  • Research objective (s) : Rephrase your research question as an objective.
  • Selection criteria: State how you’ll decide which studies to include or exclude from your review.
  • Search strategy: Discuss your plan for finding studies.
  • Analysis: Explain what information you’ll collect from the studies and how you’ll synthesize the data.

If you’re a professional seeking to publish your review, it’s a good idea to bring together an advisory committee . This is a group of about six people who have experience in the topic you’re researching. They can help you make decisions about your protocol.

It’s highly recommended to register your protocol. Registering your protocol means submitting it to a database such as PROSPERO or ClinicalTrials.gov .

Step 3: Search for all relevant studies

Searching for relevant studies is the most time-consuming step of a systematic review.

To reduce bias, it’s important to search for relevant studies very thoroughly. Your strategy will depend on your field and your research question, but sources generally fall into these four categories:

  • Databases: Search multiple databases of peer-reviewed literature, such as PubMed or Scopus . Think carefully about how to phrase your search terms and include multiple synonyms of each word. Use Boolean operators if relevant.
  • Handsearching: In addition to searching the primary sources using databases, you’ll also need to search manually. One strategy is to scan relevant journals or conference proceedings. Another strategy is to scan the reference lists of relevant studies.
  • Gray literature: Gray literature includes documents produced by governments, universities, and other institutions that aren’t published by traditional publishers. Graduate student theses are an important type of gray literature, which you can search using the Networked Digital Library of Theses and Dissertations (NDLTD) . In medicine, clinical trial registries are another important type of gray literature.
  • Experts: Contact experts in the field to ask if they have unpublished studies that should be included in your review.
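As an illustration of how synonyms and Boolean operators can be combined (see the Databases point above), the sketch below assembles a generic search string; the terms are loosely inspired by the probiotics and eczema example but are not the strategy actually used in that review, and any real strategy must be adapted to each database’s own syntax.

```python
# Hedged sketch: building a Boolean search string from synonym groups.
# Terms are illustrative only (e.g. MeSH terms and field tags are omitted).
concepts = [
    ["probiotic*", "lactobacillus", "bifidobacterium"],    # intervention synonyms
    ["eczema", "atopic dermatitis"],                       # population synonyms
    ["randomized controlled trial", "randomised", "RCT"],  # study design filter
]

# Synonyms within a concept are OR-ed; different concepts are AND-ed.
query = " AND ".join(
    "(" + " OR ".join(f'"{t}"' if " " in t else t for t in group) + ")"
    for group in concepts
)
print(query)
# -> (probiotic* OR lactobacillus OR bifidobacterium) AND (eczema OR "atopic dermatitis") AND ...
```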

At this stage of your review, you won’t read the articles yet. Simply save any potentially relevant citations using bibliographic software, such as Scribbr’s APA or MLA Generator .

In the example review, Boyle and colleagues searched the following sources:

  • Databases: EMBASE, PsycINFO, AMED, LILACS, and ISI Web of Science
  • Handsearch: Conference proceedings and reference lists of articles
  • Gray literature: The Cochrane Library, the metaRegister of Controlled Trials, and the Ongoing Skin Trials Register
  • Experts: Authors of unpublished registered trials, pharmaceutical companies, and manufacturers of probiotics

Step 4: Apply the selection criteria

Applying the selection criteria is a three-person job. Two of you will independently read the studies and decide which to include in your review based on the selection criteria you established in your protocol . The third person’s job is to break any ties.

To increase inter-rater reliability , ensure that everyone thoroughly understands the selection criteria before you begin.

If you’re writing a systematic review as a student for an assignment, you might not have a team. In this case, you’ll have to apply the selection criteria on your own; you can mention this as a limitation in your paper’s discussion.

You should apply the selection criteria in two phases:

  • Based on the titles and abstracts : Decide whether each article potentially meets the selection criteria based on the information provided in the abstracts.
  • Based on the full texts: Download the articles that weren’t excluded during the first phase. If an article isn’t available online or through your library, you may need to contact the authors to ask for a copy. Read the articles and decide which articles meet the selection criteria.

It’s very important to keep a meticulous record of why you included or excluded each article. When the selection process is complete, you can summarize what you did using a PRISMA flow diagram .
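Purely to illustrate this kind of record-keeping, the sketch below logs a decision and reason for every screened record and tallies the counts needed for a PRISMA-style flow diagram; the records and reasons are invented.

```python
# Hedged sketch of a screening log: one decision per record per phase, with an
# exclusion reason, summarized into counts for a PRISMA-style flow diagram.
from collections import Counter

screening_log = [
    {"id": "rec001", "phase": "title/abstract", "decision": "exclude", "reason": "not an RCT"},
    {"id": "rec002", "phase": "title/abstract", "decision": "include", "reason": ""},
    {"id": "rec002", "phase": "full text", "decision": "exclude", "reason": "wrong population"},
    {"id": "rec003", "phase": "full text", "decision": "include", "reason": ""},
]

for phase in ("title/abstract", "full text"):
    decisions = Counter(r["decision"] for r in screening_log if r["phase"] == phase)
    reasons = Counter(r["reason"] for r in screening_log
                      if r["phase"] == phase and r["decision"] == "exclude")
    print(phase, dict(decisions), dict(reasons))
```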

In the example review, Boyle and colleagues then found the full texts for each of the studies remaining after title and abstract screening. Boyle and Tang read through the articles to decide if any more studies needed to be excluded based on the selection criteria.

When Boyle and Tang disagreed about whether a study should be excluded, they discussed it with Varigos until the three researchers came to an agreement.

Step 5: Extract the data

Extracting the data means collecting information from the selected studies in a systematic way. There are two types of information you need to collect from each study:

  • Information about the study’s methods and results . The exact information will depend on your research question, but it might include the year, study design , sample size, context, research findings , and conclusions. If any data are missing, you’ll need to contact the study’s authors.
  • Your judgment of the quality of the evidence, including risk of bias .

You should collect this information using forms. You can find sample forms in The Registry of Methods and Tools for Evidence-Informed Decision Making and the Grading of Recommendations, Assessment, Development and Evaluations Working Group .

Extracting the data is also a three-person job. Two people should do this step independently, and the third person will resolve any disagreements.

In the example review, Boyle and colleagues also collected data about possible sources of bias, such as how the study participants were randomized into the control and treatment groups.

Step 6: Synthesize the data

Synthesizing the data means bringing together the information you collected into a single, cohesive story. There are two main approaches to synthesizing the data:

  • Narrative ( qualitative ): Summarize the information in words. You’ll need to discuss the studies and assess their overall quality.
  • Quantitative : Use statistical methods to summarize and compare data from different studies. The most common quantitative approach is a meta-analysis , which allows you to combine results from multiple studies into a summary result.

Generally, you should use both approaches together whenever possible. If you don’t have enough data, or the data from different studies aren’t comparable, then you can take just a narrative approach. However, you should justify why a quantitative approach wasn’t possible.
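One standard way to judge whether data from different studies are comparable enough to pool is to quantify heterogeneity with Cochran’s Q and the I² statistic (a general formulation, not something reported in the example review):

```latex
% Cochran's Q and I^2: w_i are the inverse-variance weights and \hat{\theta}
% is the pooled estimate; larger I^2 indicates more between-study heterogeneity.
Q = \sum_{i=1}^{k} w_i \left(\hat{\theta}_i - \hat{\theta}\right)^{2},
\qquad
I^{2} = \max\!\left(0,\; \frac{Q - (k - 1)}{Q}\right) \times 100\%
```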

Boyle and colleagues also divided the studies into subgroups, such as studies about babies, children, and adults, and analyzed the effect sizes within each group.

Step 7: Write and publish a report

The purpose of writing a systematic review article is to share the answer to your research question and explain how you arrived at this answer.

Your article should include the following sections:

  • Abstract : A summary of the review
  • Introduction : Including the rationale and objectives
  • Methods : Including the selection criteria, search method, data extraction method, and synthesis method
  • Results : Including results of the search and selection process, study characteristics, risk of bias in the studies, and synthesis results
  • Discussion : Including interpretation of the results and limitations of the review
  • Conclusion : The answer to your research question and implications for practice, policy, or research

To verify that your report includes everything it needs, you can use the PRISMA checklist .

Once your report is written, you can publish it in a systematic review database, such as the Cochrane Database of Systematic Reviews , and/or in a peer-reviewed journal.

In their report, Boyle and colleagues concluded that probiotics cannot be recommended for reducing eczema symptoms or improving quality of life in patients with eczema.

Note: Generative AI tools like ChatGPT can be useful at various stages of the writing and research process and can help you to write your systematic review. However, we strongly advise against trying to pass AI-generated text off as your own work.


A literature review is a survey of scholarly sources (such as books, journal articles, and theses) related to a specific topic or research question .

It is often written as part of a thesis, dissertation , or research paper , in order to situate your work in relation to existing knowledge.

A literature review is a survey of credible sources on a topic, often used in dissertations , theses, and research papers . Literature reviews give an overview of knowledge on a subject, helping you identify relevant theories and methods, as well as gaps in existing research. Literature reviews are set up similarly to other  academic texts , with an introduction , a main body, and a conclusion .

An  annotated bibliography is a list of  source references that has a short description (called an annotation ) for each of the sources. It is often assigned as part of the research process for a  paper .  

A systematic review is secondary research because it uses existing research. You don’t collect new data yourself.

Cite this Scribbr article


Turney, S. (2023, November 20). Systematic Review | Definition, Example & Guide. Scribbr. Retrieved July 8, 2024, from https://www.scribbr.com/methodology/systematic-review/


  • Open access
  • Published: 04 November 2020

Guidance for overviews of reviews continues to accumulate, but important challenges remain: a scoping review

Michelle Gates, Allison Gates, Samantha Guitard, Michelle Pollock & Lisa Hartling

Systematic Reviews volume 9, Article number: 254 (2020)


Overviews of reviews (overviews) provide an invaluable resource for healthcare decision-making by combining large volumes of systematic review (SR) data into a single synthesis. The production of high-quality overviews hinges on the availability of practical evidence-based guidance for conduct and reporting.

Within the broad purpose of informing the development of a reporting guideline for overviews, we aimed to provide an up-to-date map of existing guidance related to the conduct of overviews, and to identify common challenges that authors face when undertaking overviews.

We updated a scoping review published in 2016 using the search methods that had produced the highest yield: ongoing reference tracking (2014 to March 2020 in PubMed, Scopus, and Google Scholar), hand-searching conference proceedings and websites, and contacting authors of published overviews. Using a qualitative meta-summary approach, one reviewer extracted, organized, and summarized the guidance and challenges presented within the included documents. A second reviewer verified the data and synthesis.

We located 28 new guidance documents, for a total of 77 documents produced by 34 research groups. The new guidance helps to resolve some earlier identified challenges in the production of overviews. Important developments include strengthened guidance on handling primary study overlap at the study selection and analysis stages. Despite marked progress, several areas continue to be hampered by inconsistent or lacking guidance. There is ongoing debate about whether, when, and how supplemental primary studies should be included in overviews. Guidance remains scant on how to extract and use appraisals of quality of the primary studies within the included SRs and how to adapt GRADE methodology to overviews. The challenges that overview authors face are often related to the above-described steps in the process where evidence-based guidance is lacking or conflicting.

The rising popularity of overviews has been accompanied by a steady accumulation of new, and sometimes conflicting, guidance. While recent guidance has helped to address some of the challenges that overview authors face, areas of uncertainty remain. Practical tools supported by empirical evidence are needed to assist authors with the many methodological decision points that are encountered in the production of overviews.


By systematically identifying and synthesizing all available evidence for a particular research question, systematic reviews are considered foundational to evidence-based healthcare [ 1 ]. It is estimated that 8000 systematic reviews were published in 2014 [ 2 ], more than three times the yearly publication rate recorded 10 years earlier [ 3 ]. Around the turn of the century overviews of reviews, which compile data from multiple systematic reviews, emerged to deal with the growing volume of published systematic reviews [ 4 , 5 ]. By taking advantage of existing syntheses, overviews of reviews can create efficiencies [ 6 ] and answer broader research questions [ 7 ].

Many of the methods used to undertake systematic reviews are suitable for overviews of reviews, but their conduct also presents unique methodological challenges [ 7 , 8 ]. Many methods to conduct the various stages of overviews of reviews have been suggested; however, much of the guidance is inconsistent, and evidence-based reporting guidance is lacking [ 9 ]. The relative lack of evidence and consistency in recommendations may underpin the inadequate and inconsistent conduct and reporting of overviews to date [ 4 , 5 , 10 ]. As the science of overviews of reviews continues to develop, authors will need to keep up-to-date with the latest methods research and reporting guidelines [ 11 ].

In an effort to collate available guidance for overview conduct and to inform future evaluations aimed at advancing the science of overviews, in 2016 our team published a scoping review of guidance documents for researchers conducting overviews of reviews of healthcare interventions [ 12 ]. In addition, we completed a methodological systematic review examining the quality of reporting of a sample of overviews of reviews of healthcare interventions published from 2012 to 2016 [ 13 ], updating earlier work by our team [ 4 , 5 ]. To address the gap in guidance for reporting, in 2017 we registered our intent to develop an evidence-based and consensus-based reporting guideline for overviews of reviews (Preferred Reporting Items for Overviews of Reviews (PRIOR)) with the Equator Network [ 14 ]. We used evidence from our aforementioned reviews to inform the preliminary list of items for PRIOR [ 9 , 15 ]. In order to ensure that the items were informed by the most up-to-date available guidance, herein we have updated our existing scoping review to include new guidance documents that have become available in the past four years [ 14 ]. In the future, this work may be extended to develop minimum standards of methodological conduct for overviews.

The aims of this updated scoping review were to (1) locate, access, compile, and map documents that provide explicit methodological guidance for conducting overviews of reviews; (2) identify and describe areas where guidance for conducting overviews of reviews is clear and consistent, and areas where guidance is conflicting or missing; and (3) document common challenges involved in conducting overviews of reviews and determine whether existing guidance can help researchers overcome these challenges [ 12 ].

We updated the scoping review published by Pollock et al. in 2016 [ 12 ]. In doing so, we followed very similar methodology to the original scoping review, with the exception of alterations to the search to increase feasibility. We adhered to the methodological framework described by Arksey and O’Malley [ 15 ] and refined by Levac et al. [ 16 ]. We reported our intent to update the 2016 review in our protocol for the development of the PRIOR guideline [ 9 ]. Reporting adheres to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for scoping reviews (PRISMA-ScR; checklist in Additional file 1 ) [ 17 ].

Eligibility criteria

We included documents produced in any format, language, or year that either (a) provided explicit guidance related to the context or process of any aspect of conducting overviews of reviews examining the efficacy, effectiveness, and/or safety of healthcare interventions, or (b) described an author team’s experience in conducting overviews of reviews of healthcare interventions. When selecting documents for inclusion, we used a pre-established definition of overviews of reviews (Table 1 ). This definition was recently published by Cochrane [ 18 ], and was informed by Pollock’s 2016 scoping review [ 12 ]. We excluded existing overviews, documents that were not about overviews of health interventions (including those reporting on different types of overviews, e.g., diagnostic or etiology), and those that were not intended as methods guidance. We excluded guidance for the reporting of overviews, because we viewed reporting as a distinct concept that is informed by guidance about overview conduct. We also excluded guidance documents that had been updated and superseded since the 2016 review, and conference abstracts for which a full version of the document was available that provided additional information.

We conducted an iterative and extensive search to ensure breadth and comprehensiveness of coverage [ 15 , 16 , 19 , 20 ], with the assistance of a research librarian (Additional file 2 ). The searches for the original scoping review covered the period from January 2010 to December 2013 (for databases), and to 2015 for snowballing and other searches (described in detail within the publication) [ 12 ]. The review included a search of online databases (Medline and Embase via Ovid, DARE and the Cochrane Methods Study Database via the Cochrane Library, Medline via Web of Science), reference tracking in Scopus and PubMed, article alerts from Google Scholar and Web of Science, hand-searching 26 websites and conference proceedings for three conferences, and contacting producers of overviews [ 12 ]. Based on the experience of the lead author of the original scoping review, for feasibility we included the search methods with the highest yield in this update (i.e., reference tracking and hand-searching, while eliminating the term-based database search). In the previous scoping review, 71% of included documents were located using these methods [ 12 ]. This allowed us to complete the scoping review on an expedited timeline, while including the most recent guidance as it became available.

On 7 March 2019, we conducted an iterative reference tracking (“snowballing”) search [ 19 , 20 ]. We used 46 target articles, including all published articles and abstracts cited in the 2016 scoping review [ 21 ], as well as other recent relevant articles known to the research team. For each target article, we searched for “citing” references in Google Scholar and Scopus and for “similar articles” in PubMed from 1 January 2014 to present. Following the initial searches, the “citing” references search in Scopus and “similar articles” search in PubMed were turned into monthly e-mail alerts. We augmented the reference tracking with searches of Google Scholar, using terms that are commonly used to describe overviews, such as “overview of reviews,” “umbrella review,” and “review of systematic reviews.” The initial search was run on 1 March 2019 and restricted to documents available since 2014, corresponding to the end date of the previous database searches. The search was then turned into an e-mail alert. The last date searched for all electronic sources was 1 March 2020.

In addition to the electronic searches, on 6–12 February 2019, we (MG, AG, SG) hand-searched the websites of 59 organizations (33 additional since the original review) that had conducted at least one overview of reviews and of major evidence synthesis centres (Additional file 2 ). We also hand-searched the conference proceedings from four international conferences: the International Cochrane Colloquium (2015–present), Health Technology Assessment International (2017–present), the Canadian Agency for Drugs and Technologies in Health Symposium (2015–present), and the Global Evidence Summit (added in this iteration; 2017). These searches were updated on 3–5 February 2020. We also reviewed the reference lists of newly included documents.

On 26 February 2019, we e-mailed content experts to inquire about additional relevant studies. We contacted the primary or senior authors of a random sample of 100 overviews published between 2012 and 2016; this was the same sample used in our team’s aforementioned methodological systematic review examining the quality of reporting of overviews [ 13 ]. We also contacted 22 managing editors of Cochrane Review Groups that had published at least one overview of reviews. If we did not receive a reply, we sent a second e-mail on 27 March 2019 before ceasing contact. We received responses from 51 authors and 19 managing editors.

Document selection

Two independent reviewers (MG and SG) screened the titles and abstracts of documents retrieved by the electronic searches in Excel. We retrieved the full texts of all potentially relevant documents identified by either of the reviewers. One reviewer (MG) scanned the websites and retrieved the full texts of potentially relevant documents, while another (SG) retrieved the full texts of documents recommended by content experts. The two reviewers independently scanned the conference proceedings and retrieved the full texts for all believed to be potentially relevant by either reviewer. Both reviewers independently reviewed all full texts and agreed on those that were ultimately included, with disagreements resolved through discussion or the involvement of a third reviewer (AG).

Data extraction and synthesis

Either of three reviewers (MG, SG, or AG) read the included documents and extracted and synthesized relevant text using a qualitative meta-summary approach [ 22 , 23 ]. This is a quantitatively oriented approach to aggregating qualitative findings that includes extracting, editing, grouping, formatting, and presenting findings across groups of related documents [ 22 , 23 , 24 ]. The first reviewer read each of the documents and highlighted text providing guidance on any stage of the overview of reviews process and/or describing challenges in undertaking an overview of reviews. Each full text was then read by a second reviewer who confirmed and/or edited the highlighting and extracted relevant text to a data extraction file in Microsoft Excel. The first reviewer then verified the data extraction and corrected errors and/or omissions. Finally, one reviewer edited the guidance and challenges extracted from all documents to ensure that they were presented in a way that was accessible to readers while preserving their underlying content and meaning [ 24 ].

Next, we followed a two-stage approach to group similar findings. We began by grouping all documents produced by the same research group to avoid giving extra weight to statements included in multiple documents from the same group [ 24 ]. Then, we grouped statements across research groups by stage of the overview process [ 24 ] in a way that aligned with the 2016 version of this scoping review [ 12 ]. These stages included items related to the context of conducting an overview (e.g., types of questions that may be answered, items to consider, author team composition, and target audience), and items related to the process of conducting overviews (e.g., protocol development, search, selection, data extraction, quality appraisal, synthesis, assessing certainty, developing conclusions, updating the overview). Finally, within each group, we developed a refined list of guidance statements by editing the findings to organize topically similar statements together and to eliminate redundancies, while conserving ambiguities.

Finally, we developed a narrative summary of the extracted guidance by stage of the overview process, and challenges. For the guidance statements, we also calculated the frequency and intensity effect sizes [ 23 , 24 ]. We calculated frequency effect sizes by dividing the number of research groups contributing guidance on a topic area by the total number of research groups. We calculated intensity effect sizes by dividing the number of topic areas addressed by each research group by the total number of topic areas.
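Written out as formulas (our notation, paraphrasing the calculation described by the authors):

```latex
% Frequency effect size (FES) for topic t and intensity effect size (IES) for
% research group g, as described in the text; symbols are ours, not the authors'.
\mathrm{FES}_{t} = \frac{\text{number of research groups providing guidance on topic } t}
                        {\text{total number of research groups}},
\qquad
\mathrm{IES}_{g} = \frac{\text{number of topic areas addressed by group } g}
                        {\text{total number of topic areas}}
```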

The searches retrieved 6173 records that were screened by title and abstract, after which 5969 were excluded. After incorporating the 52 documents included in the original scoping review (41 reporting guidance and 11 reporting experiences), we assessed the full text of 254 and included 28 new documents (Fig. 1 ; studies excluded at full-text review in Additional file 3 ). Four guidance documents that were included in the original scoping review were excluded and replaced with three new documents that updated and superseded the previous guidance. There are now 77 documents produced by 34 research groups included in this scoping review, which were published (or became available) between 2009 and 2020. These documents, with their abbreviations (used within the results and tables), are listed in Additional file 4 , and labelled as “A1,” “A2”… throughout the review.

Figure 1. Flow diagram of document selection

Of these, 59 documents (21 new since the previous iteration) produced by 24 research groups provided explicit methods guidance for conducting overviews of reviews, and 20 documents (9 new) produced by 16 research groups described author teams’ experience conducting published overviews of reviews (Table 2). Two documents reported both methodological guidance and authors’ experiences conducting overviews (A17, A47). There were 30 (39%) conference presentations, 27 (35%) journal articles, 9 (12%) internal documents, 5 (6%) book sections or chapters, 2 (3%) websites, and one each of an editorial, dissertation, case report, and interview transcript. In the sections that follow, we summarize the guidance and challenges provided in these documents; this includes all documents located to date (i.e., incorporating both those located in the Pollock 2016 review [12] and our update).

Guidance for conducting overviews of reviews

Since the previous version of this review [12], 21 new guidance documents related to the context or conduct of overviews of reviews became available (ARCHE: A1, A2, A3; KCE: A4; CSU: A5; CChile: A17; CMIMG: A23, A24, A25, A27, A28; GCU: A42; HarvU: A43; JBI: A44; KCL: A47; NEST: A49; SUR: A50; UConn: A53, A54; UCyp: A55; UOx: A56). The new documents contained guidance relating to all 15 topic areas included in the previous review. We also added two new topic areas: developing and registering the overview of reviews protocol, and updating the overview of reviews. Table 3 shows a map of the guidance provided by these documents. The median (range) number of topic areas addressed by each research group was 8 (1 to 17); three groups addressed ≥ 15 of the topic areas in their guidance documents (CMIMG, JBI, SUR). The median (range) number of groups reporting on each topic area was 11 (3 to 21). In the sections that follow, we provide a narrative summary of the available guidance for each stage of the overview process.

Types of questions that can be answered using the overview of reviews format

There is limited guidance on the types of questions that can be answered using the overview of reviews format (CMIMG: A27, SUR: A50), with most (n = 7) groups citing CMIMG guidance in their documents (CHF: A12, DukeU: A39, GCU: A42, JBI: A44, NCHS: A49, TCD: A51, UBirm: A52). Cochrane indicates that overviews of reviews can be used to summarize information on “different interventions for the same condition; different outcomes for the same intervention in the same condition; the same intervention for different conditions or populations; adverse effects across multiple conditions” (CMIMG: A27). Chapter 5 of Biondi-Zoccai’s book “Umbrella Reviews” cites similar questions, with the addition of summarizing information on the “adverse effects of multiple interventions for a specific condition” (SUR: A50).

Choosing between conducting an overview of reviews and a systematic review

The available guidance states that overviews of reviews may be considered when the purpose is to map, synthesize, or explore discrepancies in the available systematic review evidence (JBI: A44, UOx: A56, SUR: A50). Overviews of reviews might be most appropriate when the scope of the research question is broad (CMIMG: A27, EPPI: A40) and an expedited approach is needed (KCE: A4, CMIMG: A27, EPPI: A40). A pre-requisite to performing an overview of reviews is the availability of multiple high quality, up-to-date systematic reviews covering all interventions of interest (KCE: A4, CMIMG: A27, JBI: A44, UOx: A56, UConn: A53). Overviews of reviews are rarely appropriate for identifying research gaps (UOx: A56), ranking interventions, or making indirect comparisons (CMIMG: A27). Decision tools aimed at assisting authors in deciding between conducting a systematic review and an overview of reviews are available in Ballard 2017 (A56) and from Cochrane (A33).

Items to consider before conducting an overview of reviews

Before conducting an overview, several groups recommend first ensuring that the topic is clinically important (CHF: A15, CSU: A5, GCU: A42, KCL: A47). Overviews of reviews might not be the best approach when the field is new or rapidly evolving (EPPI: A40), but can be ideal for exploring inconclusive evidence across multiple systematic reviews (CSU: A5, HarvU: A43, KCL: A47). Potential authors should scope the literature to ensure that there are up-to-date, high-quality systematic reviews available on all key interventions (CHF: A15, CSU: A5, CMIMG: A27, JBI: A44, KCL: A47, SUR: A50, UConn: A53, UOx: A56, WJNR: A57, WHU: A59), and that it would make sense to combine these in an overview of reviews (CHF: A12, CMIMG: A27). Authors should also search for existing overviews of reviews, including those still in production, to prevent research waste (SUR: A50). Important resource and organizational factors to consider include the software that will be used for data management, a realistic time frame, and the size and composition of the author team (SUR: A50, TCD: A51, UConn: A53).

Author team composition and roles

Several groups recommend assembling a multidisciplinary author team, which ideally would include a project coordinator (CHF: A11), methodologist (CHF: A16, CMIMG: A27, JBI: A44, TCD: A51, UConn: A53, WJNR: A57), content expert (e.g., clinician) (CHF: A16, CMIMG: A27, DukeU: A39, SUR: A50, TCD: A51), and relevant stakeholders (e.g., patients, decision-makers) (SUR: A50). An information specialist (CHF: A16, CMIMG: A27, SUR: A50, UConn: A53) and/or statistician (CHF: A16, CMIMG: A27, SUR: A50, UConn: A53) may also be needed. At least two authors should be directly involved in day-to-day operations, because many steps should be verified or performed independently in duplicate (JBI: A44, SUR: A50, UConn: A53). If non-English-language systematic reviews are included, it may be necessary to engage first-language speakers (SUR: A50).

Target audience of the overview of reviews

Available guidance indicates that the target audience for the overview of reviews may include clinicians and other healthcare providers (CHF, CMIMG: A27, EPOC: A37, CPHG: A38, TCD: A51, WJNR: A57, WHU: A59), researchers (CMIMG: A27, EPOC: A37, DukeU: A39, WJNR: A57), informed consumers (e.g., patients and caregivers) (CMIMG: A27, WHU: A58), policymakers and other healthcare decision-makers (CHF: A7, CMIMG: A27, EPOC: A37, CPHG: A38, EPPI: A40, GCU: A42, JBI: A44, SUR: A50, WJNR: A57, WHU: A59, UCyp: A55), and funding agencies (CMIMG: A27).

Developing and registering an overview of reviews protocol

Guidance documents specify that all pre-planned methods should be developed in collaboration with key stakeholders and be clearly defined (CMIMG: A27, GCU: A42, JBI: A44, KCL: A47, SUR: A50, UConn: A53, UCyp: A55). The protocol should also delineate the goals of the overview of reviews (GCU: A42), the outcomes and effect measures of interest (CMIMG: A27), and the knowledge translation strategy (GCU: A42). Several guidance documents indicate that the protocol should be peer-reviewed and/or published (JBI: A44, KCL: A47, UConn: A53, UCyp: A55), and most recommend that it be registered in an open-access database (HarvU: A43, JBI: A44, KCL: A47, SUR: A50, UConn: A53, UCyp: A55).

Specifying the scope of the overview of reviews

Several groups indicate that the scope should be specific and pre-defined based on elements of the populations, interventions, comparators, and outcomes of interest (CMIMG: A27, EPOC: A37, CPHG: A38, JBI: A44, NOKC: A49, SUR: A50, TCD: A51, WJNR: A57). The scope may be narrow, but is often broad, such that the included systematic reviews could be diverse (CMIMG: A27, CPHG: A38, DukeU: A39, JBI: A44, NEST: A48, SUR: A50). In deciding the scope, authors should be aware that there may be full or partial overlap with the scope of potentially eligible systematic reviews (EPPI: A40). The scope should therefore be determined with time and resource limits in mind (UConn: A53). When there is substantial heterogeneity in the questions posed by individual systematic reviews, it might become necessary to restrict the scope of the overview of reviews (CMIMG: A27).

Searching for systematic reviews (and potentially primary studies)

Guidance on search procedures indicates that Cochrane systematic reviews can be retrieved via the Cochrane Database of Systematic Reviews (CHF: A15, KCE: A4, CMIMG: A27, TCD: A51, UConn: A53). To locate non-Cochrane systematic reviews, it is recommended that authors search multiple databases (e.g., Medline, EMBASE) (CHF: A15, KCE: A4, CMIMG: A27, EPOC: A37, JBI: A44, NEST: A48, SUR: A50, UConn: A53, WJNR: A57) and registries (e.g., Epistemonikos, PROSPERO) (CMIMG: A27, KCE: A4, CPHG: A38, JBI: A44, NEST: A48, SUR: A50, TCD: A51, UConn: A53), hand-search relevant sources (e.g., webpages) (KCE: A4, JBI: A44, SUR: A50, TCD: A51), screen reference lists (CMIMG: A27, JBI: A44, TCD: A51), and contact relevant individuals and organizations (CMIMG: A27) to find both commercially and non-commercially published systematic reviews. To improve the precision of database searches, systematic review-specific search terms, MeSH headings, and validated filters should be used (CHF: A15, KCE: A4, CMIMG: A27, EPOC: A37, DukeU: A39, JBI: A44, NEST: A48, SUR: A50, TCD: A51, UConn: A53). Authors may consider having search strategies peer-reviewed prior to implementation (TCD: A51). There is a lack of agreement about imposing restrictions based on publication status or language (CMIMG: A27, JBI: A44, SUR: A50, TCD: A51, UConn: A53). Several groups indicate that imposing a date restriction (e.g., past 10 years; pre-1990) could be appropriate (CPHG: A38, JBI: A44, NEST: A48, SUR: A50, TCD: A51, UBirm: A52). There is debate about whether authors should search for primary studies to fill “gaps” in the systematic review evidence or to ensure the currency of the overview of reviews (CSU: A5, CMIMG: A27, CPHG: A38, DukeU: A39, EPPI: A40, NOKC: A49, SUR: A50, UCyp: A55, WHU: A59).

Selecting systematic reviews for inclusion (and potentially primary studies)

Guidance on selecting systematic reviews (and potentially primary studies) for inclusion indicates the importance of clear, pre-defined clinical and methodological criteria (ARCHE, KCE, CHF, CMIMG, CSU, EPOC, CPHG, DukeU, EPPI, HarvU, JBI, NEST, NOKC, SUR, TCD, UConn, UOx, WJNR, WHU, UCyp). Authors need to define the “systematic reviews” and/or other types of research syntheses that will be included (CSU: A5, CMIMG: A27, EPOC: A37, HarvU: A43, JBI: A44, SUR: A50, UConn: A53, UCyp: A55). Screening should be a transparent and objective two-stage process (titles/abstracts, then full texts) (KCE: A4, JBI: A44, NEST: A48, TCD: A51, UConn: A53), preceded by pilot testing (KCE: A4). The process should be performed independently by at least two reviewers, with a procedure in place to resolve disagreements (KCE: A4, EPOC: A37, JBI: A44, SUR: A50, TCD: A51, UConn: A53). When the scope of the overview of reviews differs from that of the available systematic reviews, authors may need to assess the relevance of their included primary studies and include only those that match the overview of reviews’ objective (CHF: A15, CMIMG: A27). Several groups indicate that overview of reviews authors may decide to include only high-quality systematic reviews (CHF: A15, DukeU: A39, EPPI: A40, JBI: A44, NEST: A48, NOKC: A49, SUR: A50, TCD: A51, UConn: A53, UOx: A56, WHU: A58), but this risks introducing bias (EPPI: A40, SUR: A50, UConn: A53, UOx: A56). There is diverse guidance about how best to manage overlapping and/or discordant systematic reviews (ARCHE: A3, CHF: A15, CMIMG: A27, DukeU: A39, SUR: A50, UConn: A54). Authors may decide to include all systematic reviews regardless of overlap, or to include only the most recent, most comprehensive, most relevant, or highest-quality systematic reviews (ARCHE: A3, CHF: A15, CMIMG: A27, DukeU: A39, SUR: A50). An evidence-based decision tool is now available to help researchers consider these options (ARCHE: A3).

Should an overview of reviews include non-Cochrane systematic reviews?

Few research groups provided guidance on whether overviews of reviews should be restricted to Cochrane systematic reviews (CHF, CMIMG, EPPI, JBI). Two groups associated with Cochrane advocate for including only Cochrane systematic reviews if possible, but non-Cochrane systematic reviews might be considered if the available Cochrane reviews do not cover all of the important interventions (CHF: A15, CMIMG: A27). Two other groups advocate for the inclusion of both Cochrane and non-Cochrane systematic reviews, to ensure the breadth of coverage that is desired in the overview (EPPI: A40, JBI: A44). Including non-Cochrane systematic reviews increases comprehensiveness, but these systematic reviews might be of lower quality with less detailed reporting, and are likely to introduce primary study overlap, which adds complexity to the overview of reviews (CHF: A15, CMIMG: A27).

Assessing the quality or risk of bias of the included systematic reviews

Much of the available guidance indicates that it is important to appraise the quality of the included systematic reviews using a validated tool (ARCHE, KCE, CHF, CMIMG, CSU, EPOC, CPHG, DukeU, EPPI, HarvU, JBI, KCL, NEST, NOKC, SUR, TCD, UBirm, UConn, WJNR, UCyp). Several groups recommend independent assessment by at least two reviewers, with a process for resolving discrepancies (CMIMG: A27, CPHG: A38, DukeU: A39, JBI: A44, NEST: A48, NOKC: A49). Two groups recommend pilot testing (CMIMG: A27, UConn: A53), and another notes that authors should develop pre-defined decision rules (ARCHE: A2). There was no consensus on the ideal tool to use; fourteen groups mentioned AMSTAR (ARCHE: A2, KCE: A4, CHF: A15, EPOC: A37, CPHG: A38, DukeU: A39, JBI: A44, KCL: A47, NEST: A48, SUR: A50, TCD: A51, UConn: A53, WJNR: A57, UCyp: A55), with more recent guidance emphasizing AMSTAR 2 and ROBIS (KCE: A4, CHF: A13, JBI: A44, UConn: A53) which were released in 2017 and 2016 respectively. One group recommends assessing the quality of the systematic reviews as a whole, awarding points only if the amount and quality of information is sufficient for use at the overview of reviews level, and in the case of systematic reviews with multiple research questions, assessing only the quality for the comparison-outcome of interest for the overview of reviews (ARCHE: A2).

Collecting and presenting data on descriptive characteristics of included systematic reviews (and their primary studies)

Several groups recommend that data on descriptive characteristics be collected independently by at least two reviewers, with a process in place for resolving discrepancies (CMIMG: A27, EPOC: A37, JBI: A44, SUR: A50, UConn: A53). One group indicates that one reviewer with verification might occasionally be adequate (SUR: A50), and several recommend using a pilot-tested form (CMIMG: A27, EPOC: A37, JBI: A44, SUR: A50, UConn: A53, WJNR: A57). Important data to be collected from the systematic reviews include citation details, search information, objectives, populations, setting, scope, risk of bias tool used, analysis methods, and outcomes of the included systematic reviews, as well as information about their included studies (KCE: A4, CHF: A15, CMIMG: A27, JBI: A44, NEST: A48, SUR: A50, TCD: A51). Descriptive characteristics of the systematic reviews should be presented narratively and/or in a table, in adequate detail to support each systematic review’s inclusion in the overview of reviews and to inform the applicability of their findings (CHF: A15, CMIMG: A27, EPOC: A37, JBI: A44, NEST: A48, SUR: A50).

Collecting and presenting data on quality of primary studies contained within the included systematic reviews

Available guidance documents specify the importance of collecting and presenting data on the quality of the primary studies contained within the included systematic reviews (KCE, CHF, CMIMG, DukeU, EPPI, HarvU, JBI, NOKC, SUR, WJNR), but specific direction on how to do so is scant and conflicting. Six groups recommend preferentially extracting risk of bias assessments directly as reported in the included systematic reviews (CMIMG: A27, CHF: A15, NOKC: A49, EPPI: A40, SUR: A50, WJNR: A57). Three groups provide advice on dealing with systematic reviews that fail to report quality assessments, or assessments that seem unreliable, are discordant, or have been done using heterogeneous tools (CHF: A15, CMIMG: A27, SUR: A50). In these cases, authors could consider supplementing existing quality assessments (i.e., performing assessments for studies where this information is missing), or re-doing all quality assessments at the overview of reviews level (CHF: A15, CMIMG: A27, SUR: A50). One group indicates that it is important to extract and present domain-specific assessments when possible (CMIMG: A27), while others indicate that a summary of overall quality would be adequate (JBI: A44, NOKC: A49, SUR: A50).

Collecting, analyzing, and presenting outcome data

Several groups recommend that outcome data be collected independently by at least two reviewers, with a process in place for resolving discrepancies (CMIMG: A27, EPOC: A37, JBI: A44, SUR: A50, UConn: A53). One group indicates that one reviewer with verification might occasionally be adequate (SUR: A50), and several recommend using a pilot-tested form (CMIMG: A27, EPOC: A37, JBI: A44, SUR: A50, UConn: A53, WJNR: A57). Most guidance documents recommend extracting data from the systematic reviews themselves (KCE: A4, CHF: A15, CMIMG: A27, EPOC: A37, EPPI: A40, JBI: A44, SUR: A50, UConn: A53). However, it is also noted that when important information is missing, authors may consider contacting systematic review authors, re-extracting data directly from the primary studies, or simply acknowledging the missing information (KCE: A4, CHF: A15, CMIMG: A27, EPPI: A40, JBI: A44, SUR: A50, UConn: A53). Prior to embarking on synthesis, many groups highlight the importance of investigating the included systematic reviews for primary study overlap, to avoid double-counting (KCE: A4, CHF: A15, CMIMG: A27, EPOC: A37, DukeU: A39, EPPI: A40, JBI: A44, SUR: A50, TCD: A51, UConn: A54, WHU: A58). Four groups recommend developing a citation matrix to visually map overlap and calculating the corrected covered area (CCA) (CChile: A17, CMIMG: A27, UConn: A54, WHU: A58). Recent guidance recommends further investigation by calculating the CCA per pair of systematic reviews (CChile: A17) or per outcome (CMIMG: A27, UConn: A54), and examining overlapping systematic reviews further to understand whether reasons for overlap and/or discordant findings can be established (UConn: A54). An explanation of the size and number of overlapping studies, and the weight that these contribute to each analysis, should be included in the presentation of results and/or discussion (CChile: A17, CMIMG: A27, WHU: A59).
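To illustrate the overlap assessment, the sketch below (our own worked example with hypothetical review and study labels, not drawn from the cited guidance documents) builds a simple citation matrix and computes the corrected covered area using the formula introduced by Pieper et al. 2014 [37]: CCA = (N - r) / (r*c - r), where N is the total number of primary study inclusions counted across reviews, r is the number of unique primary studies (rows of the matrix), and c is the number of systematic reviews (columns).

```python
# Illustrative sketch: citation matrix and corrected covered area (CCA).
# Review and study labels are hypothetical placeholders.

reviews = {
    "SR1": {"study1", "study2", "study3"},
    "SR2": {"study2", "study3", "study4"},
    "SR3": {"study1", "study4"},
}

unique_studies = sorted(set().union(*reviews.values()))

# Citation matrix: rows = primary studies, columns = systematic reviews (1 = included).
citation_matrix = {
    study: {name: int(study in included) for name, included in reviews.items()}
    for study in unique_studies
}

N = sum(sum(row.values()) for row in citation_matrix.values())  # inclusions counted with repetition
r = len(unique_studies)                                         # unique primary studies (rows)
c = len(reviews)                                                # systematic reviews (columns)

cca = (N - r) / (r * c - r)  # corrected covered area (Pieper et al. 2014)
print(f"CCA = {cca:.1%}")    # higher values indicate greater primary study overlap
```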

The available guidance recommends two main methods of data analysis and presentation. The first is to summarize the data as they were originally presented within the systematic reviews (KCE: A4, CSU: A5, CHF: A15, CMIMG: A27, EPOC: A37, DukeU: A39, EPPI: A40, HarvU: A43, JBI: A44, NEST: A48, NOKC: A49, SUR: A50, WJNR: A57). If choosing this approach, it can be helpful to convert the results presented across the systematic reviews to one common summary statistic (CSU: A5, CMIMG: A27, HarvU: A43, SUR: A50, UConn: A53, UCyp: A55). The second method is to re-analyze the data in a way that differs from how they were analyzed and presented in the included systematic reviews (CHF: A15, CMIMG: A27, EPOC: A37, DukeU: A39, EPPI: A40, HarvU: A43, KCL: A47, SUR: A50, TCD: A51, UBirm: A52, UCyp: A55). Guidance from Cochrane recommends presenting the outcome data in a way that prevents readers from making informal indirect comparisons across the systematic reviews (CMIMG: A27). One guidance document recommended that a brief, easily accessible, and easy-to-share summary of the evidence be made available (GCU: A42).

Assessing the certainty/quality of the body of evidence

Most (n = 10/13, 77%) of the available guidance documents recommend using the Grading of Recommendations Assessment, Development, and Evaluation (GRADE) approach to appraise the certainty of the body of evidence (KCE: A4, CHF: A15, CMIMG: A27, DukeU: A39, JBI: A44, NEST: A48, NOKC: A49, SUR: A50, TCD: A51, WHU: A59), though formal guidance on how to apply GRADE in the context of an overview of reviews is not yet available (SUR: A50, WHU: A59). Two groups indicate that GRADE appraisals should be presented for each pre-defined important outcome (CMIMG: A27, CHF: A15). Three groups indicate that GRADE assessments should ideally be extracted directly from the included systematic reviews but that, when these are unavailable, authors might consider re-doing GRADE assessments themselves (CHF: A15, CMIMG: A27, SUR: A50). One group (CMIMG: A27) indicated that authors might also need to consider re-performing GRADE appraisals when data have been re-analyzed, when the scope of the overview of reviews differs from that of the included systematic reviews, or when there are concerns about the quality of the appraisals presented.

Interpreting outcome data and drawing conclusions

Guidance on interpreting outcome data and drawing conclusions is relatively sparse. The available documents indicate that conclusions should provide direct answers to the overview of reviews’ objectives (JBI: A44), comment on the quality and quantity of available information (KCE: A4, EPOC: A37, DukeU: A39, HarvU: A43, WJNR: A57), and be warranted based on the strengths and weaknesses of the included systematic reviews and their findings (KCE: A4, EPOC: A37, DukeU: A39, JBI: A44, WJNR: A57). Recommendations for both research and practice should be provided (JBI: A44, WJNR: A57). As noted in the “Collecting, analyzing, and presenting outcome data” section, authors should not make informal indirect comparisons across systematic reviews, or use wording that may encourage readers to make these types of comparisons (CMIMG: A27). Authors should indicate whether further research is likely to alter the conclusions (WHU: A59), or whether no further research is needed (WJNR: A57, WHU: A59).

Updating the overview

Few guidance documents address updating the overview; those that exist indicate that overviews of reviews should be regularly updated (CMIMG, GCU, SUR), but how and when this should be done is unclear. One group recommends that overviews of reviews be updated when the conclusions of any of their included systematic reviews change or when new systematic reviews of relevance are published (CMIMG: A27). It is unclear, however, how authors would keep apprised of such occurrences.

Since the previous version of this review [12], we identified 9 new documents describing challenges related to undertaking overviews of reviews of healthcare interventions (A17, A47, A60, A66, A70, A71, A74, A75, A76). The challenges described in these documents, in addition to those from the methodological guidance documents, expand upon those previously reported in Pollock et al.’s 2016 review [12]. The majority of documents report on challenges related to selecting systematic reviews, and potentially primary studies, for inclusion (n = 15); collecting and presenting descriptive characteristics (n = 11); assessing the certainty of evidence (n = 11); and collecting, analyzing, and presenting outcome data (n = 23). These challenges tend to mirror areas in which consensus remains lacking among the currently available guidance. In particular, authors are still challenged by whether to include primary studies in their overviews, and by how best to identify, address, and present information about primary study overlap at the study selection or data extraction and analysis phases of the overview of reviews. A summary of all reported challenges is shown in Table 4.

This scoping review has revealed a steady accumulation of new guidance and provides a single source where author teams can locate information to help them decide if, when, and how to embark on an overview of reviews. New guidance that has become available over the past 5 years has helped to resolve some common challenges inherent in the production of overviews of reviews. Important developments include a decision tool for selecting systematic reviews for inclusion in overviews of reviews [25] and expanded guidance on handling primary study overlap at the analysis stage [26, 27]. Despite marked progress, several areas continue to be characterized by inconsistent or insufficient guidance. For example, there is ongoing debate about whether, when, and how supplemental primary studies should be included in overviews of reviews. Empirical evidence is lacking on the optimal tool for assessing risk of bias or methodological quality of included systematic reviews, and how these tools might best be applied in overviews of reviews [28, 29]. Guidance remains limited on how to extract and use appraisals of the quality of primary studies within the included systematic reviews and how to adapt GRADE methodology to overviews of reviews [7, 21]. The challenges that overview authors reportedly face are often related to the steps where guidance is inadequate or conflicting.

Authors report facing challenges in the more complex steps of the overview process (and those that may differ most from systematic reviews), where guidance is either lacking (e.g., how to apply GRADE methodology to overviews) or where there is still no consensus on the preferred approach (e.g., how best to identify, manage, and present information on overlap). When guidance is available, it most often enumerates options for dealing with these challenges that balance methodological rigor, comprehensiveness, and feasibility. There is insufficient empirical evidence, however, to fully understand how many of these methodological decisions may impact reviewer workload, the validity of the results and conclusions of overviews of reviews, and their relevance for healthcare decision-makers [30]. Since no minimum standard of conduct and reporting yet exists, published overviews of reviews use highly heterogeneous methodologies [11, 30, 31, 32] and are often poorly and inconsistently reported [4, 5, 10]. The propagation of substandard overviews of reviews has the potential to undermine their legitimacy as an important tool for healthcare decision-making, and substantiates the urgent need to develop evidence-based conduct and reporting standards akin to those that exist for systematic reviews [33, 34]. Studies evaluating the impact of methodological decisions on these outcomes have recently begun to emerge [35]. Authors would benefit from practical decision tools to guide them through the rigor-to-feasibility trade-offs that are common in overviews of reviews.

Researchers wishing to undertake an overview of reviews of healthcare interventions in 2020 are still challenged by a fragmented body of guidance documentation, but this should not overshadow the substantial developments in the science of overviews of reviews that have occurred over the past few years. In particular, both Cochrane [18] and the Joanna Briggs Institute [36] have released much-needed updated handbook chapters that incorporate the most recent empirical evidence for producing overviews of reviews. Authors may use these stand-alone guidance documents to inform the planning of all stages of the overview of reviews. A decision tool published in 2019 can help researchers make informed decisions about managing primary study overlap at the selection stage of the overview of reviews [25]. How overview of reviews authors might best explore and present data on primary study overlap has become an area of increased research interest [26, 27, 37, 38]. An evidence-based and consensus-based reporting guideline for overviews of reviews is currently in development [9]. The ongoing synthesis of accruing guidance for overviews of reviews, together with primary research assessing the impact of methodological decisions in the more highly debated steps, will support the development of an evidence-based and consensus-based set of minimum methodological expectations for their conduct. The development of these minimum standards will, in turn, help overview authors to overcome many of the current challenges in the overview process.

Producing overviews of reviews is inherently demanding given the need to make sense of multiple levels of evidence (i.e., the systematic review level and primary study level) and overcome challenges for which there is often no agreed-upon solution [7]. One of the proposed advantages of overviews of reviews is that they can create efficiencies by making use of evidence already compiled in systematic reviews [6, 7]. As guidance has accrued to assist authors in surmounting common challenges, however, it has become increasingly clear that suggested methods for undertaking overviews of reviews can require substantial expertise, time, and resources. Indeed, authors report challenges at all phases of the overview process. Data extraction, quality appraisal, and synthesis of data from systematic reviews can be extremely challenging and time-consuming because the reporting quality of systematic reviews is highly variable [2]. When authors are unable to extract all of the desired information from systematic reviews themselves, they may decide to return to the primary studies, but this can extend the timeline and overall work required for the overview substantially. Otherwise, authors must accept that the overview may be missing important information. Even when extracting information directly from the available systematic reviews, making sense of discordant results and conclusions can be tedious. For these reasons, it is important for authors to develop a good understanding of the available systematic reviews before embarking on the overview, and plans to deal with missing or discordant information should be devised at the protocol stage [2, 32, 33, 34].

Strengths and limitations

We used a transparent and rigorous approach to summarize information from all available guidance documents for overviews of reviews of healthcare interventions, as well as reports of author experiences. The guidance summarized herein may not be directly applicable to other types of overviews of reviews (e.g., of diagnostic accuracy or qualitative evidence). We used the search strategies that offered the highest yield in the original version of this scoping review and located much of the guidance within the grey literature (e.g., websites, conference proceedings). It is possible that some guidance has been missed by not employing term-based database searches, and the results may have differed if another set of seed articles had been used. We limited this possibility by employing an iterative and rigorous search strategy (i.e., alerts in multiple databases and hand-searching of multiple sources).

The rising popularity of overviews of reviews has been accompanied by a steady accumulation of new and sometimes conflicting guidance, yet several areas of uncertainty remain. These findings are being used to inform the development of a reporting guideline for overviews of reviews, which aims to support the high quality and transparency of reporting that is needed to substantiate overviews as a robust source of evidence for healthcare decision-making. Empirical research is needed to provide the data necessary to support the development of a minimum set of methodological expectations for the conduct of overviews of reviews.

Availability of data and materials

The datasets analyzed during the current study are available from the corresponding author on reasonable request.

Abbreviations

CCA: Corrected covered area

GRADE: Grading of Recommendations, Assessment, Development and Evaluation

PRIOR: Preferred Reporting Items for Overviews of Reviews

References

1. Sur RL, Dahm P. History of evidence-based medicine. Indian J Urol. 2011;27(4):487–9.

2. Page MJ, Shamseer L, Altman DG, Tetzlaff J, Sampson M, Tricco AC, et al. Epidemiology and reporting characteristics of systematic reviews of biomedical research: a cross-sectional study. PLoS Med. 2016;13(5):e1002028.

3. Moher D, Tetzlaff J, Tricco AC, Sampson M, Altman DG. Epidemiology and reporting characteristics of systematic reviews. PLoS Med. 2007;4(3):e78.

4. Hartling L, Chisholm A, Thomson D, Dryden DM. A descriptive analysis of overviews of reviews published between 2000 and 2011. PLoS One. 2012;7(11):e49667.

5. Pieper D, Buechter R, Jerinic P, Eikermann M. Overviews of reviews often have limited rigor: a systematic review. J Clin Epidemiol. 2012;65(12):1267–73.

6. Smith V, Devane D, Begley CM, Clarke M. Methodology in conducting a systematic review of systematic reviews of healthcare interventions. BMC Med Res Methodol. 2011;11(1):15.

7. McKenzie JE, Brennan SE. Overviews of systematic reviews: great promise, greater challenge. Syst Rev. 2017;6(1):185.

8. Hunt H, Pollock A, Campbell P, Estcourt L, Brunton G. An introduction to overviews of reviews: planning a relevant research question and objective for an overview. Syst Rev. 2018;7(1):39.

9. Pollock M, Fernandes RM, Pieper D, Tricco AC, Gates M, Gates A, et al. Preferred Reporting Items for Overviews of Reviews (PRIOR): a protocol for development of a reporting guideline for overviews of reviews of healthcare interventions. Syst Rev. 2019;8(1):335.

10. Lunny C, Brennan SE, Reid J, McDonald S, McKenzie JE. Overviews of reviews incompletely report methods for handling overlapping, discordant, and problematic data. J Clin Epidemiol. 2020;118:69–85.

11. Pollock A, Campbell P, Brunton G, Hunt H, Estcourt L. Selecting and implementing overview methods: implications from five exemplar overviews. Syst Rev. 2017;6(1):145.

12. Pollock M, Fernandes RM, Becker LA, Featherstone R, Hartling L. What guidance is available for researchers conducting overviews of reviews of healthcare interventions? A scoping review and qualitative metasummary. Syst Rev. 2016;5(1):190.

13. Pieper D, Pollock M, Fernandes RM, Büchter RB, Hartling L. Epidemiology and reporting characteristics of overviews of reviews of healthcare interventions published 2012-2016: protocol for a systematic review. Syst Rev. 2017;6(1):73.

14. EQUATOR Network. Reporting guidelines under development for systematic reviews. EQUATOR. 2018. https://www.equator-network.org/library/reporting-guidelines-under-development/reporting-guidelines-under-development-for-systematic-reviews/. Accessed 14 July 2020.

15. Arksey H, O'Malley L. Scoping studies: towards a methodological framework. Int J Soc Res Methodol. 2005;8(1):19–32.

16. Levac D, Colquhoun H, O'Brien KK. Scoping studies: advancing the methodology. Implement Sci. 2010;5(1):69.

17. Tricco AC, Lillie E, Zarin W, O'Brien KK, Colquhoun H, Levac D, et al. PRISMA extension for scoping reviews (PRISMA-ScR): checklist and explanation. Ann Intern Med. 2018;169(7):467–73.

18. Pollock M, Fernandes R, Becker L, Pieper D, Hartling L. Chapter V: Overviews of reviews. In: Higgins JPT, Chandler J, Cumpston MS, Li T, Page MJ, Welch V, editors. Cochrane handbook for systematic reviews of interventions. London: Cochrane; 2020.

19. Greenhalgh T, Peacock R. Effectiveness and efficiency of search methods in systematic reviews of complex evidence: audit of primary sources. BMJ. 2005;331(7524):1064–5.

20. Horsley T, Dingwall O, Sampson M. Checking reference lists to find additional studies for systematic reviews. Cochrane Database Syst Rev. 2011;8:MR000026.

21. Pollock A, Farmer SE, Brady MC, Langhorne P, Mead GE, Mehrholz J, et al. An algorithm was developed to assign GRADE levels of evidence to comparisons within systematic reviews. J Clin Epidemiol. 2016;70:106–10.

22. Ribeiro DM, Cardoso M, Silva FQB, França C. Using qualitative metasummary to synthesize empirical findings in literature reviews. Proceedings of the 8th ACM/IEEE International Symposium on Empirical Software Engineering and Measurement. Torino: Association for Computing Machinery; 2014.

23. Sandelowski M, Barroso J, Voils CI. Using qualitative metasummary to synthesize qualitative and quantitative descriptive findings. Res Nurs Health. 2007;30(1):99–111.

24. Sandelowski M, Barroso J. Chapter 6: synthesizing qualitative research findings: qualitative metasummary. In: Sandelowski M, Barroso J, editors. Handbook for synthesizing qualitative research. New York: Springer Publishing Company; 2006. p. 151–97.

25. Pollock M, Fernandes RM, Newton AS, Scott SD, Hartling L. A decision tool to help researchers make decisions about including systematic reviews in overviews of reviews of healthcare interventions. Syst Rev. 2019;8(1):29.

26. Hennessy EA, Johnson BT. Examining overlap of included studies in meta-reviews: guidance for using the corrected covered area index. Res Synth Methods. 2020;11:134–45.

27. Pérez-Bracchiglione J, Niño de Guzmán E, Roqué Figuls M, Urrútia G. Graphical representation of overlap degree of primary studies in systematic reviews included in overviews. Abstracts of the 26th Cochrane Colloquium. Santiago: Cochrane Database Syst Rev; 2020. p. 151–2.

28. Pollock M, Fernandes RM, Hartling L. Evaluation of AMSTAR to assess the methodological quality of systematic reviews in overviews of reviews of healthcare interventions. BMC Med Res Methodol. 2017;17(1):48.

29. Gates M, Gates A, Duarte G, Cary M, Becker M, Prediger B, et al. Quality and risk of bias appraisals of systematic reviews are inconsistent across reviewers and centres. J Clin Epidemiol. 2020;125:9–15.

30. Lunny C, Brennan SE, McDonald S, McKenzie JE. Toward a comprehensive evidence map of overview of systematic review methods: paper 1—purpose, eligibility, search and data extraction. Syst Rev. 2017;6(1):231.

31. Lunny C, Brennan SE, McDonald S, McKenzie JE. Toward a comprehensive evidence map of overview of systematic review methods: paper 2—risk of bias assessment; synthesis, presentation and summary of the findings; and assessment of the certainty of the evidence. Syst Rev. 2018;7(1):159.

32. Li L, Tian J, Tian H, Sun R, Liu Y, Yang K. Quality and transparency of overviews of systematic reviews. J Evid Based Med. 2012;5(3):166–73.

33. Liberati A, Altman DG, Tetzlaff J, Mulrow C, Gøtzsche PC, Ioannidis JP, et al. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: explanation and elaboration. PLoS Med. 2009;6(7):e1000100.

34. Higgins JPT, Lasserson T, Chandler J, Tovey D, Thomas J, Flemyng E, et al. Methodological Expectations of Cochrane Intervention Reviews (MECIR), version March 2020. Cochrane. 2020. https://community.cochrane.org/mecir-manual. Accessed 14 July 2020.

35. Pollock M, Fernandes RM, Newton AS, Scott SD, Hartling L. The impact of different inclusion decisions on the comprehensiveness and complexity of overviews of reviews of healthcare interventions. Syst Rev. 2019;8(1):18.

36. Aromataris E, Fernandez R, Godfrey C, Holly C, Khalil H, Tungpunkom P. Chapter 10: Umbrella Reviews. In: Aromataris E, Munn Z, editors. Joanna Briggs Institute Reviewer's Manual. Adelaide: The Joanna Briggs Institute; 2017.

37. Pieper D, Antoine SL, Mathes T, Neugebauer EA, Eikermann M. Systematic review finds overlapping reviews were not mentioned in every other overview. J Clin Epidemiol. 2014;67(4):368–75.

38. Bougioukas K, Vounzoulaki E, Saavides E, Mantsiou C, Papanastasiou G, Haidich A-B. Methods for depicting overlap in overviews of systematic reviews: guidance for using tabular and graphical displays (protocol). Posted on the Open Science Framework. 2020. https://osf.io/e4fgd/. Accessed 14 July 2020.


Acknowledgements

We thank Lana Atkinson, MLIS for assisting in the development of the searches.

Dr. Hartling is supported by a Canada Research Chair in Knowledge Synthesis and Translation. The funding body had no role in the design of the study, nor the collection, analysis, and interpretation of data.

Author information

Authors and Affiliations

Alberta Research Centre for Health Evidence, Department of Pediatrics, University of Alberta, 4-486C Edmonton Clinic Health Academy, 11405-87 Avenue NW, Edmonton, AB, T6G 1C9, Canada

Michelle Gates

Alberta Research Centre for Health Evidence, Department of Pediatrics, University of Alberta, 4-482C Edmonton Clinic Health Academy, 11405-87 Avenue NW, Edmonton, AB, T6G 1C9, Canada

Allison Gates

Alberta Research Centre for Health Evidence, Department of Pediatrics, University of Alberta, 4-488C Edmonton Clinic Health Academy, 11405-87 Avenue NW, Edmonton, AB, T6G 1C9, Canada

Samantha Guitard

Health Technology Assessment Unit, Institute of Health Economics, 1200 10405 Jasper Avenue, Edmonton, AB, T5J 3N4, Canada

Michelle Pollock

Alberta Research Centre for Health Evidence, Department of Pediatrics, University of Alberta, 4-472 Edmonton Clinic Health Academy, 11405-87 Avenue NW, Edmonton, AB, T6G 1C9, Canada

Lisa Hartling


Contributions

MG contributed to the searches, screening, data extraction and verification, and data analysis, and drafted sections of the manuscript. AG contributed to screening and data extraction and verification, and drafted sections of the manuscript. SG contributed to screening and data extraction and verification, and reviewed the drafted manuscript for important intellectual content. MP contributed to the protocol, was the lead investigator for and contributed to all stages involved in the first version of this scoping review, and reviewed the drafted manuscript for important intellectual content. LH contributed to the protocol, was the senior investigator for the first version of this scoping review, oversaw all aspects of the project, and reviewed the drafted manuscript for important intellectual content. All authors agreed on the version of the manuscript as submitted.

Corresponding author

Correspondence to Michelle Gates .

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1.

PRISMA-ScR checklist. Completed reporting checklist for scoping reviews.

Additional file 2.

Details of the search strategies.

Additional file 3.

Studies excluded following full text review.

Additional file 4.

Documents included in the scoping review.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article.

Gates, M., Gates, A., Guitard, S. et al. Guidance for overviews of reviews continues to accumulate, but important challenges remain: a scoping review. Syst Rev 9 , 254 (2020). https://doi.org/10.1186/s13643-020-01509-0

Download citation

Received : 15 July 2020

Accepted : 22 October 2020

Published : 04 November 2020

DOI : https://doi.org/10.1186/s13643-020-01509-0


Keywords

  • Overview of reviews
  • Umbrella review
  • Systematic reviews
  • Knowledge synthesis
  • Evidence synthesis
  • Evidence-based medicine
  • Scoping review
  • Metasummary




Systematic Reviews: Home

Created by health science librarians.



What is a Systematic Review?

  • A Simplified Process Map
  • How Can the Library Help?
  • Publications by HSL Librarians
  • Systematic Reviews in Non-Health Disciplines
  • Resources for Performing Systematic Reviews

  • Step 1: Complete Pre-Review Tasks
  • Step 2: Develop a Protocol
  • Step 3: Conduct Literature Searches
  • Step 4: Manage Citations
  • Step 5: Screen Citations
  • Step 6: Assess Quality of Included Studies
  • Step 7: Extract Data from Included Studies
  • Step 8: Write the Review


A systematic review is a literature review that gathers all of the available evidence matching pre-specified eligibility criteria to answer a specific research question. It uses explicit, systematic methods, documented in a protocol, to minimize bias, provide reliable findings, and inform decision-making.¹

There are many types of literature reviews.

Before beginning a systematic review, consider whether it is the best type of review for your question, goals, and resources. The table below compares a few different types of reviews to help you decide which is best for you. 

Comparing Systematic, Scoping, and Systematized Reviews
Systematic Review | Scoping Review | Systematized Review
Conducted for Publication | Conducted for Publication | Conducted for Assignment, Thesis, or (Possibly) Publication
Protocol Required | Protocol Required | No Protocol Required
Focused Research Question | Broad Research Question | Either
Focused Inclusion & Exclusion Criteria | Broad Inclusion & Exclusion Criteria | Either
Requires Large Team | Requires Small Team | Usually 1-2 People
  • Scoping Review Guide For more information about scoping reviews, refer to the UNC HSL Scoping Review Guide.

Systematic Reviews: A Simplified, Step-by-Step Process Map

  • UNC HSL's Simplified, Step-by-Step Process Map A PDF file of the HSL's Systematic Review Process Map.
  • Text-Only: UNC HSL's Systematic Reviews - A Simplified, Step-by-Step Process A text-only PDF file of HSL's Systematic Review Process Map.

The Creative Commons license applied to the systematic reviews image requires that reusers give credit to the creator. It allows reusers to distribute, remix, adapt, and build upon the material in any medium or format, for noncommercial purposes only.

The average systematic review takes 1,168 hours to complete.¹ A librarian can help you speed up the process.

Systematic reviews follow established guidelines and best practices to produce high-quality research. Librarian involvement in systematic reviews is offered at two levels. In Tier 1, your research team can consult with the librarian as needed; the librarian will answer questions and recommend tools to use. In Tier 2, the librarian will be an active member of your research team and a co-author on your review. Roles and expectations of librarians vary based on the level of involvement desired. Examples of these differences are outlined in the table below.

Roles and expectations of librarians based on level of involvement desired.
Tasks | Tier 1: Consultative | Tier 2: Research Partner / Co-author
Guidance on process and steps | Yes | Yes
Background searching for past and upcoming reviews | Yes | Yes
Development and/or refinement of review topic | Yes | Yes
Assistance with refinement of PICO (population, intervention(s), comparator(s), outcome(s)) and key questions | Yes | Yes
Guidance on study types to include | Yes | Yes
Guidance on protocol registration | Yes | Yes
Identification of databases for searches | Yes | Yes
Instruction in search techniques and methods | Yes | Yes
Training in citation management software use for managing and sharing results | Yes | Yes
Development and execution of searches | No | Yes
Downloading search results to citation management software and removing duplicates | No | Yes
Documentation of search strategies | No | Yes
Management of search results | No | Yes
Guidance on methods | Yes | Yes
Guidance on data extraction, and management techniques and software | Yes | Yes
Suggestions of journals to target for publication | Yes | Yes
Drafting of literature search description in "Methods" section | No | Yes
Creation of PRISMA diagram | No | Yes
Drafting of literature search appendix | No | Yes
Review other manuscript sections and final draft | No | Yes
Librarian contributions warrant co-authorship | No | Yes
  • Request a systematic or scoping review consultation


Researchers conduct systematic reviews in a variety of disciplines.  If your focus is on a topic outside of the health sciences, you may want to also consult the resources below to learn how systematic reviews may vary in your field.  You can also contact a librarian for your discipline with questions.

  • EPPI-Centre methods for conducting systematic reviews The EPPI-Centre develops methods and tools for conducting systematic reviews, including reviews for education, public and social policy.


Environmental Topics

  • Collaboration for Environmental Evidence (CEE) CEE seeks to promote and deliver evidence syntheses on issues of greatest concern to environmental policy and practice as a public service

Social Sciences


  • Siddaway AP, Wood AM, Hedges LV. How to Do a Systematic Review: A Best Practice Guide for Conducting and Reporting Narrative Reviews, Meta-Analyses, and Meta-Syntheses. Annu Rev Psychol. 2019 Jan 4;70:747-770. doi: 10.1146/annurev-psych-010418-102803. A resource for psychology systematic reviews, which also covers qualitative meta-syntheses or meta-ethnographies
  • The Campbell Collaboration

Social Work


Software engineering

  • Guidelines for Performing Systematic Literature Reviews in Software Engineering The objective of this report is to propose comprehensive guidelines for systematic literature reviews appropriate for software engineering researchers, including PhD students.


Sport, Exercise, & Nutrition


  • Application of systematic review methodology to the field of nutrition by Tufts Evidence-based Practice Center Publication Date: 2009
  • Systematic Reviews and Meta-Analysis — Open & Free (Open Learning Initiative) The course follows guidelines and standards developed by the Campbell Collaboration, based on empirical evidence about how to produce the most comprehensive and accurate reviews of research


  • Systematic Reviews by David Gough, Sandy Oliver & James Thomas Publication Date: 2020


Updating reviews

  • Updating systematic reviews by University of Ottawa Evidence-based Practice Center Publication Date: 2007
  • Last Updated: May 16, 2024 3:24 PM
  • URL: https://guides.lib.unc.edu/systematic-reviews


Overviews of Reviews: Concept and Development

Affiliation.

  • 1 Universidad de Murcia.
  • PMID: 35485529
  • DOI: 10.7334/psicothema2021.586

Background: In recent years, the use of overviews of systematic reviews, or umbrella reviews, has increased dramatically. An overview aims to provide a summary of the included reviews and will often examine research questions beyond those addressed in the systematic reviews being synthesised. The purpose of this article is to provide recommendations on how overviews should be conducted and reported.

Method: A literature review was performed to identify relevant papers on both methodological and applied overviews.

Results: The current literature recommends carrying out overviews by following similar steps to those of systematic reviews: (a) Defining the overview research question; (b) inclusion and exclusion criteria; (c) literature search; (d) data extraction; (e) assessment of risk of bias and reporting quality; (f) overview results; and (g) reporting the overview. Of special interest is how to address dependencies between the systematic reviews.

Conclusions: Overviews allow evidence to be efficiently combined from multiple systematic reviews. This offers the possibility of translating and summarizing large amounts of information. As in primary studies and systematic reviews, conducting and reporting of overviews must meet appropriate quality standards.


Similar articles

  • The future of Cochrane Neonatal. Soll RF, Ovelman C, McGuire W. Soll RF, et al. Early Hum Dev. 2020 Nov;150:105191. doi: 10.1016/j.earlhumdev.2020.105191. Epub 2020 Sep 12. Early Hum Dev. 2020. PMID: 33036834
  • Overviews of reviews incompletely report methods for handling overlapping, discordant, and problematic data. Lunny C, Brennan SE, Reid J, McDonald S, McKenzie JE. Lunny C, et al. J Clin Epidemiol. 2020 Feb;118:69-85. doi: 10.1016/j.jclinepi.2019.09.025. Epub 2019 Oct 10. J Clin Epidemiol. 2020. PMID: 31606430 Review.
  • Toward a comprehensive evidence map of overview of systematic review methods: paper 2-risk of bias assessment; synthesis, presentation and summary of the findings; and assessment of the certainty of the evidence. Lunny C, Brennan SE, McDonald S, McKenzie JE. Lunny C, et al. Syst Rev. 2018 Oct 12;7(1):159. doi: 10.1186/s13643-018-0784-8. Syst Rev. 2018. PMID: 30314530 Free PMC article.
  • Toward a comprehensive evidence map of overview of systematic review methods: paper 1-purpose, eligibility, search and data extraction. Lunny C, Brennan SE, McDonald S, McKenzie JE. Lunny C, et al. Syst Rev. 2017 Nov 21;6(1):231. doi: 10.1186/s13643-017-0617-1. Syst Rev. 2017. PMID: 29162130 Free PMC article. Review.
  • What guidance is available for researchers conducting overviews of reviews of healthcare interventions? A scoping review and qualitative metasummary. Pollock M, Fernandes RM, Becker LA, Featherstone R, Hartling L. Pollock M, et al. Syst Rev. 2016 Nov 14;5(1):190. doi: 10.1186/s13643-016-0367-5. Syst Rev. 2016. PMID: 27842604 Free PMC article. Review.
  • The Impact of Exercise on Interleukin-6 to Counteract Immunosenescence: Methodological Quality and Overview of Systematic Reviews. Reis ASLDS, Furtado GE, Menuchi MRTP, Borges GF. Reis ASLDS, et al. Healthcare (Basel). 2024 May 7;12(10):954. doi: 10.3390/healthcare12100954. Healthcare (Basel). 2024. PMID: 38786366 Free PMC article. Review.
  • Overview of systematic reviews of probiotics in the prevention and treatment of antibiotic-associated diarrhea in children. Yang Q, Hu Z, Lei Y, Li X, Xu C, Zhang J, Liu H, Du X. Yang Q, et al. Front Pharmacol. 2023 Jul 24;14:1153070. doi: 10.3389/fphar.2023.1153070. eCollection 2023. Front Pharmacol. 2023. PMID: 37564180 Free PMC article.
  • Symptoms of emotional disorders and sociodemographic factors as moderators of dropout in psychological treatment: A meta-review. Carpallo-González M, Muñoz-Navarro R, González-Blanch C, Cano-Vindel A. Carpallo-González M, et al. Int J Clin Health Psychol. 2023 Oct-Dec;23(4):100379. doi: 10.1016/j.ijchp.2023.100379. Epub 2023 Mar 5. Int J Clin Health Psychol. 2023. PMID: 36922928 Free PMC article.
  • Introduction to Umbrella Reviews as a Useful Evidence-Based Practice. Choi GJ, Kang H. Choi GJ, et al. J Lipid Atheroscler. 2023 Jan;12(1):3-11. doi: 10.12997/jla.2023.12.1.3. Epub 2022 Oct 21. J Lipid Atheroscler. 2023. PMID: 36761061 Free PMC article. Review.
  • Sponsorship Bias in Clinical Trials in the Dental Application of Probiotics: A Meta-Epidemiological Study. Hu Q, Acharya A, Leung WK, Pelekos G. Hu Q, et al. Nutrients. 2022 Aug 19;14(16):3409. doi: 10.3390/nu14163409. Nutrients. 2022. PMID: 36014917 Free PMC article.

Publication types

  • Search in MeSH

LinkOut - more resources

Full text sources, research materials.

  • NCI CPTC Antibody Characterization Program

full text provider logo

  • Citation Manager

NCBI Literature Resources

MeSH PMC Bookshelf Disclaimer

The PubMed wordmark and PubMed logo are registered trademarks of the U.S. Department of Health and Human Services (HHS). Unauthorized use of these marks is strictly prohibited.

overview of reviews methodology

Moody Medical Library (library.utmb.edu): Overview of Reviews


The landscape of review articles  is varied. You may encounter many different types of reviews as a reader or contributor to the scholarly record in your field. Each review type has strengths and weaknesses, and some have methodologies to follow.

  • Which Review Should I Do? Use this interactive form to help you choose which review is appropriate for your interest and timeline. (Adapted from Cornell University Library's What Review Is Right for You)

Read about Reviews:

  • Grant MJ, Booth A. A typology of reviews: an analysis of 14 review types and associated methodologies. Health Info Libr J. 2009 Jun;26(2):91-108. doi:10.1111/j.1471-1842.2009.00848.x. Describes different review types with perceived strengths and weaknesses; includes a useful summary table.
  • Sutton A, Clowes M, Preston L, Booth A. Meeting the review family: exploring review types and associated information retrieval requirements. Health Information & Libraries Journal. 2019 Sep;36(3):202-22. https://doi.org/10.1111/hir.12276

Narrative Reviews

Also Called: Literature Reviews, Overview Reviews, Review Articles

Purpose: Summarize critical points of current knowledge

Research Question: General topic; can be subjective

Team Size:  1 or more

Time to Complete: as short as 1 month

Advantages:

  • Feels like a book chapter
  • Can be part of a larger project including theses/dissertations

Limitations:

  • Subjective, open to bias
  • No explicit methodology to follow or replicate

Methodology: Non-standardized search, unclear/unspecified methods

  • Agarwal S, Charlesworth M, Elrakhawy M. How to write a narrative review. Anaesthesia. 2023;78(9):1162-1166. doi:10.1111/anae.16016
  • Sukhera J. Narrative Reviews: Flexible, Rigorous, and Practical. J Grad Med Educ. 2022;14(4):414-417. doi:10.4300/JGME-D-22-00480.1
  • Schober P, Giannakopoulos GF, Bulte CSE, Schwarte LA. Traumatic Cardiac Arrest-A Narrative Review. J Clin Med. 2024;13(2):302. doi:10.3390/jcm13020302

Scoping Reviews

Also Called: Mapping Review, Scoping Study

Purpose:  A framework to provide a preliminary assessment of the size and scope of the literature; includes mapping key concepts and identifying gaps in the literature 

Research Question:  Broad and topic centered (may have sub-questions)

Team Size:  3 or more

Time to Complete: 3-12+ months

Advantages: Helpful in determining if a systematic review is needed on a topic

Limitations:

  • Larger volumes of literature to screen can take as long or longer than a systematic review
  • Lack of critical appraisal of the literature can introduce bias

Methodology:  Systematic and comprehensive search, descriptive/reproducible methods

Helpful Articles: 

  • PRISMA for Scoping Reviews
  • Mak S, Thomas A. Steps for Conducting a Scoping Review. J Grad Med Educ. 2022;14(5):565-567. doi:10.4300/JGME-D-22-00621.1
  • Peters MDJ, Marnie C, Tricco AC, et al. Updated methodological guidance for the conduct of scoping reviews. JBI Evid Synth. 2020;18(10):2119-2126. doi:10.11124/JBIES-20-00167
  • Reinhardt G, Schwarz PE, Harst L. Non-use of telemedicine: A scoping review. Health Informatics J. 2021;27(4). doi:10.1177/14604582211043147

Rapid Reviews

Also Called: Expedited reviews

Purpose:  A streamlined systematic review process to respond to urgent situations or political pressure, often in a rapidly changing field

Research Question:  Narrow and focused

Team Size:  2 or more

Time to Complete:   1-3 months

Advantages:

  • "Quick but not dirty" systematic review process
  • Some systematic review methods omitted for time-constrained settings

Limitations:

  • Under-developed methodology
  • Methodological shortcuts increase the risk of bias
  • Fast-tracking the process could lead to inadequate information

Methodology:  Systematic review methods applied with time constraints; no standard for which systematic review pieces are abbreviated or eliminated

  • Garritty C, Gartlehner G, Nussbaumer-Streit B, et al. Cochrane Rapid Reviews Methods Group offers evidence-informed guidance to conduct rapid reviews. J Clin Epidemiol. 2021;130:13-22. doi:10.1016/j.jclinepi.2020.10.007
  • Khangura S, Konnyu K, Cushman R, Grimshaw J, Moher D. Evidence summaries: the evolution of a rapid review approach. Syst Rev. 2012;1:10. doi:10.1186/2046-4053-1-10
  • Ganann R, Ciliska D, Thomas H. Expediting systematic reviews: methods and implications of rapid reviews. Implement Sci. 2010;5:56. doi:10.1186/1748-5908-5-56
  • Klerings I, Robalino S, Booth A, et al. Rapid reviews methods series: Guidance on literature search. BMJ Evid Based Med. 2023;28(6):412-417. doi:10.1136/bmjebm-2022-112079
  • Skafle I, Nordahl-Hansen A, Quintana DS, Wynn R, Gabarron E. Misinformation About COVID-19 Vaccines on Social Media: Rapid Review. J Med Internet Res. 2022;24(8):e37367. doi:10.2196/37367

Umbrella Reviews

Also Called: Overview of Reviews

Purpose:  Synthesize data from existing systematic reviews/meta-analyses to examine the highest level of evidence on an oversaturated topic

Research Question:  Broad or specific based on the quantity/objectives of the previous reviews

Time to Complete:  6-12 months

Advantages:  Provides an overview and list of relevant systematic reviews on a particular topic

Limitations:  Limited evidence sources if not many systematic reviews/meta-analyses are available

Methodology:  Systematic review methods applied but limited to publication type

  • Pollock M, Fernandes RM, Becker LA, Pieper D, Hartling L. Chapter V: Overviews of Reviews. In: Higgins JPT, Thomas J, Chandler J, Cumpston M, Li T, Page MJ, Welch VA (editors). Cochrane Handbook for Systematic Reviews of Interventions version 6.4 (updated August 2023). Cochrane, 2023.
  • Ioannidis JP. Integration of evidence from multiple meta-analyses: a primer on umbrella reviews, treatment networks and multiple treatments meta-analyses. CMAJ. 2009;181(8):488-493. doi:10.1503/cmaj.081086
  • Smith V, Devane D, Begley CM, Clarke M. Methodology in conducting a systematic review of systematic reviews of healthcare interventions. BMC Med Res Methodol. 2011;11(1):15. doi:10.1186/1471-2288-11-15
  • Aromataris E, Fernandez R, Godfrey CM, Holly C, Khalil H, Tungpunkom P. Summarizing systematic reviews: methodological development, conduct and reporting of an umbrella review approach. Int J Evid Based Healthc. 2015;13(3):132-140. doi:10.1097/XEB.0000000000000055
  • Fusar-Poli P, Radua J. Ten simple rules for conducting umbrella reviews. Evid Based Ment Health. 2018;21(3):95-100. doi:10.1136/ebmental-2018-300014
  • Li Y, Cui R, Fan F, et al. The Efficacy and Safety of Ischemic Stroke Therapies: An Umbrella Review. Front Pharmacol. 2022;13:924747. doi:10.3389/fphar.2022.924747
  • URL: https://guides.utmb.edu/review-overview

Methodology in conducting a systematic review of systematic reviews of healthcare interventions

Valerie Smith, Declan Devane, Cecily M Begley & Mike Clarke

BMC Medical Research Methodology, volume 11, Article number: 15 (2011)


Hundreds of studies of maternity care interventions have been published, too many for most people involved in providing maternity care to identify and consider when making decisions. It became apparent that systematic reviews of individual studies were required to appraise, summarise and bring together existing studies in a single place. However, decision makers are increasingly faced by a plethora of such reviews and these are likely to be of variable quality and scope, with more than one review of important topics. Systematic reviews (or overviews) of reviews are a logical and appropriate next step, allowing the findings of separate reviews to be compared and contrasted, providing clinical decision makers with the evidence they need.

The methods used to identify and appraise published and unpublished reviews systematically, drawing on our experiences and on good practice in the conduct and reporting of systematic reviews, are described. The process of identifying and appraising all published reviews allows researchers to describe the quality of this evidence base, summarise and compare the reviews' conclusions and discuss the strength of these conclusions.

Methodological challenges and possible solutions are described within the context of (i) sources, (ii) study selection, (iii) quality assessment (i.e. the extent of searching undertaken for the reviews, description of study selection and inclusion criteria, comparability of included studies, assessment of publication bias and assessment of heterogeneity), (iv) presentation of results, and (v) implications for practice and research.

Conducting a systematic review of reviews highlights the usefulness of bringing together a summary of reviews in one place, where there is more than one review on an important topic. The methods described here should help clinicians to review and appraise published reviews systematically, and aid evidence-based clinical decision-making.


The healthcare literature contains hundreds of thousands of studies of healthcare interventions, growing at tens of thousands per year [ 1 ]. In most areas of health care, there are too many studies for people involved in providing care to identify and consider when making decisions. Researchers have recognised this problem and many have accepted the challenge of preparing systematic reviews of individual studies in order to appraise, summarise and bring together existing studies in a single place. More recently, calls have been made for 'rapid reviews' to provide decision-makers with the evidence they need in a shorter time frame, but the possible limitations of such 'rapid reviews', compared to full systematic reviews, require further research [ 2 ]. There are now several organisations dedicated to the preparation of systematic reviews, including the National Institute for Health and Clinical Excellence (NICE) in the UK, the Evidence-based Practice Centre Program, funded by AHRQ in the USA, the Joanna Briggs Institute, and the international Campbell and Cochrane Collaborations, with the latter being the largest single producer of systematic reviews in health care, with more than 4200 published by the end of 2010 [ 3 ].

In recent years, however, decision makers who were once overwhelmed by the number of individual studies have become faced with a plethora of reviews [ 4 , 5 ]. These reviews are likely to be of variable quality and scope, with more than one systematic review on important topics. For example, a comprehensive search of twelve health related citation databases (using database specific search strategies) identified over thirty reviews evaluating the effectiveness of nurse and midwife-led interventions on clinical outcomes, as part of an on-going study into the impact of the role of nurse and midwife specialist and advanced practitioners in Ireland. A logical and appropriate next step is to conduct a systematic review of reviews of the topic under consideration, allowing the findings of separate reviews to be compared and contrasted, thereby providing clinical decision makers with the evidence they need. We have been involved in several examples of systematic reviews (or overviews) of reviews [ 6 – 9 ] and The Cochrane Collaboration introduced a new type of Cochrane review in 2009 [ 10 ], the overview of Cochrane reviews, with two full overviews [ 11 , 12 ] and protocols for five more [ 13 – 17 ] published by October 2010. These reviews of reviews aim to provide a summary of evidence from more than one systematic review at a variety of different levels, including the combination of different interventions, different outcomes, different conditions, problems or populations, or the provision of a summary of evidence on the adverse effects of an intervention [ 10 ].

This paper describes the conduct and methods used to identify and appraise published and unpublished systematic reviews systematically. It draws on our experience of conducting several of these reviews of reviews in recent years. The purpose of such an overview, in identifying and appraising all published reviews, is to describe their quality, summarise and compare their conclusions and discuss the strength of these conclusions, so that the best evidence is made available to clinical decision-makers. During the review process a number of methodological challenges can arise. We describe these challenges and offer possible solutions to overcome them. We hope to provide a guide to clinicians and researchers who wish to conduct a systematic review of reviews and to share our experiences.

The objective and the reasons for conducting a systematic review of reviews should be made explicit at the start of the process, as this is likely to influence the methods used for the review. In formulating the scope for the review of reviews, the PICOS (participants, interventions, comparators, outcomes, and study design) structure may be helpful. This can help the reviewers to delineate clearly if they wish, for example, to compare and summarise systematic reviews that address the same treatment comparison or a particular intervention for a population or condition, or a range of interventions for people with a specific condition. Following this, the methods in conducting a systematic review of reviews require consideration of the following aspects, akin to the planning for a systematic review of individual studies: sources, review selection, quality assessment of reviews, presentation of results and implications for practice and research.
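As a minimal sketch of how the PICOS elements of a review-of-reviews question might be captured as a structured record (Python is used here for illustration; the field values are invented examples, not drawn from any of the reviews discussed in this paper):

```python
# Illustrative sketch: recording the PICOS elements of a review-of-reviews
# question as a simple structured record. The example values are invented.
from dataclasses import dataclass

@dataclass
class PICOS:
    participants: str
    interventions: str
    comparators: str
    outcomes: str
    study_design: str

question = PICOS(
    participants="women at risk of preterm birth",
    interventions="any preventive intervention evaluated in a systematic review",
    comparators="placebo or usual care",
    outcomes="preterm birth",
    study_design="systematic reviews of randomized controlled trials",
)
print(question)
```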

Sources and searching

Locating and retrieving relevant literature is challenging, yet crucial to the success of a systematic review. The material sourced provides the information from which evidence, conclusions and recommendations are drawn. For many, the literature search may appear overwhelming, given the sheer volume of material to check through. However, establishing a systematic search strategy, before commencing the literature search, is fundamental to appropriate and successful information retrieval. This planning assists in meeting the requirements of the systematic review and in answering the research question. In developing a search strategy, the scope of the search, its thoroughness and the time available to conduct it, all need to be considered. The aim is to ensure that the systematic review of reviews is comprehensive, thorough and objective.

The methods used in sourcing relevant literature to conduct a systematic review of reviews are similar to those adopted in conducting a systematic review of individual studies, with some subtle differences described here. A realistic time-frame to conduct the systematic review of reviews should be established. It has been estimated that a typical systematic review would take between six and eighteen months [ 18 ] but this is very dependent on the research question and the staffing, funding and other resources available. The process might be faster for a systematic review of reviews if the time-frame to complete the literature search is significantly reduced through the ability to target the searching of articles most likely to be reports of a systematic review. In a systematic review of individual studies, the search should be as wide as possible to maximize the likelihood of capturing all relevant data and minimizing the effects of reporting biases. A search of a variety of electronic databases relevant to the topic of interest is recommended [ 18 ]. However, in a systematic review of reviews, it may be possible to limit the searches to databases specific to systematic reviews such as the Cochrane Database of Systematic Reviews and the Database of Abstracts of Reviews of Effects. Likewise, although the search for a review of individual studies might need to cover many decades [ 19 ], limiting the search to the period from the early 1990s onwards is likely to identify all but the very small minority of systematic reviews conducted before then [ 20 , 21 ]. Furthermore, researchers might find that identifying and highlighting a recent high quality systematic review will prove of most benefit to decision makers using their review of reviews. However, a summary of the earlier reviews can still prove helpful if these contain relevant information that is not included in the recent review. Applying language restrictions is not recommended; however, unavoidable constraints such as a lack of access to translation services or funds to pay for these may make it necessary to restrict the systematic review of reviews to English language publications. In such instances, this limitation should be acknowledged when reporting the review and it might be worthwhile reporting the difference between searches with and without language restrictions in order to estimate the amount of literature that might have been excluded.

The search terms used for the literature search should be clearly described, with information on their relevance to the research question. Furthermore, search terms should be focused so that they are broad enough in scope to capture all the relevant data yet narrow enough to minimize the capture of extraneous literature that may result in unnecessary time and effort being spent assessing irrelevant articles. In conducting a systematic review of reviews, systematic reviews rather than individual studies are of interest to the reviewer and several search strategies have been developed to identify this type of research [ 22 , 23 ] which could be combined with the terms for the relevant healthcare topic. In developing the search strategy for a systematic review of reviews, researchers might wish to consider the PRESS initiative, developed as a means for peer reviewing literature searches [ 24 ] to check that the various elements of the electronic search strategy have been considered. To minimize the risk of missing relevant reviews, a manual search of key journals and of the reference lists of reviews captured by the initial searches is also recommended. The literature search can also be complemented by contacting experts in the topic under review and by checking articles which cite individual studies that are known to be relevant to the topic. This may prove relevant in learning of published systematic reviews that are not indexed in the bibliographic databases searched, and of ongoing systematic reviews near completion. The development of a prospective register of systematic reviews should help further with this [ 25 ].
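As a hedged illustration of combining a systematic-review search filter with topic terms, the Python sketch below composes a single boolean query string. The filter terms shown are placeholders only, not one of the validated filters cited above, and the topic terms are invented for the example:

```python
# Illustrative sketch: pairing topic terms with a review-filter block in one
# boolean query. The filter terms below are placeholders, not a validated filter.

def build_query(topic_terms, review_filter_terms):
    """Combine topic terms and review-filter terms into one boolean string."""
    topic_block = " OR ".join(f'"{term}"' for term in topic_terms)
    filter_block = " OR ".join(review_filter_terms)
    return f"({topic_block}) AND ({filter_block})"

topic = ["preterm birth", "preterm labour", "premature birth"]
sr_filter = ["systematic review[ti]", "meta-analysis[pt]", "search*[tiab]"]  # placeholders
print(build_query(topic, sr_filter))
```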

Review Selection

A major challenge to review selection is identifying all reviews relevant to the topic of interest, and of potential importance to answering the research question. During the planning phase, before commencing the systematic review of reviews, a review team should be established. The review team should include at least one person with methodological expertise in conducting systematic reviews and at least one person with expertise on the topic under review. The review team is responsible for developing a review selection strategy. Inclusion and exclusion criteria should be agreed before starting the review selection process. Aspects of this process might include decisions regarding the type of reviews that may be included in the systematic review. For example, in our review on interventions for preventing preterm birth [ 6 ], we restricted the inclusion criteria to reviews of randomized controlled trials. Another example of inclusion criteria might be to limit the systematic review of reviews to reviews of a particular type of participant (such as women having their first baby) or which assess a particular type of pain relief.

When a selection strategy has been developed, the selection process is carried out in a similar way to a review of individual studies:

Assess retrieved titles and abstracts for relevance and duplication.

Select those you wish to retrieve and appraise further.

Obtain full text copies of these potentially eligible reviews.

Assess these reviews for relevance and quality; ideally, using independent assessment by at least two members of the review team. This reduces bias in review selection and allows for appropriate discussion should uncertainty arise.
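One way independent assessment by two review-team members could be operationalized is sketched below in Python; the record identifiers and judgements are invented, and the handling of disagreements (discussion or a third reviewer) is left to the team:

```python
# Illustrative sketch: reconciling two reviewers' independent eligibility
# judgements and flagging disagreements for discussion.

def reconcile(screen_a, screen_b):
    """screen_a and screen_b map record IDs to 'include' or 'exclude'."""
    agreed, disputed = {}, []
    for record_id in sorted(set(screen_a) | set(screen_b)):
        a, b = screen_a.get(record_id), screen_b.get(record_id)
        if a == b and a is not None:
            agreed[record_id] = a
        else:
            disputed.append(record_id)  # resolve by discussion or a third reviewer
    return agreed, disputed

reviewer_1 = {"rev001": "include", "rev002": "exclude", "rev003": "include"}
reviewer_2 = {"rev001": "include", "rev002": "include", "rev003": "include"}
agreed, disputed = reconcile(reviewer_1, reviewer_2)
print(agreed)    # {'rev001': 'include', 'rev003': 'include'}
print(disputed)  # ['rev002']
```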

Quality Assessment of Reviews

The quality and strength of evidence presented in the individual, included reviews should influence the conclusions drawn in the systematic review of these. The quality and scope of published reviews vary widely. The strength of the conclusions and the ability to provide decision-makers with reliable information depends on the inclusion of reviews that meet a minimum standard of quality. When assessing the quality of the reviews, one should try to avoid being influenced by extraneous variables, such as authors, institutional affiliations and journal names, and should focus on the quality of the conduct of the review. In practice, researchers will usually have to do this via an assessment of the quality of reporting, in the hope that initiatives such as PRISMA (formerly QUOROM) will assist by facilitating adequate standards of reporting [ 26 ].

The AMSTAR tool [ 27 ], which became available after we started work on our review of reviews, is the only tool that we are aware of that has been validated as a means to assess the methodological quality of systematic reviews and could be used in the review of reviews to determine if the potentially eligible reviews meet minimum requirements based on quality. While the authors of the AMSTAR paper [ 27 ] recognise the need for further testing of the AMSTAR tool, important domains identified within the tool are: establishing the research question and inclusion criteria before the conduct of the review, data extraction by at least two independent data extractors, comprehensive literature review with searching of at least two databases, key word identification, expert consultation and limits applied, detailed list of included/excluded studies and study characteristics, quality assessment of included studies and consideration of quality assessments in analysis and conclusions, appropriate assessment of homogeneity, assessment of publication bias and a statement of any conflict of interest.

Although our review of reviews began before the publication of the AMSTAR tool, we used similar domains to assess review quality. Our assessment criteria are shown below and provide a structure that can be used to report the quality and comparability of the included reviews to help readers assess the strength of the evidence in the review of reviews:

▪ The extent of searching undertaken: Are the databases searched, years searched and restrictions applied in the original review clearly described? Information on the extent of searching should be clearly provided, to allow for a comprehensive assessment of the scope of the review.

▪ Description of review selection and inclusion criteria: Do the authors of the original review provide details of study selection and eligibility criteria and what are these details? This information should be clearly reported in the systematic review of reviews.

▪ Assessment of publication bias: Did the authors of the original review seek additional information from authors of the studies they included? Are there any details of statistical tests (such as funnel plot analysis) to assess for publication bias?

▪ Assessment of heterogeneity: Did the authors of the original review discuss or provide details of any tests of heterogeneity? In the presence of significant heterogeneity, were statistical tests used to address this?

▪ Comparability of included reviews: Are the reviews comparable in terms of eligibility criteria, study characteristics and primary outcome of interest? For example, in our review of reviews on fetal fibronectin and transvaginal cervical ultrasound for predicting preterm birth, [ 8 ] we included reviews that had incorporated studies among women who were both symptomatic and asymptomatic for preterm birth. As a means of addressing comparability of the included reviews, we provided details of the number of women in each group separately and reported the results for each group separately, where applicable.
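A minimal Python sketch of how domain-based judgements mirroring these criteria could be recorded for each included review; the review identifier and the judgements shown are invented placeholders:

```python
# Illustrative sketch: recording domain-based quality judgements per included
# review, with domains mirroring the criteria listed above.
from dataclasses import dataclass, asdict

@dataclass
class ReviewQuality:
    review_id: str
    search_extent_reported: bool       # databases, years and restrictions described?
    selection_criteria_reported: bool  # study selection and eligibility described?
    publication_bias_assessed: bool    # e.g. funnel plot or contact with study authors
    heterogeneity_assessed: bool       # tests or discussion of heterogeneity
    notes: str = ""

assessments = [
    ReviewQuality("review_a_2002", True, True, False, True,
                  notes="No funnel plot reported; fewer than ten trials included."),
]
for assessment in assessments:
    print(asdict(assessment))
```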

Presentation of Results

The presentation of the results of a systematic review of reviews should give the reader the major conclusions of the review through the provision of answers to the research question, as well as the evidence on which these conclusions are based and an assessment of the quality of the evidence supporting each conclusion; for example, using the GRADE approach as adopted for the 'Summary of Findings' table in Cochrane reviews [ 28 ]. It is important to be specific in reporting the primary outcome of interest for the review, and this can reduce workload by limiting data extraction to only those results relevant to the topic of interest from reviews that report on several outcome measures. For example, some systematic reviews on antibiotic therapy for the prevention of preterm birth [ 29 , 30 ] report a variety of outcome measures other than preterm birth (e.g. neonatal outcomes). However, in our systematic reviews of reviews [ 6 , 8 ], our research focus on preterm birth meant that only results for the effects on preterm birth were extracted.

The use of summary tables and figures is helpful in presenting results in a structured and clear format that will enhance textual commentary. Table 1 is an example of the provision of details of the scope of the reviews included in a systematic review of reviews (3). Sources of evidence and some quality assessment criteria are included. The quality assessment is enhanced by a narrative discussion of heterogeneity and publication bias.

Table 2 provides an example of how summary results from each original review might be presented in the systematic review of reviews.
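As a hedged illustration of the kind of per-review, per-outcome summary such a table might contain, the Python sketch below tabulates a few rows; all entries are invented placeholders, not results from the reviews discussed in the text:

```python
# Illustrative sketch: tabulating one summary row per included review and
# outcome. The effect estimates and quality notes are invented placeholders.
rows = [
    {"review": "Review A (2002)", "outcome": "preterm birth",
     "effect": "RR 0.95 (95% CI 0.82 to 1.10)", "quality": "adequate search; heterogeneity assessed"},
    {"review": "Review B (2007)", "outcome": "preterm birth",
     "effect": "RR 1.03 (95% CI 0.86 to 1.23)", "quality": "publication bias not assessed"},
]

header = f'{"Review":<18}{"Outcome":<16}{"Summary effect":<32}Quality notes'
print(header)
print("-" * len(header))
for row in rows:
    print(f'{row["review"]:<18}{row["outcome"]:<16}{row["effect"]:<32}{row["quality"]}')
```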

The use of a checklist or reporting tool may also guide the reviewer when reporting on a systematic review of reviews. Although we did not identify a tool specific to reporting of systematic reviews of reviews, the PRISMA statement provides a useful framework to follow [ 26 ]. This guidance, developed for reporting systematic reviews and meta-analyses of studies that evaluate healthcare interventions, can be used to assess item inclusion in a systematic review of systematic reviews.

Implications for practice and research

One of the problems faced by decision makers who encounter multiple reviews of the same topic is inconsistency in the results or conclusions of these reviews. Jadad et al (1997) provide guidance on how to address discordant results from individual reviews [ 31 ] and conducting systematic reviews of reviews will help to address this issue further. A systematic review of reviews can provide reassurances that the conclusions of individual reviews are consistent, or not. The quality of individual reviews may be assessed, so that evidence from the best quality reviews can be highlighted and brought together in a single document, providing definitive summaries that could be used to inform clinical practice.

Meta-analyses in systematic reviews of reviews

A major challenge in conducting a systematic review of reviews is the creation of a 'meta-analysis' of the included reviews, which are themselves meta-analyses. In doing this, it is important that data from individual studies are not used more than once. Counting the same data more than once would give those studies undue weight, with the risk that a misleading estimate will be produced and that this estimate will be overly precise. Overcoming this challenge would require the unpicking of each of the included reviews and the subsequent combination of the results of the individual, included studies. This may prove to be a complex and time-consuming task and careful consideration should be given to its value when planning the systematic review of reviews, highlighting the importance of having clear reasons for conducting the review.
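A small worked example makes the precision problem concrete: under simple fixed-effect inverse-variance pooling, counting the same trial twice (because it appears in two overlapping reviews) shrinks the confidence interval artificially. The effect sizes and standard errors below are invented for illustration:

```python
# Illustrative sketch: fixed-effect inverse-variance pooling with and without
# double-counting of the same trial. Effects and standard errors are invented.
import math

def pool(effects_and_ses):
    """Return the fixed-effect pooled estimate and its 95% confidence interval."""
    weights = [1 / se**2 for _, se in effects_and_ses]
    pooled = sum(w * effect for (effect, _), w in zip(effects_and_ses, weights)) / sum(weights)
    se_pooled = math.sqrt(1 / sum(weights))
    return pooled, (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)

trial_a, trial_b = (-0.20, 0.10), (-0.10, 0.15)   # (log effect estimate, standard error)

print(pool([trial_a, trial_b]))            # each trial counted once
print(pool([trial_a, trial_b, trial_a]))   # trial A double-counted: artificially narrower CI
```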

A systematic review of reviews allows the creation of a summary of reviews in a single document. In this paper, we have discussed the methods for conducting such a review. The methods we have described and discussed draw on our experiences, and should be useful to healthcare practitioners who wish to conduct a systematic review of reviews to enhance their evidence-based knowledge and to support well-informed clinical decision making. They should also be useful to practitioners who will find that the ideal starting point for knowledge from research will be a systematic review of reviews of the topic of interest to them.

References

Ghersi D, Pang T: From Mexico to Mali: four years in the history of clinical trial registration. Journal of Evidence-Based Medicine. 2009, 2: 1-7. 10.1111/j.1756-5391.2009.01014.x.


Ganann R, Ciliska D, Thomas H: Expediting systematic reviews: methods and implications of rapid reviews. Implementation Science. 2010, 5: 56. 10.1186/1748-5908-5-56.

The Cochrane Collaboration. [ http://www.cochrane.org ]

Bastian H, Glasziou P, Chalmers I: Seventy-five trials and eleven systematic reviews a day: how will we ever keep up. PLoS Medicine. 2010, 7 (9): 10.1371/journal.pmed.1000326.

Moher D, et al: Epidemiology and reporting characteristics of systematic reviews. PLoS Medicine. 2007, 4 (3): e78-10.1371/journal.pmed.0040078.


Smith V, et al: A systematic review and quality assessment of systematic reviews of randomised trials of interventions for preventing and treating preterm birth. Eur J Obstet Gynecol Reprod Biol. 2009, 142: 3-11. 10.1016/j.ejogrb.2008.09.008.

Clarke M: Systematic reviews of reviews of risk factors for intracranial aneurysms. Neuroradiology. 2008, 50: 653-664. 10.1007/s00234-008-0411-9.

Smith V, et al: A systematic review and quality assessment of systematic reviews of fetal fibronectin and transvaginal sonographic cervical length for predicting preterm birth. Eur J Obstet Gynecol Reprod Biol. 2007, 133: 134-142. 10.1016/j.ejogrb.2007.03.005.


Williams C, et al: Cost-effectiveness of using prognostic information to select women with breast cancer for adjuvant systemic therapy. Health Technology Assessment. 2006, 10 (34): 1-204.

Becker L, Oxman AD: Overviews of reviews. Cochrane Handbook for Systematic Reviews of Interventions. Version 5.0.02 [updated September 2009]. Edited by: Higgins JP, Green S. 2009, Oxford: The Cochrane Collaboration


Singh J, et al: Biologics for rheumatoid arthritis: an overview of Cochrane reviews (Protocol). The Cochrane Database of Systematic Reviews. 2009, CD007848-4

Keus F, van Laarhoven CJH: Open, small-incision, or laparoscopic cholecystectomy for patients with symptomatic cholecystolithiasis. An overview of Cochrane Hepato-Biliary Group reviews. Cochrane Database of Systematic Reviews. 2010, CD008318-1

Singh JA, et al: Adverse effects of biologics: a network meta-analysis and Cochrane overview (Protocol). Cochrane Database of Systematic Reviews. 2010, CD008794-10

Ryan R, et al: Consumer-oriented interventions for evidence-based prescribing and medicine use: an overview of Cochrane reviews (Protocol). Cochrane Database of Systematic Reviews. 2009, CD007768-2

Yang M, et al: Interventions for preventing influenza: an overview of Cochrane systematic reviews (Protocol). Cochrane Database of Systematic Reviews. 2010, CD008501-5

Eccles MP, et al: An overview of reviews evaluating the effects of financial incentives in changing healthcare professional behaviours and patient outcomes (Protocol). Cochrane Database of Systematic Reviews. 2010, CD008608-7

Aaserud M, et al: Pharmaceutical policies: effects on rational drug use, an overview of 13 reviews (Protocol). Cochrane Database of Systematic Reviews. 2006, CD004397-2

Critical Reviews Advisory Group: Introduction to systematic reviews. School for Health and Related Research. 1996, [ http://www.shef.ac.uk/scharr ]

Lichtenstein A, Yetley E, Lau J: Application of systematic review methodology to the field of nutrition. Journal of Nutrition. 2008, 2297-2306. 10.3945/jn.108.097154.

Chalmers I, Hedges LV, Cooper H: A brief history of research synthesis. Evaluation and the Health Professions. 2002, 25: 12-37. 10.1177/0163278702025001003.

Starr M, et al: The origins, evolution and future of the Cochrane Database of Systematic Reviews. Cochrane Database of Systematic Reviews. 2009, 25 (suppl 1): 182-195.

Montori VM, et al: Optimal search strategies for retrieving systematic reviews from Medline: analytical survey. BMJ. 2005, 330 (7482): 10.1136/bmj.38336.804167.47.

Wilczynski NL, Haynes RB, Hedges Team: EMBASE search strategies achieved high sensitivity and specificity for retrieving methodologically sound systematic reviews. J Clin Epidemiol. 2005, 60 (1): 29-33. 10.1016/j.jclinepi.2006.04.001.


Sampson M, et al: An evidence-based practice guideline for the peer review of electronic search strategies. J Clin Epidemiol. 2009, 62 (9): 944-52. 10.1016/j.jclinepi.2008.10.012.

Booth A, et al: An international registry of systematic-review protocols. Lancet. 2011, 377 (9760): 108-109. 10.1016/S0140-6736(10)60903-8.

Liberati A, et al: The PRISMA Statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: explanation and elaboration. PLoS Medicine. 2009, 6 (7): e1000100-10.1371/journal.pmed.1000100.

Shea BJ, et al: AMSTAR is a reliable and valid measurement tool to assess the methodological quality of systematic reviews. J Clin Epidemiol. 2009, 62 (10): 1013-20. 10.1016/j.jclinepi.2008.10.009.

Schunemann HJ, et al: Presenting results and 'Summary of findings' tables. In: Cochrane Handbook for Systematic Reviews of Interventions. Version 5.0.02 [updated September 2009]. Edited by: Higgins JP, Green S. 2009, The Cochrane Collaboration: Oxford

Simcox R, et al: Prophylactic antibiotics for the prevention of preterm birth in women at risk: a meta-analysis. Aus NZ J Obs & Gynaecol. 2007, 47: 368-377.

King J, Flenady V: Prophylactic antibiotics for inhibiting preterm labour with intact membranes. The Cochrane Database of Systematic Reviews. 2002, CD000246-

Jadad AR, Cook DJ, Browman GP: A guide to interpreting discordant systematic reviews. CMAJ. 1997, 156 (10): 1411-6.



Author information

Authors and affiliations

School of Nursing and Midwifery, University of Dublin, Trinity College Dublin, 24 D'Olier Street, Dublin 2, Ireland

Valerie Smith & Cecily M Begley

School of Nursing and Midwifery, National University of Ireland, Galway, Galway, Ireland

Declan Devane

UK Cochrane Centre, National Institute for Health Research, Middle Way, Oxford, OX2 7LG, UK

Mike Clarke


Corresponding author

Correspondence to Valerie Smith .

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors' contributions

VS participated in the sequence content and drafted the manuscript. MC conceived and contributed to the rationale for the manuscript. VS, CB, DD and MC contributed to the design of the manuscript. CB, DD and MC read and critically revised the draft manuscript for important intellectual content. All authors read and approved the final manuscript.

Rights and permissions

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License ( http://creativecommons.org/licenses/by/2.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


Cite this article

Smith, V., Devane, D., Begley, C.M. et al. Methodology in conducting a systematic review of systematic reviews of healthcare interventions. BMC Med Res Methodol 11 , 15 (2011). https://doi.org/10.1186/1471-2288-11-15


Received: 04 June 2010

Accepted: 03 February 2011

Published: 03 February 2011

DOI: https://doi.org/10.1186/1471-2288-11-15





Lau F, Kuziemsky C, editors. Handbook of eHealth Evaluation: An Evidence-based Approach [Internet]. Victoria (BC): University of Victoria; 2017 Feb 27.


Chapter 9: Methods for Literature Reviews

Guy Paré and Spyros Kitsiou

9.1. Introduction

Literature reviews play a critical role in scholarship because science remains, first and foremost, a cumulative endeavour ( vom Brocke et al., 2009 ). As in any academic discipline, rigorous knowledge syntheses are becoming indispensable in keeping up with an exponentially growing eHealth literature, assisting practitioners, academics, and graduate students in finding, evaluating, and synthesizing the contents of many empirical and conceptual papers. Among other methods, literature reviews are essential for: (a) identifying what has been written on a subject or topic; (b) determining the extent to which a specific research area reveals any interpretable trends or patterns; (c) aggregating empirical findings related to a narrow research question to support evidence-based practice; (d) generating new frameworks and theories; and (e) identifying topics or questions requiring more investigation ( Paré, Trudel, Jaana, & Kitsiou, 2015 ).

Literature reviews can take two major forms. The most prevalent one is the “literature review” or “background” section within a journal paper or a chapter in a graduate thesis. This section synthesizes the extant literature and usually identifies the gaps in knowledge that the empirical study addresses ( Sylvester, Tate, & Johnstone, 2013 ). It may also provide a theoretical foundation for the proposed study, substantiate the presence of the research problem, justify the research as one that contributes something new to the cumulated knowledge, or validate the methods and approaches for the proposed study ( Hart, 1998 ; Levy & Ellis, 2006 ).

The second form of literature review, which is the focus of this chapter, constitutes an original and valuable work of research in and of itself ( Paré et al., 2015 ). Rather than providing a base for a researcher’s own work, it creates a solid starting point for all members of the community interested in a particular area or topic ( Mulrow, 1987 ). The so-called “review article” is a journal-length paper which has an overarching purpose to synthesize the literature in a field, without collecting or analyzing any primary data ( Green, Johnson, & Adams, 2006 ).

When appropriately conducted, review articles represent powerful information sources for practitioners looking for state-of-the art evidence to guide their decision-making and work practices ( Paré et al., 2015 ). Further, high-quality reviews become frequently cited pieces of work which researchers seek out as a first clear outline of the literature when undertaking empirical studies ( Cooper, 1988 ; Rowe, 2014 ). Scholars who track and gauge the impact of articles have found that review papers are cited and downloaded more often than any other type of published article ( Cronin, Ryan, & Coughlan, 2008 ; Montori, Wilczynski, Morgan, Haynes, & Hedges, 2003 ; Patsopoulos, Analatos, & Ioannidis, 2005 ). The reason for their popularity may be the fact that reading the review enables one to have an overview, if not a detailed knowledge of the area in question, as well as references to the most useful primary sources ( Cronin et al., 2008 ). Although they are not easy to conduct, the commitment to complete a review article provides a tremendous service to one’s academic community ( Paré et al., 2015 ; Petticrew & Roberts, 2006 ). Most, if not all, peer-reviewed journals in the fields of medical informatics publish review articles of some type.

The main objectives of this chapter are fourfold: (a) to provide an overview of the major steps and activities involved in conducting a stand-alone literature review; (b) to describe and contrast the different types of review articles that can contribute to the eHealth knowledge base; (c) to illustrate each review type with one or two examples from the eHealth literature; and (d) to provide a series of recommendations for prospective authors of review articles in this domain.

9.2. Overview of the Literature Review Process and Steps

As explained in Templier and Paré (2015) , there are six generic steps involved in conducting a review article:

  • formulating the research question(s) and objective(s),
  • searching the extant literature,
  • screening for inclusion,
  • assessing the quality of primary studies,
  • extracting data, and
  • analyzing data.

Although these steps are presented here in sequential order, one must keep in mind that the review process can be iterative and that many activities can be initiated during the planning stage and later refined during subsequent phases ( Finfgeld-Connett & Johnson, 2013 ; Kitchenham & Charters, 2007 ).

Formulating the research question(s) and objective(s): As a first step, members of the review team must appropriately justify the need for the review itself ( Petticrew & Roberts, 2006 ), identify the review's main objective(s) ( Okoli & Schabram, 2010 ), and define the concepts or variables at the heart of their synthesis ( Cooper & Hedges, 2009 ; Webster & Watson, 2002 ). Importantly, they also need to articulate the research question(s) they propose to investigate ( Kitchenham & Charters, 2007 ). In this regard, we concur with Jesson, Matheson, and Lacey (2011) that clearly articulated research questions are key ingredients that guide the entire review methodology; they underscore the type of information that is needed, inform the search for and selection of relevant literature, and guide or orient the subsequent analysis.

Searching the extant literature: The next step consists of searching the literature and making decisions about the suitability of material to be considered in the review ( Cooper, 1988 ). There exist three main coverage strategies. First, exhaustive coverage means an effort is made to be as comprehensive as possible in order to ensure that all relevant studies, published and unpublished, are included in the review and, thus, conclusions are based on this all-inclusive knowledge base. The second type of coverage consists of presenting materials that are representative of most other works in a given field or area. Often authors who adopt this strategy will search for relevant articles in a small number of top-tier journals in a field ( Paré et al., 2015 ). In the third strategy, the review team concentrates on prior works that have been central or pivotal to a particular topic. This may include empirical studies or conceptual papers that initiated a line of investigation, changed how problems or questions were framed, introduced new methods or concepts, or engendered important debate ( Cooper, 1988 ).

Screening for inclusion: The following step consists of evaluating the applicability of the material identified in the preceding step ( Levy & Ellis, 2006 ; vom Brocke et al., 2009 ). Once a group of potential studies has been identified, members of the review team must screen them to determine their relevance ( Petticrew & Roberts, 2006 ). A set of predetermined rules provides a basis for including or excluding certain studies. This exercise requires a significant investment on the part of researchers, who must ensure enhanced objectivity and avoid biases or mistakes. As discussed later in this chapter, for certain types of reviews there must be at least two independent reviewers involved in the screening process and a procedure to resolve disagreements must also be in place ( Liberati et al., 2009 ; Shea et al., 2009 ).

Assessing the quality of primary studies: In addition to screening material for inclusion, members of the review team may need to assess the scientific quality of the selected studies, that is, appraise the rigour of the research design and methods. Such formal assessment, which is usually conducted independently by at least two coders, helps members of the review team refine which studies to include in the final sample, determine whether or not the differences in quality may affect their conclusions, or guide how they analyze the data and interpret the findings ( Petticrew & Roberts, 2006 ). Ascribing quality scores to each primary study or considering through domain-based evaluations which study components have or have not been designed and executed appropriately makes it possible to reflect on the extent to which the selected study addresses possible biases and maximizes validity ( Shea et al., 2009 ).

Extracting data: The following step involves gathering or extracting applicable information from each primary study included in the sample and deciding what is relevant to the problem of interest ( Cooper & Hedges, 2009 ). Indeed, the type of data that should be recorded mainly depends on the initial research questions ( Okoli & Schabram, 2010 ). However, important information may also be gathered about how, when, where and by whom the primary study was conducted, the research design and methods, or qualitative/quantitative results ( Cooper & Hedges, 2009 ).

Analyzing and synthesizing data: As a final step, members of the review team must collate, summarize, aggregate, organize, and compare the evidence extracted from the included studies. The extracted data must be presented in a meaningful way that suggests a new contribution to the extant literature ( Jesson et al., 2011 ). Webster and Watson (2002) warn researchers that literature reviews should be much more than lists of papers and should provide a coherent lens to make sense of extant knowledge on a given topic. There exist several methods and techniques for synthesizing quantitative (e.g., frequency analysis, meta-analysis) and qualitative (e.g., grounded theory, narrative analysis, meta-ethnography) evidence ( Dixon-Woods, Agarwal, Jones, Young, & Sutton, 2005 ; Thomas & Harden, 2008 ).

9.3. Types of Review Articles and Brief Illustrations

EHealth researchers have at their disposal a number of approaches and methods for making sense out of existing literature, all with the purpose of casting current research findings into historical contexts or explaining contradictions that might exist among a set of primary research studies conducted on a particular topic. Our classification scheme is largely inspired from Paré and colleagues’ (2015) typology. Below we present and illustrate those review types that we feel are central to the growth and development of the eHealth domain.

9.3.1. Narrative Reviews

The narrative review is the “traditional” way of reviewing the extant literature and is skewed towards a qualitative interpretation of prior knowledge ( Sylvester et al., 2013 ). Put simply, a narrative review attempts to summarize or synthesize what has been written on a particular topic but does not seek generalization or cumulative knowledge from what is reviewed ( Davies, 2000 ; Green et al., 2006 ). Instead, the review team often undertakes the task of accumulating and synthesizing the literature to demonstrate the value of a particular point of view ( Baumeister & Leary, 1997 ). As such, reviewers may selectively ignore or limit the attention paid to certain studies in order to make a point. In this rather unsystematic approach, the selection of information from primary articles is subjective, lacks explicit criteria for inclusion and can lead to biased interpretations or inferences ( Green et al., 2006 ). There are several narrative reviews in the particular eHealth domain, as in all fields, which follow such an unstructured approach ( Silva et al., 2015 ; Paul et al., 2015 ).

Despite these criticisms, this type of review can be very useful in gathering together a volume of literature in a specific subject area and synthesizing it. As mentioned above, its primary purpose is to provide the reader with a comprehensive background for understanding current knowledge and highlighting the significance of new research ( Cronin et al., 2008 ). Faculty like to use narrative reviews in the classroom because they are often more up to date than textbooks, provide a single source for students to reference, and expose students to peer-reviewed literature ( Green et al., 2006 ). For researchers, narrative reviews can inspire research ideas by identifying gaps or inconsistencies in a body of knowledge, thus helping researchers to determine research questions or formulate hypotheses. Importantly, narrative reviews can also be used as educational articles to bring practitioners up to date with certain topics or issues ( Green et al., 2006 ).

Recently, there have been several efforts to introduce more rigour in narrative reviews that will elucidate common pitfalls and bring changes into their publication standards. Information systems researchers, among others, have contributed to advancing knowledge on how to structure a “traditional” review. For instance, Levy and Ellis (2006) proposed a generic framework for conducting such reviews. Their model follows the systematic data processing approach comprised of three steps, namely: (a) literature search and screening; (b) data extraction and analysis; and (c) writing the literature review. They provide detailed and very helpful instructions on how to conduct each step of the review process. As another methodological contribution, vom Brocke et al. (2009) offered a series of guidelines for conducting literature reviews, with a particular focus on how to search and extract the relevant body of knowledge. Last, Bandara, Miskon, and Fielt (2011) proposed a structured, predefined and tool-supported method to identify primary studies within a feasible scope, extract relevant content from identified articles, synthesize and analyze the findings, and effectively write and present the results of the literature review. We highly recommend that prospective authors of narrative reviews consult these useful sources before embarking on their work.

Darlow and Wen (2015) provide a good example of a highly structured narrative review in the eHealth field. These authors synthesized published articles that describe the development process of mobile health ( m-health ) interventions for patients’ cancer care self-management. As in most narrative reviews, the scope of the research questions being investigated is broad: (a) how development of these systems are carried out; (b) which methods are used to investigate these systems; and (c) what conclusions can be drawn as a result of the development of these systems. To provide clear answers to these questions, a literature search was conducted on six electronic databases and Google Scholar . The search was performed using several terms and free text words, combining them in an appropriate manner. Four inclusion and three exclusion criteria were utilized during the screening process. Both authors independently reviewed each of the identified articles to determine eligibility and extract study information. A flow diagram shows the number of studies identified, screened, and included or excluded at each stage of study selection. In terms of contributions, this review provides a series of practical recommendations for m-health intervention development.

9.3.2. Descriptive or Mapping Reviews

The primary goal of a descriptive review is to determine the extent to which a body of knowledge in a particular research topic reveals any interpretable pattern or trend with respect to pre-existing propositions, theories, methodologies or findings ( King & He, 2005 ; Paré et al., 2015 ). In contrast with narrative reviews, descriptive reviews follow a systematic and transparent procedure, including searching, screening and classifying studies ( Petersen, Vakkalanka, & Kuzniarz, 2015 ). Indeed, structured search methods are used to form a representative sample of a larger group of published works ( Paré et al., 2015 ). Further, authors of descriptive reviews extract from each study certain characteristics of interest, such as publication year, research methods, data collection techniques, and direction or strength of research outcomes (e.g., positive, negative, or non-significant) in the form of frequency analysis to produce quantitative results ( Sylvester et al., 2013 ). In essence, each study included in a descriptive review is treated as the unit of analysis and the published literature as a whole provides a database from which the authors attempt to identify any interpretable trends or draw overall conclusions about the merits of existing conceptualizations, propositions, methods or findings ( Paré et al., 2015 ). In doing so, a descriptive review may claim that its findings represent the state of the art in a particular domain ( King & He, 2005 ).
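As a hedged illustration of the frequency analysis described above, the short Python sketch below counts coded characteristics across a set of invented study records; the attribute names and values are placeholders only:

```python
# Illustrative sketch: frequency analysis over coded study characteristics,
# as a descriptive review might report. The records are invented placeholders.
from collections import Counter

coded_studies = [
    {"year": 2014, "method": "survey", "outcome_direction": "positive"},
    {"year": 2015, "method": "case study", "outcome_direction": "non-significant"},
    {"year": 2015, "method": "survey", "outcome_direction": "positive"},
]

for attribute in ("year", "method", "outcome_direction"):
    counts = Counter(study[attribute] for study in coded_studies)
    print(attribute, dict(counts))
```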

In the fields of health sciences and medical informatics, reviews that focus on examining the range, nature and evolution of a topic area are described by Anderson, Allen, Peckham, and Goodwin (2008) as mapping reviews . Like descriptive reviews, the research questions are generic and usually relate to publication patterns and trends. There is no preconceived plan to systematically review all of the literature although this can be done. Instead, researchers often present studies that are representative of most works published in a particular area and they consider a specific time frame to be mapped.

An example of this approach in the eHealth domain is offered by DeShazo, Lavallie, and Wolf (2009). The purpose of this descriptive or mapping review was to characterize publication trends in the medical informatics literature over a 20-year period (1987 to 2006). To achieve this ambitious objective, the authors performed a bibliometric analysis of medical informatics citations indexed in MEDLINE, using publication trends, journal frequencies, impact factors, Medical Subject Headings (MeSH) term frequencies, and characteristics of citations. Findings revealed that there were over 77,000 medical informatics articles published during the covered period in numerous journals and that the average annual growth rate was 12%. The MeSH term analysis also suggested a strong interdisciplinary trend. Finally, average impact scores increased over time with two notable growth periods. Overall, patterns in research outputs that seem to characterize the historic trends and current components of the field of medical informatics suggest it may be a maturing discipline (DeShazo et al., 2009).

9.3.3. Scoping Reviews

Scoping reviews attempt to provide an initial indication of the potential size and nature of the extant literature on an emergent topic (Arksey & O’Malley, 2005; Daudt, van Mossel, & Scott, 2013 ; Levac, Colquhoun, & O’Brien, 2010). A scoping review may be conducted to examine the extent, range and nature of research activities in a particular area, determine the value of undertaking a full systematic review (discussed next), or identify research gaps in the extant literature ( Paré et al., 2015 ). In line with their main objective, scoping reviews usually conclude with the presentation of a detailed research agenda for future works along with potential implications for both practice and research.

Unlike narrative and descriptive reviews, the whole point of scoping the field is to be as comprehensive as possible, including grey literature (Arksey & O’Malley, 2005). Inclusion and exclusion criteria must be established to help researchers eliminate studies that are not aligned with the research questions. It is also recommended that at least two independent coders review abstracts yielded from the search strategy and then the full articles for study selection ( Daudt et al., 2013 ). The synthesized evidence from content or thematic analysis is relatively easy to present in tabular form (Arksey & O’Malley, 2005; Thomas & Harden, 2008 ).
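Where two independent coders screen the same records, their level of agreement is often quantified before disagreements are resolved by discussion. The Python sketch below computes Cohen's kappa on a handful of hypothetical include/exclude decisions; it illustrates the general idea and is not a prescribed step of the scoping-review frameworks cited above.

```python
# A minimal sketch of inter-coder agreement (Cohen's kappa) for dual screening.
# The include/exclude decisions below are hypothetical calls on the same abstracts.
from collections import Counter

coder_a = ["include", "exclude", "include", "exclude", "include", "exclude"]
coder_b = ["include", "exclude", "exclude", "exclude", "include", "exclude"]

n = len(coder_a)
observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n

# Expected chance agreement, from each coder's marginal proportions.
freq_a, freq_b = Counter(coder_a), Counter(coder_b)
expected = sum(
    (freq_a[label] / n) * (freq_b[label] / n) for label in ("include", "exclude")
)

kappa = (observed - expected) / (1 - expected)
print(f"observed={observed:.2f}, expected={expected:.2f}, kappa={kappa:.2f}")
```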

One of the most highly cited scoping reviews in the eHealth domain was published by Archer, Fevrier-Thomas, Lokker, McKibbon, and Straus (2011) . These authors reviewed the existing literature on personal health record (PHR) systems, including design, functionality, implementation, applications, outcomes, and benefits. Seven databases were searched from 1985 to March 2010, using several search terms relating to PHRs. Two authors independently screened titles and abstracts to determine inclusion status. A second screen of full-text articles, again by two independent members of the research team, ensured that the studies described PHRs. All in all, 130 articles met the criteria and their data were extracted manually into a database. The authors concluded that although there is a large amount of survey, observational, cohort/panel, and anecdotal evidence of PHR benefits and of patient satisfaction, more research is needed to evaluate the results of PHR implementations. Their in-depth analysis of the literature signalled that there is little solid evidence from randomized controlled trials or other studies on the effects of PHR use. Hence, they suggested that more research is needed to address the current lack of understanding of the optimal functionality and usability of these systems, and of how they can play a beneficial role in supporting patient self-management ( Archer et al., 2011 ).

9.3.4. Forms of Aggregative Reviews

Healthcare providers, practitioners, and policy-makers are nowadays overwhelmed with large volumes of information, including research-based evidence from numerous clinical trials and evaluation studies, assessing the effectiveness of health information technologies and interventions ( Ammenwerth & de Keizer, 2004 ; Deshazo et al., 2009 ). It is unrealistic to expect that all these disparate actors will have the time, skills, and necessary resources to identify the available evidence in the area of their expertise and consider it when making decisions. Systematic reviews that involve the rigorous application of scientific strategies aimed at limiting subjectivity and bias (i.e., systematic and random errors) can respond to this challenge.

Systematic reviews attempt to aggregate, appraise, and synthesize in a single source all empirical evidence that meets a set of previously specified eligibility criteria in order to answer a clearly formulated and often narrow research question on a particular topic of interest, with the aim of supporting evidence-based practice ( Liberati et al., 2009 ). They adhere closely to explicit scientific principles ( Liberati et al., 2009 ) and rigorous methodological guidelines (Higgins & Green, 2008) aimed at reducing random and systematic errors that can lead to deviations from the truth in results or inferences. The use of explicit methods allows systematic reviews to aggregate a large body of research evidence, assess whether effects or relationships are in the same direction and of the same general magnitude, explain possible inconsistencies between study results, and determine the strength of the overall evidence for every outcome of interest based on the quality of the included studies and the general consistency among them ( Cook, Mulrow, & Haynes, 1997 ). The main procedures of a systematic review involve:

  • Formulating a review question and developing a search strategy based on explicit inclusion criteria for the identification of eligible studies (usually described in the context of a detailed review protocol); a minimal sketch of how such a strategy can be turned into a Boolean search string follows this list.
  • Searching for eligible studies using multiple databases and information sources, including grey literature sources, without any language restrictions.
  • Selecting studies, extracting data, and assessing risk of bias in a duplicate manner using two independent reviewers to avoid random or systematic errors in the process.
  • Analyzing data using quantitative or qualitative methods.
  • Presenting results in summary of findings tables.
  • Interpreting results and drawing conclusions.
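As an illustration of the first two procedures, the following Python sketch assembles a Boolean search string from a set of PICO-style concept groups. The concepts and synonyms are hypothetical, and real search strategies are normally tailored to the syntax of each database being searched.

```python
# A minimal, hypothetical sketch of turning PICO-style concept groups into a
# Boolean search string of the kind submitted to bibliographic databases.
pico_concepts = {
    "population": ["adults", "outpatients"],
    "intervention": ["text message*", "SMS reminder*"],
    "outcome": ["appointment attendance", "no-show*"],
}

def build_search_string(concepts):
    """OR the synonyms within each concept, then AND the concept groups together."""
    groups = []
    for terms in concepts.values():
        groups.append("(" + " OR ".join(f'"{t}"' for t in terms) + ")")
    return " AND ".join(groups)

print(build_search_string(pico_concepts))
```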

Many systematic reviews, but not all, use statistical methods to combine the results of independent studies into a single quantitative estimate or summary effect size. Known as meta-analyses , these reviews use specific data extraction and statistical techniques (e.g., network, frequentist, or Bayesian meta-analyses) to calculate, for each study and each outcome of interest, an effect size along with a confidence interval that reflects the degree of uncertainty behind the point estimate of effect ( Borenstein, Hedges, Higgins, & Rothstein, 2009 ; Deeks, Higgins, & Altman, 2008 ). Subsequently, they use fixed-effect or random-effects models to combine the results of the included studies, assess statistical heterogeneity, and calculate a weighted average of the effect estimates, with weights that reflect the precision (and hence, in large part, the sample size) of each study. The summary effect size is a value that reflects the average magnitude of the intervention effect for a particular outcome of interest or, more generally, the strength of a relationship between two variables across all studies included in the systematic review. By statistically combining data from multiple studies, meta-analyses can produce more precise and reliable estimates of intervention effects than those derived from individual studies examined independently as discrete sources of information.
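As a minimal illustration of the pooling logic, the Python sketch below computes a fixed-effect, inverse-variance weighted average and its 95% confidence interval from a few hypothetical study effect sizes; real meta-analyses typically rely on dedicated packages and often on random-effects models instead.

```python
# A minimal sketch of fixed-effect, inverse-variance pooling: each study's
# effect estimate is weighted by the inverse of its variance (so larger,
# more precise studies count for more), and a 95% confidence interval is
# placed around the weighted average. The effect sizes below are hypothetical.
import math

studies = [  # (effect size, standard error)
    (0.30, 0.12),
    (0.45, 0.20),
    (0.18, 0.09),
]

weights = [1 / se**2 for _, se in studies]
pooled = sum(w * es for (es, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

ci_low, ci_high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"pooled effect = {pooled:.3f} (95% CI {ci_low:.3f} to {ci_high:.3f})")
```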

The review by Gurol-Urganci, de Jongh, Vodopivec-Jamsek, Atun, and Car (2013) on the effects of mobile phone messaging reminders for attendance at healthcare appointments is an illustrative example of a high-quality systematic review with meta-analysis. Missed appointments are a major cause of inefficiency in healthcare delivery, with substantial monetary costs to health systems. These authors sought to assess whether mobile phone-based appointment reminders delivered through Short Message Service (SMS) or Multimedia Messaging Service (MMS) are effective in improving rates of patient attendance and reducing overall costs. To this end, they conducted a comprehensive search of multiple databases using highly sensitive search strategies, without language or publication-type restrictions, to identify all eligible randomized controlled trials (RCTs). To minimize the risk of omitting eligible studies not captured by the original search, they supplemented all electronic searches with manual screening of trial registers and of the references contained in the included studies. Study selection, data extraction, and risk of bias assessments were performed independently by two coders using standardized methods to ensure consistency and to eliminate potential errors. Findings from eight RCTs involving 6,615 participants were pooled into meta-analyses to calculate the magnitude of the effects that mobile text message reminders have on the rate of attendance at healthcare appointments compared to no reminders and to phone call reminders.

Meta-analyses are regarded as powerful tools for deriving meaningful conclusions. However, there are situations in which it is neither reasonable nor appropriate to pool studies together using meta-analytic methods, for instance because of extensive clinical heterogeneity between the included studies or variation in measurement tools, comparisons, or outcomes of interest. In these cases, systematic reviews can use qualitative synthesis methods such as vote counting, content analysis, classification schemes and tabulations as an alternative approach to narratively synthesize the results of the independent studies included in the review. This form of review is known as a qualitative systematic review.

A rigorous example of one such review in the eHealth domain is presented by Mickan, Atherton, Roberts, Heneghan, and Tilson (2014) on the use of handheld computers by healthcare professionals and their impact on access to information and clinical decision-making. In line with the methodological guidelines for systematic reviews, these authors: (a) developed and registered with PROSPERO ( www.crd.york.ac.uk/prospero/ ) an a priori review protocol; (b) conducted comprehensive searches for eligible studies using multiple databases and other supplementary strategies (e.g., forward searches); and (c) subsequently carried out study selection, data extraction, and risk of bias assessments in a duplicate manner to eliminate potential errors in the review process. Heterogeneity between the included studies in terms of reported outcomes and measures precluded the use of meta-analytic methods. Consequently, the authors used narrative analysis and synthesis to describe the effectiveness of handheld computers on accessing information for clinical knowledge, adherence to safety and clinical quality guidelines, and diagnostic decision-making.

In recent years, the number of systematic reviews in the field of health informatics has increased considerably. Systematic reviews with discordant findings can cause great confusion and make it difficult for decision-makers to interpret the review-level evidence ( Moher, 2013 ). Therefore, there is a growing need for appraisal and synthesis of prior systematic reviews to ensure that decision-making is constantly informed by the best available accumulated evidence. Umbrella reviews , also known as overviews of systematic reviews, are tertiary types of evidence synthesis that aim to accomplish this; that is, they aim to compare and contrast findings from multiple systematic reviews and meta-analyses ( Becker & Oxman, 2008 ). Umbrella reviews generally adhere to the same principles and rigorous methodological guidelines used in systematic reviews. However, the unit of analysis in umbrella reviews is the systematic review rather than the primary study ( Becker & Oxman, 2008 ). Unlike systematic reviews that have a narrow focus of inquiry, umbrella reviews focus on broader research topics for which there are several potential interventions ( Smith, Devane, Begley, & Clarke, 2011 ). A recent umbrella review on the effects of home telemonitoring interventions for patients with heart failure critically appraised, compared, and synthesized evidence from 15 systematic reviews to investigate which types of home telemonitoring technologies and forms of interventions are more effective in reducing mortality and hospital admissions ( Kitsiou, Paré, & Jaana, 2015 ).

9.3.5. Realist Reviews

Realist reviews are theory-driven interpretative reviews developed to inform, enhance, or supplement conventional systematic reviews by making sense of heterogeneous evidence about complex interventions applied in diverse contexts in a way that informs policy decision-making ( Greenhalgh, Wong, Westhorp, & Pawson, 2011 ). They originated from criticisms of positivist systematic reviews which centre on their “simplistic” underlying assumptions ( Oates, 2011 ). As explained above, systematic reviews seek to identify causation. Such logic is appropriate for fields like medicine and education where findings of randomized controlled trials can be aggregated to see whether a new treatment or intervention does improve outcomes. However, many argue that it is not possible to establish such direct causal links between interventions and outcomes in fields such as social policy, management, and information systems where for any intervention there is unlikely to be a regular or consistent outcome ( Oates, 2011 ; Pawson, 2006 ; Rousseau, Manning, & Denyer, 2008 ).

To circumvent these limitations, Pawson, Greenhalgh, Harvey, and Walshe (2005) have proposed a new approach for synthesizing knowledge that seeks to unpack the mechanism of how “complex interventions” work in particular contexts. The basic research question — what works? — which is usually associated with systematic reviews changes to: what is it about this intervention that works, for whom, in what circumstances, in what respects and why? Realist reviews have no particular preference for either quantitative or qualitative evidence. As a theory-building approach, a realist review usually starts by articulating likely underlying mechanisms and then scrutinizes available evidence to find out whether and where these mechanisms are applicable ( Shepperd et al., 2009 ). Primary studies found in the extant literature are viewed as case studies which can test and modify the initial theories ( Rousseau et al., 2008 ).

The main objective pursued in the realist review conducted by Otte-Trojel, de Bont, Rundall, and van de Klundert (2014) was to examine how patient portals contribute to health service delivery and patient outcomes. The specific goals were to investigate how outcomes are produced and, most importantly, how variations in outcomes can be explained. The research team started with an exploratory review of background documents and research studies to identify ways in which patient portals may contribute to health service delivery and patient outcomes. The authors identified six main ways which represent “educated guesses” to be tested against the data in the evaluation studies. These studies were identified through a formal and systematic search in four databases between 2003 and 2013. Two members of the research team selected the articles using a pre-established list of inclusion and exclusion criteria and following a two-step procedure. The authors then extracted data from the selected articles and created several tables, one for each outcome category. They organized information to bring forward those mechanisms where patient portals contribute to outcomes and the variation in outcomes across different contexts.

9.3.6. Critical Reviews

Lastly, critical reviews aim to provide a critical evaluation and interpretive analysis of existing literature on a particular topic of interest to reveal strengths, weaknesses, contradictions, controversies, inconsistencies, and/or other important issues with respect to theories, hypotheses, research methods or results ( Baumeister & Leary, 1997 ; Kirkevold, 1997 ). Unlike other review types, critical reviews attempt to take a reflective account of the research that has been done in a particular area of interest, and assess its credibility by using appraisal instruments or critical interpretive methods. In this way, critical reviews attempt to constructively inform other scholars about the weaknesses of prior research and strengthen knowledge development by giving focus and direction to studies for further improvement ( Kirkevold, 1997 ).

Kitsiou, Paré, and Jaana (2013) provide an example of a critical review that assessed the methodological quality of prior systematic reviews of home telemonitoring studies for chronic patients. The authors conducted a comprehensive search on multiple databases to identify eligible reviews and subsequently used a validated instrument to conduct an in-depth quality appraisal. Results indicate that the majority of systematic reviews in this particular area suffer from important methodological flaws and biases that impair their internal validity and limit their usefulness for clinical and decision-making purposes. On this basis, the authors provide a number of recommendations to strengthen knowledge development towards improving the design and execution of future reviews on home telemonitoring.

9.4. Summary

Table 9.1 outlines the main types of literature reviews that were described in the previous sub-sections and summarizes the main characteristics that distinguish one review type from another. It also includes key references to methodological guidelines and useful sources that can be used by eHealth scholars and researchers for planning and developing reviews.

Table 9.1. Typology of Literature Reviews (adapted from Paré et al., 2015).


As shown in Table 9.1 , each review type addresses different kinds of research questions or objectives, which subsequently define and dictate the methods and approaches that need to be used to achieve the overarching goal(s) of the review. For example, in the case of narrative reviews, there is greater flexibility in searching and synthesizing articles ( Green et al., 2006 ). Researchers are often relatively free to use a diversity of approaches to search, identify, and select relevant scientific articles, describe their operational characteristics, present how the individual studies fit together, and formulate conclusions. On the other hand, systematic reviews are characterized by their high level of systematicity, rigour, and use of explicit methods, based on an “a priori” review plan that aims to minimize bias in the analysis and synthesis process (Higgins & Green, 2008). Some reviews are exploratory in nature (e.g., scoping/mapping reviews), whereas others may be conducted to discover patterns (e.g., descriptive reviews) or involve a synthesis approach that may include the critical analysis of prior research ( Paré et al., 2015 ). Hence, before embarking on a review project, it is critical to know why the research synthesis is being conducted and which methods are best aligned with the pursued goals, so that the most appropriate type of review can be selected.

9.5. Concluding Remarks

In light of the increased use of evidence-based practice and research generating stronger evidence ( Grady et al., 2011 ; Lyden et al., 2013 ), review articles have become essential tools for summarizing, synthesizing, integrating or critically appraising prior knowledge in the eHealth field. As mentioned earlier, when rigorously conducted, review articles represent powerful information sources for eHealth scholars and practitioners looking for state-of-the-art evidence. The typology of literature reviews used herein will allow eHealth researchers, graduate students and practitioners to gain a better understanding of the similarities and differences between review types.

We must stress that this classification scheme does not privilege any specific type of review as being of higher quality than another ( Paré et al., 2015 ). As explained above, each type of review has its own strengths and limitations. Having said that, we realize that the methodological rigour of any review — be it qualitative, quantitative or mixed — is a critical aspect that should be considered seriously by prospective authors. In the present context, the notion of rigour refers to the reliability and validity of the review process described in section 9.2. Reliability is related to the reproducibility of the review process and steps, which is facilitated by comprehensive documentation of the literature search, extraction, coding and analysis performed in the review. Whether or not the search is comprehensive, and whether or not it involves a methodical approach to data extraction and synthesis, it is important that the review documents, in an explicit and transparent manner, the steps and approach that were used in its development. Validity, in turn, characterizes the degree to which the review process was conducted appropriately. It goes beyond documentation and reflects decisions related to the selection of the sources, the search terms used, the period of time covered, the articles selected in the search, and the application of backward and forward searches ( vom Brocke et al., 2009 ). In short, the rigour of any review article is reflected by the explicitness of its methods (i.e., transparency) and the soundness of the approach used. We refer those interested in the concepts of rigour and quality to the work of Templier and Paré (2015), which offers a detailed set of methodological guidelines for conducting and evaluating various types of review articles.

To conclude, our main objective in this chapter was to demystify the various types of literature reviews that are central to the continuous development of the eHealth field. It is our hope that our descriptive account will serve as a valuable source for those conducting, evaluating or using reviews in this important and growing domain.

  • Ammenwerth E., de Keizer N. An inventory of evaluation studies of information technology in health care. Trends in evaluation research, 1982-2002. International Journal of Medical Informatics. 2004; 44 (1):44–56. [ PubMed : 15778794 ]
  • Anderson S., Allen P., Peckham S., Goodwin N. Asking the right questions: scoping studies in the commissioning of research on the organisation and delivery of health services. Health Research Policy and Systems. 2008; 6 (7):1–12. [ PMC free article : PMC2500008 ] [ PubMed : 18613961 ] [ CrossRef ]
  • Archer N., Fevrier-Thomas U., Lokker C., McKibbon K. A., Straus S.E. Personal health records: a scoping review. Journal of American Medical Informatics Association. 2011; 18 (4):515–522. [ PMC free article : PMC3128401 ] [ PubMed : 21672914 ]
  • Arksey H., O’Malley L. Scoping studies: towards a methodological framework. International Journal of Social Research Methodology. 2005; 8 (1):19–32.
  • Bandara W., Miskon S., Fielt E. A systematic, tool-supported method for conducting literature reviews in information systems. Paper presented at the Proceedings of the 19th European Conference on Information Systems (ECIS 2011); June 9 to 11; Helsinki, Finland. 2011.
  • Baumeister R. F., Leary M.R. Writing narrative literature reviews. Review of General Psychology. 1997; 1 (3):311–320.
  • Becker L. A., Oxman A.D. In: Cochrane handbook for systematic reviews of interventions. Higgins J. P. T., Green S., editors. Hoboken, nj : John Wiley & Sons, Ltd; 2008. Overviews of reviews; pp. 607–631.
  • Borenstein M., Hedges L., Higgins J., Rothstein H. Introduction to meta-analysis. Hoboken, nj : John Wiley & Sons Inc; 2009.
  • Cook D. J., Mulrow C. D., Haynes B. Systematic reviews: Synthesis of best evidence for clinical decisions. Annals of Internal Medicine. 1997; 126 (5):376–380. [ PubMed : 9054282 ]
  • Cooper H., Hedges L.V. In: The handbook of research synthesis and meta-analysis. 2nd ed. Cooper H., Hedges L. V., Valentine J. C., editors. New York: Russell Sage Foundation; 2009. Research synthesis as a scientific process; pp. 3–17.
  • Cooper H. M. Organizing knowledge syntheses: A taxonomy of literature reviews. Knowledge in Society. 1988; 1 (1):104–126.
  • Cronin P., Ryan F., Coughlan M. Undertaking a literature review: a step-by-step approach. British Journal of Nursing. 2008; 17 (1):38–43. [ PubMed : 18399395 ]
  • Darlow S., Wen K.Y. Development testing of mobile health interventions for cancer patient self-management: A review. Health Informatics Journal. 2015 (online before print). [ PubMed : 25916831 ] [ CrossRef ]
  • Daudt H. M., van Mossel C., Scott S.J. Enhancing the scoping study methodology: a large, inter-professional team’s experience with Arksey and O’Malley’s framework. BMC Medical Research Methodology. 2013; 13 :48. [ PMC free article : PMC3614526 ] [ PubMed : 23522333 ] [ CrossRef ]
  • Davies P. The relevance of systematic reviews to educational policy and practice. Oxford Review of Education. 2000; 26 (3-4):365–378.
  • Deeks J. J., Higgins J. P. T., Altman D.G. In: Cochrane handbook for systematic reviews of interventions. Higgins J. P. T., Green S., editors. Hoboken, nj : John Wiley & Sons, Ltd; 2008. Analysing data and undertaking meta-analyses; pp. 243–296.
  • DeShazo J. P., Lavallie D. L., Wolf F.M. Publication trends in the medical informatics literature: 20 years of “Medical Informatics” in MeSH. BMC Medical Informatics and Decision Making. 2009; 9 :7. [ PMC free article : PMC2652453 ] [ PubMed : 19159472 ] [ CrossRef ]
  • Dixon-Woods M., Agarwal S., Jones D., Young B., Sutton A. Synthesising qualitative and quantitative evidence: a review of possible methods. Journal of Health Services Research and Policy. 2005; 10 (1):45–53. [ PubMed : 15667704 ]
  • Finfgeld-Connett D., Johnson E.D. Literature search strategies for conducting knowledge-building and theory-generating qualitative systematic reviews. Journal of Advanced Nursing. 2013; 69 (1):194–204. [ PMC free article : PMC3424349 ] [ PubMed : 22591030 ]
  • Grady B., Myers K. M., Nelson E. L., Belz N., Bennett L., Carnahan L. … Guidelines Working Group. Evidence-based practice for telemental health. Telemedicine Journal and E Health. 2011; 17 (2):131–148. [ PubMed : 21385026 ]
  • Green B. N., Johnson C. D., Adams A. Writing narrative literature reviews for peer-reviewed journals: secrets of the trade. Journal of Chiropractic Medicine. 2006; 5 (3):101–117. [ PMC free article : PMC2647067 ] [ PubMed : 19674681 ]
  • Greenhalgh T., Wong G., Westhorp G., Pawson R. Protocol–realist and meta-narrative evidence synthesis: evolving standards (RAMESES). BMC Medical Research Methodology. 2011; 11 :115. [ PMC free article : PMC3173389 ] [ PubMed : 21843376 ]
  • Gurol-Urganci I., de Jongh T., Vodopivec-Jamsek V., Atun R., Car J. Mobile phone messaging reminders for attendance at healthcare appointments. Cochrane Database of Systematic Reviews. 2013;(12):CD007458. [ PMC free article : PMC6485985 ] [ PubMed : 24310741 ] [ CrossRef ]
  • Hart C. Doing a literature review: Releasing the social science research imagination. London: SAGE Publications; 1998.
  • Higgins J. P. T., Green S., editors. Cochrane handbook for systematic reviews of interventions: Cochrane book series. Hoboken, nj : Wiley-Blackwell; 2008.
  • Jesson J., Matheson L., Lacey F.M. Doing your literature review: traditional and systematic techniques. Los Angeles & London: SAGE Publications; 2011.
  • King W. R., He J. Understanding the role and methods of meta-analysis in IS research. Communications of the Association for Information Systems. 2005; 16 :1.
  • Kirkevold M. Integrative nursing research — an important strategy to further the development of nursing science and nursing practice. Journal of Advanced Nursing. 1997; 25 (5):977–984. [ PubMed : 9147203 ]
  • Kitchenham B., Charters S. Guidelines for performing systematic literature reviews in software engineering. EBSE Technical Report Version 2.3. Keele & Durham, UK: Keele University & University of Durham; 2007.
  • Kitsiou S., Paré G., Jaana M. Systematic reviews and meta-analyses of home telemonitoring interventions for patients with chronic diseases: a critical assessment of their methodological quality. Journal of Medical Internet Research. 2013; 15 (7):e150. [ PMC free article : PMC3785977 ] [ PubMed : 23880072 ]
  • Kitsiou S., Paré G., Jaana M. Effects of home telemonitoring interventions on patients with chronic heart failure: an overview of systematic reviews. Journal of Medical Internet Research. 2015; 17 (3):e63. [ PMC free article : PMC4376138 ] [ PubMed : 25768664 ]
  • Levac D., Colquhoun H., O’Brien K. K. Scoping studies: advancing the methodology. Implementation Science. 2010; 5 (1):69. [ PMC free article : PMC2954944 ] [ PubMed : 20854677 ]
  • Levy Y., Ellis T.J. A systems approach to conduct an effective literature review in support of information systems research. Informing Science. 2006; 9 :181–211.
  • Liberati A., Altman D. G., Tetzlaff J., Mulrow C., Gøtzsche P. C., Ioannidis J. P. A. et al. Moher D. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: Explanation and elaboration. Annals of Internal Medicine. 2009; 151 (4):W-65. [ PubMed : 19622512 ]
  • Lyden J. R., Zickmund S. L., Bhargava T. D., Bryce C. L., Conroy M. B., Fischer G. S. et al. McTigue K. M. Implementing health information technology in a patient-centered manner: Patient experiences with an online evidence-based lifestyle intervention. Journal for Healthcare Quality. 2013; 35 (5):47–57. [ PubMed : 24004039 ]
  • Mickan S., Atherton H., Roberts N. W., Heneghan C., Tilson J.K. Use of handheld computers in clinical practice: a systematic review. BMC Medical Informatics and Decision Making. 2014; 14 :56. [ PMC free article : PMC4099138 ] [ PubMed : 24998515 ]
  • Moher D. The problem of duplicate systematic reviews. British Medical Journal. 2013; 347 (5040) [ PubMed : 23945367 ] [ CrossRef ]
  • Montori V. M., Wilczynski N. L., Morgan D., Haynes R. B., Hedges T. Systematic reviews: a cross-sectional study of location and citation counts. bmc Medicine. 2003; 1 :2. [ PMC free article : PMC281591 ] [ PubMed : 14633274 ]
  • Mulrow C. D. The medical review article: state of the science. Annals of Internal Medicine. 1987; 106 (3):485–488. [ PubMed : 3813259 ] [ CrossRef ]
  • Oates B. J. Evidence-based information systems: A decade later. Proceedings of the European Conference on Information Systems; 2011. Retrieved from http://aisel.aisnet.org/cgi/viewcontent.cgi?article=1221&context=ecis2011 .
  • Okoli C., Schabram K. A guide to conducting a systematic literature review of information systems research. SSRN Electronic Journal. 2010.
  • Otte-Trojel T., de Bont A., Rundall T. G., van de Klundert J. How outcomes are achieved through patient portals: a realist review. Journal of American Medical Informatics Association. 2014; 21 (4):751–757. [ PMC free article : PMC4078283 ] [ PubMed : 24503882 ]
  • Paré G., Trudel M.-C., Jaana M., Kitsiou S. Synthesizing information systems knowledge: A typology of literature reviews. Information & Management. 2015; 52 (2):183–199.
  • Patsopoulos N. A., Analatos A. A., Ioannidis J.P. A. Relative citation impact of various study designs in the health sciences. Journal of the American Medical Association. 2005; 293 (19):2362–2366. [ PubMed : 15900006 ]
  • Paul M. M., Greene C. M., Newton-Dame R., Thorpe L. E., Perlman S. E., McVeigh K. H., Gourevitch M.N. The state of population health surveillance using electronic health records: A narrative review. Population Health Management. 2015; 18 (3):209–216. [ PubMed : 25608033 ]
  • Pawson R. Evidence-based policy: a realist perspective. London: SAGE Publications; 2006.
  • Pawson R., Greenhalgh T., Harvey G., Walshe K. Realist review—a new method of systematic review designed for complex policy interventions. Journal of Health Services Research & Policy. 2005; 10 (Suppl 1):21–34. [ PubMed : 16053581 ]
  • Petersen K., Vakkalanka S., Kuzniarz L. Guidelines for conducting systematic mapping studies in software engineering: An update. Information and Software Technology. 2015; 64 :1–18.
  • Petticrew M., Roberts H. Systematic reviews in the social sciences: A practical guide. Malden, ma : Blackwell Publishing Co; 2006.
  • Rousseau D. M., Manning J., Denyer D. Evidence in management and organizational science: Assembling the field’s full weight of scientific knowledge through syntheses. The Academy of Management Annals. 2008; 2 (1):475–515.
  • Rowe F. What literature review is not: diversity, boundaries and recommendations. European Journal of Information Systems. 2014; 23 (3):241–255.
  • Shea B. J., Hamel C., Wells G. A., Bouter L. M., Kristjansson E., Grimshaw J. et al. Boers M. AMSTAR is a reliable and valid measurement tool to assess the methodological quality of systematic reviews. Journal of Clinical Epidemiology. 2009; 62 (10):1013–1020. [ PubMed : 19230606 ]
  • Shepperd S., Lewin S., Straus S., Clarke M., Eccles M. P., Fitzpatrick R. et al. Sheikh A. Can we systematically review studies that evaluate complex interventions? PLoS Medicine. 2009; 6 (8):e1000086. [ PMC free article : PMC2717209 ] [ PubMed : 19668360 ]
  • Silva B. M., Rodrigues J. J., de la Torre Díez I., López-Coronado M., Saleem K. Mobile-health: A review of current state in 2015. Journal of Biomedical Informatics. 2015; 56 :265–272. [ PubMed : 26071682 ]
  • Smith V., Devane D., Begley C., Clarke M. Methodology in conducting a systematic review of systematic reviews of healthcare interventions. BMC Medical Research Methodology. 2011; 11 (1):15. [ PMC free article : PMC3039637 ] [ PubMed : 21291558 ]
  • Sylvester A., Tate M., Johnstone D. Beyond synthesis: re-presenting heterogeneous research literature. Behaviour & Information Technology. 2013; 32 (12):1199–1215.
  • Templier M., Paré G. A framework for guiding and evaluating literature reviews. Communications of the Association for Information Systems. 2015; 37 (6):112–137.
  • Thomas J., Harden A. Methods for the thematic synthesis of qualitative research in systematic reviews. BMC Medical Research Methodology. 2008; 8 (1):45. [ PMC free article : PMC2478656 ] [ PubMed : 18616818 ]
  • vom Brocke J., Simons A., Niehaves B., Riemer K., Plattfaut R., Cleven A. Reconstructing the giant: on the importance of rigour in documenting the literature search process. Paper presented at the Proceedings of the 17th European Conference on Information Systems (ECIS 2009); Verona, Italy. 2009.
  • Webster J., Watson R.T. Analyzing the past to prepare for the future: Writing a literature review. Management Information Systems Quarterly. 2002; 26 (2):11.
  • Whitlock E. P., Lin J. S., Chou R., Shekelle P., Robinson K.A. Using existing systematic reviews in complex systematic reviews. Annals of Internal Medicine. 2008; 148 (10):776–782. [ PubMed : 18490690 ]

This publication is licensed under a Creative Commons License, Attribution-Noncommercial 4.0 International License (CC BY-NC 4.0): see https://creativecommons.org/licenses/by-nc/4.0/

This chapter should be cited as: Paré G, Kitsiou S. Chapter 9: Methods for Literature Reviews. In: Lau F, Kuziemsky C, editors. Handbook of eHealth Evaluation: An Evidence-based Approach [Internet]. Victoria (BC): University of Victoria; 2017 Feb 27.

Systematic Reviews & Other Review Types

What is an Overview of Reviews?

"The intent of this kind of review is to include systematic reviews or meta-analyses as the main study type and thus examine only the highest level of evidence." 

"The aim is not to repeat the searches, assess study eligibility, and assess risk of bias from included studies but rather to provide an overall picture of findings."

Source: Blackwood D (2016)

Overviews of Reviews are used:

  • For synthesizing and combining relevant data from existing systematic reviews or meta-analyses to make better decisions.
  • For providing clinical decision makers with the evidence they need when there are too many systematic reviews on an intervention for them to keep up with.
  • When decision-makers need a relatively rapid evidence synthesis but require higher-quality evidence than rapid review methodology can provide.

How an Overview of Reviews differs from a Systematic Review

Timeframe:  Approximately 12 months or less. 

Question:  A broad or narrow PICO of 1 or more interventions.

Sources and searches:  Locating only the highest level of evidence, published or unpublished:

  • Systematic Reviews or
  • Meta-analyses

The aim is not to identify any new studies.

Selection:  Based upon clear inclusion/exclusion criteria and outcome measures defined a priori. 

Appraisal:  Critical appraisal of the included systematic reviews, performed by at least two independent reviewers using an appropriate tool (e.g., AMSTAR 2).

Synthesis: May need to alter the meta-analyses in the included reviews or re-synthesize the data while extracting and combining the relevant data.  A final 'Summary of Evidence' table should be present.

Overview of Reviews Resources

  • Blackwood, D. Taking it to the next level...reviews of systematic reviews. HLA News. Winter 2016;13-15.
  • Smith V, Devane D, Begley C, Clarke M. Methodology in conducting a systematic review of systematic reviews of healthcare interventions. BMC Medical Research Methodology. 2011;11(1):15.

Limitations of an Overview of Reviews

  • Is often mistaken for a simple summary of reviews, but it must include the re-synthesis of data.
  • Is sometimes assumed to be as quick as a rapid review, which it is not.
  • Does not include study types other than systematic reviews and meta-analyses.
  • Requires a systematic review expert to critically appraise the included systematic reviews.
  • Is likely to limit the search to 1990 onward, since few research syntheses were published before then.

Other names for an Overview of Reviews

Systematic Review of Systematic Reviews, Umbrella Review, Review of Reviews, Summary of Systematic Reviews, Synthesis of Reviews, Review of Systematic Reviews, Review of Meta-analyses, Meta-review, Systematic Meta-review

Temple Attribution

Adapted with permission from Temple University Libraries. https://guides.temple.edu/systematicreviews

  • Last Updated: Mar 13, 2024 4:27 PM
  • URL: https://touromed.libguides.com/review_types

TechRepublic

Review Methodology for CRM Software


By Allyssa Haygood-Taylor

At TechRepublic, we uphold the standard of fair and honest reviews of CRM software . To successfully do this, we believe it’s important to disclose exactly how we evaluate CRM software, what criteria and subcriteria we’ve defined as most important and how they influence our final ratings and ideal use cases.

Our algorithm and in-house rubric might change or reflect different subcriteria as needed. This is done to ensure accuracy and consistency with industry standards, product evolution and customers’ changing needs. To date, each CRM product is rated against six main criteria: price, core features, customizations, integrations, ease of use and customer support.

Our CRM review methodology breaks down as follows.

Pricing (25%)

Provider cost accounts for 25% of our total score. This category’s ranking factors include, but are not limited to, the following:

  • Free trial length.
  • Billing options.
  • Free-for-life version.
  • Price transparency.

Core features (25%)

Core CRM features account for 25% of our total score. This category’s ranking factors include, but are not limited to, the following:

  • Contact and account management.
  • Pipeline management.
  • Basic reports.
  • Basic dashboards.
  • Marketing automation.
  • Activity tracking.
  • Lead scoring and routing.
  • Document management.
  • Sales automation.

Customizations (15%)

Customizations account for 15% of our total score. This category’s ranking factors include, but are not limited to, the following:

  • Custom pipelines.
  • Custom views and filters.
  • Custom workflows.
  • Custom deal stages and milestones.

Integrations (15%)

Integrations account for 15% of our total score. This category’s ranking factors include, but are not limited to, the following:

  • Number of total app integrations.
  • Email integration.
  • Calendar syncs/scheduling.
  • Social media integration.
  • Native app integration.

Ease of use (10%)

Ease of use accounts for 10% of our total score. This category’s ranking factors include, but are not limited to, the following:

  • Real user reviews.
  • Community resources.
  • Knowledge base.

Customer support (10%)

Customer support accounts for 10% of our total score. This category’s ranking factors include, but are not limited to, the following:

  • Live chat support.
  • Phone support.
  • Email support.
  • Onboarding resources.
  • Dedicated account manager.
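
TechRepublic does not publish the exact formula behind its scores, but the rubric above reduces naturally to a weighted average. The Python sketch below shows one way such a score could be computed; the 0-to-5 rating scale and the example ratings are hypothetical.

```python
# A minimal sketch of combining per-criterion ratings with the stated weights
# into a single overall score. The 0-5 rating scale and example values are
# hypothetical and only illustrate the weighted-average idea.
WEIGHTS = {
    "pricing": 0.25,
    "core_features": 0.25,
    "customizations": 0.15,
    "integrations": 0.15,
    "ease_of_use": 0.10,
    "customer_support": 0.10,
}

def weighted_score(ratings):
    """Combine per-criterion ratings (0-5) into an overall 0-5 score."""
    return sum(WEIGHTS[criterion] * rating for criterion, rating in ratings.items())

example_ratings = {  # hypothetical ratings for one provider
    "pricing": 3.8,
    "core_features": 4.5,
    "customizations": 4.0,
    "integrations": 3.5,
    "ease_of_use": 4.2,
    "customer_support": 3.9,
}
print(round(weighted_score(example_ratings), 2))  # -> 4.01
```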

Our CRM evaluation research methods

To effectively score each CRM provider we review, we prioritize our own in-house, hands-on testing experience. We also reference any available demos, product reviews, information from sales reps and verified customer reviews on sites that include but are not limited to the following:

  • Gartner Peer Insights.
  • Trustpilot.
  • Google Play.
  • The App Store.
  • Community forums.

How do I choose the best CRM for my organization?

Understand your budget.

CRM pricing is typically based on the number of users you plan on granting access to the platform and what features or add-ons you require. Subscriptions can be billed monthly, annually and biannually. Fortunately, it is common for CRM providers to offer demos, free trials and even free-for-life versions of their products. This way, you can experience the interface and features before signing a contract and dealing with the hassle of importing data.

Know how you’re going to use the CRM

Adopting CRM software within the marketing, sales and support operations of your business can provide significant advantages and streamline workflows. These benefits can only be reaped when you have defined business goals around how you plan to implement the software. Understanding what type of CRM software would work best is a good starting point. From there, determine whether marketing or support tools are necessary, or whether the product will be used only by sales reps and administrators for client management purposes.

Prioritize in-market expertise

Not only should you consider your business size and scale, but you should also take into account any niche business operations within your industry. While some CRM providers are built specifically for certain industries, generalized providers like HubSpot, Salesforce and Pipedrive also offer features crafted with particular industries in mind, such as inventory management for construction companies or donation tracking for nonprofits.

Now that you know exactly how we review CRM software, check out our individual CRM software reviews or our top product guides.



Deep learning for lungs cancer detection: a review

  • Open access
  • Published: 08 July 2024
  • Volume 57, article number 197 (2024)


Rabia Javed, Tahir Abbas, Ali Haider Khan, Ali Daud, Amal Bukhari & Riad Alharbey

Abstract    

Although lung cancer has been recognized to be the deadliest type of cancer, a good prognosis and efficient treatment depend on early detection. Medical practitioners’ burden is reduced by deep learning techniques, especially Deep Convolutional Neural Networks (DCNN), which are essential in automating the diagnosis and classification of diseases. In this study, we use a variety of medical imaging modalities, including X-rays, WSI, CT scans, and MRI, to thoroughly investigate the use of deep learning techniques in the field of lung cancer diagnosis and classification. This study conducts a comprehensive Systematic Literature Review (SLR) of deep learning techniques for lung cancer research, providing a comprehensive overview of the methodology, cutting-edge developments, quality assessments, and customized deep learning approaches. It presents data from reputable journals and concentrates on the years 2015–2024. Deep learning techniques solve the difficulty of manually identifying and selecting abstract features from lung cancer images. This study includes a wide range of deep learning methods for classifying lung cancer but focuses especially on the most popular method, the Convolutional Neural Network (CNN). CNN can achieve maximum accuracy because of its multi-layer structure, automatic learning of weights, and capacity to share local weights. Various algorithms are evaluated with performance measures such as precision, accuracy, specificity, sensitivity, and AUC; CNN consistently shows the greatest accuracy. The findings highlight the important contributions of DCNN in improving lung cancer detection and classification, making them an invaluable resource for researchers looking to gain a greater knowledge of deep learning’s function in medical applications.


1 Introduction

Cancer refers to the unwanted, uncontrolled growth of abnormal tissue that spreads rapidly in the body; if it is not treated well at the start, it spreads to and affects other body organs as well. In the health sector, the use of modern technology has contributed a great deal, especially to the detection of lung cancer, helping doctors to identify and properly treat the disease. Lung cancer causes many deaths worldwide and is one of the deadliest diseases known. In 2020, approximately 2.21 million cases were detected, and 1.8 million deaths were caused by lung cancer (Sharma 2022 ). The report presented by the World Health Organization (WHO) in 2020 likewise shows that lung cancer is the deadliest among all kinds of cancers, with a death toll of about 1.80 million (World Health Organization 2022 ). Figure  1 shows the distribution of cancer-related deaths in 2020 according to the WHO. Lung cancer is a disease in which early-stage diagnosis and disease management play a crucial role in proper treatment.

Figure 1. Global distribution of cancer-related deaths in 2020

As with other cancers, early detection of lung cancer is essential because it increases the chances of survival (Pathak et al. 2018 ). Many people affected by lung cancer do not survive because of delays in early detection; the overall five-year survival rate is less than 20% (Roointan et al. 2019 ). Age is not a vital prognostic factor when it comes to the survival of patients (Hurria and Kris 2003 ). Both males and females fall prey to the disease, although men are more prone to lung cancer than women, and research shows that the death rate due to lung cancer is higher in men than in women (Chen et al. 2016 ). Factors contributing to the rise in lung cancer cases include tobacco, smoke, viral infection, ionizing radiation (Cook et al. 1993 ; Esposito et al. 2010 ), air pollution, and unhealthy lifestyles (Strak et al. 2017 ). A history of chronic obstructive pulmonary disease (COPD) is a major contributing factor (Parris et al. 2019 ), and the drastic increase in vehicles and poor smoking habits also play a crucial role. Because tobacco is a primary factor driving lung cancer trends (Parascandola and Xiao 2019 ), the death toll due to lung cancer can be reduced by controlling tobacco usage (Field et al. 2013 ). Symptoms include fatigue, difficulty in breathing, and persistent cough (Corner 2005 ; Mayo Clinic 2022 ). Beyond these common symptoms, presentation varies from person to person, making diagnosis tricky; the disease can be asymptomatic, and a person may have cancer without any sign (Quadrelli et al. 2015 ). The lack of symptoms at early stages leads to late diagnosis of lung cancer (Goebel et al. 2019 ). Maintaining one's health is one of the most important cornerstones of human civilization, hence modern approaches to medical issues are required. The amount of information available in the form of lab tests, research papers, clinic reports, and other documents has increased due to advancements in the biomedical area (Riad Alharbey et al. 2022 ).

A great deal of research has been performed in different fields to improve the accuracy of lung cancer prediction and classification. In recent studies, initial screening for the disease has been performed by exhaled breath analysis, which is non-invasive and inexpensive (Nardi-Agmon and Peled 2017 ). Different methods are used for the prediction of lung cancer; in the detection process, X-rays, CT, MRI and PET scans are the most commonly used.

Classifying lung cancer in its early stages is crucial (Basak and Nath 2017 ), and the patient's chance of survival is inversely proportional to the progression of the disease. Tumor size helps determine the cancer stage, which is measured by how far the cancer has spread in the body: the greater the spread, the higher the stage. The disease is often not clearly visible in its early stages, so early detection is difficult (Das et al. 2020 ), yet it is far easier to deal with early on; as the disease progresses, it becomes more complex to manage. Figure  2 depicts the stages of cancer.

Figure 2. Progression stages of lung cancer, illustrating the different developmental phases

Analysis of visual images is an efficient way to investigate lung tissue, identify the stages of lung cancer, and classify these stages. Categorizing the disease by stage is difficult; however, with advanced deep learning methods, lung cancer can be classified accurately. Deep learning algorithms are implemented to identify different types of lung cancer and categorize them. Effective diagnosis and treatment of lung cancer is made possible by the initial step of disease detection within the lung tissue; subsequently, various classifiers are used to accurately assign the identified cases to their respective stages. Figure  3 depicts how the classification and prediction of lung cancer is performed using deep learning.

Figure 3. Deep learning for the classification and detection of lung cancer

Different therapies, such as chemotherapy and radiotherapy, are used for treatment, but advanced lung cancer is quite complex to treat. CT is widely used for the detection of lung tumors, but it has been observed that small nodules are often not predictive of lung cancer (Horeweg et al. 2014 ) and are comparatively tricky to detect and treat. CT also has limited ability when it comes to classifying benign and malignant lesions (Lardinois et al. 2003 ), and small lesions with minimal contact with the chest wall are technically complicated to deal with (Middleton et al. 2006 ). The chest wall structure, small blood vessels, airway walls, pulmonary structures (Lu et al. 2015 ), and tissues that closely resemble nodules make detection difficult, which in turn makes it difficult to perform the biopsy that leads to a diagnosis. A biopsy is performed for the evaluation of nodules (Lowe et al. 1998 ), and the disease’s complexity and poor CT resolution sometimes lead to re-biopsy. Nodules are classified according to their type, size, and growth rate; this is essential because it sets the direction of treatment. Nodules can be classified as cavitary, calcified and non-calcified (El-Baz et al. 2013 ), or as solid, non-solid, partially solid, and calcified (Massimo 2012 ). A partially solid tumor is a combination of non-solid and solid tissue, while solid lung tumors consist of a solid internal core. Lung nodules can also be classified as benign or malignant (Dhara et al. 2016 ; Silvestri MD et al. n.d. ; Wu et al. 2020 ; Zhang et al. 2019 ).

Deep learning is contributing tremendously to healthcare (Esteva et al. 2019 ; Miotto et al. 2018 ; Mittal and Hasija 2019 ). It paves the way for fast and accurate detection and diagnosis of diseases (Mishra et al. 2020 ), leading to precise and exact treatment. Research shows that deep learning algorithms have demonstrated their significance in the prediction, detection, diagnosis, and classification of lung cancer, with pulmonary nodules being closely observed. By incorporating different deep-learning approaches, tumor and nodule features are captured and classified (Wu and Qian 2019 ). Big Data refers to extremely large data collections that can be analysed to identify trends and patterns, and deep learning is one method for data analysis that can be used to discover abstract patterns in large amounts of data (Gheisari et al. 2017 ). Advanced data representations and knowledge can be extracted with the help of deep learning (DL), and highly efficient DL methods aid in uncovering hidden information (Gheisari et al. 2023 ). Computer-aided detection and diagnosis (CAD) is used for the screening of cancer in its early stages (Traverso et al. 2017 ) and has proven to be a helping hand for doctors and radiologists (Yuan et al. 2006 ). State-of-the-art methods have been designed to develop automated processes. The Coherent Anti-Stokes Raman Scattering (CARS) technique is used for sensitive investigations (Müller and Zumbusch 2007 ); it can scan the lungs, capture molecular motion, and produce images that help to detect disease accurately. Researchers have defined different classification models to detect and classify lung cancer automatically (Nasrullah et al. 2019 ).

Other techniques based on deep learning algorithms have been developed to read and learn data representations from unorganized (raw) data. Inner body details are examined, and valuable information is extracted from this data. Deep learning models, algorithms, and methods play a tremendous role in increasing accuracy and decreasing error in the classification of lung cancer, and deep learning-based automatic segmentation is better than manual segmentation in many respects (Liu et al. 2021). Deep learning helps avoid misclassification, reduces error rates, provides high-quality images, and accurately predicts cancer. False-positive nodules are filtered out using different classifiers (Jiang et al. 2022). Accurate, high-quality images directly support the radiologist's fast and accurate diagnostic decisions. Deep learning methods are also used to predict lung cancer (Banerjee and Das 2021): training images are provided, and features are extracted automatically. Deep learning also costs less than conventional CAD frameworks and offers high-definition representations of the input data, making the detection and identification process efficient and helping the radiologist. The image's pixels contribute directly to cancer detection, because cancerous and non-cancerous areas are determined on the basis of pixels. Thus, for accurate diagnosis and classification of disease, deep learning assists medical professionals in serving the healthcare system better and in making accurate decisions about the disease.

A CNN design consists of multiple tiers, one of which is the convolutional layer (CL). By employing different kinds of convolution filters, CLs extract distinct information from the images of cancer cells supplied to them (Manjula Devi et al. 2023). Under the methodology described here, the first step is image processing, where preprocessing methods are used to improve the quality of medical images. The improved images then go through segmentation, an essential stage in identifying relevant regions within lung images. The identified regions are then subjected to feature extraction, in which significant features are extracted to identify patterns suggestive of lung cancer. The classification step is centred on a Deep Convolutional Neural Network (DCNN). To learn hierarchical features, the DCNN is composed of several convolutional layers, each with filters, activation functions, and pooling operations. Dense, fully connected layers are another component of the architecture; they handle the high-level characteristics learned by the convolutional layers. The final output presents the classification result, which distinguishes between the various lung cancer classes. The DCNN is an effective technique for accurately classifying lung cancer because its convolutional layers automatically learn complex patterns.
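For readers who prefer a concrete picture of this pipeline, the sketch below arranges the stages described above (preprocessing, a crude segmentation step, and a stacked convolution/pooling/fully connected classifier) in Python using TensorFlow/Keras. The framework choice, layer sizes, input resolution and the two-class output are illustrative assumptions for the sketch, not a reproduction of any specific model reviewed in this survey.

```python
# A minimal sketch of the preprocessing -> segmentation -> DCNN classification
# pipeline described above. All sizes and the binary (benign/malignant) output
# are illustrative assumptions, not the configuration of any reviewed study.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

def preprocess(ct_slice: np.ndarray) -> np.ndarray:
    """Resize a 2D CT slice to a fixed shape and normalise intensities to [0, 1]."""
    img = tf.image.resize(ct_slice[..., np.newaxis], (128, 128)).numpy()
    return (img - img.min()) / (img.max() - img.min() + 1e-8)

def segment_lung_region(img: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Crude intensity threshold standing in for a real segmentation step."""
    mask = (img > threshold).astype(np.float32)
    return img * mask

def build_dcnn(num_classes: int = 2) -> tf.keras.Model:
    """Stacked convolution + pooling blocks followed by dense (FC) layers."""
    model = models.Sequential([
        layers.Input(shape=(128, 128, 1)),
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Usage (with a hypothetical raw slice): x = segment_lung_region(preprocess(raw_slice))
# model = build_dcnn(); model.fit(...) would then learn the class boundaries.
```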

This study makes the following contributions to the field of medical science, especially to the detection of lung cancer in its early stages:

Provides a solution for the detection of lung cancer in the field of healthcare.

Discusses different existing techniques and procedures.

Implements a deep learning algorithm and compares its performance with existing machine learning algorithms.

Shows how the designed technique is implemented on a large dataset and how the features are classified.

Shows the usability of Convolutional Neural Networks (CNNs) in the field of artificial intelligence.

Provides useful implementation and development techniques for future research on the early detection of cancer.

The remainder of this literature survey is organized as follows. Imaging techniques for lung cancer detection are presented in Section 2. Section 3 covers the latest trends in lung cancer detection. Section 4 presents the research methodology and the process of selecting research articles. Section 5 describes the quality assessment of the selected studies. Section 6 covers the literature sources and the research community's contribution to the field, including the primary techniques and models used to classify, detect, and predict lung cancer. The results obtained from the selected and extracted data are presented in Section 7. Section 8 concludes by summarizing the state-of-the-art deep learning contribution towards lung cancer classification.

2 Imaging techniques for detection of lungs cancer

Different screening approaches are employed for the identification and screening of lung cancer (Schaefer-Prokop and Prokop 2002). These help doctors to see internal bodily processes, to understand how internal organs function, and to check for lung anomalies. There are several multimodality imaging techniques, including positron emission tomography (PET), computed tomography (CT), ultrasound, chest radiography (X-ray), and magnetic resonance imaging (MRI) scans (Laal 2013; Tariq Hussain n.d.). Figure 4 shows a few of these imaging techniques.

Figure 4. Modalities in lung cancer detection: a visual exploration of imaging techniques

3 Latest trends in lungs cancer detection

The primary purpose of this research is to demonstrate the methods and strategies employed in deep learning-based categorization of lung cancer. Deep learning is a recent family of techniques that helps medical professionals diagnose diseases and helps radiologists find difficult-to-diagnose conditions such as lung cancer. The chosen articles demonstrate the most recent deep learning algorithms and their efficacy in the prediction and categorization of cancer. This paper presents the different techniques and concepts defined within deep learning for this purpose (Fig. 5).

Figure 5. Sequential process of study execution

A Convolutional Neural Network (CNN) consists of multiple layers: a convolutional layer that extracts image features, a pooling layer that selects the most relevant features, and a fully connected (FC) layer that combines the extracted features. A Recurrent Neural Network (RNN) is suitable for sequential data and is mostly used for audio, video, and text. A Deep Belief Network (DBN) is a probabilistic generative model consisting of multiple RBMs, with many variants. A Support Vector Machine (SVM) is an algorithm based on statistical learning theory. An Artificial Neural Network (ANN) is structured like the human brain, with interconnected neurons, which is why it is known as a biologically inspired network. A Deep Neural Network (DNN) is a newer, advanced technique in the field of artificial intelligence that can also model complex nonlinear relationships. DNA-binding proteins have a close relationship with several human disorders, including AIDS, cancer, and asthma (Ali et al. 2022b), and DBP-DeepCNN would be beneficial in developing more promising therapeutic approaches for the management of chronic diseases (Ali et al. 2022a), while patients with chronic depressive illness experience disruption in their social lives (Gheisari 2016). Integrating cloud computing technology with wireless body area network (WBAN) systems to create a sensor-cloud infrastructure (S-CI) is helping the healthcare industry by enabling early detection of diseases and real-time patient monitoring (Masood et al. 2023, 2018a), while patient privacy should be preserved (Masood et al. 2018b). If a deep learning model is developed well, it may help prevent misdiagnosis and wasted time (Javed et al. 2023). Deep machine learning can be applied to the initial processing of images, the segmentation of images to emphasize the diagnostic objects under investigation, and the classification of these objects to ascertain their benign or malignant nature (Jamshaid Iqbal Janjua et al. 2022). It is challenging to predict human diseases, especially cancer, in order to deliver more effective and timely care; cancer is a potentially deadly disease that affects many of the human body's organs and systems (Abbas et al. 2023b).

4 Research methodology

For the selected research, a mapping study analysis, “Classification (lung cancer)”, is chosen as the research methodology. Figure 5 illustrates the mapping process that has been followed. It consists of three steps, as follows:

Step-I: Study Preparation

Step-II: Conduct of Study

Step-III: Analysis and Results of the Study

Figure 6. Visual representation of the systematic article selection process

In this study, a mapping study methodology is employed to conduct a systematic exploration of the literature on lung cancer classification using deep learning approaches. The mapping process, illustrated in Fig. 5, is structured into three distinct steps: Study Preparation, Conduct of Study, and Analysis and Results of the Study. The main contributions of this paper are described below.

Conducted a comprehensive Systematic Literature Review (SLR) using deep learning approaches, which included a detailed analysis of pertinent literature in the field of lung cancer.

Categorized and synthesized the overall methodologies observed in the literature, offering readers a systematic overview of the strategies adopted in the domain of deep learning for lung cancer detection and analysis.

Outlined the current state-of-the-art and the latest advancements in deep learning methodologies applied to lung cancer research, providing insights into cutting-edge techniques and emerging trends in the field.

Conducted a comprehensive Quality Assessment of the approaches used in the examined papers, guaranteeing a strong assessment framework to gauge the validity and dependability of the deep learning methods applied in lung cancer research.

Provided a comprehensive overview of deep learning methodologies specifically tailored to lung cancer research, consolidating the collective knowledge and advancements in the area for the benefit of researchers, practitioners, and stakeholders.

Study preparation, the first step, entails defining the research's scope, creating inclusion and exclusion criteria, and choosing a search technique. The implementation of the literature search, data extraction, classification, and synthesis of pertinent literature are then included in the Conduct of Study phase. Lastly, analyzing the identified literature, gauging the caliber of the included research, and extracting significant findings to guide the systematic review are all part of the Analysis and Results of the Study phase.

4.1 Research objectives

The main purpose of this research is to provide the scientific community with a systematic, step-by-step review of the current research on lung cancer using deep learning techniques such as recurrent neural networks (RNN), deep belief networks (DBN), support vector machines (SVM), convolutional neural networks (CNN), and deep neural networks (DNN).

4.2 Research questions

As part of this procedure, the questions related to this research are listed in Table 1 and are defined step by step to provide a more thorough understanding of the investigation. These research questions are accompanied by their motivations.

4.3 Search scheme

The following databases and scientific resources were searched to gather the most relevant research papers and articles: IEEE Digital Library, Springer, Elsevier, ACM Digital Library, Science Direct, and Google Scholar.

4.4 Search string

The following search string was used to conduct the automatic search in the selected databases/scientific sources.

(“Classification” OR “Detection” OR “Prediction” OR “Diagnosis” OR “Analysis”) AND (“Lungs Cancer” OR “Lung Cancer” OR “Pulmonary Nodule” OR “Lungs Tumor” OR “Lung Nodule”) AND (“Deep Learning” OR “Deep Neural Network” OR “DNN” OR “DL”)
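Purely as an illustration of how this string is composed, the small Python sketch below assembles the same three OR-groups and joins them with AND. It is a convenience for reusing the string consistently across databases, not part of the original search procedure.

```python
# Illustrative only: assembling the boolean search string above from its term groups.
task_terms = ["Classification", "Detection", "Prediction", "Diagnosis", "Analysis"]
disease_terms = ["Lungs Cancer", "Lung Cancer", "Pulmonary Nodule",
                 "Lungs Tumor", "Lung Nodule"]
method_terms = ["Deep Learning", "Deep Neural Network", "DNN", "DL"]

def or_group(terms):
    """Wrap quoted terms in parentheses joined by OR."""
    return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"

search_string = " AND ".join(or_group(g) for g in (task_terms, disease_terms, method_terms))
print(search_string)
```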

4.5 Study selection procedure

The selection procedure focuses on identifying and recognizing those articles that effectively meet the goal of the study. The articles have been searched for and gathered from different sources, so if an article is present in more than one source it is counted only once. After comprehensively investigating and examining titles, abstracts, and keywords, each paper is evaluated and its candidature in the study is determined. The search string is considered when deciding the inclusion and exclusion criteria. Duplicates are removed, and articles not matching the search string are excluded.

4.6 Inclusion & exclusion principles for the research studies

Table 2 lists the inclusion and exclusion principles for the chosen research. Articles from journals focused on the classification of lung cancer in which deep learning algorithms are incorporated, published between 2015 and 2024, are collected, while articles that focus on other types of cancer or do not incorporate deep learning are excluded.

Research articles were collected from different geographical locations through a combination of online databases. Using precise terms and search parameters linked to our research topic, we conducted in-depth searches of academic databases including IEEE, Nature, and Google Scholar. Table 3 depicts the geographical locations of the articles selected for the study.

The research process is conducted according to the flow diagram in Fig. 6, which depicts the steps of gathering the research material, from identifying articles to selecting articles for further analysis.

It starts by gathering articles from well-reputed databases. Then the overall number is calculated. After that, the duplicate articles are removed, and initial screening is performed.

Articles that are not in the English language are excluded, and further assessment is then performed. In this step, different exclusion criteria are applied: articles that are not from journals are excluded, as are conference papers, papers published before 2015, papers focused on other types of cancer, and articles not focused on deep learning. After excluding these papers with justification, 66 articles were chosen for further investigation.
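A minimal sketch of this screening logic is given below, assuming exported records are held as Python dictionaries with fields such as doi, title, abstract, year, language and venue_type; these field names are assumptions made for the sketch, not a description of any particular database export.

```python
# A minimal sketch of the screening step described above. Record fields are
# illustrative assumptions about how exported database records might look.
def screen(records):
    seen, included = set(), []
    for rec in records:
        key = rec.get("doi") or rec["title"].lower()
        if key in seen:                          # drop duplicates found in several databases
            continue
        seen.add(key)
        if rec.get("language") != "English":     # exclude non-English articles
            continue
        if rec.get("venue_type") != "journal":   # exclude conference papers etc.
            continue
        if rec.get("year", 0) < 2015:            # exclude papers published before 2015
            continue
        text = (rec["title"] + " " + rec.get("abstract", "")).lower()
        if "deep learning" not in text or "lung" not in text:
            continue                             # off-topic: other cancers / no deep learning
        included.append(rec)
    return included
```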

Figure 7 displays a graphical depiction of the scientific databases in which the search term was used and from which articles were chosen.

Figure 7. Distribution of articles selected from scientific databases

5 Quality assessment of study

Quality assessment is important in systematic literature reviews, as it determines the quality of the studies that are included. Each selected article was assessed against the following four questions:

The solution to the problem is presented clearly in the paper. The answer can be yes (1), somehow (0.5), or no (0).

The contribution of the paper regarding the issue of classifying lung cancer using deep learning is presented clearly. The answer can be yes (1), somehow (0.5), or no (0).

Limitations and future work are presented and defined clearly. The answer can be yes (1), somehow (0.5), or no (0).

Result parameters are presented clearly. The answer can be yes (1), somehow (0.5), or no (0).

Table 4 presents the detailed quality assessment scores, in which the selected articles are listed along with their reference numbers. The articles are evaluated based on the clarity of the solution to the problem, the contribution, the limitations and future work, and the results. Each question carries a maximum score of one, so each article is evaluated and graded out of a total of 4.

Table 5 presents a summary of the total scores. There is one paper with a score of 2, 7 articles with a score of 2.5, 13 articles with a score of 3, 25 articles with a score of 3.5, and 20 articles with a score of 4.
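The scoring scheme can be summarized in a few lines of Python. The example answers below are hypothetical, while the score distribution mirrors the Table 5 summary given above.

```python
# Scoring scheme from Section 5: each of the four questions is answered
# yes (1), somehow (0.5) or no (0), giving a total out of 4 per article.
SCORES = {"yes": 1.0, "somehow": 0.5, "no": 0.0}

def quality_score(answers):
    """answers: four strings, one per quality question."""
    assert len(answers) == 4
    return sum(SCORES[a] for a in answers)

# Hypothetical example: an article judged yes/yes/somehow/yes scores 3.5.
print(quality_score(["yes", "yes", "somehow", "yes"]))   # 3.5

# Distribution reported in Table 5 (score -> number of articles).
table5 = {2.0: 1, 2.5: 7, 3.0: 13, 3.5: 25, 4.0: 20}
assert sum(table5.values()) == 66                        # matches the 66 included articles
```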

6 Literature sources

To explore the detection, classification, and prediction of lung cancer using deep learning, 66 relevant articles published by reliable sources were examined.

To ensure the credibility of the systematic literature review, credible journals were selected as sources for data collection. Moreover, a large number of adequate literature surveys already exist for this kind of review. Figure 8 portrays the year-wise detail of the collected articles.

Figure 8. Annual article selection overview (2015–2024)

The selected articles span 2015 to 2024, and the chart shows that 2020 is the largest contributing year, with the maximum number of articles.

The nature of the review was to present the work done on the classification of lung cancer using different deep learning methodologies. According to the collected data, the most frequently used method is the Convolutional Neural Network (CNN). CNN is a technique used to solve deep learning problems and consists of multiple layers; it contributes substantially to image processing and computer vision (Liu et al. 2019a). It should be noted that the review was restricted to articles published between 2015 and 2024 and to articles that use “deep learning” in their titles. It is also observed that multiple datasets have been used in the research.

This shows that there is considerable diversity in the data used for training and testing CNNs: MRI, thoracic surgery data, X-ray, CT, PET image data, CARS images, breathing data, thoracic MR images, Whole Slide Imaging (WSI), lymph node slides, H&E slides, and histopathological images. Table 6 presents a summary of the selected research articles, including the datasets used in the research and their descriptions. The parameters that decide the form of classification, the applied method or approach, and the feature extraction techniques used are provided after thoroughly examining the selected research articles.

Cancer image data is collected and presented in various forms; it is observed that CT scans are the most frequently used type of data.

The management of input image sizes is a critical consideration in the deep learning field for lung CT-scan processing. Some models require a specific input size, which calls for preprocessing operations such as scaling or cropping to fit the data into this preset dimension. Such modifications, however, run the risk of information loss or image distortion, which reduces the model's effectiveness. More advanced and sophisticated methods have been developed to solve these problems. Models can adjust to different image sizes using techniques like padding or spatial pyramid pooling, but these may also introduce noise or artifacts. Image pyramids produce numerous copies of an image at varying scales, facilitating feature capture at different levels of detail. Fully Convolutional Networks (FCNs) are also specially designed to support arbitrary input sizes. The choice between these methods depends on the particular deep learning architecture and the actual requirements of the medical image analysis task, taking into account factors such as computational complexity and the need to adapt to various input dimensions.
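As a brief illustration of the first two strategies mentioned above, the sketch below shows a resize (which may distort the slice) next to zero-padding to a square (which preserves geometry at the cost of added background). The target size and the use of scikit-image are assumptions made for the sketch.

```python
# Two input-size strategies mentioned above, sketched with NumPy and scikit-image:
# resizing (may distort anatomy) versus zero-padding to a square (preserves geometry).
import numpy as np
from skimage.transform import resize   # assumes scikit-image is available

def resize_slice(img: np.ndarray, size: int = 224) -> np.ndarray:
    """Force the slice to a fixed square resolution."""
    return resize(img, (size, size), preserve_range=True)

def pad_to_square(img: np.ndarray) -> np.ndarray:
    """Centre the slice on a zero-filled square canvas without rescaling."""
    h, w = img.shape
    side = max(h, w)
    out = np.zeros((side, side), dtype=img.dtype)
    top, left = (side - h) // 2, (side - w) // 2
    out[top:top + h, left:left + w] = img
    return out
```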

The process of lung cancer/nodule detection and classification proceeds as follows:

Pre-processing: this is the first step, in which inputs are taken in the form of MRI, thoracic surgery data, X-ray, CT, PET image data, breathing data, thoracic MR images, Whole Slide Imaging (WSI), lymph node slides, histopathological cancer images, CARS images, and H&E slides. Different image processing and feature extraction techniques are applied to the input. Table 7 lists the various data types used in the reviewed studies.

Computed Tomography is used most of the time, as it is the most frequently encountered modality in the reviewed studies. Techniques used for preprocessing include FPSOCNN, Wavelet-Transform-Based Features, SIFT, LBP, ABF, Zernike Moment, HE, InceptionV3, LDA, ODNN, Novel Nodule Candidate Detection Method, Single-Level Discrete Two-Dimensional Wavelet Transform (2D-DWT), Fractional-Order Darwinian Particle Swarm Optimization, Two-Dimensional Discrete Fourier Transform (2D-DFT), U-Net, Back-Propagation, Deep Learning Architectures, Gradient Descent, SIFT + LBP, Swarm Algorithms, Bipartite Undirected Graphical Models (RBMs) including AlexNet, Deep ConvNet, Intensity Features + SVM (Support Vector Machines), Transfer Learning Networks, Hybrid Geometric Texture Feature Descriptor, FODPSO, CARS, VGG16, Wiener Filter, VGG19, Three-Dimensional (3D) CNN Model, CNN Region-Of-Interest (ROI), Median Filter, Gaussian Filter, Gabor Filter, Knowledge-Based Collaborative (KBC) Sub-Model, ResNet-50 Networks, ProNet, RadNet, UB Open-Source Software ITK-Snap, 3D Stereoscopic Planning System, IMR, Maximum Intensity Projection Technique Based on Lung-RADS Version 1.1, Three Reconstruction Kernels, Multi-Scale Dilated Residual Representation Block Size-Related (SR-DiRes) and Multi-Mask Convolution Representation Block (ConvRB), Multi-Scale LBP, Affinity Propagation (AP) Algorithms, Deep Convolutional Neural Network (DCNN) Architecture, Radiomics-Based Analysis, Hot-Spot ROI-Based DCE Kinetic Analysis, Deep Dueling Q-Network, Hierarchical Deep Q-Networks, Radiomics Deep Q-Network, Weighted Mean Histogram Equalization, Dense PriNet, Multi- and Single-Model Methods, Pixel-Wise Segmentation Based on FCN, Multi-Model and Weakly Supervised Learning Methods, HALO Tissue Classifier Analysis Module (Random Forest Algorithm), HALO AI (CNN, VGG Network), Context-Aware Feature Selection and Aggregation, Graph-Regularized Sparse MTL, K-Means Algorithm, Extensive Data Augmentation, LUNA16 Pulmonary Nodule Annotations, Multi-Group Patch-Based Learning System, Diffusion (VED) and a Vessel Filter, Integrated Deep Learning, Region Growth Algorithm, Fuzzy Logic and a NN, HE Hybrid Learning Algorithm, ACC, VEL, Correlation Analysis Algorithm, PCA, Gene-Set Enrichment Analysis (GSEA), ReLU Activation Function, CVAE-GAN, DenseNet Using CoxPH, CNN Model, ImageNet, Score-CAM Maximum Intensity Projection (MIP), Multi-Scale Convolution Image Features, Novel 15-Layer 2D Deep CNN Architecture with Hyperparameter Tuning, Vector Quantization (VQ) Algorithm, Gaussian Noise Model-Based Collaborative Wiener Filtering (GNM-CWF), Self-Adaptive Online VQ Algorithm, Residual Learning Denoising Model (DR-Net), a Feature Fusion Strategy Called DCA, Curriculum Learning and Transfer Learning, Rival Convolutional Neural Network Models (setio-CNN and OverFeat), Taguchi-Based CNN, a Combination of Deep Residual Learning and the Trial-and-Error Method, ACL Algorithm, Converged Search and Rescue (CSAR) Algorithm, Novel Wilcoxon Signed-Rank Gain Preprocessing, Hybrid Spiral Optimization Intelligent-Generalized Rough Set Approach, AI-Based Noninvasive Radiomics Biomarkers, SVM with Principal Component Analysis (PCA), and a Multilevel Brightness-Preserving Approach. Numerous segmentation methods and approaches are then applied.
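To make this list less abstract, the following sketch applies two of the denoising filters named above (a median filter followed by a Gaussian filter) to a 2D CT slice using SciPy; the kernel size and sigma are arbitrary choices for illustration.

```python
# Illustration of two denoising filters named in the list above, applied to a
# 2D CT slice held in a NumPy array. Kernel size and sigma are arbitrary here.
import numpy as np
from scipy import ndimage

def denoise(ct_slice: np.ndarray) -> np.ndarray:
    median_filtered = ndimage.median_filter(ct_slice, size=3)      # remove salt-and-pepper noise
    smoothed = ndimage.gaussian_filter(median_filtered, sigma=1.0)  # suppress remaining high-frequency noise
    return smoothed
```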

Segmentation contributes to the feature extraction process; it prepares the extracted data for classification.
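A deliberately simple stand-in for this stage is sketched below: intensity thresholding followed by morphological cleaning to obtain a rough lung mask. Real pipelines in the reviewed papers use far more elaborate methods (for example U-Net); the threshold value and the use of SciPy here are assumptions made for the sketch.

```python
# A simple stand-in for the segmentation stage: threshold the slice and keep
# the largest connected components as a rough lung mask. A real pipeline would
# also remove background air connected to the image border and refine the mask.
import numpy as np
from scipy import ndimage

def rough_lung_mask(ct_slice: np.ndarray, threshold: float = -320.0) -> np.ndarray:
    """ct_slice is assumed to be in Hounsfield units; air/lung has low density."""
    binary = ct_slice < threshold
    binary = ndimage.binary_opening(binary, iterations=2)   # remove small speckles
    labels, n = ndimage.label(binary)
    if n == 0:
        return np.zeros_like(ct_slice, dtype=bool)
    sizes = ndimage.sum(binary, labels, range(1, n + 1))
    keep = np.argsort(sizes)[-2:] + 1                        # two largest components
    return np.isin(labels, keep)
```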

Classification: classification is the phase in which the prepared, feature-extracted and segmented data is assigned to different categories, such as normal or abnormal, benign or malignant, LUAD or LUSC. After the detection of cancer nodules or pulmonary nodules, which confirms whether lung cancer is present or not, further classification is performed to assign these nodules to the categories solid, calcified, partially solid, perifissural, and spiculated.

Several classifiers are used in the reviewed studies, including: Fuzzy Particle Swarm Optimization (FPSO), Confusion Matrix, Forest Classifiers, Back-Propagation, SVM or Naïve Bayes Classifiers, Cross-Entropy Loss and Transfer Learning, RMSProp Optimization Method, Modified Gravitational Search Algorithm (MGSA), Multi-Channel CNN, Deep Learning and Swarm Intelligence, DBN and CNN Deep Learning, Convolutional Network Architecture, FODPSO, CARS and Deep Learning, GoogleNet Inception v3 CNN Architecture, Deep Learning Approach Based on Stacked Autoencoder and Softmax, CADe System with VGG19 Architecture and SVM Classifier, Multi-View Knowledge-Based Collaborative (MV-KBC) Deep Model, Long Short-Term Memory (LSTM), Recurrent Neural Network (RNN), CNN Cancer Risk Prediction Model, Triplet, DCNN AlexNet, Watershed Segmentation with Binarization, Auto Encoder (AE) and Generative Adversarial Networks (GAN), Deep Belief Network (DBN), Random Forest Classifier, Deep Quality Model, GG-Net Architecture, Convolutional Neural Networks with a U-Net Architecture, 3D Deep Learning and Radiomics, Core-Ring Blocks Residual Estimation with Size-Related Damper Block Deep Prediction Model, Conditional Random Framework, SVM Anti-PD-1 Response Prediction by H&E, DCNN CAD System, DCE Kinetic Parameters, Convolutional Long Short-Term Memory (CLSTM) Network, DCE-MRI, Value-Based Reinforcement Learning Approach, DSRL, Deep Successor Q-Network, Profuse Clustering Technique (IPCT), Deep Learning Instantaneously Trained Neural Network (DITNN), Deep Q-Network and Hierarchical Deep Q-Network, Explosion-Trained Deep Learning Neural Network (DITNN), CADe/CADx Models, Deep Learning Classifier (Lymphoid Follicle CNN, LFCNN), 3D Neural Network, Fully Convolutional Network (FCN), ScanNet, Proportion-SVM, Adaptive Hierarchical Heuristic Mathematical Model (AHHMM), K-Means Algorithm, 3D Fully CNN Based on the V-Net Architecture, Four-Channel CNN, Corrective Lung Contour, Wilcoxon Signed Generative DL (WS-GDL) with Hyper-Parameter Tuning Algorithm, Multi-Scene Deep Learning Framework (MSDLF), Normalized Spherical Sampling, LSTM Algorithms, NFNet, Fast R-CNN, Intra- and Inter-Fraction Fuzzy Deep Learning (IIFDL), Deep Learning-Based Radiogenomic Framework-Net, EfficientNet, CoxPH and CoxCC, LdcNet, Converged Search and Rescue Algorithm, Deep Gaussian Mixture Model in Region-Based CNN (DGMM-RBCNN), Lung-Deep System, Novel Nodule CADe CNN, INC Classification, DFD-Net, Two-Path CNN with Feature Fusion (DFD-Nets), Curriculum Learning, 3D DCNNs, Taguchi-Based CNN, Fuzzy C-Ordered Means (FCOM) with Enhanced Capsule Networks (ECN), Lung Cancer Prediction (LCP-CNN), Generative Deep Learning, Survival Neural Network Model, machine learning classifiers such as k-Nearest Neighbors (KNN) and SVM combined with CNN, Ensemble Classifier, and Improved DNN.
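Several of the classifiers listed above (for example SVM, random forest and Naïve Bayes) operate on feature vectors rather than raw images. The sketch below fits one such classifier to synthetic nodule features purely to illustrate that family of approaches; the feature dimensionality and labels are placeholders, and no reviewed study is reproduced.

```python
# Illustration of a feature-based classifier from the list above (random forest)
# fit to hypothetical nodule feature vectors. Data and labels are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))        # e.g. shape/texture features per nodule (placeholder)
y = rng.integers(0, 2, size=200)      # 0 = benign, 1 = malignant (synthetic labels)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```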

7 Literature synthesis: unveiling patterns and insights

Among the machine learning techniques reviewed, Deep Convolutional Neural Networks (DCNNs) stand out as the best technique for detecting lung cancer. The highest levels of accurate lung cancer case classification were consistently attained using DCNNs; the ability to make accurate diagnoses, a critical component in healthcare, is demonstrated by this improved accuracy. Additionally, DCNNs demonstrated exceptional specificity, reducing the incidence of false positives. In healthcare contexts, this is crucial since it lessens the possibility of misdiagnosing non-cancerous patients as having cancer. The remarkable sensitivity of DCNNs enabled the identification of a sizable share of actual lung cancer cases; this high sensitivity increases the potential for early identification and intervention, which are essential for enhancing patient outcomes. Finally, DCNNs demonstrated impressive precision in detecting lung cancer, reducing the possibility of incorrect diagnosis. These findings demonstrate how well DCNNs perform in automatically extracting complex patterns from medical images, which helps in the accurate and reliable identification of lung cancer. DCNNs stand out as the leading machine learning technology for improving the accuracy and reliability of lung cancer detection in clinical practice, owing to their exceptional performance in accuracy, specificity, sensitivity, and precision. To promote medical diagnostics in this crucial area, we encourage continued investigation and development of DCNN-based techniques.

In this study, 66 research articles were carefully examined and their methods were analyzed. Table 8 collates and summarizes the results of this evaluation procedure, including reference numbers, research methodology, and performance measures such as the F1 score, accuracy, precision, sensitivity, and AUC. Table 8 is a useful tool for comparing and assessing the research findings, because these metrics are significant indicators of the efficacy and dependability of the methodologies covered in the examined papers.

The major limitation of current studies is mostly the size of the datasets. There is room for improvement because there is no finished product or global standard for cancer detection and prediction. To ascertain the accuracy of these models, researchers need to gather up-to-date data, employ various deep learning and machine learning techniques, and combine new and old data. Early cancer detection can benefit millions of people. While deep learning offers promising opportunities for lung cancer detection, there are important gaps that need to be filled. The generalizability of current findings is frequently problematic, and accurate feature extraction is also crucial: models trained on particular datasets may not perform well for certain real-world populations. Furthermore, characterization may be eclipsed by an emphasis on detection; some models are quite good at detecting nodules but give little information about the tumor, which makes it more difficult to diagnose patients early and accurately. In addition, the "black box" nature of sophisticated deep learning models and concerns about data security and privacy persist, making it challenging to understand how these models make decisions. Much of the research runs into issues with feature extraction, making it difficult to identify the elements that are essential for accurate prediction. These restrictions hinder the thorough study needed to produce accurate and trustworthy detection results. In order to advance the efficacy and dependability of deep learning models in lung cancer detection, and ultimately enhance patient outcomes and diagnostic accuracy, it is imperative that these issues be resolved. Despite these shortcomings, scientists are working hard to close the existing gaps, which is critical to the future of deep learning for lung cancer detection. Researchers are focusing on tumor characterization rather than merely detection and on enhancing generalizability through transfer learning and other advanced approaches. Strong data security protocols and resolving any biases in training data are also essential. Lastly, to keep up with the changing features of cancer, ongoing learning will be crucial. By overcoming these obstacles, deep learning has the potential to transform lung cancer detection and lead to earlier diagnoses and better patient outcomes.
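The performance measures collected in Table 8 can be computed from a model's predictions as in the short sketch below; the label and score arrays are placeholders standing in for real evaluation outputs.

```python
# Computing the performance measures tabulated in Table 8 from predictions.
# y_true, y_pred and y_score are placeholders for real evaluation outputs.
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score)

y_true  = [0, 0, 1, 1, 1, 0, 1, 0]
y_pred  = [0, 0, 1, 0, 1, 0, 1, 1]
y_score = [0.1, 0.2, 0.9, 0.4, 0.8, 0.3, 0.7, 0.6]   # predicted probability of class 1

print("accuracy   :", accuracy_score(y_true, y_pred))
print("precision  :", precision_score(y_true, y_pred))
print("sensitivity:", recall_score(y_true, y_pred))   # recall is the same as sensitivity
print("F1 score   :", f1_score(y_true, y_pred))
print("AUC        :", roc_auc_score(y_true, y_score))
```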

Different algorithms used for classification and detection perform differently; in terms of performance, CNN surpasses the others.

8 Conclusion

Lung cancer is a serious and often fatal disease that necessitates early discovery to treat it effectively. This work emphasizes how deep learning, specifically the application of Convolutional Neural Networks (CNNs), is changing the face of medical diagnosis. Deep learning techniques reduce the workload of healthcare professionals by automating the identification and categorization of lung cancer, and this technological advancement improves the precision and efficacy of diagnosis. In the context of lung cancer research, this paper offers an extensive Systematic Literature Review (SLR) of deep learning approaches. It benefits researchers, practitioners, and stakeholders by classifying and synthesizing methodologies, outlining the state-of-the-art, conducting a thorough quality assessment, and offering a tailored overview of deep learning approaches for lung cancer research. Our analysis examines the deep learning methods used in the identification and categorization of lung cancer across a range of imaging modalities, including MRI, CT scans, and X-rays. Notably, we ensure a current overview by concentrating on the years 2015 to 2024 and obtaining information from credible journals. The study highlights the critical role that Deep Convolutional Neural Networks (DCNNs) play among deep learning techniques. In particular, the DCNN is the recommended variant of the Convolutional Neural Network (CNN) architecture because its multi-layered structure makes direct feature learning from lung nodule images possible. The main reason DCNNs are so effective at obtaining the best accuracy is their innate capacity to learn weights automatically. By automatically learning pertinent features, these models can significantly boost the efficiency and accuracy of lung cancer classification, which improves diagnostic procedures in clinical settings. The research thoroughly assesses several performance measures, including AUC, sensitivity, specificity, accuracy, and precision; when it comes to the classification of lung cancer, DCNNs frequently perform more accurately than other algorithms. Even with these noteworthy successes, difficulties remain, especially when managing large-volume datasets. This study presents the classification process on multiple datasets with multiple classifiers. The focus of the study is to present different deep learning algorithms and approaches for classifying lung cancer. Nevertheless, some challenges still exist, most of them related to the size of datasets. This study will help researchers better understand the existing deep learning techniques and procedures for classifying lung cancer, and it lays the groundwork for future advancements in the field by providing a comprehensive grasp of the state-of-the-art deep learning methods for the categorization of lung cancer. Lung cancer diagnosis could be revolutionized by deep learning; however, obstacles remain because of the scale of datasets and the absence of an international standard. Current data must be gathered, deep learning and machine learning methods must be applied, and both new and old data must be combined. Generalizability, characterization, data security, and privacy issues also remain.
Notwithstanding these obstacles, researchers are concentrating on tumor characterization, improving generalizability via transfer learning, and addressing biases in training data in an effort to close these gaps. Continual learning is also crucial to stay up to date with changing cancer features.

References    

Abbas S, Issa GF, Fatima A et al (2023) Fused Weighted Federated Deep Extreme Machine Learning Based on Intelligent Lung Cancer Disease Prediction Model for Healthcare 5.0. Int J Intell Syst 2023:1–14. https://doi.org/10.1155/2023/2599161


Abbas T, Fatima A, Shahzad T et al (2023) Secure IoMT for Disease Prediction Empowered With Transfer Learning in Healthcare 5.0, the Concept and Case Study. IEEE Access 11:39418–39430. https://doi.org/10.1109/access.2023.3266156

Abbas Q (2017) Lung-Deep: a Computerized Tool for detection of lung nodule patterns using deep learning algorithms detection of lung nodules patterns. Int J Adv Comput Sci Appl 8. https://doi.org/10.14569/ijacsa.2017.081015

Affonso C, de Nedjah NL (2020) Detection and classification of pulmonary nodules using deep learning and swarm intelligence. Multimedia Tools Appl 79:15437–15465. https://doi.org/10.1007/s11042-019-7473-z

Alharbey R, Kim JI, Daud A et al (2022) Indexing important drugs from medical literature. Scientometrics 127:2661–2681. https://doi.org/10.1007/s11192-022-04340-7

Ali F, Kumar H, Patil S et al (2022) DBP-DeepCNN: Prediction of DNA-binding proteins using wavelet-based denoising and deep learning. Chemom Intell Lab Syst 229:104639–104639. https://doi.org/10.1016/j.chemolab.2022.104639

Ali F, Kumar H, Patil S et al (2022) Target-DBPPred: An intelligent model for prediction of DNA-binding proteins using discrete wavelet transform based compression and light eXtreme gradient boosting. Comput Biol Med 145:105533–105533. https://doi.org/10.1016/j.compbiomed.2022.105533

Ardila D, Kiraly AP, Bharadwaj S et al (2019) Author Correction: End-to-end lung cancer screening with three-dimensional deep learning on low-dose chest computed tomography. Nat Med 25:1319–1319. https://doi.org/10.1038/s41591-019-0536-x

Asuntha A, Srinivasan A (2020) Deep learning for lung Cancer detection and classification. Multimedia Tools Appl 79:7731–7762. https://doi.org/10.1007/s11042-019-08394-3

Banerjee N, Das S (2021) Lung cancer prediction in deep learning perspective

Basak P, Nath A (2017) Detection of different stages of lungs cancer in CT-scan images using image processing techniques. Int J Innov Res Comput Commun Eng 2320–9798

Chae KJ, Jin GY, Ko SB et al (2020) Deep Learning for the Classification of Small (≤2 cm) Pulmonary Nodules on CT Imaging: A Preliminary Study. Acad Radiol 27:e55–e63. https://doi.org/10.1016/j.acra.2019.05.018

Chaunzwa TL, Hosny A, Xu Y et al (2021) Deep learning classification of lung cancer histology using CT images. Sci Rep 11. https://doi.org/10.1038/s41598-021-84630-x

Chen W, Zheng R, Baade PD et al (2016) Cancer statistics in China, 2015. CA: A Cancer J Clin 66:115–132. https://doi.org/10.3322/caac.21338

Ciompi F, Chung K, van Riel SJ et al (2017) Towards automatic pulmonary nodule management in lung cancer screening with deep learning. Scientific Reports 7. https://doi.org/10.1038/srep46479

Cook RM, Miller YE, Bunn PA (1993) Small cell lung cancer: etiology, biology, clinical features, staging, and treatment. Curr Probl Cancer 17:69–141. https://doi.org/10.1016/0147-0272(93)90010-y

Corner J (2005) Is late diagnosis of lung cancer inevitable? Interview study of patients’ recollections of symptoms before diagnosis. Thorax 60:314–319. https://doi.org/10.1136/thx.2004.029264

Coudray N, Ocampo PS, Sakellaropoulos T et al (2018) Classification and mutation prediction from non–small cell lung cancer histopathology images using deep learning. Nat Med 24:1559–1567. https://doi.org/10.1038/s41591-018-0177-5

Cui X, Zheng S, Heuvelmans MA et al (2022) Performance of a deep learning-based lung nodule detection system as an alternative reader in a Chinese lung cancer screening program. Eur J Radiol 146:110068–110068. https://doi.org/10.1016/j.ejrad.2021.110068

Das P, Das B, Dutta HS (2020) Prediction of lungs cancer using machine learning

Dhara AK, Mukhopadhyay S, Dutta A et al (2016) A Combination of Shape and Texture Features for Classification of Pulmonary Nodules in Lung CT Images. J Digit Imaging 29:466–475. https://doi.org/10.1007/s10278-015-9857-6

Doppalapudi S, Qiu RG, Badr Y (2021) Lung cancer survival period prediction and understanding: Deep learning approaches. Int J Med Informatics 148:104371. https://doi.org/10.1016/j.ijmedinf.2020.104371

El-Baz A, Elnakib A, Abou El-Ghar M et al (2013) Automatic Detection of 2D and 3D Lung Nodules in Chest Spiral CT Scans. Int J Biomed Imaging 2013:1–11. https://doi.org/10.1155/2013/517632

Elnakib A, Amer HM, Abou-Chadi FE (2020) Early Lung Cancer Detection using Deep Learning Optimization. Int J Online Biomed Eng (iJOE) 16:82. https://doi.org/10.3991/ijoe.v16i06.13657

Esposito L, Conti D, Ailavajhala R et al (2010) Lung Cancer: Are we up to the Challenge? Curr Genom 11:513–518. https://doi.org/10.2174/138920210793175903

Esteva A, Robicquet A, Ramsundar B et al (2019) A guide to deep learning in healthcare. Nat Med 25:24–29. https://doi.org/10.1038/s41591-018-0316-z

Field JK, Oudkerk M, Pedersen JH, Duffy SW (2013) Prospects for population screening and diagnosis of lung cancer. The Lancet 382:732–741. https://doi.org/10.1016/s0140-6736(13)61614-1

Gheisari M, Wang G, Alam Bhuiyan MdZ (2017) A Survey on deep learning in big data. https://doi.org/10.1109/CSE-EUC.2017.215

Gheisari M, Ebrahimzadeh F, Rahimi M et al (2023) Deep learning: applications, architectures, models, tools, and frameworks: a comprehensive survey. CAAI Trans Intell Technol. https://doi.org/10.1049/cit2.12180

Gheisari M (2016) The Effectiveness of schema therapy integrated with neurological rehabilitation methods to improve executive functions in patients with chronic depression. Health Sci J 10

Goebel C, Louden CL, Mckenna R et al (2019) Diagnosis of Non-small Cell Lung Cancer for Early Stage Asymptomatic Patients. Cancer Genom - Proteom 16:229–244. https://doi.org/10.21873/cgp.20128

Gu Y, Chi J, Liu J et al (2021) A survey of computer-aided diagnosis of lung nodules from CT scans using deep learning. Comput Biol Med 137:104806. https://doi.org/10.1016/j.compbiomed.2021.104806

Guo Y, Song Q, Jiang M et al (2021) Histological Subtypes Classification of Lung Cancers on CT Images Using 3D Deep Learning and Radiomics. Acad Radiol 28:e258–e266. https://doi.org/10.1016/j.acra.2020.06.010

Heuvelmans MA, van Ooijen PMA, Ather S et al (2021) Lung cancer prediction by Deep Learning to identify benign lung nodules. Lung Cancer 154:1–4. https://doi.org/10.1016/j.lungcan.2021.01.027

Hoang Ngoc Pham H, Futakuchi M, Bychkov A et al (2019) Detection of Lung Cancer Lymph Node Metastases from Whole-Slide Histopathologic Images Using a Two-Step Deep Learning Approach. Am J Pathol 189:2428–2439. https://doi.org/10.1016/j.ajpath.2019.08.014

Horeweg N, van Rosmalen J, Heuvelmans MA et al (2014) Lung cancer probability in patients with CT-detected pulmonary nodules: a prespecified analysis of data from the NELSON trial of low-dose CT screening. Lancet Oncol 15:1332–1341. https://doi.org/10.1016/S1470-2045(14)70389-4

Hu J, Cui C, Yang W et al (2021) Using deep learning to predict anti-PD-1 response in melanoma and lung cancer patients from histopathology images. Translational Oncol 14:100921. https://doi.org/10.1016/j.tranon.2020.100921

Hurria A, Kris MG (2003) Management of Lung Cancer in Older Adults. CA: A Cancer J Clin 53:325–341. https://doi.org/10.3322/canjclin.53.6.325

Hussein S, Kandel P, Bolan CW et al (2019) Lung and Pancreatic Tumor Characterization in the Deep Learning Era: Novel Supervised and Unsupervised Learning Approaches. IEEE Trans Med Imaging 38:1777–1787. https://doi.org/10.1109/tmi.2019.2894349

Jamshaid Iqbal Janjua, Tahir Abbas Khan, Nadeem M (2022) Chest x-ray anomalous object detection and classification framework for medical diagnosis. 2022 International conference on information networking (ICOIN). https://doi.org/10.1109/icoin53446.2022.9687110

Javed R, Abbas T, Jamshaid Iqbal Janjua et al (2023) wrist fracture prediction using transfer learning, a case study. J Popul Ther Clin Pharmacol 30. https://doi.org/10.53555/jptcp.v30i18.3161

Jena SR, George ST, Ponraj DN (2021) Lung cancer detection and classification with DGMM-RBCNN technique. 33:15601–15617. https://doi.org/10.1007/s00521-021-06182-5

Jiang H, Ma H, Qian W et al (2017) An Automatic Detection System of Lung Nodule Based on Multigroup Patch-Based Deep Learning Network. IEEE J Biomed Health Inform 22:1227–1237. https://doi.org/10.1109/JBHI.2017.2725903

Jiang W, Zeng G, Wang S et al (2022) Application of Deep Learning in Lung Cancer Imaging Diagnosis. J Healthc Eng 2022:1–12. https://doi.org/10.1155/2022/6107940

Jung H, Kim B, Lee I et al (2018) Classification of lung nodules in CT scans using three-dimensional deep convolutional neural networks with a checkpoint ensemble method. BMC Med Imaging 18. https://doi.org/10.1186/s12880-018-0286-0

Kumar V, Bakariya B (2021) Classification of malignant lung cancer using deep learning. J Med Eng Technol 45:85–93. https://doi.org/10.1080/03091902.2020.1853837

Kumar Swain A, Swetapadma A, Kumar Rout J, Kumar Balabantaray B (2024) Classification of non-small cell lung cancer types using sparse deep neural network features. Biomed Signal Process Control 87. https://doi.org/10.1016/j.bspc.2023.105485

Laal M (2013) Innovation Process in Medical Imaging. Procedia Soc Behav Sci 81:60–64. https://doi.org/10.1016/j.sbspro.2013.06.388

Lakshmanaprabu SK, Mohanty SN, Shankar K et al (2019) Optimal deep learning model for classification of lung cancer on CT images. Future Gener Comput Syst 92:374–382. https://doi.org/10.1016/j.future.2018.10.009

Lang N, Zhang Y, Zhang E et al (2019) Differentiation of spinal metastases originated from lung and other cancers using radiomics and deep learning based on DCE-MRI. Magn Reson Imaging 64:4–12. https://doi.org/10.1016/j.mri.2019.02.013

Lanjewar MG, Panchbhai KG, Charanarur P (2023) Lung Cancer detection from CT scans using modified DenseNet with feature selection methods and ML classifiers. Expert Syst Appl 119961. https://doi.org/10.1016/j.eswa.2023.119961

Lardinois D, Weder W, Hany TF et al (2003) Staging of Non–Small-Cell Lung Cancer with Integrated Positron-Emission Tomography and Computed Tomography. N Engl J Med 348:2500–2507. https://doi.org/10.1056/nejmoa022136

Li Y, Zhang L, Chen H, Yang N (2019) Lung Nodule Detection With Deep Learning in 3D Thoracic MR Images. IEEE Access 7:37822–37832. https://doi.org/10.1109/access.2019.2905574

Li Z, Zhang J, Tan T et al (2020) Deep learning methods for lung cancer segmentation in whole-slide histopathology images -- the ACDC@LungHP Challenge 2019. arXiv (Cornell University). https://doi.org/10.48550/arxiv.2008.09352

Lin C-J, Li Y-C (2020) Lung Nodule Classification Using Taguchi-Based Convolutional Neural Networks for Computer Tomography Images. Electronics 9:1066. https://doi.org/10.3390/electronics9071066

Liu Y, Wang H, Gu Y, Lv X (2019) Image classification toward lung cancer recognition by learning deep quality model. J Vis Commun Image Represent 63:102570. https://doi.org/10.1016/j.jvcir.2019.06.012

Liu Z, Yao C, Yu H, Wu T (2019) Deep reinforcement learning with its application for lung cancer detection in medical Internet of Things. Futur Gener Comput Syst 97:1–9. https://doi.org/10.1016/j.future.2019.02.068

Liu S, Yao W (2022) Prediction of lung cancer using gene expression and deep learning with KL divergence gene selection. BMC Bioinformatics 23. https://doi.org/10.1186/s12859-022-04689-9

Liu X, Li K-W, Yang R, Geng L-S (2021) Review of deep learning based automatic segmentation for lung cancer Radiotherapy. Front Oncol 11. https://doi.org/10.3389/fonc.2021.717039

Lowe VJ, Fletcher JW, Gobar L et al (1998) Prospective investigation of positron emission tomography in lung nodules. J Clin Oncol 16:1075–1084. https://doi.org/10.1200/jco.1998.16.3.1075

Lu L, Tan Y, Schwartz LH, Zhao B (2015) Hybrid detection of lung nodules on CT scan images. Med Phys 42:5042–5054. https://doi.org/10.1118/1.4927573

Manjula Devi R, Dhanaraj RK, Pani SK et al (2023) An improved deep convolutionary neural network for bone marrow cancer detection using image processing. Inf Med Unlocked 101233. https://doi.org/10.1016/j.imu.2023.101233

Masood I, Wang Y, Daud A et al (2018) Towards Smart Healthcare: Patient Data Privacy and Security in Sensor-Cloud Infrastructure. Wirel Commun Mob Comput 2018:1–23. https://doi.org/10.1155/2018/2143897

Masood I, Wang Y, Daud A et al (2018) Privacy management of patient physiological parameters. Telematics Inform 35:677–701. https://doi.org/10.1016/j.tele.2017.12.020

Masood I, Daud A, Wang Y et al (2023) A blockchain-based system for patient data privacy and security. Multimedia Tools Appl. https://doi.org/10.1007/s11042-023-17941-y

Massimo B (2012) A classification of pulmonary nodules by CT scan. https://doi.org/10.3332/ecancer.2012.260

Masud M, Sikder N, Nahid A-A et al (2021) A Machine Learning Approach to Diagnosing Lung and Colon Cancer Using a Deep Learning-Based Classification Framework. Sensors 21:748. https://doi.org/10.3390/s21030748

Mayo Clinic (2022) Cancer - Symptoms and Causes. In: Mayo Clinic. https://www.mayoclinic.org/diseases-conditions/cancer/symptoms-causes/syc-20370588

Middleton WD, Teefey SA, Dahiya N (2006) Ultrasound-Guided Chest Biopsies. Ultrasound Q 22:241–252. https://doi.org/10.1097/01.ruq.0000237258.48756.94

Miotto R, Wang F, Wang S et al (2018) Deep Learning for healthcare: review, Opportunities and Challenges. Brief Bioinform 19:1236–1246. https://doi.org/10.1093/bib/bbx044

Mishra S, Dash A, Jena L (2020) Use of deep learning for disease detection and diagnosis. 903. https://doi.org/10.1007/978-981-15-5495-7_10

Mittal S, Hasija Y (2019) Deep Learning Techniques for Biomedical and Health Informatics. Stud Big Data 68:57–77. https://doi.org/10.1007/978-3-030-33966-1_4

Müller M, Zumbusch A (2007) Coherent anti-Stokes Raman Scattering Microscopy. ChemPhysChem 8:2156–2170. https://doi.org/10.1002/cphc.200700202

Nam JG, Park S, Hwang EJ et al (2019) Development and Validation of Deep Learning–based Automatic Detection Algorithm for Malignant Pulmonary Nodules on Chest Radiographs. Radiology 290:218–228. https://doi.org/10.1148/radiol.2018180237

Naqi SM, Sharif M, Jaffar A (2018) Lung nodule detection and classification based on geometric fit in parametric form and deep learning. Neural Comput Appl 32:4629–4647. https://doi.org/10.1007/s00521-018-3773-x

Nardi-Agmon I, Peled N (2017) Exhaled breath analysis for the early detection of lung cancer: recent developments and future prospects. Lung Cancer: Targets Ther 8:31–38. https://doi.org/10.2147/lctt.s104205

Nasrullah, Sang J, Mohammad Khursheed Alam, Xiang H (2019) Automated detection and classification for early stage lung cancer on CT images using deep learning. https://doi.org/10.1117/12.2520333

Nibali A, He Z, Wollersheim D (2017) Pulmonary nodule classification with deep residual networks. Int J Comput Assist Radiol Surg 12:1799–1808. https://doi.org/10.1007/s11548-017-1605-6

Obulesu O, Kallam S, Dhiman G et al (2021) Adaptive diagnosis of lung cancer by deep learning classification using Wilcoxon gain and generator. 2021:1–13. https://doi.org/10.1155/2021/5912051

Oh S, Im J, Kang S-R et al (2021) PET-Based Deep-Learning Model for Predicting Prognosis of Patients With Non-Small Cell Lung Cancer. IEEE Access 9:138753–138761. https://doi.org/10.1109/access.2021.3115486

Ozdemir O, Russell RL, Berlin AA (2020) A 3D Probabilistic Deep Learning System for Detection and Diagnosis of Lung Cancer Using Low-Dose CT Scans. IEEE Trans Med Imaging 39:1419–1429. https://doi.org/10.1109/tmi.2019.2947595

Pandit BR, Alsadoon A, Prasad PWC et al (2022) Deep learning neural network for lung cancer classification: enhanced optimization function. Multimedia Tools Appl. https://doi.org/10.1007/s11042-022-13566-9

Parascandola M, Xiao L (2019) Tobacco and the lung cancer epidemic in China. Translational Lung Cancer Res 8:S21–S30. https://doi.org/10.21037/tlcr.2019.03.12

Park S, Jin Lee S, Weiss E, Motai Y (2016) Intra- and Inter-Fractional Variation Prediction of Lung Tumors Using Fuzzy Deep Learning. IEEE J Translational Eng Health Med 4:4300112. https://doi.org/10.1109/JTEHM.2016.2516005

Parris BA, O’Farrell HE, Fong KM, Yang IA (2019) Chronic obstructive pulmonary disease (COPD) and lung cancer: common pathways for pathogenesis. J Thoracic Dis\ 11:S2155–S2172. https://doi.org/10.21037/jtd.2019.10.54

Pathak H, Manoj Kumar Pandey, Kaur J (2018) Detection and feature extraction of cancer nodules in lung CT image. J Emerging Technol Innov Res

Quadrelli S, Lyons G, Colt H et al (2015) Clinical Characteristics and Prognosis of Incidentally Detected Lung Cancers. Int J Surg Oncol 2015:1–6. https://doi.org/10.1155/2015/287604

Roointan A, Ahmad Mir T, Ibrahim Wani S et al (2019) Early detection of lung cancer biomarkers through biosensor technology: A review. J Pharm Biomed Anal 164:93–103. https://doi.org/10.1016/j.jpba.2018.10.017

Said Y, Alsheikhy AA, Shawly T, Lahza H (2023) Medical Images Segmentation for Lung Cancer Diagnosis Based on Deep Learning Architectures. Diagnostics 13:546–546. https://doi.org/10.3390/diagnostics13030546

Savitha G, Jidesh P (2020) A holistic deep learning approach for identification and classification of sub-solid lung nodules in computed tomographic scans. Comput Electr Eng 84:106626. https://doi.org/10.1016/j.compeleceng.2020.106626

Schaefer-Prokop C, Prokop M (2002) New imaging techniques in the treatment guidelines for lung cancer. Eur Respir J 19:71S-83S. https://doi.org/10.1183/09031936.02.00277902

Shah AA, Malik HAM, Muhammad A et al (2023) Deep learning ensemble 2D CNN approach towards the detection of lung cancer. Sci Rep 13. https://doi.org/10.1038/s41598-023-29656-z

Shakeel PM, Burhanuddin MA, Desa MI (2019) Lung cancer detection from CT image using improved profuse clustering and deep learning instantaneously trained neural networks. Measurement 145:702–712. https://doi.org/10.1016/j.measurement.2019.05.027

Shakeel PM, Burhanuddin MA, Desa MI (2020) Automatic lung cancer detection from CT image using improved deep neural network and ensemble classifier. Neural Comput Appl. https://doi.org/10.1007/s00521-020-04842-6

Shakeel PM, Burhanuddin MA, Desa MI (2022) Automatic lung cancer detection from CT image using improved deep neural network and ensemble classifier. Neural Comput Appl 7731–7762. https://doi.org/10.1007/s00521-020-04842-6

Sharma R (2022) Mapping of global, regional and national incidence, mortality and mortality-to-incidence ratio of lung cancer in 2020 and 2050. Int J Clin Oncol 27: https://doi.org/10.1007/s10147-021-02108-2

Siddiqui EA, Chaurasia V, Shandilya M (2023) Detection and classification of lung cancer computed tomography images using a novel improved deep belief network with Gabor filters. Chemom Intell Lab Syst 235:104763. https://doi.org/10.1016/j.chemolab.2023.104763

Silvestri GA, Tanner NT, Kearney P et al (n.d.) Assessment of plasma proteomics biomarkers' ability to distinguish benign from malignant lung nodules: results of the PANOPTIC (Pulmonary Nodule Plasma Proteomic Classifier) trial. Chest 154:491–500. https://doi.org/10.1016/j.chest.2018.02.012

Song Q, Zhao L, Luo X, Dou X (2017) Using Deep Learning for Classification of Lung Nodules on Computed Tomography Images. J Healthc Eng 2017:1–7. https://doi.org/10.1155/2017/8314740

Sori WJ, Feng J, Godana AW et al (2020) DFD-Net: lung cancer detection from denoised CT scan image using deep learning. Front Comput Sci 15: https://doi.org/10.1007/s11704-020-9050-z

Strak M, Janssen N, Beelen R et al (2017) Associations between lifestyle and air pollution exposure: Potential for confounding in large administrative data cohorts. Environ Res 156:364–373. https://doi.org/10.1016/j.envres.2017.03.050

Sui D, Guo M, Ma X et al (2021) Image bio-markers and gene expression data correlation framework for lung cancer radio-genomics analysis based on deep learning. Res Square (Res Square). https://doi.org/10.21203/rs.3.rs-144196/v1

Tan J, Huo Y, Liang Z, Li L (2019) Expert knowledge-infused deep learning for automatic lung nodule detection. J Xray Sci Technol 27:17–35. https://doi.org/10.3233/xst-180426

Tariq Hussain S (n.d.) The journey: from X-rays to PET-MRI. Indian J Nucl Med

Tian Q, Wu Y, Ren X, Razmjooy N (2021) A New optimized sequential method for lung tumor diagnosis based on deep learning and converged search and rescue algorithm. Biomed Signal Process Control 68:102761. https://doi.org/10.1016/j.bspc.2021.102761

Tran GS, Nghiem TP, Nguyen VT et al (2019) Improving Accuracy of Lung Nodule Classification Using Deep Learning with Focal Loss. J Healthc Eng 2019:1–9. https://doi.org/10.1155/2019/5156416

Traverso A, Lopez Torres E, Fantacci ME, Cerello P (2017) Computer-aided detection systems to improve lung cancer early diagnosis: state-of-the-art and challenges. J Phys: Conf Ser 841:012013. https://doi.org/10.1088/1742-6596/841/1/012013

Wang X, Chen H, Gan C et al (2020) Weakly Supervised Deep Learning for Whole Slide Lung Cancer Image Analysis. IEEE Trans Cybern 50:3950–3962. https://doi.org/10.1109/tcyb.2019.2935141

Wang Y-W, Chen C-J, Huang H-C et al (2021) Dual energy CT image prediction on primary tumor of lung cancer for nodal metastasis using deep learning. Comput Med Imaging Graph 91:101935–101935. https://doi.org/10.1016/j.compmedimag.2021.101935

Wang W, Liu F, Zhi X et al (2020a) An Integrated deep learning algorithm for detecting lung nodules with low-dose CT and its application in 6G-enabled internet of medical things. IEEE Internet Things J 1–1. https://doi.org/10.1109/jiot.2020.3023436

Wani NA, Kumar R, Bedi J (2023) DeepXplainer: an interpretable deep learning based approach for lung cancer detection using explainable artificial intelligence. Comput Methods Programs Biomed 243:107879. https://doi.org/10.1016/j.cmpb.2023.107879

Wankhade S, Vigneshwari S (2023) Lung cell cancer identification mechanism using deep learning approach. Soft Comput. https://doi.org/10.1007/s00500-023-08661-4

Weng S, Xu X, Li J, Wong STC (2017) Combining deep learning and coherent anti-Stokes Raman scattering imaging for automated differential diagnosis of lung cancer. J Biomed Opt 22:1. https://doi.org/10.1117/1.jbo.22.10.106017

World Health Organization (2022) Cancer. In: World Health Organization. https://www.who.int/news-room/fact-sheets/detail/cancer. Accessed 22 Jan 2024

Wu P, Sun X, Zhao Z et al (2020) Classification of lung nodules based on deep residual networks and migration learning. 2020:1–10. https://doi.org/10.1155/2020/8975078

Wu J, Qian T (2019) A survey of pulmonary nodule detection, segmentation and classification in computed tomography with deep learning techniques. J Med Artif Intell 2:8–8. https://doi.org/10.21037/jmai.2019.04.01

Xie Y, Xia Y, Zhang J et al (2019) Knowledge-based Collaborative Deep Learning for Benign-Malignant Lung Nodule Classification on Chest CT. IEEE Trans Med Imaging 38:991–1004. https://doi.org/10.1109/tmi.2018.2876510

Xu Y, Hosny A, Zeleznik R et al (2019) Deep Learning Predicts Lung Cancer Treatment Response from Serial Medical Imaging. Clin Cancer Res 25:3266–3275. https://doi.org/10.1158/1078-0432.ccr-18-2495

Yu H, Zhou Z, Wang Q (2020) Deep Learning Assisted Predict of Lung Cancer on Computed Tomography Images Using the Adaptive Hierarchical Heuristic Mathematical Model. IEEE Access 8:86400–86410. https://doi.org/10.1109/access.2020.2992645

Yuan R, Vos PM, Cooperberg PL (2006) Computer-Aided Detection in Screening CT for Pulmonary Nodules. Am J Roentgenol 186:1280–1287. https://doi.org/10.2214/ajr.04.1969

Chen Y-J, Hua K-L, Hsu C-H et al (2015) Computer-aided classification of lung nodules on computed tomography images via deep learning technique. OncoTargets Ther 8:2015–2022. https://doi.org/10.2147/ott.s80733

Zhang Q, Kong X (2020) Design of Automatic Lung Nodule Detection System Based on Multi-Scene Deep Learning Framework. IEEE Access 8:90380–90389. https://doi.org/10.1109/access.2020.2993872

Zhang G, Yang Z, Gong L et al (2019) Classification of benign and malignant lung nodules from CT images based on hybrid features. Phys Med Biol 64:125011. https://doi.org/10.1088/1361-6560/ab2544

Zhao X, Wang X, Xia W et al (2020) A cross-modal 3D deep learning for accurate lymph node metastasis prediction in clinical stage T1 lung adenocarcinoma. Lung Cancer 145:10–17. https://doi.org/10.1016/j.lungcan.2020.04.014

Author information

Authors and affiliations

Department of Computer Science, TIMES Institute Multan, Multan, 60000, Pakistan

Rabia Javed & Tahir Abbas

Department of Software Engineering, Faculty of Computer Science, Lahore Garrison University, Lahore, 54000, Pakistan

Ali Haider Khan

Faculty of Resilience, Rabdan Academy, Abu Dhabi, United Arab Emirates

Ali Daud

Department of Information Systems and Technology, College of Computer Science and Engineering, University of Jeddah, Jeddah, Saudi Arabia

Amal Bukhari & Riad Alharbey

Contributions

Rabia and Ali Haider wrote a major part of the paper under the supervision of Tahir and Ali Daud. Tahir and Ali Daud helped design and improve the methodology and wrote the initial draft of the paper with Rabia and Ali Haider. Riad and Amal helped improve several sections of the paper, such as the review methodology, datasets, performance evaluation, and challenges and future directions. Ali, Amal and Riad improved the technical writing of the paper. All authors were involved in critically revising the manuscript and have approved the final version.

Corresponding author

Correspondence to Ali Daud.

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

About this article

Javed, R., Abbas, T., Khan, A.H. et al. Deep learning for lungs cancer detection: a review. Artif Intell Rev 57, 197 (2024). https://doi.org/10.1007/s10462-024-10807-1

Accepted: 16 May 2024

Published: 08 July 2024

DOI: https://doi.org/10.1007/s10462-024-10807-1


Keywords

  • Lungs Cancer
  • Deep learning
  • Classification
  • Segmentation
