Evaluating Research – Process, Examples and Methods
Definition:

Evaluating Research refers to the process of assessing the quality, credibility, and relevance of a research study or project. This involves examining the methods, data, and results of the research in order to determine its validity, reliability, and usefulness. Evaluating research can be done by both experts and non-experts in the field, and involves critical thinking, analysis, and interpretation of the research findings.

Research Evaluation Process

The process of evaluating research typically involves the following steps:

Identify the Research Question

The first step in evaluating research is to identify the research question or problem that the study is addressing. This will help you to determine whether the study is relevant to your needs.

Assess the Study Design

The study design refers to the methodology used to conduct the research. You should assess whether the study design is appropriate for the research question and whether it is likely to produce reliable and valid results.

Evaluate the Sample

The sample refers to the group of participants or subjects who are included in the study. You should evaluate whether the sample size is adequate and whether the participants are representative of the population under study.
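
For instance, part of judging sample adequacy is asking whether the study had enough participants to detect the effects it claims. Below is a minimal sketch in Python using statsmodels, assuming a simple two-group comparison and a hypothetical medium effect size; a real assessment would use the actual design and expected effect size of the study under review.

```python
# A minimal sketch: checking whether a sample size gives adequate power
# for detecting a medium effect in a two-group comparison.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Sample size per group needed to detect a hypothetical medium effect
# (Cohen's d = 0.5) at alpha = 0.05 with 80% power.
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(f"Required sample size per group: {n_per_group:.0f}")

# Conversely, the power achieved with 50 participants per group.
achieved = analysis.solve_power(effect_size=0.5, alpha=0.05, nobs1=50)
print(f"Power with 50 per group: {achieved:.2f}")
```

If a study's achieved power falls well below the conventional 0.8, its null results in particular should be interpreted with caution.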

Review the Data Collection Methods

You should review the data collection methods used in the study to ensure that they are valid and reliable. This includes assessing both the measures and the procedures used to collect the data.

Examine the Statistical Analysis

Statistical analysis refers to the methods used to analyze the data. You should examine whether the statistical analysis is appropriate for the research question and whether it is likely to produce valid and reliable results.
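
As a concrete illustration, one common check is whether a test's assumptions hold for the data. The sketch below (Python with SciPy, simulated data) tests normality before choosing between a parametric and a non-parametric comparison; it is an illustrative pattern under these assumptions, not a universal rule.

```python
# A minimal sketch: checking the normality assumption before choosing
# between a parametric t-test and a non-parametric alternative.
# Data are simulated purely for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
group_a = rng.normal(loc=5.0, scale=1.2, size=40)  # hypothetical scores
group_b = rng.normal(loc=5.6, scale=1.2, size=40)

# Shapiro-Wilk: the null hypothesis is that the sample is normally distributed.
normal_a = stats.shapiro(group_a).pvalue > 0.05
normal_b = stats.shapiro(group_b).pvalue > 0.05

if normal_a and normal_b:
    res = stats.ttest_ind(group_a, group_b)
    print(f"t-test: t = {res.statistic:.2f}, p = {res.pvalue:.4f}")
else:
    res = stats.mannwhitneyu(group_a, group_b)
    print(f"Mann-Whitney U = {res.statistic:.1f}, p = {res.pvalue:.4f}")
```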

Assess the Conclusions

You should evaluate whether the data support the conclusions drawn from the study and whether they are relevant to the research question.

Consider the Limitations

Finally, you should consider the limitations of the study, including any potential biases or confounding factors that may have influenced the results.

Evaluating Research Methods

Common methods for evaluating research include the following:

  • Peer review: Peer review is a process where experts in the field review a study before it is published. This helps ensure that the study is accurate, valid, and relevant to the field.
  • Critical appraisal: Critical appraisal involves systematically evaluating a study against specific criteria. This helps assess the quality of the study and the reliability of the findings.
  • Replication: Replication involves repeating a study to test the validity and reliability of the findings. This can help identify any errors or biases in the original study.
  • Meta-analysis: Meta-analysis is a statistical method that combines the results of multiple studies to provide a more comprehensive understanding of a particular topic. This can help identify patterns or inconsistencies across studies (see the sketch after this list).
  • Consultation with experts: Consulting with experts in the field can provide valuable insights into the quality and relevance of a study. Experts can also help identify potential limitations or biases in the study.
  • Review of funding sources: Examining the funding sources of a study can help identify any potential conflicts of interest or biases that may have influenced the study design or interpretation of results.
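
To make the meta-analysis idea concrete, here is a minimal sketch of inverse-variance (fixed-effect) pooling of three hypothetical study effect sizes in Python. Real meta-analyses also assess heterogeneity and often prefer random-effects models.

```python
# A minimal sketch of an inverse-variance (fixed-effect) meta-analysis,
# pooling hypothetical effect sizes from three studies.
import numpy as np

# Hypothetical per-study effect sizes (e.g., Cohen's d) and their variances.
effects = np.array([0.30, 0.45, 0.25])
variances = np.array([0.02, 0.05, 0.03])

weights = 1.0 / variances                     # inverse-variance weights
pooled = np.sum(weights * effects) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))

# 95% confidence interval for the pooled effect.
ci_low, ci_high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"Pooled effect: {pooled:.3f} (95% CI {ci_low:.3f} to {ci_high:.3f})")
```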

Example of Evaluating Research

A sample research evaluation for students:

Title of the Study: The Effects of Social Media Use on Mental Health among College Students

Sample Size: 500 college students

Sampling Technique: Convenience sampling

  • Sample Size: The sample size of 500 college students is moderate and could be considered representative of the college student population. However, it would be more representative if the sample size were larger or if a random sampling technique had been used.
  • Sampling Technique: Convenience sampling is a non-probability sampling technique, which means that the sample may not be representative of the population. This technique may introduce bias into the study, since the participants are self-selected and may not represent the entire college student population. Therefore, the results of this study may not be generalizable to other populations.
  • Participant Characteristics: The study does not provide any information about the demographic characteristics of the participants, such as age, gender, race, or socioeconomic status. This information is important because social media use and mental health may vary among different demographic groups.
  • Data Collection Method: The study used a self-administered survey to collect data. Self-administered surveys may be subject to response bias and may not accurately reflect participants’ actual behaviors and experiences.
  • Data Analysis: The study used descriptive statistics and regression analysis to analyze the data. Descriptive statistics provide a summary of the data, while regression analysis examines the relationship between two or more variables. However, the study did not report the statistical significance of the results or the effect sizes (a brief illustration of such reporting follows below).

Overall, while the study provides some insights into the relationship between social media use and mental health among college students, the use of a convenience sampling technique and the lack of information about participant characteristics limit the generalizability of the findings. In addition, the use of self-administered surveys may introduce bias into the study, and the lack of information about the statistical significance of the results limits the interpretation of the findings.
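
To illustrate the kind of reporting the example study omitted, the sketch below fits a regression of a simulated mental-health score on daily social media hours using Python's statsmodels and prints coefficients, p-values, and R-squared as a variance-explained measure. All data and variable names here are hypothetical.

```python
# A minimal sketch of reporting a regression with p-values and an
# effect-size measure. Variables and data are simulated and hypothetical.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
hours = rng.uniform(0, 8, size=500)                      # daily social media hours
score = 60 - 2.0 * hours + rng.normal(0, 10, size=500)   # mental-health score

X = sm.add_constant(hours)   # add an intercept column
model = sm.OLS(score, X).fit()

# The summary table includes estimates, standard errors, and p-values.
print(model.summary())
print(f"R-squared (variance explained): {model.rsquared:.3f}")
```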

Note: The above example is only a sample for students. Do not copy and paste it directly into your assignment; do your own research for academic purposes.

Applications of Evaluating Research

Here are some of the applications of evaluating research:

  • Identifying reliable sources: By evaluating research, researchers, students, and other professionals can identify the most reliable sources of information to use in their work. They can determine the quality of research studies, including the methodology, sample size, data analysis, and conclusions.
  • Validating findings: Evaluating research can help to validate findings from previous studies. By examining the methodology and results of a study, researchers can determine if the findings are reliable and if they can be used to inform future research.
  • Identifying knowledge gaps: Evaluating research can also help to identify gaps in current knowledge. By examining the existing literature on a topic, researchers can determine areas where more research is needed, and they can design studies to address these gaps.
  • Improving research quality: Evaluating research can help to improve the quality of future research. By examining the strengths and weaknesses of previous studies, researchers can design better studies and avoid common pitfalls.
  • Informing policy and decision-making: Evaluating research is crucial in informing policy and decision-making in many fields. By examining the evidence base for a particular issue, policymakers can make informed decisions that are supported by the best available evidence.
  • Enhancing education: Evaluating research is essential in enhancing education. Educators can use research findings to improve teaching methods, curriculum development, and student outcomes.

Purpose of Evaluating Research

Here are some of the key purposes of evaluating research:

  • Determine the reliability and validity of research findings: By evaluating research, researchers can determine the quality of the study design, data collection, and analysis. They can determine whether the findings are reliable, valid, and generalizable to other populations.
  • Identify the strengths and weaknesses of research studies: Evaluating research helps to identify the strengths and weaknesses of research studies, including potential biases, confounding factors, and limitations. This information can help researchers to design better studies in the future.
  • Inform evidence-based decision-making: Evaluating research is crucial in informing evidence-based decision-making in many fields, including healthcare, education, and public policy. Policymakers, educators, and clinicians rely on research evidence to make informed decisions.
  • Identify research gaps: By evaluating research, researchers can identify gaps in the existing literature and design studies to address these gaps. This process can help to advance knowledge and improve the quality of research in a particular field.
  • Ensure research ethics and integrity: Evaluating research helps to ensure that research studies are conducted ethically and with integrity. Researchers must adhere to ethical guidelines to protect the welfare and rights of study participants and to maintain the trust of the public.

Characteristics of Good Research

The characteristics to assess when evaluating research are as follows:

  • Research question/hypothesis: A good research question or hypothesis should be clear, concise, and well-defined. It should address a significant problem or issue in the field and be grounded in relevant theory or prior research.
  • Study design: The research design should be appropriate for answering the research question and be clearly described in the study. The study design should also minimize bias and confounding variables.
  • Sampling: The sample should be representative of the population of interest, and the sampling method should be appropriate for the research question and study design.
  • Data collection: The data collection methods should be reliable and valid, and the data should be accurately recorded and analyzed.
  • Results: The results should be presented clearly and accurately, and the statistical analysis should be appropriate for the research question and study design.
  • Interpretation of results: The interpretation of the results should be based on the data and not influenced by personal biases or preconceptions.
  • Generalizability: The study findings should be generalizable to the population of interest and relevant to other settings or contexts.
  • Contribution to the field: The study should make a significant contribution to the field and advance our understanding of the research question or issue.

Advantages of Evaluating Research

Evaluating research has several advantages, including:

  • Ensuring accuracy and validity: By evaluating research, we can ensure that the research is accurate, valid, and reliable. This ensures that the findings are trustworthy and can be used to inform decision-making.
  • Identifying gaps in knowledge: Evaluating research can help identify gaps in knowledge and areas where further research is needed. This can guide future research and help build a stronger evidence base.
  • Promoting critical thinking: Evaluating research requires critical thinking skills, which can be applied in other areas of life. By evaluating research, individuals can develop their critical thinking skills and become more discerning consumers of information.
  • Improving the quality of research: Evaluating research can help improve the quality of research by identifying areas where improvements can be made. This can lead to more rigorous research methods and better-quality research.
  • Informing decision-making: By evaluating research, we can make informed decisions based on the evidence. This is particularly important in fields such as medicine and public health, where decisions can have significant consequences.
  • Advancing the field: Evaluating research can help advance the field by identifying new research questions and areas of inquiry. This can lead to the development of new theories and the refinement of existing ones.

Limitations of Evaluating Research

The limitations of evaluating research are as follows:

  • Time-consuming: Evaluating research can be time-consuming, particularly if the study is complex or requires specialized knowledge. This can be a barrier for individuals who are not experts in the field or who have limited time.
  • Subjectivity: Evaluating research can be subjective, as different individuals may have different interpretations of the same study. This can lead to inconsistencies in the evaluation process and make it difficult to compare studies.
  • Limited generalizability: The findings of a study may not be generalizable to other populations or contexts. This limits the usefulness of the study and may make it difficult to apply the findings to other settings.
  • Publication bias: Research that does not find significant results may be less likely to be published, which can create a bias in the published literature. This can limit the amount of information available for evaluation.
  • Lack of transparency: Some studies may not provide enough detail about their methods or results, making it difficult to evaluate their quality or validity.
  • Funding bias: Research funded by particular organizations or industries may be biased towards the interests of the funder. This can influence the study design, methods, and interpretation of results.

About the author

Muhammad Hassan, Researcher, Academic Writer, Web developer

Evaluating Sources | Methods & Examples

Published on June 2, 2022 by Eoghan Ryan. Revised on May 31, 2023.

The sources you use are an important component of your research. It’s important to evaluate the sources you’re considering using, in order to:

  • Ensure that they’re credible
  • Determine whether they’re relevant to your topic
  • Assess the quality of their arguments

Table of contents

  • Evaluating a source’s credibility
  • Evaluating a source’s relevance
  • Evaluating a source’s arguments
  • Frequently asked questions about evaluating sources

Evaluating a source’s credibility

Evaluating the credibility of a source is an important way of sifting out misinformation and determining whether you should use it in your research. Useful approaches include the CRAAP test and lateral reading.

One of the best ways to evaluate source credibility is the CRAAP test. This stands for:

  • Currency: Does the source reflect recent research?
  • Relevance: Is the source related to your research topic?
  • Authority: Is it a respected publication? Is the author an expert in their field?
  • Accuracy: Does the source support its arguments and conclusions with evidence?
  • Purpose: What is the author’s intention?

How you evaluate a source using these criteria will depend on your subject and focus. It’s important to understand the types of sources and how you should use them in your field of research.
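
If you evaluate many sources, it can help to record CRAAP judgments in a consistent, structured form. The sketch below is one illustrative way to do that in Python; the 1-5 scoring scale is an assumption made for illustration and is not part of the CRAAP test itself.

```python
# A minimal sketch: recording CRAAP judgments for a source as structured
# data so evaluations are consistent and comparable across sources.
# The criteria names follow the CRAAP test; the 1-5 scale is illustrative.
from dataclasses import dataclass

@dataclass
class CraapAssessment:
    source: str
    currency: int    # 1-5: does the source reflect recent research?
    relevance: int   # 1-5: is it related to your research topic?
    authority: int   # 1-5: respected publication, expert author?
    accuracy: int    # 1-5: arguments supported by evidence?
    purpose: int     # 1-5: is the author's intention sound?

    def total(self) -> int:
        return (self.currency + self.relevance + self.authority
                + self.accuracy + self.purpose)

# A hypothetical source assessed against the five criteria.
review = CraapAssessment("Smith (2020), Journal of X", 4, 5, 4, 3, 4)
print(f"{review.source}: {review.total()}/25")
```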

Lateral reading

Lateral reading is the act of evaluating the credibility of a source by comparing it to other sources. This allows you to:

  • Verify evidence
  • Contextualize information
  • Find potential weaknesses

If a source is using methods or drawing conclusions that are incompatible with other research in its field, it may not be reliable.

For example, suppose a source makes striking claims about immigration figures. Rather than taking these figures at face value, you decide to determine the accuracy of the source’s claims by cross-checking them with official statistics such as census reports and figures compiled by the Department of Homeland Security’s Office of Immigration Statistics.


Evaluating a source’s relevance

How you evaluate the relevance of a source will depend on your topic, and on where you are in the research process. Preliminary evaluation helps you to pick out relevant sources in your search, while in-depth evaluation allows you to understand how they’re related.

Preliminary evaluation

As you cannot possibly read every source related to your topic, you can use preliminary evaluation to determine which sources might be relevant. This is especially important when you’re surveying a large number of sources (e.g., in a literature review or systematic review).

One way to do this is to look at paratextual material, or the parts of a work other than the text itself.

  • Look at the table of contents to determine the scope of the work.
  • Consult the index for key terms or the names of important scholars.

You can also read abstracts, prefaces, introductions, and conclusions. These will give you a clear idea of the author’s intentions, the parameters of the research, and even the conclusions they draw.

Preliminary evaluation is useful as it allows you to:

  • Determine whether a source is worth examining in more depth
  • Quickly move on to more relevant sources
  • Increase the quality of the information you consume

While this preliminary evaluation is an important step in the research process, you should engage with sources more deeply in order to adequately understand them.

In-depth evaluation

Begin your in-depth evaluation with any landmark studies in your field of research, or with sources that you’re sure are related to your research topic.

As you read, try to understand the connections between the sources. Look for:

  • Key debates: What topics or questions are currently influencing research? How does the source respond to these key debates?
  • Major publications or critics: Are there any specific texts or scholars that have greatly influenced the field? How does the source engage with them?
  • Trends: Is the field currently dominated by particular theories or research methods? How does the source respond to these?
  • Gaps: Are there any oversights or weaknesses in the research?

Even sources whose conclusions you disagree with can be relevant, as they can strengthen your argument by offering alternative perspectives.

Evaluating a source’s arguments

Every source should contribute to the debate about its topic by taking a clear position. This position and the conclusions the author comes to should be supported by evidence from direct observation or from other sources.

Most sources will use a mix of primary and secondary sources to form an argument. It is important to consider how the author uses these sources. A good argument should be based on analysis and critique, and there should be a logical relationship between evidence and conclusions.

To assess an argument’s strengths and weaknesses, ask:

  • Does the evidence support the claim?
  • How does the author use evidence? What theories, methods, or models do they use?
  • Could the evidence be used to draw other conclusions? Can it be interpreted differently?
  • How does the author situate their argument in the field? Do they agree or disagree with other scholars? Do they confirm or challenge established knowledge?

Situating a source in relation to other sources (lateral reading) can help you determine whether the author’s arguments and conclusions are reliable and how you will respond to them in your own writing.

Frequently asked questions about evaluating sources

As you cannot possibly read every source related to your topic, it’s important to evaluate sources to assess their relevance. Use preliminary evaluation to determine whether a source is worth examining in more depth.

This involves:

  • Reading abstracts, prefaces, introductions, and conclusions
  • Looking at the table of contents to determine the scope of the work
  • Consulting the index for key terms or the names of important scholars

Lateral reading is the act of evaluating the credibility of a source by comparing it with other sources. This allows you to verify evidence, contextualize information, and find potential weaknesses.

A credible source should pass the CRAAP test and follow these guidelines:

  • The information should be up to date and current.
  • The author and publication should be a trusted authority on the subject you are researching.
  • The sources the author cited should be easy to find, clear, and unbiased.
  • For a web source, the URL and layout should signify that it is trustworthy.

The CRAAP test is an acronym to help you evaluate the credibility of a source you are considering using. It is an important component of information literacy.

The CRAAP test has five main components:

  • Currency: Is the source up to date?
  • Relevance: Is the source relevant to your research?
  • Authority: Where is the source published? Who is the author? Are they considered reputable and trustworthy in their field?
  • Accuracy: Is the source supported by evidence? Are the claims cited correctly?
  • Purpose: What was the motive behind publishing this source?

Scholarly sources are written by experts in their field and are typically subjected to peer review. They are intended for a scholarly audience, include a full bibliography, and use scholarly or technical language. For these reasons, they are typically considered credible sources.

Popular sources like magazines and news articles are typically written by journalists. These types of sources usually don’t include a bibliography and are written for a popular, rather than academic, audience. They are not always reliable and may be written from a biased or uninformed perspective, but they can still be cited in some contexts.

Understanding and Evaluating Research: A Critical Guide

  • By: Sue L. T. McGregor
  • Publisher: SAGE Publications, Inc
  • Publication year: 2018
  • Online pub date: December 20, 2019
  • Discipline: Sociology, Education, Psychology, Health, Anthropology, Social Policy and Public Policy, Social Work, Political Science and International Relations, Geography
  • Methods: Theory, Research questions, Mixed methods
  • DOI: https://doi.org/10.4135/9781071802656
  • Keywords: discipline, emotion, Johnson & Johnson, journals, knowledge, law, peer review
  • Print ISBN: 9781506350950
  • Online ISBN: 9781071802656

Understanding and Evaluating Research: A Critical Guide shows students how to be critical consumers of research and to appreciate the power of methodology as it shapes the research question, the use of theory in the study, the methods used, and how the outcomes are reported. The book starts with what it means to be a critical and uncritical reader of research, followed by a detailed chapter on methodology, and then proceeds to a discussion of each component of a research article as it is informed by the methodology. The book encourages readers to select an article from their discipline, learning along the way how to assess each component of the article and come to a judgment of its rigor or quality as a scholarly report.

Front Matter

  • Acknowledgments
  • About the Author
  • INTRODUCTION
  • Chapter 1: Critical Research Literacy
  • PHILOSOPHICAL AND THEORETICAL ASPECTS OF RESEARCH
  • Chapter 2: Research Methodologies
  • Chapter 3: Conceptual Frameworks, Theories, and Models
  • ORIENTING AND SUPPORTIVE ELEMENTS OF RESEARCH
  • Chapter 4: Orienting and Supportive Elements of a Journal Article
  • Chapter 5: Peer-Reviewed Journals
  • RESEARCH JUSTIFICATIONS, AUGMENTATION, AND RATIONALES
  • Chapter 6: Introduction and Research Questions
  • Chapter 7: Literature Review
  • RESEARCH DESIGN AND RESEARCH METHODS
  • Chapter 8: Overview of Research Design and Methods
  • Chapter 9: Reporting Qualitative Research Methods
  • Chapter 10: Reporting Quantitative Methods and Mixed Methods Research
  • RESULTS AND FINDINGS
  • Chapter 11: Statistical Literacy and Conventions
  • Chapter 12: Descriptive and Inferential Statistics
  • Chapter 13: Results and Findings
  • DISCUSSION, CONCLUSIONS, AND RECOMMENDATIONS
  • Chapter 14: Discussion
  • Chapter 15: Conclusions
  • Chapter 16: Recommendations
  • ARGUMENTATIVE ESSAYS AND THEORETICAL PAPERS
  • Chapter 17: Argumentative Essays: Position, Discussion, and Think-Piece Papers
  • Chapter 18: Conceptual and Theoretical Papers

Back Matter

Critically appraising qualitative research

  • Ayelet Kuper, assistant professor 1
  • Lorelei Lingard, associate professor 2
  • Wendy Levinson, Sir John and Lady Eaton professor and chair 3
  • 1 Department of Medicine, Sunnybrook Health Sciences Centre, and Wilson Centre for Research in Education, University of Toronto, 2075 Bayview Avenue, Room HG 08, Toronto, ON, Canada M4N 3M5
  • 2 Department of Paediatrics and Wilson Centre for Research in Education, University of Toronto and SickKids Learning Institute; BMO Financial Group Professor in Health Professions Education Research, University Health Network, 200 Elizabeth Street, Eaton South 1-565, Toronto
  • 3 Department of Medicine, Sunnybrook Health Sciences Centre
  • Correspondence to: A Kuper ayelet94{at}post.harvard.edu

Six key questions will help readers to assess qualitative research

Summary points

Appraising qualitative research is different from appraising quantitative research

Qualitative research papers should show appropriate sampling, data collection, and data analysis

Transferability of qualitative research depends on context and may be enhanced by using theory

Ethics in qualitative research goes beyond review boards’ requirements to involve complex issues of confidentiality, reflexivity, and power

Over the past decade, readers of medical journals have gained skills in critically appraising studies to determine whether the results can be trusted and applied to their own practice settings. Criteria have been designed to assess studies that use quantitative methods, and these are now in common use.

In this article we offer guidance for readers on how to assess a study that uses qualitative research methods by providing six key questions to ask when reading qualitative research (box 1). However, the thorough assessment of qualitative research is an interpretive act and requires informed reflective thought rather than the simple application of a scoring system.

Box 1 Key questions to ask when reading qualitative research studies

  • Was the sample used in the study appropriate to its research question?
  • Were the data collected appropriately?
  • Were the data analysed appropriately?
  • Can I transfer the results of this study to my own setting?
  • Does the study adequately address potential ethical issues, including reflexivity?
  • Overall: is what the researchers did clear?

One of the critical decisions in a qualitative study is whom or what to include in the sample—whom to interview, whom to observe, what texts to analyse. An understanding that qualitative research is based in experience and in the construction of meaning, combined with the specific research question, should guide the sampling process. For example, a study of the experience of survivors of domestic violence that examined their reasons for not seeking help from healthcare providers might focus on interviewing a sample of such survivors (rather than, for example, healthcare providers, social services workers, or academics in the field). The sample should be broad enough to capture the many facets of a phenomenon, and limitations to the sample should be clearly justified. Since the answers to questions of experience and meaning also relate to people’s social affiliations (culture, religion, socioeconomic group, profession, etc), it is also important that the researcher acknowledges these contexts in the selection of a study sample.

In contrast with quantitative approaches, qualitative studies do not usually have predetermined sample sizes. Sampling stops when a thorough understanding of the phenomenon under study has been reached, an end point that is often called saturation. Researchers consider samples to be saturated when encounters (interviews, observations, etc) with new participants no longer elicit trends or themes not already raised by previous participants. Thus, to sample to saturation, data analysis has to happen while new data are still being collected. Multiple sampling methods may be used to broaden the understanding achieved in a study (box 2). These sampling issues should be clearly articulated in the methods section.

Box 2 Qualitative sampling methods for interviews and focus groups 9

Examples are for a hypothetical study of financial concerns among adult patients with chronic renal failure receiving ongoing haemodialysis in a single hospital outpatient unit.

Typical case sampling —sampling the most ordinary, usual cases of a phenomenon

The sample would include patients likely to have had typical experiences for that haemodialysis unit and patients who fit the profile of patients in the unit for factors found on literature review. Other typical cases could be found via snowball sampling (see below)

Deviant case sampling —sampling the most extreme cases of a phenomenon

The sample would include patients likely to have had different experiences of relevant aspects of haemodialysis. For example, if most patients in the unit are 60-70 years old and recently began haemodialysis for diabetic nephropathy, researchers might sample the unmarried university student in his 20s on haemodialysis since childhood, the 32 year old woman with lupus who is now trying to get pregnant, and the 90 year old who newly started haemodialysis due to an adverse reaction to radio-opaque contrast dye. Other deviant cases could be found via theoretical and/or snowball sampling (see below)

Critical case sampling —sampling cases that are predicted (based on theoretical models or previous research) to be especially information-rich and thus particularly illuminating

The nature of this sample depends on previous research. For example, if research showed that marital status was a major determinant of financial concerns for haemodialysis patients, then critical cases might include patients whose marital status changed while on haemodialysis

Maximum-variation sampling —sampling as wide a range of perspectives as possible to capture the broadest set of information and experiences

The sample would include typical, deviant, and critical cases (as above), plus any other perspectives identified

Confirming-disconfirming sampling —sampling both individuals or texts whose perspectives are likely to confirm the researcher’s developing understanding of the phenomenon under study and those whose perspectives are likely to challenge that understanding

The sample would include patients whose experiences would likely either confirm or disconfirm what the researchers had already learnt (from other patients) about financial concerns among patients in the haemodialysis unit. This could be accomplished via theoretical and/or snowball sampling (see below)

Snowball sampling —sampling participants found by asking current participants in a study to recommend others whose experiences would be relevant to the study

Current participants could be asked to provide the names of others in the unit who they thought, when asked about financial concerns, would either share their views (confirming), disagree with their views (disconfirming), have views typical of patients on their unit (typical cases), or have views different from most other patients on their unit (deviant cases)

Theoretical sampling —sampling individuals or texts whom the researchers predict (based on theoretical models or previous research) would add new perspectives to those already represented in the sample

Researchers could use their understanding of known issues for haemodialysis patients that would, in theory, relate to financial concerns to ensure that the relevant perspectives were represented in the study. For example, if, as the research progressed, it turned out that none of the patients in the sample had had to change or leave a job in order to accommodate haemodialysis scheduling, the researchers might (based on previous research) choose to intentionally sample patients who had left their jobs because of the time commitment of haemodialysis (but who could not do peritoneal dialysis) and others who had switched to jobs with more flexible scheduling because of their need for haemodialysis

It is important that a qualitative study carefully describes the methods used in collecting data. The appropriateness of the method(s) selected to use for the specific research question should be justified, ideally with reference to the research literature. It should be clear that methods were used systematically and in an organised manner. Attention should be paid to specific methodological challenges such as the Hawthorne effect, 1 whereby the presence of an observer may influence participants’ behaviours. By using a technique called thick description, qualitative studies often aim to include enough contextual information to provide readers with a sense of what it was like to have been in the research setting.

Another technique that is often used is triangulation, with which a researcher uses multiple methods or perspectives to help produce a more comprehensive set of findings. A study can triangulate data, using different sources of data to examine a phenomenon in different contexts (for example, interviewing palliative patients who are at home, those who are in acute care hospitals, and those who are in specialist palliative care units); it can also triangulate methods, collecting different types of data (for example, interviews, focus groups, observations) to increase insight into a phenomenon.

Another common technique is the use of an iterative process, whereby concurrent data analysis is used to inform data collection. For example, concurrent analysis of an interview study about lack of adherence to medications among a particular social group might show that early participants seem to be dismissive of the efforts of their local pharmacists; the interview script might then be changed to include an exploration of this phenomenon. The iterative process constitutes a distinctive qualitative tradition, in contrast to the tradition of stable processes and measures in quantitative studies. Iterations should be explicit and justified with reference to the research question and sampling techniques so that the reader understands how data collection shaped the resulting insights.

Qualitative studies should include a clear description of a systematic form of data analysis. Many legitimate analytical approaches exist; regardless of which is used, the study should report what was done, how, and by whom. If an iterative process was used, it should be clearly delineated. If more than one researcher analysed the data (which depends on the methodology used) it should be clear how differences between analyses were negotiated. Many studies make reference to a technique called member checking, wherein the researcher shows all or part of the study’s findings to participants to determine if they are in accord with their experiences. 2 Studies may also describe an audit trail, which might include researchers’ analysis notes, minutes of researchers’ meetings, and other materials that could be used to follow the research process.

The contextual nature of qualitative research means that careful thought must be given to the potential transferability of its results to other sociocultural settings. Though the study should discuss the extent of the findings’ resonance with the published literature, 3 much of the onus of assessing transferability is left to readers, who must decide if the setting of the study is sufficiently similar for its results to be transferable to their own context. In doing so, the reader looks for resonance—the extent that research findings have meaning for the reader.

Transferability may be helped by the study’s discussion of how its results advance theoretical understandings that are relevant to multiple situations. For example, a study of patients’ preferences in palliative care may contribute to theories of ethics and humanity in medicine, thus suggesting relevance to other clinical situations such as the informed consent exchange before treatment. We have explained elsewhere in this series the importance of theory in qualitative research, and there are many who believe that a key indicator of quality in qualitative research is its contribution to advancing theoretical understanding as well as useful knowledge. This debate continues in the literature, 4 but from a pragmatic perspective most qualitative studies in health professions journals emphasise results that relate to practice; theoretical discussions tend to be published elsewhere.

Reflexivity is particularly important within the qualitative paradigm. Reflexivity refers to recognition of the influence a researcher brings to the research process. It highlights potential power relationships between the researcher and research participants that might shape the data being collected, particularly when the researcher is a healthcare professional or educator and the participant is a patient, client, or student. 5 It also acknowledges how a researcher’s gender, ethnic background, profession, and social status influence the choices made within the study, such as the research question itself and the methods of data collection. 6 7

Research articles written in the qualitative paradigm should show evidence both of reflexive practice and of consideration of other relevant ethical issues. Ethics in qualitative research should extend beyond prescriptive guidelines and research ethics boards into a thorough exploration of the ethical consequences of collecting personal experiences and opening those experiences to public scrutiny (a detailed discussion of this problem within a research report may, however, be limited by the practicalities of word count limitations). 8 Issues of confidentiality and anonymity can become quite complex when data constitute personal reports of experience or perception; the need to minimise harm may involve not only protection from external scrutiny but also mechanisms to mitigate potential distress to participants from sharing their personal stories.

In conclusion: is what the researchers did clear?

The qualitative paradigm includes a wide range of theoretical and methodological options, and qualitative studies must include clear descriptions of how they were conducted, including the selection of the study sample, the data collection methods, and the analysis process. The list of key questions for beginning readers to ask when reading qualitative research articles (see box 1) is intended not as a finite checklist, but rather as a beginner’s guide to a complex topic. Critical appraisal of particular qualitative articles may differ according to the theories and methodologies used, and achieving a nuanced understanding in this area is fairly complex.

Further reading

Crabtree F, Miller WL, eds. Doing qualitative research. 2nd ed. Thousand Oaks, CA: Sage, 1999.

Denzin NK, Lincoln YS, eds. Handbook of qualitative research. 2nd ed. Thousand Oaks, CA: Sage, 2000.

Finlay L, Ballinger C, eds. Qualitative research for allied health professionals: challenging choices. Chichester: Wiley, 2006.

Flick U. An introduction to qualitative research. 2nd ed. London: Sage, 2002.

Green J, Thorogood N. Qualitative methods for health research. London: Sage, 2004.

Lingard L, Kennedy TJ. Qualitative research in medical education. Edinburgh: Association for the Study of Medical Education, 2007.

Mauthner M, Birch M, Jessop J, Miller T, eds. Ethics in qualitative research. Thousand Oaks, CA: Sage, 2002.

Seale C. The quality of qualitative research. London: Sage, 1999.

Silverman D. Doing qualitative research. Thousand Oaks, CA: Sage, 2000.

Journal articles

Greenhalgh T. How to read a paper: papers that go beyond numbers. BMJ 1997;315:740-3.

Mays N, Pope C. Qualitative research: Rigour and qualitative research. BMJ 1995;311:109-12.

Mays N, Pope C. Qualitative research in health care: assessing quality in qualitative research. BMJ 2000;320:50-2.

Popay J, Rogers A, Williams G. Rationale and standards for the systematic review of qualitative literature in health services research. Qual Health Res 1998;8:341-51.

Internet resources

National Health Service Public Health Resource Unit. Critical appraisal skills programme: qualitative research appraisal tool . 2006. www.phru.nhs.uk/Doc_Links/Qualitative%20Appraisal%20Tool.pdf

Cite this as: BMJ 2008;337:a1035


This is the last in a series of six articles that aim to help readers to critically appraise the increasing number of qualitative research articles in clinical journals. The series editors are Ayelet Kuper and Scott Reeves.

For a definition of general terms relating to qualitative research, see the first article in this series.

Contributors: AK wrote the first draft of the article and collated comments for subsequent iterations. LL and WL made substantial contributions to the structure and content, provided examples, and gave feedback on successive drafts. AK is the guarantor.

Funding: None.

Competing interests: None declared.

Provenance and peer review: Commissioned; externally peer reviewed.

  1. Holden JD. Hawthorne effects and research into professional practice. J Evaluation Clin Pract 2001;7:65-70.
  2. Hammersley M, Atkinson P. Ethnography: principles in practice. 2nd ed. London: Routledge, 1995.
  3. Silverman D. Doing qualitative research. Thousand Oaks, CA: Sage, 2000.
  4. Mays N, Pope C. Qualitative research in health care: assessing quality in qualitative research. BMJ 2000;320:50-2.
  5. Lingard L, Kennedy TJ. Qualitative research in medical education. Edinburgh: Association for the Study of Medical Education, 2007.
  6. Seale C. The quality of qualitative research. London: Sage, 1999.
  7. Wallerstein N. Power between evaluator and community: research relationships within New Mexico’s healthier communities. Soc Sci Med 1999;49:39-54.
  8. Mauthner M, Birch M, Jessop J, Miller T, eds. Ethics in qualitative research. Thousand Oaks, CA: Sage, 2002.
  9. Kuzel AJ. Sampling in qualitative inquiry. In: Crabtree F, Miller WL, eds. Doing qualitative research. 2nd ed. Thousand Oaks, CA: Sage, 1999:33-45.


The fundamentals of critically appraising an article

Sneha Chotaliya, Academic Foundation Dentist, London, UK

BDJ Student volume 29, pages 12–13 (2022). Published: 31 January 2022. DOI: https://doi.org/10.1038/s41406-021-0275-6

We are often surrounded by an abundance of research and articles, but their quality and validity can vary massively. Not everything will be of good quality, or even valid. An important part of reading a paper is first assessing it. This is a key skill for all healthcare professionals, as anything we read can impact or influence our practice. It is also important to stay up to date with the latest research and findings.


How to write a successful critical analysis

What do you critically analyse?

In a critical analysis you do not express your own opinion or views on the topic. You need to develop your thesis, position or stance on the topic from the views and research of others. In academic writing you critically analyse other researchers’:

  • concepts, terms
  • viewpoints, arguments, positions
  • methodologies, approaches
  • research results and conclusions

This means weighing up the strength of the arguments or research support on the topic, and deciding which has the stronger weight of evidence or support.

Therefore, your thesis argues, with evidence, why a particular theory, concept, viewpoint, methodology, or research result(s) is/are stronger, more sound, or more advantageous than others.

What does ‘analysis’ mean?

A critical analysis means analysing or breaking down the parts of the literature and grouping these into themes, patterns or trends.

In an analysis you need to:

1. Identify and separate out the parts of the topic by grouping the various key theories, main concepts, the main arguments or ideas, and the key research results and conclusions on the topic into themes, patterns or trends of agreement, dispute and omission.

2. Discuss each of these parts by explaining:

i. the areas of agreement/consensus, or similarity

ii. the issues or controversies: in dispute or debate, areas of difference

iii. the omissions, gaps, or areas that are under-researched

3. Discuss the relationship between these parts

4. Examine how each contributes to the whole topic

5. Make conclusions about their significance or importance in the topic

What does ‘critical’ mean?

A critical analysis does not mean writing angry, rude or disrespectful comments, or expressing your views in judgmental terms of black and white, good and bad, or right and wrong.

To be critical, or to critique, means to evaluate. Therefore, to write critically in an academic analysis means to:

  • judge the quality, significance or worth of the theories, concepts, viewpoints, methodologies, and research results
  • evaluate in a fair and balanced manner
  • avoid extreme or emotional language

In doing so, identify and weigh up both:

  • strengths, advantages, benefits, gains, or improvements
  • disadvantages, weaknesses, shortcomings, limitations, or drawbacks

How to critically analyse a theory, model or framework

The evaluative words most often used of a theory, model or framework are ‘sound’ and ‘strong’.

The criteria for judging the strengths and weaknesses of a theory, summarized in the table “Evaluating a Theory, Model or Framework”, include whether the theory is:

  • comprehensive
  • empirically supported
  • parsimonious

Critical analysis examples of theories

The following sentences are examples of the phrases used to explain strengths and weaknesses.

Smith’s (2005) theory appears up to date, practical and applicable across many divergent settings.

Brown’s (2010) theory, although parsimonious and logical, lacks a sufficient body of evidence to support its propositions and predictions.

Little scientific evidence has been presented to support the premises of this theory.

One of the limitations with this theory is that it does not explain why…

A significant strength of this model is that it takes into account …

The propositions of this model appear unambiguous and logical.

A key problem with this framework is the conceptual inconsistency between ….

How to critically analyse a concept

The criteria for judging the strengths and weaknesses of a concept, summarized in the table “Evaluating Concepts”, include whether:

  • its key variables are identified
  • it is clear and well-defined

Critical analysis examples of concepts

Many researchers have used the concept of control in different ways.

There is little consensus about what constitutes automaticity.

Putting forth a very general definition of motivation means that it is possible that any behaviour could be included.

The concept of global education lacks clarity, is imprecisely defined and is overly complex.

Some have questioned the usefulness of resilience as a concept because it has been used so often and in so many contexts.

Research suggests that the concept of preoperative fasting is an outdated clinical approach.

How to critically analyse arguments, viewpoints or ideas

The criteria for judging the strengths and weaknesses of an argument, viewpoint or idea, summarized in the table “Evaluating Arguments, Views or Ideas”, include whether:

  • reasons support the argument
  • the argument is substantiated by evidence
  • the evidence for the argument is relevant
  • the evidence for the argument is unbiased, sufficient and important
  • the evidence is reputable

Critical analysis examples of arguments, viewpoints or ideas

The validity of this argument is questionable as there is insufficient evidence to support it.

Many writers have challenged Jones’ claim on the grounds that …….

This argument fails to draw on the evidence of others in the field.

This explanation is incomplete because it does not explain why…

The key problem with this explanation is that ……


The existing accounts fail to resolve the contradiction between …

However, there is an inconsistency with this argument. The inconsistency lies in…

Although this argument has been proposed by some, it lacks justification.

However, the body of evidence showing that… contradicts this argument.

How to critically analyse a methodology

The criteria for judging the strengths and weaknesses of a methodology are summarized in the table “Evaluating a Methodology”.

An evaluation of a methodology usually involves a critical analysis of its main sections: design; sampling (participants); measurement tools and materials; and procedure. The criteria include whether the:

  • design tests the hypotheses or research questions
  • method is valid and reliable
  • potential bias, measurement error, and confounding variables are addressed
  • method allows results to be generalized
  • sampling of the cohort and phenomena is representative, with a sufficient response rate
  • measurement tools are valid and reliable
  • procedure is valid and reliable
  • method is clear and detailed enough to allow replication
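
As a worked illustration of one of these criteria, the reliability of a multi-item measurement tool is commonly quantified with Cronbach's alpha. The Python sketch below computes it from simulated questionnaire responses; the data and the 0.7 threshold convention are illustrative assumptions, not part of the criteria above.

```python
# A minimal sketch: computing Cronbach's alpha, a common index of the
# internal-consistency reliability of a multi-item measurement scale.
# Data are simulated responses (rows = participants, columns = items).
import numpy as np

rng = np.random.default_rng(7)
latent = rng.normal(size=(100, 1))                      # hypothetical trait
items = latent + rng.normal(scale=0.8, size=(100, 5))   # 5 correlated items

k = items.shape[1]
item_variances = items.var(axis=0, ddof=1).sum()
total_variance = items.sum(axis=1).var(ddof=1)
alpha = (k / (k - 1)) * (1 - item_variances / total_variance)

# Values above roughly 0.7 are conventionally taken as acceptable reliability.
print(f"Cronbach's alpha: {alpha:.2f}")
```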

Critical analysis examples of a methodology

The unrepresentativeness of the sample makes these results misleading.

The presence of unmeasured variables in this study limits the interpretation of the results.

Other, unmeasured confounding variables may be influencing this association.

The interpretation of the data requires caution because the effect of confounding variables was not taken into account.

The insufficient control of several response biases in this study means the results are likely to be unreliable.

Although this correlational study shows association between the variables, it does not establish a causal relationship.

Taken together, the methodological shortcomings of this study suggest the need for serious caution in the meaningful interpretation of the study’s results.

How to critically analyse research results and conclusions

The table below provides the criteria for judging the strengths and weaknesses of research results and conclusions:

  • appropriate choice and use of statistics
  • correct interpretation of results
  • all results explained
  • alternative explanations considered
  • significance of all results discussed
  • consistency of results with previous research discussed
  • results add to existing understanding or knowledge
  • limitations discussed
  • results clearly explained
  • conclusions consistent with results

Evaluating the Results and Conclusions


A simplified approach to critically appraising research evidence

School of Health and Life Sciences, Teesside University, Middlesbrough, England

PMID: 33660465; DOI: 10.7748/nr.2021.e1760

Background: Evidence-based practice is embedded in all aspects of nursing and care. Understanding research evidence and being able to identify the strengths, weaknesses and limitations of published primary research is an essential skill of the evidence-based practitioner. However, it can be daunting and seem overly complex.

Aim: To provide a single framework that researchers can use when reading, understanding and critically assessing published research.

Discussion: To make sense of published research papers, it is helpful to understand some key concepts and how they relate to either quantitative or qualitative designs. Internal and external validity, reliability and trustworthiness are discussed. An illustration of how to apply these concepts in a practical way using a standardised framework to systematically assess a paper is provided.

Conclusion: The ability to understand and evaluate research builds strong evidence-based practitioners, who are essential to nursing practice.

Implications for practice: This framework should help readers to identify the strengths, potential weaknesses and limitations of a paper to judge its quality and potential usefulness.

Keywords: literature review; qualitative research; quantitative research; research; systematic review.


How to Write a Literature Review

Critically analyze and evaluate


Ask yourself questions like these about each book or article you include:

  • What is the research question?
  • What is the primary methodology used?
  • How was the data gathered?
  • How is the data presented?
  • What are the main conclusions?
  • Are these conclusions reasonable?
  • What theories are used to support the researcher's conclusions?

Take notes on the articles as you read them and identify any themes or concepts that may apply to your research question.

This sample template (below) may also be useful for critically reading and organizing your articles. Or you can use this online form and email yourself a copy.

  • Sample Template for Critical Analysis of the Literature

Opening an article in PDF format in Acrobat Reader will allow you to use "sticky notes" and "highlighting" to make notes on the article without printing it out. Make sure to save the edited file so you don't lose your notes!

Some citation managers, such as Mendeley, also have highlighting and annotation features. [Screen capture: a PDF in Mendeley Desktop showing note, highlight and colour tools; tips include adding notes and highlighting, and using different colours for purposes such as quotations.]


Critically Evaluating Research


Some research reports or assessments will require you to critically evaluate a journal article or piece of research. Below is a guide, with examples, on how to critically evaluate research and how to communicate your ideas in writing.

To develop the skill of critical evaluation, read research articles in psychology with an open mind and be an active reader: ask questions as you go and see whether the answers are provided. Initially, skim the article to gain an overview of the problem, the design, the methods, and the conclusions. Then read for detail, considering the questions below for each section of a journal article.

Title

  • Did the title describe the study?
  • Did the key words of the title serve as key elements of the article?
  • Was the title concise, i.e., free of distracting or extraneous phrases?

Abstract

  • Was the abstract concise and to the point?
  • Did the abstract summarise the study’s purpose/research problem, the independent and dependent variables under study, the methods, the main findings, and the conclusions?
  • Did the abstract provide you with sufficient information to determine what the study is about and whether you would be interested in reading the entire article?

Introduction and Literature Review

  • Was the research problem clearly identified?
  • Is the problem significant enough to warrant the study that was conducted?
  • Did the authors present an appropriate theoretical rationale for the study?
  • Is the literature review informative and comprehensive, or are there gaps?
  • Are the variables adequately explained and operationalised?
  • Are hypotheses and research questions clearly stated? Are they directional? Do the authors’ hypotheses and/or research questions seem logical in light of the conceptual framework and research problem?
  • Overall, does the literature review lead logically into the Method section?

Method

  • Is the sample clearly described in terms of size, relevant characteristics (gender, age, SES, etc.), selection and assignment procedures, and whether any inducements were used to solicit subjects (payment, subject credit, free therapy, etc.)?
  • What population do the subjects represent (external validity)?
  • Are there sufficient subjects to produce adequate power (statistical validity)?
  • Have the variables and measurement techniques been clearly operationalised?
  • Do the measures/instruments seem appropriate as measures of the variables under study (construct validity)?
  • Have the authors included sufficient information about the psychometric properties (e.g., reliability and validity) of the instruments?
  • Are the materials used in conducting the study or in collecting data clearly described?
  • Are the study’s scientific procedures thoroughly described in chronological order?
  • Is the design of the study identified (or made evident)?
  • Do the design and procedures seem appropriate in light of the research problem, conceptual framework, and research questions/hypotheses?
  • Are there other factors that might explain the differences between groups (internal validity)?
  • Were subjects randomly assigned to groups so that there was no systematic bias in favour of one group? Was there a differential drop-out rate between groups, so that bias was introduced (internal validity and attrition)?
  • Were all the necessary control groups used? Were participants in each group treated identically except for the administration of the independent variable?
  • Were steps taken to prevent subject bias and/or experimenter bias, e.g., blind or double-blind procedures?
  • Were steps taken to control for other possible confounds, such as regression to the mean, history effects, and order effects (internal validity)?
  • Were ethical considerations adhered to, e.g., debriefing, anonymity, informed consent, voluntary participation?
  • Overall, does the Method section provide sufficient information to replicate the study?

Results

  • Are the findings complete, clearly presented, comprehensible, and well organised?
  • Are data coding and analysis appropriate in light of the study’s design and hypotheses? Are the statistics reported correctly and fully, e.g., are degrees of freedom and p values given?
  • Have the assumptions of the statistical analyses been met, e.g., does one group have very different variance to the others?
  • Are salient results connected directly to hypotheses? Are any superfluous results presented that are not relevant to the hypotheses or research question?
  • Are tables and figures clearly labelled, well organised, and necessary (non-duplicative of the text)?
  • If a significant result is obtained, consider the effect size: is the finding meaningful? If a non-significant result is found, could low power be an issue? Were there sufficient levels of the IV? (A minimal sketch of an effect-size and power check follows this list.)
  • If necessary, have appropriate post hoc analyses been performed? Were any transformations performed and, if so, for valid reasons? Were data collapsed over any IVs and, if so, for valid reasons? If any data were eliminated, were valid reasons given?
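Where the reviewed article reports a group comparison, the effect-size and power questions above can be checked concretely. The following is a minimal sketch, assuming Python with NumPy and statsmodels and entirely made-up group data; it illustrates the idea rather than any particular study's analysis.

```python
import numpy as np
from statsmodels.stats.power import TTestIndPower

rng = np.random.default_rng(0)
treatment = rng.normal(54.0, 10.0, 25)  # hypothetical scores, treatment group
control = rng.normal(49.0, 10.0, 25)    # hypothetical scores, control group

# Cohen's d using the pooled standard deviation.
n1, n2 = len(treatment), len(control)
pooled_sd = np.sqrt(((n1 - 1) * treatment.var(ddof=1) +
                     (n2 - 1) * control.var(ddof=1)) / (n1 + n2 - 2))
d = (treatment.mean() - control.mean()) / pooled_sd

analysis = TTestIndPower()
# Power achieved with the sample sizes actually used.
achieved = analysis.solve_power(effect_size=d, nobs1=n1, ratio=n2 / n1, alpha=0.05)
# Per-group sample size a future study would need for 80% power.
needed = analysis.solve_power(effect_size=d, power=0.8, alpha=0.05)

print(f"Cohen's d = {d:.2f}, achieved power = {achieved:.2f}")
print(f"n per group for 80% power = {needed:.0f}")
```

A small observed effect combined with low achieved power is a common reason a genuine difference fails to reach significance, which is exactly the question the checklist item raises.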

Discussion and Conclusion

  • Are findings adequately interpreted and discussed in terms of the stated research problem, conceptual framework, and hypotheses?
  • Is the interpretation adequate? i.e., does it go too far given what was actually done or not far enough? Are non-significant findings interpreted inappropriately?
  • Is the discussion biased? Are the limitations of the study delineated?
  • Are implications for future research and/or practical application identified?
  • Are the overall conclusions warranted by the data and any limitations in the study? Are the conclusions restricted to the population under study or are they generalised too widely?
  • Is the reference list sufficiently specific to the topic under investigation and current?
  • Are citations used appropriately in the text?

General Evaluation

  • Is the article objective, well written and organised?
  • Does the information provided allow you to replicate the study in all its details?
  • Was the study worth doing? Does the study provide an answer to a practical or important problem? Does it have theoretical importance? Does it represent a methodological or technical advance? Does it demonstrate a previously undocumented phenomenon? Does it explore the conditions under which a phenomenon occurs?



Critical appraisal of published literature

Goneppanavar Umesh, Department of Anaesthesia, Dharwad Institute of Mental Health and Neuro Sciences, Dharwad, Karnataka, India; John George Karippacheril, Department of Anaesthesiology, Universal Hospital, Abu Dhabi, UAE; Rahul Magazine, Department of Pulmonary Medicine, Kasturba Medical College, Manipal University, Manipal, Karnataka, India

Indian J Anaesth. 2016 Sep; 60(9)

With a large output of medical literature published every year, it is impossible for readers to read every article. Critical appraisal of scientific literature is an important skill to be mastered, not only by academic medical professionals but also by those involved in clinical practice. Before incorporating changes into the management of their patients, a thorough evaluation of the current or published literature is an important step in clinical practice, and it is necessary for assessing published literature for its scientific validity and generalizability to the specific patient community and the reader's work environment. Simple steps provided by the Consolidated Standards of Reporting Trials (CONSORT) statement, the Scottish Intercollegiate Guidelines Network (SIGN) and several other resources may, if implemented, help the reader avoid flawed literature and prevent the incorporation of biased or untrustworthy information into practice.

INTRODUCTION

Critical appraisal

‘The process of carefully and systematically examining research to judge its trustworthiness, and its value and relevance in a particular context’ – Burls A[ 1 ]

The objective of medical literature is to provide unbiased, accurate medical information, backed by robust scientific evidence, that could aid and enhance patient care. With the ever-increasing load of scientific literature (more than 12,000 new articles added every week to the MEDLINE database),[ 2 ] keeping abreast of the current literature can be arduous. Critical appraisal of literature may help distinguish between useful and flawed studies. Although substantial resources of peer-reviewed literature are available, flawed studies may abound in unreliable sources. Flawed studies, if used to guide clinical decisions, may yield no benefit or, at worst, result in significant harm. Readers can, thus, make informed decisions by critically evaluating medical literature.

STEPS TO CRITICALLY EVALUATE AN ARTICLE

Initial evaluation of a published article should be based on certain core questions: what are the key learning points, is it clinically relevant, is the methodology robust, are the results reproducible, and could there be any bias or conflict of interest [ Table 1 ]. If there are serious doubts regarding any of these, the reader could skip the article at this stage.

[Table 1: Core questions for the initial evaluation of a scientific article]

Introduction, methods, results and discussion pattern of scientific literature

Introduction

Evaluate whether the need for the study (e.g., a dearth of studies on the topic in the scientific literature) and its purpose (attempting to answer an important unanswered question of clinical relevance) are properly explained with a scientific rationale. If the research objective and hypothesis were not clearly defined, or the findings of the study differ from its objectives (chance findings), the study outcomes become questionable.

Methods

A good working scientific hypothesis backed by a strong methodology is the stepping stone for carrying out meaningful research. The groups to be involved and the study end points should be determined before starting the study. Strong methodology depends on several aspects that must be properly addressed and evaluated [ Table 2 ]. The methodology for statistical analysis, including tests for the distribution pattern of study data, the level of significance and the sample size calculation, should be clearly defined in the methods section. Data that violate the assumption of a normal distribution must be analysed with non-parametric statistical tests. An inadequate sample size can lead to false-negative results, or beta error (aide-memoire: beta error is blindness). Setting a higher level of significance, especially when performing multiple comparisons, can lead to false-positive results, or alpha error (aide-memoire: alpha error is hallucination). A confidence interval, when used in the study methodology, provides information on the direction and strength of the effect, in contrast to P values alone, from which the magnitude, direction or comparison of relative risk between groups cannot be inferred. A P value simply accepts or rejects the null hypothesis, so it must be reported in conjunction with confidence intervals.[ 6 ] An important guideline for evaluating and reporting randomised controlled trials, mandatory for publication in several international medical journals, is the Consolidated Standards of Reporting Trials (CONSORT) statement.[ 7 ] Other scientific societies, such as the Scottish Intercollegiate Guidelines Network, have devised checklists that may aid the critical evaluation of articles depending on the type of study methodology.[ 8 , 9 ]
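To make these checks concrete, here is a minimal sketch, assuming Python with NumPy and SciPy and using invented data: it tests the normality assumption, falls back to a non-parametric test when the assumption is violated, and reports a confidence interval for the effect alongside the P value, as the paragraph above recommends.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
group_a = rng.normal(52.0, 8.0, 30)  # hypothetical outcome, group A
group_b = rng.normal(47.0, 8.0, 30)  # hypothetical outcome, group B

# 1. Check the normality assumption in each group (Shapiro-Wilk).
normal = all(stats.shapiro(g).pvalue > 0.05 for g in (group_a, group_b))

if normal:
    result = stats.ttest_ind(group_a, group_b, equal_var=False)  # Welch's t-test
    test_name = "Welch's t-test"
else:
    result = stats.mannwhitneyu(group_a, group_b)  # non-parametric alternative
    test_name = "Mann-Whitney U"

# 2. Effect estimate with a 95% CI (Welch-Satterthwaite degrees of freedom).
va = group_a.var(ddof=1) / len(group_a)
vb = group_b.var(ddof=1) / len(group_b)
diff, se = group_a.mean() - group_b.mean(), np.sqrt(va + vb)
df = (va + vb) ** 2 / (va ** 2 / (len(group_a) - 1) + vb ** 2 / (len(group_b) - 1))
t_crit = stats.t.ppf(0.975, df)

print(f"{test_name}: P = {result.pvalue:.4f}")
print(f"mean difference = {diff:.2f}, 95% CI ({diff - t_crit * se:.2f}, {diff + t_crit * se:.2f})")
```

The confidence interval conveys the direction and plausible size of the effect, which the P value alone cannot.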

[Table 2: Appraisal elements for a robust evaluation of study methodology]

Results

The results section should only report the findings of the study, without attempting to reason them out. The total number of participants, along with the number excluded, dropped out or withdrawn from the study, should be analysed; failure to do so may lead to underestimation or overestimation of results.[ 10 ] A summary flowchart containing enrolment data can be created as per the CONSORT statement.[ 5 ] Actual values, including the mean with standard deviation/error or the median with interquartile range, should be reported. Evaluate for completeness: all the variables in the study methodology should be analysed using appropriate statistical tests. Ensure that the findings stated in the results match those in other areas of the article (abstract, tables and figures). Appropriate tables and graphs should be used to present the results. Assess whether the results of the study can be generalised and are useful to your workplace or patient population.
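As a small illustration of reporting actual values, the following sketch (made-up measurements, assuming Python with NumPy) computes the mean with standard deviation and the median with interquartile range:

```python
import numpy as np

values = np.array([4.2, 5.1, 4.8, 6.3, 5.5, 4.9, 5.0, 7.8])  # hypothetical measurements

mean, sd = values.mean(), values.std(ddof=1)  # sample standard deviation
median = np.median(values)
q1, q3 = np.percentile(values, [25, 75])      # interquartile range bounds

print(f"mean ± SD: {mean:.2f} ± {sd:.2f}")
print(f"median (IQR): {median:.2f} ({q1:.2f}-{q3:.2f})")
```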

Although significant positive results are more likely to be accepted for publication (publication bias), remember that high rates of falsely inflated results have been demonstrated in studies with flawed methodology (due to improper or absent randomisation, allocation concealment or blinding, and/or assessing outcomes through statistical modelling). Further, publication bias towards studies with positive outcomes leads to distortions in the body of scientific knowledge.[ 11 , 12 ]

Discussion and conclusion

The discussion is used to account for, or reason out, the outcomes of the study, including dropouts and any change in methodology; to comment on the external validity of the study; and to discuss its limitations. The authors should report their findings in comparison with those previously published in the literature, noting whether the study adds new information to the current literature, whether it could alter patient management, and whether the findings need larger studies for further evaluation or confirmation. When concluding, the interpretation should be consistent with the actual findings. Evaluate whether the questions in the study hypothesis were adequately addressed and whether the conclusions were justified by the actual data. Authors should also state the limitations of their study and offer constructive suggestions for future research.

Readers may find useful guidance on how to constructively read published literature at the following resources:

  • CONSORT 2010 statement for randomised trials – http://www.consort-statement.org
  • SIGN checklists – http://www.sign.ac.uk/methodology/checklists.html
  • BMJ ‘How to read a paper’ series – http://www.bmj.com/about-bmj/resources-readers/publications/how-read-paper
  • EQUATOR Network for health research – http://www.equator-network.org
  • STROBE statement for observational studies – http://www.strobe-statement.org/index.php?id=strobe-home
  • CARE statement for case reports – http://www.care-statement.org
  • PRISMA statement for meta-analyses and systematic reviews – http://www.prisma-statement.org
  • AGREE – http://www.agreetrust.org
  • CASP – http://www.casp-uk.net
  • http://www.delfini.org

Critical appraisal of scientific literature is an important skill to be mastered not just by academic medical professionals but also by those involved in clinical practice. Before incorporating changes in the management of their patients, a thorough evaluation of the current or published literature is a necessary step in practicing evidence-based medicine.

Financial support and sponsorship

Conflicts of interest

There are no conflicts of interest.


Succeeding in postgraduate study


1 Important points to consider when critically evaluating published research papers

Simple review articles (also referred to as ‘narrative’ or ‘selective’ reviews), systematic reviews and meta-analyses provide rapid overviews and ‘snapshots’ of progress made within a field, summarising a given topic or research area. They can serve as useful guides, or as current and comprehensive ‘sources’ of information, and can act as a point of reference to relevant primary research studies within a given scientific area. Narrative or systematic reviews are often used as a first step towards a more detailed investigation of a topic or a specific enquiry (a hypothesis or research question), or to establish critical awareness of a rapidly-moving field (you will be required to demonstrate this as part of an assignment, an essay or a dissertation at postgraduate level).

The majority of primary ‘empirical’ research papers essentially follow the same structure (abbreviated here as IMRAD). There is a section on Introduction, followed by the Methods, then the Results, which includes figures and tables showing data described in the paper, and a Discussion. The paper typically ends with a Conclusion, and References and Acknowledgements sections.

The Title of the paper provides a concise first impression. The Abstract follows the basic structure of the extended article. It provides an ‘accessible’ and concise summary of the aims, methods, results and conclusions. The Introduction provides useful background information and context, and typically outlines the aims and objectives of the study. The Abstract can serve as a useful summary of the paper, presenting the purpose, scope and major findings. However, simply reading the abstract alone is not a substitute for critically reading the whole article. To really get a good understanding and to be able to critically evaluate a research study, it is necessary to read on.

While most research papers follow the above format, variations do exist. For example, the results and discussion sections may be combined. In some journals the materials and methods may follow the discussion, and in two of the most widely read journals, Science and Nature, the format does vary from the above due to restrictions on the length of articles. In addition, there may be supporting documents that accompany a paper, including supplementary materials such as supporting data, tables, figures, videos and so on. There may also be commentaries or editorials associated with a topical research paper, which provide an overview or critique of the study being presented.

Box 1 Key questions to ask when appraising a research paper

  • Is the study’s research question relevant?
  • Does the study add anything new to current knowledge and understanding?
  • Does the study test a stated hypothesis?
  • Is the design of the study appropriate to the research question?
  • Do the study methods address key potential sources of bias?
  • Were suitable ‘controls’ included in the study?
  • Were the statistical analyses appropriate and applied correctly?
  • Is there a clear statement of findings?
  • Does the data support the authors’ conclusions?
  • Are there any conflicts of interest or ethical concerns?

There are various strategies used in reading a scientific research paper, and one of these is to start with the title and the abstract, then look at the figures and tables, and move on to the introduction, before turning to the results and discussion, and finally, interrogating the methods.

Another strategy (outlined below) is to begin with the abstract and then the discussion, take a look at the methods, and then the results section (including any relevant tables and figures), before moving on to look more closely at the discussion and, finally, the conclusion. You should choose a strategy that works best for you. However, asking the ‘right’ questions is a central feature of critical appraisal, as with any enquiry, so where should you begin? Here are some critical questions to consider when evaluating a research paper.

Look at the Abstract and then the Discussion: Are these accessible and of general relevance, or are they detailed, with far-reaching conclusions? Is it clear why the study was undertaken? Why are the conclusions important? Does the study add anything new to current knowledge and understanding? The reasons why a particular study design or statistical method were chosen should also be clear from reading a research paper. What is the research question being asked? Does the study test a stated hypothesis? Is the design of the study appropriate to the research question? Have the authors considered the limitations of their study and have they discussed these in context?

Take a look at the Methods: Were there any practical difficulties that could have compromised the study or its implementation? Were these considered in the protocol? Were there any missing values and, if so, was the number of missing values too large to permit meaningful analysis? Was the number of samples (cases or participants) too small to establish meaningful significance? Do the study methods address key potential sources of bias? Were suitable ‘controls’ included in the study? If controls are missing or not appropriate to the study design, we cannot be confident that the results really show what is happening in an experiment. Were the statistical analyses appropriate and applied correctly? Do the authors point out the limitations of the methods or tests used? Were the methods referenced and described in sufficient detail for others to repeat or extend the study?

Take a look at the Results section and relevant tables and figures: Is there a clear statement of findings? Were the results expected? Do they make sense? What data supports them? Do the tables and figures clearly describe the data (highlighting trends, etc.)? Try to distinguish between what the data show and what the authors say they show (i.e., their interpretation).

Moving on to look in greater depth at the Discussion and Conclusion: Are the results discussed in relation to similar (previous) studies? Do the authors indulge in excessive speculation? Are the limitations of the study adequately addressed? Were the objectives of the study met and the hypothesis supported or refuted (and is a clear explanation provided)? Does the data support the authors’ conclusions? Maybe there is only one experiment to support a point; more often, several different experiments or approaches combine to support a particular conclusion. A rule of thumb here is that if multiple approaches and multiple lines of evidence from different directions are presented, and all point to the same conclusion, then the conclusions are more credible. But do question all assumptions. Identify any implicit or hidden assumptions that the authors may have used when interpreting their data. Be wary of data that is mixed up with interpretation and speculation! Remember: just because it is published does not mean that it is right.

Other points you should consider when evaluating a research paper: Are there any financial, ethical or other conflicts of interest associated with the study, its authors and sponsors? Are there ethical concerns with the study itself? Looking at the references, consider whether the authors have preferentially (i.e. needlessly) cited their own previous publications, and whether the list of references is recent (ensuring that the analysis is up to date). Finally, from a practical perspective, you should move beyond the text of a research paper: talk to your peers about it, and consult available commentaries, online links to references and other external sources to help clarify any aspects you don’t understand.

The above can be taken as a general guide to help you begin to critically evaluate a scientific research paper, but only in the broadest sense. Do bear in mind that the way that research evidence is critiqued will also differ slightly according to the type of study being appraised, whether observational or experimental, and each study will have additional aspects that would need to be evaluated separately. For criteria recommended for the evaluation of qualitative research papers, see the article by Mildred Blaxter (1996), available online. Details are in the References.

Activity 1 Critical appraisal of a scientific research paper

A critical appraisal checklist, which you can download via the link below, can act as a useful tool to help you to interrogate research papers. The checklist is divided into four sections, broadly covering:

  • some general aspects
  • research design and methodology
  • the results
  • discussion, conclusion and references.

Science perspective – critical appraisal checklist

  • Identify and obtain a research article on a topic of your own choosing, using a search engine such as Google Scholar or PubMed (a minimal sketch of a programmatic PubMed search follows this list).
  • The selection criteria for your target paper are as follows: the article must be an open-access primary research paper (not a review) containing empirical data, published in the last 2–3 years, and preferably no more than 5–6 pages in length.
  • Critically evaluate the research paper using the checklist provided, making notes on the key points and your overall impression.
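The activity assumes a manual search, but the same step can be scripted. Below is a hedged sketch using NCBI's public E-utilities endpoints for PubMed (Python with the requests library is assumed, and the query term, filter and date range are purely illustrative, not part of the course activity):

```python
import requests

BASE = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"

# esearch: retrieve PubMed IDs matching a query, limited to recent years.
search = requests.get(f"{BASE}/esearch.fcgi", params={
    "db": "pubmed",
    "term": "critical appraisal[Title/Abstract] AND free full text[sb]",
    "mindate": "2022", "maxdate": "2024", "datetype": "pdat",
    "retmax": 5, "retmode": "json",
}).json()
ids = search["esearchresult"]["idlist"]

# esummary: fetch basic metadata (title, journal) for each hit.
summary = requests.get(f"{BASE}/esummary.fcgi", params={
    "db": "pubmed", "id": ",".join(ids), "retmode": "json",
}).json()
for pmid in ids:
    record = summary["result"][pmid]
    print(f"PMID {pmid}: {record['title']} ({record['fulljournalname']})")
```

You would still need to confirm that each hit is a primary research paper rather than a review before applying the checklist.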

Critical appraisal checklists are useful tools to help assess the quality of a study. Assessment of various factors, including the importance of the research question, the design and methodology of a study, the validity of the results and their usefulness (application or relevance), the legitimacy of the conclusions, and any potential conflicts of interest, are an important part of the critical appraisal process. Limitations and further improvements can then be considered.

