IJERE

Message from Editor

Publication Ethics and Malpractice Statement

International Journal of Educational Research Review (ISSN: 2458-9322)

International Journal of Educational Research Review publishes scholarly articles that are of general significance to the education research community and that come from a wide range of areas. The International Journal of Educational Research Review aims to provide a forum for scholarly understanding of the field of education. Articles focus on concepts, research, reviews, and practices that emphasize the intersections between education research and other fields, raise new questions, and develop innovative approaches to our understanding of pressing issues.

The full range of topics covered in the International Journal of Educational Research Review is listed in the aims and scope: https://www.ijere.com/page/aims-and-scope

Peer Review Policy:

All articles in this journal will undergo initial editor screening, rigorous double-anonymous peer review, and review by the editorial board.

Double-blind peer review

International Journal of Educational Research Review follows a double-blind reviewing procedure: authors and reviewers remain anonymous to one another throughout peer review. It is the responsibility of the author to anonymize the manuscript and any associated materials.

Read the full peer review policy: https://www.ijere.com/page/peer-review-policy

Ownership and management

The IJERE journal is hosted by DergiPark/TUBITAK and is published with the support of the Sakarya University Faculty of Education, Turkey.

Governing body

Editor-in-Chief: Dr. Serhat Arslan, Gazi University, Turkey

Read the full governing body, ownership and management, and editorial board details: https://www.ijere.com/page/editorial-board

Copyright and licensing

Copyright Statement

Copyright violation is an important ethical issue. Authors should check their manuscripts for possible breaches of copyright law (e.g., where permissions are needed for quotations, artwork, or tables taken from other publications or from other freely available sources on the Internet) and secure the necessary permissions before submission to the International Journal of Educational Research Review.

Read the full copyright, licensing, and open access statement: https://www.ijere.com/page/copyright-and-licensing-open-access-statement


Scimago Journal & Country Rank profile: International Journal of Educational Research, published by Elsevier B.V.

The set of journals has been ranked according to their SJR and divided into four equal groups, or quartiles. Q1 (green) comprises the quarter of the journals with the highest values, Q2 (yellow) the second highest, Q3 (orange) the third highest, and Q4 (red) the lowest values.

The SJR is a size-independent prestige indicator that ranks journals by their 'average prestige per article'. It is based on the idea that 'all citations are not created equal'. SJR is a measure of the scientific influence of journals that accounts for both the number of citations received by a journal and the importance or prestige of the journals from which such citations come. It measures the scientific influence of the average article in a journal; it expresses how central to the global scientific discussion an average article of the journal is.

Evolution of the number of published documents. All types of documents are considered, including citable and non-citable documents.

This indicator counts the number of citations received by documents from a journal and divides them by the total number of documents published in that journal. The chart shows the evolution of the average number of times documents published in a journal in the past two, three, and four years have been cited in the current year. The two-year line is equivalent to the Journal Impact Factor™ (Thomson Reuters) metric.
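Restated compactly in our own notation (not SCImago's), the two-year line for report year $y$ is

$$\mathrm{CitesPerDoc2y}(y) \;=\; \frac{C_y\!\left(D_{y-1} \cup D_{y-2}\right)}{\lvert D_{y-1} \rvert + \lvert D_{y-2} \rvert},$$

where $D_t$ is the set of documents the journal published in year $t$ and $C_y(S)$ is the number of citations received in year $y$ by the documents in $S$.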

Evolution of the total number of citations and journal self-citations received by a journal's published documents during the three previous years. Journal self-citation is defined as the number of citations from a journal's citing articles to articles published by the same journal.

Evolution of the number of total citations per document and external citations per document (i.e., with journal self-citations removed) received by a journal's published documents during the three previous years. External citations are calculated by subtracting the number of self-citations from the total number of citations received by the journal's documents.
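In the same notation (with the three-year document window assumed from the caption), the external-citation series reduces to the self-citation-corrected ratio

$$\mathrm{ExtCitesPerDoc}(y) \;=\; \frac{C^{\mathrm{total}}_y - C^{\mathrm{self}}_y}{\lvert D_{y-3} \cup D_{y-2} \cup D_{y-1} \rvert},$$

where $C^{\mathrm{total}}_y$ and $C^{\mathrm{self}}_y$ are the total and self-citations received in year $y$ by documents published in the three previous years.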

International collaboration accounts for articles that have been produced by researchers from several countries. The chart shows the ratio of a journal's documents signed by researchers from more than one country, i.e., including more than one country in the address list.

Not every article in a journal is considered primary research and therefore "citable". This chart shows the ratio of a journal's articles including substantial research (research articles, conference papers, and reviews) in three-year windows vs. those documents other than research articles, reviews, and conference papers.

Ratio of a journal's items, grouped in three-year windows, that have been cited at least once vs. those not cited during the following year.


Journal of Lifelong Learning

Volumes and issues

Volume 70 February 2024 Feb 2024

  • Issue 1 February 2024

Volume 69 April - December 2023 Apr - Dec 2023

  • Issue 6 December 2023
  • Issue 5 October 2023
  • Issue 4 August 2023
  • Issue 3 June 2023
  • Issue 1-2 April 2023

Volume 68 February - December 2022 Feb - Dec 2022

  • Issue 6 December 2022

The Faure report – 50 years on

  • Issue 4 August 2022
  • Issue 3 June 2022

Special issue on Strengthening the future of adult education and lifelong learning for all: Building bridges between CONFINTEA and the SDGs

  • Issue 1 February 2022

Volume 67 April - December 2021 Apr - Dec 2021

Special Issue on Transnational migration, refugee studies and lifelong learning

  • Issue 5 October 2021
  • Issue 4 August 2021
  • Issue 3 June 2021

Special Issue on Education in the age of COVID-19: Implications for the future

Volume 66 February - December 2020 Feb - Dec 2020

Special Issue on Education in the age of COVID-19: Understanding the consequences

  • Issue 4 August 2020

Special issue on Literacy and numeracy: Global and comparative perspectives

  • Issue 1 February 2020

Volume 65 February - December 2019 Feb - Dec 2019

  • Issue 6 December 2019

Special Issue: Education in prison: A basic right and an essential tool/Éducation en prison : un droit fondamental et un outil essentiel

  • Issue 4 August 2019

Adult literacy, local languages and lifelong learning in rural African contexts

Lifelong learning and the Sustainable Development Goals (SDGs): Probing the implications and the effects

Indigenous knowledges as vital contributions to sustainability

Volume 64 February - December 2018 Feb - Dec 2018

Special Issue on Using a narrative approach to researching literacy and non-formal education in Africa and Asia

  • Issue 5 October 2018

Special Issue On Nordic-Baltic cooperation in the field of adult education 1991–2004

Special Issue on Building sustainable learning cities

Special Issue on Learning potentials and educational challenges of massive open online courses (MOOCs) in lifelong learning

  • Issue 1 February 2018

Volume 63 February - December 2017 Feb - Dec 2017

Special Issue on Non-formal and community learning for sustainable development

  • Issue 5 October 2017

Special Issue on Language learning to support active social inclusion: Issues and challenges for lifelong learning

  • Issue 3 June 2017
  • Issue 2 April 2017

Special Issue on Experiential learning in informal educational settings

Volume 62 February - December 2016 Feb - Dec 2016

  • Issue 6 December 2016

Special Issue on Societal sustainability: The contribution of adult education to sustainable societies

Special Issue on Interreligious and intercultural education for dialogue, peace and social cohesion

  • Issue 3 June 2016
  • Issue 2 April 2016

Special Issue on Rediscovering the Ubuntu Paradigm in Education

Volume 61 February - December 2015 Feb - Dec 2015

Special Issue on Workplace learning, subjectivity and identity

  • Issue 5 October 2015
  • Issue 4 August 2015

Special Issue on Lifelong literacy: Towards a new agenda

  • Issue 2 April 2015
  • Issue 1 February 2015

Volume 60 April - December 2014 Apr - Dec 2014

  • Issue 6 December 2014
  • Issue 5 October 2014

Special Issue: New Times, New Voices

What humanism for the 21st century? Quel humanisme pour le 21e siècle?

Special Issue: Learning Needs and Life Skills for Youth

  • Issue 1 April 2014

Volume 59 June - December 2013 Jun - Dec 2013

  • Issue 6 December 2013
  • Issue 5 October 2013

Special Issue: Learning Cities: Developing Inclusive, Prosperous and Sustainable Urban Communities

Special Issue: The Future of Lifelong Learning

  • Issue 2 July 2013
  • Issue 1 June 2013

Volume 58 February - December 2012 Feb - Dec 2012

  • Issue 6 December 2012
  • Issue 5 October 2012
  • Issue 4 August 2012
  • Issue 3 June 2012
  • Issue 2 April 2012
  • Issue 1 February 2012

Volume 57 August - December 2011 Aug - Dec 2011

Special Issue: Quality Multilingual and Multicultural Education for Lifelong Learning

Special Issue: Bordering, Re-Bordering and New Possibilities in Education and Society

Special Issue: CONFINTEA VI Follow-up: The Challenges of Moving from Rhetoric to Action

Volume 56 February - December 2010 Feb - Dec 2010

  • Issue 5-6 December 2010
  • Issue 4 October 2010

Special Issue: The Midway Point of the UN Decade of Education for Sustainable Development: Where Do We Stand? / Guest Edited by G. de Haan, I. Borman and A. Leicht

  • Issue 1 February 2010

Volume 55 January - November 2009 Jan - Nov 2009

Special Issue: Undoing Gender / Guest Edited by N. P. Stromquist and G. E. Fischman

  • Issue 4 July 2009

Special Issue: Education for Reconciliation and Conflict Resolution

  • Issue 1 January 2009

Volume 54 January - November 2008 Jan - Nov 2008

Literacy Education for All: Challenges and Prospects

Living Together: Education and Intercultural Dialogue

  • Issue 2 March 2008
  • Issue 1 January 2008

Volume 53 January - November 2007 Jan - Nov 2007

Special Issue: Quality Education in Africa: Challenges and Prospects

  • Issue 4 July 2007
  • Issue 3 May 2007
  • Issue 2 March 2007
  • Issue 1 January 2007

Volume 52 March 2006 - January 2007 Mar 2006 - Jan 2007

  • Issue 1-2 January 2007
  • Issue 6 December 2006
  • Issue 5 September 2006

Special Issue: Education and Poverty Reduction

Education and Social Justice

Volume 51 January - November 2005 Jan - Nov 2005

  • Issue 5-6 November 2005
  • Issue 4 July 2005
  • Issue 2-3 May 2005
  • Issue 1 January 2005

Volume 50 January - November 2004 Jan - Nov 2004

  • Issue 5-6 November 2004
  • Issue 3-4 July 2004
  • Issue 2 March 2004
  • Issue 1 January 2004

Volume 49 March - November 2003 Mar - Nov 2003

  • Issue 6 November 2003
  • Issue 5 September 2003
  • Issue 3-4 July 2003
  • Issue 1-2 March 2003

Volume 48 March - November 2002 Mar - Nov 2002

  • Issue 6 November 2002
  • Issue 5 September 2002
  • Issue 3-4 July 2002
  • Issue 1-2 March 2002

Volume 47 March - November 2001 Mar - Nov 2001

  • Issue 6 November 2001
  • Issue 5 September 2001
  • Issue 3-4 July 2001
  • Issue 1-2 March 2001

Volume 46 May - November 2000 May - Nov 2000

  • Issue 6 November 2000
  • Issue 5 September 2000
  • Issue 3-4 July 2000
  • Issue 1-2 May 2000

Volume 45 January - November 1999 Jan - Nov 1999

  • Issue 5-6 November 1999
  • Issue 3-4 May 1999
  • Issue 2 March 1999
  • Issue 1 January 1999

Volume 44 January - September 1998 Jan - Sept 1998

  • Issue 5-6 September 1998
  • Issue 4 July 1998
  • Issue 2-3 March 1998
  • Issue 1 January 1998

Volume 43 January - September 1997 Jan - Sept 1997

  • Issue 5-6 September 1997
  • Issue 4 July 1997
  • Issue 2-3 March 1997
  • Issue 1 January 1997

Volume 42 January - November 1996 Jan - Nov 1996

  • Issue 6 November 1996
  • Issue 5 September 1996
  • Issue 4 July 1996
  • Issue 1-3 January 1996

Volume 41 January - November 1995 Jan - Nov 1995

  • Issue 6 November 1995
  • Issue 5 September 1995
  • Issue 3-4 May 1995
  • Issue 1-2 January 1995

Volume 40 January - November 1994 Jan - Nov 1994

  • Issue 6 November 1994
  • Issue 3-5 May 1994
  • Issue 2 March 1994
  • Issue 1 January 1994

Volume 39 March - November 1993 Mar - Nov 1993

  • Issue 6 November 1993
  • Issue 5 September 1993
  • Issue 4 July 1993
  • Issue 3 May 1993
  • Issue 1-2 March 1993

Volume 38 January - November 1992 Jan - Nov 1992

  • Issue 6 November 1992
  • Issue 5 September 1992
  • Issue 4 July 1992
  • Issue 3 May 1992
  • Issue 2 March 1992
  • Issue 1 January 1992

Volume 37 March - December 1991 Mar - Dec 1991

  • Issue 4 December 1991
  • Issue 3 September 1991
  • Issue 2 June 1991
  • Issue 1 March 1991

Volume 36 March - December 1990 Mar - Dec 1990

  • Issue 4 December 1990
  • Issue 3 September 1990
  • Issue 2 June 1990
  • Issue 1 March 1990

Volume 35 March - December 1989 Mar - Dec 1989

  • Issue 4 December 1989
  • Issue 3 September 1989
  • Issue 2 June 1989
  • Issue 1 March 1989

Volume 34 March - December 1988 Mar - Dec 1988

  • Issue 4 December 1988
  • Issue 3 September 1988
  • Issue 2 June 1988
  • Issue 1 March 1988

Volume 33 March - December 1987 Mar - Dec 1987

  • Issue 4 December 1987
  • Issue 3 September 1987
  • Issue 2 June 1987
  • Issue 1 March 1987

Volume 32 March - December 1986 Mar - Dec 1986

  • Issue 4 December 1986
  • Issue 3 September 1986
  • Issue 2 June 1986
  • Issue 1 March 1986

Volume 31 December 1985 Dec 1985

  • Issue 1 December 1985

Volume 30 March 1984 - September 1985 Mar 1984 - Sept 1985

  • Issue 3 September 1985
  • Issue 4 December 1984
  • Issue 2 June 1984
  • Issue 1 March 1984

Volume 29 March - December 1983 Mar - Dec 1983

  • Issue 4 December 1983
  • Issue 3 September 1983
  • Issue 2 June 1983
  • Issue 1 March 1983

Volume 28 March - December 1982 Mar - Dec 1982

  • Issue 4 December 1982
  • Issue 3 September 1982
  • Issue 2 June 1982
  • Issue 1 March 1982

Volume 27 March - December 1981 Mar - Dec 1981

  • Issue 4 December 1981
  • Issue 3 September 1981
  • Issue 2 June 1981
  • Issue 1 March 1981

Volume 26 March - December 1980 Mar - Dec 1980

  • Issue 4 December 1980
  • Issue 3 September 1980
  • Issue 2 June 1980
  • Issue 1 March 1980

Volume 25 March - December 1979 Mar - Dec 1979

  • Issue 4 December 1979
  • Issue 2-3 June 1979
  • Issue 1 March 1979

Volume 24 March - December 1978 Mar - Dec 1978

  • Issue 4 December 1978
  • Issue 3 September 1978
  • Issue 2 June 1978
  • Issue 1 March 1978

Volume 23 March - December 1977 Mar - Dec 1977

  • Issue 4 December 1977
  • Issue 3 September 1977
  • Issue 2 June 1977
  • Issue 1 March 1977

Volume 22 March - December 1976 Mar - Dec 1976

  • Issue 4 December 1976
  • Issue 3 September 1976
  • Issue 2 June 1976
  • Issue 1 March 1976

Volume 21 March - December 1975 Mar - Dec 1975

  • Issue 4 December 1975
  • Issue 3 September 1975
  • Issue 2 June 1975
  • Issue 1 March 1975

Volume 20 March - December 1974 Mar - Dec 1974

  • Issue 4 December 1974
  • Issue 3 September 1974
  • Issue 2 June 1974
  • Issue 1 March 1974

Volume 19 March - December 1973 Mar - Dec 1973

  • Issue 4 December 1973
  • Issue 3 September 1973
  • Issue 2 June 1973
  • Issue 1 March 1973

Volume 18 December 1972 Dec 1972

  • Issue 1 December 1972

Volume 17 March - December 1971 Mar - Dec 1971

  • Issue 4 December 1971
  • Issue 3 September 1971
  • Issue 2 June 1971
  • Issue 1 March 1971

Volume 16 March - December 1970 Mar - Dec 1970

  • Issue 4 December 1970
  • Issue 3 September 1970
  • Issue 2 June 1970
  • Issue 1 March 1970

Volume 15 March - December 1969 Mar - Dec 1969

  • Issue 4 December 1969
  • Issue 3 September 1969
  • Issue 2 June 1969
  • Issue 1 March 1969

Volume 14 March - December 1968 Mar - Dec 1968

  • Issue 4 December 1968
  • Issue 3 September 1968
  • Issue 2 June 1968
  • Issue 1 March 1968

Volume 13 March - December 1967 Mar - Dec 1967

  • Issue 4 December 1967
  • Issue 3 September 1967
  • Issue 2 June 1967
  • Issue 1 March 1967

Volume 12 March - December 1966 Mar - Dec 1966

  • Issue 4 December 1966
  • Issue 3 September 1966
  • Issue 2 June 1966
  • Issue 1 March 1966

Volume 11 March - December 1965 Mar - Dec 1965

  • Issue 4 December 1965
  • Issue 3 September 1965
  • Issue 2 June 1965
  • Issue 1 March 1965

Volume 10 March - December 1964 Mar - Dec 1964

  • Issue 4 December 1964
  • Issue 3 September 1964
  • Issue 2 June 1964
  • Issue 1 March 1964

Volume 9 March - December 1963 Mar - Dec 1963

  • Issue 4 December 1963
  • Issue 3 September 1963
  • Issue 2 June 1963
  • Issue 1 March 1963

Volume 8 March 1962 - September 1963 Mar 1962 - Sept 1963

  • Issue 3-4 September 1963
  • Issue 2 June 1962
  • Issue 1 March 1962

Volume 7 March 1961 - December 1962 Mar 1961 - Dec 1962

  • Issue 4 December 1962
  • Issue 3 September 1961
  • Issue 2 June 1961
  • Issue 1 March 1961

Volume 6 December 1960 Dec 1960

  • Issue 1 December 1960

Volume 5 March - December 1959 Mar - Dec 1959

  • Issue 4 December 1959
  • Issue 3 September 1959
  • Issue 2 June 1959
  • Issue 1 March 1959

Volume 4 December 1958 Dec 1958

  • Issue 1 December 1958

Volume 3 March - December 1957 Mar - Dec 1957

  • Issue 4 December 1957
  • Issue 3 September 1957
  • Issue 2 June 1957
  • Issue 1 March 1957

Volume 2 March - December 1956 Mar - Dec 1956

  • Issue 4 December 1956
  • Issue 3 September 1956
  • Issue 2 June 1956
  • Issue 1 March 1956

Volume 1 March - December 1955 Mar - Dec 1955

  • Issue 4 December 1955
  • Issue 3 September 1955
  • Issue 2 July 1955
  • Issue 1 March 1955


Announcements

International Journal of Evaluation and Research in Education (IJERE)

International Journal of Evaluation and Research in Education (IJERE), p-ISSN: 2252-8822, e-ISSN: 2620-5440, is an interdisciplinary publication of original research and writing on education, publishing papers for an international audience of educational researchers. The journal aims to provide a forum for scholarly understanding of the field of education; it plays an important role in promoting the process by which accumulated knowledge, values, and skills are transmitted from one generation to another, and in making methods and contents of evaluation and research in education available to teachers, administrators, and research workers. The journal encompasses a variety of topics, including child development, curriculum, reading comprehension, philosophies of education, and educational approaches. IJERE has been indexed by SCOPUS and ScimagoJR.

Beginning with issue 1 of volume 13 (2024), the journal will be published bimonthly (6 issues/year). The journal is published by the Institute of Advanced Engineering and Science (IAES) in collaboration with Intelektual Pustaka Media Utama (IPMU).


Please download the IJERE template in MS Word or LaTeX.

Submit your manuscripts today!

Please do not hesitate to contact us by email at [email protected] if you require any further information.

All Issues:

  • 2024:  Vol. 13 No. 1 , Vol. 13 No. 2 , Vol. 13 No. 3 , Vol. 13 No. 4, Vol. 13 No. 5, Vol. 13 No. 6
  • 2023: Vol. 12 No. 1 ,  Vol. 12 No. 2 ,  Vol. 12 No. 3 ,  Vol. 12 No. 4
  • 2022:  Vol. 11 No. 1 ,  Vol. 11 No. 2 ,  Vol. 11 No. 3 ,  Vol. 11 No. 4
  • 2021:  Vol. 10 No. 1 ,  Vol. 10 No. 2 ,  Vol. 10 No. 3 ,  Vol. 10 No. 4
  • 2020:  Vol. 9 No. 1 ,  Vol. 9 No. 2 ,  Vol. 9 No. 3 ,  Vol. 9 No. 4
  • 2019:  Vol. 8 No. 1 ,  Vol. 8 No. 2 ,  Vol. 8 No. 3 ,  Vol. 8 No. 4
  • 2018: Vol. 7 No. 1 ,  Vol. 7 No. 2 ,  Vol. 7 No. 3 ,  Vol. 7 No. 4
  • 2017:  Vol. 6 No. 1 ,  Vol. 6 No. 2 ,  Vol. 6 No. 3 ,  Vol. 6 No. 4
  • 2016:  Vol. 5 No. 1 ,  Vol. 5 No. 2 ,  Vol. 5 No. 3 ,  Vol. 5 No. 4
  • 2015:  Vol. 4 No. 1 ,  Vol. 4 No. 2 ,  Vol. 4 No. 3 ,  Vol. 4 No. 4
  • 2014:  Vol. 3 No. 1 ,  Vol. 3 No. 2 ,  Vol. 3 No. 3 ,  Vol. 3 No. 4
  • 2013: Vol. 2 No. 1 , Vol. 2 No. 2 , Vol. 2 No. 3 , Vol. 2 No. 4
  • 2012: Vol. 1 No. 1 , Vol. 1 No. 2

Vol 13, No 3: June 2024


International Journal of Educational Research and Review


About IJERR

International Journal of Educational Research and Review (ISSN: 2756-4789) is an open access academic refereed journal published monthly by Spectacular Journals.

IJERR publishes research articles that report premier fundamental discoveries and inventions, and the applications of those discoveries, unconfined by traditional discipline barriers. All articles published by IJERR are published in English.

The objective of IJERR is to publish the most significant and innovative articles in all areas covered by the journal, in order to accelerate the sharing of, and free access to, knowledge.

  • Research article
  • Open access
  • Published: 22 April 2024

What does it mean to be good at peer reviewing? A multidimensional scaling and cluster analysis study of behavioral indicators of peer feedback literacy

  • Yi Zhang (ORCID: orcid.org/0000-0001-7153-0955) (1)
  • Christian D. Schunn (2)
  • Yong Wu (3)

International Journal of Educational Technology in Higher Education, volume 21, Article number: 26 (2024)


Abstract

Peer feedback literacy is becoming increasingly important in higher education as peer feedback has substantially grown as a pedagogical approach. However, the quality of produced feedback, a key behavioral aspect of peer feedback literacy, lacks a systematic and evidence-based conceptualization to guide research, instruction, and system design. We introduce a novel framework involving six conceptual dimensions of peer feedback quality that can be measured and supported in online peer feedback contexts: reviewing process, rating accuracy, feedback amount, perceived comment quality, actual comment quality, and feedback content. We then test the underlying dimensionality of student competencies through correlational analysis, multidimensional scaling, and cluster analysis, using data from 844 students engaged in online peer feedback in a university-level course. The separability of the conceptual dimensions is largely supported in the cluster analysis. However, the cluster analysis also suggests restructuring perceived and actual comment quality in terms of initial impact and ultimate impact. The multidimensional scaling suggests the dimensions of peer feedback can be conceptualized in terms of relative emphasis on expertise vs. effort and on overall review quality vs. individual comment quality. The findings provide a new road map for meta-analyses, empirical studies, and system design work focused on peer feedback literacy.

Introduction

Peer review, as a student-centered pedagogical approach, has become widely used in higher education (Gao et al., 2023; Kerman et al., 2024). In recent years, higher education research has begun to investigate peer feedback literacy (Dawson et al., 2023; Little et al., 2024; Nieminen & Carless, 2023). Peer feedback literacy refers to the capacity to comprehend, interpret, provide, and effectively utilize feedback in a peer review context (Dong et al., 2023; Man et al., 2022; Sutton, 2012). It supports learning processes by fostering critical thinking, enhancing interpersonal skills, and promoting active engagement in course groupwork (Hattie & Timperley, 2007). To date, conceptualizations of peer feedback literacy have primarily been informed by interview and survey data (e.g., Dong et al., 2023; Woitt et al., 2023; Zhan, 2022). These methods have provided valuable insights into learners' knowledge of and attitudes towards peer feedback. However, they have not generally examined the behavioral aspect of peer feedback literacy, especially the quality of the feedback that students with high feedback literacy produce (Gielen et al., 2010). Knowledge and attitudes do not always translate into effective action (Becheikh et al., 2010; Huberman, 1990), and the quality of feedback that students actually produce plays an important role in their learning from the process (Lu et al., 2023; Topping, 2023; Zheng et al., 2020; Zong et al., 2021a, b).

In order to make progress on behavioral indicators of peer feedback literacy, it is important to recognize a lack of agreement in the literature in defining the key aspects of the "quality" of peer feedback. In fact, collectively, a large number of different conceptualizations and measures have been explored (Jin et al., 2022; Noroozi et al., 2022; Patchan et al., 2018; Tan & Chen, 2022), and their interrelationships have not been examined. Further, much of the literature to date has investigated peer feedback quality at the level of individual comments and ratings. Individual comments and ratings can be driven by characteristics of the object being reviewed and moment-to-moment fluctuations in attention and motivation, as well as by the feedback literacy of the reviewer. To understand the dimensionality of feedback literacy, investigations of reviewing quality must be conducted at the level of reviewers, not individual comments. For example, specific comment choices may have weak or even negative relationships because of alternative structures (i.e., a reviewer might choose between two commenting strategies in a given comment), whereas at the reviewer level the same elements might be positively correlated, reflecting more general attitudes or skills.

Integrating across many prior conceptualizations and empirical investigations, we propose a new conceptual framework that broadly encompasses many dimensions of reviewing quality. We then present an empirical investigation using multidimensional scaling and cluster analysis of the dimensionality of peer reviewing quality at the reviewer level (i.e., the behavioral component of peer feedback literacy), utilizing a large peer review dataset in a university-level course.

Literature review

While most studies of peer reviewing quality have tended to focus on one or two specific measures, a few authors have considered peer reviewing quality more broadly. In building a tool for university computer science courses that automatically evaluates peer feedback quality, Ramachandran et al. (2017) proposed conceptualizing peer feedback quality in terms of six specific measures, such as whether the feedback is aligned to the rubric dimensions, whether the feedback has a balanced tone, and whether the feedback was copied from another review. Since their focus was on tool building, they did not consider the dimensionality of the specific measures.

More recently, Zhang and Schunn (2023) proposed a five-dimensional conceptual framework for assessing the quality of peer reviews: accuracy, amount, impact, features, and content. The larger framework was not tested, and only a few specific measures were studied in university biology courses. Using a broader literature review, here we expand and refine this framework to include six dimensions: reviewing process, rating accuracy, amount, perceived comment quality, actual comment quality, and feedback content (see Table 1).

The first dimension, reviewing process, pertains to the varying methods students use while reviewing, which significantly affect feedback quality. This includes aspects like time devoted to reviewing or the drafting of comments. Studies conducted in a lab and on MOOCs found a positive correlation between efficient time management and improved review accuracy (Piech et al., 2013; Smith & Ratcliff, 2004). However, such easily-collected process measures may not accurately represent effective processes. For instance, time logged in an online system may not reflect actual working time. Indeed, another study found that spending slightly below-average time reviewing correlated with higher reliability (Piech et al., 2013). To address this concern, Xiong and Schunn (2021) focused on whether reviews were completed in extremely short durations (< 10 min) instead of measuring the total time spent on a review. Similarly, numerous revisions while completing a review could signify confusion rather than good process. Methods like eye-tracking (Bolzer et al., 2015) or think-aloud techniques (Wolfe, 2005) could provide additional measures related to peer reviewing processes.

The second dimension, rating accuracy, focuses on peer assessment and the alignment between a reviewer's ratings and a document's true quality. True document quality is ideally determined by expert ratings, but sometimes more indirect measures, like instructor or mean multi-peer ratings, are used. Across varied terms like error, validity, or accuracy, the alignment of peer ratings with document quality is typically quantified either by measuring agreement (i.e., distance from expert ratings; Li et al., 2016; Xiong & Schunn, 2021) or by measuring evaluator consistency (i.e., having similar rating patterns across documents and dimensions; Schunn et al., 2016; Tong et al., 2023; Zhang et al., 2020). Past studies typically focused on specific indicators without examining their interrelations or their relationship with other dimensions of peer reviewing quality.

The third dimension, amount, can pertain to one peer feedback component (i.e., the number or length of comments in a review) or broadly to peer review (i.e., the number of reviews completed). Conceptually, this dimension may be especially driven by motivation levels and attitudes towards peer feedback, but the amount produced can also reflect understanding and expertise (Zong et al., 2022). Within amount, a distinction has been made between frequency, defined by the number of provided comments or completed reviews as a kind of behavioral engagement (Zong et al., 2021b; Zou et al., 2018), and comment length, indicating cognitive engagement and learning value (Zong et al., 2021a). While comment length logically correlates with quality dimensions focused on the contents of a comment (i.e., adding explanations or potential solutions increases length), its associations with many other dimensions, like accuracy in ratings, reviewing process, or feedback content, remain unexplored.

The fourth dimension, perceived comment quality, focuses on various aspects of comments from the feedback recipient's perspective; peer feedback is a form of communication, and recipients are well positioned to judge communication quality. This dimension may focus on the initial processing of the comment (e.g., was it understandable?; Nelson & Schunn, 2009) or its ultimate impact (e.g., was it accepted? was it helpful for revision? did the recipient learn something?; Huisman et al., 2018), typically measured using Likert scales. Modern online peer feedback systems used in university contexts often incorporate a step where feedback recipients rate the received feedback's helpfulness (Misiejuk & Wasson, 2021). However, little research has explored the relation between perceived comment quality and other reviewing quality dimensions, especially at the grain size of a reviewer (e.g., do reviewers whose comments are seen as helpful tend to put more effort into reviewing, produce more accurate ratings, or focus on critical aspects of the document?).

The fifth dimension, actual comment quality, revolves around the comment's objective impact (e.g., is it implementable, or how is it processed by the recipient?) or the concrete, structural elements influencing its impact (e.g., does it provide a solution, is the tone balanced, does it explain the problem?). This impact, or feedback uptake (Wichmann et al., 2018), typically pertains to the comment's utilization in revisions (Wu & Schunn, 2021b). However, as comments might be ignored for reasons unrelated to their content (Wichmann et al., 2018), some studies focus upon potential impact (Cui et al., 2021; Leijen, 2017; Liu & Sadler, 2003; Wu & Schunn, 2023). Another approach examines comment features likely to influence their impact, like the inclusion of explanations, suggestions, or praise (Lu et al., 2023; Tan & Chen, 2022; Tan et al., 2023; Wu & Schunn, 2021a). Most studies on actual comment quality have explored how students utilize received feedback (van den Bos & Tan, 2019; Wichmann et al., 2018; Wu & Schunn, 2023), with much less attention given to how actual comment quality is related to other dimensions of feedback quality, particularly at the level of feedback providers (e.g., do reviewers who provide more explanations give more accurate ratings?).

The last dimension, feedback content, shifts from the structure of the comment (e.g., was it said in a useful way?) to the semantic topic of the content (i.e., was the comment about the right content?). Content dimensions explored thus far include whether the review comments were aligned with the rubric provided by the instructor (Ramachandran et al., 2017), whether they covered the whole object being reviewed (Ramachandran et al., 2017), whether they attended to the most problematic issues in the document from an expert perspective (e.g., Gao et al., 2019), whether they focused on pervasive/global issues (Patchan et al., 2018) or higher-order writing issues (van den Bos & Tan, 2019) rather than sentence-level issues, whether the comments were self-plagiarized or copied from other reviewers (Ramachandran et al., 2017), or whether multiple peers referred to the same issues (Leijen, 2017), which indicates that many readers find them problematic. It is entirely possible that reviewers give many well-structured comments but generally avoid addressing the most central or challenging issues in a document, perhaps because those require more work or intellectual risk (Gao et al., 2019). It could be argued that high peer feedback literacy involves staying focused on critical issues. However, it is unknown whether reviewers who tend to give well-structured comments when provided a focused rubric also tend to give more accurate ratings or address critical issues in the documents they are reviewing.

The present study

In the current study, we seek to expand upon existing research on peer reviewing quality by examining its multidimensional structure, at the reviewer level, in essence developing behavioral dimensions of peer review literacy. This exploration is critical for theoretical and practical reasons: the dimensionality of peer reviewing quality is foundational to conceptualizations of peer feedback literacy, sampling plans for studies of peer feedback literacy, and interventions designed to improve peer feedback literacy.

To make it possible to study many dimensions and specific measures of peer feedback quality at once, we leverage an existing dataset involving a university-level course in which different studies have collectively developed measures and data for a wide range of reviewing quality constructs. We further add a few measures that can be efficiently computed using mathematical formulas. As a result, we are able to study five of the six dimensions (all but feedback content) and eighteen specific measures. Our primary research question is: What is the interrelationship among different dimensions and measures of peer reviewing quality at the reviewer level? Specifically, we postulate that measures within each of the five dimensions (reviewing process, rating accuracy, amount of feedback, perceived comment quality, and actual comment quality) are strongly interconnected, with relatively weaker relationships across dimensions.

Participants

Participants were 844 students enrolled in an Advanced Placement course in writing at nine secondary schools distributed across the United States. Participants were predominantly female (59%; 4% did not report gender) and Caucasian (55%), followed by Asian (12%), African American (7%), and Hispanic/Latino (7%); 19% did not report their ethnicity. The mean age was 17 years (SD = 1.8).

The Advanced Placement (AP) course is a higher-education-level course aimed at advanced high school students who are ready for instruction at the higher education level, similar to cases in which advanced high school students attend a course at a local university. The course is typically taken by students who are only one year younger than first-year university students (the point at which this specific course is normally taken) and who are especially likely to go on to university, wanting to earn credit for university-level courses to reduce their university degree time and costs. Since student enrollment in higher education and studies of their behavior focus on general level of proficiency rather than age, students in this course should be thought of as more similar to entry-level university students than to general high school students. Further, the course is designed and regulated by a national organization, the College Board, to be entirely equivalent to a university course in content and grading.

The AP English Language and Composition course focuses on argument and rhetorical elements of writing, equivalent to the first-year writing course that is required at most universities in the US (College Board, 2021). For this study on peer feedback, students within each school were taught by the same teacher and interacted online for peer feedback activities. Nine eligible teachers with experience teaching this AP course were recruited. The selected teachers met the following eligibility criteria: 1) they had previously taught the course; 2) they were teaching at least two sections of the course during the study period; 3) they agreed to participate in training on effective use of the online peer feedback approach and study requirements; 4) they were willing to assign a specific writing assignment to students and require peer feedback on that assignment using the online system; and 5) they collectively represented a diverse range of regions in the US and student demographics.

All data were collected via an online peer-reviewing system, Peerceptiv (https://peerceptiv.com; Schunn, 2016), a system predominantly used at the university level (Yu & Schunn, 2023). The system provided access to data organized by research IDs to protect student privacy, and the Human Research Protection Office at the University of Pittsburgh approved research on this data.

The task involved analyzing rhetorical strategies in a provided persuasive essay, with the specific prompt taken from a prior year's end-of-year test. Students needed to: 1) submit their own document using a pseudonym; 2) review at least four randomly-assigned peer documents, rating document quality on seven 7-point rubrics and providing comments supported by seven corresponding comment prompts; 3) back-evaluate the helpfulness of received comments using a 5-point scale; and 4) submit a revised document. Half the students used an experimental version of the system that required the use of a revision planning tool to indicate which received comments would be implemented in the revision and their priority, on a 3-point scale.

Measures of reviewing quality

This study examined 18 measures of peer reviewing quality in five categories (see Table 2), utilizing both simple mathematical calculations (like mean rating and word count) and labor-intensive hand-coding for comment content analysis. The hand-coding was aggregated from four prior studies (Wu & Schunn, 2020a, b, 2021a, b). This analysis introduces new elements: novel measures (priority, agreement measures, number of features), integration of measures not previously examined together, and analysis of the data aggregated to the reviewer level. The detailed hand-coding processes are described in the prior publications. Here we give brief summaries of the measures and their coding reliabilities.

The amount and mean perceived comment quality measures were directly calculated by computer from the raw data. All the remaining measures involved data coded by a trained pool of four undergraduate research assistants and six writing experts (all with years of experience teaching writing and familiarity with the specific writing assignment and associated reviewing rubrics used in the study). A given measure involved either undergraduate assistants or experts, depending upon the level of expertise required. Artifacts were coded by two individuals to assess reliability; discrepancies were resolved through discussion to improve data quality. Coding on each dimension, for both research assistants and experts, involved a training phase in which coders iteratively coded a subset of artifacts and discussed discrepancies and revised coding manuals until acceptable levels of reliability were obtained.

Before all hand-coding procedures, comments were segmented into idea units by a research assistant whenever a given textbox included comments about two or more different issues, resulting in 24,816 comments. Then, given the focus of the writing assignment on learning complex elements of writing, comments about low-level writing issues (i.e., typos, spelling, grammar) were excluded from further coding and data analysis, resulting in 20,912 high-level comments.

Reviewing process

The duration of the review process was determined by the recorded time interval between the point at which a document assigned for review was downloaded and the point at which the completed review was submitted. Reviews completed in less than 10 minutes were likely rushed, given the need to attend to seven dimensions, even for the expert evaluators (Xiong & Schunn, 2021). We therefore used the converse, Not speeded, so that higher values indicate positive feedback quality.

Rating accuracy

As a reminder, both students and experts rated the quality of the documents submitted for peer review on seven 1-to-7 scales. Accuracy was separately defined in terms of both rating agreement and rating consistency (Tong et al., 2023; Xiong & Schunn, 2021), and with regard to two standards: expert judgments and mean peer judgments. Expert judgments are considered the gold standard of validity, but mean peer judgments are often the only available standard in studies with very large datasets. In practice, expert ratings and mean peer ratings are often highly correlated (Li et al., 2016).

Expert agreement was calculated as the negated sum of the absolute differences between true document quality (assessed by the trained experts; kappa = 0.73) and each reviewer's judgment of document quality, across the seven dimensions and documents. Peer agreement was calculated in the same way but used the mean ratings across the peers rather than the expert judgments. The negation was applied to the absolute error to create an accuracy measure in which higher values indicate higher accuracy. A constant of 42 (the maximum possible difference: 6 points × 7 dimensions) was then added to the negated absolute error so that most values sit between 0 and 42, with 42 reflecting high accuracy.
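In symbols (our reconstruction of the verbal description above), the per-document agreement score is

$$\mathrm{Agreement} \;=\; 42 - \sum_{d=1}^{7} \lvert r_d - e_d \rvert,$$

where $r_d$ is the reviewer's rating and $e_d$ the expert rating (or the mean peer rating, for peer agreement) on dimension $d$, and 42 is the maximum possible total discrepancy (6 points × 7 dimensions); scores are then aggregated across the documents a reviewer rated.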

Expert consistency was calculated as the linear correlation between true document quality (assessed by the trained experts) and each reviewer's judgment of document quality across the seven dimensions. Peer consistency was calculated in the same way, but again using mean ratings across the peers instead of expert ratings. Values logically could vary between -1 and 1 (though negative values were rare), with higher values indicating higher accuracy.
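Equivalently, for each reviewer,

$$\mathrm{Consistency} \;=\; \mathrm{corr}\big((r_1,\ldots,r_7),\,(e_1,\ldots,e_7)\big),$$

the Pearson correlation between the reviewer's seven dimension ratings and the corresponding expert (or mean peer) ratings, computed across dimensions and documents.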

Amount

Students were assigned a fixed number of documents to review but sometimes did not complete all the required reviews and sometimes completed extra reviews. Within a review, students had to give at least one comment for each of the seven dimensions, but they could give more than one comment per dimension, and there was no required minimum or maximum length for a given comment. As a result, students could provide one or several comments, each consisting of a single word or several paragraphs. Prior research on peer feedback has found that comments involving more than 50 words typically include useful information for receivers (Wu & Schunn, 2020a) and tend to produce more learning for comment providers (Zong et al., 2022). Also, there may be a tradeoff in that students could submit fewer, longer comments or more total comments. Thus, we also calculated the percentage of long comments: the total number of long comments (i.e., having more than 50 words) divided by the total number of comments. To capture the three main ways in which amount varied, we included the number of reviews completed for the peer assessment task (#Reviews), the mean number of comments (#Comments), and the percentage of long comments (%Long comments).
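As a minimal sketch (not the study's actual code), the three amount measures could be computed per reviewer from a hypothetical data frame `comments` with one row per comment and columns `reviewer_id`, `review_id`, and `text`; the column names and the use of dplyr are our assumptions:

```r
library(dplyr)  # assumed available

# a comment counts as "long" if it has more than 50 words
comments$n_words <- lengths(strsplit(comments$text, "\\s+"))

amount <- comments %>%
  group_by(reviewer_id) %>%
  summarise(
    n_reviews  = n_distinct(review_id),        # #Reviews completed
    n_comments = n() / n_distinct(review_id),  # mean #Comments per review
    pct_long   = mean(n_words > 50)            # %Long comments
  )
```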

Perceived comment quality

All students were required to judge the helpfulness of the comments they received on a 1-to-5 scale, and students using the experimental revision planning interface had to select the priority with which they would implement each comment on a 1-to-3 scale. Both sources of data address perceived comment quality, with one involving a mixture of the value of comments for revision and for learning, and the other focusing exclusively on whether comments were useful for revision. Thus, two measures were created, one based on mean comment helpfulness and the other based on mean comment implementation priority.

Actual comment quality

The measures of actual comment quality were based upon hand-coding by the experts and trained research assistants. The first approach to actual comment quality focused on the usefulness of the comments. The experts coded feedback in terms of implementation in three ways: implementable (kappa = 0.92), implemented (kappa = 0.76), and improvement (kappa = 0.69). Implementable (N = 14,793) refers to whether a comment could be addressed in a revision (i.e., was not pure praise or just a summary of the author's work). By contrast, implemented refers to whether the comment was incorporated in the submitted document revision: a change in the document was made that could be related to the provided comment (N = 11,252). Non-implementable comments were coded, by definition, as not implemented.

The improvement value of comments was coded by the experts for how much the comment could improve document quality (N = 1,758; kappa = 0.69). Two points were given when addressing a comment would measurably improve the document's quality on the given rubrics (e.g., moving from a 5 to a 7 on a scale). One point was awarded when addressing a comment could improve document quality in terms of the underlying rubric dimensions, but not by enough to produce a measurable change on the 7-point rubric scale. No points were given when addressing a comment would not improve document quality, would make the document worse, or would involve both improvements and declines (Wu & Schunn, 2020b). Improvement was only coded for implementable comments.

Another approach to actual comment quality focused on specific feedback features that typically are helpful for revision or learning (Jin et al., 2022; Tan & Chen, 2022; Wu & Schunn, 2020a). Research assistants coded the comments for whether they provided a specific solution (kappa = 0.76), gave a more general suggestion for how to address the problem but not an exact solution (kappa = 0.79), explicitly identified the problem (kappa = 0.81), and explained the problem (kappa = 0.80). Separate measures were created for each feature, calculated as the percentage of comments having that feature. There was also an aggregate features measure, calculated as the mean number of features contained in each comment (#Features).
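Continuing the same hypothetical layout, if the hand-coded features are stored as 0/1 columns (`solution`, `suggestion`, `identification`, `explanation`; again our invented names), the per-reviewer feature measures might be aggregated as:

```r
features <- comments %>%
  group_by(reviewer_id) %>%
  summarise(
    pct_solution       = mean(solution),        # %Solutions
    pct_suggestion     = mean(suggestion),      # %Suggestions
    pct_identification = mean(identification),  # %Identifications
    pct_explanation    = mean(explanation),     # %Explanations
    # #Features: mean number of features contained in each comment
    n_features = mean(solution + suggestion + identification + explanation)
  )
```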

Data analysis

Table 4 in the Appendix shows descriptive information for all the measures of peer reviewing quality at the reviewer level. Because of the different data sources, Ns varied substantially across measures. In addition, some of the measures tended to have relatively high means with negative skews, like #Reviews, the rating agreement and rating consistency measures, and helpfulness. Other measures had low means and positive skews, like the specific comment features, %Implemented, and mean Improvement.

The peer reviewing measures were first analyzed for reliability across reviews. Conceptually, this analysis examines whether reviewers tended to give reviews of similar quality on a given measure across the reviews they completed on an assignment. It is possible that reviewing quality was heavily influenced by characteristics of the object being reviewed (e.g., it is easier to include solutions for weaker documents), and thus not a measure of peer feedback literacy. Other incidental factors, such as the order of the reviews or the presence of a distraction, could also have mattered, but those factors would likely influence the reliability of all the measures rather than just isolated measures.

Reliability was measured via an intraclass correlation coefficient (ICC). There are many forms of ICC. In terms of the McGraw and Wong (1996) framework, we used ICC(k), which represents agreement reliability (the level of deviation from the exact same rating) across k ratings (typically 4 in our data) using a one-way random-effects analysis, because each reviewer was given different documents to review from a larger population of possible documents (Koo & Li, 2016). We used the Landis and Koch (1977) guidelines for interpreting the ICC values for the reliability of the measures: almost perfect for values above 0.80; substantial for values from 0.61 to 0.80; moderate for values of 0.41 to 0.60; fair for values of 0.21 to 0.40; slight for values of 0.01 to 0.20; and poor for values less than 0.
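A minimal base-R sketch of this statistic under the one-way random-effects model, assuming a roughly balanced design with one quality score per completed review (`value`) grouped by reviewer (`reviewer`); an illustration, not the study's code:

```r
# ICC(k): reliability of the mean of k reviews per reviewer,
# computed from one-way ANOVA mean squares as (MSB - MSW) / MSB
icc_k <- function(value, reviewer) {
  g     <- split(value, reviewer)
  n     <- length(g)                 # number of reviewers
  N     <- length(value)             # total number of reviews
  grand <- mean(value)
  ms_b  <- sum(lengths(g) * (sapply(g, mean) - grand)^2) / (n - 1)     # between
  ms_w  <- sum(sapply(g, function(x) sum((x - mean(x))^2))) / (N - n)  # within
  (ms_b - ms_w) / ms_b
}
```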

Finally, to show the interrelationships among the variables, we conducted a three-step process: 1) pairwise correlation among all measures, with pairwise rather than listwise deletion given the high variability in measure Ns (see Figure 3 in the Appendix for sample sizes); 2) multidimensional scaling (MDS) applied to the correlation data to visualize the relative proximity of the measures; and 3) a hierarchical cluster analysis applied to the correlation matrix to extract conceptual clusters of measures. We conducted the analyses in R: pairwise correlations using the "GGally" package, multidimensional scaling using the "magrittr" package, and hierarchical clustering using the "stats" package. For the correlational analysis, we applied both linear and rank correlations, since some of the measures were strongly skewed. The two approaches produced similar results.
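A base-R sketch of this three-step pipeline, assuming `X` is a hypothetical reviewer-by-measure matrix (classical MDS via `cmdscale` stands in here for whatever MDS variant was actually run):

```r
# 1) pairwise correlations, using pairwise deletion of missing data
R <- cor(X, use = "pairwise.complete.obs")

# 2) MDS on a correlation-derived distance, keeping two dimensions
D   <- as.dist(1 - R)
mds <- cmdscale(D, k = 2)
plot(mds, type = "n", xlab = "Dimension 1", ylab = "Dimension 2")
text(mds, labels = colnames(X))

# 3) hierarchical clustering on the same distances, cut into five clusters
hc       <- hclust(D)
clusters <- cutree(hc, k = 5)
```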

Multidimensional scaling (MDS) is a statistical technique employed to visualize and analyze similarities or dissimilarities among variables in a dataset (Carroll & Arabie, 1998). While factor analysis is typically used to test or identify separable dimensions among many specific measures, MDS provides a useful visualization of the interrelationship of items, particularly when some items inherently straddle multiple dimensions. It also provides a useful visualization of the interrelationship of the dimensions rather than just of the items (Ding, 2006). The outcome of MDS is a "map" that represents these variables as points within a lower-dimensional space, typically two or three dimensions, while preserving the original distances between them as much as possible (Hout et al., 2013). In the current study, we chose two dimensions based on a scree plot of the eigenvalues associated with each MDS dimension (see Figure 4 in the Appendix); two dimensions offered a relatively good fit and are much easier to visualize. We expected measures within each conceptual dimension to sit close together on the MDS map.
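The scree-based choice of two dimensions could be checked with a few lines of base R (continuing the hypothetical `D` distance object from the sketch above):

```r
# eigenvalues for successive MDS dimensions; look for the "elbow"
fit <- cmdscale(D, k = 10, eig = TRUE)
plot(fit$eig[1:10], type = "b",
     xlab = "MDS dimension", ylab = "Eigenvalue")
```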

Hierarchical cluster analysis, a general family of algorithms, is the dominant approach to grouping similar variables or data points based on their attributes or features (Murtagh & Contreras, 2017). It can accurately identify patterns within even small datasets (e.g., an 18×18 correlation matrix) since it leverages pairwise distances between all contributing measures. Further, it requires no assumptions about cluster shape, while other common algorithms like k-means assume that clusters are spherical and have similar sizes. However, we note that a k-means clustering algorithm produced similar clusters, so the findings are not heavily dependent upon the algorithm used. We expected to obtain the five clusters of dimensions proposed in Table 2.

Results

We first focus on the reliability of each peer reviewing quality measure (defined by agreement in values across completed reviews). As shown by the blue cells along the main diagonal in Fig. 1, the measures #Comments, %Long comments, and %Suggestions showed almost perfect reliability [0.81, 0.95], and the rest of the measures of peer reviewing quality, except Improvement, showed moderate to substantial reliability [0.48, 0.79]. Only the Improvement measure showed a slight level of reliability across reviews. It is possible that Improvement is primarily driven by the document, perhaps because some documents have limited potential for improvement or because the scope for improvement relies heavily on the match between what the reviewer can perceive and the specific needs of the document. Taken together, all but one measure fell within the required range to be considered reliable, and the results involving the Improvement measure may be inconsistent due to measurement noise.

Figure 1. Measure reliability (diagonal cells, white font; / = NA) and linear inter-correlations (bold values: p < .05; italic values: not significant), organized by proposed peer feedback literacy dimension.

The linear measure intercorrelations shown in Fig. 1 revealed that, except for Peer agreement, almost all measures were significantly and positively correlated with one another. Based on these patterns, one of the measures, %Long comments, was removed from the amount dimension in the analyses that follow. Focusing on the rating accuracy measures, except for the correlations of Peer agreement with Expert consistency and of Peer consistency with Expert agreement, all the correlations were positive and statistically significant. Further, the correlations with measures in other dimensions were often non-significant and always small: Peer agreement, max out-group = 0.18; Peer consistency, max out-group = 0.18; Expert agreement, max out-group = 0.31; and Expert consistency, max out-group = 0.26. The largest cross-dimension correlations occurred for the two expert accuracy measures with actual comment quality measures such as %Implementable and Improvement. The results supported treating these measures as one dimension, even though the intercorrelations within the dimension are relatively weak.

Turning to the amount dimension, we again note that %Long comments had only weak correlations with #Reviews and #Comments (r = 0.15 and r = 0.10) compared to the relationship between #Reviews and #Comments (r = 0.63). After removing %Long comments from the amount dimension, the in-group correlation (r = 0.63) was much higher than the out-group correlations (#Reviews, Max out group = 0.14; #Comments, Max out group = 0.20). Thus, the support for treating an amount dimension involving #Reviews and #Comments was strong.

The support for a perceived quality dimension, as originally defined, was weak. The two measures correlated with one another at only r  = 0.22. Correlations with measures in the amount and accuracy dimensions were also weak, but correlations with actual quality measures were often moderate. The results suggest some reorganization of the perceived and actual comment quality dimensions may be required.

Finally, the eight measures in the actual comment quality dimension were generally highly correlated with one another. Compared with out-group correlations, %Implementable (Min in group = 0.32 > Max out group = 0.31), %Implemented (Min in group = 0.41 > Max out group = 0.34), #Features (Min in group = 0.51 > Max out group = 0.39), and %Identifications (Min in group = 0.34 > Max out group = 0.25) were well nested in this group. However, some measures blurred somewhat with measures in the perceived comment quality dimension: Improvement (Min in group = 0.22 < Max out group = 0.28), %Solutions (Min in group = 0.22 < Max out group = 0.28), %Suggestions (Min in group = 0.34 = Max out group = 0.34), and %Explanations (Min in group = 0.34 < Max out group = 0.36). Overall, the correlation results revealed some overlap with perceived comment quality, particularly for %Solutions.
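
The "Min in group vs. Max out group" comparisons used throughout this section can be expressed compactly. The helper below is a hypothetical sketch (the function name, data layout, and dimension labels are assumed, not taken from the study) that returns both quantities for any proposed dimension.

```python
from itertools import combinations

def in_vs_out_group(corr, groups, dim):
    """Min within-dimension r vs. max cross-dimension r for one dimension.

    corr:   pandas DataFrame of measure intercorrelations (labelled rows/columns).
    groups: dict mapping each measure name to its proposed dimension.
    """
    members = [m for m, g in groups.items() if g == dim]
    others = [m for m in corr.columns if m not in members]
    min_in = min(corr.loc[a, b] for a, b in combinations(members, 2))
    max_out = max(corr.loc[m, o] for m in members for o in others)
    return min_in, max_out  # a well-nested dimension has min_in > max_out
```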

Further, to better understand the similarities among these measures, MDS and hierarchical cluster analysis were conducted on the measure intercorrelation data. The MDS results are shown in Fig. 2. Conceptually, the y-axis places reviewing quality measures reflecting effort near the bottom (e.g., #Reviews and #Comments) and reviewing quality measures reflecting expertise near the top (e.g., the rating accuracy group and Improvement). By contrast, the x-axis places review-level measures to the left and comment-level measures to the right. This pattern within the measure intercorrelations illustrates what can be learned from MDS but would be difficult to obtain from factor analysis.

Figure 2

A map of peer feedback literacy based upon MDS and cluster analysis

The clustering algorithm produced five clusters, which are labeled and color-coded in Fig. 2. The five clusters were roughly similar to the originally hypothesized construct groups in Table 1, especially in treating rating accuracy, amount, and reviewing process as distinct from each other and from perceived/actual comment quality. However, perceived and actual comment quality did not separate as expected. In particular, %Long comments and %Solutions clustered together with Helpfulness and Priority. We call this new dimension Initial Impact, reflecting comment recipients' initial reactions to feedback (without having to consider the feedback in light of the document). The remaining measures, all proposed to be part of the actual comment quality dimension, clustered together. We propose calling this dimension Ultimate Impact, reflecting their closer alignment with actual improvements and the aspects of comments that are most likely to lead to successful revisions.

General discussion

Understanding the fundamental structure of peer feedback literacy from a behavioral/skills perspective, rather than a knowledge-and-attitudes perspective, was a fundamental goal of our study. With the support of online tools, peer feedback is increasingly being implemented across a wide range of educational levels, contexts, disciplines, course types, and student tasks. As a form of student-centered instruction, it has great potential to improve learning outcomes, but that potential critically depends upon students' full and effective participation in their reviewing roles. Thus, it is increasingly important to fully conceptualize and develop methods for studying and supporting peer feedback literacy.

Our proposed framework sought to build a coherent understanding of peer reviewing quality in terms of six dimensions—reviewing process, rating accuracy, feedback amount, perceived comment quality, actual comment quality, and feedback content—offering a unified perspective on the scattered and fragmented notions of peer reviewing quality (Ramachandran et al., 2017; Yu et al., 2023). Consolidating the disparate measures from the literature into dimensions serves many purposes. For example, when university educators understand the intricacies of the reviewing process, they can provide clearer guidance and training to students, improving the quality of feedback provided. Similarly, understanding the dimensional structure can organize investigations of which dimensions are shaped by various kinds of supports/training, and which dimensions influence later learning outcomes, whether for the reviewer or the reviewee.

Unlike previous studies that primarily explored relationships among reviewing quality dimensions at the comment level (Leijen, 2017; Misiejuk et al., 2021; Wu & Schunn, 2021b), our work focuses on the reviewer level as an approach to studying the behavioral elements of peer feedback literacy, complementing the predominant knowledge-and-attitudes focus of interview and survey studies on peer feedback literacy. This shift in level of analysis is important because reviewing quality measures at the comment level might exhibit weak or even negative relationships due to varied structures or intentions. At the reviewer level, however, these measures may exhibit positive correlations, reflecting overarching strategies, motivations, or skills.

Our findings, as illustrated by the linear intercorrelation analysis, illuminate the interconnectedness of the various factors shaping peer feedback literacy. The overarching theme emerging from the analysis is inherent multidimensionality, a facet of peer feedback literacy that has been previously highlighted in the literature (Winstone & Carless, 2020). The findings from the current study further suggest that peer feedback literacy can be organized by relative emphasis on expertise vs. effort and by relative focus on review-level vs. comment-level aspects. It will be especially interesting to examine the ways in which training and motivational interventions shape these different behavioral indicators.

It is important to note that survey-based studies of peer feedback literacy find that all of the dimensions identified within those studies are strongly correlated with one another (e.g., Dong et al., 2023), to the extent that the pragmatic and theoretical value of measuring them separately could be questioned. For example, feedback-related knowledge and willingness to participate in peer feedback were correlated at r = 0.76, and all the specific indicators on those scales loaded at high levels on their factors. Within our framework, those factors could be framed as representing the expertise vs. effort ends of the literacy continuum, which our findings suggest should be much more distinguishable than r = 0.76. Indeed, we also found dimensional structure in peer feedback literacy, but the correlations among dimensions were quite low, and even the correlations among different measures within a dimension were modest. If survey measures are going to be used in future studies of peer feedback literacy, it will be important to understand how well they align with students' actual behaviors. Further, it may be necessary to extend the kinds of behaviors represented on those surveys.

Our findings also suggest a strong separation of rating accuracy from the impact that comments will have on their recipients. While there is some relationship between the two, particularly when focusing on expert evaluations of rating accuracy and expert judgments of the improvement that comments will produce, the r = 0.26 correlation is quite modest. Both constructs represent a kind of expertise in the reviewer. But rating accuracy represents attending to and successfully diagnosing all the relative strengths and weaknesses in a submission (i.e., a review-level competence), whereas improvements offered in comments can focus on particular problems, not requiring the reviewer to be broadly proficient (i.e., a comment-level competence). In addition, especially useful comments require not only diagnosing a major problem but also offering strategies for addressing that problem.

Our findings also help to situate two specific measures of feedback quality that have drawn increasing attention given their pragmatic value in data collection and analysis: comment helpfulness ratings and %Long comments. On the one hand, they are central measures within the larger landscape of peer feedback quality. On the other hand, they represent only one dimension of peer feedback literacy: the initial impact of the comments being produced. Adding rating accuracy measures like peer agreement or peer consistency, and amount measures like #Reviews and #Comments, would provide a broader measurement of peer feedback literacy while still involving measures that are easy to collect and analyze. To capture the ultimate impact dimension, studies would need to invest in the laborious task of hand coding comments (which is still much less laborious than hand coding implementation and less expensive than expert coding of improvement) or perhaps turn to innovations in NLP and generative AI to automatically code large numbers of comments.
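
As one hypothetical illustration of that direction (explicitly not a method used in this study), a lightweight text classifier could be trained on a small hand-coded subset of comments and then applied to the rest. The toy examples and labels below, which flag whether a comment contains a suggestion, are invented for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy hand-coded examples (hypothetical): 1 = comment contains a suggestion.
comments = [
    "Maybe reorder the second paragraph so the main claim comes first.",
    "I really liked the introduction.",
    "You could add a citation to support this statistic.",
    "Nice transitions between sections.",
]
has_suggestion = [1, 0, 1, 0]

# Bag-of-ngrams + logistic regression: a deliberately simple baseline coder.
coder = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
coder.fit(comments, has_suggestion)
print(coder.predict(["Consider splitting this long sentence in two."]))
```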

Limitations and future directions

We note two key limitations of our current study. First, the exclusion of the feedback content dimension potentially left out a critical element of the peer reviewing process; future research should aim to incorporate it, possibly by applying automated techniques such as Natural Language Processing to large datasets like the one used here (Ramachandran et al., 2017). Such technological advances could reveal hidden patterns and correlations with feedback content, potentially leading to a more comprehensive understanding of peer reviewing quality.

Furthermore, the geographical and contextual constraints of our study—specifically, an introductory university writing course in the US using one online peer feedback system—may limit the generalizability of our findings. Past meta-analyses and meta-regressions suggest minimal impact of discipline, class size, or system setup on the validity of peer review ratings or the derived learning benefits (Li et al., 2016; Sanchez et al., 2017; Yu & Schunn, 2023). However, it is important to replicate the novel findings of this study across various contexts.

Our investigation examined the dimensionality of peer feedback literacy, a common concern in ongoing research in this domain. In previous studies, the dimensionality of peer feedback literacy has been largely shaped by data from interviews and surveys (e.g., Dong et al., 2023; Zhan, 2022). These approaches offered valuable insights into domains of learners' knowledge of and attitudes towards peer feedback (e.g., willingness to participate in peer feedback is separable from appreciation of its value or knowledge of how to participate). But such studies provided little insight into the ways in which the produced feedback varied in quality, which can be taken as the behavioral dimension of peer feedback literacy (Gielen et al., 2010). It is important to note that knowledge and attitudes do not always lead to effective action (Becheikh et al., 2010; Huberman, 1990). Further, the actual quality of feedback generated by students is crucial for their learning through the process (Lu et al., 2023; Topping, 2023; Zheng et al., 2020; Zong et al., 2021a, b). In the current study, we have clarified the dimensionality of this behavioral side, highlighting motivational vs. expertise elements at the review and comment levels. These findings can become new foundations for empirical investigation and theoretical development into the causes and consequences of peer feedback literacy.

The current findings offer actionable recommendations for practitioners (e.g., instructors, teaching assistants, instructional designers, online tool designers) seeking to enhance peer review processes. First, our findings identify four major areas in which practitioners need to scaffold peer reviewing quality: rating accuracy, the volume of feedback, the initial impact of comments, and the ultimate impact of comments. Different approaches are likely required to address these areas given their relative emphasis on effort vs. expertise. For example, motivational scaffolds and considerations (e.g., workload) may be needed to improve the volume of feedback, back-evaluation steps to improve initial impact, training on rubric dimensions to improve rating accuracy, and training on effective feedback structure to improve ultimate impact. Second, when resources are so constrained that assessing the more labor-intensive dimensions of feedback quality is not possible, the multidimensional scaling results suggest that comment length and helpfulness ratings can serve as efficiently assessed proxies for overall feedback quality, involving a mixture of effort and expertise at the review and comment levels.

Availability of data and materials

The data used to support the findings of this study are available from the corresponding author upon request.

References

Becheikh, N., Ziam, S., Idrissi, O., Castonguay, Y., & Landry, R. (2010). How to improve knowledge transfer strategies and practices in education? Answers from a systematic literature review. Research in Higher Education Journal, 7, 1–21.

Bolzer, M., Strijbos, J. W., & Fischer, F. (2015). Inferring mindful cognitive-processing of peer-feedback via eye-tracking: Role of feedback-characteristics, fixation-durations and transitions. Journal of Computer Assisted Learning, 31(5), 422–434.

Carroll, J. D., & Arabie, P. (1998). Multidimensional scaling. Measurement, Judgment and Decision Making, 179–250. https://www.sciencedirect.com/science/article/abs/pii/B9780120999750500051

Cheng, K. H., & Hou, H. T. (2015). Exploring students’ behavioural patterns during online peer assessment from the affective, cognitive, and metacognitive perspectives: A progressive sequential analysis. Technology, Pedagogy and Education, 24(2), 171–188.

College Board. (2021). Program summary report. https://reports.collegeboard.org/media/pdf/2021-ap-program-summary-report_1.pdf

Cui, Y., Schunn, C. D., Gai, X., Jiang, Y., & Wang, Z. (2021). Effects of trained peer vs. teacher feedback on EFL students’ writing performance, self-efficacy, and internalization of motivation. Frontiers in Psychology, 12, 5569.

Darvishi, A., Khosravi, H., Sadiq, S., & Gašević, D. (2022). Incorporating AI and learning analytics to build trustworthy peer assessment systems. British Journal of Educational Technology, 53(4), 844–875.

Dawson, P., Yan, Z., Lipnevich, A., Tai, J., Boud, D., & Mahoney, P. (2023). Measuring what learners do in feedback: The feedback literacy behaviour scale. Assessment & Evaluation in Higher Education. Advance online publication. https://doi.org/10.1080/02602938.2023.2240983

Ding, C. S. (2006). Multidimensional scaling modelling approach to latent profile analysis in psychological research. International Journal of Psychology, 41(3), 226–238.

Dong, Z., Gao, Y., & Schunn, C. D. (2023). Assessing students’ peer feedback literacy in writing: Scale development and validation. Assessment & Evaluation in Higher Education, 48(8), 1103–1118.

Gao, Y., Schunn, C. D., & Yu, Q. (2019). The alignment of written peer feedback with draft problems and its impact on revision in peer assessment. Assessment & Evaluation in Higher Education, 44(2), 294–308.

Gao, X., Noroozi, O., Gulikers, J. T. M., Biemans, H. J., & Banihashem, S. K. (2023). A systematic review of the key components of online peer feedback practices in higher education. Educational Research Review, 42, 100588.

Gielen, S., Peeters, E., Dochy, F., Onghena, P., & Struyven, K. (2010). Improving the effectiveness of peer feedback for learning. Learning and Instruction, 20(4), 304–315.

Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77(1), 81–112.

Hout, M. C., Papesh, M. H., & Goldinger, S. D. (2013). Multidimensional scaling. Wiley Interdisciplinary Reviews: Cognitive Science, 4(1), 93–103.

Howard, C. D., Barrett, A. F., & Frick, T. W. (2010). Anonymity to promote peer feedback: Pre-service teachers’ comments in asynchronous computer-mediated communication. Journal of Educational Computing Research, 43(1), 89–112.

Huberman, M. (1990). Linkage between researchers and practitioners: A qualitative study. American Educational Research Journal, 27(2), 363–391.

Huisman, B., Saab, N., Van Driel, J., & Van Den Broek, P. (2018). Peer feedback on academic writing: Undergraduate students’ peer feedback role, peer feedback perceptions and essay performance. Assessment & Evaluation in Higher Education, 43(6), 955–968.

Jin, X., Jiang, Q., Xiong, W., Feng, Y., & Zhao, W. (2022). Effects of student engagement in peer feedback on writing performance in higher education. Interactive Learning Environments, 32(1), 128–143.

Kerman, N. T., Banihashem, S. K., Karami, M., Er, E., Van Ginkel, S., & Noroozi, O. (2024). Online peer feedback in higher education: A synthesis of the literature. Education and Information Technologies, 29(1), 763–813.

Koo, T. K., & Li, M. Y. (2016). A guideline of selecting and reporting intraclass correlation coefficients for reliability research. Journal of Chiropractic Medicine, 15(2), 155–163.

Landis, J. R., & Koch, G. G. (1977). The measurement of observer agreement for categorical data. Biometrics, 33(1), 159–174.

Leijen, D. A. (2017). A novel approach to examine the impact of web-based peer review on the revisions of L2 writers. Computers and Composition, 43, 35–54.

Li, H., Xiong, Y., Zang, X., Kornhaber, M. L., Lyu, Y., Chung, K. S., & Suen, H. K. (2016). Peer assessment in the digital age: A meta-analysis comparing peer and teacher ratings. Assessment & Evaluation in Higher Education, 41(2), 245–264.

Little, T., Dawson, P., Boud, D., & Tai, J. (2024). Can students’ feedback literacy be improved? A scoping review of interventions. Assessment & Evaluation in Higher Education, 49(1), 39–52.

Liu, J., & Sadler, R. W. (2003). The effect and affect of peer review in electronic versus traditional modes on L2 writing. Journal of English for Academic Purposes, 2(3), 193–227.

Lu, Q., Yao, Y., & Zhu, X. (2023). The relationship between peer feedback features and revision sources mediated by feedback acceptance: The effect on undergraduate students’ writing performance. Assessing Writing, 56, 100725.

Lu, Q., Zhu, X., & Cheong, C. M. (2021). Understanding the difference between self-feedback and peer feedback: A comparative study of their effects on undergraduate students’ writing improvement. Frontiers in Psychology, 12, 739962.

Man, D., Kong, B., & Chau, M. (2022). Developing student feedback literacy through peer review training. RELC Journal. Advance online publication. https://doi.org/10.1177/00336882221078380

McGraw, K. O., & Wong, S. P. (1996). Forming inferences about some intraclass correlation coefficients. Psychological Methods, 1(1), 30–46.

Misiejuk, K., Wasson, B., & Egelandsdal, K. (2021). Using learning analytics to understand student perceptions of peer feedback. Computers in Human Behavior, 117, 106658.

Misiejuk, K., & Wasson, B. (2021). Backward evaluation in peer assessment: A scoping review. Computers & Education, 175, 104319.

Murtagh, F., & Contreras, P. (2017). Algorithms for hierarchical clustering: An overview, II. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, 7(6), e1219.

Nelson, M. M., & Schunn, C. D. (2009). The nature of feedback: How different types of peer feedback affect writing performance. Instructional Science, 37, 375–401.

Nieminen, J. H., & Carless, D. (2023). Feedback literacy: A critical review of an emerging concept. Higher Education, 85(6), 1381–1400.

Noroozi, O., Banihashem, S. K., Taghizadeh Kerman, N., Parvaneh Akhteh Khaneh, M., Babayi, M., Ashrafi, H., & Biemans, H. J. (2022). Gender differences in students’ argumentative essay writing, peer review performance and uptake in online learning environments. Interactive Learning Environments, 31(10), 6302–6316.

Patchan, M. M., Schunn, C. D., & Clark, R. J. (2018). Accountability in peer assessment: Examining the effects of reviewing grades on peer ratings and peer feedback. Studies in Higher Education, 43(12), 2263–2278.

Piech, C., Huang, J., Chen, Z., Do, C., Ng, A., & Koller, D. (2013). Tuned models of peer assessment in MOOCs. In Proceedings of the 6th international conference on educational data mining (EDM 2013).

Raes, A., Vanderhoven, E., & Schellens, T. (2013). Increasing anonymity in peer assessment by using classroom response technology within face-to-face higher education. Studies in Higher Education, 40(1), 178–193.

Ramachandran, L., Gehringer, E. F., & Yadav, R. K. (2017). Automated assessment of the quality of peer reviews using natural language processing techniques. International Journal of Artificial Intelligence in Education, 27, 534–581.

Rietsche, R., Caines, A., Schramm, C., Pfütze, D., & Buttery, P. (2022). The specificity and helpfulness of peer-to-peer feedback in higher education. In Proceedings of the 17th workshop on innovative use of NLP for building educational applications (BEA 2022) (pp. 107–117).

Sanchez, A., Romero, N., & De Raedt, R. (2017). Depression-related difficulties disengaging from negative faces are associated with sustained attention to negative feedback during social evaluation and predict stress recovery. PLoS One, 12(3), e0175040.

Schunn, C., Godley, A., & DeMartino, S. (2016). The reliability and validity of peer review of writing in high school AP English classes. Journal of Adolescent & Adult Literacy, 60(1), 13–23.

Schunn, C. D. (2016). Writing to learn and learning to write through SWoRD. In S. A. Crossley & D. S. McNamara (Eds.), Adaptive educational technologies for literacy instruction. Taylor & Francis, Routledge.

Smith, P. L., & Ratcliff, R. (2004). Psychology and neurobiology of simple decisions. Trends in Neurosciences, 27(3), 161–168.

Sutton, P. (2012). Conceptualizing feedback literacy: Knowing, being, and acting. Innovations in Education and Teaching International, 49(1), 31–40.

Tan, J. S., & Chen, W. (2022). Peer feedback to support collaborative knowledge improvement: What kind of feedback feed-forward? Computers & Education, 187, 104467.

Tan, J. S., Chen, W., Su, J., & Su, G. (2023). The mechanism and effect of class-wide peer feedback on conceptual knowledge improvement: Does different feedback type matter? International Journal of Computer-Supported Collaborative Learning, 18, 393–424.

Tong, Y., Schunn, C. D., & Wang, H. (2023). Why increasing the number of raters only helps sometimes: Reliability and validity of peer assessment across tasks of different complexity. Studies in Educational Evaluation, 76, 101233.

Topping, K. J. (2023). Advantages and disadvantages of online and face-to-face peer learning in higher education: A review. Education Sciences, 13(4), 326. https://doi.org/10.3390/educsci13040326

van den Bos, A. H., & Tan, E. (2019). Effects of anonymity on online peer review in second-language writing. Computers & Education, 142, 103638.

Wichmann, A., Funk, A., & Rummel, N. (2018). Leveraging the potential of peer feedback in an academic writing activity through sense-making support. European Journal of Psychology of Education, 33, 165–184.

Winstone, N., & Carless, D. (2020). Designing effective feedback processes in higher education: A learner-centred approach. Innovations in Education and Teaching International, 57(3), 386–387.

Woitt, S., Weidlich, J., Jivet, I., Orhan Göksün, D., Drachsler, H., & Kalz, M. (2023). Students’ feedback literacy in higher education: An initial scale validation study. Teaching in Higher Education. Advance online publication. https://doi.org/10.1080/13562517.2023.2263838

Wolfe, E. M. (2005). Uncovering rater’s cognitive processing and focus using think-aloud protocols. Journal of Writing Assessment, 2(1), 37–56.

Wu, Y., & Schunn, C. D. (2020a). From feedback to revisions: Effects of feedback features and perceptions. Contemporary Educational Psychology, 60, 101826.

Wu, Y., & Schunn, C. D. (2020b). When peers agree, do students listen? The central role of feedback quality and feedback frequency in determining uptake of feedback. Contemporary Educational Psychology, 62, 101897.

Wu, Y., & Schunn, C. D. (2021a). The effects of providing and receiving peer feedback on writing performance and learning of secondary school students. American Educational Research Journal, 58(3), 492–526.

Wu, Y., & Schunn, C. D. (2021b). From plans to actual implementation: A process model for why feedback features influence feedback implementation. Instructional Science, 49(3), 365–394.

Wu, Y., & Schunn, C. D. (2023). Assessor writing performance on peer feedback: Exploring the relation between assessor writing performance, problem identification accuracy, and helpfulness of peer feedback. Journal of Educational Psychology, 115(1), 118–142.

Xiong, W., & Litman, D. (2011). Understanding differences in perceived peer-review helpfulness using natural language processing. In Proceedings of the sixth workshop on innovative use of NLP for building educational applications (pp. 10–19).

Xiong, Y., & Schunn, C. D. (2021). Reviewer, essay, and reviewing-process characteristics that predict errors in web-based peer review. Computers & Education, 166, 104146.

Yu, Q., & Schunn, C. D. (2023). Understanding the what and when of peer feedback benefits for performance and transfer. Computers in Human Behavior, 147, 107857.

Yu, F. Y., Liu, Y. H., & Liu, K. (2023). Online peer-assessment quality control: A proactive measure, validation study, and sensitivity analysis. Studies in Educational Evaluation, 78, 101279.

Zhan, Y. (2022). Developing and validating a student feedback literacy scale. Assessment & Evaluation in Higher Education, 47(7), 1087–1100.

Zhang, Y., & Schunn, C. D. (2023). Self-regulation of peer feedback quality aspects through different dimensions of experience within prior peer feedback assignments. Contemporary Educational Psychology, 74, 102210.

Zhang, F., Schunn, C., Li, W., & Long, M. (2020). Changes in the reliability and validity of peer assessment across the college years. Assessment & Evaluation in Higher Education, 45(8), 1073–1087.

Zheng, L., Zhang, X., & Cui, P. (2020). The role of technology-facilitated peer assessment and supporting strategies: A meta-analysis. Assessment & Evaluation in Higher Education, 45(3), 372–386.

Zong, Z., Schunn, C. D., & Wang, Y. (2021a). What aspects of online peer feedback robustly predict growth in students’ task performance? Computers in Human Behavior, 124, 106924.

Zong, Z., Schunn, C. D., & Wang, Y. (2021b). Learning to improve the quality of peer feedback through experience with peer feedback. Assessment & Evaluation in Higher Education, 46(6), 973–992.

Zong, Z., Schunn, C., & Wang, Y. (2022). What makes students contribute more peer feedback? The role of within-course experience with peer feedback. Assessment & Evaluation in Higher Education, 47(6), 972–983.

Zou, Y., Schunn, C. D., Wang, Y., & Zhang, F. (2018). Student attitudes that predict participation in peer assessment. Assessment & Evaluation in Higher Education, 43(5), 800–811.


Acknowledgements

Not applicable.

Funding

This study was supported by the Philosophy and Social Sciences Planning Youth Project of Guangdong Province under grant [GD24YJY01] and the National Social Science Fund of China [23BYY154].

Author information

Authors and Affiliations

College of Education for the Future, Beijing Normal University, No. 18 Jingfeng Road, Zhuhai, Guangdong Province, 519087, China

Yi Zhang

Learning Research and Development Center, University of Pittsburgh, 3420 Forbes Avenue, Pittsburgh, PA, 15260, USA

Christian D. Schunn

School of Humanities, Beijing University of Posts and Telecommunications, No. 10, Xitucheng Road, Haidian District, Beijing, 100876, China

Yong Wu


Contributions

Yi Zhang: Conceptualization, Methodology, Data Curation, Writing - Original Draft, Revision. Christian D. Schunn: Conceptualization, Visualization, Methodology, Supervision, Revision. Yong Wu: Data Curation, Conceptualization, Investigation.

Corresponding author

Correspondence to Yi Zhang.

Ethics declarations

Competing interests

We conducted this study solely as part of our research program and declare no conflict of interest. We note that the second author is a co-inventor of the peer review system used in the study.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix

Figure 3

The sample size for each pairwise correlation

Figure 4

Scree plot for MDS

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Cite this article

Zhang, Y., Schunn, C.D. & Wu, Y. What does it mean to be good at peer reviewing? A multidimensional scaling and cluster analysis study of behavioral indicators of peer feedback literacy. Int J Educ Technol High Educ 21, 26 (2024). https://doi.org/10.1186/s41239-024-00458-1


Received: 20 December 2023

Accepted: 18 March 2024

Published: 22 April 2024

DOI: https://doi.org/10.1186/s41239-024-00458-1


Keywords

  • Peer reviewing quality
  • Peer feedback literacy
  • Multidimensional scaling

international journal of educational research review

International Journal of Educational Policy Research and Review

  • IJEPRR Home
  • Instructions for Authors
  • Submit Manuscript
  • IJEPRR Articles
  • IJEPRR Archive
  • IJEPRR Indexing

Abbreviation: Int. J. Edu. Pol. Res. Rev.

Issn: 2360-7076, start year: 2013, model: open access/peer reviewed, current issue, june 2023;10(2), all issues  .

About IJEPRR

The International Journal of Educational Policy Research and Review (IJEPRR) ( ISSN 2360-7076 )is a double blind peer review open access journal published  bimonthly (one issue in every two months), six issue in a year.

Covered Research Areas and Related Disciplines

The journal encompasses a wide variety of research topics, including:

– pedagogical organization and development, vocational education, special education;

– early child development, preschool, primary and secondary level education, higher level education;

– reading and writing comprehension, primary and secondary language training;

– modern and classic philosophies of education and educational approaches, alternative education, indigenous education, autodidacticism, unschooling;

– informal learning, open education, educational psychology and philosophy;

– methods and techniques of teaching and learning, critical thinking, standard and special curriculum, collaborative learning, linear learning, distance and e-learning;

– psychological approaches: behaviorism, cognitivism, constructivism, etc.;

– national and international strategies and organizations.

Publication Procedure

1. All our published research works can be freely accessed, visualized, downloaded and modified under a Creative Common License 4.0. No registration or creating of an account are necessary.

2. Authors who choose to publish their research works may contact us at [email protected] and submit their manuscripts at the specified email addresses.

3. After submitting, an informative email containing the present publication procedure will be sent at the email address provided by the submitter. The review procedure begins after the author communicate us his agreement acknowledgement of our publication procedure by signing our consent form.

4. After the manuscript’s review, the submitter is informed about the acceptance of his submission at the email address provided by the submitter. In case of accepted manuscripts, an informative email is containing the details for the processing fee payment procedure.

5. The Article Processing Fee is required to be paid only if the article has been accepted for publication. It has a fixed amount no matter how many authors, pages, graphic content the article may have.

6. After the payment of the processing fee, the manuscript is updated, edited, formatted and published on the current issue of the selected journal. A DOI is assigned and a publication certificate is issued.

7. The article’s file (a pdf file), the publication certificate are sent attached at an informative email containing the addresses of the issue, abstract’s and article’s page.

Types of Articles:

Original Research Articles:  These should describe new and carefully analysed and confirmed findings, backed with experimental procedures. Articles should be given in sufficient details for others to verify the work. The length of a full paper should be concise, required to describe and interpret the work clearly. Please include in the main paper a set of  key words  ; an abstract- summarizing background of the work; research results and its implications. Followed by INTRODUCTION, MATERIALS and METHODS, RESULTS, DISCUSSION, ACKNOWLEDGMENTS and REFERENCES. All these must be in capital letters but not underlined.

Short Research Communication:  These should presents a concise study, or sometimes preliminary but innovative. A research finding that might be less substantial than a full research paper. Short Research Communication is limited to 3000 words (excluding references and abstract). The main sections need not conform to that of full-length papers. It should have a set of  key words  and an abstract summarizing background of the work, the results and their implications. RESULTS AND DISCUSSION Section should be combined and followed by CONCLUSION. MATERIALS AND METHODS will remain as a separate section.

Review or mini-review : A review article typically presents a summary and critical evaluation of information that has already been published, and considers the progress of current research toward clarifying a stated problem or topic. Submissions of reviews and perspectives covering topics of current interest are welcome and should be authoritative. Reviews should be concise, not exceeding 7 printed pages.

For more detailed information see  Guide for Authors .

  • Peer Review Process

1. A designated member of the editorial board will make an initial assessment about the quality and suitability of a submission in terms of scope, originality and plagiarism. This assessment it allows our qualified reviewers a better usage of their time and resources. Paper written within the guidelines regarding the structure of the paper is thus very important.

2. Once the designated internal editorial board member is satisfied about the suitability of a submission, it is sent to two anonymous peer reviewers for their comment.

3. The reviewer will assign marks and make comment about following six categories:

– Originality of the paper

– The value of the research question and research gap.

– The soundness of the conceptual and technical or methodological framework.

– Insightful discussion in result and discussion section

– The policy implications of the paper

– Academic writing quality.

4. The decision/remarks/suggestions will be communicated to the author(s).

Appellation Procedure:

  • If the author does not agree with the reviewer’s corrections, the author has the right to send an appellation to the editors’ office in the format “reviewer’s corrections – author’s comments”. This document is to be sent to the reviewer, and the editorial staff determines conclusively.
  • If no decision is reached, the editorial staff appoints another expert.

The editorial board considers all manuscripts  on the strict condition  that:

  • the manuscript is your own original work, and does not duplicate any other previously published work, including your own previously published work;
  • the manuscript is not currently under consideration or peer review or accepted for publication or in press or published elsewhere; the manuscript contains nothing that is abusive, defamatory, libellous, obscene, fraudulent, or illegal;
  • for all manuscripts non-discriminatory language is mandatory. Sexist or racist terms must not be used.

Publication Frequency

Starting with January 2019 International Journal of Educational Policy Research and Review will publish 6 issues per year(one issue in every two months).

All articles are published online, in the current issue, immediately as they are accepted and ready for publication. A DOI (Digital Object Identifier) number will be assigned to all articles, whereby they will become searchable and citeable without delay. The assigned DOI will be activated when the current issue is closed. The published articles will also be collated into printable issues which will be available on request after the current issue is closed.

  • Latest Articles
  • Popular Articles

  Factors affecting implementation of competence based curriculum in selected Secondary Schools of Kabale Municipality- Kabale District

  mediating role of contextual elements on adult learners’ antecedents of entrepreneurial intention in tanzania’s higher learning institutions,   artists as engineers: perspective of senior high school students in ghana,   arbitrative role of adult learners’ entrepreneurial attitude and the influence of entrepreneurial education on intention in tanzania,   students’ discipline and academic performance indices in uganda certificate of education examinations.

  • Teachers’ workload in relation to burnout and work performance 9.1k views
  • Beliefs and strategies in Filipino language learning and academic performance of indigenous students 5.1k views
  • School administrators’ instructional leadership skills and teachers’ performance and efficacy in senior high schools in the national capital region, Philippines 2.3k views
  • Using mentimeter to enhance learning and teaching in a large class 1.5k views
  • Awareness, compliance and implementation of disaster risk reduction and management in flood-prone public elementary schools in Butuan city, Philippines 1.3k views
  • Copyright and Permissions
  • Editorial Policy
  • Open Access Policy
  • Publication Process
  • Publication Ethics
  • Reviewers’ Guidelines
  • Publication Charge
  • Make Payment
  • Waiver Policy

RESEARCH REVIEW International Journal of Multidisciplinary

About the journal.

RESEARCH REVIEW International Journal of Multidisciplinary (RRIJM) is an international Double-blind peer-reviewed [refereed] open access online journal. Too often a journal’s decision to publish a paper is dominated by what the editor/s think is interesting and will gain greater readership-both of which are subjective judgments and lead to decisions which are frustrating and delay the publication of your work. RRIJM will rigorously peer-review your submissions and publish all papers that are judged to be technically sound. Judgments about the importance of any particular paper are then made after publication by the readership (who are the most qualified to determine what is of interest to them).

Most conventional journals publish papers from tightly defined subject areas, making it more difficult for readers from other disciplines to read them. RRIJM has no such barriers, which helps your research reach the entire scientific community.

  • Title:  RESEARCH REVIEW International Multidisciplinary Research Journal
  • ISSN:  2455-3085 (Online)
  • Impact Factor: 6.849
  • Crossref DOI:  10.31305/rrijm
  • Frequency of Publication:  Monthly  [12 issues per year]
  • Languages:  English/Hindi/Gujarat  [Multiple Languages]
  • Accessibility:  Open Access
  • Peer Review Process:  Double Blind Peer Review Process
  • Subject:  Multidisciplinary
  • Plagiarism Checker:  Turnitin (License)
  • Publication Format:  Online
  • Contact No.:  +91- 99784 40833
  • Email:  [email protected]
  • Old Website: https://old.rrjournals.com/
  • New Website: https://rrjournals.com/ 
  • Address:  15, Kalyan Nagar, Shahpur, Ahmedabad, Gujarat 380001

Key Features of RRIJM

  • Journal was listed in  UGC  with  Journal No. 44945 (Till 14-06-2019)
  • Journal Publishes online every month
  • Online article submission
  • Standard peer review process

Current Issue

international journal of educational research review

Awareness of Right to Education Act 2009 among the Tribal Parents: A Study on Dakshin Dinajpur District

Advances in electrochemical sensing of depression biomarker serotonin: a comprehensive review, analysis of the representation of women's struggles for empowerment in the literature of maitreyi pushpa मैत्रेयी पुष्पा के साहित्य में सशक्तिकरण के लिए महिलाओं के संघर्षों के प्रतिनिधित्व का विश्लेषण, information.

  • For Readers
  • For Authors
  • For Librarians

Make a Submission

RESEARCH REVIEW International Journal of Multidisciplinary is licensed under a Creative Commons Attribution 4.0 International License .

Click here to go to Old Website

international journal of educational research review

international journal of educational research review

(IJESRR.Journal Impact Factor : 7.02 .) 

IJESRR is a double foreign-reviewed and refereed international journal.Online international journal which has the intention of publishing Peer Reviewed original research papers in the field of Applied Science, Humanities.  

This journal is a forum for mathematicians and various international parties to publish reports and papers. IJESRR  Journal submit  theory and manuscripts of our scientific excellence. please contribute your articles related to science and  arts.

The subject area is in the papers are discovered by scientists. I hope that the content of your paper is only like this. Your opinion will improve the quality and content of the journal.

This Journal Was  Started  Jan 2014, We Publish 6 Issues In A Year .

international journal of educational research review

International Journal of Educational Research Review

Diğer dizinler.

  • Dergi Ana Sayfası
  • Amaç ve Kapsam
  • Makale Gönder
  • Dergi Kurulları
  • İstatistikler
  • Yazım Kuralları
  • Etik İlkeler ve Yayın Politikası
  • Ücret Politikası

Lütfen geri bildirimde bulunmak istediğiniz birimi seçin.

International Journal of Educational Research Review (ISSN:2458-9322) International Journal of Educational Research Review publishes scholarly articles that are of general significance to the education research community and that come from a wide range of areas. The International Journal of Educational Research Review aims to provide a forum for scholarly understanding of the field of education. Articles focus upon concepts, research, review and practices that emphasizes the intersections between education research and other fields, raises new questions, and develops innovative approaches to our understandings of pressing issues. The range of topics covered in the International Journal of Educational Research Review include; read full aims and scope; https://www.ijere.com/page/journal-scopes Peer Review Policy: All articles in this journal will undergo initial editor screening, rigorous double-anonymous peer review, and review by the editorial board. Double-blind peer review International Journal of Educational Research Review follows a double-blind reviewing procedure. This means that the author will remain anonymous to the reviewers throughout peer review. It is the responsibility of the author to anonymize the manuscript and any associated materials. read full Peer Review Policy; https://www.ijere.com/page/peer-review-policy Ownership and management IJERE journal is hosted by Dergipark/Tubitak. IJERE Journal is published with the support of Sakarya University Faculty of Education/TURKEY. Governing body Editor in Chief Dr. Serhat Arslan, Gazi University, Turkey read full Governing body /Ownership and management/ Editorial board member ; https://www.ijere.com/page/editorial-board Copyright and licensing Copyright Statement Copyright violation is an important, and possibly related, ethical issue. Authors should check their manuscripts for possible breaches of copyright law (e.g., where permissions are needed for quotations, artwork or tables taken from other publications or from other freely available sources on the Internet) and secure the necessary permissions before submission to International Journal of Educational Research Review. read full Copyright and licensing; https://www.ijere.com/page/copyright-and-licensing-open-access-statement

Thank you for visiting nature.com. You are using a browser version with limited support for CSS. To obtain the best experience, we recommend you use a more up to date browser (or turn off compatibility mode in Internet Explorer). In the meantime, to ensure continued support, we are displaying the site without styles and JavaScript.

  • View all journals
  • Explore content
  • About the journal
  • Publish with us
  • Sign up for alerts

Latest science news, discoveries and analysis

international journal of educational research review

China's Moon atlas is the most detailed ever made

international journal of educational research review

Mini-colon and brain 'organoids' shed light on cancer and other diseases

international journal of educational research review

First glowing animals lit up the oceans half a billion years ago

international journal of educational research review

Plastic pollution: three numbers that support a crackdown

The maldives is racing to create new land. why are so many people concerned, nato is boosting ai and climate research as scientific diplomacy remains on ice, dna from ancient graves reveals the culture of a mysterious nomadic people, ecologists: don’t lose touch with the joy of fieldwork chris mantegna, european ruling linking climate change to human rights could be a game changer — here’s how charlotte e. blattner.

international journal of educational research review

Should the Maldives be creating new land?

international journal of educational research review

Lethal AI weapons are here: how can we control them?

international journal of educational research review

Algorithm ranks peer reviewers by reputation — but critics warn of bias

international journal of educational research review

How gliding marsupials got their ‘wings’

Atomic clock keeps ultra-precise time aboard a rocking naval ship, who redefines airborne transmission: what does that mean for future pandemics, monkeypox virus: dangerous strain gains ability to spread through sex, new data suggest, how to freeze a memory: putting worms on ice stops them forgetting.

international journal of educational research review

Retractions are part of science, but misconduct isn’t — lessons from a superconductivity lab

international journal of educational research review

Any plan to make smoking obsolete is the right step

international journal of educational research review

Citizenship privilege harms science

Will ai accelerate or delay the race to net-zero emissions, we must protect the global plastics treaty from corporate interference martin wagner, current issue.

Issue Cover

Surprise hybrid origins of a butterfly species

Stripped-envelope supernova light curves argue for central engine activity, optical clocks at sea, research analysis.

international journal of educational research review

Ancient DNA traces family lines and political shifts in the Avar empire

international journal of educational research review

A chemical method for selective labelling of the key amino acid tryptophan

international journal of educational research review

Robust optical clocks promise stable timing in a portable package

international journal of educational research review

Targeting RNA opens therapeutic avenues for Timothy syndrome

Bioengineered ‘mini-colons’ shed light on cancer progression, galaxy found napping in the primordial universe, tumours form without genetic mutations, marsupial genomes reveal how a skin membrane for gliding evolved.

international journal of educational research review

Breaking ice, and helicopter drops: winning photos of working scientists

international journal of educational research review

Shrouded in secrecy: how science is harmed by the bullying and harassment rumour mill

international journal of educational research review

Londoners see what a scientist looks like up close in 50 photographs

How ground glass might save crops from drought on a caribbean island, deadly diseases and inflatable suits: how i found my niche in virology research, books & culture.

international journal of educational research review

How volcanoes shaped our planet — and why we need to be ready for the next big eruption

international journal of educational research review

Dogwhistles, drilling and the roots of Western civilization: Books in brief

international journal of educational research review

Cosmic rentals

‘shut up and calculate’: how einstein lost the battle to explain quantum reality, las boriqueñas remembers the forgotten puerto rican women who tested the first pill, nature podcast.

Nature Podcast

Latest videos

Nature briefing.

An essential round-up of science news, opinion and analysis, delivered to your inbox every weekday.

international journal of educational research review

Quick links

  • Explore articles by subject
  • Guide to authors
  • Editorial policies

Advancing social justice, promoting decent work ILO is a specialized agency of the United Nations

Migrated Content

Women working in a cosmetics factory near Nairobi, Kenya.

Occupational accidents and diseases lead to devastating impacts on workers, enterprises and entire communities and economies. Despite many improvements, the prevention of accidents and work-related diseases continues to have a considerable importance on a global scale.

An image showing lightning, a tractor, agricultural workers and pollution

Global Event: Climate change and safety and health at work

Thursday, 25 April 13:30 - 15:00 (CEST) World Day for Safety and Health at Work

News and articles

Workers in the heat and fumes of chemical products in the street

OSH and climate change

Climate change creates a ‘cocktail’ of serious health hazards for 70 per cent of the world’s workers

Placeholder image

Media Advisory

New ILO report to reveal dangerous and long-lasting effects of climate change on workers’ health and safety

Areas of Work

  • Chemical Hazards  
  • Biological Hazards  
  • Psychosocial Factors 
  • Physical Hazards  
  • Learn more 
  • Global Strategy on Occupational Safety and Health 2024-2030 and plan of action for its implementation  
  • National OSH Profiles, Policies and Programmes (coming soon)
  • Labour administration and Inspection
  • International Classification of Radiographs of Pneumoconioses  
  • ILO List of Occupational Diseases (revised 2010) 
  • Diagnostic and exposure criteria for occupational diseases - Guidance notes for diagnosis and prevention of the diseases in the ILO List of Occupational Diseases (revised 2010)  
  • Gender 
  • Young Workers
  • Economic Aspects  

Development Cooperation

Malagasy Man pushing a Blue Barrel

Safety + Health for All

The Programme mobilizes development cooperation resources to improve the safety and health of workers worldwide.

  • Sub programme - The Vision Zero Fund

Social safety nets

ILO/Japan Fund for Building Social Safety Nets in Asia and the Pacific (SSN Fund)

Improving Workers' Rights in Rural Sectors of the Indo-Pacific with a Focus on Women

Normative instruments

View of a session of the Governing Body of the International Labour Organization.

Occupational Safety and Health Standards and Instruments

  • The Occupational Safety and Health Convention, 1981 (No. 155)
  • The Promotional Framework for Occupational Safety and Health Convention, 2006 (No. 187)
  • Other Conventions and Recommendations
  • Codes of practice (link TBC)

Publications

Overview of the key findings of the report

Report at a glance: Ensuring safety and health at work in a changing climate

Global Report

Ensuring safety and health at work in a changing climate

International labour standards

Pakistan’s tanning and leather industries : An overview of trends and labour and environmental conditions

  • INTEROSH - ILO Database on Occupational Safety and Health Agencies, Institutions and Organizations around the world
  • Encyclopaedia of Occupational Health and Safety
  • LEGOSH - Global database on occupational safety and health legislation
  • CISDOC - Archived bibliographic database
  • International Chemical Safety Cards (ICSCs)

Training Courses

The International Training Centre of the ILO (ITC-Turin) provides a variety of free and paid courses online and in-person courses related to occupational safety and health.

VIDEO

  1. Condition of Worth in Fully Functioning Person Theory

  2. Research, Educational research

  3. Educational research review Session 2

  4. Thierry Rocher on Creativity and Critical Thinking

  5. IR Theory Interview Series

  6. ERJ Open Research: a message from the Chief Editor

COMMENTS

  1. IJERE

    The International Journal of Educational Research Review aims to provide a forum for scholarly understanding of the field of education. Articles focus upon concepts, research, review and practices that emphasizes the intersections between education research and other fields, raises new questions, and develops innovative approaches to our ...

  2. International Journal of Educational Research Review » Home

    The International Journal of Educational Research Review aims to provide a forum for scholarly understanding of the field of education. Articles focus upon concepts, research, review and practices that emphasizes the intersections between education research and other fields, raises new questions, and develops innovative approaches to our ...

  3. International Journal of Educational Research

    The International Journal of Educational Research Open (IJEDRO) is a companion title of the International Journal of Educational Research (IJER). IJEDRO is an open access, peer-reviewed journal which draws contributions from a wide community of international and interdisciplinary researchers …. View full aims & scope.

  4. International Journal of Educational Research Review

    The International Journal of Educational Research Review aims to provide a forum for scholarly understanding of the field of education. Articles focus upon concepts, research, review and practices that emphasizes the intersections between education research and other fields, raises new questions, and develops innovative approaches to our ...

  5. Educational Research Review

    The Journal of the European Association for Research on Learning and Instruction (EARLI) Educational Research Review is an international journal addressed to researchers and various agencies interested in the review of studies and theoretical papers in education at any level.The journal accepts high quality articles that are solving educational research problems by using a review approach.

  3. Review of Educational Research: Sage Journals

    The Review of Educational Research (RER) publishes critical, integrative reviews of research literature bearing on education, including conceptualizations, interpretations, and syntheses of literature and scholarly work in a field broadly relevant to education and educational research. View full journal description

  4. Guide for authors

    The International Journal of Educational Research publishes research papers in the field of Education. Besides educational researchers, the journal is widely read by students, educational practitioners, and policy-makers. ... Peer review: This journal operates a double-anonymized review process. All contributions will be initially assessed by ...

  5. International Journal of Educational Research Review

    The International Journal of Educational Research Review (IJERE) aims to be a scientific reference source where researchers can publish their studies and have access to relevant studies. IJERE is an international journal for researchers interested in the review of studies and theoretical papers on education at any level.

  6. International Journal of Educational Research

    The International Journal of Educational Research publishes research manuscripts in the field of education. Work must be of a quality and context that the Editorial Board thinks would be of interest to an international readership. The aims and scope of the journal are to: Provide a journal that reports research on topics that are of ...

  7. International Review of Education » Home

    International Review of Education is a scholarly platform dedicated to policy-relevant and theoretically-informed research in lifelong and life-wide learning in international and comparative contexts. It provides insight into adult education, non-formal education, adult literacy, open and distance learning, and vocational education.

  8. Volumes and issues

    Volume 68 February - December 2022. Issue 6 December 2022. Issue 5 October 2022. The Faure report - 50 years on. Issue 4 August 2022. Issue 3 June 2022. Issue 2 April 2022. Special issue on Strengthening the future of adult education and lifelong learning for all: Building bridges between CONFINTEA and the SDGs. Issue 1 February 2022.

  9. International Journal of Evaluation and Research in Education (IJERE)

    International Journal of Evaluation and Research in Education (IJERE), p-ISSN: 2252-8822, e-ISSN: 2620-5440, is an interdisciplinary publication of original research and writing on education which publishes papers for international audiences of educational researchers. This journal aims to provide a forum for scholarly understanding of the field of education and plays an important role in ...

  10. International Journal of Educational Research and Review

    About IJERR. International Journal of Educational Research and Review (ISSN: 2756-4789) is an open access academic refereed journal published monthly by Spectacular Journals. IJERR publishes research articles that report premier fundamental discoveries and inventions, and the applications of those discoveries, unconfined by traditional discipline barriers.

  11. International Journal of Educational Research

    The global inter-network governance of UN policy programs on climate change education. Marcia McKenzie, Nicolas Stahelin. Article 102093. Read the latest articles of International Journal of Educational Research at ScienceDirect.com, Elsevier's leading platform of peer-reviewed scholarly literature.

  12. What does it mean to be good at peer reviewing? A multidimensional

    Peer feedback literacy is becoming increasingly important in higher education as peer feedback has substantially grown as a pedagogical approach. However, quality of produced feedback, a key behavioral aspect of peer feedback literacy, lacks a systematic and evidence-based conceptualization to guide research, instruction, and system design. We introduce a novel framework involving six ...

  13. International Journal of Educational Policy Research and Review

    The International Journal of Educational Policy Research and Review (IJEPRR) (ISSN 2360-7076) is a double-blind peer-reviewed open access journal published bimonthly (one issue every two months), six issues a year. Covered Research Areas and Related Disciplines. The journal encompasses a wide variety of research topics, including:

  14. RESEARCH REVIEW International Journal of Multidisciplinary

    Title: RESEARCH REVIEW International Multidisciplinary Research Journal. ISSN: 2455-3085 (Online). Impact Factor: 6.849. Crossref DOI: 10.31305/rrijm. Frequency of Publication: Monthly [12 issues per year]. Languages: English/Hindi/Gujarati [Multiple Languages]. Accessibility: Open Access. Peer Review Process: Double Blind Peer Review Process.

  15. International Journal of Education and Science Research Review

    Welcome to the "International Journal of Education and Science Research Review" (IJESRR; Journal Impact Factor: 7.02). IJESRR is a double-blind reviewed and refereed international online journal that publishes peer-reviewed original research papers in the fields of Applied Science and Humanities.

  16. International Journal of Educational Research Open

    The International Journal of Educational Research Open (IJEDRO) is a companion title of the International Journal of Educational Research (IJER). IJEDRO is an open access, peer-reviewed journal which draws contributions from a wide community of international and interdisciplinary researchers …. View full aims & scope.

  17. Mental Health Outcomes in Teachers of Students with Disabilities: A

    ABSTRACT. Teachers who work with students with disabilities may have high levels of mental health problems, such as burnout or stress. The aim of this systematic review was to summarise the evidence on mental health outcome assessments among teachers of students with disabilities in different work spaces and to compare the levels of burnout and stress among teachers from general/regular versus ...

  18. International Journal of Educational Research

    Understanding the nexus of school types, school cultures and educational outcomes and its implication for policy and practice. Francis Ewulley, Moses Ackah Anlimachie, Might Kojo Abreh, Ewuraba Efua Mills. Article 102237. Read the latest articles of International Journal of ...

  19. Safety and Health at Work

    Occupational accidents and diseases lead to devastating impacts on workers, enterprises and entire communities and economies. Despite many improvements, the prevention of accidents and work-related diseases continues to be of considerable importance on a global scale.