Principles of assessment for learning


Self and peer assessment


Discover how you can use self and peer assessment to actively involve students in their learning, including teaching tips and examples to use in your classroom


Self and peer assessment gives students a structure to reflect on their work, what they have learned and how to improve.

What is self and peer assessment?

Self-assessment enables students to take ownership of their learning by judging the extent of their knowledge and understanding. It provides a structure for them to reflect on their work, what they have learned and how to improve.

Peer-assessment, where they act as critical friends and support each other, can help students to develop self-assessment skills.

In order to make any judgements, students must have grasped the learning and the standards of work expected of them.

Why use these techniques?

Through self and peer assessment, students take more responsibility for their own learning. It helps the individual to:

  • assess their own progress objectively
  • crystallise learning objectives
  • recognise their understanding
  • think about what they did not understand
  • grow in confidence
  • take their own learning forwards.

Within the class, it fosters respect and collaboration.

Peer criticism can be more effective than that from the teacher because:

  • The normal shared language will be used.
  • It acts as a stimulus to complete work and to raise standards.
  • Some students are more receptive to comments from their peers.
  • Group feedback can command more attention than that of an individual.

It frees up the teacher to concentrate on what is not known, rather than what is.

How do I set up self or peer assessment?

When preparing for an activity involving self or peer assessment, it is vital to:

  • Create an atmosphere of mutual trust.
  • Decide how the students will discover the learning objectives. Criteria for success must be transparent.
  • Select a technique suitable to the topic (see ‘Example activities’ below for some ideas). Give explicit instructions.
  • Encourage students to listen to others, to ask questions on points that they do not understand and to contribute ideas and opinions (see ‘Discussion and feedback’ below).

Example activities

Examples of what the students might do include:

  • Research and present within a small group, which then judges each talk.
  • Make a judgement about an answer and suggest improvements.
  • Use the criteria to give feedback about their peer’s work.
  • Research answers in order to give feedback about their peer’s work.
  • Comment on anonymous work.
  • Indicate how confident they are about a topic or task (both before and after an activity).
  • Write questions to match a learning outcome and then answer questions written by others.
  • In groups, generate questions for homework, then select the best through class discussion.
  • Analyse a marking scheme and apply it to their own or others’ work.
  • Develop the learning outcomes for a given area of work for themselves.

Discussion and feedback

  • Have a strategy to tackle the weaknesses that are identified. For example, if it is a small number of students, draw them together for further work whilst giving the rest of the class an extension activity.
  • Allow plenty of time for students to take action following feedback from peers or you. This may be repeating an experiment, carrying out further research or rewriting their notes. You may have to provide input for this.
  • Use plenaries and feedback to pause and take stock, both during and towards the end of the session.
  • Check that, if needed, students have made correct records.

Hints and tips for promoting effective self and peer assessment

Alternative plenary

In this variation, a small group of students leads the discussion, instead of the teacher. When preparing and running the activity, it is important to:

  • Let students know that they will sometimes lead a plenary themselves.
  • Remind the class of the learning objectives.
  • Use judicious questions to review the learning achieved.
  • Summarise as a basis for working out the next steps.
  • Ensure that the class agrees with any summary (perhaps through group discussion).
  • Ensure that there is opportunity for students to make additional points.
  • Give supportive, tactful feedback to the leaders.

‘Traffic lights’ or ‘Thumbs up’

Using this technique, students show an instant evaluation of their knowledge and understanding. From this, both teacher and student can recognise problems.

  • thumbs up – confident
  • thumbs sideways – some uncertainty
  • thumbs down – little confidence.

Using green, amber and red ‘traffic light’ cards, instead of thumbs, makes students give a definite response and provides the teacher with a good visual indicator. These cards can also be used for students to show their choice between alternatives, for example, ‘Do you think the answer is 1, 2 or 3?’

Cards or thumbs can be used at any time during a session.

Prompt questions

You can use questions to help students move forward.

Appropriate questions include:

  • What do you think you could improve?
  • Why do you want to improve that?
  • What was the hardest part?
  • What help do you need?

Learning diary

To ensure that the self or peer assessment activity is meaningful, and not a bureaucratic exercise, it can be helpful to make recording an integral part of activities. The diary could be linked to plenaries and written in class notes. Headings or questions might include:

  • What was exciting in chemistry this week.
  • The most important thing I learned this week.
  • What I did well. What I need to do more work on.
  • Which targets I’ve met.

The questions do not need to be the same each week.

Is there anything else teachers should think about?

When preparing and running a self or peer assessment activity, consider:

  • Introducing the technique gradually so that skills are developed.
  • Different methods for introducing students to the learning objectives/outcomes.
  • Setting up a supportive atmosphere, so that students are comfortable about admitting to problems.
  • Giving students sufficient time to work out the problems.
  • Making the encouragement of self-reflection intrinsic to teaching.

Common issues to watch out for

  • It takes time, patience and commitment to develop self and peer assessment. For preference, there should be a strong learning culture and an environment of mutual trust throughout the school.
  • Students will need group skills.
  • There must be an opportunity for the expected learning and standards to be made clear.
  • Teachers need to listen unobtrusively to avert the propagation of misunderstandings (careful group selection also helps).
  • A few students will treat work in class only as exercises to be completed, not internalised.

How can I tell if self or peer assessment is successful?

When you devise your checklist to evaluate the session, consider how you will measure:

  • How well the students understood the objectives.
  • Whether the student groupings worked as you wished.
  • If the students improved their self-assessment skills.
  • How meaningful the peer assessment was.
  • The students’ response to the technique.
  • The support for different abilities.
  • Whether the lesson correlated with the objectives.
  • Improvement in work standards.

Additional information

This information was originally part of the Assessment for Learning website, published in 2008.


Peer and Self-Assessment Points Essay


Why Is Peer Assessment Important?

  • Peer assessment means that students evaluate their peers according to given criteria;
  • Students are involved in peer assessment in many ways, including both direct and indirect forms;
  • Students learn assessment culture and ethics through this type of activity;
  • Students may contribute to improving the evaluation criteria;
  • Peer assessment increases motivation and promotes students’ engagement (Panadero, Romero, & Strijbos, 2013).

What Are the Advantages and Disadvantages of Peer Assessment?

  • It encourages deep learning through genuine understanding of the material and assessment criteria, rather than mere reading;
  • Students may learn from others’ mistakes and successes, juxtaposing their peers’ strong and weak points with their own experience;
  • By placing more responsibility on students, the teacher gains more time to interact with them;
  • Timely feedback is a result of proper peer assessment;
  • There is a potential to reduce the “free rider” issue, as students are occupied with assessment tasks;
  • Students reflect on their own role and those of others in the learning process;
  • Development of creative thinking, critiquing, and judging skills (“University of Reading”, 2017).

Disadvantages

  • Improper assessment may confuse an assessed student, causing disappointment and frustration;
  • Extra briefing time adds to a teacher’s workload;
  • Reliability and validity of grades are not ensured;
  • Students may feel that they have to give similar grades to each other out of equality and solidarity (Panadero et al., 2013);
  • Students may discriminate against others or be discriminated against themselves.

In What Format Is Peer Assessment Done?

  • Feedback to peers in the form of peer review (formative assessment);
  • Summative grades given by the group as a whole or by each student individually;
  • A combination of the two may be beneficial for comprehensive assessment and the development of critical skills;
  • Analytic rubrics may be used to review the completed assessment, including criteria such as knowledge of the topic, critical thinking abilities, and presentation of ideas (Brookhart, 2013).

Why Is Self-Assessment Important?

  • Self-assessment means that a student reflects on his or her own performance, skills, and other issues against the set assessment criteria;
  • Self-assessment provides a student with the opportunity to understand his or her own strengths and weaknesses, and so to work out how to improve;
  • Self-assessment increases students’ responsibility for, and involvement in, learning (Brown & Harris, 2014).

What Are the Advantages and Disadvantages of Self-Assessment?

  • Development of reflective and critical thinking skills;
  • Students may focus their efforts on strengthening their weak points to succeed in the future;
  • Promotion of continuous learning;
  • The opportunity to reflect on their peers’ assessment of their contribution;
  • Promotion of students’ awareness of their role in the course of learning and evaluation (“University of Reading”, 2017).

Disadvantages

  • A teacher may experience an increased workload due to the need to guide students on self-assessment;
  • Unreliable and inflated grades may be given by students;
  • Students may lack the preparation needed to grade themselves;
  • “When self-assessments are disclosed (e.g., traffic light self-assessments displayed to the teacher in front of the class), there are strong psychological pressures on students that lead to dissembling and dishonesty” (Brown & Harris, 2014, p. 23).

In What Format Is Self-Assessment Done?

  • Reflective exercises in the form of a diary;
  • Short essays and presentations reflecting on students’ work;
  • Essay feedback questionnaires to understand students’ perceptions of themselves;
  • A combination of formats, depending on the particular setting and class;
  • Analytic rubrics are a suitable format for reviewing self-assessment in a comprehensive manner, clearly stating its strong and weak points;
  • A rubric consisting of criteria such as reliability, validity, adherence to self-assessment points, and specific requirements may be used (Brookhart, 2013).

Brookhart, S. M. (2013). How to create and use rubrics for formative assessment and grading. Alexandria, VA: ASCD.

Brown, G., & Harris, L. (2014). The future of self-assessment in classroom practice: Reframing self-assessment as a core competency. Frontline Learning Research, 2(1), 22–30.

Panadero, E., Romero, M., & Strijbos, J. (2013). The impact of a rubric and friendship on peer assessment: Effects on construct validity, performance, and perceptions of fairness and comfort. Studies in Educational Evaluation, 39(4), 195–203.

University of Reading. (2017). Web.


The Impact of Peer Assessment on Academic Performance: A Meta-analysis of Control Group Studies

Meta-Analysis | Open access | Published: 10 December 2019 | Volume 32, pages 481–509 (2020)


Kit S. Double (ORCID: orcid.org/0000-0001-8120-1573), Joshua A. McGrane & Therese N. Hopfenbeck


Peer assessment has been the subject of considerable research interest over the last three decades, with numerous educational researchers advocating for the integration of peer assessment into schools and instructional practice. Research synthesis in this area has, however, largely relied on narrative reviews to evaluate the efficacy of peer assessment. Here, we present a meta-analysis (54 studies, k = 141) of experimental and quasi-experimental studies that evaluated the effect of peer assessment on academic performance in primary, secondary, or tertiary students across subjects and domains. An overall small to medium effect of peer assessment on academic performance was found ( g = 0.31, p < .001). The results suggest that peer assessment improves academic performance compared with no assessment ( g = 0.31, p = .004) and teacher assessment ( g = 0.28, p = .007), but was not significantly different in its effect from self-assessment ( g = 0.23, p = .209). Additionally, meta-regressions examined the moderating effects of several feedback and educational characteristics (e.g., online vs offline, frequency, education level). Results suggested that the effectiveness of peer assessment was remarkably robust across a wide range of contexts. These findings provide support for peer assessment as a formative practice and suggest several implications for the implementation of peer assessment into the classroom.


Feedback is often regarded as a central component of educational practice and crucial to students’ learning and development (Fyfe & Rittle-Johnson, 2016 ; Hattie and Timperley 2007 ; Hays, Kornell, & Bjork, 2010 ; Paulus, 1999 ). Peer assessment has been identified as one method for delivering feedback efficiently and effectively to learners (Topping 1998 ; van Zundert et al. 2010 ). The use of students to generate feedback about the performance of their peers is referred to in the literature using various terms, including peer assessment, peer feedback, peer evaluation, and peer grading. In this article, we adopt the term peer assessment, as it more generally refers to the method of peers assessing or being assessed by each other, whereas the term feedback is used when we refer to the actual content or quality of the information exchanged between peers. This feedback can be delivered in a variety of forms including written comments, grading, or verbal feedback (Topping 1998 ). Importantly, by performing both the role of assessor and being assessed themselves, students’ learning can potentially benefit more than if they are just assessed (Reinholz 2016 ).

Peer assessments tend to be highly correlated with teacher assessments of the same students (Falchikov and Goldfinch 2000 ; Li et al. 2016 ; Sanchez et al. 2017 ). However, in addition to establishing comparability between teacher and peer assessment scores, it is important to determine whether peer assessment also has a positive effect on future academic performance. Several narrative reviews have argued for the positive formative effects of peer assessment (e.g., Black and Wiliam 1998a ; Topping 1998 ; van Zundert et al. 2010 ) and have additionally identified a number of potentially important moderators for the effect of peer assessment. This meta-analysis will build upon these reviews and provide quantitative evaluations for some of the instructional features identified in these narrative reviews by utilising them as moderators within our analysis.

Evaluating the Evidence for Peer Assessment

Empirical Studies

Despite the optimism surrounding peer assessment as a formative practice, there are relatively few control group studies that evaluate the effect of peer assessment on academic performance (Flórez and Sammons 2013 ; Strijbos and Sluijsmans 2010 ). Most studies on peer assessment have tended to focus on either students’ or teachers’ subjective perceptions of the practice rather than its effect on academic performance (e.g., Brown et al. 2009 ; Young and Jackman 2014 ). Moreover, interventions involving peer assessment often confound the effect of peer assessment with other assessment practices that are theoretically related under the umbrella of formative assessment (Black and Wiliam 2009 ). For instance, Wiliam et al. ( 2004 ) reported a mean effect size of .32 in favor of a formative assessment intervention but they were unable to determine the unique contribution of peer assessment to students’ achievement, as it was one of more than 15 assessment practices included in the intervention.

However, as shown in Fig. 1 , there has been a sharp increase in the number of studies related to peer assessment, with over 75% of relevant studies published in the last decade. Although it is still far from being the dominant outcome measure in research on formative practices, many of these recent studies have examined the effect of peer assessment on objective measures of academic performance (e.g., Gielen et al. 2010a ; Liu et al. 2016 ; Wang et al. 2014a ). The number of studies of peer assessment using control group designs also appears to be increasing in frequency (e.g., van Ginkel et al. 2017 ; Wang et al. 2017 ). These studies have typically compared the formative effect of peer assessment with either teacher assessment (e.g., Chaney and Ingraham 2009 ; Sippel and Jackson 2015 ; van Ginkel et al. 2017 ) or no assessment conditions (e.g., Kamp et al. 2014 ; L. Li and Steckelberg 2004 ; Schonrock-Adema et al. 2007 ). Given the increase in peer assessment research, and in particular experimental research, it seems pertinent to synthesise this new body of research, as it provides a basis for critically evaluating the overall effectiveness of peer assessment and its moderators.

Fig. 1 Number of records returned by year. The following search terms were used: ‘peer assessment’, ‘peer grading’, ‘peer evaluation’ or ‘peer feedback’. Data were collated by searching Web of Science (www.webofknowledge.com) for these keywords and categorising by year.

Previous Reviews

Efforts to synthesise peer assessment research have largely been limited to narrative reviews, which have made very strong claims regarding the efficacy of peer assessment. For example, in a review of peer assessment with tertiary students, Topping ( 1998 ) argued that the effects of peer assessment are, ‘as good as or better than the effects of teacher assessment’ (p. 249). Similarly, in a review on peer and self-assessment with tertiary students, Dochy et al. ( 1999 ) concluded that peer assessment can have a positive effect on learning but may be hampered by social factors such as friendships, collusion, and perceived fairness. Reviews into peer assessment have also tended to focus on determining the accuracy of peer assessments, which is typically established by the correlation between peer and teacher assessments for the same performances. High correlations have been observed between peer and teacher assessments in three meta-analyses to date ( r = .69, .63, and .68 respectively; Falchikov and Goldfinch 2000 ; H. Li et al. 2016 ; Sanchez et al. 2017 ). Given that peer assessment is often advocated as a formative practice (e.g., Black and Wiliam 1998a ; Topping 1998 ), it is important to expand on these correlational meta-analyses to examine the formative effect that peer assessment has on academic performance.

In addition to examining the correlation between peer and teacher grading, Sanchez et al. ( 2017 ) additionally performed a meta-analysis on the formative effect of peer grading (i.e., a numerical or letter grade was provided to a student by their peer) in intervention studies. They found that there was a significant positive effect of peer grading on academic performance for primary and secondary (grades 3 to 12) students ( g = .29). However, it is unclear whether their findings would generalise to other forms of peer feedback (e.g., written or verbal feedback) and to tertiary students, both of which we will evaluate in the current meta-analysis.

Moderators of the Effectiveness of Peer Assessment

Theoretical frameworks of peer assessment propose that it is beneficial in at least two respects. Firstly, peer assessment allows students to critically engage with the assessed material, to compare and contrast performance with their peers, and to identify gaps or errors in their own knowledge (Topping 1998 ). In addition, peer assessment may improve the communication of feedback, as peers may use similar and more accessible language, as well as reduce negative feelings of being evaluated by an authority figure (Liu et al. 2016 ). However, the efficacy of peer assessment, like traditional feedback, is likely to be contingent on a range of factors including characteristics of the learning environment, the student, and the assessment itself (Kluger and DeNisi 1996 ; Ossenberg et al. 2018 ). Some of the characteristics that have been proposed to moderate the efficacy of feedback include anonymity (e.g., Rotsaert et al. 2018 ; Yu and Liu 2009 ), scaffolding (e.g., Panadero and Jonsson 2013 ), quality and timing of the feedback (Diab 2011 ), and elaboration (e.g., Gielen et al. 2010b ). Drawing on the previously mentioned narrative reviews and empirical evidence, we now briefly outline the evidence for each of the included theoretical moderators.

It is somewhat surprising that most studies that examine the effect of peer assessment tend to only assess the impact on the assessee and not the assessor (van Popta et al. 2017 ). Assessing may confer several distinct advantages such as drawing comparisons with peers’ work and increased familiarity with evaluative criteria. Several studies have compared the effect of assessing with being assessed. Lundstrom and Baker ( 2009 ) found that assessing a peer’s written work was more beneficial for their own writing than being assessed by a peer. Meanwhile, Graner ( 1987 ) found that students who were receiving feedback from a peer and acted as an assessor did not perform better than students who acted as an assessor but did not receive peer feedback. Reviewing peers’ work is also likely to help students become better reviewers of their own work and to revise and improve their own work (Rollinson 2005 ). While, in practice, students will most often act as both assessor and assessee during peer assessment, it is useful to gain a greater insight into the relative impact of performing each of these roles for both practical reasons and to help determine the mechanisms by which peer assessment improves academic performance.

Peer Assessment Type

The characteristics of peer assessment vary greatly both in practice and within the research literature. Because meta-analysis is unable to capture all of the nuanced dimensions that determine the type, intensity, and quality of peer assessment, we focus on distinguishing between what we regard as the most prevalent types of peer assessment in the literature: grading, peer dialogs, and written assessment. Each of these peer assessment types is widely used in the classroom and often in various combinations (e.g., written qualitative feedback in combination with a numerical grade). While these assessment types differ substantially in terms of their cognitive complexity and comprehensiveness, each has shown at least some evidence of an impact on academic performance (e.g., Sanchez et al. 2017; Smith et al. 2009; Topping 2009).

Freeform/Scaffolding

Peer assessment is often implemented in conjunction with some form of scaffolding, for example, rubrics and scoring scripts. Scaffolding has been shown to improve both the quality of peer assessment and the amount of feedback assessors provide (Peters, Körndle & Narciss, 2018). Peer assessment has also been shown to be more accurate when rubrics are utilised. For example, Panadero, Romero, & Strijbos (2013) found that students were less likely to overscore their peers.

Increasingly, peer assessment has been performed online due in part to the growth in online learning activities as well as the ease by which peer assessment can be implemented online (van Popta et al. 2017 ). Conducting peer assessment online can significantly reduce the logistical burden of implementing peer assessment (e.g., Tannacito and Tuzi 2002 ). Several studies have shown that peer assessment can effectively be carried out online (e.g., Hsu 2016 ; Li and Gao 2016 ). Van Popta et al. ( 2017 ) argue that the cognitive processes involved in peer assessment, such as evaluating, explaining, and suggesting, similarly play out in online and offline environments. However, the social processes involved in peer assessment are likely to substantively differ between online and offline peer assessment (e.g., collaborating, discussing), and it is unclear whether this might limit the benefits of peer assessment through one or the other medium. To the authors’ knowledge, no prior studies have compared the effects of online and offline peer assessment on academic performance.

Because peer assessment is fundamentally a collaborative assessment practice, interpersonal variables play a substantial role in determining the type and quality of peer assessment (Strijbos and Wichmann 2018 ). Some researchers have argued that anonymous peer assessment is advantageous because assessors are more likely to be honest in their feedback, and interpersonal processes cannot influence how assessees receive the assessment feedback (Rotsaert et al. 2018 ). Qualitative evidence suggests that anonymous peer assessment results in improved feedback quality and more positive perceptions towards peer assessment (Rotsaert et al. 2018 ; Vanderhoven et al. 2015 ). A recent qualitative review by Panadero and Alqassab ( 2019 ) found that three studies had compared anonymous peer assessment to a control group (i.e., open peer assessment) and looked at academic performance as the outcome. Their review found mixed evidence regarding the benefit of anonymity in peer assessment with one of the included studies finding an advantage of anonymity, but the other two finding little benefit of anonymity. Others have questioned whether anonymity impairs the development of cognitive and interpersonal development by limiting the collaborative nature of peer assessment (Strijbos and Wichmann 2018 ).

Peers are often novices at providing constructive assessment and inexperienced learners tend to provide limited feedback (Hattie and Timperley 2007 ). Several studies have therefore suggested that peer assessment becomes more effective as students’ experience with peer assessment increases. For example, with greater experience, peers tend to use scoring criteria to a greater extent (Sluijsmans et al. 2004 ). Similarly, training peer assessment over time can improve the quality of feedback they provide, although the effects may be limited by the extent of a student’s relevant domain knowledge (Alqassab et al. 2018 ). Frequent peer assessment may also increase positive learner perceptions of peer assessment (e.g., Sluijsmans et al. 2004 ). However, other studies have found that learner perceptions of peer assessment are not necessarily positive (Alqassab et al. 2018 ). This may suggest that learner perceptions of peer assessment vary depending on its characteristics (e.g., quality, detail).

Current Study

Given the previous reliance on narrative reviews and the increasing research and teacher interest in peer assessment, as well as the popularity of instructional theories advocating for peer assessment and formative assessment practices in the classroom, we present a quantitative meta-analytic review to develop and synthesise the evidence in relation to peer assessment. This meta-analysis evaluates the effect of peer assessment on academic performance when compared to no assessment as well as teacher assessment. To do this, the meta-analysis only evaluates intervention studies that utilised experimental or quasi-experimental designs, i.e., only studies with control groups, so that the effects of maturation and other confounding variables are mitigated. Control groups can be either passive (e.g., no feedback) or active (e.g., teacher feedback). We meta-analytically address two related research questions:

What effect do peer assessment interventions have on academic performance relative to the observed control groups?

What characteristics moderate the effectiveness of peer assessment?

Working Definitions

The specific methods of peer assessment can vary considerably, but there are a number of shared characteristics across most methods. Peers are defined as individuals at similar (i.e., within 1–2 grades) or identical education levels. Peer assessment must involve assessing or being assessed by peers, or both. Peer assessment requires the communication (either written, verbal, or online) of task-relevant feedback, although the style of feedback can differ markedly, from elaborate written and verbal feedback to holistic ratings of performance.

We took a deliberately broad definition of academic performance for this meta-analysis including traditional outcomes (e.g., test performance or essay writing) and also practical skills (e.g., constructing a circuit in science class). Despite this broad interpretation of academic performance, we did not include any studies that were carried out in a professional/organisational setting other than professional skills (e.g., teacher training) that were being taught in a traditional educational setting (e.g., a university).

Selection Criteria

To be included in this meta-analysis, studies had to meet several criteria. Firstly, a study needed to examine the effect of peer assessment. Secondly, the assessment could be delivered in any form (e.g., written, verbal, online), but needed to be distinguishable from peer-coaching/peer-tutoring. Thirdly, a study needed to compare the effect of peer assessment with a control group. Pre-post designs that did not include a control/comparison group were excluded because we could not discount the effects of maturation or other confounding variables. Moreover, the comparison group could take the form of either a passive control (e.g., a no assessment condition) or an active control (e.g., teacher assessment). Fourthly, a study needed to examine the effect of peer assessment on a non-self-reported measure of academic performance.

In addition to these criteria, a study needed to be carried out in an educational context or be related to educational outcomes in some way. Any level of education (i.e., tertiary, secondary, primary) was acceptable. A study also needed to provide sufficient data to calculate an effect size. If insufficient data was available in the manuscript, the authors were contacted by email to request the necessary data (additional information was provided for a single study). Studies also needed to be written in English.

Literature Search

The literature search was carried out on 8 June 2018 using PsycInfo , Google Scholar , and ERIC. Google Scholar was used to check for additional references as it does not allow for the exporting of entries. These three electronic databases were selected due to their relevance to educational instruction and practice. Results were not filtered based on publication date, but ERIC only holds records from 1966 to present. A deliberately wide selection of search terms was used in the first instance to capture all relevant articles. The search terms included ‘peer grading’ or ‘peer assessment’ or ‘peer evaluation’ or ‘peer feedback’, which were paired with ‘learning’ or ‘performance’ or ‘academic achievement’ or ‘academic performance’ or ‘grades’. All peer assessment-related search terms were included with and without hyphenation. In addition, an ancestry search (i.e., back-search) was performed on the reference lists of the included articles. Conference programs for major educational conferences were searched. Finally, unpublished results were sourced by emailing prominent authors in the field and through social media. Although there is significant disagreement about the inclusion of unpublished data and conference abstracts, i.e., ‘grey literature’ (Cook et al. 1993 ), we opted to include it in the first instance because including only published studies can result in a meta-analysis over-estimating effect sizes due to publication bias (Hopewell et al. 2007 ). It should, however, be noted that none of the substantive conclusions changed when the analyses were re-run with the grey literature excluded.

The database search returned 4072 records. An ancestry search returned an additional 37 potentially relevant articles. No unpublished data could be found. After duplicates were removed, two reviewers independently screened titles and abstracts for relevance. A kappa statistic was calculated to assess inter-rater reliability between the two coders and was found to be .78 (89.06% overall agreement, CI .63 to .94), which is above the recommended minimum levels of inter-rater reliability (Fleiss 1971). Subsequently, the full text of articles that were deemed relevant based on their abstracts was examined to ensure that they met the selection criteria described previously. Disagreements between the coders were discussed and, when necessary, resolved by a third coder. Ultimately, 55 articles with 143 effect sizes met the inclusion criteria and were included in the meta-analysis. The search process is depicted in Fig. 2.

Fig. 2 Flow chart for the identification, screening protocol, and inclusion of publications in the meta-analyses
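
To make the agreement statistic concrete, here is a minimal sketch of Cohen's kappa for two screeners' include/exclude decisions. The decision lists are hypothetical and stand in for the actual screening data, which are not reproduced in the paper.

```python
# Minimal sketch: Cohen's kappa for two screeners' include/exclude decisions.
# The decision lists are hypothetical; they only illustrate the calculation.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters judging the same items."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    labels = set(rater_a) | set(rater_b)
    p_expected = sum((counts_a[lab] / n) * (counts_b[lab] / n) for lab in labels)
    return (p_observed - p_expected) / (1 - p_expected)

screener_1 = ["include", "exclude", "exclude", "include", "exclude", "include"]
screener_2 = ["include", "exclude", "include", "include", "exclude", "include"]
print(round(cohens_kappa(screener_1, screener_2), 2))  # agreement beyond chance
```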

Data Extraction

A research assistant and the first author extracted data from the included papers. We took an iterative approach to the coding procedure whereby the coders refined the classification of each variable as they progressed through the included studies to ensure that the classifications best characterised the extant literature. Below, the coding strategy is reviewed along with the classifications utilised. Frequency statistics and inter-rater reliability for the extracted data for the different classifications are presented in Table 1. All extracted variables showed at least moderate agreement except for whether the peer assessment was freeform or structured, which showed fair agreement (Landis and Koch 1977).

Publication Type

Publications were classified into journal articles, conference papers, dissertations, reports, or unpublished records.

Education Level

Education level was coded as either graduate tertiary, undergraduate tertiary, secondary, or primary. Given the small number of studies that utilised graduate samples ( N = 2), we subsequently combined this classification with undergraduate to form a general tertiary category. In addition, we recorded the grade level of the students. Generally speaking, primary education refers to the ages of 6–12, secondary education refers to education from 13–18, and tertiary education is undertaken after the age of 18.

Age and Sex

The percentage of students in a study that were female was recorded. In addition, we recorded the mean age from each study. Unfortunately, only 55.5% of studies recorded participants’ sex and only 18.5% of studies recorded mean age information.

The subject area associated with the academic performance measure was coded. We also recorded the nature of the academic performance variable for descriptive purposes.

Assessment Role

Studies were coded as to whether the students acted as peer assessors, assessees, or both assessors and assessees.

Comparison Group

Four types of comparison group were found in the included studies: no assessment, teacher assessment, self-assessment, and reader-control. In many instances, a no assessment condition could be characterised as typical instruction; that is, two versions of a course were run—one with peer assessment and one without peer assessment. As such, while no specific teacher assessment comparison condition is referenced in the article, participants would most likely have received some form of teacher feedback as is typical in standard instructional practice. Studies were classified as having teacher assessment on the basis of a specific reference to teacher feedback being provided.

Studies were classified as self-assessment controls if there was an explicit reference to a self-assessment activity, e.g., self-grading/rating. Studies that only included revision, e.g., working alone on revising an assignment, were classified as no assessment rather than self-assessment because they did not necessarily involve explicit self-assessment. Studies where both the comparison and intervention groups received teacher assessment (in addition to peer assessment in the case of the intervention group) were coded as no assessment to reflect the fact that the comparison group received no additional assessment compared to the peer assessment condition. In addition, Philippakos and MacArthur ( 2016 ) and Cho and MacArthur ( 2011 ) were notable in that they utilised a reader-control condition whereby students read, but did not assess peers’ work. Due to the small frequency of this control condition, we ultimately classified them as no assessment controls.

Peer assessment was characterised using coding we believed best captured the theoretical distinctions in the literature. Our typology of peer assessment used three distinct components, which were combined for classification:

Did the peer feedback include a dialog between peers?

Did the peer feedback include written comments?

Did the peer feedback include grading?

Each study was classified using a dichotomous present/absent scoring system for each of the three components.

Studies were dichotomously classified as to whether a specific rubric, assessment script, or scoring system was provided to students. Studies that only provided basic instructions to students to conduct the peer feedback were coded as freeform.

Was the Assessment Online?

Studies were classified based on whether the peer assessment was online or offline.

Studies were classified based on whether the peer assessment was anonymous or identified.

Frequency of Assessment

Studies were coded dichotomously as to whether they involved only a single peer assessment occasion or, alternatively, whether students provided/received peer feedback on multiple occasions.

The level of transfer between the peer assessment task and the academic performance measure was coded into three categories:

No transfer—the peer-assessed task was the same as the academic performance measure. For example, a student’s assignment was assessed by peers and this feedback was utilised to make revisions before it was graded by their teacher.

Near transfer—the peer-assessed task was in the same or very similar format as the academic performance measure, e.g., an essay on a different, but similar topic.

Far transfer—the peer-assessed task was in a different form to the academic performance task, although they may have overlapping content. For example, a student’s assignment was peer assessed, while the final course exam grade was the academic performance measure.

We recorded how participants were allocated to a condition. Three categories of allocation were found in the included studies: random allocation at the class level, at the student level, or at the year/semester level. As only two studies allocated students to conditions at the year/semester level, we combined these studies with the studies allocated at the classroom level (i.e., as quasi-experiments).

Statistical Analyses of Effect Sizes

Effect Size Estimation and Heterogeneity

A random effects, multi-level meta-analysis was carried out using R version 3.4.3 (R Core Team 2017). The primary outcome was the standardised mean difference between peer assessment and comparison (i.e., control) conditions. A common effect size metric, Hedges’ g, was calculated. A positive Hedges’ g value indicates comparatively higher values in the dependent variable in the peer assessment group (i.e., higher academic performance). Heterogeneity in the effect sizes was estimated using the I² statistic. I² is equivalent to the percentage of variation between studies that is due to heterogeneity (Schwarzer et al. 2015). Large values of the I² statistic suggest higher heterogeneity between studies in the analysis.
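
As a rough illustration of the two quantities just described, the sketch below computes Hedges’ g from group summary statistics and I² from a set of independent effect sizes. The numbers are hypothetical, and this simple calculation is not the multi-level robust-variance model actually fitted; it only shows the formulas.

```python
# Minimal sketch of the effect size (Hedges' g) and heterogeneity (I^2) statistics.
# All input values are hypothetical.
import numpy as np

def hedges_g(m_treat, m_ctrl, sd_treat, sd_ctrl, n_treat, n_ctrl):
    """Standardised mean difference with Hedges' small-sample correction."""
    sd_pooled = np.sqrt(((n_treat - 1) * sd_treat**2 + (n_ctrl - 1) * sd_ctrl**2)
                        / (n_treat + n_ctrl - 2))
    d = (m_treat - m_ctrl) / sd_pooled
    j = 1 - 3 / (4 * (n_treat + n_ctrl) - 9)  # small-sample correction factor
    return j * d

def i_squared(effects, variances):
    """I^2: percentage of between-study variation attributable to heterogeneity."""
    w = 1 / np.asarray(variances)             # inverse-variance weights
    e = np.asarray(effects)
    fixed = np.sum(w * e) / np.sum(w)         # fixed-effect pooled estimate
    q = np.sum(w * (e - fixed) ** 2)          # Cochran's Q
    df = len(e) - 1
    return max(0.0, (q - df) / q) * 100

g = hedges_g(m_treat=72.1, m_ctrl=68.4, sd_treat=10.2, sd_ctrl=11.0,
             n_treat=30, n_ctrl=32)
print(round(g, 2))                                       # ~0.34
print(round(i_squared([0.10, 0.50, 0.35, 0.80],
                      [0.02, 0.04, 0.03, 0.05]), 1))     # ~61%
```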

Meta-regressions were performed to examine the moderating effects of the various factors that differed across the studies. We report the results of these meta-regressions alongside sub-groups analyses. While it was possible to determine whether sub-groups differed significantly from each other by checking whether the confidence intervals around their effect sizes overlap, sub-groups analysis may also produce biased estimates when heteroscedasticity or multicollinearity is present (Steel and Kammeyer-Mueller 2002). We performed meta-regressions separately for each predictor to test the overall effect of a moderator.
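
For illustration, a deliberately simplified moderator meta-regression is sketched below as an inverse-variance weighted least-squares fit with a single dichotomous moderator. The actual analyses used multi-level models with robust variance estimation (the robumeta package in R); the effect sizes, variances, and moderator codes here are hypothetical.

```python
# Simplified fixed-effect meta-regression: regress effect sizes on a coded moderator
# using inverse-variance weights. Values are hypothetical; the paper's analysis used
# multi-level robust variance estimation rather than this plain WLS fit.
import numpy as np

effects = np.array([0.15, 0.40, 0.30, 0.55, 0.10, 0.45])    # hypothetical Hedges' g
variances = np.array([0.02, 0.05, 0.03, 0.04, 0.02, 0.06])  # sampling variances
online = np.array([0, 1, 0, 1, 0, 1])                       # moderator: 0 = offline, 1 = online

X = np.column_stack([np.ones_like(effects), online])         # intercept + moderator
W = np.diag(1 / variances)                                   # inverse-variance weights
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ effects)
cov = np.linalg.inv(X.T @ W @ X)                             # fixed-effect covariance
se = np.sqrt(np.diag(cov))
print("moderator slope b = %.3f (SE %.3f)" % (beta[1], se[1]))
```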

Finally, as this meta-analysis included students from primary school to graduate school, which are highly varied participant and educational contexts, we opted to analyse the data both in complete form, as well as after controlling for each level of education. As such, we were able to look at the effect of each moderator across education levels and for each education level separately.

Robust Variance Estimation

Often meta-analyses include multiple effect sizes from the same sample (e.g., the effect of peer assessment on two different measures of academic performance). Including these dependent effect sizes in a meta-analysis can be problematic, as this can potentially bias the results of the analysis in favour of studies that have more effect sizes. Recently, Robust Variance Estimation (RVE) was developed as a technique to address such concerns (Hedges et al. 2010). RVE allows for the modelling of dependence between effect sizes even when the nature of the dependence is not specifically known. Under such situations, RVE results in unbiased estimates of fixed effects when dependent effect sizes are included in the analysis (Moeyaert et al. 2017). A correlated effects structure was specified for the meta-analysis (i.e., the random error in the effects from a single paper was expected to be correlated due to similar participants and procedures). A rho value of .8 was specified for the correlated effects (i.e., effects from the same study), as is standard practice when the correlation is unknown (Hedges et al. 2010). A sensitivity analysis indicated that none of the results varied as a function of the chosen rho. We utilised the ‘robumeta’ package (Fisher et al. 2017) to perform the meta-analyses. Our approach was to use only summative dependent variables when they were provided (e.g., overall writing quality score rather than individual trait measures), but to utilise individual measures when overall indicators were not available. When a pre-post design was used in a study, we adjusted the effect size for pre-intervention differences in academic performance as long as there was sufficient data to do so (e.g., t tests for pre-post change).

Overall Meta-analysis of the Effect of Peer Assessment

Prior to conducting the analysis, two effect sizes ( g = 2.06 and 1.91) were identified as outliers and removed using the outlier labelling rule (Hoaglin and Iglewicz 1987). Descriptive characteristics of the included studies are presented in Table 2. The meta-analysis indicated that there was a significant positive effect of peer assessment on academic performance ( g = 0.31, SE = .06, 95% CI = .18 to .44, p < .001). A density graph of the recorded effect sizes is provided in Fig. 3. A sensitivity analysis indicated that the effect size estimates did not differ with different values of rho. Heterogeneity between the studies’ effect sizes was large, I² = 81.08%, supporting the use of a meta-regression/sub-groups analysis in order to explain the observed heterogeneity in effect sizes.

Fig. 3 A density plot of effect sizes
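
A minimal sketch of the outlier labelling rule cited above (Hoaglin and Iglewicz 1987) is shown below, implemented with the commonly used multiplier of 2.2 on the interquartile range. The effect-size vector is hypothetical, with two extreme values included purely to illustrate the flagging.

```python
# Minimal sketch of the outlier labelling rule: flag values outside
# [Q1 - g*IQR, Q3 + g*IQR], with the conventional multiplier g = 2.2.
# The effect sizes below are hypothetical.
import numpy as np

def outlier_labelling(values, g=2.2):
    """Boolean mask flagging values outside the labelling-rule fences."""
    q1, q3 = np.percentile(values, [25, 75])
    iqr = q3 - q1
    lower, upper = q1 - g * iqr, q3 + g * iqr
    values = np.asarray(values)
    return (values < lower) | (values > upper)

effect_sizes = np.array([0.10, 0.25, 0.31, 0.40, 0.05, 0.55, 2.06, 1.91, 0.22, 0.35])
mask = outlier_labelling(effect_sizes)
print(effect_sizes[mask])    # flagged as outliers
print(effect_sizes[~mask])   # retained for the meta-analysis
```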

Meta-Regressions and Sub-Groups Analyses

Effect sizes for sub-groups are presented in Table 3 . The results of the meta-regressions are presented in Table 4 .

A meta-regression with tertiary students as the reference category indicated that there was no significant difference in effect size as a function of education level. The effect of peer assessment was similar for secondary students ( g = .44, p < .001) and primary school students ( g = .41, p = .006) and smaller for tertiary students ( g = .21, p = .043). There is, however, a strong theoretical basis for examining effects separately at different education levels (primary, secondary, tertiary), because of the large degree of heterogeneity across such a wide span of learning contexts (e.g., pedagogical practices, intellectual and social development of the students). We will therefore proceed by reporting the data both as a whole and separately for each of the education levels for all of the moderators considered here. Education level is contrast coded such that tertiary is compared to the average of secondary and primary, and secondary and primary are compared to each other.

A meta-regression indicated that the effect size was not significantly different when comparing peer assessment with teacher assessment, than when comparing peer assessment with no assessment ( b = .02, 95% CI − .26 to .31, p = .865). The difference between peer assessment vs. no assessment and peer assessment vs. self-assessment was also not significant ( b = − .03, CI − .44 to .38, p = .860), see Table 4 . An examination of sub-groups suggested that peer assessment had a moderate positive effect compared to no assessment controls ( g = .31, p = .004) and teacher assessment ( g = .28, p = .007) and was not significantly different compared with self-assessment ( g = .23, p = .209). The meta-regression was also re-run with education level as a covariate but the results were unchanged.

Meta-regressions indicated that the participant’s role was not a significant moderator of the effect size; see Table 4 . However, given the extremely small number of studies where participants did not act as both assessees ( n = 2) and assessors ( n = 4), we did not perform a sub-groups analysis, as such analyses are unreliable with small samples (Fisher et al. 2017 ).

Subject Area

Given that many subject areas had few studies (see Table 1 ) and the writing subject area made up the majority of effect sizes (40.74%), we opted to perform a meta-regression comparing writing with other subject areas. However, the effect of peer assessment did not differ between writing ( g = .30 , p = .001) and other subject areas ( g = .31 , p = .002); b = − .003, 95% CI − .25 to .25, p = .979. Similarly, the results did not substantially change when education level was entered into the model.

The effect of peer assessment did not differ significantly between studies where peer assessment included a written component ( g = .35, p < .001) and those where it did not ( g = .20, p = .015), b = .144, 95% CI − .10 to .39, p = .241. Including education as a variable in the model did not change the effect of written feedback. Similarly, studies with a dialog component ( g = .21, p = .033) did not differ significantly from those that did not ( g = .35, p < .001), b = − .137, 95% CI − .39 to .12, p = .279.

Studies where peer feedback included a grading component ( g = .37, p < .001) did not differ significantly from those that did not ( g = .17, p = .138). However, when education level was included in the model, the model indicated a significant interaction effect between grading in tertiary students and the average effect of grading in primary and secondary students ( b = .395, 95% CI .06 to .73, p = .022). A follow-up sub-groups analysis showed that grading was beneficial for academic performance in tertiary students ( g = .55, p = .009), but not secondary school students ( g = .002, p = .991) or primary school students ( g = − .08, p = .762). When the three variables used to characterise peer assessment were entered simultaneously, the results were unchanged.

The average effect size was not significantly different for studies where assessment was freeform, i.e., where no specific script or rubric was given ( g = .42, p = .030) compared to those where a specific script or rubric was provided ( g = .29, p < .001); b = − .13, 95% CI − .51 to .25, p = .455. However, there were few studies where feedback was freeform ( n = 9, k =29). The results were unchanged when education level was controlled for in the meta-regression.

Studies where peer assessment was online ( g = .38, p = .003) did not differ from studies where assessment was offline ( g = .24, p = .004); b = .16, 95% CI − .10 to .42, p = .215. This result was unchanged when education level was included in the meta-regression.

There was no significant difference in terms of effect size between studies where peer assessment was anonymised ( g = .27, p = .019) and those where it was not ( g = .25, p = .004); b = .03, 95% CI − .22 to .28, p = .811. Nor was the effect significant when education level was controlled for.

Studies where peer assessment was performed just a single time (g = .19, p = .103) did not differ significantly from those where it was performed multiple times (g = .37, p < .001); b = − .17, 95% CI − .45 to .11, p = .223. It is worth noting, however, that the sub-groups analysis suggests that the effect of peer assessment was not significant when considering only studies that applied it a single time. The result did not change when education was included in the model.

There was no significant difference in effect size between studies utilising far transfer (g = .21, p = .124) and those with near (g = .42, p < .001) or no transfer (g = .29, p = .017), although it is worth noting that the sub-groups analysis suggests the effect of peer assessment was not significant when there was far transfer to the criterion task. As shown in Table 4, this difference was also not significant when analysed using meta-regressions, either with or without education in the model.

Studies that allocated participants to experimental conditions at the individual student level (g = .21, p = .14) did not differ from those that allocated conditions at the classroom or semester level (g = .31, p < .001 and g = .79, p = .223, respectively); see Table 4 for the meta-regressions.

Publication Bias

Risk of publication bias was assessed by inspecting the funnel plot (see Fig. 4) of the relationship between observed effects and standard error for asymmetry (Schwarzer et al. 2015). Egger's test was also run by including standard error as a predictor in a meta-regression. Based on the funnel plot and a non-significant Egger's test of asymmetry (b = .886, p = .226), the risk of publication bias was judged to be low.

Fig. 4 A funnel plot showing the relationship between standard error and observed effect size for the academic performance meta-analysis
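As a minimal illustration of the asymmetry test described above, the sketch below (in Python, using statsmodels, with hypothetical effect sizes and standard errors) regresses the observed effects on their standard errors with inverse-variance weights; the test reported in the paper was run within the authors' meta-regression framework in R, so this is an approximation of the idea rather than a reproduction of their analysis.

    import numpy as np
    import statsmodels.api as sm

    # Hypothetical study-level inputs: bias-corrected effect sizes (Hedges' g)
    # and their standard errors.
    g = np.array([0.31, 0.12, 0.55, 0.28, 0.40, -0.05, 0.22])
    se = np.array([0.15, 0.20, 0.22, 0.11, 0.18, 0.25, 0.09])

    # Egger-style test: weighted regression of effect size on standard error.
    # A slope reliably different from zero indicates funnel-plot asymmetry,
    # and hence possible publication bias or small-study effects.
    X = sm.add_constant(se)
    egger = sm.WLS(g, X, weights=1.0 / se**2).fit()
    print(egger.params)   # intercept and slope (coefficient on standard error)
    print(egger.pvalues)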

Proponents of peer assessment argue that it is an effective classroom technique for improving academic performance (Topping 2009). While previous narrative reviews have argued for the benefits of peer assessment, the current meta-analysis quantifies the effect of peer assessment interventions on academic performance within educational contexts. Overall, the results suggest that there is a positive effect of peer assessment on academic performance in primary, secondary, and tertiary students. The magnitude of the overall effect size was within the small to medium range (Sawilowsky 2009). These findings also suggest that the benefits of peer assessment are robust across many contextual factors, including different feedback and educational characteristics.

Recently, researchers have increasingly advocated for the role of assessment in promoting learning in educational practice (Wiliam 2018). Peer assessment forms a core part of theories of formative assessment because it is seen as providing new information about the learning process to the teacher or student, which in turn facilitates later performance (Pellegrino et al. 2001). The current results support the position that peer assessment can be an effective classroom technique for improving academic performance. The results suggest that peer assessment is effective compared to both no assessment (which often involved ‘teaching as usual’) and teacher assessment, indicating that peer assessment can play an important formative role in the classroom. The findings suggest that structuring classroom activities in a way that utilises peer assessment may be an effective way to promote learning and to optimise the use of teaching resources, by freeing the teacher to focus on assisting students with greater difficulties or on more complex tasks. Importantly, the results indicate that peer assessment can be effective across a wide range of subject areas, education levels, and assessment types. Pragmatically, this suggests that classroom teachers can implement peer assessment in a variety of ways and tailor the peer assessment design to the particular characteristics and constraints of their classroom context.

Notably, the results of this quantitative meta-analysis align well with past narrative reviews (e.g., Black and Wiliam 1998a; Topping 1998; van Zundert et al. 2010). The fact that both quantitative and qualitative syntheses of the literature suggest that peer assessment can be beneficial provides a stronger basis for recommending peer assessment as a practice. However, several of the moderators of the effectiveness of peer feedback that have been argued for in the available narrative reviews (e.g., rubrics; Panadero and Jonsson 2013) received little support from this quantitative meta-analysis. As detailed below, this may suggest that the prominence of such feedback characteristics in narrative reviews is driven more by theoretical considerations than by quantitative empirical evidence. However, many of these moderating variables are complex (rubrics, for example, can take many forms) and, due to this complexity, may not lend themselves as well to quantitative synthesis and aggregation (for a detailed discussion on combining qualitative and quantitative evidence, see Gorard 2002).

Mechanisms and Moderators

Indeed, the current findings suggest that the feedback characteristics deemed important by current theories of peer assessment may not be as significant as first thought. Previously, individual studies have argued for the importance of characteristics such as rubrics (Panadero and Jonsson 2013), anonymity (Bloom and Hautaluoma 1987), and allowing students to practice peer assessment (Smith et al. 2002). While these feedback characteristics have been shown to affect the efficacy of peer assessment in individual studies, we find little evidence that they moderate the effect of peer assessment when analysed across studies. Many of the current models of peer assessment rely on qualitative evidence, theoretical arguments, and pedagogical experience to formulate theories about what determines effective peer assessment. While such evidence should not be discounted, the current findings also point to the need for better quantitative and experimental studies to test some of the assumptions embedded in these models. We suggest that the null findings observed in this meta-analysis regarding the proposed moderators of peer assessment efficacy should be interpreted cautiously, as more studies that experimentally manipulate these variables are needed to provide more definitive insight into how to design better peer assessment procedures.

While the current findings are ambiguous regarding the mechanisms of peer assessment, it is worth noting that without a solid understanding of the mechanisms underlying peer assessment effects, it is difficult to identify important moderators or optimally use peer assessment in the classroom. Often the research literature makes somewhat broad claims about the possible benefits of peer assessment. For example, Topping ( 1998 , p.256) suggested that peer assessment may, ‘promote a sense of ownership, personal responsibility, and motivation… [and] might also increase variety and interest, activity and interactivity, identification and bonding, self-confidence, and empathy for others’. Others have argued that peer assessment is beneficial because it is less personally evaluative—with evidence suggesting that teacher assessment is often personally evaluative (e.g., ‘good boy, that is correct’) which may have little or even negative effects on performance particularly if the assessee has low self-efficacy (Birney, Beckmann, Beckmann & Double 2017 ; Double and Birney 2017 , 2018 ; Hattie and Timperley 2007 ). However, more research is needed to distinguish between the many proposed mechanisms for peer assessment’s formative effects made within the extant literature, particularly as claims about the mechanisms of the effectiveness of peer assessment are often evidenced by student self-reports about the aspects of peer assessment they rate as useful. While such self-reports may be informative, more experimental research that systematically manipulates aspects of the design of peer assessment is likely to provide greater clarity about what aspects of peer assessment drive the observed benefits.

Our findings did indicate an important role for grading in determining the effectiveness of peer feedback. We found that peer grading was beneficial for tertiary students but not for primary or secondary school students, suggesting that grading adds little to the peer feedback process for non-tertiary students. By contrast, a recent meta-analysis by Sanchez et al. (2017) on peer grading found a benefit for non-tertiary students, albeit based on a relatively small number of studies compared with the current meta-analysis. The present findings suggest that there may be significant qualitative differences in how students perform peer grading as they develop. For example, the criteria students use to assess ability may change as they age (Stipek and Iver 1989). It is difficult to ascertain precisely why grading has positive additive effects only in tertiary students, but there are substantial differences in pedagogy, curriculum, motivation for learning, and grading systems that may account for these differences. One possibility is that tertiary students are more ‘grade orientated’ and therefore put more weight on peer assessment that includes a specific grade. Further research is needed to explore the effects of grading at different educational levels.

One of the more unexpected findings of this meta-analysis was the positive effect of peer assessment compared to teacher assessment. This finding is somewhat counterintuitive given the greater qualifications and pedagogical experience of the teacher. In addition, in many of the studies the teacher had privileged knowledge about the outcome assessment, and often graded it. Thus, it seems reasonable to expect that teacher feedback would better align with assessment objectives and therefore produce better outcomes. Despite these advantages, teacher assessment appeared to be less efficacious than peer assessment for academic performance. It is possible that the pedagogical disadvantages of peer assessment are compensated for by affective or motivational aspects of peer assessment, or by the substantial benefits of acting as an assessor. However, more experimental research is needed to rule out the effects of the potential methodological issues discussed in detail below.

Limitations

A major limitation of the current results is that they cannot adequately distinguish between the effect of assessing and the effect of being assessed. Most of the included studies confound giving and receiving peer assessment in their designs (i.e., students in the peer assessment group both provide and receive assessment), and therefore no substantive conclusions can be drawn about whether the benefits of peer assessment stem from giving feedback, receiving feedback, or both. This leaves open the possibility that the benefit of peer assessment comes more from assessing than from being assessed (Usher 2018). Consistent with this, Lundstrom and Baker (2009) directly compared the effects of giving and receiving assessment on students’ writing performance and found that assessing was more beneficial than being assessed. Similarly, Graner (1987) found that assessing papers without being assessed was as effective for improving writing performance as assessing papers and receiving feedback.

Furthermore, more true experiments are needed, as there is evidence from these results that such designs produce more conservative estimates of the effect of peer assessment. The studies included in this meta-analysis were not only predominantly allocated to conditions at the classroom level (i.e., quasi-experiments) but, in all but one case, were also not analysed using appropriate techniques for clustered data (e.g., multi-level modelling). This is problematic because it makes disentangling classroom-level effects (e.g., teacher quality) from the intervention effect difficult, which may lead to biased statistical inferences (Hox 1998). While experimental designs with individual allocation are often not pragmatic for classroom interventions, online peer assessment interventions appear to be obvious candidates for more true experiments. In particular, carefully controlled experimental designs that examine the effect of specific assessment characteristics, rather than ‘black-box’ studies of the effectiveness of peer assessment, are crucial for understanding when and how peer assessment is most likely to be effective. For example, peer assessment may be counterproductive when learning novel tasks due to students’ inadequate domain knowledge (Könings et al. 2019).

While the current results provide an overall estimate of the efficacy of peer assessment in improving academic performance when compared to teacher and no assessment, it should be noted that these effects are averaged across a wide range of outcome measures, including science project grades, essay writing ratings, and end-of-semester exam scores. Aggregating across such disparate outcomes is always problematic in meta-analysis and is a particular concern for meta-analyses in educational research, as some outcome measures are likely to be more sensitive to interventions than others (Wiliam 2010). A further issue is that the effect of moderators may differ between academic domains. For example, some assessment characteristics may be important when teaching writing but not mathematics. Because there were too few studies in the individual academic domains (with the exception of writing), we are unable to account for these differential effects. The effects of the moderators reported here therefore need to be considered as overall averages that provide information about the extent to which the effect of a moderator generalises across domains.

Finally, the findings of the current meta-analysis are also somewhat limited by the fact that few studies gave a complete profile of the participants and measures used. For example, few studies reported the ability of the peer assessor relative to the assessee, and the age difference between peers was often unclear. Furthermore, it was not possible to classify the academic performance measures further (for example, by their novelty), or to code for the quality of the measures, including their reliability and validity, because very few studies provided comprehensive details about the outcome measure(s) they utilised. Moreover, other important variables, such as fidelity of treatment, were almost never reported in the included manuscripts. Indeed, many of the included variables needed to be coded based on inferences from the included studies’ text and were not explicitly stated, even when one would reasonably expect that information to be made clear in a peer-reviewed manuscript. The observed effect sizes reported here should therefore be taken as an indicator of average efficacy based on the extant literature and not as an indication of expected effects for specific implementations of peer assessment.

Overall, our findings provide support for the use of peer assessment as a formative practice for improving academic performance. The results indicate that peer assessment is more effective than no assessment and teacher assessment and not significantly different in its effect from self-assessment. These findings are consistent with current theories of formative assessment and instructional best practice and provide strong empirical support for the continued use of peer assessment in the classroom and other educational contexts. Further experimental work is needed to clarify the contextual and educational factors that moderate the effectiveness of peer assessment, but the present findings are encouraging for those looking to utilise peer assessment to enhance learning.

References marked with an * were included in the meta-analysis

* AbuSeileek, A. F., & Abualsha'r, A. (2014). Using peer computer-mediated corrective feedback to support EFL learners'. Language Learning & Technology, 18 (1), 76-95.

Alqassab, M., Strijbos, J. W., & Ufer, S. (2018). Training peer-feedback skills on geometric construction tasks: Role of domain knowledge and peer-feedback levels. European Journal of Psychology of Education, 33 (1), 11–30.


* Anderson, N. O., & Flash, P. (2014). The power of peer reviewing to enhance writing in horticulture: Greenhouse management. International Journal of Teaching and Learning in Higher Education, 26 (3), 310–334.

* Bangert, A. W. (1995). Peer assessment: an instructional strategy for effectively implementing performance-based assessments. (Unpublished doctoral dissertation). University of South Dakota.

* Benson, N. L. (1979). The effects of peer feedback during the writing process on writing performance, revision behavior, and attitude toward writing. (Unpublished doctoral dissertation). University of Colorado, Boulder.

* Bhullar, N., Rose, K. C., Utell, J. M., & Healey, K. N. (2014). The impact of peer review on writing in a psychology course: Lessons learned. Journal on Excellence in College Teaching, 25(2), 91-106.

* Birjandi, P., & Hadidi Tamjid, N. (2012). The role of self-, peer and teacher assessment in promoting Iranian EFL learners’ writing performance. Assessment & Evaluation in Higher Education, 37 (5), 513–533.

Birney, D. P., Beckmann, J. F., Beckmann, N., & Double, K. S. (2017). Beyond the intellect: Complexity and learning trajectories in Raven’s Progressive Matrices depend on self-regulatory processes and conative dispositions. Intelligence, 61 , 63–77.

Black, P., & Wiliam, D. (1998a). Assessment and classroom learning. Assessment in Education: Principles, Policy & Practice, 5 (1), 7–74.


Black, P., & Wiliam, D. (2009). Developing the theory of formative assessment. Educational Assessment, Evaluation and Accountability (formerly: Journal of Personnel Evaluation in Education), 21 (1), 5.

Bloom, A. J., & Hautaluoma, J. E. (1987). Effects of message valence, communicator credibility, and source anonymity on reactions to peer feedback. The Journal of Social Psychology, 127 (4), 329–338.

Brown, G. T., Irving, S. E., Peterson, E. R., & Hirschfeld, G. H. (2009). Use of interactive–informal assessment practices: New Zealand secondary students' conceptions of assessment. Learning and Instruction, 19 (2), 97–111.

* Califano, L. Z. (1987). Teacher and peer editing: Their effects on students' writing as measured by t-unit length, holistic scoring, and the attitudes of fifth and sixth grade students (Unpublished doctoral dissertation), Northern Arizona University.

* Chaney, B. A., & Ingraham, L. R. (2009). Using peer grading and proofreading to ratchet student expectations in preparing accounting cases. American Journal of Business Education, 2 (3), 39-48.

* Chang, S. H., Wu, T. C., Kuo, Y. K., & You, L. C. (2012). Project-based learning with an online peer assessment system in a photonics instruction for enhancing led design skills. Turkish Online Journal of Educational Technology-TOJET, 11(4), 236–246.

* Cho, K., & MacArthur, C. (2011). Learning by reviewing. Journal of Educational Psychology, 103 (1), 73.

Cho, K., Schunn, C. D., & Charney, D. (2006). Commenting on writing: Typology and perceived helpfulness of comments from novice peer reviewers and subject matter experts. Written Communication, 23 (3), 260–294.

Cook, D. J., Guyatt, G. H., Ryan, G., Clifton, J., Buckingham, L., Willan, A., et al. (1993). Should unpublished data be included in meta-analyses?: Current convictions and controversies. JAMA, 269 (21), 2749–2753.

*Crowe, J. A., Silva, T., & Ceresola, R. (2015). The effect of peer review on student learning outcomes in a research methods course.  Teaching Sociology, 43 (3), 201–213.

* Diab, N. M. (2011). Assessing the relationship between different types of student feedback and the quality of revised writing . Assessing Writing, 16(4), 274-292.

Demetriadis, S., Egerter, T., Hanisch, F., & Fischer, F. (2011). Peer review-based scripted collaboration to support domain-specific and domain-general knowledge acquisition in computer science. Computer Science Education, 21 (1), 29–56.

Dochy, F., Segers, M., & Sluijsmans, D. (1999). The use of self-, peer and co-assessment in higher education: A review. Studies in Higher Education, 24 (3), 331–350.

Double, K. S., & Birney, D. (2017). Are you sure about that? Eliciting confidence ratings may influence performance on Raven’s progressive matrices. Thinking & Reasoning, 23 (2), 190–206.

Double, K. S., & Birney, D. P. (2018). Reactivity to confidence ratings in older individuals performing the latin square task. Metacognition and Learning, 13(3), 309–326.

* Enders, F. B., Jenkins, S., & Hoverman, V. (2010). Calibrated peer review for interpreting linear regression parameters: Results from a graduate course. Journal of Statistics Education , 18 (2).

* English, R., Brookes, S. T., Avery, K., Blazeby, J. M., & Ben-Shlomo, Y. (2006). The effectiveness and reliability of peer-marking in first-year medical students. Medical Education, 40 (10), 965-972.

* Erfani, S. S., & Nikbin, S. (2015). The effect of peer-assisted mediation vs. tutor-intervention within dynamic assessment framework on writing development of Iranian Intermediate EFL Learners. English Language Teaching, 8 (4), 128–141.

Falchikov, N., & Goldfinch, J. (2000). Student peer assessment in higher education: A meta-analysis comparing peer and teacher marks. Review of Educational Research, 70 (3), 287–322.

* Farrell, K. J. (1977). A comparison of three instructional approaches for teaching written composition to high school juniors: teacher lecture, peer evaluation, and group tutoring (Unpublished doctoral dissertation), Boston University, Boston.

Fisher, Z., Tipton, E., & Zhipeng, Z. (2017). robumeta: Robust variance meta-regression (Version 2). Retrieved from https://CRAN.R-project.org/package=robumeta

Fleiss, J. L. (1971). Measuring nominal scale agreement among many raters. Psychological Bulletin, 76 (5), 378.

Flórez, M. T., & Sammons, P. (2013). Assessment for learning: Effects and impact. Reading, England: CfBT Education Trust.

Fyfe, E. R., & Rittle-Johnson, B. (2016). Feedback both helps and hinders learning: The causal role of prior knowledge. Journal of Educational Psychology, 108 (1), 82.

Gielen, S., Peeters, E., Dochy, F., Onghena, P., & Struyven, K. (2010a). Improving the effectiveness of peer feedback for learning. Learning and Instruction, 20 (4), 304–315.

* Gielen, S., Tops, L., Dochy, F., Onghena, P., & Smeets, S. (2010b). A comparative study of peer and teacher feedback and of various peer feedback forms in a secondary school writing curriculum. British Educational Research Journal , 36 (1), 143-162.

Gorard, S. (2002). Can we overcome the methodological schism? Four models for combining qualitative and quantitative evidence. Research Papers in Education Policy and Practice, 17 (4), 345–361.

Graner, M. H. (1987). Revision workshops: An alternative to peer editing groups. The English Journal, 76 (3), 40–45.

Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77 (1), 81–112.

Hays, M. J., Kornell, N., & Bjork, R. A. (2010). The costs and benefits of providing feedback during learning. Psychonomic bulletin & review, 17 (6), 797–801.

Hedges, L. V. (1981). Distribution theory for Glass's estimator of effect size and related estimators. Journal of Educational Statistics, 6(2), 107–128.

Hedges, L. V., Tipton, E., & Johnson, M. C. (2010). Robust variance estimation in meta-regression with dependent effect size estimates. Research Synthesis Methods, 1 (1), 39–65.

Higgins, J. P., & Green, S. (2011). Cochrane handbook for systematic reviews of interventions. The Cochrane Collaboration. Version 5.1.0, www.handbook.cochrane.org

Hoaglin, D. C., & Iglewicz, B. (1987). Fine-tuning some resistant rules for outlier labeling. Journal of the American Statistical Association, 82 (400), 1147–1149.

Hopewell, S., McDonald, S., Clarke, M. J., & Egger, M. (2007). Grey literature in meta-analyses of randomized trials of health care interventions. Cochrane Database of Systematic Reviews .

* Horn, G. C. (2009). Rubrics and revision: What are the effects of 3 RD graders using rubrics to self-assess or peer-assess drafts of writing? (Unpublished doctoral thesis), Boise State University

Hox, J. J. (1998). Multilevel modeling: When and why. In I. Balderjahn, R. Mathar, & M. Schader (Eds.), Classification, data analysis, and data highways (pp. 147–154). New York: Springer Verlag.


* Hsia, L. H., Huang, I., & Hwang, G. J. (2016). A web-based peer-assessment approach to improving junior high school students’ performance, self-efficacy and motivation in performing arts courses. British Journal of Educational Technology, 47 (4), 618–632.

* Hsu, T. C. (2016). Effects of a peer assessment system based on a grid-based knowledge classification approach on computer skills training. Journal of Educational Technology & Society , 19 (4), 100-111.

* Hussein, M. A. H., & Al Ashri, El Shirbini A. F. (2013). The effectiveness of writing conferences and peer response groups strategies on the EFL secondary students' writing performance and their self efficacy (A Comparative Study). Egypt: National Program Zero.

* Hwang, G. J., Hung, C. M., & Chen, N. S. (2014). Improving learning achievements, motivations and problem-solving skills through a peer assessment-based game development approach. Educational Technology Research and Development, 62 (2), 129–145.

* Hwang, G. J., Tu, N. T., & Wang, X. M. (2018). Creating interactive E-books through learning by design: The impacts of guided peer-feedback on students’ learning achievements and project outcomes in science courses. Journal of Educational Technology & Society, 21 (1), 25–36.

* Kamp, R. J., van Berkel, H. J., Popeijus, H. E., Leppink, J., Schmidt, H. G., & Dolmans, D. H. (2014). Midterm peer feedback in problem-based learning groups: The effect on individual contributions and achievement. Advances in Health Sciences Education, 19 (1), 53–69.

* Karegianes, M. J., Pascarella, E. T., & Pflaum, S. W. (1980). The effects of peer editing on the writing proficiency of low-achieving tenth grade students. The Journal of Educational Research , 73 (4), 203-207.

* Khonbi, Z. A., & Sadeghi, K. (2013). The effect of assessment type (self vs. peer) on Iranian university EFL students’ course achievement. Procedia-Social and Behavioral Sciences , 70 , 1552-1564.

Kluger, A. N., & DeNisi, A. (1996). The effects of feedback interventions on performance: A historical review, a meta-analysis, and a preliminary feedback intervention theory. Psychological Bulletin, 119 (2), 254.

Könings, K. D., van Zundert, M., & van Merriënboer, J. J. G. (2019). Scaffolding peer-assessment skills: Risk of interference with learning domain-specific skills? Learning and Instruction, 60 , 85–94.

* Kurihara, N. (2017). Do peer reviews help improve student writing abilities in an EFL high school classroom? TESOL Journal, 8 (2), 450–470.

Landis, J. R., & Koch, G. G. (1977). The measurement of observer agreement for categorical data. Biometrics, 33 (1), 159–174.

* Li, L., & Gao, F. (2016). The effect of peer assessment on project performance of students at different learning levels. Assessment & Evaluation in Higher Education, 41 (6), 885–900.

* Li, L., & Steckelberg, A. (2004). Using peer feedback to enhance student meaningful learning . Chicago: Association for Educational Communications and Technology.

Li, H., Xiong, Y., Zang, X., Kornhaber, M. L., Lyu, Y., Chung, K. S., & Suen, K. H. (2016). Peer assessment in the digital age: a meta-analysis comparing peer and teacher ratings. Assessment & Evaluation in Higher Education, 41 (2), 245–264.

* Lin, Y.-C. A. (2009). An examination of teacher feedback, face-to-face peer feedback, and google documents peer feedback in Taiwanese EFL college students’ writing. (Unpublished doctoral dissertation), Alliant International University, San Diego, United States

Lipsey, M. W., & Wilson, D. B. (2001). Practical Meta-analysis . Thousand Oaks: SAGE publications.

* Liu, C.-C., Lu, K.-H., Wu, L. Y., & Tsai, C.-C. (2016). The impact of peer review on creative self-efficacy and learning performance in Web 2.0 learning activities. Journal of Educational Technology & Society, 19 (2):286-297

Lundstrom, K., & Baker, W. (2009). To give is better than to receive: The benefits of peer review to the reviewer's own writing. Journal of Second Language Writing, 18 (1), 30–43.

* McCurdy, B. L., & Shapiro, E. S. (1992). A comparison of teacher-, peer-, and self-monitoring with curriculum-based measurement in reading among students with learning disabilities. The Journal of Special Education , 26 (2), 162-180.

Moeyaert, M., Ugille, M., Natasha Beretvas, S., Ferron, J., Bunuan, R., & Van den Noortgate, W. (2017). Methods for dealing with multiple outcomes in meta-analysis: a comparison between averaging effect sizes, robust variance estimation and multilevel meta-analysis. International Journal of Social Research Methodology, 20 (6), 559–572.

* Montanero, M., Lucero, M., & Fernandez, M.-J. (2014). Iterative co-evaluation with a rubric of narrative texts in primary education. Journal for the Study of Education and Development, 37 (1), 184-198.

Morris, S. B. (2008). Estimating effect sizes from pretest-posttest-control group designs. Organizational Research Methods, 11 (2), 364–386.

* Olson, V. L. B. (1990). The revising processes of sixth-grade writers with and without peer feedback. The Journal of Educational Research, 84(1), 22–29.

Ossenberg, C., Henderson, A., & Mitchell, M. (2018). What attributes guide best practice for effective feedback? A scoping review. Advances in Health Sciences Education , 1–19.

* Ozogul, G., Olina, Z., & Sullivan, H. (2008). Teacher, self and peer evaluation of lesson plans written by preservice teachers. Educational Technology Research and Development, 56 (2), 181.

Panadero, E., & Alqassab, M. (2019). An empirical review of anonymity effects in peer assessment, peer feedback, peer review, peer evaluation and peer grading. Assessment & Evaluation in Higher Education , 1–26.

Panadero, E., & Jonsson, A. (2013). The use of scoring rubrics for formative assessment purposes revisited: A review. Educational Research Review, 9 , 129–144.

Panadero, E., Romero, M., & Strijbos, J. W. (2013). The impact of a rubric and friendship on peer assessment: Effects on construct validity, performance, and perceptions of fairness and comfort. Studies in Educational Evaluation, 39 (4), 195–203.

* Papadopoulos, P. M., Lagkas, T. D., & Demetriadis, S. N. (2012). How to improve the peer review method: Free-selection vs assigned-pair protocol evaluated in a computer networking course. Computers & Education, 59 (2), 182–195.

Paulus, T. M. (1999). The effect of peer and teacher feedback on student writing. Journal of second language writing, 8 (3), 265–289.

Pellegrino, J. W., Chudowsky, N., & Glaser, R. (2001). Knowing what students know: the science and design of educational assessment . Washington: National Academy Press.

Peters, O., Körndle, H., & Narciss, S. (2018). Effects of a formative assessment script on how vocational students generate formative feedback to a peer’s or their own performance. European Journal of Psychology of Education, 33 (1), 117–143.

* Philippakos, Z. A., & MacArthur, C. A. (2016). The effects of giving feedback on the persuasive writing of fourth-and fifth-grade students. Reading Research Quarterly, 51 (4), 419-433.

* Pierson, H. (1967). Peer and teacher correction: A comparison of the effects of two methods of teaching composition in grade nine English classes. (Unpublished doctoral dissertation), New York University.

* Prater, D., & Bermudez, A. (1993). Using peer response groups with limited English proficient writers. Bilingual Research Journal , 17 (1-2), 99-116.

Reinholz, D. (2016). The assessment cycle: A model for learning through peer assessment. Assessment & Evaluation in Higher Education, 41 (2), 301–315.

* Rijlaarsdam, G., & Schoonen, R. (1988). Effects of a teaching program based on peer evaluation on written composition and some variables related to writing apprehension. (Unpublished doctoral dissertation), Amsterdam University, Amsterdam

Rollinson, P. (2005). Using peer feedback in the ESL writing class. ELT Journal, 59 (1), 23–30.

Rotsaert, T., Panadero, E., & Schellens, T. (2018). Anonymity as an instructional scaffold in peer assessment: its effects on peer feedback quality and evolution in students’ perceptions about peer assessment skills. European Journal of Psychology of Education, 33 (1), 75–99.

* Rudd II, J. A., Wang, V. Z., Cervato, C., & Ridky, R. W. (2009). Calibrated peer review assignments for the Earth Sciences. Journal of Geoscience Education , 57 (5), 328-334.

* Ruegg, R. (2015). The relative effects of peer and teacher feedback on improvement in EFL students' writing ability. Linguistics and Education, 29 , 73-82.

* Sadeghi, K., & Abolfazli Khonbi, Z. (2015). Iranian university students’ experiences of and attitudes towards alternatives in assessment. Assessment & Evaluation in Higher Education, 40 (5), 641–665.

* Sadler, P. M., & Good, E. (2006). The impact of self- and peer-grading on student learning. Educational Assessment , 11 (1), 1-31.

Sanchez, C. E., Atkinson, K. M., Koenka, A. C., Moshontz, H., & Cooper, H. (2017). Self-grading and peer-grading for formative and summative assessments in 3rd through 12th grade classrooms: A meta-analysis. Journal of Educational Psychology, 109 (8), 1049.

Sawilowsky, S. S. (2009). New effect size rules of thumb. Journal of Modern Applied Statistical Methods, 8 (2), 26.

* Schonrock-Adema, J., Heijne-Penninga, M., van Duijn, M. A., Geertsma, J., & Cohen-Schotanus, J. (2007). Assessment of professional behaviour in undergraduate medical education: Peer assessment enhances performance. Medical Education, 41 (9), 836-842.

Schwarzer, G., Carpenter, J. R., & Rücker, G. (2015). Meta-analysis with R . Cham: Springer.


* Sippel, L., & Jackson, C. N. (2015). Teacher vs. peer oral corrective feedback in the German language classroom. Foreign Language Annals , 48 (4), 688-705.

Sluijsmans, D. M., Brand-Gruwel, S., van Merriënboer, J. J., & Martens, R. L. (2004). Training teachers in peer-assessment skills: Effects on performance and perceptions. Innovations in Education and Teaching International, 41 (1), 59–78.

Smith, H., Cooper, A., & Lancaster, L. (2002). Improving the quality of undergraduate peer assessment: A case for student and staff development. Innovations in education and teaching international, 39 (1), 71–81.

Smith, M. K., Wood, W. B., Adams, W. K., Wieman, C., Knight, J. K., Guild, N., & Su, T. T. (2009). Why peer discussion improves student performance on in-class concept questions. Science, 323 (5910), 122–124.

Steel, P. D., & Kammeyer-Mueller, J. D. (2002). Comparing meta-analytic moderator estimation techniques under realistic conditions. Journal of Applied Psychology, 87 (1), 96.

Stipek, D., & Iver, D. M. (1989). Developmental change in children's assessment of intellectual competence. Child Development , 521–538.

Strijbos, J. W., & Wichmann, A. (2018). Promoting learning by leveraging the collaborative nature of formative peer assessment with instructional scaffolds. European Journal of Psychology of Education, 33 (1), 1–9.

Strijbos, J.-W., Narciss, S., & Dünnebier, K. (2010). Peer feedback content and sender's competence level in academic writing revision tasks: Are they critical for feedback perceptions and efficiency? Learning and Instruction, 20 (4), 291–303.

* Sun, D. L., Harris, N., Walther, G., & Baiocchi, M. (2015). Peer assessment enhances student learning: The results of a matched randomized crossover experiment in a college statistics class. PLoS One, 10(12).

Tannacito, T., & Tuzi, F. (2002). A comparison of e-response: Two experiences, one conclusion. Kairos, 7 (3), 1–14.

R Core Team (2017). R: A language and environment for statistical computing. Vienna, Austria: R Foundation for Statistical Computing.

Topping, K. (1998). Peer assessment between students in colleges and universities. Review of Educational Research, 68 (3), 249-276.

Topping, K. (2009). Peer assessment. Theory Into Practice, 48 (1), 20–27.

Usher, N. (2018). Learning about academic writing through holistic peer assessment. (Unpublished doctoral thesis), University of Oxford, Oxford, UK.

* van den Boom, G., Paas, F., & van Merriënboer, J. J. (2007). Effects of elicited reflections combined with tutor or peer feedback on self-regulated learning and learning outcomes. Learning and Instruction , 17 (5), 532-548.

* van Ginkel, S., Gulikers, J., Biemans, H., & Mulder, M. (2017). The impact of the feedback source on developing oral presentation competence. Studies in Higher Education, 42 (9), 1671-1685.

van Popta, E., Kral, M., Camp, G., Martens, R. L., & Simons, P. R. J. (2017). Exploring the value of peer feedback in online learning for the provider. Educational Research Review, 20 , 24–34.

van Zundert, M., Sluijsmans, D., & van Merriënboer, J. (2010). Effective peer assessment processes: Research findings and future directions. Learning and Instruction, 20 (4), 270–279.

Vanderhoven, E., Raes, A., Montrieux, H., Rotsaert, T., & Schellens, T. (2015). What if pupils can assess their peers anonymously? A quasi-experimental study. Computers & Education, 81 , 123–132.

Wang, J.-H., Hsu, S.-H., Chen, S. Y., Ko, H.-W., Ku, Y.-M., & Chan, T.-W. (2014a). Effects of a mixed-mode peer response on student response behavior and writing performance. Journal of Educational Computing Research, 51 (2), 233–256.

* Wang, J. H., Hsu, S. H., Chen, S. Y., Ko, H. W., Ku, Y. M., & Chan, T. W. (2014b). Effects of a mixed-mode peer response on student response behavior and writing performance. Journal of Educational Computing Research , 51 (2), 233-256.

* Wang, X.-M., Hwang, G.-J., Liang, Z.-Y., & Wang, H.-Y. (2017). Enhancing students’ computer programming performances, critical thinking awareness and attitudes towards programming: An online peer-assessment attempt. Journal of Educational Technology & Society, 20 (4), 58-68.

Wiliam, D. (2010). What counts as evidence of educational achievement? The role of constructs in the pursuit of equity in assessment. Review of Research in Education, 34 (1), 254–284.

Wiliam, D. (2018). How can assessment support learning? A response to Wilson and Shepard, Penuel, and Pellegrino. Educational Measurement: Issues and Practice, 37 (1), 42–44.

Wiliam, D., Lee, C., Harrison, C., & Black, P. (2004). Teachers developing assessment for learning: Impact on student achievement. Assessment in Education: Principles, Policy & Practice, 11 (1), 49–65.

* Wise, W. G. (1992). The effects of revision instruction on eighth graders' persuasive writing (Unpublished doctoral dissertation), University of Maryland, Maryland

* Wong, H. M. H., & Storey, P. (2006). Knowing and doing in the ESL writing class. Language Awareness , 15 (4), 283.

* Xie, Y., Ke, F., & Sharma, P. (2008). The effect of peer feedback for blogging on college students' reflective learning processes. The Internet and Higher Education , 11 (1), 18-25.

Young, J. E., & Jackman, M. G.-A. (2014). Formative assessment in the Grenadian lower secondary school: Teachers’ perceptions, attitudes and practices. Assessment in Education: Principles, Policy & Practice, 21 (4), 398–411.

Yu, F.-Y., & Liu, Y.-H. (2009). Creating a psychologically safe online space for a student-generated questions learning activity via different identity revelation modes. British Journal of Educational Technology, 40 (6), 1109–1123.


Acknowledgements

The authors would like to thank Kristine Gorgen and Jessica Chan for their help coding the studies included in the meta-analysis.

Author information

Authors and Affiliations

Department of Education, University of Oxford, Oxford, England

Kit S. Double, Joshua A. McGrane & Therese N. Hopfenbeck


Corresponding author

Correspondence to Kit S. Double.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

(XLSX 40 kb)

Effect Size Calculation

Standardised mean differences were calculated as the measure of effect size. The standardised mean difference (d) was calculated using the following formula, which is typically used in meta-analyses (e.g., Lipsey and Wilson 2001).
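In its conventional form (e.g., Lipsey and Wilson 2001), and with notation introduced here for clarity (M_E and M_C: means of the peer assessment and control groups; SD_E and SD_C: their standard deviations; n_E and n_C: their sample sizes), this is

    d = \frac{M_E - M_C}{SD_{pooled}}, \qquad SD_{pooled} = \sqrt{\frac{(n_E - 1)\,SD_E^2 + (n_C - 1)\,SD_C^2}{n_E + n_C - 2}}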

As the standardised mean difference (d) is known to have a slight positive bias (Hedges 1981), we applied a small-sample correction to the estimates (resulting in what is often referred to as Hedges' g).
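The correction applied is the standard small-sample adjustment (Hedges 1981), which in the notation above is

    g = d \left(1 - \frac{3}{4(n_E + n_C) - 9}\right)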

For studies where there was insufficient information to calculate Hedges' g using the above method, we used the online effect size calculator developed by Lipsey and Wilson (2001), available at http://www.campbellcollaboration.org/escalc. For pre-post design studies where adjusted means were not provided, we used the critical value relevant to the difference between the peer feedback and control groups from the reported pre-intervention adjusted analysis (e.g., analysis of covariance), as suggested by Higgins and Green (2011). For pre-post design studies where both pre- and post-intervention means and standard deviations were provided, we used an effect size estimate based on the mean pre-post change in the peer feedback group minus the mean pre-post change in the control group, divided by the pooled pre-intervention standard deviation, as this approach minimises bias and improves estimate precision (Morris 2008).
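In the same notation, with pre- and post-intervention means for each group and the pooled pre-intervention standard deviation, the Morris (2008) estimate described above corresponds to

    d_{ppc} = \frac{(M_{post,E} - M_{pre,E}) - (M_{post,C} - M_{pre,C})}{SD_{pre,pooled}}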

Variance estimates for each effect size were calculated using the following formula:
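The formula used here is the standard large-sample approximation for the variance of a standardised mean difference, again written in the notation introduced above:

    v_g = \frac{n_E + n_C}{n_E\, n_C} + \frac{g^2}{2(n_E + n_C)}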

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article

Double, K.S., McGrane, J.A. & Hopfenbeck, T.N. The Impact of Peer Assessment on Academic Performance: A Meta-analysis of Control Group Studies. Educ Psychol Rev 32 , 481–509 (2020). https://doi.org/10.1007/s10648-019-09510-3


Published : 10 December 2019

Issue Date : June 2020

DOI : https://doi.org/10.1007/s10648-019-09510-3


Keywords: Peer assessment; Meta-analysis; Experimental design; Effect size; Formative assessment



Using peer assessment as an effective learning strategy in the classroom

  • Perspective Article
  • Published on: September 12, 2018


  • Assessment
  • Developing effective learners
  • Promoting good progress

If used effectively, peer assessment – a formative assessment strategy that encourages students to comment on the work of their peers – can improve students’ understanding of success criteria, help them to become more engaged in learning and develop their interpersonal skills (Black et al., 2003; Topping, 2017), as well as potentially even reducing teacher workload. Conversely, however, peer assessment can hinder students’ learning if poor-quality, insensitive or unhelpful peer feedback is exchanged, and may strain relationships between learners (Topping, 2017, 2018). The purpose of this article is to examine how these potential issues can be avoided and how peer assessment can be used as an effective learning strategy.  

Training and guiding students to give effective written peer feedback

Guiding students to give effective feedback is important, since they often find it challenging ‘to think of their work in terms of a set of goals’ (Black et al., 2003, p. 49). During peer assessment, learning objectives need to be explicit, and there needs to be clarity around how these can be successfully met (Boon, 2015; Topping, 2018). If students are familiar with success criteria, they can use these to evaluate how far their partner has moved towards accomplishing a learning goal (Min, 2005, 2006).

Feedback should be task-involving, focusing on key elements of the success criteria that have been met and giving details about how the work might be enhanced (Kamins and Dweck, 1999). Guiding students to provide task-involving feedback is more likely to motivate them to learn and make subsequent improvements in their work (Kamins and Dweck, 1999), and research has therefore concentrated on the importance of training learners to provide effective task-involving feedback (e.g. Berg, 1999; Min, 2005, 2006). Several research studies have highlighted the important role teachers play in modelling how to identify strengths and weaknesses in a peer’s work (e.g. Berg, 1999; Min, 2005, 2006). These studies were conducted in higher education but may still apply to students in primary and secondary schools.

A study focusing on Year 6 children in a primary school also found that training in peer assessment skills helped students to provide focused, task-involving feedback (Boon, 2015). The teacher in the study modelled the peer assessment process and gave children time to practise peer assessment on a fictional student’s work, using checklists to scaffold written comments (Min, 2005, 2006; Gielen et al., 2010). These checklists enabled learners to move beyond comments such as ‘You’re really good at this – keep it up!’ to those that referred to success criteria. Whilst interesting, these findings have to be viewed cautiously due to the small sample size and the absence of controls.

As well as ensuring that relevant and useful feedback is given, it is vital that students make good use of it (Gielen et al., 2010). In a study focusing on elementary school peer assessment, children who used peer feedback made more progress in writing than those who had not used it (Olson, 1990). Boon’s action research primary school study (2016a) explored this in more depth and found that students made better use of feedback if it was relevant, if time was given for discussion between peers to clarify misunderstandings, and if students were encouraged to reflect on how the feedback had been used. For example, students were encouraged to say and give examples of how they had used elements of feedback to improve the quality of an information text.

Developing students’ verbal peer feedback

While much research has concentrated on developing written peer feedback (Topping, 2018), students’ use of verbal feedback is also important, particularly in areas such as mathematics, where peer assessment might take place through dialogue rather than a written outcome. Kollar and Fischer (2010, p. 345) suggest that during peer assessment, ‘interactive exchange may be beneficial’ for learning.

Such interaction during peer assessment might involve using dialogue to explore written and verbal comments, and working collaboratively to enhance the products of peer assessment (Kollar and Fischer, 2010). However, Kollar and Fischer’s argument (2010) assumes that students already have the necessary interpersonal skills to interact effectively during peer assessment. This assumption does not sit comfortably with research suggesting that younger students often struggle to work collaboratively (Mercer et al., 2004; Mercer and Sams, 2006). In these cases, peer assessment may become an unmanageable task for students, leading them to use talk in ways that do not support their learning (Mercer et al., 2004; Mercer and Sams, 2006). One such type of dialogue identified by Mercer (2000) has been characterised as disputational talk, where students rigidly adhere to their point of view in discussion and are unwilling to accept alternative viewpoints. A second kind of dialogue that might emerge is cumulative talk, where peers agree with one another’s assertions in an uncritical manner, which again may not be helpful, as students need to be able to constructively examine each other’s work.

However, a third type of dialogue that might be more useful is exploratory talk. This involves hypothesising and reasoning; supporting assertions with good examples; questioning one another; and reaching an agreement based on fruitful discussion (Mercer and Sams, 2006). Students will need guidance in order to use this kind of dialogue rather than disputational or cumulative talk (Mercer et al., 2004). One method of ensuring that exploratory talk takes place is through thinking together , where students are guided to use a set of ground rules for effective communication (Mercer et al., 2004). This involves students asking one another effective questions; reasoning effectively; reaching agreements based on critical discussion and support; and encouraging one another throughout different activities (Mercer et al., 2004; Mercer and Sams, 2006). These ground rules encourage exploratory talk, which shares some features of effective peer assessment. For instance, both exploratory talk and peer assessment involve hypothesising and reasoning (Topping, 2017, 2018).

Black (2007) argues that developing exploratory talk through thinking together is likely to be useful for formative peer assessment because it involves pupils reasoning effectively through discussion. This argument was supported by Boon’s (2016b) study of effective peer assessment processes in primary schools, which found that exploratory talk was useful for peer assessment, given that it involves students critiquing ideas effectively, a necessary dimension of effective peer feedback. Boon (2016b) also found that, following thinking together, children were able to communicate in ways that supported effective peer assessment. For example, children who had previously engaged in arguments and disputes were able to use talk during a maths activity to assess the ongoing ideas of their partner and drive learning forwards. Having considered how teachers might use peer assessment as a learning strategy to improve the quality of written and verbal feedback, it is now time to offer some recommendations for professional practice.

Recommendations

Peer assessment can be a powerful learning strategy if students are adequately prepared for it and have been guided to develop key interpersonal skills (Black et al., 2003; Boon 2016a, 2016b; Topping, 2017, 2018). For peer assessment to have an impact on learning, students need:

  • training to peer assess effectively, which includes using prompts, checklists of criteria, teacher modelling of how to assess work and regular practice
  • to be given time to discuss and reflect on feedback given to improve work
  • guidance on working collaboratively so that they are able to use talk in ways that enable them to hypothesise, reason and critique.  

Berg EC (1999) The effects of trained peer response on ESL students’ revision types and writing quality. Journal of Second Language Writing 8(3): 215–241.

Black P (2007) Full marks for feedback. Make the Grade (Journal of the Institute of Educational Assessors) 2: 18–21.

Black P, Harrison C, Lee C et al. (2003) Assessment for Learning: Putting it into Practice. Maidenhead: Open University Press.

Boon SI (2015) The role of training in improving peer assessment skills amongst year six pupils in primary school writing: An action research enquiry. Education 3–13: International Journal of Primary, Elementary and Early Years Education 43(6): 664–680.

Boon SI (2016a) Increasing the uptake of peer feedback in primary school writing: Findings from an action research enquiry. Education 3–13: International Journal of Primary, Elementary and Early Years Education 44(2): 212–225.

Boon SI (2016b) How can peer assessment be used in ways which enhance the quality of younger children’s learning in primary schools? EdD thesis, University of Leicester, UK.

Gielen S, Peeters E, Dochy F et al. (2010) Improving the effectiveness of peer feedback for learning. Learning and Instruction 20(4): 304–315.

Kamins ML and Dweck CS (1999) Person versus process praise and criticism: Implications for contingent self-worth and coping. Developmental Psychology 35(3): 835–847.

Kollar I and Fischer F (2010) Peer assessment as collaborative learning: A cognitive perspective. Learning and Instruction 20(4): 344–348.

Mercer N (2000) Words and Minds: How We Use Language to Think Together . London: Routledge.

Mercer N, Dawes R, Wegerif R et al. (2004) Reasoning as a scientist: Ways of helping children to use language to learn science. British Educational Research Journal 30 (3): 367–385.

Mercer N and Sams C (2006) Teaching children how to use language to solve maths problems. Language and Education 20(6): 507–528.

Min HT (2005) Training students to become successful peer reviewers. System 33(2): 293–308.

Min HT (2006) The effects of trained peer review on EFL students’ revision types and writing quality. Journal of Second Language Writing 15(2): 118–141.

Olson VLB (1990) The revising processes of sixth-grade writers with and without peer feedback. Journal of Educational Research 84(1): 22–29.

Topping KJ (2017) Peer assessment: Learning by judging and discussing the work of other learners. Interdisciplinary Education and Psychology 1(7): 1–17.

Topping KJ (2018) Learning by Peer Assessment: Appraising, Reflecting, Discussing . New York and London: Routledge.


Skip to Content

Other ways to search:

  • Events Calendar
Student Peer Assessment

Student peer assessments are structured opportunities for students to provide and receive meaningful feedback on their work from their classmates. Engaging in peer assessment (also called peer review) fosters important skills for both the student doing the review and the one receiving it. By serving as reviewers, students learn how to provide constructive feedback. Students receiving feedback gain a fresh perspective on their work and can make improvements prior to their final submission. By helping students learn how to give and receive constructive feedback, peer assessments equip students with valuable skills that transfer to real-world settings where collaborative work is common. This is especially the case for students in STEM disciplines, where peer review is an important component of the publication process. Taking part in a collaborative process to improve each other's work also promotes higher engagement with the course material, a better understanding of evaluation criteria and a greater sense of belonging among students, all of which are known to improve student academic success (Cho and MacArthur, 2010; Gilken and Johnson, 2019).

Best Practices to Incorporate Peer Feedback in Assessments

Peer assessments can improve student work without a proportionate increase in instructor workload. Nonetheless, for peer review to be effective, instructors need to prepare students with adequate instruction and examples. Expand the boxes below to learn more about best practices and resources for incorporating student peer assessments in your courses.

Before Peer Review

Instructors should identify a particular assignment or task in the course that could benefit from peer review. This includes multi-step projects and formative assessments (e.g., low-stakes homework or five-minute essays). Ideally, use peer review for formative feedback rather than as a basis for grades, since students may be inconsistent in how they apply the rubric. Instead, consider assessments on which students have ample time to improve their work in response to the feedback received before a future higher-stakes submission or summative assessment. Some examples of summative assessments that can include a component of peer review on initial drafts are written essays, storyboards, project reports and presentations. Another way to incorporate peer review is to allow students to reflect on and evaluate the contribution of each group member to a project or presentation: what worked well and what could be done better.

Clearly explain why peer review is important and how it connects to the learning objectives of the course.

Anticipate what tools will be used for the peer-review process, and provide students with instruction and support for using them. For example, students can be taught to use the suggesting or track changes feature in Google Docs or MS Word to provide feedback. Certain settings in MS Word allow for double-blind review of documents in which the name of the student or commenter is not visible, limiting potential biases in review. You may also ask students to bring a named copy and an unnamed copy of their work to class for double-blind peer review. You can create a peer review assignment in Canvas that is associated with a rubric for students to use to evaluate individual or group assignments. Alternatively, you may adopt a template to use Qualtrics or Google Forms for peer review.
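
Where submissions are collected as files outside the LMS, the unnamed-copy workflow above can also be scripted. The sketch below is a minimal illustration under assumed conditions (it is not a feature of Canvas, Word or Google Docs): each submission is a local file named after its author, each file is copied under a random code for reviewers, and a key file lets the instructor map codes back to authors afterwards.

    # Minimal sketch: anonymise submissions for double-blind peer review.
    # Assumption: each submission is a file named after its author, e.g. "jane_doe.docx".
    import csv
    import random
    import shutil
    from pathlib import Path

    def anonymise_submissions(src_dir: str, out_dir: str, key_file: str) -> None:
        src, out = Path(src_dir), Path(out_dir)
        out.mkdir(parents=True, exist_ok=True)

        files = sorted(p for p in src.iterdir() if p.is_file())
        codes = random.sample(range(1000, 10000), k=len(files))  # unique 4-digit codes

        with open(key_file, "w", newline="") as fh:
            writer = csv.writer(fh)
            writer.writerow(["author_file", "anonymous_code"])
            for path, code in zip(files, codes):
                # Reviewers only ever see the coded copy; the key stays with the instructor.
                shutil.copy(path, out / f"submission_{code}{path.suffix}")
                writer.writerow([path.name, code])

    anonymise_submissions("submissions/", "anonymised/", "review_key.csv")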

Consider in advance what process you will use to assign peer reviewers to students. Peer-review groups can be directly assigned or set up randomly in Canvas . Always leave room for flexibility in changing assigned groups depending on student needs.

Set expectations around the timeline for providing feedback and whether the review will be done as an in-class activity or completed outside the classroom. If the assignment under review is shorter than four pages, does not require detailed written feedback and the class is small (fewer than 40 students), consider allocating class time for the review. If the review is assigned outside the classroom, give students guidelines on how much time they should spend reviewing an assignment. You may also assign points for completing the review, to ensure all students receive feedback on their assignment.

Construct a rubric that allows students to provide detailed feedback, for example a checklist rubric (docx) or single-point rubric (docx) with space for comments. Alternatively, you may create a feedback form that contains specific questions to guide the peer review process, accompanied by a detailed analytic rubric. Co-creating rubrics with students may also provide an additional avenue for students to engage with the material, because it requires them to identify and consider the criteria on which their work should be evaluated.

Train students on the rubric so that they can apply it effectively and consistently. For example, you may set aside time in class for students to practice applying the rubric to a sample of work. Then facilitate a discussion on how students used the rubric and on ways to provide effective feedback. Consider using at least one class session to discuss best practices in providing feedback and the distinction between reviewing and editing. Remind students that they are evaluating the work and not the person. At the same time, remind them that sharing work makes the author vulnerable, and that they should avoid value judgements.

Model the type of feedback students should expect to receive or provide in assessments. This may include providing a short summary of the work, along with what aspects work well and what could be improved. If needed, take time in class to discuss what good feedback looks like. Some sample prompts include (Bean, 2009):

  • Write out at least two things that you think are particularly strong about this draft.
  • Identify two or three aspects of the draft where there is room for growth or improvement.
  • Make two or three directive statements recommending the most important changes that the writer should make in the next draft.

After Peer Review

The review work has been done; now what? How can you help students integrate what they have learned in the review process and improve their work?

Here are a few options for how to proceed:

  • Before receiving peer feedback, students do a self-assessment using a rubric or based on embedded prompts. Then, they compare their observations to the peer feedback.
  • Instead of directly sharing all feedback with individuals or groups of students, as an instructor you may summarize the quantitative and qualitative feedback and address common themes during class (one way to do this is sketched after this list).
  • Students discuss the peer feedback with the instructor to help develop strategies for improvement.
  • In a non-blind review process, students could discuss the feedback with their peer reviewer to seek clarification and prioritize the comments to address.
  • After revising their work in response to peer feedback, students can summarize the feedback they received, describe the changes they made in response to the feedback or provide a justification for not incorporating suggested changes.
  • After revising their work, students can conduct a self-assessment in the form of a memo that describes the changes they made in response to peer review and their reflection on how review improved their work. 
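
The summarising option mentioned above can be handled with a short script. The sketch below is illustrative only and assumes feedback has been exported to a CSV with reviewer, author, criterion, score and comment columns (an assumed layout, not tied to Canvas, Qualtrics or any particular tool); it reports the mean score per rubric criterion and a couple of verbatim comments as prompts for class discussion:

    # Minimal sketch: summarise peer feedback per rubric criterion for class discussion.
    # Assumption: rows look like reviewer, author, criterion, score, comment.
    import csv
    from collections import defaultdict
    from statistics import mean

    def summarise_feedback(csv_path: str) -> None:
        scores = defaultdict(list)
        comments = defaultdict(list)

        with open(csv_path, newline="") as fh:
            for row in csv.DictReader(fh):
                scores[row["criterion"]].append(float(row["score"]))
                if row["comment"].strip():
                    comments[row["criterion"]].append(row["comment"].strip())

        for criterion, values in scores.items():
            print(f"{criterion}: mean {mean(values):.1f} across {len(values)} reviews")
            for comment in comments[criterion][:2]:  # a couple of verbatim examples
                print(f"  - {comment}")

    summarise_feedback("peer_feedback.csv")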

References:

Bean, John C. (2011). Engaging ideas: The professor’s guide to integrating writing, critical thinking, and active learning in the classroom (second edition). San Francisco: Jossey-Bass.

Center for Teaching Innovation. (n.d.). Peer Assessment . Cornell University

Center for Teaching Innovation. (n.d.). Teaching students to evaluate each other . Cornell University

Cho, K., & MacArthur, C. (2010). Student revision with peer and expert reviewing . Learning and Instruction, 20 (4), 328–338.

Gilken, J.M. & Johnson, H.L. (2021). Implementing a Peer Feedback Intervention within a Community of Practice Framework . Community College Journal of Research and Practice , 45:3, 155-166.

Stearns Center Writing Across the Curriculum (n.d.). How to Help Students Give Effective Peer Response . George Mason University.

Stearns Center Writing Across the Curriculum (n.d.). Tips for Commenting on Student Writing . George Mason University.

WAC Clearinghouse. (2006, April). Creating effective peer review groups to improve student writing . Colorado State University.

Further reading and resources:

Stevens, D. D., & Levi, A. J. (2013). Introduction to rubrics: An assessment tool to save grading time, convey effective feedback, and promote student learning (second edition). Sterling, VA: Stylus.

Sweetland Center for Writing.(n.d.). Using peer review to improve student writing . University of Michigan.

Reframing How We Assess Student Writing

The work of assessing writing assignments can be shared with students, creating a critical learning opportunity for them.

Every teaching role has its unique burden. Science teachers invest long hours in preparing laboratory experiments with expensive and sometimes hazardous materials. Math teachers wrestle with innumeracy and negative stereotypes of math, especially at the higher levels. History teachers work hard to avoid turning their subject matter into the rote memorization of isolated facts.

English teachers like myself, and upper division ones especially, are witness to a tidal wave of written work: quick writes, compositions, literary responses, annotated bibliographies, timed and take-home essays, and more. I consider myself an effective and disciplined teacher, but I am regularly submerged under the work produced by my 180+ students.

The problem with the traditional method of handling the paper load is that it is still fundamentally teacher-centered, relying on our time management and efficiency rather than on innovations that broaden the number of stakeholders and focus on qualitative outcomes over more traditional grading. A better way, one that I first considered as I learned to teach students to think about their audience, involves turning students from mere producers of writing into scholars and theorists whose experiences carry authority, merit, and value.

Students should write a lot. The following strategies aim to help English teachers in particular, and teachers of writing more broadly, understand how to reframe the assigning and assessing of writing to improve students’ skills—while improving teachers’ mental health, too.

4 Ways to Increase Student Writers’ Authority

1. Complicate the audience: One of the biggest problems with in-school writing, whether in high school or college, is the one-dimensionality of the compositions. They are generally intended for an audience of one—the person doing the grading—and read like an extended advertisement, parroting the grader’s ideas back at them.

My time as a journalism teacher at the Reynolds High School Journalism Institute showed me how powerful writing for a complex, multifaceted audience can be. Rather than writing to impress, students should aim to communicate their ideas, regardless of topic, to a heterogeneous and educated but uninformed audience that is willing to be convinced—if the writer conveys information with careful, measured argument and prose. This requires that teachers teach and reinforce an understanding of audience as an expansive, open category rather than a closed one.

2. Develop the students' ability to synthesize: The most important aspect of the AP English Language and Composition exam, and one that is increasingly relevant outside of the classroom, is the synthesis essay. For this task, students read a series of short texts on a topic and then create their own informed position. Instead of a straightforward argumentative essay—though there is that, too, on the exam—students must aim to be inclusive and nuanced, situating their thought among ideas they find interesting and those they actively disagree with. The goal here is not to win the argument but to demonstrate an understanding of the topic and take a stance, anticipating and responding to a reader's counterarguments and questions.

In the classroom, teachers can easily create opportunities for synthesis writing. For example, teachers can have students draw on their peers’ ideas about a topic as the bank of information they can use in their own responses. The resulting papers will aim to integrate and respond to the many classroom perspectives and teach students that their classmates are sources of wisdom. An essay about America’s place in the world, the purpose of education, or our duties or obligations to the environment, for instance, would produce a wide range of opinions that students could then synthesize, expanding their own understanding and situating themselves among the stances of their peers.

3. Focus on individual growth over grades: Overemphasizing grades can stifle growth and creativity. Students of writing should understand how they are developing relative to their own past performance—in addition to where they stack up against other writers generally. Ideally, this means that teachers would work with students on developing a writing portfolio, a cross-section of work completed throughout the term or class that reflects their strengths and growth as writers.

Using those materials as a historical record, teachers can lead students through the revision process in a deeper, more comprehensive way—not just fixing the small, conventional things but also looking at persistent, high-level structural and organizational issues that, once addressed, can turn adequate writers into exceptional ones.

One of the main differences I have found between the typical and the excellent student writer is an awareness of one’s own prose and style. After taking a close, honest look at their body of work, students become more comfortable speaking to their own writing in a mature and introspective way. At a bare minimum, requiring revisions will ensure that students know how to improve their work, taking a look at it later with clear eyes.

4. Let students practice self and peer assessment: Every year, I tell my students that my objective is to make myself irrelevant—I’ll help during the course of the year, but they eventually need to go it alone. In my class, this has meant incorporating metacognition as a part of my classroom’s daily practice.

Students spend a great deal of time scoring essays—their own, those of their peers, and, when I can get them, sample essays from other teachers or the AP exam. While rubrics are critical to ensuring reflective conversations, it's not enough to ask students to evaluate the writing against the rubric alone. Rather, I try to have students mimic the kind of conversations that teachers have with colleagues when they're assessing writing: the goal is not just a grade, but a clear, persuasive explanation that identifies specific passages and choices the author made.

From there, teachers can have high-level conversations about why the student score is right or wrong, and move on to a brief, targeted writing conference on specific ways to tweak the piece. As students gain fluency, these writing conferences can be student-to-student, gradually removing the teacher from the equation (I told you I’d make myself irrelevant!).

The important takeaway here—across all of these strategies—is that teachers are not the best or only source of assessment, and they are not the only audience for writing. It is far better to teach students how to fish; far better for teachers to spread the hard work of assessment around so more writing practice can be incorporated; and far better for student writers to consider the craft from many perspectives, and with an awareness of the complexity of real audiences of readers.

Peer Assessment

Peer assessment is students taking responsibility for assessing the work of their peers against defined assessment criteria:

  • it can lead to feedback (peer review) and/or summative grades
  • it can be used for individual work, contributions to group work, or group work outputs
  • it can be anonymous, or not

Why should I use peer assessment?

Phil Race has outlined a number of reasons to use peer assessment:

  • students will get the chance to find out more about assessment culture
  • lecturers have less time to assess than before
  • learning is enhanced when students have contributed to their marking criteria
  • assessing is a good way to achieve deep learning
  • students can learn from the successes of others
  • students can learn from others' mistakes

Race, P. (2001). The Lecturer's Toolkit. London: Kogan Page, pp. 94–95.

Where can I use peer assessment?

Potentially anywhere... But wherever you use it you will need to consider the assessment carefully and possibly tweak it to make it fit well.

Peer assessment requires students to develop, or already have, some skill in:

  • evaluating their own performance
  • identifying strengths, weaknesses and areas to improve
  • comprehending the assessment criteria

Setting up a task

Peer review

With oversight, peer review can provide an effective way of generating feedback, as part of a formative piece of work or as a stage towards a summatively assessed piece of work, for example.

Essay → Peer Review → peer feedback

Essay plan → Peer review → Feedback → Final essay → feedback

Peer review is also a valuable way to help students engage more deeply with assessment criteria.  In group settings, peer review can help students reflect on their group processes and on interpersonal attributes. 

More about Peer Assessment and Peer Review

Buddycheck is a Canvas-integrated tool for peer evaluation of contributions to group work.

Options for peer assessment

You can conduct peer assessment manually (e.g. by asking students to exchange scripts by email), but for larger groups it will be easier to use one of the tools available in Canvas. These will help you to manage the process, make sure it is fair, and keep oversight of the quality of the reviews that students are writing. Learn more about setting up peer-assessed assignments in Canvas and PeerMark within Turnitin.

Using BuddyCheck to support peer review

Business School policy is that peer review will be carried out on any module where group work accounts for 50% or more of the module marks.

Peer assessment of lab reports

Peer feedback is used on Stage 1 lab reports in the School of Engineering to encourage students to recognise good and bad practice and to correct mistakes before the final submission of the assessment.

Peer review to improve essay writing

Compulsory Peer Review for First Year students in the School of English Literature, Language and Linguistics.

Peer assessment

Peer assessment or peer review provides a structured learning process for students to critique and provide feedback to each other on their work. It helps students develop lifelong skills in assessing and providing feedback to others, and also equips them with skills to self-assess and improve their own work. 

If you are interested in facilitating a team member evaluation process for group projects, see the page on  Teaching students to evaluate each other .

Why use peer assessment? 

Peer assessment can: 

  • Empower students to take responsibility for and manage their own learning. 
  • Enable students to learn to assess and give others constructive feedback to develop lifelong assessment skills. 
  • Enhance students' learning through knowledge diffusion and exchange of ideas. 
  • Motivate students to engage with course material more deeply. 

Considerations for using peer assessment 

  • Let students know the rationale for doing peer review. Explain the expectations and benefits of engaging in a peer review process. 
  • Consider having students evaluate anonymous assignments for more objective feedback. 
  • Be prepared to give feedback on students’ feedback to each other. Display some examples of feedback of varying quality and discuss which kind of feedback is useful and why. 
  • Give clear directions and time limits for in-class peer review sessions and set defined deadlines for out-of-class peer review assignments. 
  • Listen to group feedback discussions and provide guidance and input when necessary. 
  • Student familiarity and ownership of criteria tend to enhance peer assessment validity, so involve students in a discussion of the criteria used. Consider involving students in developing an assessment rubric. 

Getting started with peer assessment 

  • Identify assignments or activities for which students might benefit from peer feedback. 
  • Consider breaking a larger assignment into smaller pieces and incorporating peer assessment opportunities at each stage. For example, assignment outline, first draft, second draft, etc. 
  • Design guidelines or   rubrics   with clearly defined tasks for the reviewer. 
  • Introduce rubrics through learning exercises to ensure students have the ability to apply the rubric effectively. 
  • Determine whether peer review activities will be conducted as in-class or out-of-class assignments. For out-of-class assignments, peer assessments can be facilitated through Canvas using tools such as FeedbackFruits peer review and group member evaluation, Canvas peer review assignment, or Turnitin. See the Comparison of peer evaluation tools to learn more and/or set up a consultation by contacting CTI.
  • Help students learn to carry out peer assessment by modeling appropriate, constructive criticism and descriptive feedback through your own comments on student work and well-constructed rubrics. 
  • Incorporate small feedback groups where written comments on assignments can be explained and discussed with the receiver. 


Peer assessment

Students mark and give feedback on their peers' work, which helps them understand how they will be assessed. Here's how to incorporate this in your teaching.

1 August 2019

Students grade and/or give feedback comments on each other’s work.

You can use either:

  • formative work that does not receive a mark towards the final grade, or
  • summative work which counts towards final grades and degree classifications.

Examples could include group work participation, oral presentations, essays and lab reports.

Educational benefits for your students

Involving students in assessment is a valuable way to help them understand assessment criteria and academic requirements.

Peer assessment can be especially valuable for international students who may have little understanding of UK assessment practices, and it can also help students from diverse backgrounds who are transitioning to university.

How to build peer assessment into your teaching

Preparation

Good preparation is essential. Most problems with peer assessment arise because learners have not been adequately prepared.

Introduce peer assessment and explain how it will help students. Without this justification, students might think they are just doing your marking.

Good planning should also include ‘low risk’ or practice activities such as guided marking – where students mark and discuss previously-submitted assignments with peers and the teacher. See the Guided marking toolkit .

Give guidance on how to write constructive feedback.

Feedback form

Use a feedback form to guide student feedback comments.  

See Using proformas toolkit  for an example of a feedback form.

Anonymisation

Where possible, ensure assignments are submitted and graded anonymously.

Moodle Workshop can be used to manage online peer assessment.

Digital Education can advise ( [email protected] ).

Rehearsal marking

For summative peer assessment, arrange both a briefing session and a rehearsal marking session.

After submission of the assignment, distribute three sample assignments to the whole cohort. Discuss these samples either online or in a lecture/seminar and clarify any difficult content.

Moderation and complaints

All assessments at UCL must be robust and fair.

Ensure that all peer assessor marks are moderated and that students are reassured of this.

Set up and inform students of the complaints procedure.

Following these steps will ensure the trustworthiness of the feedback and build students' confidence in the fairness of the marks.

  • Carnell, B. 2015. Aiming for autonomy: Formative peer assessment in a final-year undergraduate course . Assessment & Evaluation in Higher Education.
  • Falchikov, N. and J. Goldfinch. 2000. Student peer assessment in higher education: A meta-analysis comparing peer and teacher marks . Review of Educational Research 70 (3): 287–322.
  • McConlogue, T. 2014. Making judgements: Investigating the process of composing and receiving peer feedback . Studies in Higher Education 40(9): 1495–1506.
  • Orsmond, P. 2004. Self and Peer Assessment: Guidance on Practice in the Biosciences . Centre for Bioscience, The Higher Education Academy.
  • Assessment and feedback: resources and useful links .

This guide has been produced by the UCL Arena Centre for Research-based Education . You are welcome to use this guide if you are from another educational facility, but you must credit the UCL Arena Centre. 

Further information

More teaching toolkits  - back to the toolkits menu

[email protected] : contact the UCL Arena Centre

UCL Education Strategy 2016–21

Assessment and feedback: resources and useful links 

Six tips on how to develop good feedback practices  toolkit

Download a printable copy of this guide  

Case studies : browse related stories from UCL staff and students.

Sign up to the monthly UCL education e-newsletter  to get the latest teaching news, events & resources.

Peer Assessment: Definition, Benefits, Challenges

Peer assessment is generally described as a range of activities focusing on enabling students to evaluate and provide feedback on the work of the people they study with. It allows students to gain more control over their learning and helps develop essential life skills to constructively critique other people’s work as well as improve their own.

The main benefits of peer assessment are increased critical thinking, learning through giving feedback, and active engagement in the process. A meta-analysis shows that using peer assessment promotes learning and enables the teacher to focus on helping students with more significant difficulties or more advanced tasks (Double et al., 2019). It also gives students a basis for communicating feedback constructively and for accepting the input of others.

One of the main challenges of giving feedback is the varying knowledge level of students. Some students may have a limited understanding of the subject and therefore mark their peers' work ineffectively. Other challenges include friendship bias (elevated grades due to personal relationships among students) and discrimination by gender, age or ethnicity. Receiving feedback can also be challenging, since it involves managing ego and emotion when comments are negative. Some students struggle to take criticism and fall into disagreements with their peers, which can affect the classroom atmosphere. The best strategy is to encourage students to justify their assessments with arguments and to create well-understood criteria. Additionally, it is practical to make both assessors and assessed peers anonymous, to minimize the chances of bias or judgment and improve the honesty of the evaluation.

Peer assessment supports a student-centered form of education, which improves metacognitive thinking and enhances students' active engagement in their learning. It also enables students to look at peers' work from the teacher's angle, which creates a collaborative model of both teaching and learning.

Double, K., McGrane, J., & Hopfenbeck, T. (2019). The impact of peer assessment on academic performance: A meta-analysis of control group studies. Educational Psychology Review.


Peer Assessment

Using student-to-student peer assessment.

Peer assessment describes a range of activities in which students evaluate and provide feedback on the work of their peers. Formative peer assessment involves feedback on drafts of work before the final product is submitted. Summative peer assessment includes evaluation of other students' products or participation and/or contributions as part of a grade. Peer assessment is commonly used as a strategy for students to assess their fellow students' contributions to group work, and is particularly valuable in team-based learning (learn more from team-based learning).

Formative and Summative Peer Assessment

Peer assessment can take many forms that can vary depending on the learning goals, the disciplinary context, and available technologies. Peer assessment is often characterized as taking either a formative or summative approach.

Formative Peer Assessment

  • Students are introduced to the assignment and criteria for assessment
  • Students are trained and given practice on how to assess and provide feedback
  • Students complete and submit a draft
  • Students assess the drafts of other students and give feedback
  • Students reflect on the feedback received and revise their work for final submission
  • Assignments are graded by the instructor
  • Instructor reflects on the activity with the class

Summative Peer Assessment

  • Students are trained and given practice on how to use the grading rubric and provide feedback
  • Students complete and submit a final assignment
  • Students assess the assignments of 3 to 6 other students using the grading rubric and provide feedback
  • Grades are determined for each student by taking the median score given by their peers (one way to tally this is sketched after this list)
  • Instructor and students reflect on the activity with an emphasis on reinforcing the learning that occurred in the giving of peer feedback
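
As a concrete illustration of the median-based tally mentioned above, the following sketch assumes peer grades have been exported as rows of author, reviewer and score (an assumed CSV layout, not the output of any particular tool); it computes each student's final score as the median of the peer scores received and flags submissions with fewer reviews than an assumed minimum:

    # Minimal sketch: tally summative peer grades by taking the median per author.
    # Assumption: a CSV with columns author, reviewer, score.
    import csv
    from collections import defaultdict
    from statistics import median

    MIN_REVIEWS = 3  # assumed threshold below which the instructor should check manually

    def tally_peer_grades(csv_path: str) -> dict:
        grades = defaultdict(list)
        with open(csv_path, newline="") as fh:
            for row in csv.DictReader(fh):
                grades[row["author"]].append(float(row["score"]))

        finals = {}
        for author, scores in grades.items():
            if len(scores) < MIN_REVIEWS:
                print(f"Check manually: {author} received only {len(scores)} review(s)")
            # The median is robust to a single overly harsh or generous reviewer.
            finals[author] = median(scores)
        return finals

    print(tally_peer_grades("peer_grades.csv"))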

Key Questions to Answer in Peer Assessment

Before implementing peer assessment within your course, consider the following (modified from Gielen, 2010 and Topping, 1998).

  • Object of assessment – What will students produce? A paper, web page, poster, presentation, video, group project participation/contribution? What skills are students expected to develop and demonstrate as they produce this artifact?
  • Product of peer assessment – What is the output that students create while assessing their peers? Grades, rubrics, rankings, guided questions, qualitative feedback?
  • Formative or Summative – Will students provide both formative and summative feedback or just one?
  • Grading – How will students be graded on the assignment? Will peer assessment replace the instructor assessment (substitutional)? Will students receive marks or feedback from both peers and instructors (partially substitutional)? Or, will peer assessment provide additional feedback but be primarily assessed by the instructor for the final grade (supplementary)? Will you give students feedback or assign a grade, with or without evaluation, for their assessments of their peers?
  • Reviewer organization and directionality – How will peer assessors be assigned (e.g. randomized, self-selected, instructor selected, small group, pair-matched)? How many assessments will you require each student to complete? Will the reviews be anonymous, or will there be dialogue between the peers reviewing each other? (One randomized assignment approach is sketched after this list.)
  • Training – How experienced and confident are the students with peer assessment? How will students be trained to assess the work of their peers and provide feedback? At what point in the process does the training occur?
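
One simple way to randomize reviewer assignment, as flagged in the list above, is to shuffle the class roster and then rotate it: every student reviews the same number of peers, nobody reviews their own work, and every submission receives the same number of reviews. The sketch below is illustrative only; the names and the number of reviews per student are assumptions:

    # Minimal sketch: random round-robin assignment of peer reviewers.
    import random

    def assign_reviewers(students, reviews_per_student=2):
        assert reviews_per_student < len(students), "need more students than reviews each"
        order = students[:]
        random.shuffle(order)
        n = len(order)
        assignments = {s: [] for s in order}
        for offset in range(1, reviews_per_student + 1):
            for i, reviewer in enumerate(order):
                author = order[(i + offset) % n]  # rotation guarantees reviewer != author
                assignments[reviewer].append(author)
        return assignments

    roster = ["Ana", "Ben", "Chi", "Dev", "Eli"]  # illustrative names
    for reviewer, authors in assign_reviewers(roster).items():
        print(f"{reviewer} reviews: {', '.join(authors)}")

Because the rotation is applied to a shuffled list, pairings differ from one run to the next while the workload stays balanced; for anonymous review, the printed list can be kept by the instructor rather than shared.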

Types of Peer Assessment

  • Formative Feedback – Students provide formative and constructive feedback on drafts that students are able to revise before submitting a final product. This can be done as an in-class activity or using a variety of online tools, including Canvas. The final grade is given by the instructor or TA. This can also be done using a sequence of assignments. Students get peer feedback after each assignment and then are able to apply the feedback to each subsequent assignment with the goal of improving over time.
  • Peer grading – Students assign grades to their peers based on assessment criteria. Peer grading is typically done using online tools that randomly and anonymously distribute assignments for review by a specified number of other students. Students grade their peers using an online rubric and final scores for a particular assignment are typically tallied by taking the median value of all peer grades that assignment has received.
  • Peer assessment of group work participation – Grading group work can be a challenge for instructors because it is difficult to determine the contributions of each individual student. Many instructors use peer assessment to supplement instructor grades by adding a participation component to group assignments. Students give a participation score and overall comments for each group member using a rubric that is based on criteria that the instructor establishes. The instructor then uses these evaluations to give each student an overall participation grade for the assignment.

Explore Tools and Strategies

Canvas peer reviews.

Enable students to provide feedback on another student’s assignment submission in Canvas.

Collaboration Tools

Enhance the ability of student teams to collaborate effectively.

Peer Assessment (PA) Resources from McGill University website

  • Sample guiding questions for written assignments and oral assignments and more, download the Designing Peer Assessment Assignments: A Resource Document for Instructors (PDF)
  • For curated peer assessment tools and example forms, download Using Peer Assessment to Make Teamwork Work: A Resource Document for Instructors (PDF)
  • Learn about 4 different case studies presented highlighting how faculty members implemented PA via the Cases: Peer assessment posts on the McGill blog webpage
  • Explore examples intended to inspire you to create guiding questions, rubrics, checklists or rating scales that are appropriate for peer assessment with your students on the Examples of PA assignments webpage .

Additional Resources

  • Baker, K. M. (2016). Peer review as a strategy for improving students’ writing process. Active Learning in Higher Education , 17 (3), 179-192.
  • Carvalho, A. (2013). Students’ perceptions of fairness in peer assessment: evidence from a problem-based learning course. Teaching in Higher Education , 18 (5), 491-505.
  • Cho, K., & MacArthur, C. (2011). Learning by reviewing. Journal of Educational Psychology , 103(1), 73-84.
  • Gielen, S., Dochy, F., & Onghena, P. (2011). An inventory of peer assessment diversity. Assessment & Evaluation in Higher Education , 36(2), 137-155
  • Kaufman, J. and Schunn, C. (2011). Students’ perceptions about peer assessment for writing: their origin and impact on revision work. Instructional Science , 39(3), 387-406.
  • Liu, J. and Law, N. (2012). Online peer assessment: effects of cognitive and affective feedback. Instructional Science. 40(2), 257-275.
  • Moore, C., & Teather, S. (2013). Engaging students in peer review: Feedback as learning. Teaching and Learning Forum 2013 . http://clt.curtin.edu.au/events/conferences/tlf/tlf2013/refereed/moore.html
  • Nicol, D., Thomson, A., Breslin, C. (2013). Rethinking feedback practices in higher education: a peer review perspective. Assessment and Evaluation in Higher Education , 39(1), 102-122.
  • Potter, T., Englund, L., Charbonneau, J., MacLean, M. T., Newell, J., & Roll, I. (2017). ComPAIR: A New Online Tool Using Adaptive Comparative Judgment to Support Learning with Peer Feedback. Teaching & Learning Inquiry , 5 (2), 89-113.
  • Sluijsmans, D. M., Brand-Gruwel, S., & van Merriënboer, J. J. (2002). Peer assessment training in teacher education: Effects on performance and perceptions. Assessment & Evaluation in Higher Education , 27 (5), 443-454.
  • Topping, K. (1998). Peer assessment between students in colleges and universities. Review of Educational Research , 68 (3), 249-276.
  • Wride, M. (2017). Guide to peer-assessment. Academic Practice, University of Dublin Trinity College. Retrieved from https://www.tcd.ie/CAPSL/assets/pdf/Academic%20Practice%20Resources/Guide%20to%20Student%20Peer%20Assessment.pdf

Peer Assessment, by the Center for Excellence in Learning and Teaching (CELT) at Iowa State University is licensed under Creative Commons BY-NC-SA 4.0 . This work, Peer Assessment, is a derivative of Ideas and Strategies for Peer Assessments webpage developed by The University of British Columbia. (retrieved on May 13, 2020) from https://isit.arts.ubc.ca/ideas-and-strategies-for-peer-assessments/.

The CELT Peer Assessment webpage is adapted with permission from the Arts ISIT at The University of British Columbia’s Ideas and Strategies for Peer Assessments webpage .

Better learning through peer feedback

PeerStudio makes peer feedback easier for instructors to manage, and students to learn from. It automates away the tedious aspects of peer reviewing, and learners use a comparison-based interface that helps them see work as instructors see it.

More than 39,000 learners in dozens of universities use PeerStudio to learn design, writing, psychology, and more.

Research-based, comparative peer review

Comparative peer reviewing allows students to see what instructors see

PeerStudio leverages the theory of contrasting cases: comparing similar artifacts helps people see deeper, subtler distinctions between them. Think of wine tasting. It's much easier to notice the flavors when you compare one wine to another.

How PeerStudio uses comparisons

When students review in PeerStudio, they use a rubric that instructors specify. In addition to the rubric, PeerStudio finds a comparison submission within the pool of submissions from their classmates. While reviewing, students compare the target submission to this comparison submission using the provided rubric.

AI-backed, always improving

PeerStudio uses an artificial-intelligence backend to find just the right comparison for each learner and submission. (Among other features, we use learners' history of reviewing, and that of their classmates, to identify optimal comparison submissions.) This backend is always learning, with accuracy improving even within the same assignment.

Guided reviewing

Learners often want to review well but don't know how. We help students learn how to review through interactive guides, and we automatically detect poor reviewing.

How PeerStudio creates a better classroom

Inspire students with peer work.

Most students in college today see so little of the amazing work their classmates do. PeerStudio creates an opportunity to see inspiring work and get more feedback on it. Instructors also tell us that their students enjoy writing, designing and completing assignments for a real audience, and how the benefits of peer reviewing spill over into clearer presentations and more insightful questions in class.

CLOUD-BASED

Work as your students do.

PeerStudio is cloud-based, so it is available from any computer and allows students to view most common assignment materials without any additional software: PDFs, videos (including auto-embedded YouTube), and formatted text (bold, italics, tables, ...) are all supported. And because it's backed up every night, your students (and you!) never have to worry about losing work.

CLASSROOM MANAGEMENT

Automated assignment of reviewers.

With traditional peer feedback tools, instructors have more, not less, work. You not only need to write the assignment and the rubric, but also assign students to review each other, make sure they do it on time, re-assign students who miss the deadline and so on.

PeerStudio automates it all. It uses an AI backend to assign reviewers to submissions, automatically balances reviewer load, and more. It even sends email reminders to students who haven't submitted their work or reviewed others.

REMAIN IN CONTROL

Grant deadline-extensions, keep an eye on review quality, and more.

PeerStudio automates the busywork, but you remain in control. On your dashboard, you can grant one-time exceptions, track who's late, even read individual reviews. Many instructors also use their dashboard to find examples of excellent work to show in class, or to recognize excellent peer feedback.

We can help you implement peer reviewing correctly and set up PeerStudio in your class. To get started, schedule a time to chat with us.

What our instructors say

PeerStudio has helped instructors teach design, English, music, and more.

Ready to get started?

We're also happy to help you think through introducing peer reviewing in your classroom, regardless of whether you use PeerStudio. Get in touch with us at [email protected] .
