Center for Teaching Innovation

Ideas for group and collaborative assignments, and why collaborative learning matters.

Collaborative learning can help

  • students develop higher-level thinking, communication, self-management, and leadership skills
  • students explore a broad range of perspectives, with opportunities for student voice and expression
  • promote teamwork skills & ethics
  • prepare students for real-life social and employment situations
  • increase student retention, self-esteem, and responsibility

Collaborative activities & tools

Group brainstorming & investigation in shared documents.

Have students work together to investigate or brainstorm a question in a shared document (e.g., a structured Google Doc, Slides deck, or Sheet) or on an online whiteboard, and report their findings back to the class.

  • Immediate view of contributions
  • Synchronous & asynchronous group work
  • Students can come back to the shared document to revise, re-use, or add information

Tools

  • Google Workspace (Google Docs, Sheets, Forms, & Slides)
  • Microsoft 365 (Word, Excel, PowerPoint, Teams)
  • Cornell Box (document storage)
  • Whiteboarding tools (Zoom, Jamboard, Miro, Mural, etc.)

Considerations

  • Sharing settings (see the sketch after this list)
  • Global access
  • Accessibility
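
The "sharing settings" consideration above can also be handled programmatically for courses that script their setup. The following is a minimal sketch, not a CTI recipe, using the Google Drive API v3 Python client; the credentials file, file ID, and the choice of "anyone with the link" access are assumptions you would adapt to your institution's policies.

```python
# Minimal sketch (assumptions noted): grant comment access to a shared Google Doc
# with the Google Drive API v3 Python client.
from google.oauth2.credentials import Credentials
from googleapiclient.discovery import build

creds = Credentials.from_authorized_user_file("token.json")  # hypothetical token file
drive = build("drive", "v3", credentials=creds)

FILE_ID = "YOUR_SHARED_DOC_ID"  # placeholder document ID

# Let anyone with the link comment; swap "anyone" for "domain" (plus a "domain"
# field) to restrict access to accounts at your institution.
drive.permissions().create(
    fileId=FILE_ID,
    body={"type": "anyone", "role": "commenter"},
).execute()
```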

Group discussions with video conferencing and chat

Ask students to post an answer to a question or share their thoughts about class content in the Zoom chat window (best for smaller classes). For large classes, ask students in Zoom breakout rooms to choose a group notetaker to post group discussion notes in the chat window after returning to the main class session.

You can also use a discussion board for asynchronous group work.

  • Students can post their reflections in real time and read/share responses
  • If group work is organized asynchronously, students can return to the discussion board on their own time

Synchronous group work:

  • Zoom breakout rooms
  • Microsoft Teams
  • Canvas Conferences
  • Canvas Group Discussions
  • Ed Discussion

Considerations

  • Stable WiFi access and sufficient bandwidth
  • Clear expectations about participation and pace for asynchronous discussion boards
  • Monitoring discussion boards

Group projects: creation

Students retrieve and synthesize information by collaborating with their peers to create something new: a written piece, an infographic, a piece of code, or a collective response to sample test questions.

  • Group projects may benefit from the features offered by a shared online space (ability to chat, hold video conferences, share files and links, post announcements and discussion threads, and build content)
  • Canvas Groups with all available tools

Setting up groups and group projects for success may require the following steps:

  • Introduce group or peer work early in the semester
  • Establish ground rules for participation
  • Plan for each step of group work
  • Explain how groups will function and how group work will be graded

Peer learning, critiquing, giving feedback

Students submit their first draft of an essay, research proposal, or a design, and the submitted work is distributed for peer review. If students work on a project in teams, they can check in with each other through a group member evaluation activity. Students can also build on each other’s knowledge and understanding of the topic in Zoom breakout room discussions or by sharing and responding in an online discussion board.

When providing feedback and critiquing, students must apply their knowledge and problem-solving skills and develop feedback literacy. Students also engage more deeply with the assignment requirements and/or the rubric.

  • FeedbackFruits Peer Review and Group Member Evaluation
  • Canvas Peer Review
  • Turnitin PeerMark
  • Zoom breakout rooms
  • Canvas Discussions and other discussion tools

Considerations

  • Peer review is a multistep activity and may require careful design and consideration of requirements to help students achieve the learning outcomes. The assignment requirements will inform which platform is best to use and the best settings for the assignment.
  • We advise making the first peer review activity a low-stakes assignment so students can get used to the platform and the flow.
  • A carefully written rubric helps guide students through the process of giving feedback and yields more constructive feedback.
  • It helps when the timing for the activity is generous, so students have enough time to first submit their work and then give feedback.

Group reflection & social annotation activities

Students can annotate, highlight, discuss, and collaborate on text documents, images, video, audio, and websites. Instructors can post guiding questions for students to respond to, and allow students to post their own questions to be answered by peers. This is a great reading activity leading up to an in-person discussion.

  • Posing discussion topics and/or questions for students to answer as they read a paper
  • Students can collaboratively read and annotate synchronously and asynchronously
  • Collaborative annotation helps students notice parts of the reading they might otherwise have overlooked
  • Annotating in small groups
  • FeedbackFruits Interactive Media (annotations on documents, video, and audio)

Considerations

  • Providing students with thorough instructions
  • These are all third-party tools, so the settings should be selected thoughtfully
  • Accessibility (Perusall)

Group learning with polling and team competitions

Instructors can poll students while they are in breakout rooms using Poll Everywhere. This activity is great for checking understanding and peer learning activities, as students will be able to discuss solutions.

  • Students can share their screen in a breakout room and/or answer questions together
  • This activity can be facilitated as a competition among teams
  • Poll Everywhere competitions, surveys, and polls facilitated in breakout rooms

Considerations

  • Careful construction of questions for students
  • Students may need to be taught how to answer online questions
  • Requires a reliable internet connection; response summaries may be delayed

More information

  • Group work & collaborative learning
  • Collaboration tools
  • Active learning
  • Active learning in online teaching

Leveraging Annotation Activities and Tools to Promote Collaborative Learning

Collaborative annotation activities support learning by encouraging students to learn with and from their peers. Research has shown that a collaborative learning environment can help strengthen student confidence, as well as foster their critical thinking skills and active engagement in learning. The following resource offers an overview of some of the benefits of collaborative annotation, as well as specific tools and sample activities to help facilitate this collaboration. 

On this page: 

  • Getting Started with Collaborative Annotation Activities
  • Tools to Support Collaborative Annotation Activities
  • Resources and References

The CTL is here to help! 

Want to incorporate annotation activities in your course? Trying to determine what platform might best support your pedagogical needs? The CTL is here to help – email [email protected] to schedule a 1-1 consultation!

Cite this resource: Columbia Center for Teaching and Learning (2022). Leveraging Annotation Activities and Tools to Promote Collaborative Learning. Retrieved [today’s date] from https://ctl.columbia.edu/resources-and-technology/resources/activities-tools-collaborative-learning/

Getting Started with Collaborative Annotation Activities 

Whatever the modality, we must remember that learning is a social process. A student does not learn alone (Garg & Dougherty, 2022). 

Research shows that learning is a social process – students benefit when they have opportunities to collaborate and learn both from and with each other (Garg & Dougherty, 2022; Laal & Ghodsi, 2012). The benefits of a collaborative learning environment span the social, psychological, and academic spheres, and range from helping students build confidence and social support systems, to helping students foster critical thinking skills and encouraging active engagement in their learning (Barkley et al., 2014; Laal & Ghodsi, 2012). To learn more about the benefits of collaborative learning, watch the following video from the CTL's MOOC "Inclusive Teaching: Supporting All Students in the College Classroom."

One powerful way to promote and encourage social learning is through collaborative annotation activities (e.g., active reading assignments). Collaborative annotation assignments can “promote high pre-class reading compliance, engagement, and conceptual understanding,” leading to deeper student interaction and engagement with course materials, while also helping instructors better gauge students’ understanding, comprehension, and engagement (Miller et al., 2018, p. 3). The table below offers several sample annotation activities that foster collaboration and engage students in active learning. All of the activities included below can be adapted for any classroom setting, with or without the use of additional tools. The following section will highlight tools that instructors can use to meet their specific learning goals. 

Tools to Support Collaborative Annotation Activities 

As with any classroom assignment or activity, it’s important for collaborative annotation activities to align with your course objectives and goals. Likewise, the tool you use for a given activity should align with the goals of the activity itself. The following section introduces several collaborative learning tools. 

Perusall

Perusall is a social annotation tool that engages students in reading as a communal activity. In doing so, it makes class reading a more interactive activity for students, and offers instructors tools to gauge student understanding and engagement. For example, instructors can use Perusall's "confusion report" to identify the most popular questions, as well as specific areas in the text where the most students are commenting and annotating; this report also categorizes questions into theme or topic areas. At the same time, Perusall helps students feel better prepared for class participation, as it encourages reading for comprehension rather than for completion. Students can interact with the text, their peers, and their instructor throughout the text using Perusall's commenting and chat features. In this way, Perusall promotes deeper student engagement with course content and with their peers.

Perusall supports a range of course materials and formats, including PDF documents, website pages, podcasts, videos, and more. Additionally, Perusall offers several accessibility features including Open Dyslexic font and built-in text to speech capability. Within Perusall, instructors and students alike have a great deal of flexibility and choice – instructors can assign group reading assignments, specify particular chapters or sections of a text, and pose questions throughout the text; students can upvote their peers’ contributions, ask additional questions, or join a threaded discussion on a particular area of the text. 

To see how Columbia faculty have found success using Perusall, see Dr. Weiping Wu's (Professor of Architecture, Planning, and Preservation and a 2021 Provost's Senior Faculty Teaching Scholar) CTL Voices submission Engaging Students Beyond the Classroom by Using Perusall.

Hypothesis

Hypothesis works on any webpage; it requires a browser plugin that allows users to layer their own highlights and comments over an existing webpage. Additionally, there is a process for annotating PDFs provided by instructors.

Because it overlays existing web pages, there are limitations to the kinds of materials students can annotate with Hypothesis. For example, Hypothesis does not allow for the annotation of individual videos at specific timestamps. Like Perusall, Hypothesis allows for threaded discussions, assigned annotation groups, and deeper engagement with course reading materials. Hypothesis allows students across course sections to annotate one shared document, whereas other tools (e.g., Perusall) require each course section to have its own copy. 
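
Because Hypothesis also exposes a public web API, instructors comfortable with a little scripting can pull a reading's annotations out of the platform, for example to skim participation before class. The snippet below is a minimal sketch against the Hypothesis search API; the reading URL, group ID, and token are placeholders, and an API token is only needed for private-group annotations.

```python
# Minimal sketch: fetch annotations on an assigned reading via the public
# Hypothesis search API (https://api.hypothes.is/api/search).
# The URI, group, and token below are placeholders, not real course values.
import requests

API_TOKEN = None  # set to a Hypothesis developer token to read private-group annotations

params = {
    "uri": "https://example.com/assigned-reading",  # hypothetical reading URL
    "limit": 100,                                   # annotations per request
}
headers = {"Authorization": f"Bearer {API_TOKEN}"} if API_TOKEN else {}

resp = requests.get("https://api.hypothes.is/api/search", params=params, headers=headers)
resp.raise_for_status()

for row in resp.json()["rows"]:
    # Each row carries the annotator, a timestamp, and the annotation body.
    print(row["user"], row["created"], "-", row["text"][:80])
```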

Mediathread

Mediathread is a tool developed by the Columbia Center for Teaching and Learning to support the collection, curation, and close analysis and annotation of a range of media texts. Used across disciplines and Schools at Columbia, Mediathread offers different ways for students to engage with media objects. Given its focus on media engagement, Mediathread helps students curate video collections, annotate at specific timestamps, and embed video clips within their responses to instructor-generated prompts.  For more information on Mediathread including different assignment types and how to get started, see the CTL’s resource on Mediathread . 

Google Suite

The Google suite includes Google Docs, Google Slides, and more; each of the tools in the Google suite can be used for collaborative annotation activities. For example, Google Docs are a great way to set up collaborative note-taking processes, as well as collaborative annotations or comments on a specific text or image. Similarly, the Q&A Feature in Google Slides allows students to annotate and interact with instructors in real time during a lecture. For more details on how to set up the Google Suite tool of your choice, see the CTL’s resource on Collaborative Learning . 

Zoom

The whiteboard and annotation features in Zoom are also helpful for setting up collaborative annotation spaces. These features allow for annotating images, documents, and more within a single shared screen. For more information about Zoom whiteboard and annotation features, see the following CTL video on setting up and engaging students in Zoom.

Tools Comparison Table 

The following table highlights several key features in a few of the tools described above. Please note, however, that the features included in the table are not exhaustive; in many instances, there are ways of adapting tools' features to meet a specific need. For additional support in choosing the right tool for your collaborative annotation activities, email the CTL at [email protected].

As you read this table, consider questions such as:

  • What features do I need to run the assignment?
  • How might the tool suit my course and activity goals?

Resources & References

Barkley, E. F., Major, C. H., & Cross, K. P. (2014). Collaborative learning techniques: A handbook for college faculty (2nd ed.). Jossey-Bass.

Columbia Center for Teaching and Learning. (2021). Collaborative Learning.

Garg, N., & Dougherty, K. D. (2022). Education surges when students learn together. Inside Higher Ed.

Miller, K., Lukoff, B., King, G., & Mazur, E. (2018). Use of a social annotation platform for pre-class reading assignments in a flipped introductory physics class. Frontiers in Education.



6 Remarkable Ideas for Meaningful Collaborative Annotations

  • October 20, 2021
  • AP Literature, English 11

As high school teachers, we are always looking for authentic and creative ways to engage our students.  And we love engagement that has sneaky outcomes as well.  Collaborative annotation is one of those tools we have that can really get kids thinking, while increasing the level of inquiry and understanding.


What are Collaborative Annotations?

Simply put, collaborative annotations, sometimes called social annotations, are a form of collaborative notes in the margins of a text created by a group of people reading the same text.  This is a great way to bring groups into close reading exercises in high school English classes.

Collaborative annotations can be done the “old fashioned way” with pen and paper or on digital platforms and there are so many ways that we can utilize this in the high school English classroom.

Pen & Paper or Digital Platform?

Social annotations completed using paper, pen, pencil, and highlighter, and those completed on digital platforms, all have their benefits.  There is something to be said for the physical interaction with the text that happens when you underline something and then put a note in the margins.  It is something I have been doing since I was in high school myself (which means we're talking about 30+ years of annotating text).  Once you add those notes, you are part of the text.

However, in the age of 1:1 and hybrid classes, digital annotation for collaborative close readings has an equally important place.  Using Google platforms like Docs and Jamboard allows you, as the teacher, more oversight while allowing students to connect even if they are not in the same room at the same time.

Either way, you cannot go wrong having your students complete some collaborative annotations.

So the next question I get is how big your groups should be.  And, of course, the answer to that question varies.  Really, it depends on the way you want to structure your students' interactions with the texts and each other.  You can work these annotations in groups with as few as two students or with the whole class.  I will give you suggestions for all of those scenarios below.


Ways to Have Students Participate in Collaborative Annotations

Student Pairs—Silent Seminars

If you are looking to stick to working with one other student, try Silent Seminars.  (This also works for groups of three.) In a silent seminar, students carry on a conversation without speaking out loud.  It’s good for days when you just need quiet or when not all of your students are in the same room.


Silent Seminar Set Up

Assign groups, determine the passage(s), give them a topic (if you wish), and let them start "talking."  This can be done very informally with a blank sheet of paper and a pen, or you can use a template that you set up in Google Slides or Google Docs.  Or you can do something more fun; for example, you can use characters like the Silent Seminar Tool pictured above.  Students then engage in discussing the text through writing in a back-and-forth manner.

I used silent seminars during my Hamlet unit last spring. Students focused their attention on revenge in Hamlet after we had done some close reading of the Act 3 soliloquies where both men focus on guilt and the purpose for their actions. Students who were physically in the room were paired with students who were at home.

Here is a fun template you can use for Silent Seminars .

Small Groups (3-6)—Pass-a-Passage Same Text

I love Pass-a-Passage Collaborative Annotations and I have used it with all levels of students.  This one works better in person (as my experience attempting it in a hybrid setting was all but a failure!).  Students each have the same text:  a poem, a short story (for my favorite short short stories, check out this post), or an excerpt.

You can give specific focus tasks or elements of author's craft you want the students to look for in this collaborative close reading exercise, and then they each take a turn writing on the other students' papers. Each person adds additional notes.

Small Groups (3-6)—Pass-a-Passage Different Texts

You can also do Pass-a-Passage Annotations with different texts.  You will need a variety of passages that are approximately the same length and for which the order of reading doesn't matter too much.  I like to use this for close reading with a longer text to pull out a specific theme or motif.  You will need enough passages for the number of students in the groups.  So if your groups will be three students, then you need three passages.

Students will follow the same procedure and set up regardless of whether they are reading the same text or different texts.

Pass-a-Passage Set Up

This works best if all of your groups are the same size.  Because attendance can be so variable, I will often wait until the students are in class to determine the size of groups.  This is one of those times when good old counting off to determine your groups can be ideal.

I love to jump in with a group to even things out when a class needs just one more person to make groups even.  So don't be afraid to be a participant in this process.  The kids love it when you do.

You will also need a timer for this (you can use your phone or something online that you can project on the screen).  Determine the amount of time you want to give for each round, then set the timer and have students begin adding notes to their own passages.  When time is up, they pass their paper to the person sitting to their right (or left; you can pick the direction).  Set the timer again and have students begin adding annotations to their group member's passage.  Keep going until you have gone through all members of the group.

Notes about Timing:  I like to give the first read more time.  Students will need more time to familiarize themselves with the passage and make notes.  I then reduce the time for the subsequent rounds.  You can reduce it each round or just once after the first round. If you are choosing to give each student a different passage, you will want to keep all the times even.

Writing Utensils

I like to have students "identify" their notes by using different colored writing utensils.  Having colored pencils, colored pens, or skinny markers on hand can ensure that you have enough variety.  But even if you only have a supply of blue, black, and red pens along with pencils, you will have four colors.  You can also have students write their names at the top of the page as they complete their round.  That way both you and the students know where specific annotations came from.

About the Annotations

Make it clear that students must add something each round.  This means that if someone "took" their annotation from a previous round, they will need to look for something new or develop the annotations that have already been made.  So if one student notes that something is a simile, the next student might make a note about what the author is attempting to do through the use of that simile, and the next student can take it a step further.  You might model this if you have another teacher in your classroom or even with a brave volunteer student.  Have the student identify something in the text and you add to the notation.

Be sure that students understand that just pointing out craft in the text doesn't take the close reading annotations far enough.  They should start to pose questions, discuss the impact of an author's moves, and develop theories.


Small Groups (2-4)—Poster Collaborative Annotations

This is another social annotation activity that has kids working on the same text, but instead of working on one text that they pass around, they are literally working on one text that has been enlarged to let students work on it together.

Poster Annotations Set Up

This works best if each group has their own text to work on first.  They can be working on different poems or excerpts from the same text.  The text should be enlarged and then attached to a piece of chart paper.  Students can then work on annotating specific things about a text like the syntax or the structure or the characterization right on that large format text.  Once students have completed the annotation, you can have a gallery walk followed by a full class discussion. 

I do a version of this in my Sonnet Group Annotation Assignment .

Mid to Large Groups—Walk Around Collaborative Annotations

This style of collaborative annotations is great for getting kids out of their seats.  Like the poster annotations, you will need large format text available for the students to add their annotation.

Walk Around Annotations Set Up

This works great for short short stories or poems, but definitely could work with excerpts from a larger work.  Again, you need to enlarge what you want students to annotate, but they should also have regular copies to work with as well.  Divide up the text into passages and then hang the enlarged copies around the room.  If you have a large class, you might want two sets of the same text or divide the passage into smaller chunks.

Have students begin by reading or rereading the passages and annotating on their own texts.  Then when they are ready, have them get up and walk around adding annotations to each passage.  Have poster markers ready because it makes it easier to read the annotations from a distance.  

When students have completed the annotation, you can have them walk around and review the annotations, add notes to their own text and prepare to discuss as a full class.  I use this walk-around time to check out what the students have said so that I can pull ideas to bring up in the discussion.

There is a version of this activity in my Flash Fiction Boot Camp Unit .


Mid to Large Groups—Digital Collaborative Annotations

There are several ways that you can have larger groups participate in digital collaborative annotations.  The first is through a digital whiteboard program like Jamboard.  You can read about all the ways you can use Jamboard in the High School English classroom here.

Digital Annotations Set Up

To use a digital whiteboard for social annotations, simply post the passage as the background on the slides of the digital whiteboard and share the link, making everyone an editor. Then have students use the tools in the program to begin making notes. If you use a program like Jamboard, remember that the annotations are anonymous, so you will want to have the kids add their names if you want to know who the author is.

Another way to do digital annotations would be through Google Docs or Google Slides.  Just share the passage in a link.  Make sure that students have the ability to edit.  And if you are using Slides, save an image of the text as the background so the kids can't move it.

Both Docs and Slides work well for different reasons.  In Slides, you can anchor the image of the text so that it cannot be changed, while in Docs you can use both the text tools and the comment tool.  Just like when working with paper, you will want to have kids choose a color if you want to be able to identify who the annotations belong to.

If you don’t want to share a Doc or link with the whole class, you can assign a text through Google Classroom Assignments, then assign group leaders and have them share with the rest of their group.

For more on using Jamboard in High School English Class, check out this post .

Digital Collaborations for the Win!

Using social annotations is a great way to have your kids truly engaged in a text and truly collaborating with each other to share their thinking. I have tried all of these in my classroom and they have all been successful in mixing it up and getting kids thinking and sharing their ideas about texts. Give it a try and let me know in the comments if you do and how you adapt it for your own use.

Related Resources

The perfect list of 20 Short Short Stories for AP Literature and more.

For more on using Jamboard in High School English Class .

Use Sonnets for Collaborative Annotations– read more about teaching Sonnets .



International Conference on Interactive Collaborative Learning

ICL 2023: Towards a Hybrid, Flexible and Socially Engaged Higher Education, pp. 84–89

Integrating Collaborative Annotation into Higher Education Courses for Social Engagement

  • Mark P. McCormack (ORCID: orcid.org/0009-0000-0281-3011) & John G. Keating, Maynooth University, Co. Kildare, Ireland
  • Conference paper
  • First Online: 01 February 2024


Part of the Lecture Notes in Networks and Systems book series (LNNS, volume 899)

Collaborative Annotation (CA) is a literacy strategy that engages students in critical reading, critical thinking, writing and collaboration all in one activity [1]. This collaboration amongst students promotes social engagement with course materials and has been shown to be beneficial to higher education by improving learning comprehension [2] and soft skills amongst students [3]. For our study, we will investigate the benefits that CA provides higher education courses by means of social engagement with boundary objects in assessment. We have designed several pedagogical pipelines which illustrate how to integrate Collaborative Annotation into several types of assignments. Our research is concerned with the impact CA has on students' quality of learning. This study aims to design pipelines to integrate Collaborative Annotation into several assessment contexts for social engagement.

Keywords: Collaborative Annotation, Online assessment, Social learning


References

1. Schwane, E.: Collaborative Annotation: For Any Text and Any Class. http://www.wcteonline.org/wp-content/uploads/2015/10/Collaborative-Annotation.pdf

2. Razon, S., Turner, J., Johnson, T.E., Arsal, G., Tenenbaum, G.: Effects of a collaborative annotation method on students' learning and learning-related motivation and affect. Comput. Hum. Behav. 28(2), 350–359 (2012)

3. England, T.K., Nagel, G.L., Salter, S.P.: Using collaborative learning to develop students' soft skills. J. Educ. Bus. 95(2), 106–114 (2020)

4. Dahal, N.: Understanding and uses of collaborative tools for online courses in higher education. Adv. Mobile Learn. Educ. Res. 2(2), 435–442 (2022)

5. Penny, L., Murphy, E.: Rubrics for designing and evaluating online asynchronous discussions. Br. J. Edu. Technol. 40(5), 804–820 (2009)

6. Wiggins, B.L., Eddy, S.L., Wener-Fligner, L., Freisem, K., Grunspan, D.Z., Theobald, E.J., Timbrook, J., Crowe, A.J.: ASPECT: a survey to assess student perspective of engagement in an active-learning classroom. CBE Life Sci. Educ. 16(2), ar32 (2017)


Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper.

McCormack, M.P., Keating, J.G. (2024). Integrating Collaborative Annotation into Higher Education Courses for Social Engagement. In: Auer, M.E., Cukierman, U.R., Vendrell Vidal, E., Tovar Caro, E. (eds) Towards a Hybrid, Flexible and Socially Engaged Higher Education. ICL 2023. Lecture Notes in Networks and Systems, vol 899. Springer, Cham. https://doi.org/10.1007/978-3-031-51979-6_9


DOI: https://doi.org/10.1007/978-3-031-51979-6_9

Published: 01 February 2024

Publisher: Springer, Cham

Print ISBN: 978-3-031-51978-9

Online ISBN: 978-3-031-51979-6



Instructional Continuity



Tip Sheet: Collaborative Annotation in Canvas using Hypothes.is

Hypothes.is is a collaborative online annotation tool that is now available in Canvas. The tool allows students to collaboratively annotate websites and PDF documents. With the Canvas integration, students do not need to create accounts and their annotations can automatically be seen through SpeedGrader if you set it up as an Assignment. You can also create an ungraded activity through Modules in Canvas.

Note: If you are scanning documents, please make sure that they are accessible using OCR technology . Additionally, in order to be used in Hypothesis, the PDF needs to be a document and not an image.


This tool is particularly useful for active reading assignments, collaborative research, textual analysis, and other engagement activities. The Hypothes.is for Education page has many great examples to find inspiration and ideas. For more assistance in integrating Hypothes.is into your course, please contact [email protected] .

Setting Up a Hypothes.is Assignment

1. Create a new assignment in Canvas.

2. Under "Submission Type", select "External Tool".


3. Type “Hypothesis” in the “Enter or find an External Tool URL” box and click “Find”


4. Click on Hypothesis from the options presented.


5. Select the type of document the students will be annotating. Choose from the following options:

  • a URL for a web page or PDF, 
  • a PDF file you have already uploaded to Canvas, 
  • or a PDF in your GU Google Drive.


6. Set up the Assignment to reflect your intended due date, how many points it is worth, etc.

If this is not a graded assignment, enter 0 for points. Note: If you choose "Not Graded" for "Display Grade as", you will not be able to add Hypothes.is to the assignment. Find more information on setting up Assignment parameters in Canvas.

7. When you click “Save and Publish,” you will need to authorize Hypothes.is to use Canvas to create accounts.


8. After your students have made their annotations, you can access individual students' annotations through the Canvas SpeedGrader.
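
If you manage many course sites, the same assignment can also be created through the Canvas REST API instead of clicking through the steps above. The sketch below is a hypothetical example, not part of this tip sheet: the Canvas host, course ID, API token, and especially the Hypothesis LTI launch URL are placeholders (the launch URL is institution-specific, which is why the web interface has you find the tool by name).

```python
# Minimal sketch (placeholder values throughout): create an external-tool
# (Hypothesis) assignment via the Canvas REST API.
import requests

CANVAS = "https://canvas.georgetown.edu"   # your Canvas host (assumption)
COURSE_ID = "12345"                        # placeholder course ID
HEADERS = {"Authorization": "Bearer YOUR_CANVAS_API_TOKEN"}

payload = {
    "assignment": {
        "name": "Collaborative annotation: Week 3 reading",
        "submission_types": ["external_tool"],
        "external_tool_tag_attributes": {
            "url": "HYPOTHESIS_LTI_LAUNCH_URL",  # institution-specific placeholder
            "new_tab": False,
        },
        "points_possible": 10,   # use 0 for an ungraded activity
        "published": False,      # publish after configuring the reading in the tool
    }
}

resp = requests.post(
    f"{CANVAS}/api/v1/courses/{COURSE_ID}/assignments",
    headers=HEADERS,
    json=payload,
)
resp.raise_for_status()
print("Created assignment", resp.json()["id"])
```

You would still open the new assignment once in the browser to choose the document and authorize Hypothes.is, as in steps 5–7 above.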

Setting Up Hypothes.is in Modules

1. Create a module in Canvas.

2. Under "Add Item" to the module, select "External Tool".


3. Select Hypothesis from the list of external tools.


4. Follow steps 5-7 listed above.

5. When you have completed steps 5–7, give the page a name and click "Add Item."


Using Hypothes.is

Tip: Provide clear guidance about what kinds of annotations you are expecting the students to provide in the Assignment description. As an instructor, you can also use the Hypothes.is tool within the Assignment to model the kinds of annotations you would like to see and to respond to students' comments directly.

1. When you click on the published assignment, you will see the following screen with directions:


2. To make an annotation, highlight any text in the document or on the web page with your cursor and select the “Annotate” button.


3. A text box will appear where the annotation can be typed. Annotations can include links to external websites, images, and styled text.


4. The name of the person doing the annotation will automatically appear.

5. Annotations can also be organized via "tags"; this may include identifying various parts of a text or other notational differences.

6. Once the annotation is complete, click the "Post to [class name]" button. You can also post the annotation privately by clicking the dropdown arrow after the "Post to" button and selecting "Only Me."

7. Anyone in the course will be able to create an annotation, see others' annotations, and respond to others' annotations using the "Reply" button.


8. Users can also edit or delete their annotations.

9. Look to see if there is a red "Show new/updated annotations" icon in the upper right corner. If so, click on it to load recently added or edited annotations.


For more assistance in integrating Hypothes.is into your course, please contact [email protected] . For other issues, please visit Hypothes.is help page .



EHE Distance Education and Learning Design

Hypothes.is: Social and Collaborative Annotation

What is Hypothes.is?

Hypothes.is is an online collaborative annotation tool. With Hypothesis you can leverage the power of social learning in your online course. Benefits of collaborative annotation include increased student understanding (Miller et al., 2016), intrinsic motivation (Dean & Schulten, 2015), and collective efficacy (Bandura, 2000).

Hypothes.is’ affordances include:

  • Students and teacher co-construct knowledge in the authentic environment of an assigned reading
  • Students and teacher learn through multimodal expression facilitated by annotations that can combine text, GIFs, and video
  • Teacher can model metacognition and other reading strategies by seeding an assigned text with annotations
  • Teacher can ask questions about important passages by seeding an assigned text with annotations
  • Students and teacher use the tool within CarmenCanvas
  • Teacher can choose to provide feedback to students via the SpeedGrader integration

How Does Hypothesis Work?

Hypothesis is a tool integrated into CarmenCanvas that allows the instructor to set up course readings (PDFs, websites, and EPUB files) so that students can annotate together. Instructors can set up readings and webpages in Carmen so that students can annotate, comment, and discuss in a shared space. Hypothesis readings can be set up in Carmen in two ways:

  • Assignment in Carmen (graded or ungraded)
  • Standalone page within a Module in Carmen (ungraded)

Setting Up Hypothesis in Carmen

First, begin by creating a new assignment in Carmen. If using a PDF reading, you will also need to add that PDF to the files in your Carmen course. Once you have an assignment you would like to add Hypothesis to, use the following steps:

1. From the Submission Type options, select External Tool.

Screenshot of the External Tool option in Submission Type drop-down menu.

2. Click Find, scroll down and select Hypothesis .

Screenshot of external tools in the drop-down menu.

3. Select the option to either Enter URL (for webpage) or Select PDF from Canvas .

Screenshot of option to "Enter URL of web page or PDF" or "Select PDF from Canvas."

4. Select your PDF or enter your URL location when prompted.

Screenshot of files to select PDF.

5. You will be returned to the same window illustrated in Step 2. The Hypothesis tool is now selected; click Select to confirm.

Screenshot of external tools in the drop-down menu.

6. As a confirmation, you will see the external tool window with the location of the tool and your selected reading pre-filled.

a. Select if the tool (Hypothesis) should load in a new tab or directly within the Carmen page.

Screenshot of URL ready to be added for annotation using Hypothes.is.

7. At this juncture you can set up the rest of your assignment: add instructions, points, grade display options, and due dates. Once ready, scroll to the bottom of the page and click Save. The page should reload, and you should now see the text students will annotate using Hypothesis (if you selected not to load the tool in a new tab). Don't forget to publish the assignment!
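
As a rough check that Hypothesis is actually installed in your Carmen course before building the assignment, you can list the course's external tools through the Canvas REST API. This is a hedged sketch, not DELD documentation; the host, course ID, and token are placeholders.

```python
# Minimal sketch (placeholder values): list a course's installed external tools
# (LTI apps) via the Canvas REST API to confirm Hypothesis is available.
import requests

CANVAS = "https://osu.instructure.com"     # your Carmen/Canvas host (assumption)
COURSE_ID = "12345"                        # placeholder course ID
HEADERS = {"Authorization": "Bearer YOUR_CANVAS_API_TOKEN"}

resp = requests.get(
    f"{CANVAS}/api/v1/courses/{COURSE_ID}/external_tools",
    headers=HEADERS,
    params={"per_page": 100},
)
resp.raise_for_status()

for tool in resp.json():
    # Print any installed tool whose name looks like the Hypothesis integration.
    if "hypothes" in tool["name"].lower():
        print("Found annotation tool:", tool["name"], tool.get("url") or tool.get("domain"))
```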

How can the DELD team help you?

The EHE Distance Education and Learning Design (DELD) team is ready to partner with instructors to implement Hypothesis in their course.

  • Assistance setting up Hypothesis
  • Social annotation ideas for your course
  • Group readings and collaboration

Contact the DELD Team through the service portal to set up a consultation.

Additional Resources

Hypothes.is’ website for educators  offers extensive guidance on integrating Hypothes.is into a course, including:

  • YouTube tutorial videos
  • Examples of classroom use
  • Webinars  about online collaborative annotation
  • DELD Workshop Recording: Introduction to Hypothesis

Bandura, A. (2000). Exercise of human agency through collective efficacy. Current Directions in Psychological Science, 9, 75-78. doi: 10.1111/1467-8721.00064​

Dean, J., & Schulten, K. (2015, November 12). Skills and strategies: Annotating to engage, analyze, connect and create. The New York Times. Retrieved from https://learning.blogs.nytimes.com/2015/11/12/skills-and-strategies-annotating-to-engage-analyze-connect-and-create/?_r=0

Miller, K., Zyto S., Karger, D., Yoo, J., & Mazur, E. (2016). Analysis of student engagement in an online annotation system in the context of a flipped introductory physics class. Physical Review Physics Education Research, 12, 1-12. DOI: 10.1103/PhysRevPhysEducRes.12.020143


Digital Toolkit


About Collaborative Annotation / Hypothesis

On this page:

  • Guides and documentation on collaborative annotation
  • CUNY collaborative annotation / Hypothesis projects
  • Collaborative annotation / Hypothesis accessibility
  • Questions on collaborative annotation / Hypothesis? Ask us



Hypothesis is an open-source annotation platform and browser extension that adds a conversation layer to (almost) the entire Internet. If you are teaching texts from a variety of different Internet sources, Hypothes.is offers a continuity of engagement and conversation across disparate environments.

Publishing Platforms with Social Annotations:

Some digital publishing platforms, like Manifold , have built-in annotation tools that allow professors to create private or public groups to comment on texts within the Manifold environment. These built-in tools are particularly useful if students do most of their reading in that environment or if they are building toward a publication project in that environment.

Collaborative annotation tools, sometimes called social annotation tools, offer a way for students to interact with a text, with a professor, with each other, and (in some cases) with a public audience. By incorporating these tools into low-stakes assignments, professors can model effective reading and annotating strategies, and students can:

  • Ask questions and get answers
  • Provoke conversation
  • Analyze textual as well as contextual material
  • Participate in informal scholarly activity
  • Practice respectful digital citizenship

Guides and documentation:

  • Hypothesis Quick Start Guide for Teachers
  • Hypothesis Quick Start Guide for Students
  • Teaching with Annotations in Manifold

Go to Baruch College project using Hypothes.is

The latest external review of Hypothesis functionality by the  Inclusive Design Research Centre (IDRC) concluded that the annotation client meets WCAG 2.1 Level AA Success Criteria set out by the W3C in the Web Content Accessibility Guidelines (WCAG) 2.1 . You can access the most recent Hypothesis accessibility conformance status in the latest Voluntary Product Accessibility Template . You can read more about Hypothesis's efforts toward greater inclusivity and accessibility in the public statement on Accessibility at Hypothesis .

Manifold Press is distributed software, and accessibility compliance can vary from instance to instance. However, the application is itself accessible and can render standards-compliant accessibility frameworks and metadata to assistive technology in the expected ways. You can learn more about Manifold's progress toward WCAG 2.1 AA compliance in the Accessibility statement on Manifold's GitHub repository.

  • Leila Walker Digital Scholarship Librarian, Assistant Professor, Research Services, Queens College
  • Last Updated: Jun 28, 2023 12:01 PM
  • URL: https://guides.cuny.edu/digital-toolkit



Original Research Article

Collaborative Online Annotation: Pedagogy, Assessment and Platform Comparisons


  • Harvard Medical School, Boston, MA, United States

Annotating a text while reading is commonplace and essentially as old as printed text itself. Collaborative online annotation platforms are enabling this process in new ways, turning reading from a solitary into a collective activity. The platforms provide a critical discussion forum for students and instructors that is directly content-linked, and can increase uptake of assigned reading. However, the student viewpoint regarding collaborative online annotation platforms remains largely unexplored, as do comparisons between annotation and traditional reading assessment methods, and comparisons between the two leading platforms (Hypothes.is vs. Perusall) for annotation by the same student population. The results in this study indicate that collaborative online annotation is largely preferred by students over a traditional reading assessment approach, that students regularly exceed annotation requirements indicated by an instructor, and that overall annotation quality increased as the students gained experience with the platforms. The data analysis in this study can serve as a practical exemplar for measurement of student annotation output, where baselines have yet to be established. These findings link the established research areas of peer learning, formative assessment, and asynchronous learning, with an emerging educational technology.

Introduction

Hovering over a text with a pencil, adding a sticky note to a page, or making digital document highlights and comments, are natural and familiar practices. Some readers feel that they are not giving a text their full attention without adding annotations ( O’Connell, 2012 ). Centuries-old text annotations feature drawings, critical explanation, corrections, and comments to other readers, at times exceeding the amount of primary text itself ( Wolfe and Neuwirth, 2001 ; Wolfe, 2002 ). An annotator might also leave memory prompts, questions, predictions, and connections to other work. For the consumer, the annotations of another student or scholar can be mined for insights that might go unappreciated if reading an unannotated text. Some students prefer second hand textbooks that have already been annotated by previous readers, for precisely this reason ( Van Dam, 1988 ; November, 2020 ). Wolfe and Neuwirth (2001) propose four main functions of annotation: to facilitate reading and later writing tasks by making self-directed annotations, to eavesdrop on the insights of other readers, to provide feedback to writers or promote communication with collaborators, and to call attention to topics and important passages. In Kalir and Garcia’s (2019) comprehensive work, annotation is defined broadly as a note added to a text, which can provide information, share commentary, express power, spark conversation, and aid learning. Accordingly, the online draft copy of their book includes annotations by other scholars that provide extended critical thoughts for any reader willing to consume them. Suggestions for annotations to be positioned as a third independent component of a text ( Bold and Wagstaff, 2017 ), prompt us to consider not only the medium and the message ( McLuhan, 1964 ), but also the marginalia ( Jackson, 2001 ) in all of our reading.

A lack of student attention to assigned reading can be problematic for teachers. In a study of multiple physics courses, only 37% of students regularly read the textbook, and less than 13% read often and before the relevant lecture was occurring ( Podolefsky and Finkelstein, 2006 ). This is in accord with a study of psychology courses, where a similarly low 28% of students did the assigned reading before it was covered in class ( Clump et al., 2004 ). The importance that professors attach to reading appears to be much higher than the importance attached by students. In a Business School study, only 4% of professors thought that a student could score an A or B grade without doing the assigned reading for a course, while 34% of the students thought they could do so ( Braguglia, 2006 ). Furthermore, only 20% of students identified “not having done the reading” as a reason to not participate in discussions ( Howard and Henney, 1998 ), so tutorial-based discussion may not be as strong of a motivator for reading as an instructor would like.

In addition to reading uptake problems, a student’s first experience reading primary literature (i.e., journal research articles, historical documents) can be challenging. They need to adjust to a format that is often less accommodating to the reader, and in the sciences, may have difficulty grasping technical details in experimental protocols and numerous data figures. It may also be greatly rewarding as students gain an appreciation for the structure of an inquiry and how it led to a particular finding, which is often absent when consuming information from a textbook ( Wenk and Tronsky, 2011 ). For article interpretation in the sciences, there is often a focus on the figures, on questions that elicit student confusion, and on questions that would be good follow-up experiments given the data in the article at hand. Approaches for humanities and social science primary source documents may have a distinct, but similarly critical focus. Reading guidance can be provided to students as fillable templates containing thought-prompts from an instructor, an approach that has been repeatedly covered as a beneficial learning scaffold [see Create framework ( Hoskins et al., 2011 ), Figure facts template ( Round and Campbell, 2013 ), and the templates of others ( Wenk and Tronsky, 2011 ; Yeong, 2015 )]. The templates also relate to the process of Just in Time Teaching ( Marrs and Novak, 2004 ), where student misconceptions about a particular reading can be obtained via a template or pre-class questions, so they can be adequately aired and addressed during a subsequent in-person session. Annotation can also be seen as an aid in primary literature comprehension with the Science in the Classroom approach ( Kararo and McCartney, 2019 ), which uses papers pre-annotated by scientific professionals that define key technical terms, highlight previous work, and include news and policy links, in “lenses” that can be toggled on and off.

Distinct from pre-annotated papers is a social or collaborative online annotation approach where the students and instructors input commentary themselves to illuminate and debate various aspects of a text. Two of the leading collaborative online annotation platforms in current use are Hypothes.is and Perusall. Hypothes.is aims "to enable a conversation over the world's knowledge," while Perusall positions itself as "the only truly social e-reader," with "every student prepared for every class." Since educational technology platforms can change their interfaces and features regularly, the platform websites will contain the most up-to-date and complete information on functionality, usage, and implementation tips. Additional guidance for Perusall may be found in King (2016). Both platforms enable collaborative online annotation of a text. Any part of a text that is highlighted is then linked to a marginal comment box, which can include not only commentary, but also tags, diagrams, and hyperlinks. Both platforms also support LaTeX for annotating with mathematical notation. Collaborative annotation is possible with any type of material that can be found as a webpage for Hypothes.is, or uploaded as a PDF for Perusall. Textbook material could be annotated if in an online eBook for Hypothes.is, or from Perusall's catalogue of available textbooks. The two platforms differ in how they are accessed by students, user interface, social functionality, potential audience participation size, and annotation machine learning measurement capabilities.

In contrast to discussion forums that might appear on a learning management system, annotation platform discussions are grounded at a specific place within the document (highlighted word, sentence, paragraph, or figure region) rather than from a description of a figure or reference to a paragraph that is needed to establish context in a discussion forum post. Grounding in a primary document reduces the number of explicit references needed in order for comments to be understood ( Honeycutt, 2001 ). If the source text is absent and not connected to a discussion, participants have to reconstruct the context, which has been referred to as communication overhead ( Weng and Gennari, 2004 ).

Instructional goals in collaborative annotation may change according to the source material. One might expect collaborative annotation of textbook material to have a stronger focus on understanding the fundamental knowledge that the book provides. Annotating research articles may allow for additional goals that build on fundamental knowledge and consider the structure of an inquiry, along with its implications and possible pitfalls.

Prior work on collaborative online annotation ( Miller et al., 2018 ; Lee et al., 2019 ) positions the research alongside three theoretical frameworks: Peer Instruction ( Fagen et al., 2002 ), where students collaboratively solve problems, often focusing on common areas of misconception; Student-Centered Open Learning Environments ( Hannafin et al., 2014 ), where students negotiate complex, open-ended problems in a largely independent manner, with web resources and technology tools to complement sense-making; and Social Constructivism ( Vygotsky, 1978 ; Adams, 2006 ), where cognitive functions originate in social interactions and learners integrate into a knowledge community. All three frameworks hold students’ prior experiences and the co-construction of knowledge in high regard.

Theory developed in the formative assessment field also connects naturally to collaborative online annotation. For students to close in on a desired landmark skill for a course, they need to know what they are aiming for and what good practice of that skill looks like ( Sadler, 1989 ). In collaborative online annotation, student thoughts about a text are out in the open. Another student’s reasoning and sense-making on a difficult article can be compared, and the instructor’s reasoning is also there to serve as a model for what criticism in an academic discipline looks like. For this reason, collaborative annotation has been suggested as a signature pedagogy for literary criticism courses, as it embodies the routines and value commitments of that field ( Clapp et al., 2020 ); the sciences and social sciences can surely follow suit. The timing of feedback, another possible weak point in the assessment process, is also addressed: feedback arrives in a steady flow in collaborative annotation, as a text is read and analyzed by the instructor and students within a defined time window (less than 1 week in the current study), and a student can expect threads to build and their annotations to be commented upon within hours to days, occasionally even in real time if multiple students are active on the platform simultaneously. Peer-to-peer exchanges may also decrease some of the instructor's workload for feedback provision. Participation norms that used to focus on raised hands in a lecture hall are shifting to other forms of participation in a modern, technology-enhanced classroom ( Jackson et al., 2018 ), if they have not already done so. Asynchronous teaching tools have become increasingly important with abrupt transitions to blended and fully online learning environments during a viral pandemic, and they can be a welcome remedy for time zone and other technical issues that affect synchronous teaching.

Annotation as an aid for learning has prior support in various settings, both with and without technological scaffolding. In a pen-and-paper setting where students were trained in effective textbook annotation routines by their instructors, annotation outperformed a non-annotation control condition on later test performance and self-reported studying efficiency ( Simpson and Nist, 1990 ). In a setting where instructors added marginal notes to course readings, the notes were overwhelmingly affirmed by students as a helpful study aid, and missed when they were absent from other, non-annotated course readings ( Duchastel and Chen, 1980 ). In a collaborative synchronous annotation setting using Google Docs in English literature classes, annotation was viewed as a technique that allowed instructors to effectively highlight what good performance in literary analysis looks like, and students also felt greatly aided in understanding a given text by reading the annotations of others ( Clapp et al., 2020 ). Collaborative annotation with Hypothes.is facilitated “close reading” of difficult texts ( Kennedy, 2016 ). Perusall provided a stimulus for reading uptake: 90–95% of students completed all but a few of the reading assignments before class, and they also performed better on an exam than students who took the same class without using Perusall ( Miller et al., 2018 ). Novak et al. (2012) provide an excellent review of research on social annotation platforms; however, many of the platforms they analyzed have relatively small user bases or are now defunct. Ghadirian et al. (2018) review social annotation tools and suggest that prior research has failed to capture students’ experiences while participating in social annotation activities, and that understanding of how to implement social annotation in disciplines outside of education and computer science is lacking. Wolfe and Neuwirth (2001) pointed to the absence of studies that solicit participants’ impressions of the technological environment during collaborative annotation. These gaps, coupled with the emergence of Hypothes.is and Perusall as key platforms, should drive new qualitative and quantitative investigation of collaborative online annotation.

There have been no published comparisons of student output and usage preferences between the two leading online annotation platforms, nor direct comparisons of annotation platforms to more traditional classroom assessment techniques such as reading templates, for the same type of content with the same population of students. Furthermore, the student viewpoint regarding collaborative online annotation remains relatively unexplored in prior publications, and pedagogical best practices are still emerging. Instructors and students familiar with more than one annotation platform are well-positioned to provide feedback on the annotation process as a whole. Establishing quantitative baselines for student output on an annotation platform will hold value for instructors to gauge activity in their own classes. To address the above gaps, and situate online annotation platforms for better use in the classroom, this study posed the following research questions:

1. Qualitatively, from the student viewpoint:

a. How do collaborative online annotation platforms compare to a more traditional templated assessment method for the same type of reading content?

b. How do the two leading collaborative online annotation platforms (Hypothes.is and Perusall) compare to each other?

2. Quantitatively, how do Hypothes.is and Perusall compare on student output for the following measures:

a. Number of annotations made per student per paper?

b. Character volume of a student’s annotations per paper?

c. Annotation content quality?

d. Percentage of isolated vs. collaborative threaded annotations?

Also captured are changes over time in the quantitative measures as students proceed through successive paper analyses on a platform and then move on to the next platform. The answers to these questions are further considered in order to shape more effective pedagogy for collaborative annotation. The educational technology, peer learning, and assessment fields stand to gain valuable insight from readers’ responses to dynamically annotated text.

Research Methods

Participant Details and Overall Workflow

The study took place with first-year Master’s students at a university in the northeastern United States, in a course focused on the analysis of scientific research papers. Two student cohorts participated: a 2019–2020 cohort of 18 students, and a 2020–2021 cohort of 21 students. Synchronous class sessions were held in person for the 2019–2020 cohort during the months that the template completion and annotation activities were underway, and were held virtually for the 2020–2021 cohort. In both cohorts, annotation and template completion were done by all students on their own time, asynchronously, outside of any synchronous class sessions. Figure 1 shows the paper analysis routine of students occurring under three different conditions: a traditional assessment template, the Hypothes.is annotation platform, and the Perusall annotation platform. One week was allotted for each paper analysis. Both the 2019–2020 and 2020–2021 cohorts used the traditional template first, as it provided an established scaffold for beginners in paper analysis. After completing four assigned papers with the traditional template, the students then used collaborative online annotation for another eight papers. With the 2019–2020 cohort, the Hypothes.is platform (four papers) was used first, followed by Perusall (four papers). The platform order was reversed with the 2020–2021 cohort: Perusall first, and Hypothes.is second. As such, the 2019–2020 cohort analyzed articles A, B, C, D via the traditional template, articles E, F, G, H with Hypothes.is, and articles I, J, K, L with Perusall; the 2020–2021 cohort analyzed articles M, N, O, P with the traditional template, articles Q, R, S, T with Perusall, and articles U, V, W, X with Hypothes.is. The bibliography for all articles A to X is available in Supplementary Table 1 . All papers were recently published (2017–2020) biomedical science research journal articles, deemed to be roughly equivalent in scope and difficulty by the instructor (GWP). The research proposal was reviewed by the Harvard Human Research Protection Program and received the lowest risk categorization. Aid in the informed consent process for students was provided by a program administrator. All student data in this study have been anonymized.


Figure 1. Study design and student reading assessment overview. Each student cohort analyzed four papers through a traditional reading assessment template, and four papers through each of the two collaborative online annotation platforms. This enabled two comparisons: the traditional template vs. annotation (as reading assessment methods), and Hypothes.is vs. Perusall (as annotation platforms). Full template prompts are available in section “Research Methods,” and the template is also available for download from Supplementary Material . Figure created with Biorender.com .

Research on technological and pedagogical innovations in student-centered open learning environments is thought to be best positioned within authentic classroom settings ( Hannafin et al., 2014 ). This study follows an action research approach in education, as it is instructor-initiated and focused on practical elements of classroom function and their future improvement ( McMillan, 2015 ; Beck, 2017 ). With its template vs. annotation and Hypothes.is vs. Perusall comparisons, this study also invokes A/B testing. A/B testing has grown in popularity as a research method not only in massive open online courses ( Chen et al., 2016 ; Renz et al., 2016 ), but also when testing two instructional approaches with the same or similar student populations, such as comparing two versions of open eTextbooks for readability and user perceptions of quality ( Kimmons, 2021 ), comparing two learning management systems for accessing the same course content ( Porter, 2013 ), or comparing different question formats for impacts on learning ( Van Campenhout et al., 2021 ). Open source A/B testing platforms for education have recently been embraced by major philanthropic foundations ( Carnegie Learning, 2020 ) as a way to aid decision-making surrounding educational practices.

Traditional Assessment Template

In the traditional reading assessment template, for each of the assigned papers, the students filled in the following information/answered the following questions:

• The dates that the paper was read.

• Can you think of an alternative/improved title for the paper?

• What were the first 5 terms that you had to Google? [give a 1–2 sentence description for each].

• What questions do you have related to understanding the paper?

• What questions do you have that could serve as future experiments?

• What other papers could have helped with the understanding of the current paper? ( a means to indicate reading breadth around a particular research topic; students give references to these papers and brief summarizing information on the template ).

• Analyze all figures regarding:

◦ What technique(s) is(are) being used?

◦ What is the main purpose of the figure/what did the researchers find?

This template was considered to be an established form of assessment in a course focused on improving the understanding of primary scientific literature, and is similar to other reading templates referenced in the introduction in that it focuses on figure interpretation, airing student understanding/misunderstanding, and possible future lines of inquiry. It is also included in Supplementary Material , for any instructor to use or adapt.

Annotation Platform Usage

Students were briefed regarding online annotation platform usage and made simple trial annotations when the platforms were first introduced, in order to ensure they could access the articles, highlight and then annotate a piece of text. None of the trial annotations were counted as student output. Examples of annotation from previous students were shared in the briefing so the current students could envision what a collective annotation process entailed. The examples included students posing and answering questions, commenting on annotations of others to agree/disagree/add nuance, adding links to other articles to aid in comprehension, defining key terms, adding diagrams, adding tags, pointing out shortcomings or missing controls in the article, and suggesting future lines of inquiry.

Students were given a guideline of making five annotations per paper, along with a rubric from Miller et al. (2016) that the instructor also followed when grading individual annotations. Each annotation was scored 0, 1, or 2 for quality (0 = no demonstration of thoughtful reading of the text; 1 = demonstrates reading of the text, but only superficial interpretation; 2 = demonstrates thorough and thoughtful reading and insightful interpretation of the text). There were no pre-existing annotations made by the instructor, so all the annotations were initiated by the students. However, the instructor did occasionally participate in annotation threads to answer a question, clear up a misconception, etc., as would be a normal occurrence outside of a research setting. When classifying threaded vs. isolated annotations, instructor comments in threads were excluded. For example, if a thread of six annotations had two instructor annotations and four student annotations, the length would be counted as four, and those four annotations would be considered part of a thread. An annotation with no other student additions is counted as isolated. An annotation with one instructor addition is still counted as isolated if there is no student follow-up after the instructor addition.
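
As an illustration of the counting rule above, the following Python sketch classifies annotations as isolated or threaded while excluding instructor comments from thread length. This is not part of the study’s workflow; the data layout and field names ('thread_id', 'author_role') are hypothetical.

```python
from collections import defaultdict

def classify_annotations(annotations):
    """Classify student annotations as isolated or threaded.

    Follows the counting rule described above: instructor comments are
    excluded from thread length, so annotations only count as threaded
    when a thread contains two or more *student* annotations.

    `annotations` is assumed to be a list of dicts with hypothetical keys
    'thread_id' and 'author_role' ('student' or 'instructor').
    """
    students_per_thread = defaultdict(int)
    for a in annotations:
        if a["author_role"] == "student":
            students_per_thread[a["thread_id"]] += 1

    isolated = sum(n for n in students_per_thread.values() if n == 1)
    threaded = sum(n for n in students_per_thread.values() if n >= 2)
    return isolated, threaded


# Thread 1: one student annotation answered only by the instructor -> isolated.
# Thread 2: two student annotations -> both count as threaded.
example = [
    {"thread_id": 1, "author_role": "student"},
    {"thread_id": 1, "author_role": "instructor"},
    {"thread_id": 2, "author_role": "student"},
    {"thread_id": 2, "author_role": "student"},
]
print(classify_annotations(example))  # (1, 2)
```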

In prior studies using student annotation, some instructors gave a weighting of 6% per annotated article ( Lee et al., 2019 ), or 15% of an overall course grade in an undergraduate physics course ( Miller et al., 2016 ). In prior studies using templates, analysis of a paper via the Figure Facts template counted 10% for each paper analyzed ( Round and Campbell, 2013 ). Since the students were expending a considerable amount of effort in reading long and technically challenging papers, the assessment of each paper in this study, either via traditional template or by annotation, carried a 10% weighting in the final course grade.

To sum up the major attributes of the annotation process in this study according to an existing typology of social annotation exercises ( Clapp et al., 2020 ), the annotations were asynchronous as opposed to synchronous (students annotated on their own time); unprompted as opposed to prompted (other than the number of annotations [five] and the shared grading rubric, no specific requests were placed on annotation content); authored as opposed to anonymous (students knew the identity of their classmates making the annotations, and could use the identity in @call outs); and finally, marked as opposed to unmarked (the annotations counted toward course credit). This typology can serve as a useful comparative tool for future collaborative annotation research.

Distinctions Between the Annotation Platforms

The students accessed the Hypothes.is platform as a web browser plug-in. URLs to all the articles for Hypothes.is annotation were given to students through a learning management system module. Although Hypothes.is annotations have the potential for an internet-wide audience, the class grouping for Hypothes.is limited annotation to only the students of the class and the instructor. Public visitors to a particular article’s URL could not access the student annotations because they were not part of the student group. The students accessed Perusall as a stand-alone online platform with a code given by the instructor. Again, the annotations were only available among the students of the course and the instructor. Perusall annotations are generally limited to a course or subgroup within a course.

When this study took place, one could annotate text or part of a figure with Perusall, but only text annotation was possible with Hypothes.is. Perusall also included social functions such as student avatars, which indicate when someone else (student or instructor) is using the platform at the same time; the ability to “upvote” an annotation (express agreement or support); automatic labeling of annotations phrased as questions; emoji icons; @student call outs, which alert someone that they have been mentioned in an annotation; and email notifications for annotation responses. Hypothes.is included tagging and email notification of annotation responses, but did not have the other social-type functionality. Perusall has an additional machine learning capability for grading annotation output in large-enrollment classes, as well as a “confusion report” to assess major areas of student confusion, but these were not used and thus not evaluated in the current study.

Survey Questions for Annotation Platform Comparison, and Annotation vs. Traditional Template Comparison

At the end of the academic year, students were given a voluntary, anonymous survey prompting comparison of the collaborative online annotation process to the traditional reading assessment template, and comparison of Hypothes.is to Perusall. For the 2019–2020 cohort, the survey completion rate was 15 out of 18 students. For the 2020–2021 cohort, the survey completion rate was 19 out of 21 students. The overall survey completion rate for all participants was 34/39, or 87%. Survey questions were as follows:

1. Compared to the MS word templated approach, did you find the annotation platform a better or worse tool for your learning of the biology content and experimental procedures in each paper?

2. Which annotation platform, Hypothes.is or Perusall, did you prefer, and why?

3. What did you like about the Hypothes.is platform?

4. What did you dislike about the Hypothes.is platform?

5. What did you like about the Perusall platform?

6. What did you dislike about the Perusall platform?

7. Did you feel that the guideline of 5 annotations per week, with the supplied rubric, was enough guidance in the annotation process? Why or why not?

8. Identify a useful annotation that you came across. What was it about the annotation that made it useful?

9. Identify a useless annotation that you came across. What was it about the annotation that made it useless?

10. How could the annotation platforms and related teaching and learning processes be improved (i.e., features, workflow, teacher prompts, etc.)?

The survey data is available to the reader in full in Supplementary Table 2 , as an unedited student-by-question matrix ( Kuckartz, 2014 ). Categorization of responses for the first two survey items was straightforward, falling into only three categories (Question 1—annotation preferred, template preferred, or no clear preference; Question 2—Hypothes.is preferred, Perusall preferred, or no clear preference). More than three categories were needed to adequately summarize responses for items 3–10, and owing to space constraints in this manuscript, those can be found in Supplementary Table 3 . A few responses were uncategorizable, and occasionally, some questions were left blank by a student. Representative responses for each survey question are included in the body of the paper, with some occasional light editing for clarity. The words “article” and “paper” are used interchangeably throughout.

Annotation Output Analysis and Figure Generation

Quantitative student annotation output measurements included:

i. The number of annotations made per student per paper (how many times does a student annotate?)

ii. The annotation character volume per student per paper (how much do students contribute in their body of annotations for a given paper?)

iii. The annotation character volume per student per annotation per paper (how much do students contribute in each individual annotation?)

iv. The individual annotation quality, as assessed by the course instructor according to the rubric of Miller et al. (2016)

v. Whether annotations were isolated (defined as one solitary annotation) or part of a collaborative thread (defined as two or more annotations)

Anonymized student annotations are available on a student-by-student basis and on a threaded basis for each cohort and platform as Excel files in the Harvard Dataverse. The anonymization replaced names with numbered student labels (student 1, student 2, student 19, etc.). Annotation of one paper was missed by two students (student 19, paper V; student 33, paper Q) in the 2020–2021 cohort due to excused medical absences, so means for those students are calculated with an accordingly adjusted denominator. Character counts were taken for annotations as they appeared, with the name substitutions. If a student typed a URL or DOI into their annotation, it is included in the character count. If a student included a hyperlink in their annotation, the URL was extracted and placed in a separate column in the Excel analysis file, but not counted toward the character length. This approach preserves the links to other resources made by students, but treats the annotation content with as little manipulation as possible. Repeating the character count analyses with URL and DOI text excluded did not affect any conclusions regarding platform differences (or lack of differences) in annotation output character volumes. Emoji characters in annotations have also been preserved, but were used sparingly by students. Data analysis was performed using a combination of MS Excel and GraphPad Prism 9 software, and figures were generated using Biorender.com ( Figure 1 ), MS Word ( Table 1 and Supplementary Tables 1 – 3 ), or GraphPad Prism ( Figures 2 – 5 ). The number of observations for (i–iv) depended on student cohort size: 2019–2020 cohort ( n = 18), 2020–2021 cohort ( n = 21), or the two cohorts combined ( n = 39). T-tests for comparing means are paired (Hypothes.is mean vs. Perusall mean), and p values are indicated on the graphs or bar charts. For threaded annotation observations, a threaded vs. isolated percentage was measured for annotation output on each paper; thus there are only four observations (four papers) for a paired t-test within a cohort to compare platforms. Readers should consider whether the difference in means, or data trends in the charts, could be pedagogically significant in their classrooms, along with any consideration of the mean comparison p value. No comparisons for annotation counts, character counts, or annotation quality were undertaken between specific papers; rather, the analysis focused on differences between the two annotation platforms. The effect of an individual paper is indeed diluted, as means across four papers for each student within each platform were used to obtain annotation number, character volume, and annotation quality scores, which then fed into the comparison of the platforms.
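
For readers who want to reproduce this style of bookkeeping on their own annotation exports, the following Python sketch illustrates a character-volume tally and a paired platform comparison along the lines described above. It is an illustration only: the study itself used MS Excel and GraphPad Prism, and the data structure, field names, and numbers here are hypothetical.

```python
from statistics import mean
from scipy.stats import ttest_rel  # paired t-test

def annotation_char_count(annotation):
    """Character count for one annotation.

    Assumes each annotation is a dict with hypothetical keys:
    'text'  - the typed content (URLs/DOIs typed by the student are part
              of the text and therefore counted), and
    'links' - embedded hyperlink URLs, stored separately and not counted.
    """
    return len(annotation["text"])

def mean_volume_per_paper(papers):
    """Mean total character volume across papers for one student.

    `papers` maps a paper ID to the list of that student's annotations.
    """
    return mean(sum(annotation_char_count(a) for a in notes)
                for notes in papers.values())

# Illustrative per-student means (four papers on each platform); a paired
# t-test compares the same students across the two platforms.
perusall_means = [3300, 3000, 2800, 3500]
hypothesis_means = [2500, 3100, 2200, 2900]
stat, p = ttest_rel(perusall_means, hypothesis_means)
print(f"paired t = {stat:.2f}, p = {p:.3f}")
```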


Table 1. Student survey responses regarding assessment approach and platform preferences.


Figure 2. Most students exceed the instructor-stipulated annotation requirement. (A) Each data point represents the number of annotations made by a student for each paper. The X-axis is organized to group students together according to platform output means being higher on Hypothes.is, or Perusall, or equal. Lines next to data points indicate standard error of the mean (SEM) of the measurement for each student. Dashed lines indicate the global means for all students, or the minimum stipulated number (5). (B) 2019–2020 cohort (18 students), and (C) 2020–2021 cohort (21 students) mean number of annotations per student (from the four papers on each platform), along with pairing (gray lines) to indicate an individual student’s output on each platform. To the right of each bar chart is a timeline tracking the mean number of annotations per student from the first to the eighth paper annotated, error bars: SEM.

Student Survey Responses

Key Comparisons: Template vs. Annotation, Hypothes.is vs. Perusall

Students strongly favored the collaborative online annotation process compared to the traditional paper analysis template ( Table 1 , Q1). 25/34 students (∼74%) felt that the online annotation process was a better content-learning tool compared to the traditional template. Only 6/34 students (∼18%) preferred the template, and 3/34 (∼9%) students had a mixed response with no clear preference for either process.

Those in favor of the online annotation approach indicated that looking through the annotations brought new insights based on the thinking of others, and enabled interaction that was not possible with the traditional reading template.

The annotation platforms were a better tool than the template approach. Having to read through it and analyze it myself, and then re-synthesize it with other people’s comments forced me to go back to the paper more than once and dive in.

Annotation was much better than the templates. Promotes critical thinking and importantly, discussion. With the templates I would never even think about some of the things my classmates bring up.

The power of the annotation platform lays in its capacity to serve as a collective real-time inter-phase in which one can comment, review, and interact with other students. This enables a deeper conversation with respect to questions, concerns, or the analysis of a particular piece of discussion, figure, experimental methodology, and is as a result superior to conventional note-taking which is static by nature.

I thought that the annotation platforms were a lot more helpful because I could see what other students were saying and it wasn’t just my ideas. I felt like I did a lot more thinking when I read the threads of other students.

Those in favor of the traditional template approach felt that it prompted a more complete and thorough analysis of the paper, because each figure had to be analyzed.

I personally preferred the templated approach, although it was more difficult and took up significantly more time. It caused me to examine each figure in a lot more detail. With the annotation platforms, it was much easier to “slack off.”

I think the template was better. It gave me a framework for how I’m supposed to learn from and critique a paper. I still follow the template even when I have to use the annotation platform.

Preference between the platforms had a relatively even split ( Table 1 , Q2); 14/34 students (∼41%) preferred Perusall, 12/34 students (∼35%) preferred Hypothes.is, and 8/34 students (∼24%) indicated no clear preference for either platform.

Remaining Qualitative Survey Data

Specifics for Platform Preference: Commentary on Hypothes.is

Students commented favorably on Hypothes.is regarding the simplicity of its annotation window, the overall reading experience with the article being annotated, and fewer log-in prompts.

I liked Hypothes.is because of its inherent simplicity. You annotate and/or highlight a particular section, can see any replies immediately underneath the annotation, and can in turn can click within the text or within the annotation to go back and forth between the text and comment of interest.

Hypothes.is, because it’s much easier to locate the annotations. In Perusall we have to click on each highlight to navigate to the specific annotation.

Hypothes.is was easy to see all the threads and I didn’t have to login every time I wanted to access it.

I liked in Hypothes.is how hierarchy can be established within a thread of comments when necessary. You can reply to any comment in a particular thread and the system will make it clear which comment you are responding to by adding an indentation before your reply. That makes annotations very neat and organized.

Students disliked Hypothes.is for the inability to annotate figures and the lack of an upvote button. Some viewed the plug-in nature of Hypothes.is as a positive, because it brought them directly to the paper’s URL, while others viewed it as a negative because they were accessing Hypothes.is through Google Chrome, which was not their preferred browser. Since this data was collected, Hypothes.is appears to have added support for all browsers with the exception of Internet Explorer.

Hypothes.is has fewer functions in the comment section than Perusall.

I wish Hypothes.is had an upvote/question button for my peers’ responses.

I definitely preferred the website nature of Perusall over the plugin of Hypothes.is.

It is less interactive than Perusall, like marking annotation as helpful or marks it as when I have the same question.

There was not a graph annotation function like Perusall, which made it more difficult to annotate figures. The plugin format of Hypothes.is was a bit hard to figure out at times.

I don’t see an option to upvote or mark my favorite annotations. I once clicked on the flag button at the bottom of someone’s annotation and I then realized that it reported the annotation to the moderator, as if there is something inappropriate. That’s not what I meant to do and I couldn’t undo the “reporting.” That was pretty awful.

I do not like how I have to download it and use it as a Google Chrome (not my favorite browser) extension. I also dislike how I cannot label the figures directly—can only highlight text.

Specifics for Platform Preference: Commentary on Perusall

Students favored Perusall for its ability to annotate images, annotation up-voting, its more social feel and appearance, seeing the presence of other students, and the ease of access via one online platform containing all of the course article PDFs.

I preferred Perusall over Hypothes.is. It seems like a more user-friendly platform, it allows inserting images (and emojis!) and has good filter functions (e.g., for unread replies, comments by instructor etc.).

Perusall seems to be a more well-polished annotation tool. You can temporarily hide other people’s annotation and focus on the paper only, which gives you a cleaner environment.

Preferred Perusall because it is easier for me to access the papers. With Hypothes.is, I often have to switch from Safari to Chrome or vice versa before it lets me view the paper. Also, with Perusall, I get to draw a highlighted box for annotating figures, with Hypothes.is, I can only highlight text.

I personally preferred Perusall because we can annotate on a figure by highlighting the image and also upvote the threads we think are particularly good.

I like the organization of the Perusall platform, specifically the page by page conversation tracking as well as the up-voting feature.

Perusall’s interface reminds me of social media a bit more than Hypothes.is, which is at least refreshing when going through what could be difficult material.

There is no confusion about what link to follow to annotate the paper because it’s already uploaded. It’s easy to see each person’s annotations because they’re color coded and you also get to see who’s online. You can react to people’s annotations.

Student dislikes of Perusall included problems with highlighting text, frequent sign-ins when accessing the platform, occasional inability to read both the paper and the annotations simultaneously on their screens, and not being able to use the tool themselves for independent study groups.

I always had issues highlighting discrete pieces of the text, whereby my highlighter function would inadvertently highlight an entire page but not allow me to highlight a particular sentence or words.

It’s quite difficult to see the content of annotation without clicking on the highlight. Also can’t download the paper directly from the Perusall platform.

It was hard to have the comments section open and be zoomed in on the paper enough to read it. Additionally, there were a few times where comments would get put on top of each other by different students and the comment that was placed first was not seen.

One suggestion I would make is that I hope I can upload papers myself and set up study groups. I tried to discuss one of the papers with just a small group of students but unfortunately I could not do that.

The final comment is suggestive of the utility of collaborative annotation outside of an instructor-guided setting (e.g., student study groups, group project collaborations).

Instructor Guideline for Numbers of Annotations Made

The majority of students (70%) found the guideline regarding the number of annotations to be sufficient, although they could perceive its arbitrary nature, in that five or even fewer annotations could be lengthy and insight-rich, while a higher number of annotations could be terse and provide less insight. The annotation number stipulation is straightforward and easy for both instructors and students to keep track of, which is likely one of the reasons it is commonly used as either a guideline or an output measurement ( Miller et al., 2018 ; Lee et al., 2019 ; Singh, 2019 ).

5 annotations are more than enough. I think the nature of the platforms naturally enforce conversations and interactions, that in my mind without thinking about it, usually go beyond the suggested 5 annotations.

Yes, I think five annotations are a good amount. But maybe some clarifications on how long those annotations should be. Since some students have five really long annotations, and some have 10 short annotations. Otherwise, the guidance is clear.

I think so; the real power of these guidelines came from the variety of annotations that were given by each of the students. Between background information on methods, critiques of the authors’ analysis, and questions about the scientific background of the papers, I felt like the five annotations per week were sufficient for a robust conversation.

I think the guideline was reasonable. As a suggestion, perhaps the assignment could include introducing a certain number of comments as well as responding to other comments. The goal of using an annotation platform vs. the template is to encourage discussion with other people.

Which Annotations Were Most Useful?

Annotations explaining some aspect of the source text were the most frequently mentioned as useful in the survey (40% of responses). Not surprisingly, the students also found value in annotations encouraging dialogue and raising additional concerns and questions. Corrections from either the instructor or other students, and linkage to other applications or course content, rounded out the categories for useful annotations.

We have seen several times circumstances where multiple people will enter into a conversation and we end up with a whole thread of annotations. I think these can be extremely helpful and also just make reading the paper more interesting. Especially when people argue about a certain point, as getting to see people’s arguments often helps to better understand the author’s motivation behind doing something a certain way.

Corrections from other students and the instructor if I misunderstand something. Connections between the paper we are annotating with lecture materials or other research papers.

Annotations that synthesized something from the paper and then asked a question about it. What I appreciated was that it was sometimes a comprehension question (why would they use X method, not Y?) but sometimes it linked to outside ideas or thoughts (would this translate to Z?).

Sometimes I come across annotations that describe a method, and those are helpful because they make it easier to understand the results. However, this only applies to annotations where someone took the time to make a clear and concise annotation rather than copy-pasting a description from a webpage or linking a paper.

Which Annotations Were Useless?

The most frequent response to this question in the survey was that there was no such thing as a useless annotation (31%). Students placed less value on the re-stating of anything obvious, terse agreement annotations, or information that was easily found through an internet search. They favored annotations that were dialogic. There were some differences in opinion regarding definition-type annotations; for some students they made the reading process easier, while others viewed definitions as a dialogic dead-end and something they could easily obtain on their own.

Some annotations were superfluous in nature and defined terms and or processes that were canonical and did not need a one paragraph explanation.

Definitions—especially in the beginning were very frustrating. There is no response to them, they don’t make you think any more or differently about the paper.

I dislike annotations that only link to another paper, like ‘Excellent review article (link). What is it about the review article that makes it excellent? What did the student learn from that review article? What about the review article complements this specific paper? Just even a single sentence would be a big improvement.

Annotations that describe straightforward results. Like if there is a graph that shows that some parameter increased with a treatment, then an annotation stating just that is useless. If the annotation links it to the other results and explains the conclusion, that’s useful. However, it shouldn’t be too long and convoluted.

I can’t remember any useless annotation I have come across. I don’t think there is or can be any useless annotation—I think what may seem obvious to one may actually be something that is completely missed by another.

Further Pedagogy-Related Feedback From Students

In the survey responses, students are thinking of ways to operationalize dialogic feedback and achieve “revisits” to the platform after an annotation requirement has been fulfilled. Some students were daunted by the vast number of annotations that accumulate on a given paper when a group of approximately 20 students and one instructor are annotating. Reading the full body of annotations is a fairly large time commitment for the students, who would also spend a great deal of time reading the content of the paper itself.

I feel like having your recaps in class helps, because I rarely read all of the annotations, or feel overwhelmed doing so.

I think at times there were just too many comments on a paper. It became a race for people to read and annotate the papers early so there was enough left to comment on, without being repetitive. If I was late to the game, sometimes it was easier to just read the comments and find an outside/related subject to Google and link to instead of reading the paper and really thinking about it. I think lowering the number of comments we need to make would help with that.

What happens a lot with me and some of my friends is that by the time we’re done reading and making our annotations, someone else on the platform has already commented what we wanted to say. Then it becomes stressful to think of new things just to stand out. I feel like commenting “I had the same question” on another person’s comment makes it seem like I was lazy and didn’t read the paper when in fact I really did have the same question.

I think it would be interesting to assign different figures to different groups of students, it might allow for more in depth critique of certain sections. Additionally, it would be an opportunity for students to work in groups and get to know each other.

Again, I like the “back and forth” discussions in the annotation. It is like a debate in the annotation form. I think I’ve seen too much “I agree” (though I used it a lot, too). We might be better off to give contrary opinions and then defend each other’s view using lecture or outside source knowledge. I’m sure we’ve all come across some annotations to which we hold completely different opinions. For these annotations, after we’ve given our feedback, I’d expect the other people to defend their ideas too.

I think that perhaps there could be an incentive provided for people to actively go back to the platform (after they made their annotations) to discuss with people who annotate after them—perhaps like extra marks? Because once one makes their annotations, there isn’t really a need for one to go back and “interact.” So perhaps this would encourage more interaction? but I also feel that this may lead to “flooding” of annotations.

The annotation platforms have adopted technical solutions to encourage returns to the platform, via email notifications when one has been tagged in an annotation or had their annotation further commented on. Additional return incentive could be built in pedagogically by the instructor, perhaps by encouraging responses in dialogic threads, or by suggesting that while a certain number of responses can be “initiative” (start a new thread), other responses should continue from an existing annotation to make a constructive dialogic thread. Assessment routines could perhaps be shifted away from individual annotations and toward the overall quality of collaborative threaded contributions.

Students suggest some prompting so that all of an individual’s annotations are not concentrated in one section of a paper, but are instead divided among the introduction, methods, results, and discussion sections. Teacher prompts taking the form of “seed” annotations could also guide students by superimposing a templated approach onto the annotation approach, if certain seed annotations are regularly included (e.g., Are the researchers missing any controls? Do the conclusions feel supported by the existing data?). In another study, anonymous seed annotations generated from a previous year’s more intriguing threaded discussions prompted better annotation quality and more elaborative-generative threads in a subsequent class ( Miller et al., 2016 ).

Teacher prompts could be helpful, though I also worry that then students may focus on answering just those prompts and not branching out to really critically analyze the rest of the paper.

I think it worked really well overall, however, it would help to have more guidance/requirements on the types of annotations students should be leaving. Annotation platforms make it really easy to “skim’ the paper, rather than really read into it.

Simply writing five annotations would be very generic. It may be better to restrain, for example, one annotation at least to comment on a new term that wasn’t familiar to you before, three annotations at least to comment on the results/experiments, and perhaps one annotation at least to comment on the biggies (significance? results sound or not? future directions? etc.).

Since the body of annotations can grow to a large size in a group of 20 students, directing attention to upvoted responses might be a way for students to consume the annotations more selectively. The upvote button on the Perusall platform should help to limit sparse “I agree” type annotations, as the upvote accomplishes the same function. However, there was some concern that threads or comments that were really insightful did not get upvotes, whereas some threads that were viewed as not being particularly helpful did receive multiple upvotes. This is an area where instructor curation and recaps are needed to keep quality annotation work from escaping student consideration.

On Hypothes.is, I can’t see which comments get the most “upvotes” or “likes.” Sometimes I don’t have the time to read through every comment, but it’d be helpful to look at comments that were most helpful to a lot of students.

I read some really thoughtful Perusall annotations from other people that didn’t get upvoted. I also read some less thoughtful Perusall annotations from other people that got relatively heavily upvoted.

Quantitative Data: Annotation Output for Hypothes.is vs. Perusall Platforms

Instructors getting started with a collaborative annotation platform may look toward quantitative metrics suggestive of student engagement. Perhaps the platforms themselves will come up with more sophisticated indicators, but some basic usage indicators that are easy for an instructor to grasp include: the number of annotations made by a student (how often does it meet or exceed the instructor-stipulated minimum?); annotation character volume per student per paper (has a student contributed sparse or more lengthy content in their annotations for a paper?); annotation content quality; and the degree to which annotations are isolated (did not receive any further response) or threaded (received at least one response). Looking at these metrics over time, as students progress from one paper to the next, and then from one platform to the next, may also be beneficial for gauging overall student progress; a brief sketch of how such indicators could be tabulated from a flat annotation export follows.
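
As a rough illustration (not a feature of either platform, and not the study’s analysis code), the Python/pandas sketch below shows one way an instructor might tabulate these indicators from a flat, one-row-per-annotation export. All column names and values are invented.

```python
import pandas as pd

# Hypothetical export: one row per annotation, with invented columns.
df = pd.DataFrame({
    "student":    ["s1", "s1", "s2", "s2", "s2"],
    "paper":      ["A",  "A",  "A",  "B",  "B"],
    "chars":      [420,  310,  550,  120,  600],   # character volume
    "quality":    [2,    1,    2,    1,    2],     # 0-2 rubric score
    "thread_len": [3,    1,    3,    1,    2],     # student annotations in thread
})

# Per-student, per-paper indicators: annotation count, character volume,
# and mean rubric quality.
per_paper = (
    df.groupby(["student", "paper"])
      .agg(n_annotations=("chars", "size"),
           char_volume=("chars", "sum"),
           mean_quality=("quality", "mean"))
      .reset_index()
)

# Share of annotations that drew at least one student response (threaded).
threaded_pct = (df["thread_len"] >= 2).mean() * 100

print(per_paper)
print(f"threaded: {threaded_pct:.0f}%")
```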

Number of Annotations per Student per Paper

Each data point in Figure 2A represents the number of annotations made by a student on a given paper; the vast majority of students consistently exceeded the instructor’s stipulation of five annotations. Only 4/39 students consistently adhered to the minimum recommended number. Exceeding the instructor-stipulated minimum is in line with a study by Lee et al. (2019) , where a mean of 16.4 annotations per student per paper exceeded a 12-annotation stipulation. 24/39 students had more annotations per paper using Perusall, while 11/39 students had more annotations per paper using Hypothes.is. 4/39 students annotated to the same degree using both platforms, likely out of habit of sticking to the minimum stipulated annotation number. Considering the output from both cohorts ( n = 39), the mean number of annotations per paper per student using Perusall (7.49, SEM: 0.427) was higher than the mean using Hypothes.is (6.81, SEM: 0.375), although the difference was less than one annotation per paper, not statistically significant, and unlikely to be pedagogically significant for reasons mentioned previously.

In Figures 2B,C , means for the number of annotations made per paper were similar when comparing output on the two platforms for either cohort. The mean number of annotations made per student per paper stayed relatively stable over time as the students progressed within and between annotation platforms in the 2019–2020 cohort. There was initially high activity on the first paper annotated by the 2020–2021 cohort, which then stabilized between 6.5 and 7.5 annotations per paper, similar to the 2019–2020 cohort. An unusual initial output makes sense if one considers that students are adjusting to the platforms and may not yet have a good sense of output norms among their peers.

Character Volume in Annotations

In Figure 3A , 28/39 students had higher annotation character volumes per paper using Perusall, while 11/39 students had higher annotation character volumes with Hypothes.is. Combining data from the two cohorts ( n = 39), the overall mean was 3,205 characters per paper (SEM: 206) for Perusall and 2,781 characters per paper (SEM: 210) for Hypothes.is ( p = 0.0157).


Figure 3. Character volume in annotations. (A) Each data point represents the total character volume within all annotations made by a student for each paper. The X-axis is organized to group students together according to platform output means being higher on Hypothes.is or Perusall. Lines next to data points: SEM. Dashed lines indicate global means. (B) For the 2019–2020 cohort (18 students), and (C) the 2020–2021 cohort (21 students), the mean total character volume for all annotations per student (from four papers on each platform), and mean character volume per annotation are indicated, along with the pairing (gray lines) to indicate an individual student’s output on each platform. Below the bar chart is a timeline tracking the mean total character volume of annotations per student from the first to the eighth paper, error bars: SEM.

In Figure 3B , the 2019–2020 cohort had a higher mean total character volume output per student per paper for Perusall (3,205, SEM: 220) than for Hypothes.is (2,287, SEM: 215) ( p < 0.0001). They also had a higher mean character volume per annotation for Perusall (503, SEM: 19.7) than for Hypothes.is (355, SEM: 17.7) ( p < 0.0001). This cohort showed a steady increase in character volume output per student over time.

In Figure 3C , there was no significant difference in mean total character volume between the platforms for the 2020–2021 cohort, although Hypothes.is had a higher character volume output per student on a per-annotation basis ( p = 0.012). Mean character volume output per student over time was steadier and did not show the same consistently rising pattern as in the 2019–2020 cohort. One potential explanation is that the user interface or social nature of the Perusall platform encourages higher output, and this momentum remains when the students transition to Hypothes.is.

Annotation Quality Scores Increase Over Time, Regardless of Platform Order

In keeping with prior studies ( Miller et al., 2018 ), individual annotation quality was generally quite high, with the vast majority of annotations scoring full marks (two out of a possible two for an individual annotation). Figures 4A,B indicate a decrease in low-scoring annotations (0 or 1) as the students go from the first to the second annotation platform. Figures 4C,D indicate an increase in mean annotation quality score as students progressed from one platform to the next, regardless of platform order. The mean annotation quality score for the 2019–2020 cohort went from 1.70 to 1.84 from the first to the second platform (Hypothes.is to Perusall) ( p = 0.0176). For the 2020–2021 cohort, mean annotation quality went from 1.57 to 1.71 from the first to the second platform (Perusall to Hypothes.is) ( p = 0.011). In considering progression over time for annotation quality in Figures 4E,F , there was some fluctuation on a per-paper basis, but the trends indicate an improvement from the beginning to the end of the annotation exercise. Taken together, these data are consistent with a growing fluency with annotation practices and de-emphasize any platform influence on annotation quality. It is reasonable to conjecture that different attributes of a platform may change student behavior, as can be seen with annotation lengths. Since both platforms enable the essential basic annotation function, student insight shines through and does not necessarily depend on annotation length. Thus, it is reassuring that the mean quality score measured per student globally ( n = 39) was almost identical (1.71 Hypothes.is, SEM: 0.04; 1.70 Perusall, SEM: 0.05).


Figure 4. Annotation quality scores increase from the first to second annotation platform for each cohort, regardless of the platform order. (A) 2019–2020 cohort, and (B) 2020–2021 cohort, mean number of annotations scoring a 0, 1, or 2 (among four papers on each platform), error bars: SEM. (C) 2019–2020 cohort, and (D) 2020–2021 cohort, mean annotation quality score per student (among four papers on each platform), with the pairing (gray lines) to indicate an individual student’s score on each platform. (E) 2019–2020 cohort, and (F) 2020–2021 cohort, timeline tracking the mean annotation quality score per student from the first to the eighth paper, error bars: SEM.

Isolated vs. Threaded Annotations

Threaded annotations can be viewed as preferable to isolated annotations because they provide evidence that the initial annotation has been read and digested by the responder, and then spurred some dialogue for debate, additional nuance, or correction. In considering the percentage of total annotations that were isolated vs. those appearing in a thread, the only time that isolated annotations outnumbered threaded annotations was in the initial use of the Hypothes.is platform with the first assigned paper for the 2019–2020 cohort ( Figure 5A ). For all other papers, annotations that were part of a thread outnumbered those that were isolated. The 2019–2020 cohort showed a clear trend of increasing threaded annotations over time, and a higher mean percentage of threaded annotations on the second platform (Perusall, 80% threaded) vs. the first platform (Hypothes.is, 53% threaded) ( p = 0.0108). The 2020–2021 cohort ( Figure 5B ) showed a relatively steady trend, with a mean of ∼70% of annotations occurring in threads on each platform. The final paper annotated on each platform tended to have the highest percentage of collaborative annotations, again indicating an upward trend for dialogue.


Figure 5. Annotations in collaborative threads over time. (A) 2019–2020 cohort, (B) 2020–2021 cohort, percentage of annotations classified as isolated (no further student responses) [squares], or accompanied by one or more responses (thread length two or greater) [circles] within a given paper. Above the graphs are the mean percentages of threaded responses among the four papers annotated on each platform within a given cohort.

The trend toward an increase in the percentage of threaded annotations, and an increase in mean annotation quality scores over time, is reassuring, as it suggests that even in a relatively unprompted setting, students have some natural fluency and become better annotators. This should be encouraging for both annotation platform designers and teachers considering a collaborative annotation approach in their courses. Prior studies have not followed the same student population from one platform to another, nor looked at output over time (threaded vs. isolated, annotation numbers, annotation character volume) within and between platforms. The quantitative analysis in this work provides a baseline upon which future quantitative studies of student annotation output can be compared or further built upon in sophistication. The annotation character volume difference in the 2019–2020 cohort was in favor of output on the Perusall platform, which could suggest that the social functionality of a platform drives some additional engagement; however, that conclusion should be tempered by the data from the 2020–2021 cohort, which was more even. The survey data show a slight preference for Perusall over Hypothes.is (41% vs. 35%).

Caveats and Limitations of Current Study

Since the class sizes in this study were relatively small (<25 students), the body of annotations for a weekly reading was still fully consumable by the instructor with an investment of roughly 4–5 h for reading, processing, grading, and engaging with a subset of those annotations. This does not include a thorough reading of the source document and the planning of the accompanying lecture, which took additional time. The reading time commitment for an entire body of annotations is perhaps even more daunting for students, as was indicated in some survey responses. With larger classes, one instructor may have difficulty managing the body of annotations, and if engaging with students on the platform, would likely be participating within a smaller percentage of the overall student body. Both platforms can divide a class into smaller subgroups; Perusall’s default setting is groups of 20 students. If the readings are annotated in assignment mode, Perusall also has a machine learning capability to analyze the large body of annotations that could accrue with a large class, but this was not evaluated in the current study. Annotations of poor quality can contribute noise to the reading experience, and contempt for the annotator ( Wolfe and Neuwirth, 2001 ). In this study, low-quality annotations were a relatively minor concern, but they could be a greater concern with larger class sizes, or for classes where some subset of students approaches the source material in a superficial way (e.g., a required class outside of a student’s main interests, unreasonable difficulty in grasping the source material, or a desire to troll/abuse other students in the class). In sum, even though the annotation approach worked well in the current study and student population, problems could emerge with another population.

Since the Hypothes.is annotations were made on article PDFs hosted as webpages, annotations can be temporarily lost if the article URL changes. This occurred with one article from the 2019–2020 cohort and one article from the 2020–2021 cohort. With some technical support from Hypothes.is, the annotations were recovered by using a locally saved PDF whose underlying fingerprint could still be recognized, allowing the annotations to be displayed. Individual annotations can also become “orphaned” if the text they were anchored to disappears from the source webpage. These are listed under a separate tab in the annotation interface, so they are not lost from consideration. If students are annotating web content that is more dynamic, with many source edits, this could be more problematic. In Perusall, the source documents were uploaded PDFs, so the underlying text never changed.

Ideally, the same articles would have been assigned to each cohort (2019–2020 and 2020–2021), however, that was not possible, as the articles needed to relate to a seminar speaker series where the invited speakers change from year to year. Instructors should keep in mind that when students first use an annotation platform, they do not yet have an impression of group output norms, so one might expect higher or lower output on the first paper annotated. This can be seen in both cohorts in this study, as the 2019–2020 cohort had a particularly low character volume on the first paper annotated, while the 2020–2021 cohort had a higher annotation number and character output on the first paper.

The mean number of annotations per paper is surely influenced by teacher guidelines. If one used the platforms with no minimum stipulation, or with a minimum count of 10 instead of 5, student behavior would likely change. Some portion of the motivation is driven by instructor stipulation and the grading of the annotations; another portion comes from genuine engagement with a thought-provoking point made by another student, a refutation of one's annotation, or a conversation taken in an unexpected direction. One cannot be sure of the balance between these forces, but prior research indicates that even in ungraded settings, collaborative annotation still appears to engage students with class-associated reading ( Singh, 2019 ).

In retrospect, being able to link student survey comments to the same student's annotation output would have enabled additional research questions to be asked (e.g., do students who favor one platform in their survey response also make more annotations or have a higher character volume on that platform?). As the surveys in this study were answered anonymously, this was not possible.

Finally, the functionality of the platforms can change over time. This is an unavoidable problem for research on any type of educational technology. Some issues mentioned by students may already be in the process of being fixed by the platforms.

Emergence of Annotation Best Practices

The major areas for the shaping of annotation best practices appear to reside in:

1. Scaffolding for students in writing more effective annotations.

2. Affordances of asynchronous participation.

3. Measurement of annotation across texts vs. within texts.

4. Large data set mining/learning analytics approaches.

As annotation platforms are relatively new to the educational technology scene, instructors are now starting to consider what scaffolding is needed for students to write high quality annotations. Work by Jackson (2021) parallels two of the qualitative survey prompts here, in that it asks students to elaborate on what makes for good quality and poor quality annotations, in the hope that they will apply that reflection to their own annotation output later on. It includes an excellent clarify-connect-extend annotation rubric ( Jackson, 2021 ), which instructors might find useful in an initial briefing on the annotation process for their students, or for remedial tune-ups for those contributing less-than-ideal output.

Asynchronous discussion allows preparation and analysis time not only for students, but also for the instructor. In synchronous situations, the instructor cannot typically ask a student to wait an hour for a reply to a comment or question so that the instructor can read another article and make a more nuanced and accurate comment; with an asynchronous approach, this is possible. Although one often thinks of how to motivate students, these asynchronous approaches provide a buffer of time that can motivate further engagement from the instructor with the source text or with other related materials. On the other hand, tardy feedback (>2 weeks after an assignment is completed) is detrimental to the feedback's value and impact ( Brown, 2007 ). With the annotation platforms in the current study, follow-up on student annotations occurred on the order of hours to days, well within the period in which feedback remains useful.

Annotation across various texts and annotation within a given text both yield valuable information ( Marshall, 2000 ). A student's annotations across various texts during a semester, or during a degree program, could give some indication of intellectual growth over time. The body of annotations within a given text could provide an important indicator of engagement with an assigned text, under the assumption that a text with a high volume of threaded annotations is more conducive to debate and collective meaning-making than a text with a low volume. This may signal which readings should be kept or omitted in future course syllabi, keeping in mind that higher or lower numbers may occur when the annotation platform is first introduced, as students become familiar with the annotation routines. Similar considerations of individual student activity vs. course resource usage have been harnessed for LMS dashboards ( Wise and Jung, 2019 ), and for annotations across course documents over time ( Singh, 2019 ). Although just-in-time teaching was mentioned previously in regard to the traditional template assessment, it may equally apply to annotation output, particularly when collating tags that indicate confusion; this could inform instructors about where students are having difficulties ( Singh, 2019 ). Perusall can also generate a confusion report summarizing general areas of questions and confusion.
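
As an illustration of the kind of per-document tally and confusion-tag collation described above, the following sketch works over a plain list of annotation records. The record keys and the confusion tag names are assumptions for illustration only, not an actual export schema or the tag vocabulary of either platform.

```python
# Minimal sketch: per-document engagement tallies and collation of
# confusion-related tags, in the spirit of just-in-time teaching.
# Assumes annotation records as dicts with "document", "tags", and
# "text" keys; the tag names below are hypothetical.
from collections import Counter, defaultdict

CONFUSION_TAGS = {"confused", "question", "unclear"}  # hypothetical tag set

def engagement_report(annotations):
    # How many annotations did each assigned document attract?
    per_doc = Counter(a["document"] for a in annotations)

    # Collect the text of annotations carrying a confusion-related tag,
    # grouped by document, so an instructor can scan trouble spots.
    confusion = defaultdict(list)
    for a in annotations:
        if CONFUSION_TAGS & {t.lower() for t in a.get("tags", [])}:
            confusion[a["document"]].append(a["text"])
    return per_doc, confusion
```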

For learning analytics practitioners, a body of annotations holds not only the insight within it (i.e., what section of text is highlighted? what is expressed in the annotations added?), but where it was applied (which document or URL?), when it was applied (time stamps), and how students and scholars might form an effective network (who participates in whose threads?). This could collectively yield a staggering amount of data. An estimated 2,900,000 time-stamped learning “traces” were postulated to arise from a 200-student course using an nStudy collaborative annotation tool ( Winne et al., 2019 ). The Hypothes.is and Perusall platforms have vastly larger student user bases, so collaborative online annotation seems ripe for learning analytics and big data inquiries. Statistical properties of online web page tagging practices ( Halpin et al., 2007 ; Glushko et al., 2008 ), or the view of collaborative tagging as distributed cognition ( Steels, 2006 ), may also apply to annotation content when larger groups of annotators are involved.
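
To make these learning-analytics possibilities concrete, the sketch below derives a simple "who replies to whom" network and a weekly activity count from time-stamped annotation records. The field names are again assumptions; real exports from Hypothes.is or Perusall would need to be mapped onto them.

```python
# Minimal sketch: deriving "who replies to whom" edges and weekly activity
# from annotation records for learning-analytics use. Assumes each record
# carries "annotation_id", "parent_id", "author", and an ISO "timestamp";
# these names are assumptions, not a documented platform schema.
from collections import Counter
from datetime import datetime

def reply_network(annotations):
    # Map each annotation to its author, then count replier -> thread-author edges.
    authors = {a["annotation_id"]: a["author"] for a in annotations}
    edges = Counter()
    for a in annotations:
        parent = a.get("parent_id")
        if parent and parent in authors:
            edges[(a["author"], authors[parent])] += 1
    return edges

def activity_by_week(annotations):
    # Bucket annotation timestamps by ISO week to see engagement over time.
    weeks = Counter()
    for a in annotations:
        year, week, _ = datetime.fromisoformat(a["timestamp"]).isocalendar()
        weeks[f"{year}-W{week:02d}"] += 1
    return weeks
```

The edge counts could feed a standard network analysis (e.g., identifying students who anchor many threads), while the weekly buckets give a coarse view of when annotation activity rises or falls across a term.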

Annotation Platforms for Peer Review and Post-publication Peer Review

Although the user base for online annotation by students is large, collaborative text-linked annotation could find additional users in a journal's peer review process or in post-publication peer review ( Staines, 2018 ), whereby commentary is collected for the purposes of re-contextualizing or further assessing the quality of a previously published manuscript ( Kriegeskorte, 2012 ). Some journals already include collaborative stages in peer review, but the discussion occurs in more of a forum-type situation, where the commentary is not directly text-linked or marginal. Authors, reviewers, and editors should consider whether commentary that is directly text-linked or figure-linked is more beneficial, or whether they would prefer to continue contextualizing comments with line numbers and other source document referrals. Critical commentary on a published article may already occur within the introduction and discussion sections of other articles, or on web blogs, but assembling it can be difficult, as it is not anchored within the discussed document and must be reconstructed in a labor-intensive way from citation trails. One can contemplate whether post-publication peer review initiatives like PubPeer ( Townsend, 2013 ) would be more streamlined if commentary were directly content-linked, perhaps aided by a set of common tags among users.

Meta-Commentary, New Teaching Spaces

In reading a primary source paper and the accrued commentary in the annotations, which often include praise, snipes, and suggestions for how the authors “should have done things differently,” one is fairly confident that the commentary drives additional interest in the paper. Although they are not typically marginal or text-linked, comments on newspaper articles are generally supported by authors and may drive more interest in the article itself ( Nielsen, 2012 ). To consider an example outside of academics, some television programs (e.g., Terrace House) include a surrogate audience of commentators to help the home audience interpret and judge the actions of characters on the show ( Rugnetta, 2017 ). The audience tunes in not only to see what the main characters will do, but also to see how their behavior is commented upon by this panel of observers. Their commentary functions as highly engaging meta-content that indicates how a viewer should receive and process the main events of the show ( Urban, 2010 ). Some fear that the show would be mundane without the panel commentary, which serves as a major engagement tool; the audience is treated to a meta-experience that filters their own experience, and it is this alternative reading that provides additional intrigue ( Kyotosuki, 2018 ).

Consider reading your favorite movie script with annotations by the director, or a draft of your favorite novel including exchanges between the editor and the author, or a landmark scientific paper with annotations by current scientists. These would all inform the reader on the process involved in getting to the final product, or in the latter example could provide a contemporary lens for older content, and thus add value. Some critics have imagined bodies of annotations from a favorite book that could be shared (transportable social marginalia) in a literary communion through a series of “annotation skins” ( Anderson, 2011 ). The collation and screening of quality annotations could also be a value-adding enterprise for those willing to participate.

While one can lament the loss of physical teaching spaces imposed by a viral pandemic or other virtual learning circumstances, new spaces are opened by new technologies ( Pursell and Iiyoshi, 2021 ). The instructor and the student can “meet at the text” via collaborative online annotation, and engage in critical exchanges.

Future Research Questions

Collaborative annotation provides fertile ground for further study.

First: To what degree do students read the primary source text on the annotation platform itself? Do they mark up a paper copy or a separate digital copy (e.g., a personally downloaded PDF file) and then go to the annotation platform, or do they treat the platform as a packaged experience where they do both the initial reading and the annotating? This question should be included in future annotation usage surveys and could inform platform designers, who would like to enable the smoothest experience for reading and annotating the source text on the same screen. One might expect dialogic interactions to decrease if users first annotated a paper or digital copy by themselves and then simply typed those isolated points into the annotation platform interface.

Second: What percentage of the total body of annotations on a given text are students consuming? To what extent do students revisit their own annotation threads to look for and address new responses, or revisit a growing body of annotations after they have fulfilled the instructor-stipulated amount? Survey data in this study indicated that students found value in the annotations of others, which is in accord with the value of “eavesdropping” on the insights of other readers ( Wolfe and Neuwirth, 2001 ), but currently the only plausible indicator that an annotation has been read by another student is that they have commented on it to extend the thread. Perhaps technical developments by the platforms will make some measurement of student consumption of annotations possible in the future. The consumption of high quality, but perhaps unseen, threads can be aided by an instructor's curation of annotations for a subsequent class session.

Third: How much value does a body of annotations hold over time? Although this has been considered for an annotation's value relative to the potential permanence of the work itself ( Marshall, 2000 ), one could also ponder how much value a body of annotations generated in one class could have for an instructor or group of students at another university who happen to be embarking on an annotation exercise on the same source text. This would seem to provide fertile ground for cross-cultural and cross-institutional comparisons. An instructor could give a series of richly annotated documents to a group of students and have them evaluate that reading experience against a set of unannotated documents, testing the “dirty textbook” hypothesis ( Van Dam, 1988 ) in the current online portable annotation environment. There will be opportunities for instructor curation and comparison that also relate to pedagogy, as was the strategy of using a previous class's annotations as “seed” annotations to promote productive student dialogue in Miller et al. (2016).

Fourth: Would the author of the source text ever wish to engage with the annotators? Some authors might discover new insights, research directions, and caveats for their published work by treating public annotation directly linked to the source text as a form of post-publication peer review. Textbook authors and editors might like to see which sections of the book generate many annotations indicative of confusion. Other authors are opposed, stating that commentary on their article is fine in other locations (separate blogs, Twitter, etc.), but that they do not want any commentary superimposed upon their own website URLs ( Watters, 2017 ). Constructive commentary is likely favorable to most, but it would need to be free of the noise of useless comments, personal attacks, and factually false statements. This useful “wheat” vs. useless “chaff” concern affects all publication systems.

Collaborative online annotation can provide a means for creative discussion and better understanding of a text, including quite challenging primary research texts. As with any educational technology, pedagogical considerations will be of paramount importance. Students recognize and appreciate that an online annotation platform can make their thoughts, and those of their classmates, visible and actionable for an assigned text, thus providing a useful point of comparison. Some solace can also be found when the struggle with tough material is collective rather than isolated. Repetition in grading is reduced when collaborative annotation takes the place of an assignment in which students generate a relatively uniform assessment product, and some of the feedback burden on instructors is removed when students beat the instructor to the punch in responding to an annotation with quality feedback.

Early web browsers contemplated annotation as a feature but were hampered by an inability to host the potentially huge volume of annotations on a proper server ( Andreessen, 2014 ), so the vision of online annotation is not new; it has been around since the early 1990s. Now, because of their large and growing user bases, Perusall and Hypothes.is are opening up a new enterprise that classroom instructors, scholars, and learning analytics practitioners can all enter, and that can hopefully benefit students in the process.

In an address entitled The Revolution Will Be Annotated ( Personal Democracy Forum, 2013 ), Whaley argued that “reasoning tends to work better as a team sport.” The student feedback in the current study supports that argument. In As We May Think ( Bush, 1945 ), in which multiple aspects of the internet were anticipated, Vannevar Bush predicted:

There is a new profession of trail blazers, those who find delight in the task of establishing useful trails through the enormous mass of the common record. The inheritance from the master becomes, not only his additions to the world’s record, but for his disciples the entire scaffolding by which they were erected.

This idea may gain traction in the rapidly accruing mass of annotations and post-publication commentary. Since annotation platforms like Perusall now serve students in the millions, and Hypothes.is annotators have made over 20 million annotations, approximately one year after marking 10 million annotations,^3 research into the usage and impact of these platforms seems particularly pressing. Hypothes.is, through iAnnotate,^4 and Perusall, through the Perusall Exchange,^5 are generating excitement in their own dedicated conferences. Learning Management Systems (LMSs) and Audience Response Systems (“clickers” or ARSs) have become so ubiquitous in higher education as to gain a common label. With the number of students currently served, it seems fitting that collaborative online annotation platforms (COAPs) acquire a common label too. To put the scope of the current study in perspective, the students in the two cohorts made over 2,200 annotations altogether, totaling over 920,000 characters. Although most of the focus in this and other annotation papers is on how the collaborative annotation process helps students, one can also consider how this spotlight into student thoughts helps the teacher. The students repeatedly had insights into the scientific content of the assigned papers that expanded the thinking of the instructor.

These annotation platforms bring new value to the educational technology landscape: new ways of achieving prompt, valuable, and often dialogic feedback that may lessen instructor burden and increase instructor and student motivation. The task we now face as educators is to make the annotation trails as useful as possible as we engage in the team sport of reasoning in the sciences, social sciences, and humanities.

Data Availability Statement

Anonymized datasets for this study can be found in the Harvard Dataverse online repository: full survey responses ( https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/0GG53Z ) and annotation content Excel files ( https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/G8SR2G ).

Ethics Statement

The studies involving human participants were reviewed and approved by the Harvard Human Research Protection Program. Written informed consent for participation was not required for this study in accordance with the National Legislation and the Institutional Requirements.

Author Contributions

GWP conceived the study, analyzed the data, and wrote the manuscript.

Conflict of Interest

The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s Note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Acknowledgments

I would like to acknowledge the consistent group effort of the students in annotating challenging papers and providing frank commentary on the annotation process. I would also like to acknowledge the time spent with Harvard Medical School’s Curriculum Fellows for broadening my exposure to pedagogical approaches and tools. Valuable feedback on early drafts of this manuscript came from Dr. Taralyn Tan of Harvard’s Neuroscience Program and Dr. Rachel Wright of Smith College. Valuable guidance on IRB navigation was provided by Dr. Madhvi Venkatesh, Vanderbilt University, formerly of Harvard.

Supplementary Material

The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/feduc.2022.852849/full#supplementary-material

  • ^1 https://web.Hypothes.is/about/
  • ^2 https://perusall.com/about
  • ^3 https://web.Hypothes.is/blog/our-view-from-20-million-annotations/
  • ^4 https://iannotate.org/
  • ^5 https://perusall.com/exchange

Adams, P. (2006). Exploring social constructivism: theories and practicalities. Education 34, 243–257. doi: 10.1080/03004270600898893

Anderson, S. (2011). What I Really Want is Someone Rolling Around in the Text. New York Times.

Andreessen, M. (2014). Why Andreessen Horowitz Is Investing in Rap Genius . Available online at: http://genius.com/Marc-andreessen-why-andreessen-horowitz-is-investing-in-rap-genius-annotated [Accessed on July 19, 2021]

Beck, C. (2017). “Informal action research: The nature and contribution of everyday classroom inquiry,” in The Palgrave International Handbook of Action Research , eds L. Rowell, C. Bruce, J. Shosh, and M. Riel (New York: Palgrave Macmillan), 37–48.

Bold, M. R., and Wagstaff, K. L. (2017). Marginalia in the digital age: are digital reading devices meeting the needs of today’s readers? Libr. Inf. Sci. Res. 39, 16–22. doi: 10.1016/j.lisr.2017.01.004

Braguglia, K. H. (2006). Perceptions of Reading Assignments. J. Coll. Teach. Learn. 3, 51–56.

Brown, J. (2007). Feedback: the student perspective. Res. Post Compul. Educ. 12, 33–51.

Bush, V. (1945). As we may think. Atl. Mon. 176, 101–108.

Carnegie Learning (2020). UpGrade: An Open Source Platform for A/B Testing in Education. Available online at: https://www.carnegielearning.com/blog/upgrade-ab-testing/ [Accessed on Jun 28, 2021]

Chen, Z., Chudzicki, C., Palumbo, D., Alexandron, G., Choi, Y.-J., Zhou, Q., et al. (2016). Researching for better instructional methods using AB experiments in MOOCs: results and challenges. Res. Pract. Technol. Enhanc. Learn. 11, 1–20. doi: 10.1186/s41039-016-0034-4

Clapp, J., DeCoursey, M., Lee, S. W. S., and Li, K. (2020). “Something fruitful for all of us”: social annotation as a signature pedagogy for literature education. Arts Humanit. High. Educ. 20, 147402222091512.

Clump, M. A., Bauer, H., and Bradley, C. (2004). The extent to which psychology students read textbooks: a multiple class analysis of reading across the psychology curriculum. J. Instr. Psychol. 31, 227–232.

Duchastel, P., and Chen, Y. P. (1980). The use of marginal notes in text to assist learning. Educ. Technol. 20, 41–45.

Fagen, A. P., Crouch, C. H., and Mazur, E. (2002). Peer instruction: results from a range of classrooms. Phys. Teach. 40, 206–209. doi: 10.1119/1.1474140

Ghadirian, H., Salehi, K., and Ayub, A. F. M. (2018). Social annotation tools in higher education: a preliminary systematic review. Int. J. Learn. Technol. 13, 130–162. doi: 10.1504/ijlt.2018.092096

Glushko, R. J., Maglio, P. P., Matlock, T., and Barsalou, L. W. (2008). Categorization in the wild. Trends Cogn. Sci. 12, 129–135. doi: 10.1016/j.tics.2008.01.007

Halpin, H., Robu, V., and Shepherd, H. (2007). The Complex Dynamics of Collaborative Tagging. in Proceedings of the 16th International Conference on World Wide Web. New York: ACM 211–220.

Hannafin, M. J., Hill, J. R., Land, S. M., and Lee, E. (2014). “Student-centered, open learning environments: Research, theory, and practice,” in Handbook of Research on Educational Communications and Technology , eds M. Spector, M. D. Merrill, J. Merrienboer, and M. Driscoll (London, UK: Routledge), 641–651. doi: 10.1007/978-1-4614-3185-5_51

Honeycutt, L. (2001). Comparing e-mail and synchronous conferencing in online peer response. Writ. Commun. 18, 26–60. doi: 10.1177/0741088301018001002

Hoskins, S. G., Lopatto, D., and Stevens, L. M. (2011). The CREATE approach to primary literature shifts undergraduates’ self-assessed ability to read and analyze journal articles, attitudes about science, and epistemological beliefs. CBE Life Sci. Educ. 10, 368–378. doi: 10.1187/cbe.11-03-0027

Howard, J. R., and Henney, A. L. (1998). Student participation and instructor gender in the mixed-age college classroom. J. Higher Educ. 69, 384–405. doi: 10.2307/2649271

Jackson, H., Nayyar, A., Denny, P., Luxton-Reilly, A., and Tempero, E. (2018). HandsUp: An In-Class Question Posing Tool. in 2018 International Conference on Learning and Teaching in Computing and Engineering (LaTICE). New Jersey: IEEE, 24–31.

Jackson, H. J. (2001). Marginalia: Readers Writing in Books. London: Yale University Press.

Jackson, P. (2021). How to Write a High Quality Reading Annotation. Available online at: https://www.saltise.ca/wp-content/uploads/2021/03/How-to-write-high-quality-reading-annotations.pdf [Accessed on Jun 30, 2021]

Kalir, R., and Garcia, A. (2019). Annotation. The MIT Press Essential Knowledge Series. Available online at: https://mitpressonpubpub.mitpress.mit.edu/annotation [Accessed on May 17, 2021].

Kararo, M., and McCartney, M. (2019). Annotated primary scientific literature: a pedagogical tool for undergraduate courses. PLoS Biol. 17:e3000103. doi: 10.1371/journal.pbio.3000103

Kennedy, M. (2016). Open annotation and close reading the Victorian text: using Hypothes.is with students. J. Vic. Cult. 21, 550–558. doi: 10.1080/13555502.2016.1233905

Kimmons, R. (2021). A/B Testing on Open Textbooks. A Feasibility Study for Continuously Improving Open Educational Resources. J. Appl. Instr. Des. 10, 1–9. doi: 10.51869/102/rk

King, G. (2016). Introduction to Perusall. Available online at: https://perusall.com/downloads/gary-king-webinar-slides.pdf (accessed June 3, 2020).

Kriegeskorte, N. (2012). Open evaluation: a vision for entirely transparent post-publication peer review and rating for science. Front. Comput. Neurosci. 6:79. doi: 10.3389/fncom.2012.00079

Kuckartz, U. (2014). Qualitative Text Analysis: A Guide to Methods, Practice and Using Software. California: Sage.

Kyotosuki, N. (2018). Terrace House: Visualising ‘Asian Modernity. Available online at: https://atmafunomena.wordpress.com/2018/08/31/terrace-house-visualising-asian-modernity/ [Accessed on July 1, 2021].

Lee, S. C., Lee, Z.-W., and Yeong, F. M. (2019). “Using social annotations to support collaborative learning in a Life Sciences module,” in Personalised Learning. Diverse Goals. One Heart, ASCILITE , ed. Y. W. Chew (Singapore: ASCILITE), 487–492.

Marrs, K. A., and Novak, G. (2004). Just-in-time teaching in biology: creating an active learner classroom using the internet. Cell Biol. Educ. 3, 49–61. doi: 10.1187/cbe.03-11-0022

Marshall, C. (2000). The Future of Annotation in a Digital (paper) World. Successes & Failures of Digital Libraries: [papers presented at the 1998 Clinic on Library Applications of Data Processing, March 22-24, 1998] . Available online at: http://hdl.handle.net/2142/25539

McLuhan, M. (1964). Understanding Media: The Extensions of Man. Cambridge: MIT press.

McMillan, J. H. (2015). Fundamentals of Educational Research. London: Pearson.

Miller, K., Lukoff, B., King, G., and Mazur, E. (2018). Use of a Social Annotation Platform for Pre-Class Reading Assignments in a Flipped Introductory Physics Class. Front. Educ. 3:1–12. doi: 10.3389/feduc.2018.00008

Miller, K., Zyto, S., Karger, D., Yoo, J., and Mazur, E. (2016). Analysis of student engagement in an online annotation system in the context of a flipped introductory physics class. Phys. Rev. Phys. Educ. Res. 12, 1–12. doi: 10.1103/PhysRevPhysEducRes.12.020143

Nielsen, C. (2012). Newspaper journalists support online comments. Newsp. Res. J. 33, 86–100. doi: 10.1093/geront/gns046

Novak, E., Razzouk, R., and Johnson, T. E. (2012). The educational use of social annotation tools in higher education: a literature review. Internet High. Educ. 15, 39–49. doi: 10.1016/j.iheduc.2011.09.002

November, A. (2020). Interview with Eric Mazur: Socrates Meets the Web! Novemb. Learn. Webpage. Available online at: https://novemberlearning.com/2020/04/29/interview-with-eric-mazur-socrates-meets-the-web/ [Accessed on May 25, 2020].

O’Connell, M. (2012). The Marginal Obsession with Marginalia. The New Yorker, January 26.

Personal Democracy Forum (2013). Dan Whaley: The Revolution Will Be Annotated . Available online at: https://www.youtube.com/watch?v=2jTctBbX_kw

Podolefsky, N., and Finkelstein, N. (2006). The Perceived Value of College Physics Textbooks: students and Instructors May Not See Eye to Eye. Phys. Teach. 44, 338–342. doi: 10.1119/1.2336132

Porter, G. W. (2013). Free choice of learning management systems: do student habits override inherent system quality? Interact. Technol. Smart Educ. 10, 84–94. doi: 10.1108/ITSE-07-2012-0019

Pursell, C., and Iiyoshi, T. (2021). Policy Dialogue: online Education as Space and Place. Hist. Educ. Q. 61, 534–545. doi: 10.1017/heq.2021.47

Renz, J., Hoffmann, D., Staubitz, T., and Meinel, C. (2016). Using A/B testing in MOOC environments. in Proceedings of the Sixth International Conference on Learning Analytics & Knowledge , New York: ACM. 304–313.

Round, J. E., and Campbell, A. M. (2013). Figure facts: encouraging undergraduates to take a data-centered approach to reading primary literature. CBE Life Sci. Educ. 12, 39–46. doi: 10.1187/cbe.11-07-0057

Rugnetta, M. (2017). How is Terrace House like a Let’s Play. Available online at: https://www.youtube.com/watch?v=24MWwO_Gpg8 [Accessed on July 1, 2021]

Sadler, D. R. (1989). Formative assessment and the design of instructional systems. Instr. Sci. 18, 119–144. doi: 10.1007/bf00117714

Simpson, M. L., and Nist, S. L. (1990). Textbook annotation: an effective and efficient study strategy for college students. J. Read. 34, 122–129.

Singh, S. (2019). Exploring the potential of social annotations for predictive and descriptive analytics. Annual Conference on Innovation and Technology in Computer Science Education. New York: Association for Computing Machinery 247–248. doi: 10.1145/3304221.3325547

Staines, H. R. (2018). Digital Open Annotation with Hypothes.is: Supplying the Missing Capability of the Web. J. Sch. Publ. 49, 345–365. doi: 10.3138/jsp.49.3.04

Steels, L. (2006). Collaborative tagging as distributed cognition. Pragmat. Cogn. 14, 287–292. doi: 10.1016/j.chb.2015.04.053

Townsend, F. (2013). Post-publication peer review: PubPeer. Ed. Bull. 9, 45–46. doi: 10.1080/17521742.2013.865333

Urban, G. (2010). A method for measuring the motion of culture. Am. Anthropol. 112, 122–139. doi: 10.1111/j.1548-1433.2009.01201.x

Van Campenhout, R., Brown, N., Jerome, B., Dittel, J. S., and Johnson, B. G. (2021). Toward Effective Courseware at Scale: Investigating Automatically Generated Questions as Formative Practice. in Proceedings of the Eighth ACM Conference on Learning@Scale. New York: ACM 295–298.

Van Dam, A. (1988). Hypertext’87: keynote address. Commun. ACM 31, 887–895. doi: 10.1145/48511.48519

Vygotsky, L. (1978). Mind in Society: The Development of Higher Psychological Processes. Cambridge: Harvard University Press

Watters, A. (2017). Un-Annotated. Available online at: http://hackeducation.com/2017/04/26/no-annotations-thanks-bye [Accessed on August 12, 2021]

Weng, C., and Gennari, J. H. (2004). Asynchronous Collaborative Writing through Annotations. in Proceedings of the 2004 ACM Conference on Computer Supported Cooperative Work. Washington: University of Washington 578–581.

Wenk, B. L., and Tronsky, L. (2011). First-Year Students Benefit From Reading Primary Research Articles. J. Coll. Sci. Teach. 40, 60–67.

Winne, P. H., Teng, K., Chang, D., Lin, M. P. C., Marzouk, Z., Nesbit, J. C., et al. (2019). nStudy: software for learning analytics about processes for self-regulated learning. J. Learn. Anal. 6, 95–106.

Wise, A. F., and Jung, Y. (2019). Teaching with analytics: towards a situated model of instructional decision-making. J. Learn. Anal. 6, 53–69.

Wolfe, J. (2002). Annotation technologies: a software and research review. Comput. Compos. 19, 471–497. doi: 10.1016/s8755-4615(02)00144-5

Wolfe, J. L., and Neuwirth, C. M. (2001). From the margins to the center: the future of annotation. J. Bus. Tech. Commun. 15, 333–371. doi: 10.1177/105065190101500304

Yeong, F. M. (2015). Using primary literature in an undergraduate assignment: demonstrating connections among cellular processes. J. Biol. Educ. 49, 73–90. doi: 10.1080/00219266.2014.882384

Keywords : annotation, assessment, asynchronous, feedback, Hypothes.is, peer learning, Perusall

Citation: Porter GW (2022) Collaborative Online Annotation: Pedagogy, Assessment and Platform Comparisons. Front. Educ. 7:852849. doi: 10.3389/feduc.2022.852849

Received: 11 January 2022; Accepted: 23 March 2022; Published: 10 May 2022.

Copyright © 2022 Porter. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY) . The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Gavin W. Porter, [email protected]

IT Connect | UW Information Technology

Hypothesis: Collaborative Annotation for Canvas LMS

Available for: Instructors, Students

Log in to Canvas to access Hypothesis

ABOUT HYPOTHESIS

Hypothesis is a collaborative annotation tool integrated with Canvas that supports shared annotations within a course, discussion in response to annotations, and active reading of text. Instructors select Hypothesis as an external tool when setting up an assignment and can also choose to assign readings to groups. Students can then annotate course readings collaboratively, sharing comments, and replying to each other’s comments with text, links, images, and video. Hypothesis is also fully integrated with SpeedGrader for efficient review and grading of student annotations.

Resources for instructors

  • We recommend selecting the Load In A New Tab option when setting up a Hypothesis assignment. This will allow for a better reading experience for students, especially those who magnify the contents of their screen for accessibility purposes.
  • Set up Hypothesis readings through Canvas Modules
  • Grade Hypothesis annotations in Canvas
  • If you are using Canvas Files or Groups for any Hypothesis readings, you will need to take additional steps before the assignment works in the new course.
  • Hypothesis FAQs

Resources for students 

Consider sharing the following links in your Canvas course, or point students to this page:

  • Learn the basics of navigating and using Hypothesis
  • Short screen casts show how to highlight, annotate, make page notes, and reply to others’ notes
  • Jazz up your annotations with this deep dive into the editing interface
  • Create stand-out annotations with these five best practices

Hypothesis helps you to

  • Provide a new way for students to discuss class readings
  • Help students consider multiple viewpoints when reading
  • Assist students in close and active reading of texts
  • Encourage students to engage critically with readings

Hypothesis Support

Workshops & Webinars: Hypothesis 101

If you’d like to learn more about Hypothesis and see a demo, register for an upcoming Hypothesis 101 webinar or watch a Hypothesis 101 recording .

Hypothesis Partner Workshops

Each quarter, Hypothesis offers a variety of (typically) 30-minute workshops led by their team. Are you looking for ways to help your students develop their close reading skills and increase their engagement with your course materials? Maybe you’re seeking a more collaborative approach to reading complex texts while building community? Get ideas you can bring back to your courses, students, and colleagues for how to use Hypothesis for social annotation.

Topics for this quarter:

  • Activating annotation in Canvas
  • Using multimedia & tags in annotations
  • Using Hypothesis with small groups
  • Creative ways to use social annotation in your course
  • Show-and-tell participatory workshop

Liquid Margins

Hypothesis hosts a recurring web “show” featuring instructors and staff to talk about collaborative annotation, social learning, and other ways to make knowledge together.

Offered throughout the year

Previous workshop recordings

If you missed any of the Hypothesis partner workshops offered during autumn quarter, you can find recordings on the Hypothesis YouTube channel .

Vendor Help

  • The Hypothesis Knowledge Base includes FAQs, tutorials, how-tos, and troubleshooting tips.
  • Schedule a meeting with Hypothesis Customer Success Specialist Autumn Ottenad for instructional design advice or questions on how to best use Hypothesis in your course.
  • Watch Liquid Margins , the Hypothesis web series, to learn more about how other instructors use collaborative annotations in their course.
  • Email and phone

Hypothesis for Collaborative Web Annotation: Home

Using Hypothesis for Collaborative Annotation in Canvas

Hypothesis is collaborative web annotation software that was integrated with Canvas in Fall 2018, making it easy to bring social annotation into Evergreen programs. In the past two years over a dozen Evergreen programs have used Hypothesis, and use has doubled as we rather abruptly moved into remote online learning in the Covid era. It is very simple to use, and upon request Paul McMillin ([email protected]) can provide a 15-minute demonstration of how Hypothesis works and how you might use it.

Getting Started

Hypothesis is now automatically included as an option in every program and course Canvas site.

Three options for getting started with Hypothesis:

I. Contact Paul McMillin ([email protected]) to set up a 15-30 minute demo/introduction for your faculty team.  Includes examples of previous Evergreen annotation assignments as well as the basic 'how-to'.

II.  AND/OR, go to the "For Faculty" or "For Everyone" pages on this guide.

III.  AND/OR, use the following official Hypothes.is tutorials:

  • How to add Hypothesis as a module item
  • How to add Hypothesis as an assignment
  • How to grade Student Annotations in Canvas  (grading is optional!  But this is still useful for evaluating)
  • Student Guide on how to use Hypothesis

And for insight into how others use annotation:

  • Hypothes.is:  Back to School with Annotation: 10 Ways to Annotate with Students
  • New York Times Learning Network Blog:  Skills and Strategies | Annotating to Engage, Analyze, Connect and Create

And, if you have over an hour to spare and want to geek out on the larger context for Hypothesis along with some demos:  

  • Hypothesis in Canvas vendor demonstration:  Hypothesis in Canvas: Collaborative Annotation as Discussion Forum 2.0

So What Could Go Wrong?

Not much is likely to go wrong.  Hypothesis is easy to learn and simple to use.  

Here is one thing, though. Some texts are not automatically in a good form for annotation with Hypothesis. At a minimum, you need to be working with HTML or PDFs. Paul McMillin can help you figure out which texts will work off the shelf, and which might require some additional work. Some scanning support is available when it is needed. Try to plan in advance to make sure that you will have time to prepare your texts for annotation. And always do a test annotation to confirm that all is well before publishing an annotation assignment for your students.

Why Use Collaborative Web Annotation?

  • Improves seminar discussions
  • Develops close reading skills
  • Student comments are anchored to specific words/phrases in the text, encouraging greater specificity
  • Bridges asynchronous work with synchronous work, improving continuity as we transition from one to the other
  • Convenient evaluation using SpeedGrader in Canvas
  • Students can access their private and all shared comments any time throughout the quarter, always side by side with the text being commented upon
  • Faculty can add annotations to pose questions, provide background information, suggest interpretations, etc.

Here is one good argument for collaborative web annotation, from the creators of Hypothesis:

Let’s be honest, discussion forums are a great idea—we all want students to engage more with their assigned readings and with their classmates. But “discussion” forums fail at precisely what they claim to do: cultivate quality conversation. Collaborative annotation assignments are a better way to encourage students to engage more deeply with course content and with each other. For one, conversations that take place in the margins of readings are more organic, initiated by students themselves about what confuses or intrigues them most. In addition, these annotation discussions are directly connected to texts under study, helping to keep conversation grounded in textual evidence. Using Hypothesis, instructors can make PDFs and web pages hosted in Canvas annotatable. Students can then annotate course readings collaboratively, sharing comments, and replying to each other’s comments. Instructors can also create annotation assignments using Hypothesis so that students submit their annotation “sets” for feedback and grading in Canvas.

I would add to this that students are not only making comments "directly connected to texts"; annotations are directly tied to specific words, phrases, sentences, or paragraphs, encouraging close reading. And when done in a shared online environment, close reading becomes both a private activity (between reader and text) and a collaborative activity (as students read and reply to the annotations of others). As such, collaborative annotation is well suited to our era of remote learning: it helps provide structured activity off-Zoom, while providing a persistent bridge between private reading and engagement with classmates and faculty.

AI-Resistant Assignments? Show Student Thinking and Promote Better Writing with UChicago-Supported Tools

by Michael Hernandez | Feb 8, 2023 | Instructional design , Uncategorized

ATS instructional designer Thomas Keith and digital pedagogy fellow Sarah McDaniel contributed content to this article.

If you’re an instructor who values the idea of writing-to-learn in your pedagogy, recent news of higher-quality AI-generated writing may have you wondering whether this threatens your students’ learning. However, for those who are interested in engaging with student thinking and writing at the more generative stages, ATS can recommend learning activities to promote genuine, spontaneous, and original thinking and writing among your students.

In conjunction with scaffolding and a multi-draft process, we, and others in the field, believe that instructors can keep AI-generated text from compromising the practice of writing, both in service of learning and for assessment purposes. Of course, writing in multiple stages will not be new to many writers and educators, but the current panic over AI-generated writing may serve to remind us of the value this practice has always had.

There’s also significant learning to be had from activities like reflecting on a dialogue that’s taken place in class (more on this later). The work of processing small group discussion in whiteboards, collaborative chat tools, or group annotation should be difficult to outsource to an AI for the foreseeable future, and could be a great option for those looking to use writing to learn without fear of AI’s unwelcome intrusion into their teaching. Furthermore, some institutions have suggested that more authentic rhetorical situations are at least part of the solution. It is worth considering that the situation of writing in a vacuum with no one but the instructor as the audience may be somewhat de-motivating.

To create writing activities that make people less interested in outsourcing their writing to an AI tool, we offer the following teaching tools and their proposed applications below. You’ll note that these technology-based solutions are rooted in the idea of presenting students with more situations in which they interact with each other at a meaningful level, and hopefully they offer a more promising future for learning tasks that move beyond the performative busywork of formulaic writing.

With that in mind, here are several ideas you can try right now that support engagement with writing and that can, hopefully, obviate the use of AI-generated writing in your class.

Avoiding the Essay Trap: Promote Discursive Writing Using Chat Technology

The sections below take up this idea with chat, followed by four related approaches: inviting non-linear dialogue and idea construction using whiteboards; drilling into texts and foregrounding class reading with social annotation; promoting constructive and specific feedback with Canvas peer review; and using Canvas DocViewer for self-annotation and visible reflection.

While discussion board prompts can be a great way to encourage engagement with readings and key concepts, the kind of online discussions many students are used to may be more vulnerable to the pastiche approach that Large Language Models (LLMs) like ChatGPT take to academic writing. If your current prompts and guidelines are soliciting longer student responses that feel like they exist independent of a larger conversation, we have a couple of suggestions to make discussion more engaging, and AI-generated writing less of an issue.

To create ongoing, nuanced discourse, you may want to try the ATS-supported tools Ed Discussion and Ed Chat. The immediacy of interaction generally promoted by collaborative chat tools may more closely mirror the kinds of discussions digital natives (and technology users in general) increasingly are accustomed to having in an online setting. As an alternative to traditional discussion boards, chat can promote shorter and quicker bursts of discourse, and a feeling of spontaneity.

This kind of change in the discussion environment may present a situation where the nuances of conversational context quickly change, as compared to a more static and longer statement of one’s point of view. In short, a real exchange between humans makes it feel more worthwhile to articulate your own ideas. In this situation, students are less likely to turn to an AI tool than if they are simply writing to an essay-like prompt.

For more information about how you might employ these tools, check out our previous posts on EdChat and Ed Discussion . If you prefer the familiarity of Canvas Discussion boards, ATS also offers a workshop on using Canvas Discussion Boards effectively . Discussion boards continue to have real potential as a forum for engaging conversations, particularly when guidelines and prompts promote responses in a nuanced context where students address multiple conversants and ideas at once, in a way that AI is less able to replicate.

Like other options on this list, whiteboard software is a great tool for conversation and idea generation, but it also allows you and your class to spatially relate and rearrange ideas. It’s less linear than shared text documents, it’s spontaneous and discursive, and as a result it may be less compatible with formulaic text generated by an AI. These generative conversations, where differing points of view on the same topic can be arranged and assessed spatially, can potentially be the first part of significant writing, in which a student surveys the range of ideas and begins to stake out a claim. Activities of this type have become more common in the past few years due to the mass shift to online learning, but there are excellent pedagogical antecedents. For an example, see Hilton Smith’s “Chalk Talk” activity , summarized by noted scholar of discussion pedagogy Stephen D. Brookfield (scroll down to #13). More recently, there has also been research on the promise of digital whiteboard technology in the fields of both immunology and social work .

What’s more, whiteboard tools allow you to create just the right amount of structure for your needs in a write-to-learn activity. You can find an excellent example of collaborative annotation using whiteboards from UChicago music professor Olga Sánchez-Kisielewska, presented at the 2022 Symposium for Teaching with Technology.

While non-linear discussion activities like this feel mercifully incompatible with the use of a chatbot, you might be asking how this activity contributes to *real* writing. Although it may seem far removed from the realm of “serious” academic writing, you can cap off an activity like this by asking students to summarize, reflect, and build on that discourse with a more traditional piece of writing. If you’re interested in having students write based on whiteboards, chat-based activities, and even collaborative annotation, you might consider starting with Brookfield’s “Discussion Audit” activity (scroll down to #29).

If you’re interested in trying out whiteboards, you have two options supported by ATS that are free to use: Google Jamboard and Zoom Whiteboards. Google Jamboard offers a basic range of features and a very small learning curve, while Zoom Whiteboards allow you to construct more complex collaborative activities that may require increased setup time. Jamboard can be accessed from your Google Drive , while Zoom Whiteboards can either be opened through the Zoom desktop app or from your account in your web browser. Please note that whiteboard software may not be compatible with screen readers, which can be a barrier to accessibility, so use with that in mind. ATS has a post on Zoom Whiteboards that may be useful as you consider your options.

Similar to dynamic discussion between students, there’s nothing like responding to specific parts of a thought-provoking text to make people feel motivated to give a genuine, thoughtful response in their own words. You can provide annotation prompts to scaffold commentary and even encourage interaction in the margins! See our recent post on Hypothesis for more information.

If a student has gone through the work of articulating their reactions to class reading, it’s more likely that they could build on those to get started on more substantial writing projects for your class, rather than using ideas generated by an LLM like ChatGPT. Again, the idea is to create a writing situation where engaging with the prompt, the text, or one’s classmates is more worthwhile and rewarding (and perhaps even simpler!) than articulating a prompt for ChatGPT. And, as with Ed Discussion and Ed Chat, social annotation encourages students to interact with each other in a genuine and dynamic way.

One suggestion for keeping AI-generated writing out of student submissions has been to break a large writing project into multiple smaller activities, like outlines, rough drafts, peer review, and the like. While this has long been a suggested practice in the world of writing instruction, if two rounds of instructor feedback on one assignment do not seem feasible for you, peer review in Canvas using structured rubrics, or even media feedback like video or audio, could be a promising option. The structure of a carefully tailored rubric that articulates your unique learning goals for the course may even provide additional reassurance against receiving AI-generated feedback. Our posts on rubrics and peer review in Canvas can help you get started.

If this is your first time requiring multiple drafts, you may want to consider configuring your assignments to accept multiple submissions in Canvas.

Finally, those who use Canvas Assignments to accept student writing will already know that SpeedGrader allows you to highlight text and leave comments on specific points of student writing. However, you may not be aware that the tool underlying SpeedGrader, called DocViewer, also allows the student to comment on their own work.

You may consider using this tool to have students explicate the thinking behind their writing or even offer insight about their writing process, a discipline known as metacognition . Whether it’s through in-line annotation or the global comment feature, Canvas allows students to open their rationale as a writer to you as part of the submission process.

Such self-annotation is a time-honored process in the world of portfolio pedagogy, as it’s a great way to help your students grow as writers and thinkers. Influential portfolio pedagogy scholar Kathleen Blake Yancey characterizes similar practices of reflective writing this way: “reflection is dialectical, putting multiple perspectives into play with each other in order to produce insight” (Yancey, 1998). So, if you’re looking for a way not only to encourage original writing from students, but also to help students grow as writers and thinkers through revision, DocViewer may be a useful practical tool for you.

For more information about how to use this tool, please see Canvas’ documentation on DocViewer .

In Conclusion

By drawing attention to the tools discussed above and their applications, we hope to aid instructors who want to make meaningful writing and learning activities that obviate the use of ChatGPT and similar tools.

For additional assistance with teaching tools available to UChicago instructors and ideas about how they can support meaningful student writing activities, please contact ATS or drop by our Office Hours .

Center for Teaching

Teaching with Perusall and Social Annotation – Highlights from a Conversation

Posted by Derek Bruff on Wednesday, September 23, 2020 in Resource .

by Derek Bruff, Director

This summer, hundreds of participants in the Center for Teaching’s Online Course Design Institute had the chance to use a social annotation tool as part of their experience in the institute. My CFT colleagues and I asked participants to read a few articles about online teaching and annotate them collaboratively, first using a tool called Hypothesis and later using a tool called Perusall . Participants highlighted passages in the articles that caught their attention, then added comments to those passages sharing their questions, reflections, and perspectives. Often these comments sparked asynchronous discussion, with participants responding to each other’s comments with thoughts of their own.


The result for many participants was a lively engagement with the readings and with each other—and a great deal of interest in using social annotation tools in their own teaching this fall.

Given this interest, Vanderbilt adopted Perusall for campus use late in the summer. When classes started, I was curious to know how faculty were using Perusall to support student learning and build social presence in their online, face-to-face, and hybrid courses. I convened a Conversation on Teaching on September 15th, inviting faculty to share their experiences teaching with Perusall and to learn from other instructors experimenting with social annotation. We had a lively conversation, and I wanted to share a few highlights here on the blog, including some of the creative and effective ways faculty are teaching with social annotation.

Terry Maroney, professor of law, is teaching an online course this fall called “Actual Innocence” about wrongful convictions. She learned about Perusall during the institute this summer, and she’s using social annotation to help students prepare for their synchronous Zoom sessions. For example, she mentioned one assignment that asked students to compare and contrast a text documenting what turned out to be a false confession with a video recording of the alleged confession itself. Perusall supports collaborative annotation of both texts and videos, which made it a useful tool for this activity. Perusall also has a hashtag feature, which Maroney used to great effect: her students tagged their comments using a defined set of keywords, and Maroney used those hashtags to find patterns in student comments across the two documents. Those patterns then informed the synchronous discussion of the documents that Maroney led during her Zoom class that week.

Meanwhile, Katherine Clements, senior lecturer in chemistry, is using Perusall in her upper-level, seminar-style course on macromolecular chemistry. One of her course goals is for students to read and make sense of chemistry research literature, so she asks her students to do so collaboratively through social annotation. Her class isn’t large, but even 23 students can generate a lot of discussion about challenging journal articles. Clements uses two Perusall features to focus her engagement with student comments: students can flag comments as questions for the instructor, and students can “upvote” other students’ comments as particularly helpful or important. By focusing on flagged questions and comments with lots of upvotes, Clements makes good use of her time reading and responding to annotations.

Over at Peabody, Heather Lefkowitz, lecturer in human and organizational development, does have a big class, with close to 90 students enrolled in her course on talent management. She asks her students to annotate and discuss various kinds of documents, including journal articles, popular writing, videos, and more. Having 90 students annotate a document in a single, shared space can result in too much discussion, making it hard for students to have a voice. Lefkowitz uses Perusall’s groups feature to split her students into smaller groups, with each group annotating its own copy of a given document. This makes it easier for each student to say something interesting and to respond to their peers. Lefkowitz is experimenting with group size (4 students per group? or 6 students?), but she’s realized that having persistent small groups is an important way to make a large class feel small and to build social presence in an online course.

One popular question at last week’s conversation: How much time does it take to read and respond to student annotations? Several faculty using Perusall this fall noted how enthusiastically their students took to it, which was great for student engagement but meant a lot of student comments to scan. Some instructors take Katherine Clements’s approach, focusing on annotations that are tagged as questions for instructors or annotations that generate upvotes or replies. Other instructors compared Perusall to more traditional online discussion boards. “Reviewing annotation is so much easier than anything I’ve done on the discussion board,” one participant said. “It takes about a third of the time, and the students are much more engaged on Perusall.” Terry Maroney noted that her workload was similar in both settings, but that her students’ annotations tend to be more grounded and concrete, while their discussion threads are more synthetic and abstract.

We also discussed ways to help students engage more meaningfully with course materials and with each other in the social annotation space. Annotation is a new skill for many students, who, as novices in a discipline, sometimes struggle to make sense of readings and other documents. Providing students some guidance up front regarding the kinds of moves they might make in their annotations can be helpful. For instance, you might suggest students highlight an important point in a reading and comment on why the point was important. Or they might annotate an argument they feel is weak, either with a critique or a suggestion for strengthening it. Or they might draw a connection between the reading at hand and past readings in the course. Or they might highlight a provocative quotation and invite their peers to respond to it. These moves go beyond the “this is interesting” and “I don’t understand this” and “I agree” annotations that some students leave.

It’s natural to use Perusall with course readings and such, but we also floated some other ways to use social annotation with students.

  • I mentioned that I often have students in my writing seminar read and respond to sample work from past students as a way to help them think about their own writing. I could post a de-identified paper from a former student in Perusall, and have students do this work collaboratively there.
  • Abby Parish, associate professor of nursing, said she was thinking of inviting students to annotate her lecture transcripts. The lectures for her online courses are voice-over-PowerPoint with transcripts, so these would be easy to post in Perusall, providing students a venue for asking questions—and answering each other’s questions.
  • Bohyeong Kim, assistant professor of communication studies, shared that she invited her students to annotate her own syllabus, asking questions about course goals and activities and suggesting topics and readings for later in the semester. Doing so gave her students a chance to practice annotating before later assignments, but also helped Kim better understand her students and their interests in the course.

For more on having students annotate their course syllabus, see this blog post from Remi Kalir, assistant professor of learning design and technology at the University of Colorado-Denver. That blog post is one of several resources I have collected on social annotation, a collection you can find on my Diigo account.

When I mentioned on Twitter that last week’s conversation on social annotation went well, I got more than a few replies! Here are a few thoughts on teaching with social annotation from my Twitter network:

  • From Dan Morrison , former Center for Teaching grad fellow and now an assistant professor of sociology at Abilene Christian University: “I love it. It makes my teacher heart grow 3 sizes. Students who take advantage really seem to learn a lot. I’m in the text, too, commenting and encouraging. When most everyone reads, class time is so different. Richer, deeper, more nuanced.”
  • From Dani Picard , senior lecturer in medicine, health, and society here at Vanderbilt: “Students raved last semester about @Perusall but this semester I’m getting more resistance. Students complaining abt the additional time to read/annotate. BUT their level of understanding is better than w/o annotations.”
  • From Melissa Mallon , director of teaching and learning at the Vanderbilt libraries: “Andy [Wesolek] & I are using @hypothes_is in the Buchanan Fellowship we’re teaching this semester – we’re having students find & annotate articles related to privacy & surveillance and it’s resulted in deeper/more complex synchronous discussions. Yay!”
  • From Susan Hrach , professor of English at Columbus State University: “It feels like a game-changer. My students are spending a _looong_ time interacting with the assigned reading and with each other, which is huge when the text is the primary object of your discipline.”
  • From Ania Kowalik , assistant director at the Rice University Center for Teaching Excellence: “Same here! They keep adding notes to the text after we discussed it in class and even later, as they work on their writing assignments. I’ve never seen this level of interaction with a text. Definitely a game changer for me!”

Vanderbilt instructors interested in getting started with Perusall are encouraged to visit this set-up guide, put together by the Center for Teaching’s Brightspace support team. Perusall integrates well with Brightspace, and the support team is available to answer both technical and pedagogical questions about teaching with Perusall. Just email [email protected] for help.

Tags: Perusall, social annotation, social presence


Ecampus Course Development and Training

Providing inspiration for your online class.


Collaborative Online Annotation Tools for Engaging Students in Readings

Do you ever get the sense that students posting in their online discussions haven’t really engaged with the reading materials for that week? One way to encourage active engagement with course readings is to have students annotate directly in the article or textbook chapter they are assigned. While it is common to see students annotating their paper copies of textbooks or readings, those annotations aren’t easily shared with their peers or instructor. Students could, of course, snap a photo of their handwritten annotations and upload it as a reading assignment task, but that requires additional steps from both the student and the instructor, and there is no interaction with others in the course during that process. However, it is possible to have students annotate their readings completely online, directly in any article on the web or in their ebook textbook. With this process, the annotations can also be seen by others in the course, if desired, so that students can discuss the reading all together or in small groups as they read an article or book chapter online. The benefits of this type of online annotation include active learning, increased student interaction, and greater accountability for engaging with the course materials.

Active Learning

The shift to active learning is a bit like going from watching a soccer game on TV to playing a soccer game. Likewise, reading passively and reading to learn are two different activities. One way to get students actively reading to learn is to ask them to make connections from the course materials to their own lives or society, for example, which they then record as annotations in their readings. Annotation tasks require students to take action and articulate these connections, all without the pressure of a formal assessment. Furthermore, many students arrive at college not knowing how to annotate, so teaching basic annotation practices helps students become more active and effective learners (Wesley, 2012).

Interaction

“Individuals are likely to learn more when they learn with others than when they learn alone” (Weimer, 2012). Discussion board activities are often where interaction with others in an online course takes place. However, rather than having students refer to a particular reading passage in their discussion board activity, they can simply highlight a passage and type their comments about it right there in the article, no discussion board assignment needed. Others in the course can also read participants’ annotations and reply. With some creative assignment design in Canvas, this can also be set up for small groups. Students may find this type of annotation discussion more authentic and efficient than using a discussion board tool to discuss a reading.

[Image: a news article embedded in the assignment, showing annotations made by specific students and a box to reply]

Accountability

A popular way to ensure that students have done the reading is to give them a quiz. However, this is a solitary activity and is higher-stakes than asking students to make targeted annotations throughout a reading. It may make more sense to guide them through a reading with specific annotation tasks. Being explicit about what pieces of the reading students should focus on can help them understand what they need to retain from the reading assignment.

Possible Activities

  • Student-student interaction: Replace a discussion board activity with a collaborative annotation activity where students can annotate the article as they read. Then they can go back later in the week and reply to each other. 
  • Activate prior knowledge: Ask students to include one annotation related to what they already know about this topic.
  • Evaluate sources: Find a pop-science article in your discipline that includes weak support for arguments or claims, for example. Ask students to identify the sources of support in the arguments and challenge the validity of the support. Perhaps they could even be tasked with adding links to reliable sources of support for your discipline in their annotation comments. 

Nuts and Bolts

Two popular annotation tools are Hypothesis and Perusall. I would encourage you to test these out or ask your instructional designer about your needs and whether an annotation tool would be a good fit for your course learning outcomes.

Wesley, C. (2012). Mark It Up. Retrieved from The Chronicle of Higher Education: https://www.chronicle.com/article/Mark-It-Up/135166

Weimer, M. (2012, March 27). Five Key Principles of Active Learning. Retrieved from Faculty Focus: https://www.facultyfocus.com/articles/teaching-and-learning/five-key-principles-of-active-learning/


Montgomery College Pressbooks Network

Social annotation is the activity of reading and thinking together. With Hypothesis for Blackboard, instructors can ensure that their students understand the readings by annotating digital content socially. Imagine a group of your students opening a PDF or a webpage and being able to work together to make meaning of the reading, sharing their responses and ideas about the text or images, and annotating within the margins of the digitally assigned content.

Enroll in one or more engaging workshops that will introduce you to the power of social annotation with Hypothesis in your courses. Discover exciting strategies for leveraging collaborative annotation, fostering student interaction, and ultimately enhancing your students’ academic achievements.

All workshops are 45 minutes and are held on Thursday

9:15 AM – 10:00 AM

The value of social annotation for teaching and learning.

Instructor: Gloria Barron

This workshop aims to equip faculty members with innovative strategies to fully harness the potential of collaborative, interactive social annotation through hypothesis, enhancing student engagement and course content comprehension. During the session, participants will actively explore the integration of hypothesis for collaborative annotation within their unique fields and teaching approaches.

Upon completion, participants will be able to:

  • Develop a comprehensive understanding of employing collaborative annotation to elevate student achievement in their classes.
  • Investigate the successful implementation of social annotation across various courses.
  • Assess the benefits of adopting social annotation as a powerful pedagogical instrument.
  • Gain a deeper understanding of how they can leverage collaborative annotation to enhance student success in their courses.

Register in Workday

4:00 PM – 4:45 PM

Empowering students with social annotation.

Adding Hypothesis as an external tool to readings in Blackboard supports student success by anchoring active discussion directly on course readings, enabling students and instructors to add comments and start conversations in the margins of digital texts. This workshop is an excellent opportunity to learn about the potential of social annotation as a learning tool and discuss creative ways to increase engagement in your courses.

  • Describe how to get started with Hypothesis and feel comfortable creating a graded assignment.
  • Create assignments that require students to read socially and make annotations or replies to other contributors.
  • Gather some fun ideas for expanding the use of collaborative annotation to improve student success.

3:15 PM – 4:00 PM

Annotate your syllabus.

Encouraging your class to annotate the syllabus serves as a gentle introduction to social annotation, providing a platform for students to actively interact with the course material. This initial engagement establishes a precedent for sustained participation throughout the term.

By incorporating this approach, you offer students the opportunity to delve into the syllabus, exchange ideas, and pose questions about the course, setting a tone for ongoing engagement. In this workshop, participants will not only gain insights and guidance on crafting a collaborative syllabus annotation assignment but will also receive a comprehensive list of pedagogical best practices for effective annotation.

  • Create a syllabus annotation assignment using Hypothesis in Blackboard, establishing an initial low-pressure task at the semester’s outset.
  • Initiate the integration of social annotation into your courses, fostering a collaborative environment that encourages active engagement to enhance student success.

1:30 PM – 2:15 PM

Leveraging social annotation in the age of AI.

The advent of cutting-edge technologies, exemplified by innovations like ChatGPT, has ignited a crucial dialogue within education. In this workshop, you will receive guidance on utilizing social annotation to foster authentic, process-oriented engagement with your course materials. Gain insights into best practices for incorporating social annotation with AI tools and learn the practical steps to set up Hypothesis-enabled readings in Blackboard.

  • Explore strategies for effectively working with ChatGPT output
  • Leave with tangible assignment ideas for immediate implementation in your courses.

3:00 PM – 3:45 PM

Introduction to Hypothesis in Blackboard.

In this workshop, we will discuss how instructors use annotation-powered reading to cultivate essential academic skills, including deep reading and persuasive writing, in students. In addition to sharing pedagogical best practices for social annotation, we will offer hands-on demonstrations of Hypothesis in action with course readings on Blackboard. Participants will depart with a comprehensive understanding of seamlessly integrating social annotation into their courses, ultimately leading to enhanced student outcomes.

By the end of this workshop, participants will be able to:

  • Explain the concept and benefits of social annotation through collaborative learning.
  • Become familiar with the functionalities and features of the Hypothesis annotation tool.
  • Create a hypothesis assignment in Blackboard.
  • Begin annotating web pages, PDFs, and other online content effectively.
  • Create clear, concise instructions that foster a constructive and positive annotation experience for students and align with unit or course objectives.

12:15 PM – 1:00 PM

Annotation and AI starter assignments.

This workshop caters to instructors who are keen on incorporating social annotation into their courses but are seeking advice on how to guide students effectively. Participants will explore ideas for annotation starter assignments and receive ready-to-use instructions applicable across various disciplines and modalities. Regardless of the discipline or teaching modality, this workshop offers strategies that can be immediately integrated into your course assignments. Besides sharing pedagogical best practices for collaborative annotation, participants will also gain hands-on experience using Hypothesis with course readings in Blackboard.

  • Create a low-stakes annotation assignment.
  • Begin incorporating collaborative annotation into a course to improve student success.
  • Use ChatGPT output as a text for students to critically review and analyze key course ideas.

12:15 PM – 1:00 PM

Using Hypothesis with small groups.

In large classes, engaging every student can be challenging. Using Hypothesis for social annotation in small groups facilitates more meaningful and collaborative connections with students. This workshop addresses options for using Hypothesis in small groups and explores how social annotation can enhance a collaborative learning environment.

  • Outline various use cases for implementing the Hypothesis small groups feature in your Blackboard course site.
  • Demonstrate the creation of a Hypothesis-enabled reading in your Blackboard course site with small groups enabled.

3:15 PM – 4:00 PM

Grading and feedback for social annotation.

While Hypothesis offers several grading options, it is crucial to incentivize participation. To spark interest in annotation, instructors should offer clear guidelines that reward high-quality contributions. Social annotation provides an ideal format for assessing and promoting continuous learning. Join this session to gather ideas and tools that can elevate your grading and feedback practices to the next level.

  • Identify the foundational components for creating either an analytic or holistic rubric for annotation and establish a framework for delivering effective feedback.


For information or to learn more about social annotation with Hypothesis for Blackboard, send an email to: [email protected]

Spring 2024 ELITE Professional Development Catalog Copyright © 2023 by [email protected] is licensed under a Creative Commons Attribution 4.0 International License , except where otherwise noted.

Tools for Asynchronous Learning

Collaborative Annotation with Hypothesis

This could be helpful as a replacement for class discussion about any text-based media in HTML or PDF format online. Hypothesis allows users to create “groups” for annotation. Students would be able to view each other’s annotations and respond, which could be useful as a surrogate for class discussion of a text.

The goal is to foster asynchronous communication about a text by annotating passages, replying to each other’s commentary, and providing overall “page notes” with tags. Annotation tools like Hypothesis encourage students to engage more directly with the text by annotating specific passages, rather than posting in de-contextualized discussion boards in Canvas.

  • The instructor should create an account at https://web.hypothes.is/start/.
  • When logged in, click on the “Groups” menu link in the upper right-hand corner of your profile page. Select “create new group” at the end of the dropdown.
  • Name and provide a description for the group in the provided text entry boxes
  • Copy the link under “Invite New Members” to share with your students
  • Have students create an account at https://web.hypothes.is/start/.
  • For students using Google Chrome as their browser, have them install the Chrome extension: https://chrome.google.com/webstore/detail/hypothesis-web-pdf-annota/bjfhmglciegochdpefhhlphglcehbmek   
  • For students using other browsers, have them bookmark the Bookmarklet Link found at https://web.hypothes.is/start/ . 
  • If using Chrome : Click on the Hypothesis icon in their plugins menu (upper right hand corner of the browser). This will bring up Hypothesis on the right side of the browser 
  • If using another browser : click on the Hypothesis Bookmarklet, which will bring up Hypothesis on the right side of the browser.
  • Ensure that students click on the dropdown that says “Public” and that they select the group associated with your class (they will only see this option if they are logged in within the bookmarklet/plugin).
  • Once the group is selected, students can highlight text and select the pencil icon, which will give them the ability to annotate the selected text. Students can also reply to each other’s annotations in the Hypothesis sidebar.
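
If you later want to skim or archive a group’s annotations outside the sidebar (for a participation check, for example), Hypothesis also exposes a public web API. Below is a minimal sketch in Python, assuming you have generated a developer API token in your Hypothesis account settings and know the group’s ID (visible in the group’s URL/invite link); the endpoint, parameters, and field names follow the publicly documented search API, but verify them against the current Hypothesis API documentation before relying on this.

```python
# Minimal sketch: pull a private group's annotations via the Hypothesis API
# so an instructor can skim or export them for a participation check.
# TOKEN and GROUP_ID are hypothetical placeholders; field names assume the
# public v1 search API and should be checked against the current docs.
import requests

API_URL = "https://api.hypothes.is/api/search"
TOKEN = "YOUR_DEVELOPER_TOKEN"   # placeholder; generate a real token in your account settings
GROUP_ID = "YOUR_GROUP_ID"       # placeholder; the group id appears in the group's URL

resp = requests.get(
    API_URL,
    headers={"Authorization": f"Bearer {TOKEN}"},
    params={"group": GROUP_ID, "limit": 100, "sort": "created", "order": "asc"},
)
resp.raise_for_status()

# Print one line per annotation: who wrote it, on which page, and a text preview.
for row in resp.json().get("rows", []):
    user = row.get("user", "").replace("acct:", "")
    text = row.get("text", "").replace("\n", " ")
    print(f"{user}  |  {row.get('uri', '')}\n    {text[:120]}")
```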

Notes on Hypothesis

  • Hypothesis is free and open source, which means that students do not need to pay to use it. 
  • Hypothesis allows users to create highlights that are only visible to them, for their personal use.
  • Hypothesis comments allow users to insert formulae using LaTeX, so it can also be used in STEM classes as a way to practice this formatting language (see the example below).
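
As a quick illustration of that last point, here is the kind of formula-bearing comment a student might type into the Hypothesis editor. The inline \( ... \) and display $$ ... $$ delimiters are the ones Hypothesis’ editor has conventionally accepted for LaTeX math; treat them as an assumption and check the current Hypothesis help pages for the supported syntax.

```latex
% Sample annotation text a student might paste into the Hypothesis comment box.
% Inline math between \( ... \); display math between $$ ... $$ (assumed delimiters).
This step applies the quotient rule: for \( f(x) = \frac{g(x)}{h(x)} \),

$$ f'(x) = \frac{g'(x)\,h(x) - g(x)\,h'(x)}{h(x)^2} $$

so the sign of \( f'(x) \) depends only on the numerator when \( h(x) > 0 \).
```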

© 2022 Florida State University Libraries

Perusall

Increase student engagement.

Would you like your students to come to class ready to participate?

With Perusall, an online social annotation platform, you can increase student engagement, collaboration, and community within your course. Plus, Perusall works with your favorite course content including books, articles, web pages, videos, podcasts, and images. 

In 2022, a million students engaged with classmates, instructors, and course content within Perusall.

Perusall has empowered 3 million+ students to:

  • Engage with texts, images, video, audio, and websites
  • Work collaboratively outside the classroom
  • Have in-depth discussions with peers
  • Take ownership of their own learning

More engagement

Used by over 3,000,000 students in over 90 countries, Perusall is a social learning platform that instantly turns coursework into a social experience. When students interact together on assignments within Perusall, they are intrinsically motivated to perform better—in turn, honing their critical thinking skills and developing a deeper understanding of the material.

More collaboration

Perusall makes learning more engaging while helping students deepen their understanding of the material. Students can ask questions if they don't understand a concept and receive input from their peers. They may also comment on classmates' posts and express their opinions synchronously or asynchronously. This social engagement with the course content prepares students for class.

More connections

Bring the interactivity of a small seminar to large lecture courses without sacrificing efficiency. Instructors may use Perusall in classes from 5 to 1,000+ students across all disciplines, building a sense of community from the start of the term. Perusall promotes social learning for in-person, blended, or online courses. Even if the course modality shifts, instructors can continue their classroom community with Perusall.

More success stories

An institutional license on your campus guarantees data security, customer support, and platform performance. Pricing is determined by the Perusall activity on your campus. There is no charge for our LTI 1.3 integration with your LMS. In addition, we follow industry best practices for ensuring the security of user data and have our security and accessibility audited by third-party firms. Institutional licenses provide detailed descriptions of our FERPA, PIPEDA, GDPR, and WCAG 2.1 compliance.

Any content, any format

Try it today with your class.

Setting up your course is fast, easy, and free.
