To Grade or Not to Grade: A Case for Prioritizing Feedback

By Suzanne Miller | 2 May, 2023

In their study of student attitudes toward social annotation in the classroom at Simon Fraser University, Alice Fleerackers, Juan Pablo Alperin, Esteban Morales, and Remi Kalir observe that students report a range of motivating factors for their participation. Some students felt they learned better; others enjoyed the social aspect of the experience. But the findings are clear on one point: assessment practices play a key role, with graded elements cited by most students as a deciding factor in participation. As Fleerackers, Alperin, Morales, and Kalir write, "Formal incentives inspire students to annotate."

Fortunately for educators, collaborative annotation using Hypothesis provides the ideal space to formatively assess and deliver timely feedback on learning. Many instructors like to keep it simple with standard directions across course readings. Some choose to mix it up based on the complexity or type of reading, or the point in the semester. Others prefer to use annotation replies as feedback, and still others reserve the margin for peer-to-peer collaboration. Any way you look at it, the multitude of options offers plenty of instructional possibilities.

Below are four ways that grading and feedback on Hypothesis can help students take more ownership of their learning within the margins.

Simple annotation grading rubrics or participation grading

At the beginning of the semester, when you are looking for low-stakes entry points for your students to become familiar with Hypothesis and its functionality, consider creating a single-point rubric with open-ended directions that encourage exploration. In this phase, it is still important to incentivize participation with an assigned grade tied to specific expectations, but it is probably more important to add timely feedback using LMS-specific grading tools. That way, students have room to grow their thinking routines and build their social annotation expertise.

More sophisticated annotation grading rubrics

Structured guidelines provide students with a framework for social annotation. While this structure can be simplified, it can also be gradually removed as experience grows. During this time, instructors can experiment with different types of directions or rubrics that fit the assignment. An analytic-style rubric identifies specific elements and criteria, focusing on a set of discrete skills that are each measured on their own (see an example of an instructor-created analytic rubric). This type of grading allows for more detailed feedback and can be easier to give on the back end, but it requires more development time on the front end.

Holistic rubrics and grading consist of a single scale on which all evaluation criteria are considered together. The rater matches an entire piece of student work to a single description on the scale. This type of grading and subsequent feedback can be more reliable across graders, and it might be an ideal starting place for faculty across departments or disciplines looking to roll out a consistent approach to social annotation in multiple courses. All this to say: no instructor should feel limited in their approach to grading or think they need to do it the same way every time they assign with Hypothesis. It is important to adapt as course requirements and student needs evolve over time.

Instructor annotations as feedback

As much as an instructor can provide grades and private feedback through their LMS in a Hypothesis assignment, it is vital to remember that public annotation replies are equally powerful. Modeling quality annotations and open-ended questions for student response provides necessary scaffolding, but instructors also carry a responsibility to reply to student annotations as frequently as possible. Replying with feedback that is not always instructive is a valuable part of helping students become naturally motivated to annotate beyond the grade. This method closely resembles the studio approach to learning that is prevalent in several artistic disciplines. The more annotators see an emphasis on different types of feedback, the more likely they are to internalize it as an ongoing practice of their own.

To build on the studio/workshop approach, it is valuable to know there is an ART to providing quality feedback on social annotation conversations. ART is an acronym for Affirming, Reminding, and Tasking, an easy framework for grounding feedback in an actionable context. First, affirming (letting a student know where they have succeeded in accomplishing an annotation goal) sets the scene for the instructive comments that follow. Then, reminding the student of the initial objective offers an opportunity to refocus on the purpose. Finally, tasking segues into the specific suggestions that will help improve achievement on future annotation tasks. It may be unnecessary to include all of the ART elements in a single piece of feedback, but instructors who sprinkle a bit of each across multiple assignments will reap the benefits of students treating feedback as a constant part of their practice.

Peer-to-peer annotation as feedback

To promote a collaborative space centered on joint knowledge production, students need to see the value in giving and receiving constructive feedback from each other in threaded annotation discussions. Accordingly, the directions and expectations for the assignment should prioritize replies as a graded task. Once students are hooked on the benefits, such as improved analysis of the source text or exposure to new, diverse perspectives, they will be more intrinsically motivated to return to a given assignment multiple times, becoming true collaborators in making meaning and thinking critically through course materials.

