Monthly's peer feedback system
Before diving in, I wanted to give a big shoutout to the team that helped every step of the way, from ideation through flawless execution.
Giving high quality, constructive feedback is a critical part of any learning experience, but it's also genuinely hard to do well. At Monthly, we initially designed a peer feedback project that paired a student with four peers who had submitted projects, and asked them to go down the list and leave feedback on each one. While well intentioned, we soon realized that many people weren't receiving helpful feedback; instead they got short phrases like "sounds great" or "looks good." There's absolutely nothing wrong with offering support, of course, but when students are paying upwards of $300 for an online learning experience to level up their creative craft, receiving valuable feedback is a core part of that process.
As the design lead, I redesigned our peer feedback system to give students better guidance and structure for how to give feedback, leading to a 38% increase in average feedback length and a 58% increase in peer feedback comment submissions.
Some of these students could probably articulate their frustrations even better than I can:
Require more constructive feedback on projects from peers before they get any feedback—meaning I will not get feedback until I give the feedback I was assigned to do. Additionally, having some sort of rubric people can grade things and give comments about certain aspects of your song that you need more work or are great. The peer structure is a great idea but there needs to be more incentive for people to give feedback so you get it and when you do get it, it’s constructive. Some people are just too afraid to give adequate feedback because they don’t want to be mean.
It would be good to fine-tune the peer feedback system. I always got at least one response that was “sounds good” or similar. Someone set up a Facebook group for people taking the course and I actually got more helpful feedback there than on Monthly itself, probably because the group was full of people similarly frustrated with poor peer feedback!
The peer group aspect of the class didn't pan out quite how I expected. I tried to give feedback as expected (at least two positive comments and 1 suggestion/question for improvement) but most of the feedback I received was, "Can't wait to see how it turns out" or "That's cool". I even fished a bit for feedback by saying I didn't know how I'd do something, but didn't really get takers.
We saw plenty of complaints like this echoed in both our in-class sentiment surveys and in customer support conversations.
It became clear that the key objective was to improve the quality of peer feedback. We would measure this through the second-order peer interaction rate (the rate at which feedback spurs replies or further comments), which we hoped would in turn lead to higher class completion.
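To make the metric concrete, here's a minimal sketch of how a second-order interaction rate could be computed. The function name and data shape are hypothetical illustrations, not Monthly's actual schema or analytics code:

```python
# Hypothetical sketch: the second-order peer interaction rate is the
# share of feedback comments that receive at least one reply.
# Data shape is illustrative, not Monthly's actual schema.

def second_order_interaction_rate(comments):
    """comments: list of dicts like {"id": ..., "reply_count": int}."""
    if not comments:
        return 0.0
    with_replies = sum(1 for c in comments if c["reply_count"] > 0)
    return with_replies / len(comments)

comments = [
    {"id": 1, "reply_count": 2},  # feedback that sparked a discussion
    {"id": 2, "reply_count": 0},  # "sounds great", no follow-up
    {"id": 3, "reply_count": 1},
    {"id": 4, "reply_count": 0},
]
print(second_order_interaction_rate(comments))  # 0.5
```

A comment like "sounds great" rarely invites a reply, which is exactly why this rate works as a proxy for feedback quality.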
Pushing focus, context, and guidance
The previous version of peer feedback stacked three or four existing peer feed activity cards on top of one another. While this sufficed as an MVP, it didn't contextualize the space any differently than simply going into the peer feed and writing feedback there. A key hypothesis that drove the UX was that creating more focus and visual emphasis on feedback would give students more space to think and write good feedback.
Additionally, the feedback prompts and description used to live at the top of the page, well above the actual student projects. We moved the feedback guidance directly above the text area to keep it top of mind, leading to higher quality feedback.
Another key hypothesis was that context about who the student is, and what their past projects have looked like, helps feedback-givers understand which elements are new, giving them a clearer idea of what to comment on. I introduced "past projects" and "about" accordions, which let students place the project within the greater arc of that student's creative journey, leading to higher quality, more personal feedback.
Giving high quality feedback is difficult, and often requires a certain level of trust and vulnerability. We wanted to reinforce the behavior by recognizing it as an accomplishment; by providing a moment of delight, we hoped that giving constructive feedback would become easier and easier.
We used Optimizely to run the redesign as an experiment against the control (the existing feedback feature), which let us measure the feature's success.
We saw a significant increase in the average length of feedback in the experiment group compared to control. We can't see qualitatively what kind of feedback is driving the increase, but our working assumption is that longer feedback is probably more substantive, and thus more helpful for students.
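As a back-of-envelope illustration of the comparison (the sample comments below are made up, and the real analysis ran inside Optimizely), the length lift between groups can be computed like this:

```python
# Hypothetical sketch: compare average feedback length between control
# and experiment groups. Sample comments are invented for illustration;
# the actual measurement was done in Optimizely.

def average_length(feedback):
    """Mean character count of a list of feedback comments."""
    return sum(len(f) for f in feedback) / len(feedback)

control = ["sounds great", "looks good", "nice work so far"]
experiment = [
    "The bridge feels rushed; maybe let the pad ring out a bar longer?",
    "Love the vocal layering, but the low end could use a high-pass.",
    "The intro hooks me, though the drop loses energy around 0:45.",
]

# Relative lift in average feedback length, experiment over control
lift = (average_length(experiment) - average_length(control)) / average_length(control)
print(f"{lift:.0%}")
```

This is the same shape of calculation behind the 38% figure above: a simple relative change in mean feedback length between the two groups.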
We also saw a massive increase in the completion rate for the peer feedback assignment, which means more students giving and receiving feedback from each other!