Redesigning the Remote Critique Process

By Rochelle Dai, Sienna Gonzalez, Morgan Creek, and Gennifer Hom

Problem Statement

Our problem statement was, “How might we create a comfortable and beneficial critique environment for design students to receive feedback on their projects in a remote setting?”

Our solution is a platform that increases transparency between participants and centralizes feedback for future reference.

Final Product

Video by Rochelle Dai

Our Team

Timeline

Our user research continued during our ideation process to allow adequate time for researching our main user groups. A key component of our process was conducting user testing between our lo-fi iterations.

User Research

We surveyed and interviewed 20 students and professors about their experience with remote critique and found 3 common pain points:

A lack of personal connections between students and between professors and students

“I don’t feel very connected to my professor and peers…We can hide behind our screens.” — Student

Frequent awkward silences where students were waiting for one another to speak

“It feels awkward sitting there in silence, especially when my camera is on while waiting for at least one person to give me a critique.” —Student

Disorganized documentation, with feedback scattered across different documents and locations, making it difficult to implement in future work

“Students never read the written critiques and there is no change in work. It is a wasted opportunity.” —Design Professor

Brainstorming

Affinity mapping exercise done through Figma

User Research: Synthesis

Dissecting the Data

Example 1: Students have varying comfort levels regarding microphone use.

One student felt that using the mic was nerve-wracking, whereas another disliked it when the audience muted themselves.

“More people can contribute by typing in the chat. It’s not as nerve wracking as speaking in front of everyone.” — Student

“I dislike that people just mute themselves and turn their cameras off.” — Student

Example 2: Each professor has a different class structure, which affects the efficiency of critique. As a result, students reported varying experiences with how efficiently critique sessions ran.

“It’s a lot quicker because we do it in breakout sessions and get 1-on-1 critiques with the professor.” — Student

“It would be great if professors had a time limit because these design critiques take way too long.” — Student

Example 3: We also found that students felt the text-based format of critique had both pros and cons.

Some students found this format beneficial when receiving feedback, but more time-consuming when providing it.

“It’s in written form so you can easily refer back to them and keep them in your notes.” — Student

“It can take more […]. I have to type things out and circle them.”—Student

Ideation: Sketches

Examples of a few sketches we did to solve some of the problems from our data and brainstorming

After brainstorming problem statements for each step of the critique journey, we created some rough sketches of solutions to the various problems.

We began to ideate possible designs to help alleviate the critical pain points found during our research. Each team member explored solutions to a couple of different problem statements and depicted them with digital sketches.

Before Critique

Low Fidelity Prototype

To address the lack of personal connection, we implemented an ice breaker question to create a friendly atmosphere and foster communication before critique, similar to how we used to chat with our peers before in-person critiques.

We also let users choose their presentation time to reduce ambiguity and anxiety.
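As a concrete illustration, a presentation slot could be claimed with a small helper like the TypeScript sketch below; the `TimeSlot` shape and `claimSlot` function are hypothetical, not our actual implementation.

```typescript
interface TimeSlot {
  start: string; // e.g. "10:05"
  claimedBy?: string; // undefined while the slot is still open
}

// Claim an open presentation slot for a student. Returns false if the slot
// is already taken, so the presentation order is never ambiguous.
function claimSlot(slots: TimeSlot[], start: string, studentId: string): boolean {
  const slot = slots.find((s) => s.start === start && !s.claimedBy);
  if (!slot) return false;
  slot.claimedBy = studentId;
  return true;
}

// Example: two students trying to claim the same slot.
const slots: TimeSlot[] = [{ start: "10:05" }, { start: "10:15" }];
claimSlot(slots, "10:05", "student-a"); // true
claimSlot(slots, "10:05", "student-b"); // false, already claimed
```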

User Testing

“I’m not sure where to look or what to do when I see this screen.”—User

High Fidelity Prototype

  1. Selecting a presentation time
  2. Answering the ice breaker question
High fidelity prototype of the user flow for entering a critique session

During Critique

Low Fidelity Prototype

“It feels awkward sitting there in silence, especially when my camera is on while waiting for at least one person to give me a critique.”

First iteration of the emotes interaction during our low fidelity prototype phase

To combat this, we decided to implement emotes, which would let the audience quickly show the presenter that they are actively engaged during the critique. The emotes would be visible to all attendees, similar to Instagram Live. We hoped this would help offset longer periods of silence in a fun and encouraging way.
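As a rough sketch of how the broadcast might work under the hood, the TypeScript below relays emotes to every attendee over a WebSocket; the endpoint URL, message shape, and emote names are all hypothetical assumptions.

```typescript
type Emote = "clap" | "heart" | "sparkle";

interface EmoteMessage {
  kind: "emote";
  emote: Emote;
  senderId: string;
  sentAt: number; // Unix epoch milliseconds
}

// Hypothetical session endpoint shared by all attendees.
const socket = new WebSocket("wss://critique.example/session/123");

// Send an emote; the server would relay it to every attendee, Instagram Live-style.
function sendEmote(emote: Emote, senderId: string): void {
  const message: EmoteMessage = { kind: "emote", emote, senderId, sentAt: Date.now() };
  socket.send(JSON.stringify(message));
}

// Every client, including the presenter, renders incoming emotes.
socket.addEventListener("message", (event) => {
  const message = JSON.parse(event.data) as EmoteMessage;
  if (message.kind === "emote") {
    console.log(`${message.senderId} reacted with ${message.emote}`);
  }
});
```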

User Testing Round 1

“I would feel bad if emotes were visible to everyone. I would compare myself to other students and it would make me self-conscious of my work.”—User

Mid-Fidelity Prototype

Second iteration of our emotes feature after attempting to address user feedback

From here, we wanted to shift emotes from a performative action to a static tool: a sticker that students can place onto the artwork to highlight their favorite areas.

Only the presenter would be able to view these stickers—we felt this would be a better way for students to receive more specific feedback without having to compare emotes between peers.
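A minimal TypeScript sketch of this presenter-only visibility rule; the `Sticker` and `SessionState` types are illustrative assumptions, not our actual data model.

```typescript
// An emote pinned to a position on the artwork, expressed as fractions
// of the artwork's width and height so it scales with the canvas.
interface Sticker {
  emote: string;
  x: number; // 0 to 1
  y: number; // 0 to 1
  placedBy: string;
}

interface SessionState {
  presenterId: string;
  stickers: Sticker[];
}

// Only the presenter sees the placed stickers; everyone else sees none,
// so students never compare emote counts with their peers.
function visibleStickers(state: SessionState, viewerId: string): Sticker[] {
  return viewerId === state.presenterId ? state.stickers : [];
}
```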

User Testing Round 2

“I saw the emotes as a reaction tool and it wasn’t transparent to me that I can click and drag them as stickers.”—User

High Fidelity Prototype

Final iteration of the emotes feature

We simplified the emotes so that students react to the overall piece rather than to specific areas.

To address user feedback, we made the emotes more subtle and provided the option to hide the chat and emotes to make them less distracting. We limited emotes to positive options because we wanted to encourage users to provide any constructive feedback in the form of written critique, and because the emotes are intended to show positive engagement with the students’ work.

After Critique

Low Fidelity Prototype

We centralized all of the feedback a student received, such as the annotations, chat, and professor's critique, into one document that students could easily reference after class.

Low fidelity prototype of what the final documentation of feedback could look like
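To sketch the underlying idea, the feedback sources could be merged into a single record like the hypothetical `FeedbackDocument` below; none of these type names come from our actual prototype.

```typescript
interface Annotation { author: string; note: string; }
interface ChatMessage { author: string; text: string; sentAt: number; }

// One centralized document per presenter, gathering every feedback source.
interface FeedbackDocument {
  presenter: string;
  annotations: Annotation[];
  chat: ChatMessage[];
  professorCritique: string;
}

// Compile everything a presenter received into one easily referenced record,
// with chat messages sorted chronologically.
function compileFeedback(
  presenter: string,
  annotations: Annotation[],
  chat: ChatMessage[],
  professorCritique: string,
): FeedbackDocument {
  return {
    presenter,
    annotations,
    chat: [...chat].sort((a, b) => a.sentAt - b.sentAt),
    professorCritique,
  };
}
```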

User Testing

“What if there are multiple annotations in the same area?”—User

High Fidelity Prototype

To combat this, we combined annotations from the same area and added an option for users to filter between text and hand-drawn annotations. This is a feature that could benefit from more user testing, especially for larger class sizes.
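To make the grouping logic concrete, here is a TypeScript sketch that merges annotations whose positions fall within a small radius of one another and filters by annotation type; the radius value and type names are assumptions for illustration.

```typescript
type AnnotationKind = "text" | "drawing";

interface PlacedAnnotation {
  kind: AnnotationKind;
  x: number; // position as a fraction of the artwork's width
  y: number; // position as a fraction of the artwork's height
  content: string;
}

// Collapse annotations that sit within `radius` of a group's first annotation,
// so overlapping notes in the same area appear as one combined marker.
function groupByArea(annotations: PlacedAnnotation[], radius = 0.05): PlacedAnnotation[][] {
  const groups: { anchor: PlacedAnnotation; members: PlacedAnnotation[] }[] = [];
  for (const a of annotations) {
    const group = groups.find(
      (g) => Math.hypot(g.anchor.x - a.x, g.anchor.y - a.y) <= radius,
    );
    if (group) group.members.push(a);
    else groups.push({ anchor: a, members: [a] });
  }
  return groups.map((g) => g.members);
}

// Filter between text and hand-drawn annotations, as in the final prototype.
function filterByKind(annotations: PlacedAnnotation[], kind: AnnotationKind): PlacedAnnotation[] {
  return annotations.filter((a) => a.kind === kind);
}
```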

Final Product

Reflection

Future Changes

What We Learned

  • Designing with the user flow in mind
  • How to incorporate diverse user feedback into a design solution

Conclusion

By integrating user testing between our lo-fi, mid-fi, and hi-fi prototypes, we were able to further refine our solution and gain a better understanding of how users feel during the critique process.

Finally, if we had more time, we would want to further test our product and explore the professors’ experience with conducting in-class critique.

Overall, our team learned how to design with the user flow and emotions in mind and incorporate diverse feedback. Our team won the Most Innovative UX award and the Audience Choice award on our final presentation day.

