Reflections on Teaching Remote UX Design

Ian Arawjo
10 min read · Sep 7, 2020


Zoom, zoom, zoom. (Source: undraw.co)

It’s pretty hard to run project-based, hands-on courses right now. For people in computing, classes that involve prototyping, lab studies, electronics, or other hands-on work are at a disadvantage. Some professors are changing or removing material; others are shipping “lunch boxes” of materials to students. What’s clear is that few people know how best to manage remote constraints.

Over the summer, Matt Law, Mahima Rao and I faced these challenges in teaching a 6-week course called HCI Design x AI, which covers intro UX design and methods. Here we share some of our strategies, workarounds, and challenges for those grappling with how to teach HCI methods remotely.

This is Part 1 of a two-part post on teaching intro HCI Design x AI. If you’re interested in reflections on the AI component of the course, wait for Part 2!

Overview

HCI Design is a core class at Cornell Information Science taught by Professor Gilly Leshed. Introducing the human-centered design (HCD) process and UX design principles, the class covers fundamentals like affordances and mental models, contextual interviews, sketching and storyboarding, paper prototyping, and usability studies. Throughout the class, groups of 3–4 students follow the HCD process, recruit and interview potential users, analyze data, ideate, prototype a design, and return to participants to get feedback. In our version of the course, we integrated some recent work in UX-AI convergence and AI as a design material. There were nine group project teams and 34 students.

Planning and Preparation

We originally envisioned the class as just an online version of the in-person class, with daily lectures and discussions held live. However, advice from the Center for Teaching Innovation at Cornell nudged us towards a more asynchronous model. Anticipating possible timezone issues and other personal conflicts, we decided to poll students before proceeding.

We found that students came from diverse backgrounds, with majors from Architecture to Communication to Computer Science to Engineering. Time zones spanned the globe, with about a third of students working outside the U.S. Many students had internships, classes, or family or work obligations during the daytime.

To accommodate these factors, we shifted primarily to asynchronous instruction and backed away from more technical ML content. Our new goal was to post 10–20 minute videos near-daily. Each video ended with a “what should I do next?” prompt: a quiz, discussion, or other activity. We chose Canvas Modules to release videos in sequence and consolidate all assignments, announcements, and other resources onto one platform. For questions to course staff, we used CampusWire.

Of course, not all content could be taught asynchronously. Aspects of the UX course that proved particularly challenging to adapt were:

  1. Peer collaboration and feedback
  2. Contextual inquiry
  3. Paper prototype Wizard of Oz methods
  4. Usability studies

We describe how we approached each of these topics in turn.

1. Remote Peer Collaboration and Feedback

Our first challenge was how to facilitate peer collaboration. In physical classes, face-to-face collaboration (e.g., post-it notes, chatting, showing designs to get feedback) is key. Right now, Zoom is all the rage for supporting face-to-face interaction, but it has problems:

  • It’s hard and clunky to set up calls in an informal, ad-hoc fashion
  • Other people in a class have no visibility into whether a Zoom call is happening
  • As an instructor, you can’t “drop by” breakout rooms or students’ Zoom calls

We wanted a more informal environment where students could drop by and ask each other for help. And, because students were scattered across the globe, this couldn’t be “scheduled” at a set time; it had to be available 24/7. We considered Canvas, but while we enjoy it, it did not integrate well with ad-hoc coordination through texting, audio, or video calls. We also considered CampusWire messaging, but it felt too experimental.

Workaround: We settled on Discord, which suited many of our needs. Discord supports both text channels and persistent “breakout rooms” (audio/video channels), a combination that enables ad-hoc group formation and collaboration. We set up a number of persistent breakout rooms along with a #lobby text channel. Students would post in the #lobby if they were looking for a partner, then move off into a breakout room to video chat. Instructors could see who was online and meeting, and hop into their room to discuss. We encouraged the use of Figma for affinity diagramming activities.
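(As a side note, a channel layout like this can also be scripted rather than built by hand through Discord’s interface. Below is a minimal, hypothetical sketch using the discord.py library; it assumes a bot invited to the server with the “Manage Channels” permission, and the token, channel names, and room count are placeholders rather than details of our actual setup.)

    # Minimal sketch (hypothetical): create a #lobby text channel plus a set of
    # persistent audio/video "breakout rooms" on a Discord server via discord.py.
    import discord

    TOKEN = "YOUR_BOT_TOKEN"   # placeholder: a bot token with the Manage Channels permission
    NUM_BREAKOUT_ROOMS = 6     # placeholder: however many rooms the class needs

    intents = discord.Intents.default()
    client = discord.Client(intents=intents)

    @client.event
    async def on_ready():
        for guild in client.guilds:
            # Text channel where students post when looking for a partner
            await guild.create_text_channel("lobby")
            # Persistent voice channels that act as drop-in breakout rooms
            category = await guild.create_category("Breakout Rooms")
            for i in range(1, NUM_BREAKOUT_ROOMS + 1):
                await guild.create_voice_channel(f"breakout-{i}", category=category)
        await client.close()  # one-time setup script, so disconnect when done

    client.run(TOKEN)

However the channels are created, the key design choice is persistence: the rooms are always there, so students can meet ad hoc without anyone having to generate and circulate a meeting link first.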

How it worked in practice: We found that students formed groups quite naturally and independently through Discord, and we believe the approach might work even better for larger classes. While we scheduled specific time slots for pairing up, students also found partners outside these times.

Our Discord server, with #channels and audio/video “breakout rooms.”

Still, there were challenges. Some students would find a partner on Discord but then leave to chat on Zoom, where they faced technical difficulties. And later in the course, some students seemed to pair up and stick with friends rather than posting to the public Discord. We hypothesize that part of this flight from Discord is due to our choice of CampusWire for questions to course staff. We had chosen CampusWire to work around the sequential nature of Discord text chat, avoid bombarding staff emails, and consolidate Q&A. But while some students were active on CampusWire, others seemed to ignore it, and we ended up mirroring some important decisions onto the Discord channel to ensure everyone received them. In retrospect, letting students submit questions on Discord may have driven more engagement with that platform and kept answers in one place. A few students even contacted us through direct messages on Discord.

We also came away thinking Discord may be especially well-suited for open office hours and live critique sessions. We began to realize that one of the greatest challenges of remote classes is losing the direct, face-to-face “corrections” that occur when TAs walk around a classroom. In person, other students overhear TAs’ corrections and adjust their own expectations, or share this knowledge among themselves. One suggestion is to have TAs present on Discord for set periods throughout the day, not to teach, but so that students working through a hands-on activity can ask for feedback.

2. Remote “Contextual” Interviews

If contextual inquiry could be done well remotely, big design firms would have no need to fly UX researchers across the world. We knew going in that teaching contextual inquiry under remote conditions would be fraught.

An obvious solution is to encourage participants to video call or FaceTime from a relevant location. However, participants could have a variety of setups, levels of technical know-how, and internet access. What’s more, given the pandemic conditions, they might be away from, or not have safe access to, the relevant context for the interview.

Workaround: Our strategy was to teach a method we called “data elicitation.” Like diary studies or cultural probes, students could ask participants to collect and bring some data to the interview. This could be photos, video (e.g., a tour of a relevant location), or even a drawing. The aim was to elicit memories and help the participant feel embodied in the relevant context through imagination. Our goal was not to prescribe a solution for all projects, but to encourage creativity in how teams approached the constraints of remote work.

How it worked in practice: All project groups planned video calls, and 7/9 groups also used data elicitation. The elicited photos, sketches, or videos were either sent in beforehand or referred to during the interview. For instance, one team asked participants to sketch the floor layout of an apartment they had recently moved out of, and used this along with a set of pre-made photos to jog memories about which items they had thrown out, sold, or kept, and why. This team found the method effective for their particular project. Other teams, however, focused more on verbal responses (a challenge that also occurs in the in-person version of the course). Our takeaway is that data elicitation seemed much better than audio-only calls, but was still a challenge to explain to students unfamiliar with these methods.

Data elicitation could also augment situations where groups resorted to audio-only calls to accommodate the comfort of some participants. These groups asked participants to describe their setting (such as sitting at a kitchen table with sunlight pouring in), or to send photos by email.

3. Remote Paper Prototypes and Wizard of Oz

Teaching paper prototyping and Wizard of Oz studies in a remote setting poses several challenges. The most obvious is the difficulty of eliciting direct interaction between a participant and a prototype without the ability to run a study in person. How can a participant “touch” the paper prototype? While this gap might be bridged by higher-fidelity digital prototypes, we did not want to gloss over teaching the creative value of low-fidelity prototyping.

Workaround: We came up with a setup where the Wizard (the person controlling the paper prototype) would share a video stream of the prototype with the participant, who in turn would share their screen with the Wizard and the other experimenters. Thus, the Wizard could watch the user “interact” with the video stream of the prototype, clicking or swiping on the paper elements, and update the prototype as they did so.

We originally tried to do this with Twitch but found the latency unacceptable. The most responsive setup we found was actually to have everyone on a Zoom call, with the participant screen-sharing a Google Hangouts stream of the prototype broadcast from the Wizard’s mobile phone.

Here’s a setup guide and video for anyone interested in this idea.

Example of a remote setup for paper prototyping, using a Hangouts call within a Zoom call.

How it worked in practice: To our surprise, no students mentioned any difficulties enacting this setup (barring some of their participants having the aforementioned issues with video calls). It is worth noting, however, that this solution is mostly targeted at screen-based prototypes. We did have one group that tried to prototype a non-screen interface, with physical interactions between multiple components. This group found that a direct video call, with the participant talking the Wizard through their interactions with the prototype, was more effective.

4. Remote Usability Testing

The in-person class taught usability testing through in-person observation. For instance, the note-taker would jot down which screens the participant moved through in a Figma prototype or a more developed paper prototype, where they had trouble, how many “mis-clicks” they made, how long a task took, and so on.

An example Maze.design task for a Coffee Ordering app.

Workaround: This challenge was more straightforward to solve, given the availability of tools for remote testing. We considered asking participants to screen-share their navigation through a prototype, but primarily encouraged students to use Maze.design with Figma prototypes. Maze is a tool that takes a Figma prototype and adds support for data collection, surveys, and task design. The Maze trial supports 10 “blocks,” which we found was enough to set up 3–4 tasks and ask a few survey questions during the study. We encourage those running remote UX classes to check out Maze!

Some Additional Reflections

We sent out an anonymous feedback survey to students over halfway through the course. The most common criticism was workload. While part of this may have been a mismatch of expectations, we also released too much content on some days, with multiple exercises and lectures. To paraphrase one student, “it felt overwhelming to open up the class site and see tons of new exercises and videos.”

To help students feel less overwhelmed, we recommend limiting asynchronous content to one exercise per day. If an exercise has multiple parts, don’t split them into separate exercises; instead, clearly state the parts within a single exercise and make it easy for students to complete (e.g., a worksheet docx to fill out). Spreading the parts of one long exercise over multiple days is another tactic.

In retrospect, we would also recommend weekly live critique sessions, as Professor Dan Cosley at Cornell suggested to us. We found that holding a live critique directly after a project milestone was more engaging than textual feedback and provided a much faster turnaround, letting groups incorporate feedback almost immediately. These sessions also served to bring everyone together, overcoming the often impersonal, delayed, and fragmented nature of virtual classes. Instructors might incentivize live peer feedback via a participation grade.

Conclusion

Remote classes can be a drain on everyone, and there’s only so much good course design can do. Despite these challenges, however, we had a lot of fun teaching this course, and overall came away believing it is possible to maintain group project quality. One of us (me) had served as a graduate TA for the in-person course, so I can say the quality of the projects in the summer course was just as good as in that 15-week, in-person class, in spite of all these limitations. We hope this post offers some ideas and insights for adapting HCI courses and methods to remote work.

Bios

Matthew Law is a PhD candidate at Cornell University in Information Science. He studies human-robot interaction (HRI), specifically how humans and robots can collaboratively design.

Mahima Rao is a Research Intern at Cornell University, studying under Dr. Qian Yang. Her work focuses on UX-AI convergence. She earned her Master’s in Information Science this past spring.

Ian Arawjo is a PhD candidate at Cornell University in Information Science. He studies the intersection of programming and culture, and explores how to design CS education as a site for intercultural learning and relationship building across difference.
