Designing voice-activated learning management systems — a UX case study

Gautam Krishnan · Published in UX Collective · Feb 28, 2019

Whiteboard is an Alexa voice skill that integrates the features of Piazza and Blackboard, two popular Learning Management Systems.

Introduction

During my master’s program, my team had a wonderful opportunity to be guided by Dr. Debaleena Chattopadhyay for our HCI course project. We built an Alexa skill that could interact with the two LMS platforms used at our university — Piazza and Blackboard. Our system was meant to be used by both professors and students.

The Team

Gautam, Sumanth, Sai Priya and Varshini. We have been an amazing team and have worked together on several projects, both in and outside of academics.

Problem Statement

Among the multitude of Learning Management Systems available, our university used Blackboard and Piazza. Typical usage includes viewing and posting marks, grades, and syllabi, and creating or submitting assignments, polls, discussions, projects, etc. Because the two systems overlap in functionality, some professors used only one of them, but using both in the same course was also common, since Blackboard had features Piazza lacked, and vice versa. Hopping between the two was extremely confusing for students, and the poor user experience on both systems made it worse. Specifically, we wanted to address the following problems:

  1. Students register for multiple courses each term, but there was no single place to view all the information regarding their courses.
  2. The interfaces of Blackboard and Piazza were a pain to use. Their mobile interfaces were even more frustrating.
  3. Students and professors had to open these applications on a personal computer to view or post content. In a home setting, every email notification about new content (new discussions, replies to existing discussions, announcements, etc., which arrived very frequently) meant going to a personal computer to view it.

Our Approach

A simple solution to problems (2) and (3) would have been to improve the respective web and mobile interfaces. But because they were third-party systems, we had no control over them. And even if we could have improved them, an important problem would remain: users would still have to go to Blackboard and Piazza separately.

Any solution here would involve fetching and combining data from both services. We did not want this new solution to be yet another web or mobile interface, because some complex tasks, like uploading and submitting project files, could only be done from the original applications. A new web or mobile interface would simply mean that users now had to jump between three different apps.

While we were working on the project, a quarter of all US households had a smart speaker with a voice assistant, and adoption was projected to reach 50% of US households by the end of 2018. That is how we decided to build Whiteboard, the Alexa voice skill.

A diagram of how our system works.

User Research

We conducted semi-structured interviews with two teaching assistants to get an idea of the tasks that instructors perform on Blackboard and Piazza. We identified the most common tasks to be:

  1. Answering unresolved questions on Piazza.
  2. Uploading assignments and homework.
  3. Posting grades.

Next, we conducted an online survey with users of Blackboard and Piazza to elicit our requirements. We sent the survey link to our classmates and to friends pursuing other programs at our university. Note that we only surveyed students pursuing a master’s degree, although we do not expect the results to differ significantly for users pursuing other degrees. The common tasks that stood out among the students were:

  1. Checking grades.
  2. Posting and answering questions.
  3. Viewing and replying to discussions.
  4. Checking for content availability.

We also recorded users’ general concerns with voice assistants in order to address them in our design. Academics aside, the four of us happened to attend HackIllinois, a hackathon at the University of Illinois at Urbana-Champaign. It was a good testing ground for seeing how people interact with an Alexa skill.

Our hackathon project was a different one, but it too was an Alexa skill. We tested it in two settings: a closed room, for a noiseless environment, and an open room, to gauge the efficiency of voice assistants in places with ambient noise (television, people talking, etc.). From our observations at the hackathon, we learned that users preferred simple, short commands over longer sentences.

Requirements

Based on our survey results, observations and the time we had for the term project, we came up with the following requirements that our system should address:

  • Students must be able to check grades posted by instructors for an exam/assignment/homework/project.
  • Students must be able to check for content availability (course materials/homework/assignments).
  • Students must be able to check the dates of exams and upcoming deliverables, and set reminders for them.
  • Students must be able to ask/answer questions in a class discussion forum. There should be options for posting public, anonymous questions and answers to the entire class.
  • Instructors must be able to check and answer unresolved questions posted by students in the discussion forum.

Assumptions

We assumed that our users had an Alexa smart speaker at home (Echo, Echo Dot, etc.) and knew how to use it. We also assumed that our users had already enabled the Whiteboard skill on those devices. We did not test the complexity and experience of finding and enabling the skill, or of connecting Blackboard and Piazza accounts to it.

Whiteboard Skill Setup

Sketches

As a course requirement, each of us was required to come up with 10 sketches individually, trying to think of all the possible ways users might interact with the system. From each set of 10 individual sketches, we chose the top 3 based on their use cases and practicality. The complete sketch diary is here.

Along with the sketches, we also identified a few important interaction points:

  1. Initiation.
  2. Knowing what to say.
  3. Mapping to possible actions.
  4. Feedback and dialogs.
  5. Recognition and correction of errors.

Formative Evaluation

We conducted a formative evaluation study with 5 users: 3 students, one student who was also a teaching assistant, and one instructor. We used Wizard of Oz prototypes, a technique in which users interact with a system they believe to be autonomous but that is actually operated, in whole or in part, by an unseen human.

We gave the users an overall idea of the tasks they needed to perform and what the end goal of each task was, letting them perform the tasks naturally. We gathered feedback from how they interacted with the system and from the follow-up questions they had to ask after the initial command, and modified our existing design sketches based on it.

A Wizard of Oz setup

Lessons learned from formative evaluation:

  1. Users sometimes wanted the system to repeat some of the options in case they missed/misheard them.
  2. Student users wanted to know their grades for specific homework and assignments in specific courses. They also had follow-up questions about the class average, the highest grade achieved in class, etc.
  3. Before checking unresolved questions or answers, users preferred to know “how many?” and “how long?”. Based on the answer they received, users then decided whether to have the items read aloud or sent to their mobile phones.

The System

Whiteboard is a voice-enabled learning management system that allows students and instructors to perform simple everyday tasks with voice commands. Amazon’s intelligent voice assistant, Alexa, does the speech recognition for us. I’ll keep this section minimal so as not to stray from the UX focus of this article.

We created an interaction model defined by intents, each of which maps the different phrasings of a spoken command to a task. For example, a user looking to check her/his grades in a specific course can say "what are my grades in {course}", "check my grades in {course}", "what is my score in {course}", and several such equivalents. We defined this interaction model in the Amazon Developer Console as a new custom Alexa skill.
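To make this concrete, here is a rough sketch of what one intent in such an interaction model looks like, written as a TypeScript object mirroring the JSON the Developer Console accepts. The intent name, the custom COURSE slot type, and the exact utterances are illustrative assumptions, not our actual model.

```typescript
// Sketch of a single intent from an Alexa interaction model.
// "GetGradesIntent" and the custom slot type "COURSE" are assumed names.
const getGradesIntent = {
  name: "GetGradesIntent",
  slots: [
    // {course} captures whichever course the user names, e.g. "HCI".
    { name: "course", type: "COURSE" },
  ],
  samples: [
    // Equivalent phrasings that all fire the same intent.
    "what are my grades in {course}",
    "check my grades in {course}",
    "what is my score in {course}",
  ],
};
```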

When Alexa hears one of these commands, the intent associated with that command fires. An AWS Lambda function is linked to the Alexa skill we created in the previous step, and it responds to the intents by executing the code for each of them. We used an unofficial Piazza API, available here as an npm module. For Blackboard, we used their official REST API.
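As a minimal sketch, a Lambda handler for the grade-checking intent above might look like the following, using the Node.js ask-sdk. The fetchGradeFromBlackboard helper is a hypothetical stand-in for the real Blackboard REST API calls (and, for discussion tasks, the Piazza npm module); it is not part of either API.

```typescript
import * as Alexa from "ask-sdk-core";

// Hypothetical stand-in for a Blackboard REST API call made with the
// user's linked account; the real skill would fetch the actual grade.
async function fetchGradeFromBlackboard(course: string): Promise<string> {
  return "A";
}

// Responds to the GetGradesIntent sketched in the interaction model above.
const GetGradesIntentHandler: Alexa.RequestHandler = {
  canHandle(handlerInput) {
    return (
      Alexa.getRequestType(handlerInput.requestEnvelope) === "IntentRequest" &&
      Alexa.getIntentName(handlerInput.requestEnvelope) === "GetGradesIntent"
    );
  },
  async handle(handlerInput) {
    // Read the {course} slot the user spoke, e.g. "HCI".
    const course = Alexa.getSlotValue(handlerInput.requestEnvelope, "course");
    const grade = await fetchGradeFromBlackboard(course);
    return handlerInput.responseBuilder
      .speak(`Your grade in ${course} is ${grade}.`)
      .getResponse();
  },
};

// Wire the handler into the skill and expose it as the Lambda entry point.
export const handler = Alexa.SkillBuilders.custom()
  .addRequestHandlers(GetGradesIntentHandler)
  .lambda();
```

One handler like this exists per intent; the SDK routes each recognized command to the first handler whose canHandle returns true.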

Summative Evaluation

For the summative evaluation we used Oz studies, in which users interact with the actual working system rather than a wizard-operated one. Users performed four different tasks using the Whiteboard Alexa skill. It must be noted that these were not field studies; field studies could have given us more information about use in natural settings, but due to time constraints we conducted the summative evaluations in a controlled manner.

We conducted the summative evaluation with classmates who had not interacted with our system before, though they were aware of what it was capable of; we believe this biased the results we obtained. We measured the time users took to complete each task, success on task, and the errors that occurred during execution. We also measured user satisfaction using NASA TLX surveys.

The Oz setup.

Conclusion

We got very positive feedback about Whiteboard. Most of our users found the concept of a personalized voice UX skill to be fascinating. These were the results we obtained:

  1. The overall user satisfaction score for Whiteboard was 96%, and the average satisfaction scores for the individual tasks were all above 90%.
  2. When asked how likely they were to use our product on a scale of 1 to 5, all of our users gave a 5/5.
  3. The average success rate for all the tasks was 95% and the average time on each task was less than a minute.
  4. Most of the errors we observed were recognition errors: Alexa’s trouble recognizing users’ commands accounted for 65% of all errors. Another 30% were a mix of interaction errors and situations our system did not know how to handle. The remaining 5% were vocabulary errors.
  5. All of the interaction errors occurred in the first two tasks, mostly when users forgot to say the skill name after invoking Alexa. Our users were new to using Alexa skills, which we believe was the prime reason for this.
