Recruitment & Methodology

Participants were recruited via social media. All participants were current robot owners who fit our company's user personas.
After participants signed an NDA, they completed a pre-session survey so we could learn more about their backgrounds.

We held five remote interviews over the course of three days. We combined a survey, observation, moderated concept testing, and card sorting under the Think-Aloud Protocol to gather study results. In each session, our team of three took turns facilitating, taking notes, and observing.

We tracked three metrics in this study (a tallying sketch follows the list):
• Number of expectations matching the design intent
• Number of unintended actions
• Number of times users asked a clarifying question
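For reference, a minimal sketch of how these per-session counts could be recorded and rolled up per participant; the names and structure below are illustrative stand-ins, not our actual study tooling.

```typescript
// Illustrative only: hypothetical types for tallying the three study metrics.
interface Observation {
  participant: string;
  kind: "matched_expectation" | "unintended_action" | "clarifying_question";
  note: string;
}

interface SessionScore {
  participant: string;
  matchedExpectations: number;
  unintendedActions: number;
  clarifyingQuestions: number;
}

function tally(observations: Observation[]): SessionScore[] {
  const byParticipant = new Map<string, SessionScore>();
  for (const obs of observations) {
    const score = byParticipant.get(obs.participant) ?? {
      participant: obs.participant,
      matchedExpectations: 0,
      unintendedActions: 0,
      clarifyingQuestions: 0,
    };
    if (obs.kind === "matched_expectation") score.matchedExpectations++;
    if (obs.kind === "unintended_action") score.unintendedActions++;
    if (obs.kind === "clarifying_question") score.clarifyingQuestions++;
    byParticipant.set(obs.participant, score);
  }
  return [...byParticipant.values()];
}
```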

Concept Test 1

We presented three videos to our users:
• First clean screen & animation
• First cleaning flow
• Robot control sheet content
This image shows three mobile phone UI screens labeled A, B, and C:

A (First clean screen): Displays a message prompting the user to pick up small objects before starting a clean with a prominent button labeled "Start."
B (First cleaning flow): Displays a cleaning progress screen with a pause button and a visual of cleaning status.
C (Bottom sheet content): Shows additional content or options at the bottom, prompting the user to make selections before starting a clean.

A: First clean screen impression

Our goal in showing users the first clean screen was to understand whether it satisfied the design intent:

• Inform users to pick up objects that the robot might get stuck on
• Confirm that users know how to start the robot
• Confirm that users know their first clean will generate a map
This image shows two user quotes from "Pet lover" and "Techie."

Pet lover: The user expresses confusion about the term "your first map" and says they would click on "My Maps" to investigate further, but remains unsure.
Techie: The user is confused by a search bar at the top of the screen, questioning its relevance if it’s not a search bar and suggesting it should be removed if unnecessary.
The most important feedback from this study was that the top navigation was mistaken for a search bar, and that the information button did not reinforce the expectation that a map would be generated at the end of the cleaning run. Fortunately, all participants understood that the robot needed to start from the base and that they should pick up objects before starting a clean.
A table lists participants' feedback based on three criteria: 1) number of expectations matching the design, 2) number of actions that diverged from the design intent, and 3) number of times the user asked a question to clarify functionality. Each participant (Abram, Pixie, Krystle, Jonah, Andy) has a score under each column, with detailed feedback underneath. The feedback describes their understanding of tasks like starting the robot, identifying objects, or interpreting map functions, and lists any actions that diverged from design expectations.

B: First cleaning flow

For this task, we showed participants a series of screens and animations and asked them to describe their impressions when they reached the last screen.
From this study, we learned that the rendered map in the video caused confusion about the fidelity of the robot-generated map. Four of the five participants said they didn't understand why the generated map was pixelated and felt that it looked incomplete.
A screenshot shows the user's map generation process, accompanied by a user quote:

A user labeled as "Busybody" comments, "The picture [generated map] looks very different than the animation."
This feedback likely refers to the discrepancy between the user's expectations, set by the animation, and the actual map generated in the robot cleaning app.
This table is similar to the first expectation table, showing participant feedback for five users (Abram, Pixie, Krystle, Jonah, Andy). It is broken into three categories:

Number of expectations matching the design.
Number of actions that diverged from the design intent.
Number of times the user asked the facilitator a question to clarify functionality.
Each participant has detailed comments explaining their thought processes, errors, or confusion related to the robot map design.
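As an illustration of the pixelation issue: assuming the robot exports its map as a coarse occupancy grid (a common approach for robot vacuums, though not confirmed here), rendering each grid cell as one large block produces exactly the blocky, unfinished look participants described; smoothing or upscaling the grid before display is one way to close the gap with the polished animation. A rough sketch:

```typescript
// Illustrative only: a hypothetical coarse occupancy grid rendered 1:1 looks blocky.
// Cell values: 1 = wall/obstacle, 0 = free space, -1 = unexplored.
type OccupancyGrid = number[][];

// Naive rendering: each coarse cell becomes a large uniform block of "pixels",
// which is what reads as a pixelated, incomplete map on a phone screen.
function renderBlocky(grid: OccupancyGrid, cellSizePx: number): string[] {
  const rows: string[] = [];
  for (const row of grid) {
    const line = row
      .map((cell) => (cell === 1 ? "#" : cell === 0 ? "." : " ").repeat(cellSizePx))
      .join("");
    for (let i = 0; i < cellSizePx; i++) rows.push(line);
  }
  return rows;
}
```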

C: Cleaning bottom sheet content

We gave participants various cards in Figma and asked them to choose the four that they felt would be most important to see displayed in the bottom sheet, and why. Each card was either floorplan-centric (information related to the user's floorplan) or robot-centric (information or controls for the robot).
This image displays a mobile phone screen on the right showing a cleaning robot interface and a set of purple boxes on the left listing potential data points:

Robot model, Robot Name, Floorplan Name, Floorplan Square Footage, Robot Status, Area Robot Has Cleaned, Robot Battery Life, Cleaning Time, and Robot Cleaning Mode.
The diagram suggests what information users might want to see on the robot app interface.
All five participants primarily chose robot-centric cards! They all explained that the robot-centric cards are more useful for checking cleaning status and setting up cleaning runs.
A table shows which participants chose each card, with a row for each category of information:

Robot name/model
Robot battery life
Cleaning time
Robot cleaning mode
Robot status
Floor plan name
Area cleaned by the robot
Floor plan square footage
Each row has small profile pictures of the participants who chose that information.
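A minimal sketch of how the card-sort options could be modeled and filtered into a robot-centric bottom sheet; the robot/floorplan tags and the selection helper are assumptions for illustration, not the shipped data model.

```typescript
// Illustrative only: card labels mirror the options participants saw;
// the centricity tags are the author's guesses for this sketch.
type Centricity = "robot" | "floorplan";

interface Card {
  label: string;
  centricity: Centricity;
}

const cards: Card[] = [
  { label: "Robot model", centricity: "robot" },
  { label: "Robot name", centricity: "robot" },
  { label: "Robot status", centricity: "robot" },
  { label: "Robot battery life", centricity: "robot" },
  { label: "Robot cleaning mode", centricity: "robot" },
  { label: "Cleaning time", centricity: "robot" },
  { label: "Area robot has cleaned", centricity: "robot" },
  { label: "Floorplan name", centricity: "floorplan" },
  { label: "Floorplan square footage", centricity: "floorplan" },
];

// Participants' picks point toward a robot-centric bottom sheet, e.g. the top four:
const bottomSheetContent = cards
  .filter((card) => card.centricity === "robot")
  .slice(0, 4);
```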

Concept Test 2

Concept test 2 focused on map editing and tooling expectations.
This image contains two mobile phone screenshots:

A (Map editing expectations): Shows a map of a home layout with labeled rooms and a large button at the bottom right labeled "Start." It aims to understand how users expect to edit their maps.
B (Testing editing tools): Shows two different tools: PowerPoint and Vectornator. The user interface focuses on selecting and shaping map areas, using vector shapes and editing features to test user intuition with the tools.

A: Map editing expectations

We provided users with two screenshots of the app and asked what they expected to do on these screens and how they would go about editing their maps.

Our main goals for this study were to find out what capabilities and tooling users expect during map editing, and whether they think about rooms and zones differently.
Editing the map proved to be a natural next step for users after seeing their map being generated. All users noticed the option to edit their map. Interestingly, one user said he wouldn't want to spend time editing the map.


After seeing the editing mode screen of the map, participants brought up these key factors (a data-model sketch follows the list):

• There was a desire to edit things on the map other than rooms (like completing wall segments, adding furniture, etc.)
• No-go zones would be the most important editing feature
• Users want the ability to control how many times the robot cleans a specific area on the map
• There is an expectation to be able to name rooms and identify those areas on the map
• The term "Zones" was interpreted as something that is a collection of rooms
 Two mobile screens show a completed map generated by the robot vacuum:

Screen 1 (left): A prompt at the top says, "Begin editing your map," encouraging the user to start making changes.
Screen 2 (right): A map with no-go zones added, offering an interface for making more specific adjustments.
Two user quotes:

Pet lover: Mentions a desire for more control over how many times the vacuum covers an area and how vigorously it cleans.
Techie: Expresses a desire to name rooms, straighten lines, and place obstacles for the robot.
This table compares feedback from participants (Abram, Pixie, Krystle, Jonah, and Andy) regarding their expectations for map and zone editing in the robot cleaning app. Columns include:

Number of times user expectations did not meet design intent.
Whether the user thinks about rooms and zones together or separately.
Common tooling associated with editing rooms and zones.

B: Testing shapes vs. pen tool

We asked participants to download two mobile apps, Vectornator and PowerPoint. We gave them minimal direction in both apps and asked them to create zones on the displayed map while following the Think-Aloud Protocol.

Our goal was to understand whether users find it more difficult to create zones with a shapes library or with a pen tool.
Two mobile screens show the user interface for both Vectornator and PowerPoint:

Vectornator (left): Features a polygon-shaped no-go zone being drawn with precision points.
PowerPoint (right): Shows basic shapes (e.g., squares and lines) that can be added to the map.
This image displays a similar map editing process, but in the Vectornator tool. A curve and nodes are being drawn on the map. "Abram Lopez" appears in a small video call window in the corner, indicating a live testing session.
This image shows a user editing a robot cleaning map in PowerPoint. A rectangle shape is added to the map, and a user named "Abram Lopez" appears in a small video call window in the corner, possibly for testing or feedback purposes.
Overall, participants were frustrated with the pen tool and leaned toward a tap-and-drag or drawing motion. The shapes-library task was completed the quickest and most accurately.
A table compares user feedback across two tools (Vectornator and PowerPoint). The table is broken down into:

Number of unintended results surfaced: Each participant (Abram, Pixie, Krystle, Jonah, and Andy) highlights challenges such as difficulty with shapes, drawing, and fixing points.
Number of questions to clarify tool usage or guidance: Minimal clarification needed, with notes on specific user struggles, such as with dotted lines or resizing shapes.
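A small sketch of why the shapes-library task went faster: a rectangle defined by a single tap-and-drag needs only two points, while a pen-tool polygon requires every vertex to be placed (and corrected) by hand. Types and helpers here are illustrative and not taken from either app.

```typescript
// Illustrative only: comparing the two interaction models for drawing a no-go zone.
interface Point { x: number; y: number }

// Shapes-library model: one drag gesture (start + end corner) defines the whole zone.
function rectangleZone(dragStart: Point, dragEnd: Point): Point[] {
  return [
    { x: dragStart.x, y: dragStart.y },
    { x: dragEnd.x,   y: dragStart.y },
    { x: dragEnd.x,   y: dragEnd.y },
    { x: dragStart.x, y: dragEnd.y },
  ];
}

// Pen-tool model: the zone only exists once the user has tapped every vertex in
// order and closed the path, which is where participants got frustrated.
function penToolZone(taps: Point[]): Point[] | null {
  return taps.length >= 3 ? taps : null; // fewer than 3 points isn't a closed shape
}
```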

Final Result

After synthesizing the data from this user study, it was clear that we needed to do the following:

• Update the UI for the first time screen and flow to address participant issues
• Adjust the animation based on the feedback we received
• Display robot-centric information in the robot bottom sheet and design it to include the highest-voted content
• Work with the scrum team to improve map rendering quality

Our flows and designs changed significantly based on this user test. Our final revisions are shown below:
Three mobile screens:

Screen 1: The interface prompts the user to "Remove small objects and loose cords before starting a clean" with an image of a robot vacuum and various objects. Buttons below the image allow the user to "Locate," "Start cleaning," and view "Modes." The status shows the robot is docked.
Screen 2: Shows the robot vacuum in cleaning mode with buttons to pause or stop cleaning. The status displays as "Cleaning."
Screen 3: Displays the map of a house with different rooms, and an option to "Add No-Go Zone" at the bottom.