Recruitment & Methodology

Participants were recruited via social media. All participants were current robot owners who fit our company's user personas.
After signing an NDA, each participant filled out a pre-session survey so we could learn more about their background.

We held five remote interviews over the course of three days. We used a combination of surveys, observation, moderated concept testing, and card sorting, all following the think-aloud protocol, to determine study results. In each session, our team of three took turns facilitating, taking notes, and observing.

Metrics used in this study were:
• Number of shared expectations
• Number of unintended actions
• Number of times users asked a question
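
To make these counts concrete, here is a minimal sketch of how per-session tallies like these could be recorded and aggregated. It is written in Python purely for illustration; the SessionMetrics fields and the example numbers are hypothetical, not our actual instrumentation or data.

```python
from dataclasses import dataclass

@dataclass
class SessionMetrics:
    """Tally sheet for one interview session (field names are illustrative)."""
    participant: str
    shared_expectations: int  # expectations that matched the design intent
    unintended_actions: int   # actions that diverged from the intended flow
    questions_asked: int      # times the participant asked the facilitator a question

def totals(sessions: list[SessionMetrics]) -> dict[str, int]:
    """Aggregate the three study metrics across all sessions."""
    return {
        "shared_expectations": sum(s.shared_expectations for s in sessions),
        "unintended_actions": sum(s.unintended_actions for s in sessions),
        "questions_asked": sum(s.questions_asked for s in sessions),
    }

# Made-up numbers for two of the five sessions:
print(totals([
    SessionMetrics("P1", shared_expectations=3, unintended_actions=1, questions_asked=2),
    SessionMetrics("P2", shared_expectations=4, unintended_actions=0, questions_asked=1),
]))
```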

Concept Test 1

We presented three videos to our users:
• First clean screen & animation
• First cleaning flow
• Cleaning bottom sheet content

A: First clean screen impression

Our goals in showing users the first clean screen were to understand whether the screen satisfied the design intent:

• Inform users to pick up objects that the robot might get stuck on
• Make clear how to start the robot
• Communicate that the first clean will generate a map

The most important feedback from this study was that the top navigation was mistaken for a search bar, and that the information button did not reinforce the expectation that a map would be generated at the end of the cleaning run. Fortunately, all participants understood that the robot needed to start from the base and that they should pick up objects before starting a clean.

B: First cleaning flow

For this task, we showed participants a series of screens and animations and asked them to describe their impressions when they reached the last screen.

From this study, we learned that the rendered map in the video was causing confusion about the fidelity of the robot-generated map. Four of the five participants said they didn't understand the pixelation of the generated map and that it looked incomplete.

C: Cleaning bottom sheet content

We gave participants various cards in Figma and asked them to choose the four they felt would be most important to see displayed in the bottom sheet, and why. Each card was either floorplan-centric (information related to the user's floorplan) or robot-centric (information or controls for the robot).

All five participants primarily chose robot-centric cards! All explained that the robot-centric cards are more useful for checking cleaning status and setting up cleaning runs.
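
As a rough illustration of how card-sort results like these can be tallied by category, here is a small Python sketch. The card names and picks below are hypothetical examples, not the actual Figma cards or participant choices.

```python
from collections import Counter

# Each card is either floorplan-centric or robot-centric.
# Card names here are invented for illustration.
CARD_CATEGORY = {
    "battery level": "robot-centric",
    "cleaning progress": "robot-centric",
    "start/pause clean": "robot-centric",
    "suction power": "robot-centric",
    "room list": "floorplan-centric",
    "map coverage": "floorplan-centric",
}

# Each participant's four chosen cards (made-up data).
choices = {
    "P1": ["battery level", "cleaning progress", "start/pause clean", "room list"],
    "P2": ["battery level", "start/pause clean", "suction power", "cleaning progress"],
}

category_votes = Counter(
    CARD_CATEGORY[card] for picks in choices.values() for card in picks
)
print(category_votes)  # Counter({'robot-centric': 7, 'floorplan-centric': 1})
```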

Concept Test 2

Concept test 2 focused on map editing and tooling expectations.

A: Map editing expectations

We provided users with two screenshots of the app and asked what they expected to be able to do on those screens and how they would go about editing their maps.

Our main goals for this study were to find out what capabilities and tooling users expect to have during map editing, and whether they think about rooms and zones differently.

Editing the map proved to be a natural next step after participants saw their map being generated, and all of them noticed the option to edit it. Interestingly, one user said he wouldn't want to spend time editing the map.

After seeing the editing mode screen of the map, participants raised these key points:

• There was a desire to edit things on the map other than rooms (like completing wall segments, adding furniture, etc.)
• No-go zones would be the most important editing feature
• Users want the ability to control how many times the robot cleans a specific area on the map
• There is an expectation to be able to name rooms and identify those areas on the map
• The term "Zones" was interpreted as something that is a collection of rooms

B: Testing shapes vs. pen tool

We asked participants to download two mobile apps, Vectornator and PowerPoint. We gave them minimal direction in both apps and told them to create zones on the displayed map while following the think-aloud protocol.

Our goal was to understand whether users find it more difficult to use a shapes library or to create zones with a pen tool.

Overall, participants were frustrated with the pen tool and leaned toward a tap-and-drag or drawing motion. The shapes library task was completed the quickest and most accurately.

Final Result

After synthesizing the data from this user study, it was clear that we needed to do the following:

• Update the UI for the first clean screen and flow to address participant issues
• Adjust the animation based on the feedback we received
• Display robot-centric information in the robot bottom sheet and design it to include the highest-voted content
• Work with the scrum team to improve map rendering quality

Our flows and designs changed drastically based on this user test. Our final revisions are shown below: