Role: Senior User Experience Designer
Time: 18 Months (2018-2019)
Vector is a consumer robot with a large array of capabilities. He can remember people, take pictures, receive voice commands, detect edges, and more. Vector can be connected to an Amazon Alexa account and has an SDK available for people who wish to create their own scripts and features.
No Robot Is an Island
When building and designing a robot, there are many roles at play. Design has to be aware of all parameters and work with every team. Vector was a wonderful opportunity to work across disciplines with many talented people.
•Create a tabletop mobile robot that goes beyond being a “virtual assistant on wheels”. Vector has a personality.
•He is an “always on” robot that the user is excited to come home to.
•Decouple Vector from the app. The app would be there, but it wouldn’t be needed for interacting with the robot.
•On average, Vector can spend about 30 to 40 minutes off of the charger.
•Vector needs a 2.4 GHz Wi-Fi signal to connect to the cloud.
•He can connect to the app over Bluetooth.
The audience for Vector was intended to be tech-enthusiastic adults. As we brought users in for testing, it became apparent that “tech enthusiastic” covers many different levels. Many participants worked in tech but had never interacted with an Alexa or any other IoT device.
Our product didn’t just exist on a screen. It existed in the physical world as well. This presented a number of challenges, especially for Onboarding, when the user would first interact with their robot.
•How do we tell the user how to set him up?
•How do we inform the user when an interaction fails?
•How do we keep them confident enough to keep engaging with the product?
As a User Experience Designer, I spearheaded the creation and revision of the Vector App. The app launched in October 2018 on both iOS and Android. I also worked on robot-centric features, such as face recognition. I was the Product Owner and Designer for Onboarding, which married the app and the robot.
App Creation and Exploration
1. We would start with research and ideation.
2. The team would then create loose flows, which we would tighten up as we reviewed them with other team members.
3. After the flow diagrams felt good, we would move on to creating bare bones wires which would evolve into more detailed wires.
4. From our wires, we would create prototypes that stakeholders and devs could use to interact with the feature, evaluate motion, and run user tests.
5. We would cycle through these steps as needed.
Initial Launch and Follow Up
The app launched in October 2018 along with the physical product. Due to time constraints, prioritization, and a number of other reasons, the scope of the app had to be reduced from the initial UX design. We moved quickly to test and verify a new MVP. The app was functional and looked polished, but I felt it could be vastly improved.
I began exploring a redesign. I wanted a fast turnaround for the new app layout, so I chose not to add new features or functionality. I focused on the findability and browsability of content, surfacing content that had previously been two to three taps below the surface. I also placed an evergreen “What’s New” section on the front page to guide users to new and noteworthy content.
I presented a prototype of my new design at Anki’s R&D Fair. Working with User Research, we set up a small but fun test to compare and contrast the new and old apps. The response was extremely positive. Product and I set out to design a roadmap for the new year to test and begin building.
Testing, Research, & Implementation
Using my prototype, we put the app in front of first time users.
I then worked closely with the UI Artist and Dev Team to ensure full understanding of the new design. After reading my documentation, the app devs were on board.
I dedicated time to adding new strings and images and adjusting all the layouts in Xcode and Android Studio. It was important that I wear many hats and use my technical capabilities so our App Developers could push harder on the larger app-structure problems.
I was the UX Designer and Product Owner for Onboarding. This feature required the combined effort of many teams. I worked closely with Robotics, Product, App Engineers, Animation, User Testing, and many more people to create it. It was a multi-layered problem space that made for a unique and interesting design challenge.
When I started designing Onboarding, I was sure of a few things:
1. Onboarding needed to teach the user how to trigger Vector
2. There would need to be at least one full Voice Command
3. It had to be delightful
One of the biggest Onboarding challenges was attention splitting. We were trying to teach the user how to interact with the robot, so the app was needed as a guidance tool. However, this caused the user to shift their attention between the app and the robot.
My first pass at Onboarding differentiated sections cleanly. This allowed us to pace the user’s attention and limit splitting between device and robot. Each section felt contained and bite-sized while reinforcing previous learnings. Each also felt celebratory, giving the user some mental respite and a feeling of achievement.
I created flow diagrams, wires, and a workable prototype so User Research could begin testing as soon as possible. The faster we got Onboarding in front of users, the sooner we could refine and adjust.
What We Learned
Users were not overwhelmed and felt confident by the time Onboarding finished. However, users didn’t always understand the difference between a successful trigger and an unsuccessful one. To fix this, many teams iterated on the earcon, light states, and messaging.
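The idea of pairing each trigger outcome with redundant feedback channels can be sketched as a simple mapping. This is an illustrative sketch only; the channel names and copy below are hypothetical, not the values that shipped.

```python
# Hypothetical sketch: pair each trigger outcome with redundant feedback
# channels (earcon, backpack lights, app copy) so success and failure are
# never ambiguous. All names and strings here are illustrative.
TRIGGER_FEEDBACK = {
    "success": {
        "earcon": "chime_rising",    # short rising tone
        "lights": "cyan_pulse",      # backpack light state
        "app_message": "Vector heard you!",
    },
    "failure": {
        "earcon": "chime_falling",
        "lights": "amber_blink",
        "app_message": "Vector didn't catch that. Try again.",
    },
}

def feedback_for(outcome: str) -> dict:
    """Return every feedback channel for a given trigger outcome."""
    return TRIGGER_FEEDBACK[outcome]
```

Delivering the same signal on every channel at once is what let users who missed the earcon still catch the light state or the app message.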
We ended up pivoting away from this type of flow. It relied heavily on the robot being authoritative. The specific steps required the robot and app to be constantly in sync, which proved impossible given connection issues and general user behavior.
What We Ended On
Vector’s magic exists in the user’s interaction with the robot and their exploration together. So we got rid of the heavy-handed app guide.
We kept the delight of Vector’s “waking up” moment, then quickly taught the user how to interact with their robot. Through repetition and clear indication of a successful trigger, users came away much more confident about what success and failure looked like. The app flow ended up much shorter and no longer pushed the user along. At the end of the flow, we gently guided the user toward specific commands, and we provided support material later in the app for anyone who wanted a refresher.
We switched over to being app authoritative. This cleaned up a lot of our edge cases and sync issues.
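The app-authoritative approach can be sketched as a small state machine in which the app owns the canonical onboarding step and treats robot reports as advisory, so a dropped, duplicate, or out-of-order event can never desynchronize the flow. This is a hypothetical sketch; the step names and event strings are illustrative, not the actual protocol.

```python
from enum import Enum, auto

class Step(Enum):
    WAKE_UP = auto()
    TEACH_TRIGGER = auto()
    FIRST_COMMAND = auto()
    DONE = auto()

class AppOnboarding:
    """App-authoritative onboarding: the app alone advances the step."""

    ORDER = [Step.WAKE_UP, Step.TEACH_TRIGGER, Step.FIRST_COMMAND, Step.DONE]

    def __init__(self):
        self.step = Step.WAKE_UP

    def on_robot_event(self, event: str) -> Step:
        # Map each robot report to the step it would complete
        # (hypothetical event names).
        completes = {
            "woke_up": Step.WAKE_UP,
            "trigger_heard": Step.TEACH_TRIGGER,
            "command_done": Step.FIRST_COMMAND,
        }
        # Advance only when the event completes the current step;
        # stale, duplicate, or out-of-order events are simply ignored.
        if completes.get(event) is self.step:
            self.step = self.ORDER[self.ORDER.index(self.step) + 1]
        return self.step
```

Because the robot never dictates state, a flaky connection at worst delays progress instead of leaving the app and robot telling the user two different stories.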
These changes greatly improved our results.