Menu Reader
Improving restaurant experiences for the visually impaired
Overview
Skills
UX/UI Design, UX Research, Mobile App Prototyping, Competitive Analysis, User Testing, Accessibility Design
Menu Reader allows visually impaired people to regain a sense of independence in restaurants by letting them browse menus freely, unaided by others.
Role
UX Designer, UX Researcher
Tools
XD, Illustrator, Photoshop
Understanding the Problem
Through user interviews and secondary research, a common pain point emerged: visually impaired diners want to reduce their reliance on others during the dining experience.
“I only go to the few [restaurants] where I already know the staff is understanding and willing to help”
Findings
Restaurant staff often lack the resources to effectively accommodate visually impaired visitors
User Research
Many restaurants lack the tools and training needed to provide proper accommodation for their visually impaired customers. This forces users to be more dependent on staff or other guests they are with. The positive or negative experiences of the interviewees were directly related to the staff's attitudes and willingness to help. How can we undo this dynamic?
Visually impaired customers rarely get a proper feel for the menu before ordering
Although encouraged, braille and large-print menus are not required in restaurants. As a result, users often have to depend on waiters and the people they are with to read them the menu, a source of frustration for both parties. What changes can be made to make menus universally accessible?
Mobile devices and voice assistants are used by the visually impaired to complete daily tasks
Users are comfortable using screen readers, mobile devices, and other non-visual methods in their everyday lives. How can we introduce this comfort and skillset into the restaurant experience?
Building a Solution
Prototyping
Goals
Allow users to browse menus without assistance from others
Leverage the mobile devices and VUIs that users are already comfortable using
Pros
Cons
Menu scanning relies on a single image capture, which would be tricky for target users
Iterating
The initial prototype was discussed with target users to validate the direction of the design:
Inclusion of a voice assistant to track the ongoing order and standout menu items
Present the menu in a format that is readable by low-vision users and screen readers
Based on this feedback, the improved prototype shifted to a camera-first approach that allows live scanning rather than a single image capture. Menu items are read aloud automatically depending on the area of the menu the camera is pointed toward.
Live Camera Capture to Speech
In live camera mode, users can focus on individual areas of the menu and navigate it easily via auditory cues. This allows users to browse menus independently, without the help of others. From a technical standpoint, this method also avoids the blur and similar issues that could have been present with the static image capture route.
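The core of this interaction can be sketched as a lookup from the camera's focus point to the recognized menu item beneath it, which is then read aloud. This is a minimal illustration, not the app's actual implementation; the box format, function names, and sample menu are all assumptions.

```python
# Hypothetical sketch: map the camera's focus point to the OCR-recognized
# menu item under it, so that item can be spoken to the user.

def item_at_focus(focus, items):
    """Return the name of the menu item whose bounding box contains the
    focus point, or None if the camera is not pointed at any item.

    focus: (x, y) point at the center of the camera view.
    items: list of dicts with 'name' and 'box' as (left, top, right, bottom).
    """
    x, y = focus
    for item in items:
        left, top, right, bottom = item["box"]
        if left <= x <= right and top <= y <= bottom:
            return item["name"]
    return None  # nothing under the focus point, so stay silent


# Assumed sample of OCR output for a two-item menu region
menu = [
    {"name": "Margherita Pizza, $12", "box": (0, 0, 320, 60)},
    {"name": "Caesar Salad, $9", "box": (0, 60, 320, 120)},
]
```

Re-running the lookup on every camera frame is what makes the scanning feel "live": as the user sweeps the phone down the menu, the spoken item changes with the focus point.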
Order Tracking and Item Recall
As a user hears the different options listed in a menu, they can simply ask the voice assistant to favorite an item and/or add it to their order, capturing its name, price, and ingredients (if listed). Users can then ask to hear their saved items.
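The state the assistant needs to keep for this can be sketched as a small tracker: favoriting and ordering both capture the item's name, price, and ingredients, and a recall request composes the summary to be spoken back. This is an illustrative sketch only; the class and method names are our own, not from the app.

```python
# Hypothetical sketch of the voice assistant's order-tracking state.

class OrderTracker:
    def __init__(self):
        self.favorites = []
        self.order = []

    def _capture(self, name, price, ingredients):
        # Store the details read from the menu (ingredients if listed)
        return {"name": name, "price": price, "ingredients": ingredients or []}

    def favorite(self, name, price, ingredients=None):
        self.favorites.append(self._capture(name, price, ingredients))

    def add_to_order(self, name, price, ingredients=None):
        self.order.append(self._capture(name, price, ingredients))

    def recall(self):
        """Compose the summary the assistant would speak back."""
        if not self.order:
            return "Your order is empty."
        lines = [f"{item['name']}, {item['price']}" for item in self.order]
        return "Your order: " + "; ".join(lines)
```

Keeping favorites separate from the order mirrors the two voice commands: "favorite this" marks a standout item for later, while "add this" commits it to the running order.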
Dialogue Flow
Below is a look at the key conversational templates for the voice assistant.
Accessible Design Considerations
Color palettes, typography, sizing, and similar visual properties were all key areas of focus to maximize the usefulness of the application for visually impaired users.
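Palette choices like these can be validated against the contrast-ratio formula defined in WCAG 2.1 (AA requires at least 4.5:1 for normal text). The formula below is from the WCAG specification; the helper names are our own, and this is a sketch of the kind of check involved rather than the project's tooling.

```python
# Contrast-ratio check per WCAG 2.1 (relative luminance of sRGB colors).

def _linearize(channel):
    # Convert an 8-bit sRGB channel to its linear-light value
    c = channel / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb):
    r, g, b = (_linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """Contrast ratio between two colors, from 1:1 up to 21:1."""
    lighter, darker = sorted(
        (relative_luminance(fg), relative_luminance(bg)), reverse=True
    )
    return (lighter + 0.05) / (darker + 0.05)

def passes_aa(fg, bg, large_text=False):
    # WCAG AA: 4.5:1 for normal text, 3:1 for large text
    return contrast_ratio(fg, bg) >= (3.0 if large_text else 4.5)
```

Black on white yields the maximum 21:1 ratio, which is why high-contrast pairings dominate interfaces designed for low-vision users.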