Estée Lauder applies AI, AR for cosmetics accessibility
The original design had the user take a selfie, which the algorithm analyzed to assess how uniformly makeup had been applied before offering guidance. It didn’t take long, though, for the team to switch to real-time video, letting the app scan the user’s face continuously. For instance, if the video shows the user has applied foundation or lipstick unevenly, the app gives verbal descriptions of the specific areas that need a touchup and guidance to correct the issue. The user can then make adjustments, rescan, and the app will confirm when everything is correctly applied.
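The scan-and-rescan loop described above can be sketched as a simple decision step: given per-region evenness scores produced by the video analysis, either name the areas needing a touchup or confirm that everything is applied correctly. Everything here is an illustrative assumption, not ELC's actual implementation: the region names, the 0-to-1 scoring scale, and the 0.85 threshold are all hypothetical.

```python
# Hypothetical sketch of the feedback step in the scan-and-rescan loop.
# The scoring scale and threshold are assumptions for illustration only.

EVEN_THRESHOLD = 0.85  # assumed minimum score for "evenly applied"

def touchup_prompts(region_scores, threshold=EVEN_THRESHOLD):
    """Turn per-region evenness scores (0.0 to 1.0) into verbal prompts.

    region_scores maps a face-region name (e.g. "left cheek") to how
    evenly the product appears applied there, as judged by the video
    analysis upstream of this step.
    """
    uneven = [region for region, score in region_scores.items()
              if score < threshold]
    if not uneven:
        # All regions pass: confirm, ending the rescan loop.
        return ["Everything looks evenly applied."]
    # Otherwise, describe each specific area that needs attention.
    return [f"Touch up needed near your {region}." for region in uneven]
```

After the user adjusts and rescans, the app would call this step again with fresh scores, repeating until the confirmation message is reached.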
First priority, however, was for Aidan and his team to directly engage with the visually impaired community. “We really wanted to gain an understanding of what their unique needs were, their pain points and preferences, and what they desired from our products,” he says. “We pulled together focus groups and asked questions, but mostly listened to them about their personal experiences with makeup and technology.”
Importantly, he says, some focus group members were completely blind, some experienced varying degrees of low vision, and some had excellent peripheral vision. This enabled his team to gain insight from a variety of individual experiences and to question their own assumptions.
“We assumed that a natural, more humanistic sounding voice would be the preference for the implementation, but the user research confirmed that familiarity was actually most important to our users,” Aidan says. “Whatever they had set up on their device is what they wanted to experience.”
The team also partnered with internal advocacy groups at ELC, external advocacy groups, and experts in accessibility and inclusivity, and combined their insights with feedback from the focus groups to gather requirements for VMA. The team then leveraged the user research for everything from naming the application to getting the tone of voice just right.
“Throughout the design, build, and test phases, their feedback was informing our decisions, even the small features like being able to adjust the speed of the virtual assistant’s speech,” Aidan says.