There’s been a lot of press lately on how AI-powered assistants will make our lives better/easier/more manageable very soon.
Some even think we’re already at that point, but anyone who’s ever tried Siri, Cortana, or Amazon’s Alexa knows that they are all still pretty dumb. However, this has gotten me thinking about what the move from a visually driven interface to a voice-controlled interface will mean for us as designers.
“Advances in machine learning mean that voice recognition has gone from a high error rate (>25%) to a low rate (<5%) — i.e. it basically works almost all the time.” — Nate Clinton
I think the quote above highlights perfectly what has been accomplished so far. Voice recognition HAS improved a lot, but even though Siri may understand what I’m saying, that’s not the same as saying she’s able to act on the command. 70% of my Siri queries are still ‘set a timer for 8 minutes’ when making spaghetti, while the other 30% are ‘start a new outdoor run’. She always gets both of these right, but I would not go as far as saying it has anything to do with being “intelligent”.
When things go wrong
One of the most overlooked aspects of creating user experiences is what happens when things go wrong. Sure, there have been some great 404 pages created, but great user experiences go beyond that. They try to cover every misstep a user might take. What happens if they enter invalid information into a form? What about when they click ‘Print’ without first selecting an item? Designing for blue-sky scenarios is the easy part. It is harder to accept that all user experiences have cloudy days too. Some are even like Bergen, Norway, where it’s raining all day, every day.
Fin, a new AI-powered assistant, is a great example of this. I love how their promo video (below) makes me as a user feel completely empowered by their service. These people, just casually telling their smartphone to carry out all of these chores that no-one wants to do. Have a look:
Do you notice what’s not in the video, though? Fin’s response. There’s no visual or audio confirmation that Fin has understood the input or carried out the requests. They list the tasks on their website, but there’s no reason to believe that Fin could actually solve them. I could tell Siri all of those things too, just to have her tell me “here’s what I found on the web for…” (cue sad trombone sound).
As technology offers us more and more options and possibilities, our work as UX designers will grow to include even more edge cases. As users’ tolerance for friction with these services continues to decrease, our work will increasingly need to cover more ‘what if’ scenarios.
“Designing for voice and chat will be a sought-after skill in the UX profession in the very near future (now, in fact). The platforms will battle for market share and they will add capabilities rapidly. The SDKs themselves will evolve to be more turnkey, and third parties will join the fray to create tools for makers.”
So ask yourself this question: What will all the highly skilled Dribbble UI designers do? 😉