I understand that deep design and UX questions are at stake. A complete reshuffle of a product strategy cannot happen overnight, and satisfying a handful of customers risks angering thousands of others.
However, our confidence in these systems is paramount to a great user experience.
Although designers do everything they can to arrange the perfect user flow, we end up in situations where we can all sense the limits and imperfections of an intelligent system. Yet we cannot recalibrate it, which hurts the brand's credibility and the system's likeability.
For all the Star Wars lovers, what makes C-3PO so annoying? Despite being extremely knowledgeable, he is severely opinionated, flawed, and unable to adapt to situations the way a human being can.
An AI system isn't a magic piece of software handling all of human nature in complex equations and thousands of lines of code. Despite all the hype around DeepMind and Artificial General Intelligence, an AI system's ambition should be to augment our experience and complement the way we use a service. It shouldn't try to substitute itself for humans.
The overnight explosion of data science came with unrealistic expectations: models are supposed to outperform humans, understand their actions, explain their behaviors, and anticipate their mood changes.
This puts a lot of pressure on data science teams to deliver models with the highest metrics. But how do we measure a model's effectiveness in the overall customer experience, and not simply against a set of actions to predict?
It takes courage to show a system's shortcomings. And ironically, by working closely with UX designers, data science teams could rethink explicit feedback-gathering mechanisms and improve their models significantly over time.
The benefit? The brand automatically becomes more human, and trust from its user base goes up.
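To make the explicit feedback idea concrete, here is a minimal sketch of what such a mechanism could look like: users accept or reject each prediction, and the running accept rate signals when the model has drifted and needs recalibration. All names (`FeedbackLoop`, `record`, `needs_recalibration`) are hypothetical, not from any specific library.

```python
from dataclasses import dataclass, field

@dataclass
class FeedbackLoop:
    """Collects explicit user feedback on predictions and flags
    when the model should be recalibrated (hypothetical sketch)."""
    threshold: float = 0.7          # minimum acceptable accept rate
    ratings: list = field(default_factory=list)

    def record(self, prediction_id: str, accepted: bool) -> None:
        # One explicit thumbs-up/thumbs-down per prediction.
        self.ratings.append((prediction_id, accepted))

    def accept_rate(self) -> float:
        if not self.ratings:
            return 1.0              # no evidence yet; assume healthy
        return sum(ok for _, ok in self.ratings) / len(self.ratings)

    def needs_recalibration(self) -> bool:
        # Surface the shortcoming instead of hiding it.
        return self.accept_rate() < self.threshold

loop = FeedbackLoop(threshold=0.7)
loop.record("rec-1", True)
loop.record("rec-2", False)
loop.record("rec-3", False)
print(round(loop.accept_rate(), 2))   # 0.33
print(loop.needs_recalibration())     # True
```

The point isn't the code itself but the product decision it encodes: the system openly measures how users actually feel about its output, rather than only optimizing an offline metric.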