Not so long ago, when working on an AI-powered product, we had to treat AI models as complete black boxes – mostly due to the lack of tools. That is not such a big deal when AI merely suggests a new restaurant to visit, but we all want to avoid the «computer says no» scenario when AI influences your chances of getting a bank loan or the right medical treatment.
Luckily, many explainable AI methods have been developed in recent years. However, the vast majority of them have been created by scientists and engineers mainly for themselves, so it is tricky to embed them directly in a user-facing part of a product, where a data literacy gap likely exists.
The goal of this talk is to provide UX professionals with an overview of explainable AI methods and the ways they can use them to facilitate transparency, overcome the data literacy gap, decrease the power/knowledge imbalance and build user trust in AI-powered products. I will gently unpack terms like specificity, LIME, SHAP or «dropout on prediction», so that designers without a technical background can find a common language with their fellow data scientists. Additionally, I will discuss how data visualization can be used as a powerful means to support understanding and provide relevant context. This talk will be held in English.
Teresa is a Zurich-based data scientist with a physicist’s soul and a passion for human-centered design. Her main focus is on the first mile of data science: user research and product design, data analytics, and prototyping of machine learning models. She also designs and develops data visualizations and data stories. When not doing anything data-related, Teresa plays folk music on common and less common musical instruments.