In recent years we have experienced an increasing deployment in our daily lives of applications based on Artificial Intelligence technology. Some of these applications are truly amazing, such as self-driving cars, translation systems, and automatic detection of illnesses. However, we have also seen applications which are disrupting our daily lives with unintended consequences, such as recommender systems which polarize public opinion and face recognition applications which can hinder our privacy. As the researcher Michael Jordan stated: “Just as early buildings and bridges sometimes fell to the ground — in unforeseen ways and with tragic consequences — many of our early societal-scale inference-and-decision-making systems are already exposing serious conceptual flaws.” In this context, a serious and systematic study of FAT (Fairness, Accountability and Transparency) in AI is critical. The popularity of the recently created FAT conference and of several related workshops at academic conferences, especially those on topics in artificial intelligence, shows how researchers and developers, as well as society at large, are becoming aware of the effects of AI technology. In recommender systems, algorithmic transparency has been a topic of study for at least 10 years, but only recently has this area attracted strong attention in the community. In this tutorial, Prof. Parra will present a general overview of FAT in AI, and will then focus on FAT for recommender systems through a theoretical presentation as well as practical activities.

Short Bio

Denis Parra is an Assistant Professor at the Department of Computer Science at PUC Chile. He obtained his PhD at the University of Pittsburgh, USA. Prof. Parra conducts research in Recommender Systems as well as in applications at the crossroads of HCI and AI. His work has focused on the role of visualization and interactivity in recommender systems on users’ perception of transparency, trust and controllability. He has published his research in conferences such as ACM IUI, ACM RecSys, ECIR, and UMAP, as well as in journals such as ACM TiiS, IJHCS, ESWA, PLoS ONE and UMUAI. Currently, Prof. Parra works on applications of explainable AI (XAI) in Recommender Systems, as well as in Medicine and in Fake News. He is a researcher at the Millennium Institute on Data Fundamentals, a large Chilean initiative to study the impact of data across disciplines such as Computer Science, Statistics, Political Science and Journalism. He is also a member of the AI Lab at PUC and leader of the SocVis research group.

[slides]