A Meaningful Road to Explanation
Abstract: Cyber-human systems are systems formed by the coordinated interaction of human and computational components. The latter can only be justified in these systems to the extent that they are meaningful to humans, in both senses of ‘meaning’: in the sense of semantics as well as in the sense of purpose or significance. On the one hand, the data these components manipulate only acquire meaning when mapped to shared human conceptualizations of the world. On the other hand, these components can only be justified if ethically designed. Ultimately, we can only build trustworthy cyber-human systems if the interoperation of their components is meaning preserving, i.e., if we are able to semantically interoperate these components and transparently demonstrate (i.e., explain) how their interoperation positively contributes to human values and goals. To meet these requirements, we must be able to explicitly reveal and safely relate the different theories of the world (i.e., ontologies) embedded in these components. In this talk, I discuss the strong relation between the notions of semantics, ontology, and explanation under certain interpretations. Specifically, I will present a notion of explanation termed Ontological Unpacking, which aims at explaining symbolic domain descriptions (conceptual models, knowledge graphs, logical specifications). I show that the models produced by Ontological Unpacking differ from their traditional counterparts not only in their expressivity but also in their nature: while the latter typically have a merely descriptive nature, the former have an explanatory one. Moreover, I show that it is exactly this explanatory nature that is required for semantic interoperability and, hence, trustworthiness. Finally, I discuss the relation between Ontological Unpacking and other forms of explanation in philosophy and science, as well as in Artificial Intelligence. I will argue that the current trend in XAI (Explainable AI), in which “to explain is to produce a symbolic artifact” (e.g., a decision tree or a counterfactual description), is an incomplete project resting on a false assumption: these artifacts are not “inherently interpretable”, and they should be taken as the beginning of the road to explanation, not the end.
Bio: Giancarlo Guizzardi is a Full Professor of Software Science and Evolution as well as Chair and Department Head of Semantics, Cybersecurity & Services (SCS) at the University of Twente, The Netherlands. He is also an Affiliated/Guest Professor at the Department of Computer and Systems Sciences (DSV) at Stockholm University, Sweden. He has been active for nearly three decades in the areas of Formal and Applied Ontology, Conceptual Modelling, Business Informatics, and Information Systems Engineering, working with a multi-disciplinary approach in Computer Science that aggregates results from Philosophy, Cognitive Science, Logics and Linguistics. Over the years, he has delivered keynote speeches at several key international conferences in these fields (e.g., ER, BPM, the upcoming CAiSE, IEEE ICSC). He is currently an associate editor of a number of journals, including Applied Ontology and Data & Knowledge Engineering, a co-editor of the Lecture Notes in Business Information Processing series, and a member of several international journal editorial boards. He is also a member of the Steering Committees of ER, EDOC, and IEEE CBI, and of the Advisory Board of the International Association for Ontology and its Applications (IAOA). Finally, he is an ACM Distinguished Speaker and the recipient of the 2023 ER Fellows Award.