Explaining Explainable Artificial Intelligence
Explainable Artificial Intelligence (XAI) is one of the hottest topics in AI today. Ironically, one would expect that a key motivation for XAI is to help people better understand AI and the AI models in use. In practice, however, the diversity of opinions and perspectives on XAI has created more ambiguity and confusion than it has resolved. Even the term "explanation" itself has been used inconsistently across the literature, making it close to impossible for newcomers to the field to find coherence or aspire to consistency. This diversity is reaching an unhealthy state, with orthogonal definitions in circulation and antonyms and incommensurable concepts treated as synonyms. The aim of this presentation is to disambiguate XAI, taking the audience on a journey that starts from the basics, travels through contemporary literature, and lands on the current challenges of XAI, offering food for thought along the way. My aim is not to unify XAI or to create universal agreement. My aim is to maximise the audience's understanding of XAI and to give those who disagree with me a basis for communicating their disagreement in concise statements.