Explainable AI (XAI) as Model Reconciliation

Primary supervisor

Mor Vered

Creating efficient and beneficial user-agent interaction is a challenging problem: challenges include improving performance and trust, and reducing both over- and under-reliance. We investigate the development of explainable AI systems that can explain an AI agent's decisions to human users. Past work on plan explanations has focused primarily on explaining the correctness and validity of plans. In this work we will use Theory of Mind (ToM) to reason about the mental models of the actors involved in AI systems (designers, agents, and users) and expand on the "Model Reconciliation Problem", i.e., reconciling the different actors' beliefs and intentions. We will study the properties of such explanations, present algorithms for automatically computing them as well as extensions to existing frameworks, and evaluate their performance both empirically and through controlled user studies.
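
To make the reconciliation idea concrete: in the model reconciliation literature an explanation is often formalised as a smallest set of updates to the human's model that makes the agent's plan valid (or optimal) from the human's point of view. The Python sketch below is a rough illustration only, not any existing framework's algorithm: it abstracts each model as a set of facts and brute-forces the smallest reconciling update. The names (`minimal_explanation`, `plan_is_valid`) and the set-of-facts abstraction are illustrative assumptions.

```python
from itertools import combinations

def minimal_explanation(agent_model, human_model, plan_is_valid):
    """Toy model-reconciliation search (illustrative sketch).

    agent_model, human_model: frozensets of facts standing in for full
    planning models. plan_is_valid: callable that reports whether the
    agent's plan checks out under a given model. Returns the smallest
    set of updates to human_model that makes the plan valid, or None.
    """
    # Candidate updates are exactly the facts the two models disagree on.
    adds = [("add", f) for f in agent_model - human_model]
    removes = [("remove", f) for f in human_model - agent_model]
    candidates = adds + removes

    # Try update sets of increasing size; the first hit is minimal.
    for k in range(len(candidates) + 1):
        for updates in combinations(candidates, k):
            model = set(human_model)
            for op, fact in updates:
                if op == "add":
                    model.add(fact)
                else:
                    model.discard(fact)
            if plan_is_valid(frozenset(model)):
                return updates
    return None  # no reconciling update exists within the disagreements

# Toy usage: the human does not know the door is locked, so the agent's
# "fetch the key first" plan looks needlessly long to them.
agent = frozenset({"has_key_action", "door_locked"})
human = frozenset({"has_key_action"})
plan_ok = lambda m: {"has_key_action", "door_locked"} <= m
print(minimal_explanation(agent, human, plan_ok))  # (('add', 'door_locked'),)
```

The brute-force search is exponential in the number of model differences; it only serves to show the shape of the problem that the algorithms developed in this project would need to solve efficiently.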

Required knowledge

  • A strong background in computer science
  • Familiarity with the basic principles underlying automated reasoning and robotics
  • C/C++ programming
  • Python programming
