
Explainable AI (XAI) for Disobedient Robotics

Primary supervisor

Mor Vered

Co-supervisors

  • Dana Kulic

As humans, we have the ability, and even the necessity, to distinguish between orders that are ethical, safe and necessary and orders that may be harmful or unethical. In theory, this ability should also exist in robots. To quote Asimov's Second Law, "A robot must obey orders given by human beings except where such orders would conflict with the First Law", the First Law being that a robot must not injure, or allow to be injured, any human. A robot should always act ethically and work in its operator's best interest; however, some orders may be in direct opposition to the wellbeing of its operator or other humans.

This leads to a fundamental question we wish to explore: how would robot disobedience affect the human-robot relationship? While there has been work studying how to repair human trust in robots following a robot's failure to execute a command or task [Baker 2018, Esterwood 2022, Park 2023], little work has been done to estimate the effects of direct disobedience on human-robot trust and joint performance. Even accepting that disobedience in robots might be useful, it remains imperative that the human understand why the robot behaved in such a way, so that similar occurrences can be prevented in the future and the relationship can be repaired. We hope to further the collaborative relationship by incorporating eXplainable AI (XAI) best practices to explain the disobedient occurrence and its underlying contributing factors.

Required knowledge


  • Strong background in computer science in general
  • Familiarity with and understanding of the basic principles underlying automated reasoning and robotics
  • C/C++ programming knowledge
  • Python programming

Learn more about minimum entry requirements.