
Generating human-centered explanation for a social robot capable of multimodal emotion recognition

Primary supervisor

Mor Vered


Research area

Human Centred AI

Robots in Human-Robot Interaction (HRI) often contain complex components and advanced functions based on automated decision-making models. In particular, affective HRI systems aim to achieve intended outcomes, such as improving the mental or physical health of the user, by understanding, responding to, and influencing the user’s emotional states. While automatic emotion recognition and other robot perception functions have improved greatly with advances in deep learning and machine learning, there are increasing concerns that these models are a “black box” to users of such affective HRI systems. Since current HRI systems are far from perfect, technical errors and behaviors that users perceive as inappropriate are inevitable during interaction [1] and may lead to distrust and misuse of the system. Apart from compliance, there are ethical issues to consider as well: automatic emotion recognition has raised concerns about potential violations of a user’s privacy and possible misuse that can limit basic human rights, such as the freedom of expression [2].

Adding explanations to system output is known to increase user trust and compliance [3]. It is therefore critical to develop explainable HRI systems that provide explanations empowering the user to make informed decisions, especially when an error has occurred. Prior research has demonstrated the benefits of a robot capable of generating explanations that inform the user of the cause and context of a technical error [4]. The PhD candidate will work at the intersection of robotics, explainable AI (XAI), and public policy research to develop human-centred explanations for robotic multimodal emotion recognition. The system will further strive to adapt its functions based on user preferences to achieve an acceptable and effective interaction with the user. As explanations are a form of conversation rather than a static, one-way process [5], this project will include empirical human studies to evaluate the effectiveness and acceptance of such explanations and their effect on the human-robot interaction.


[1] Tian, L. and Oviatt, S., 2021. A Taxonomy of Social Errors in Human-Robot Interaction. ACM Transactions on Human-Robot Interaction (THRI), 10(2), pp.1-32.

[2] Zhang, D., Mishra, S., Brynjolfsson, E., Etchemendy, J., Ganguli, D., Grosz, B., Lyons, T., Manyika, J., Niebles, J.C., Sellitto, M. and Shoham, Y., 2021. The AI Index 2021 Annual Report. arXiv preprint arXiv:2103.06312.

[3] Lee, J.D. and See, K.A., 2004. Trust in automation: Designing for appropriate reliance. Human Factors, 46(1), pp.50-80.

[4] Das, D., Banerjee, S. and Chernova, S., 2021. Explainable AI for Robot Failures: Generating Explanations that Improve User Assistance in Fault Recovery. arXiv preprint arXiv:2101.01625.

[5] Miller, T., 2019. Explanation in artificial intelligence: Insights from the social sciences. Artificial Intelligence, 267, pp.1-38.
