Incorporating social norms and context in reinforcement learning

Primary supervisor

Leimin Tian

Co-supervisors


Deep reinforcement learning (RL) has shown promising performance when applied to human-robot interaction. In particular, previous studies have demonstrated that a robot equipped with such a model can learn social skills over time, for example, shaking hands [1] or approaching a group of people while adhering to social norms [2]. However, deep RL is prone to erroneous modelling of the task [3]: it is difficult to design a reward function that encourages only the desired behaviors [4], much as animals and humans can develop “superstitious” behaviors under operant conditioning [5]. The goal of this project is to design a robot that learns a social norm using deep RL, and to identify reward functions that avoid producing a “superstitious” robot.
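To make the reward-design problem concrete, the following is a minimal, self-contained sketch (not part of the project brief): tabular Q-learning on a toy chain environment with a pluggable reward function, so that alternative reward designs can be swapped in and the policies they induce compared. The environment, hyperparameters, and function names are all illustrative assumptions.

```python
import random

def q_learning(reward_fn, n_states=5, n_actions=2, episodes=500,
               alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    """Tabular Q-learning on a 1-D chain: action 1 steps right, action 0
    steps left; reaching the rightmost state ends the episode.
    reward_fn(s, a, s_next) is pluggable so that different reward
    designs can be compared on the same task."""
    rng = random.Random(seed)
    # Optimistic initialisation encourages systematic exploration.
    Q = [[1.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        for _ in range(50):
            if rng.random() < epsilon:
                a = rng.randrange(n_actions)       # explore
            else:
                a = max(range(n_actions), key=lambda i: Q[s][i])  # exploit
            s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
            r = reward_fn(s, a, s_next)
            # Standard Q-learning temporal-difference update.
            Q[s][a] += alpha * (r + gamma * max(Q[s_next]) - Q[s][a])
            s = s_next
            if s == n_states - 1:   # goal state reached, episode over
                break
    return Q

# A sparse task reward: +1 only when the goal state is reached.
def task_reward(s, a, s_next):
    return 1.0 if s_next == 4 else 0.0

Q = q_learning(task_reward)
# Greedy policy in the non-goal states: 1 means "step right".
policy = [max(range(2), key=lambda i: Q[s][i]) for s in range(4)]
print(policy)
```

A proxy reward passed in place of `task_reward` (e.g. rewarding some correlated but unintended event) would be learned just as readily, which is the "superstitious robot" failure mode the project aims to avoid.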

Student cohort

Double Semester

Aim/outline

Basic goals:

  1. Developing a reinforcement learning model that incorporates emotions to enable adaptive agent behaviors.
  2. Extending the emotional RL model to include observations of the environment, such as task outcomes or user engagement.
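One possible reading of the two basic goals, offered purely as an illustrative sketch rather than a prescribed design: maintain a scalar "mood" as a running average of the agent's TD errors (goal 1), and blend it, together with an observed engagement signal from the environment, into the reward the agent learns from (goal 2). All class and parameter names, coefficients, and the engagement observation are hypothetical.

```python
class EmotionalReward:
    """Combines an extrinsic task reward with a simple affect model."""

    def __init__(self, beta_mood=0.2, beta_engage=0.3, decay=0.9):
        self.mood = 0.0            # running affect estimate, roughly in [-1, 1]
        self.beta_mood = beta_mood
        self.beta_engage = beta_engage
        self.decay = decay

    def update_mood(self, td_error):
        # Positive surprises (TD errors) lift mood; negative ones depress it.
        self.mood = self.decay * self.mood + (1 - self.decay) * td_error

    def combined(self, task_reward, engagement):
        # engagement: hypothetical observation in [0, 1], e.g. a gaze or
        # attention estimate of the user; 0.5 is treated as neutral.
        return (task_reward
                + self.beta_mood * self.mood
                + self.beta_engage * (engagement - 0.5))

er = EmotionalReward()
er.update_mood(1.0)              # a pleasant surprise raises mood to 0.1
print(er.combined(1.0, 0.5))     # task reward shaded by current mood
```

The shaped reward `combined(...)` would replace the plain task reward inside an RL update loop, making the agent's behavior adapt to both task outcomes and user engagement.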

Possible extensions:

  1. Applying the emotional RL model to a social robot.
  2. Conducting human-robot interaction experiments to evaluate the emotional RL model.

URLs/references


Required knowledge

Python programming, reinforcement learning