Primary supervisor
Mor Vered
Co-supervisors
- Chenyuan Zhang
- Nir Lipovetzky
In human-AI collaboration, it is essential for AI systems to understand and anticipate human behavior in order to coordinate effectively. Conversely, humans also form inferences about the agent's beliefs and goals to facilitate smoother collaboration. As a result, AI agents should adapt their behavior to align with human reasoning patterns, making their actions more interpretable and predictable. This principle forms the foundation of transparent planning (MacNally et al., 2018).
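To make the idea concrete, here is a rough sketch (not MacNally et al.'s actual algorithm, which plans over full action sequences): a transparent agent can score each candidate action by the posterior probability a Bayesian observer would assign to the agent's true goal, and prefer the most legible action. All names and numbers below are hypothetical.

```python
import numpy as np

def transparent_action(actions, prior, lik, true_goal):
    """Choose the action under which a Bayesian observer's posterior
    on the agent's true goal is highest: P(g | a) ∝ P(a | g) P(g)."""
    def legibility(a):
        post = prior * lik[a]
        return post[true_goal] / post.sum()
    return max(actions, key=legibility)

# Hypothetical example: 3 candidate goals, uniform prior.
# lik[a][g] = P(a | g): how expected action a is if the goal were g.
lik = {"left":  np.array([0.8, 0.3, 0.3]),
       "right": np.array([0.4, 0.6, 0.5])}
prior = np.ones(3) / 3
print(transparent_action(["left", "right"], prior, lik, true_goal=0))  # "left"
```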
A key prerequisite for transparent planning is a robust model of how humans interpret and infer goals from observed behavior (i.e., a human model of goal recognition). Prior work has explored human goal recognition in controlled environments (Zhang et al., 2024) by manipulating factors such as timing and solvability in puzzle-like tasks. However, that work focuses only on passive settings, in which the observer merely watches the actor. Active goal recognition (AGR) extends the standard goal recognition task to a more realistic framework in which observers are not passive but can interact with the environment to actively gather information. There is therefore a need to extend these models to account for human behavior in active goal recognition scenarios. Such an extension is critical for building AI systems that can both interpret and be interpreted by humans in interactive, real-world environments.
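The observer model this presupposes can be stated compactly. In the Bayesian framing of Zhang et al. (2024), goal inference is a posterior update over candidate goals after each observed action; the sketch below is a paraphrase of that framing with illustrative numbers, not their fitted model.

```python
import numpy as np

def sequential_posterior(prior, step_likelihoods):
    """Observer's belief after a sequence of observed actions:
    P(g | a_1..a_n) ∝ P(g) * prod_t P(a_t | g)."""
    belief = prior.copy()
    for lik in step_likelihoods:   # lik[g] = P(a_t | g)
        belief *= lik
        belief /= belief.sum()     # renormalise after each observation
    return belief

# Hypothetical run: 3 candidate goals, uniform prior, two observed actions.
prior = np.ones(3) / 3
steps = [np.array([0.7, 0.2, 0.1]), np.array([0.6, 0.3, 0.1])]
print(sequential_posterior(prior, steps).round(3))  # mass shifts to goal 0
```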
Student cohort
Aim/outline
Zhang et al. (2025) introduced a model for active goal recognition in which the observer actively gathers information and updates their beliefs based on observed interactions. In this study, we investigate whether this computational model can also account for human action selection and belief updates in active goal recognition tasks, thereby serving as a cognitively plausible model of human reasoning in such contexts.
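As a loose sketch of the observe-and-update loop such a model implies (the greedy information-gain criterion and the likelihood tables below are assumptions made for illustration, not necessarily the mechanics of Zhang et al., 2025):

```python
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -(p * np.log(p)).sum()

def update(belief, lik_col):
    """Bayes update after observing o: P(g | o) ∝ P(o | g) P(g)."""
    post = belief * lik_col
    return post / post.sum()

def expected_entropy(belief, lik):
    """Expected posterior entropy after one sensing action,
    where lik[g, o] = P(o | g) for that action."""
    p_obs = belief @ lik  # P(o) = sum_g P(o | g) P(g)
    return sum(p_obs[o] * entropy(update(belief, lik[:, o]))
               for o in range(lik.shape[1]))

# Hypothetical setup: 3 goals, 2 sensing actions, binary observations.
lik = [np.array([[0.9, 0.1], [0.2, 0.8], [0.5, 0.5]]),
       np.array([[0.5, 0.5], [0.5, 0.5], [0.1, 0.9]])]
belief = np.ones(3) / 3

# Myopic policy: probe whose outcome is expected to reduce uncertainty most.
probe = min(range(len(lik)), key=lambda a: expected_entropy(belief, lik[a]))
obs = 0  # stand-in for the observation the environment would return
belief = update(belief, lik[probe][:, obs])
print(probe, belief.round(3))
```

Greedy one-step information gain is only one candidate policy; whether human observers follow it, or something cheaper, is exactly the kind of question the model comparison described below can address.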
Specifically, we will use the Planimation tool from editor.planning.domains to design interactive environments for human experiments. Participants will perform AGR tasks, and their behavior will be recorded through an online platform (e.g., Prolific). After data collection, we will evaluate the AGR model across the domains used in the human experiments, compare its fit to human behavior against alternative models, and identify which approach best captures patterns of human reasoning.
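One concrete way to operationalise "fit to human behavior" (an option for illustration, not a commitment of the project) is the mean log-likelihood each candidate model assigns to the actions participants actually took; all data below are placeholders.

```python
import numpy as np

def mean_log_likelihood(model_probs, human_choices):
    """Average log-probability a model assigns to participants'
    observed choices; higher means a better fit."""
    return np.mean([np.log(p[c]) for p, c in zip(model_probs, human_choices)])

# Placeholder data: per-trial predicted action distributions for the
# AGR model and a uniform baseline, plus the chosen action per trial.
agr_model = [np.array([0.7, 0.2, 0.1]), np.array([0.1, 0.6, 0.3])]
baseline  = [np.array([1/3] * 3),       np.array([1/3] * 3)]
choices = [0, 1]

print(mean_log_likelihood(agr_model, choices))  # ≈ -0.434
print(mean_log_likelihood(baseline, choices))   # ≈ -1.099
```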
URLs/references
MacNally, A. M., Lipovetzky, N., Ramirez, M., & Pearce, A. R. (2018, July). Action selection for transparent planning. In Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent Systems (pp. 1327–1335).
Zhang, C., Kemp, C., & Lipovetzky, N. (2024, May). Human goal recognition as Bayesian inference: Investigating the impact of actions, timing, and goal solvability. In Proceedings of the 23rd International Conference on Autonomous Agents and Multiagent Systems (pp. 2066–2074).
Required knowledge
- Programming experience (required)
- Familiarity with web development (preferred)
- Understanding of classical planning or search algorithms (preferred)