Goal Recognition is the task of inferring the goal of an agent from their action logs. Goal Recognition assumes these logs are collected by an independent process that is not controlled by the observer. Active Goal Recognition extends Goal Recognition by also assigning the data collection task to the observer. This Ph.D. project will develop a unified probabilistic and decision-theoretic framework to address the central question: how should an observer act in an environment to actively uncover the goal of the agent?
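The probabilistic core of goal recognition can be illustrated with a Bayesian update over candidate goals. This is a minimal sketch, not the project's actual model: the goal names, prior, and likelihood values below are made-up toy numbers.

```python
def posterior_over_goals(priors, likelihoods):
    """Bayes update: P(goal | actions) is proportional to P(actions | goal) * P(goal)."""
    unnorm = {g: priors[g] * likelihoods[g] for g in priors}
    z = sum(unnorm.values())
    return {g: p / z for g, p in unnorm.items()}

# Two hypothetical goals with a uniform prior; the observed action log
# is assumed to be three times more likely under goal "A" than "B".
priors = {"A": 0.5, "B": 0.5}
likelihoods = {"A": 0.6, "B": 0.2}
post = posterior_over_goals(priors, likelihoods)
print(post)  # goal "A" receives posterior 0.75
```

Active goal recognition then adds a decision-theoretic layer on top of this update: the observer chooses actions expected to make the resulting posterior as peaked as possible.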
Research projects in Information Technology
PhD opportunities in Multimodal LLMs for human understanding
We have several PhD opportunities available in areas such as Multimodal Large Language Models (MLLM) for human understanding, MLLM safety, and Generative AI.
If you have published in top-tier conferences (e.g., CVPR, ICCV, ECCV, NeurIPS, etc.), you will have a strong chance of receiving a full PhD scholarship.
Uncertainty quantification using deep learning
Two PhD scholarships are available, funded through a DECRA, exploring the use of deep learning models for uncertainty quantification.
GEMS 2026: Toward Distribution-Robust Medical Imaging Models in the Wild
While deep learning has shown remarkable performance in medical imaging benchmarks, translating these results to real-world clinical deployment remains challenging. Models trained on data from one hospital or population often fail when applied elsewhere due to distributional shifts. Since acquiring new labeled data is often costly or infeasible due to rare diseases, limited expert availability, and privacy constraints, robust solutions are essential.
PhD/RA/Master's thesis opportunities on Multimodal LLMs
We have several PhD, Research Assistant (RA), and Master's research thesis opportunities available in areas such as Multimodal Large Language Models (MLLM) for human understanding, MLLM safety, and Generative AI.
Designing Responsible AI for Scalable Educational Assessment and Feedback
As education systems increasingly adopt AI to support teaching and learning, the automation of assessment and feedback processes has emerged as a critical area of innovation. Large-scale learning environments—such as MOOCs, online degrees, and data-intensive Learning Management Systems—necessitate scalable solutions to provide timely, high-quality feedback. However, existing AI-powered assessment systems often raise ethical, pedagogical, and fairness concerns, including issues of bias, explainability, and learner agency.
Using AI and machine learning to improve polygenic risk prediction of disease
We are interested in understanding genetic variation among individuals and how it relates to disease. To do this, we study genomic markers or variants called single nucleotide polymorphisms, or SNPs for short. A SNP is a single base position in DNA that varies among human individuals. The Human Genome Project has found that these single-letter changes occur all over the human genome; each person carries about 5 million of them! While most SNPs have no effect, some can influence traits or increase the risk of certain diseases.
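A common way to combine many small SNP effects into a single prediction is a polygenic risk score: a weighted sum of risk-allele counts. The sketch below is purely illustrative; the SNP identifiers and effect sizes are invented, not drawn from any real study.

```python
# Hypothetical per-SNP effect sizes (e.g., from a GWAS); made up here.
effect_sizes = {"rs_a": 0.12, "rs_b": -0.05, "rs_c": 0.30}

def polygenic_risk_score(genotype, weights):
    """genotype maps a SNP id to its risk-allele count (0, 1, or 2)."""
    return sum(weights[snp] * genotype.get(snp, 0) for snp in weights)

# One individual's allele counts at those three SNPs.
person = {"rs_a": 2, "rs_b": 1, "rs_c": 0}
score = polygenic_risk_score(person, effect_sizes)
print(round(score, 2))  # 0.19
```

The project's machine-learning angle is in improving on this simple additive model, e.g., by learning non-linear combinations of variants.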
Minimum Message Length
Minimum Message Length (MML) is an elegant information-theoretic framework for statistical inference and model selection developed by Chris Wallace and colleagues. The fundamental insight of MML is that both parameter estimation and model selection can be interpreted as problems of data compression. The principle is simple: if we can compress data, we have learned something about its underlying structure.
Agentic AI for Software Teams: Building the Next Horizon of SWE Agents for Society with Atlassian
🎯 Research Vision
The next generation of software engineering tools will move beyond autocomplete and static code generation toward autonomous, agentic systems — AI developers capable of planning, reasoning, and improving software iteratively. This project explores the development of agentic AI systems that act as intelligent collaborators: understanding project goals, decomposing problems, writing and testing code, and learning from feedback.
🔍 Research Objectives
Explainability of Reinforcement Learning Policies for Human-Robot Interaction
This PhD project will investigate the explainability of reinforcement learning (RL) policies in the context of human-robot interaction (HRI), aiming to bridge the gap between advanced RL decision-making and human trust, understanding, and collaboration. The research will critically evaluate and extend state-of-the-art explainability methods for RL, such as policy summarization, counterfactual reasoning, and interpretable model approximations, to make robot decision processes more transparent and intuitive.