Research projects in Information Technology

Using AI and machine learning to improve polygenic risk prediction of disease

We are interested in understanding genetic variation among individuals and how it relates to disease. To do this, we study genomic markers, or variants, called single nucleotide polymorphisms (SNPs for short). A SNP is a single base position in DNA that varies among human individuals. The Human Genome Project found that these single-letter changes occur all over the human genome; each person carries about 5 million of them. While most SNPs have no effect, some can influence traits or increase the risk of certain diseases.
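
As a rough illustration of how SNP data feed into polygenic risk prediction (not a description of our methods), the sketch below computes a toy polygenic risk score as a weighted sum of effect-allele counts. The SNP identifiers, effect weights, and genotype are invented for demonstration.

```python
# Illustrative sketch only: a polygenic risk score (PRS) computed as a weighted
# sum of effect-allele counts. The SNP IDs, effect weights, and genotype below
# are invented for demonstration; real scores use GWAS-derived weights over
# very large numbers of SNPs.

# Per-SNP effect weights (e.g. log odds ratios) for a few hypothetical SNPs.
effect_weights = {
    "rs0000001": 0.12,
    "rs0000002": -0.05,
    "rs0000003": 0.30,
    "rs0000004": 0.08,
}

# One individual's genotype: count of effect alleles (0, 1 or 2) at each SNP.
genotype = {
    "rs0000001": 2,
    "rs0000002": 1,
    "rs0000003": 0,
    "rs0000004": 1,
}

def polygenic_risk_score(weights, alleles):
    """Weighted sum of effect-allele counts over the SNPs with known weights."""
    return sum(w * alleles.get(snp, 0) for snp, w in weights.items())

print(f"toy PRS: {polygenic_risk_score(effect_weights, genotype):.3f}")
```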

Supervisor: Prof Enes Makalic

Minimum Message Length

Minimum Message Length (MML) is an elegant information-theoretic framework for statistical inference and model selection developed by Chris Wallace and colleagues. The fundamental insight of MML is that both parameter estimation and model selection can be interpreted as problems of data compression. The principle is simple: if we can compress data, we have learned something about its underlying structure.
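
To make the compression view concrete, here is a minimal two-part code-length sketch in the spirit of MML (not Wallace's exact construction). A hypothesis is preferred when the bits needed to state it, plus the bits needed to encode the data under it, are fewer than for a rival hypothesis; the coin data and the crude 8-bit parameter cost below are assumptions for illustration only.

```python
import math

# Minimal two-part code-length sketch in the spirit of MML (illustrative, not
# Wallace's exact construction): prefer the hypothesis whose total message
# length (bits to state the model plus bits to encode the data under it) is
# shortest. All numbers below are made up for demonstration.

def data_bits(data, p):
    """Bits needed to encode a 0/1 sequence under a Bernoulli(p) model."""
    p = min(max(p, 1e-9), 1 - 1e-9)  # guard against log(0)
    return -sum(math.log2(p if x else 1.0 - p) for x in data)

def fair_coin_bits(data):
    # "Fair coin" hypothesis: no free parameter, so the model part is ~0 bits.
    return data_bits(data, 0.5)

def biased_coin_bits(data, param_bits=8):
    # "Biased coin" hypothesis: pay a crude fixed cost to state p (a stand-in
    # for the proper MML parameter-coding term), then encode the data under
    # the fitted value of p.
    p_hat = sum(data) / len(data)
    return param_bits + data_bits(data, p_hat)

flips = [1] * 14 + [0] * 6  # 14 heads, 6 tails
fair, biased = fair_coin_bits(flips), biased_coin_bits(flips)
print(f"fair coin:   {fair:5.1f} bits")
print(f"biased coin: {biased:5.1f} bits")
print("preferred:", "biased" if biased < fair else "fair")
```

With only 20 mildly biased flips, the extra parameter does not pay for itself in compression, which is exactly the trade-off between model complexity and goodness of fit that MML formalises.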

Supervisor: Prof Enes Makalic

Agentic AI for Software Teams: Building the Next Horizon of SWE Agents for Society with Atlassian

🎯 Research Vision

The next generation of software engineering tools will move beyond autocomplete and static code generation toward autonomous, agentic systems — AI developers capable of planning, reasoning, and improving software iteratively. This project explores the development of agentic AI systems that act as intelligent collaborators: understanding project goals, decomposing problems, writing and testing code, and learning from feedback.
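
Purely as an illustration of what "agentic" means here, the sketch below shows a minimal plan/act/test/reflect loop; the propose_patch and run_tests functions are hypothetical stand-ins for an LLM-backed code generator and a project test suite, and no specific Atlassian or vendor API is implied.

```python
from __future__ import annotations

from dataclasses import dataclass, field

# Purely illustrative plan/act/test/reflect loop. propose_patch and run_tests
# are hypothetical stand-ins for an LLM-backed code generator and a project
# test suite; no specific Atlassian or vendor API is implied.

@dataclass
class AgentState:
    goal: str
    attempts: list = field(default_factory=list)
    feedback: str = ""

def propose_patch(state: AgentState) -> str:
    # Hypothetical: an LLM would plan and generate a candidate change here,
    # conditioned on the goal and on feedback from earlier attempts.
    return f"patch #{len(state.attempts) + 1} for: {state.goal}"

def run_tests(patch: str) -> tuple[bool, str]:
    # Hypothetical: run the project's test suite against the candidate patch.
    passed = patch.startswith("patch #3")
    return passed, "all tests pass" if passed else "2 tests failing"

def agent_loop(goal: str, max_iters: int = 5) -> str | None:
    state = AgentState(goal=goal)
    for _ in range(max_iters):
        patch = propose_patch(state)            # plan + act
        passed, report = run_tests(patch)       # test
        state.attempts.append((patch, report))  # remember the outcome
        state.feedback = report                 # learn from feedback
        if passed:
            return patch
    return None

print(agent_loop("add input validation"))
```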

🔍 Research Objectives

Probabilistic Active Goal Recognition

Goal Recognition is the task of inferring an agent's goal from a log of its actions. Standard Goal Recognition assumes such action logs are collected by an independent process that is not controlled by the observer. Active Goal Recognition extends this setting by also assigning the data-collection task to the observer. This PhD project will develop a unified probabilistic and decision-theoretic perspective to address the central question: how should an observer act in the environment to actively uncover the agent's goal?
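
As a toy illustration of the probabilistic view (not the project's actual formulation), the sketch below maintains a Bayesian posterior over two candidate goals and lets the observer choose, by expected posterior entropy, which sensor to check next; the goals, sensors, and probabilities are all invented.

```python
import math

# Toy sketch of probabilistic goal recognition with a simple "active" step.
# The goals, sensors, and detection probabilities are invented for illustration.

GOALS = ("coffee machine", "printer")
PRIOR = {g: 0.5 for g in GOALS}

# P(sensor fires | goal): each sensor watches a corridor the agent may use.
SENSORS = {
    "kitchen door": {"coffee machine": 0.9, "printer": 0.2},
    "office door":  {"coffee machine": 0.3, "printer": 0.8},
}

def update(prior, sensor, fired):
    """Bayes update of P(goal | sensor reading)."""
    like = {g: SENSORS[sensor][g] if fired else 1 - SENSORS[sensor][g] for g in prior}
    post = {g: prior[g] * like[g] for g in prior}
    z = sum(post.values())
    return {g: p / z for g, p in post.items()}

def entropy(dist):
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def expected_posterior_entropy(prior, sensor):
    """How uncertain the observer expects to remain after checking this sensor."""
    p_fire = sum(prior[g] * SENSORS[sensor][g] for g in prior)
    return (p_fire * entropy(update(prior, sensor, True))
            + (1 - p_fire) * entropy(update(prior, sensor, False)))

# Active step: pick the observation expected to reduce uncertainty the most.
best = min(SENSORS, key=lambda s: expected_posterior_entropy(PRIOR, s))
print("observe:", best)
print("posterior if it fires:", update(PRIOR, best, True))
```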

Supervisor: Dr Buser Say

Indigenous (Energy)

This scholarship opportunity is open to domestic applicants who identify as Aboriginal or Torres Strait Islander.

Explainability of Reinforcement Learning Policies for Human-Robot Interaction

This PhD project will investigate the explainability of reinforcement learning (RL) policies in the context of human-robot interaction (HRI), aiming to bridge the gap between advanced RL decision-making and human trust, understanding, and collaboration. The research will critically evaluate and extend state-of-the-art explainability methods for RL, such as policy summarization, counterfactual reasoning, and interpretable model approximations, to make robot decision processes more transparent and intuitive.
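
One of the named techniques, interpretable model approximation, can be sketched as distilling a policy into a shallow decision tree whose rules a person can read. In the example below the "policy" is a small made-up state-action table standing in for a trained RL policy, and scikit-learn is assumed to be available.

```python
# Illustrative sketch of "interpretable model approximation": distil a learned
# robot policy into a shallow decision tree whose rules a human can read.
# The states, actions, and the policy itself are invented for demonstration,
# and scikit-learn is assumed to be available.

from sklearn.tree import DecisionTreeClassifier, export_text

# Each state: [distance_to_person (m), person_is_speaking (0/1), battery_level (0-1)]
states = [
    [0.5, 1, 0.9], [0.6, 1, 0.4], [2.0, 0, 0.8], [2.5, 0, 0.2],
    [0.4, 0, 0.7], [3.0, 1, 0.9], [1.0, 1, 0.1], [2.8, 0, 0.6],
]
# Actions the (hypothetical) black-box RL policy takes in those states.
actions = ["listen", "listen", "approach", "dock",
           "wait", "approach", "dock", "approach"]

# Fit a depth-2 surrogate tree that imitates the policy; its rules serve as a
# global, human-readable explanation of the policy's behaviour.
surrogate = DecisionTreeClassifier(max_depth=2, random_state=0)
surrogate.fit(states, actions)

print(export_text(
    surrogate,
    feature_names=["distance_to_person", "person_is_speaking", "battery_level"],
))
```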

Supervisor: Dr Mor Vered

Decision AI for biodiversity

Adaptive sequential decisions to maximise information gain and biodiversity outcomes

Supervisor: Prof Iadine Chades

Street-Level Environment Recognition On Moving Resource-Constrained Devices

Explainability and Compact representation of K-MDPs

Markov Decision Processes (MDPs) are frameworks used to model decision-making in situations where outcomes are partly random and partly under the control of a decision maker. While small MDPs are inherently interpretable, MDPs with thousands of states are difficult for humans to understand. The K-MDP problem is that of finding the best MDP with at most K states, using state-abstraction approaches to aggregate states into sub-groups. The aim of this project is to measure and improve the interpretability of K-MDP approaches using state-of-the-art XAI approaches.
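
As a rough sketch of the state-abstraction idea (not the actual K-MDP algorithm), the example below solves a small chain MDP with value iteration and then groups states with similar values into at most K abstract states, giving a compact summary a person could inspect; the MDP itself is made up.

```python
# Rough illustration of the state-aggregation idea behind K-MDPs (not the
# project's actual algorithm): solve a small chain MDP with value iteration,
# then aggregate states with similar values into at most K abstract states.
# The MDP itself is invented for demonstration.

N_STATES, GAMMA, K = 8, 0.9, 3
ACTIONS = ("left", "right")

def step(s, a):
    """Deterministic chain: move left or right; reward for reaching the right end."""
    s2 = max(0, s - 1) if a == "left" else min(N_STATES - 1, s + 1)
    return s2, (1.0 if s2 == N_STATES - 1 else 0.0)

# Value iteration on the full MDP.
V = [0.0] * N_STATES
for _ in range(200):
    new_V = []
    for s in range(N_STATES):
        backups = []
        for a in ACTIONS:
            s2, r = step(s, a)
            backups.append(r + GAMMA * V[s2])
        new_V.append(max(backups))
    V = new_V

# Aggregate: states whose values fall in the same bin share one abstract state,
# giving a compact (and more explainable) K-state summary of the original MDP.
lo, hi = min(V), max(V)
width = (hi - lo) / K or 1.0
group = [min(K - 1, int((v - lo) / width)) for v in V]

for k in range(K):
    members = [s for s, g in enumerate(group) if g == k]
    print(f"abstract state {k}: concrete states {members}")
```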

Supervisor: Dr Mor Vered