Research projects in Information Technology

Uncertainty quantification using deep learning

Two PhD scholarships are available, funded through a DECRA, to explore the use of deep learning models for uncertainty quantification.

GEMS 2026: Toward Distribution-Robust Medical Imaging Models in the Wild

While deep learning has shown remarkable performance on medical imaging benchmarks, translating these results to real-world clinical deployment remains challenging. Models trained on data from one hospital or population often fail when applied elsewhere because of distributional shifts. Since acquiring new labeled data is often costly or infeasible (rare diseases, limited expert availability, and privacy constraints), robust solutions are essential.

Supervisor: Dr Chern Hong Lim

PhD/RA opportunities on Multimodal LLM

We have several PhD and Research Assistant (RA) opportunities available in areas such as Multimodal Large Language Models (MLLM) for human understanding, MLLM safety, and Generative AI.

If you have published in top-tier conferences (e.g., CVPR, ICCV, ECCV, NeurIPS, etc.), you will have a strong chance of receiving a full PhD scholarship.

Supervisor: Dr Qiuhong Ke

Designing Responsible AI for Scalable Educational Assessment and Feedback

As education systems increasingly adopt AI to support teaching and learning, the automation of assessment and feedback processes has emerged as a critical area of innovation. Large-scale learning environments—such as MOOCs, online degrees, and data-intensive Learning Management Systems—necessitate scalable solutions to provide timely, high-quality feedback. However, existing AI-powered assessment systems often raise ethical, pedagogical, and fairness concerns, including issues of bias, explainability, and learner agency.

Supervisor: Dr Guanliang Chen

Using AI and machine learning to improve polygenic risk prediction of disease

We are interested in understanding genetic variation among individuals and how it relates to disease. To do this, we study genomic markers or variants called single nucleotide polymorphisms, or SNPs for short. A SNP is a single base position in DNA that varies among human individuals. The Human Genome Project found that these single-letter changes occur all over the human genome; each person carries about five million of them! While most SNPs have no effect, some can influence traits or increase the risk of certain diseases.
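
As a minimal sketch of the underlying idea: a polygenic risk score (PRS) is commonly computed as a weighted sum of SNP genotype dosages, with weights taken from GWAS effect-size estimates. The genotypes and effect sizes below are made-up illustrative values, not data from this project.

```python
import numpy as np

# Hypothetical illustration: a PRS is a weighted sum of genotype dosages
# (0, 1 or 2 copies of the risk allele per SNP), weighted by per-SNP
# effect-size estimates from a genome-wide association study (GWAS).

# Genotype dosages for 3 individuals x 5 SNPs (made-up data).
genotypes = np.array([
    [0, 1, 2, 0, 1],
    [1, 1, 0, 2, 0],
    [2, 0, 1, 1, 2],
])

# Per-SNP effect sizes (e.g., log odds ratios from a hypothetical GWAS).
effect_sizes = np.array([0.12, -0.05, 0.30, 0.08, -0.15])

# PRS_i = sum_j beta_j * dosage_ij
prs = genotypes @ effect_sizes
print(prs)  # one score per individual; higher = higher predicted risk
```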

Supervisor: Prof Enes Makalic

Minimum Message Length

Minimum Message Length (MML) is an elegant information-theoretic framework for statistical inference and model selection developed by Chris Wallace and colleagues. The fundamental insight of MML is that both parameter estimation and model selection can be interpreted as problems of data compression. The principle is simple: if we can compress data, we have learned something about its underlying structure.
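
In its two-part form, MML encodes a hypothesis H and then the data D given that hypothesis, and inference selects the hypothesis minimising the total message length. A standard statement of this (general MML notation, not specific to this project) is:

```latex
% Two-part MML code: assert the hypothesis H, then encode the data D given H.
I(H, D) = I(H) + I(D \mid H)
% Inference picks the hypothesis with the shortest total message:
\hat{H}_{\mathrm{MML}} = \operatorname*{arg\,min}_{H} \left[ I(H) + I(D \mid H) \right]
```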

Supervisor: Prof Enes Makalic

Agentic AI for Software Teams: Building the Next Horizon of SWE Agents for Society with Atlassian

🎯 Research Vision

The next generation of software engineering tools will move beyond autocomplete and static code generation toward autonomous, agentic systems — AI developers capable of planning, reasoning, and improving software iteratively. This project explores the development of agentic AI systems that act as intelligent collaborators: understanding project goals, decomposing problems, writing and testing code, and learning from feedback.
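
As a rough sketch of the plan, act, test, and reflect loop such an agent might run (all function names below are hypothetical placeholders, not Atlassian's or this project's API):

```python
# A minimal, hypothetical sketch of an agentic coding loop:
# plan -> generate code -> test -> learn from feedback.
# All helpers below are illustrative stubs, not an existing framework.

def plan(goal: str) -> list[str]:
    """Decompose a project goal into smaller tasks (stub)."""
    return [f"subtask for: {goal}"]

def write_code(task: str) -> str:
    """Generate a candidate patch for one task (stub)."""
    return f"# patch implementing {task}"

def run_tests(patch: str) -> bool:
    """Apply the patch and run the test suite (stub)."""
    return "patch" in patch

def agent_loop(goal: str, max_iters: int = 3) -> list[str]:
    feedback: list[str] = []
    for task in plan(goal):
        for _ in range(max_iters):
            patch = write_code(task)
            if run_tests(patch):
                break  # task done; move on
            feedback.append(f"tests failed for {task}")  # learn from failure
    return feedback

agent_loop("add retry logic to the HTTP client")
```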

🔍 Research Objectives

Explainability of Reinforcement Learning Policies for Human-Robot Interaction

This PhD project will investigate the explainability of reinforcement learning (RL) policies in the context of human-robot interaction (HRI), aiming to bridge the gap between advanced RL decision-making and human trust, understanding, and collaboration. The research will critically evaluate and extend state-of-the-art explainability methods for RL, such as policy summarization, counterfactual reasoning, and interpretable model approximations, to make robot decision processes more transparent and intuitive.
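
One of the methods named above, interpretable model approximation, is often realised by distilling a black-box policy into a transparent surrogate such as a shallow decision tree. A minimal sketch on made-up data follows (the "policy" here is a toy rule standing in for a trained RL agent, and scikit-learn is assumed to be available):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical sketch: approximate a black-box RL policy with a shallow
# decision tree so its action choices can be read off as simple rules.

rng = np.random.default_rng(0)
states = rng.uniform(-1, 1, size=(500, 2))  # toy 2-D state space

def black_box_policy(s: np.ndarray) -> int:
    # Stand-in for a trained RL policy (e.g., a deep Q-network).
    return int(s[0] + 0.5 * s[1] > 0)

actions = np.array([black_box_policy(s) for s in states])

# Distil the policy into an interpretable surrogate.
tree = DecisionTreeClassifier(max_depth=2).fit(states, actions)
print(export_text(tree, feature_names=["x0", "x1"]))
```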

Supervisor: Dr Mor Vered

Decision AI for biodiversity

Adaptive sequential decisions to maximise information gain and biodiversity outcomes

Supervisor: Prof Iadine Chades

Explainability and compact representation of K-MDPs

Markov Decision Processes (MDPs) are frameworks used to model decision-making in situations where outcomes are partly random and partly under the control of a decision maker. While small MDPs are inherently interpretable, MDPs with thousands of states are difficult for humans to understand. The K-MDP problem is that of finding the best MDP with at most K states, leveraging state abstraction to aggregate states into sub-groups. The aim of this project is to measure and improve the interpretability of K-MDP approaches using state-of-the-art XAI methods.
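
As a toy illustration of the state-aggregation idea (an illustrative heuristic, not the K-MDP algorithm studied in this project), states can be grouped by similarity of their optimal values so that the abstract MDP has at most K states:

```python
import numpy as np

# Toy sketch: aggregate states whose optimal values are close, so the
# abstract MDP has at most k states. Illustrative heuristic only.

def aggregate_states(values: np.ndarray, k: int) -> np.ndarray:
    """Map each state to one of k groups by binning its value."""
    # Equal-width bins over the value range; np.digitize returns bin ids.
    edges = np.linspace(values.min(), values.max(), k + 1)[1:-1]
    return np.digitize(values, edges)

values = np.array([0.1, 0.12, 0.5, 0.52, 0.9, 0.88])  # made-up V*(s)
print(aggregate_states(values, k=3))  # -> [0 0 1 1 2 2]
```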

Supervisor: Dr Mor Vered