As education systems increasingly adopt AI to support teaching and learning, the automation of assessment and feedback processes has emerged as a critical area of innovation. Large-scale learning environments—such as MOOCs, online degrees, and data-intensive Learning Management Systems—necessitate scalable solutions to provide timely, high-quality feedback. However, existing AI-powered assessment systems often raise ethical, pedagogical, and fairness concerns, including issues of bias, explainability, and learner agency.
Research projects in Information Technology
Designing Secure and Privacy-Enhancing Frameworks for Digital Education Credentials
This project will investigate the security and privacy challenges emerging from the adoption of digital education credentials, such as W3C Verifiable Credentials. As universities and employers increasingly rely on digital systems to issue, store, and verify qualifications, new risks arise—ranging from data breaches and identity fraud to profiling and surveillance through credential verification logs.
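For context, a credential of this kind is essentially a signed JSON document. The sketch below (written as a Python literal, with abridged fields and a placeholder proof block) shows roughly the shape defined by the W3C Verifiable Credentials data model; the identifiers are made up for illustration. Privacy risks arise in part because every verification of such a document can leave a log of who checked which credential and when.

```python
# Sketch of the shape of a digital education credential, loosely following the
# W3C Verifiable Credentials data model. Field names are abridged and the proof
# block is a placeholder, not a real signature.
credential = {
    "@context": ["https://www.w3.org/2018/credentials/v1"],
    "type": ["VerifiableCredential", "UniversityDegreeCredential"],
    "issuer": "did:example:university-issuer",          # hypothetical issuer DID
    "issuanceDate": "2025-01-01T00:00:00Z",
    "credentialSubject": {
        "id": "did:example:graduate-holder",             # hypothetical holder DID
        "degree": {"type": "BachelorDegree", "name": "Bachelor of IT"},
    },
    "proof": {
        "type": "Ed25519Signature2020",                  # illustrative signature suite
        "verificationMethod": "did:example:university-issuer#key-1",
        "proofValue": "<signature bytes omitted>",
    },
}
```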
Testing AI/LLM systems
In this project, we will develop automated approaches to detecting defects in AI systems, including LLMs and autonomous driving systems.
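As one illustration of what an automated check might look like (not necessarily the method this project will adopt), metamorphic testing flags a potential defect when two inputs that should be equivalent produce inconsistent outputs. `query_model` below is a hypothetical placeholder for whatever system is under test.

```python
# Illustrative sketch of metamorphic testing for an LLM: apply an input
# transformation that should not change the answer, and flag a potential
# defect if the two outputs disagree.
def query_model(prompt: str) -> str:
    # Hypothetical stand-in for a call to the system under test.
    raise NotImplementedError("replace with a call to the model being tested")

def metamorphic_check(prompt: str, paraphrase: str) -> bool:
    """Return True if the two outputs agree (exact match here; a real harness
    would use a softer equivalence check)."""
    return query_model(prompt).strip() == query_model(paraphrase).strip()

# Example metamorphic pair: the answer should not depend on phrasing.
# metamorphic_check("What is 17 * 3?", "Compute 17 multiplied by 3.")
```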
Automated software testing and debugging with/without LLMs
The objective of this project is to design automated approaches to detecting bugs in various kinds of software, such as compilers and data libraries.
The project may involve LLMs.
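A common starting point for this kind of bug finding, with or without LLMs in the loop, is differential testing: compile and run the same program under two compilers and flag any disagreement. The sketch below is a minimal illustration; the compiler names, flags, and harness structure are assumptions rather than the project's actual tooling.

```python
import pathlib
import subprocess
import tempfile

# Minimal differential-testing sketch: a mismatch between the two compilers'
# outputs signals a potential bug in one of them (or undefined behaviour in
# the test program, which a real harness would rule out).
def differential_test(source: str, compilers=("gcc", "clang")) -> bool:
    """Compile the same C program with two compilers, run both binaries,
    and report whether their outputs agree."""
    outputs = []
    with tempfile.TemporaryDirectory() as tmp:
        src = pathlib.Path(tmp) / "test.c"
        src.write_text(source)
        for cc in compilers:
            exe = pathlib.Path(tmp) / f"a_{cc}"
            subprocess.run([cc, str(src), "-O2", "-o", str(exe)], check=True)
            result = subprocess.run([str(exe)], capture_output=True, text=True)
            outputs.append(result.stdout)
    return outputs[0] == outputs[1]
```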
Using AI and machine learning to improve polygenic risk prediction of disease
We are interested in understanding genetic variation among individuals and how it relates to disease. To do this, we study genomic markers or variants called single nucleotide polymorphisms, or SNPs for short. A SNP is a single base position in DNA that varies among human individuals. The Human Genome Project found that these single-letter changes occur all over the human genome; each person carries about 5 million of them! While most SNPs have no effect, some can influence traits or increase the risk of certain diseases.
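One simple way to make the prediction task concrete is the basic polygenic risk score: a weighted sum of risk-allele counts across SNPs. The sketch below is illustrative only; the effect sizes and genotypes are randomly generated placeholders, not real GWAS data.

```python
import numpy as np

def polygenic_risk_score(genotypes: np.ndarray, betas: np.ndarray) -> np.ndarray:
    """
    genotypes: (n_individuals, n_snps) array of risk-allele counts in {0, 1, 2}
    betas:     (n_snps,) per-SNP effect sizes, e.g. from GWAS summary statistics
    Returns one risk score per individual.
    """
    return genotypes @ betas

rng = np.random.default_rng(0)
genotypes = rng.integers(0, 3, size=(5, 100))   # 5 people, 100 SNPs (toy data)
betas = rng.normal(0.0, 0.05, size=100)         # hypothetical effect sizes
print(polygenic_risk_score(genotypes, betas))
```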
Minimum Message Length
Minimum Message Length (MML) is an elegant information-theoretic framework for statistical inference and model selection developed by Chris Wallace and colleagues. The fundamental insight of MML is that both parameter estimation and model selection can be interpreted as problems of data compression. The principle is simple: if we can compress data, we have learned something about its underlying structure.
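In the standard two-part formulation (a textbook statement of MML, not anything specific to this project), the hypothesis H is encoded first and the data D second, and the preferred model is the one minimising the total message length:

$$
\mathrm{MsgLen}(H, D) \;=\; \mathrm{MsgLen}(H) + \mathrm{MsgLen}(D \mid H) \;\approx\; -\log_2 P(H) \;-\; \log_2 P(D \mid H).
$$

A more complex model shortens the second term (better fit) at the cost of a longer first term, so minimising the sum trades goodness of fit against model complexity.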
Agentic AI for Software Teams: Building the Next Horizon of SWE Agents for Society with Atlassian
🎯 Research Vision
The next generation of software engineering tools will move beyond autocomplete and static code generation toward autonomous, agentic systems — AI developers capable of planning, reasoning, and improving software iteratively. This project explores the development of agentic AI systems that act as intelligent collaborators: understanding project goals, decomposing problems, writing and testing code, and learning from feedback.
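The plan / act / test / reflect cycle described above can be sketched as a simple loop. Every helper in the sketch is a hypothetical stub (not an Atlassian or project API); in a real agent they would be backed by an LLM and a sandboxed test runner.

```python
from dataclasses import dataclass

@dataclass
class TestReport:
    passed: bool
    log: str

def decompose(goal: str) -> list[str]:                   # stub: plan = list of sub-tasks
    return [goal]

def write_code(plan: list[str], previous: str) -> str:   # stub: draft or revise code
    return previous or f"# TODO: implement {plan[0]}"

def run_tests(code: str) -> TestReport:                  # stub: execute the test suite
    return TestReport(passed=False, log="no tests implemented")

def reflect(plan: list[str], report: TestReport) -> list[str]:  # stub: fold feedback into plan
    return plan

def agent_loop(goal: str, max_iterations: int = 5) -> str | None:
    plan, code = decompose(goal), ""
    for _ in range(max_iterations):
        code = write_code(plan, code)    # act: draft or revise an implementation
        report = run_tests(code)         # test: run the suite
        if report.passed:
            return code                  # goal satisfied
        plan = reflect(plan, report)     # learn from the feedback
    return None                          # iteration budget exhausted
```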
Indigenous (Energy)
This scholarship opportunity is open to domestic applicants who identify as Aboriginal or Torres Strait Islander.
Explainability of Reinforcement Learning Policies for Human-Robot Interaction
This PhD project will investigate the explainability of reinforcement learning (RL) policies in the context of human-robot interaction (HRI), aiming to bridge the gap between advanced RL decision-making and human trust, understanding, and collaboration. The research will critically evaluate and extend state-of-the-art explainability methods for RL, such as policy summarization, counterfactual reasoning, and interpretable model approximations, to make robot decision processes more transparent and intuitive.
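To make one of these methods concrete, the sketch below shows a toy version of counterfactual reasoning: searching for the smallest single-feature change to a state that flips the policy's chosen action. The policy is an arbitrary linear toy, not a trained RL policy.

```python
import numpy as np

def toy_policy(state: np.ndarray) -> int:
    """Hypothetical discrete policy: action 1 if a weighted sum is positive."""
    return int(state @ np.array([1.0, -2.0, 0.5]) > 0)

def counterfactual(state: np.ndarray, policy, step: float = 0.1, max_delta: float = 2.0):
    """Search per-feature perturbations for the smallest one that changes the action."""
    base_action = policy(state)
    for delta in np.arange(step, max_delta + step, step):
        for i in range(state.size):
            for sign in (+1, -1):
                candidate = state.copy()
                candidate[i] += sign * delta
                if policy(candidate) != base_action:
                    return i, sign * delta, policy(candidate)
    return None   # no counterfactual found within the search range

# Which feature would need to change, and by how much, to flip the action?
print(counterfactual(np.array([0.5, 0.4, 0.2]), toy_policy))
```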
Decision AI for biodiversity
Adaptive sequential decisions to maximise information gain and biodiversity outcomes
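In the simplest setting, "maximise information gain" can be read as a greedy one-step rule: survey the site whose outcome is currently most uncertain. The sketch below assumes perfectly observed Bernoulli occupancy at each candidate site, with made-up probabilities; a real model would be fitted to ecological data and would weigh biodiversity value alongside uncertainty.

```python
import numpy as np

def bernoulli_entropy(p: np.ndarray) -> np.ndarray:
    """Entropy (bits) of a Bernoulli outcome with success probability p."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

def next_site(occupancy_probs: np.ndarray) -> int:
    """Greedy one-step choice: survey the site whose outcome is least predictable,
    i.e. the site where a perfect observation yields the most information."""
    return int(np.argmax(bernoulli_entropy(occupancy_probs)))

print(next_site(np.array([0.05, 0.48, 0.90, 0.30])))   # -> 1, the most uncertain site
```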