Research projects in Information Technology

Displaying 1–10 of 197 projects.


Probabilistic Active Goal Recognition

Goal Recognition is the task of inferring the goal of an agent from their action logs. Goal Recognition assumes these logs are collected by an independent process that is not controlled by the observer. Active Goal Recognition extends Goal Recognition by also assigning the data-collection task to the observer. This PhD project will provide a unified probabilistic and decision-theoretic perspective to answer the central question: how should an observer act in an environment to actively uncover the goal of the agent?
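
As a toy illustration of the probabilistic side of this framing (not the project's method), goal recognition can be cast as a Bayesian update: each observed action re-weights a posterior over candidate goals by how likely that action is under each goal. All goals, actions, and probabilities in the Python sketch below are hypothetical.

    # Illustrative sketch only: goal recognition as a Bayesian posterior
    # update over candidate goals. All goals, actions, and probabilities
    # here are hypothetical.

    def update_posterior(prior, likelihoods, action):
        """Re-weight each goal by P(action | goal), then normalise."""
        unnorm = {g: prior[g] * likelihoods[g].get(action, 1e-9) for g in prior}
        total = sum(unnorm.values())
        return {g: p / total for g, p in unnorm.items()}

    prior = {"goal_A": 0.5, "goal_B": 0.5}           # P(goal) before observing
    likelihoods = {                                  # P(action | goal)
        "goal_A": {"move_north": 0.8, "move_east": 0.2},
        "goal_B": {"move_north": 0.1, "move_east": 0.9},
    }

    posterior = prior
    for action in ["move_north", "move_north"]:      # the observed action log
        posterior = update_posterior(posterior, likelihoods, action)

    print(posterior)  # mass concentrates on goal_A after two northward moves

The "active" part of the problem then asks how the observer should choose its own actions, for instance to maximise the expected information gain of the next observation.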

Supervisor: Dr Buser Say

PhD opportunities on Multimodal LLMs for human understanding

We have several PhD opportunities available in areas such as Multimodal Large Language Models (MLLMs) for human understanding, MLLM safety, and Generative AI.

If you have published in top-tier conferences (e.g., CVPR, ICCV, ECCV, NeurIPS), you will have a strong chance of receiving a full PhD scholarship.

Supervisor: Dr Qiuhong Ke

Enhancing learner feedback literacy using AI-powered feedback analytics

Exciting opportunities to work on a new Discovery Project: Enhancing learner feedback literacy using AI-powered feedback analytics!

Supervisor: Yi-Shan Tsai

Enhancing SOC Efficiency: Automated Attack Investigation to Combat Alert Fatigue

Security Operations Centres (SOCs) play a central role in organisational defence and are responsible for continuously monitoring, detecting, investigating, and responding to cyber attacks. Organisations increasingly depend on security tools to flag suspicious activity. These tools generate alerts that analysts must examine to determine whether they represent real attacks or false positives. However, the volume of alerts continues to grow at a pace that far exceeds what human analysts can realistically review.

Supervisor: Dr Mengmeng Ge

Uncertainty quantification using deep learning

Two PhD scholarships, funded through a DECRA, are available to explore the use of deep learning models for uncertainty quantification.
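
For context, one widely used technique in this space (not necessarily the one these scholarships will pursue) is Monte Carlo dropout: dropout is kept active at test time, and the spread of repeated stochastic forward passes serves as an uncertainty proxy. The network, data, and hyperparameters in the sketch below are hypothetical placeholders.

    # Illustrative Monte Carlo dropout sketch; the network, data, and
    # hyperparameters are hypothetical placeholders.
    import torch
    import torch.nn as nn

    model = nn.Sequential(
        nn.Linear(16, 64), nn.ReLU(), nn.Dropout(p=0.5), nn.Linear(64, 1)
    )

    model.train()            # keep dropout stochastic at inference time
    x = torch.randn(8, 16)   # a hypothetical batch of 8 inputs

    with torch.no_grad():
        samples = torch.stack([model(x) for _ in range(100)])  # 100 passes

    mean = samples.mean(dim=0)  # predictive mean per input
    std = samples.std(dim=0)    # spread: a simple epistemic-uncertainty proxy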

GEMS 2026: Toward Distribution-Robust Medical Imaging Models in the Wild

While deep learning has shown remarkable performance on medical imaging benchmarks, translating these results to real-world clinical deployment remains challenging. Models trained on data from one hospital or population often fail when applied elsewhere due to distributional shifts. Since acquiring new labelled data is often costly or infeasible due to rare diseases, limited expert availability, and privacy constraints, robust solutions are essential.
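
To make "distributional shift" concrete, a generic diagnostic (illustrative only, not this project's approach) is to compare feature statistics between two cohorts; the synthetic arrays below stand in for image embeddings from two hospitals.

    # Generic, illustrative shift diagnostic on synthetic "embeddings";
    # a real pipeline would use learned image features from each site.
    import numpy as np

    rng = np.random.default_rng(0)
    hospital_a = rng.normal(0.0, 1.0, size=(500, 32))  # source-site features
    hospital_b = rng.normal(0.5, 1.3, size=(500, 32))  # shifted target site

    # Distance between mean feature vectors: a crude covariate-shift signal.
    shift = np.linalg.norm(hospital_a.mean(axis=0) - hospital_b.mean(axis=0))
    print(f"mean-feature shift: {shift:.3f}")  # larger values, more shift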

Supervisor: Dr Chern Hong Lim

PhD/RA/Master's thesis on Multimodal LLMs

We have several PhD, Research Assistant (RA), and Master's research thesis opportunities available in areas such as Multimodal Large Language Models (MLLMs) for human understanding, MLLM safety, and Generative AI.

If you have published in top-tier conferences (e.g., CVPR, ICCV, ECCV, NeurIPS), you will have a strong chance of receiving a full PhD scholarship.

Supervisor: Dr Qiuhong Ke

Generative-AI-driven Requirements Regulation in Space Missions

See all the details here: https://careers.pageuppeople.com/513/cw/en/job/687090/phd-opportunity-on-generativeai-driven-requirements-regulation-in-space-missions

Supervisor: Dr Chetan Arora

Designing Responsible AI for Scalable Educational Assessment and Feedback

As education systems increasingly adopt AI to support teaching and learning, the automation of assessment and feedback processes has emerged as a critical area of innovation. Large-scale learning environments—such as MOOCs, online degrees, and data-intensive Learning Management Systems—necessitate scalable solutions to provide timely, high-quality feedback. However, existing AI-powered assessment systems often raise ethical, pedagogical, and fairness concerns, including issues of bias, explainability, and learner agency.

Supervisor: Dr Guanliang Chen

Designing Secure and Privacy-Enhancing Frameworks for Digital Education Credentials

This project will investigate the security and privacy challenges emerging from the adoption of digital education credentials, such as W3C Verifiable Credentials. As universities and employers increasingly rely on digital systems to issue, store, and verify qualifications, new risks arise—ranging from data breaches and identity fraud to profiling and surveillance through credential verification logs.
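
For orientation, a W3C Verifiable Credential is essentially a signed JSON document whose core fields follow the VC Data Model; the Python sketch below shows that general shape, with hypothetical identifiers and the cryptographic proof elided.

    # General shape of a W3C Verifiable Credential (VC Data Model),
    # as a Python dict. Identifiers are hypothetical; the proof is elided.
    import json

    credential = {
        "@context": ["https://www.w3.org/2018/credentials/v1"],
        "type": ["VerifiableCredential", "UniversityDegreeCredential"],
        "issuer": "did:example:university",      # hypothetical issuer DID
        "issuanceDate": "2025-01-01T00:00:00Z",
        "credentialSubject": {
            "id": "did:example:student",         # hypothetical holder DID
            "degree": {"type": "BachelorDegree", "name": "Bachelor of IT"},
        },
        # "proof": {...}  # signature binding the issuer to these claims
    }

    print(json.dumps(credential, indent=2))  # it is ultimately just JSON

Notably, it is the verification step of such credentials that produces the logs the project identifies as a potential profiling and surveillance channel.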

Supervisor: Dr Hui Cui