Exciting opportunities to work on a new Discovery Project: Enhancing learner feedback literacy using AI-powered feedback analytics!
Project description:
Security Operations Centres (SOCs) play a central role in organisational defence and are responsible for continuously monitoring, detecting, investigating, and responding to cyber attacks. Organisations increasingly depend on security tools to flag suspicious activity. These tools generate alerts that analysts must examine to determine whether they represent real attacks or false positives. However, the volume of alerts continues to grow at a pace that far exceeds what human analysts can realistically review.
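The triage bottleneck described above can be sketched as a toy prioritisation step. This is a minimal illustration, not part of the project: the `Alert` fields, sort key, and capacity figure are all assumptions for the demo.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    source: str        # detector that raised the alert
    severity: int      # 1 (informational) .. 5 (critical)
    confidence: float  # detector's own score in [0, 1]

def triage(alerts, analyst_capacity):
    """Rank alerts by (severity, confidence) and keep only as many as
    analysts can realistically review; the rest are deferred. A learned
    false-positive filter could replace this simple sort key."""
    ranked = sorted(alerts, key=lambda a: (a.severity, a.confidence),
                    reverse=True)
    return ranked[:analyst_capacity], ranked[analyst_capacity:]

queue = [
    Alert("edr", 5, 0.9),
    Alert("ids", 2, 0.4),
    Alert("dlp", 5, 0.7),
    Alert("ids", 3, 0.8),
]
review, deferred = triage(queue, analyst_capacity=2)
```

Even this crude ranking makes the research question concrete: the hard part is learning a sort key that separates real attacks from false positives better than the detector's raw confidence.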
Two PhD scholarships, funded through a DECRA, are available to explore the use of deep learning models for uncertainty quantification.
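One common uncertainty-quantification recipe is Monte Carlo dropout: leave dropout active at test time, run several stochastic forward passes, and summarise the spread. The sketch below (a guess at the flavour of technique meant, not the project's actual method) uses a toy stochastic "model" in place of a real network.

```python
import math
import random

def mc_dropout_predict(forward_pass, x, n_samples=50):
    """Run n stochastic forward passes and summarise them: returns the
    mean class probabilities and the predictive entropy, a common
    scalar uncertainty score."""
    samples = [forward_pass(x) for _ in range(n_samples)]
    n_classes = len(samples[0])
    mean = [sum(s[c] for s in samples) / n_samples
            for c in range(n_classes)]
    entropy = -sum(p * math.log(p) for p in mean if p > 0.0)
    return mean, entropy

def toy_forward(x):
    """Stand-in for a network with dropout left on at test time:
    a fixed logit vector perturbed by noise on every call."""
    logits = [x + random.gauss(0.0, 0.3), 1.0, -1.0]
    exps = [math.exp(v) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

random.seed(0)
probs, uncertainty = mc_dropout_predict(toy_forward, 2.0)
```

High predictive entropy flags inputs the model is unsure about, which is exactly the signal a deployment pipeline would use to defer to a human.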
While deep learning has shown remarkable performance in medical imaging benchmarks, translating these results to real-world clinical deployment remains challenging. Models trained on data from one hospital or population often fail when applied elsewhere due to distributional shifts. Since acquiring new labelled data is often costly or infeasible due to rare diseases, limited expert availability, and privacy constraints, robust solutions are essential.
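A first-look signal for the distributional shift mentioned above is to compare feature statistics between the training site and the deployment site. This is a crude sketch under simplified assumptions (one scalar feature, Gaussian toy data); real pipelines would use proper two-sample tests or shift detectors.

```python
import math
import random

def mean_shift_score(source, target):
    """Standardised difference of feature means between a source and a
    target sample: a rough covariate-shift signal. Large values suggest
    the deployment data no longer looks like the training data."""
    ms = sum(source) / len(source)
    mt = sum(target) / len(target)
    var = sum((x - ms) ** 2 for x in source) / (len(source) - 1)
    se = math.sqrt(var / len(source) + var / len(target))
    return abs(mt - ms) / se

random.seed(1)
in_domain = [random.gauss(0.0, 1.0) for _ in range(500)]
same_dist = [random.gauss(0.0, 1.0) for _ in range(500)]
shifted   = [random.gauss(1.5, 1.0) for _ in range(500)]  # "other hospital"

low_score = mean_shift_score(in_domain, same_dist)
high_score = mean_shift_score(in_domain, shifted)
```

A monitoring system could raise such a score per feature and trigger recalibration or human review when it spikes, without needing any new labels.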
We have several PhD and Research Assistant (RA) opportunities available in areas such as Multimodal Large Language Models (MLLM) for human understanding, MLLM safety, and Generative AI.
If you have published in top-tier conferences (e.g., CVPR, ICCV, ECCV, NeurIPS), you will have a strong chance of receiving a full PhD scholarship.
See all the details here: https://careers.pageuppeople.com/513/cw/en/job/687090/phd-opportunity-on-generativeai-driven-requirements-regulation-in-space-missions
As education systems increasingly adopt AI to support teaching and learning, the automation of assessment and feedback processes has emerged as a critical area of innovation. Large-scale learning environments—such as MOOCs, online degrees, and data-intensive Learning Management Systems—necessitate scalable solutions to provide timely, high-quality feedback. However, existing AI-powered assessment systems often raise ethical, pedagogical, and fairness concerns, including issues of bias, explainability, and learner agency.
This project will investigate the security and privacy challenges emerging from the adoption of digital education credentials, such as W3C Verifiable Credentials. As universities and employers increasingly rely on digital systems to issue, store, and verify qualifications, new risks arise—ranging from data breaches and identity fraud to profiling and surveillance through credential verification logs.
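For orientation, a minimal degree credential in the shape of the W3C Verifiable Credentials data model looks roughly like the following. The identifiers and dates are illustrative placeholders, and a real credential would additionally carry a cryptographic `proof` block; it is the verification of exactly such documents that produces the logs the project flags as a surveillance risk.

```python
import json

# Illustrative credential modelled on the W3C VC Data Model examples;
# the DIDs, date, and degree details are placeholders, not real data.
credential = {
    "@context": ["https://www.w3.org/2018/credentials/v1"],
    "type": ["VerifiableCredential", "UniversityDegreeCredential"],
    "issuer": "did:example:university",
    "issuanceDate": "2025-01-01T00:00:00Z",
    "credentialSubject": {
        "id": "did:example:graduate",
        "degree": {"type": "BachelorDegree",
                   "name": "BSc Computer Science"},
    },
}

serialized = json.dumps(credential)
```

Every field here is a potential privacy surface: the subject DID links verifications together, and each verification event can reveal to the issuer who is checking whose qualifications, and when.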
In this project, we will develop automated approaches to detecting defects in AI systems, including LLMs and autonomous-driving systems.
The objective of this project is to design automated approaches to detecting bugs in various kinds of software, e.g., compilers and data libraries.
The project may involve LLMs.
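One standard oracle for finding bugs in compilers and libraries is differential testing: run two implementations of the same specification on identical inputs and flag any disagreement. The sketch below is a toy version under obvious simplifications (a trivial function with an injected defect stands in for a real compiler pair).

```python
import random

def differential_test(impl, reference, gen_input, trials=1000, seed=0):
    """Feed the same random inputs to an implementation under test and
    a reference, and collect every input on which they disagree; each
    disagreement pinpoints a likely defect in one of the two."""
    rng = random.Random(seed)
    return [x for x in (gen_input(rng) for _ in range(trials))
            if impl(x) != reference(x)]

def reference_square(x):
    return x * x

def buggy_square(x):
    return 48 if x == 7 else x * x  # injected defect for the demo

bugs = differential_test(buggy_square, reference_square,
                         lambda rng: rng.randint(0, 20))
```

The same loop scales up directly: replace the toy functions with two compilers (or a library and its model), and the input generator with a program or data fuzzer; an LLM could serve as that generator.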