
Research projects in Information Technology

Displaying 11 - 20 of 36 projects.


Human Factors in Cyber Security: Understanding Cyberscams

Online fraud, also referred to as cyberscams, is an increasingly serious cybersecurity problem that technical cybersecurity specialists are unable to detect effectively. Given the difficulty of automatic scam detection, the onus is often pushed back onto humans. Gamification and awareness campaigns are regularly researched and implemented in workplaces to prevent people from being tricked by scams, which can lead to identity theft or con individuals out of money.

Supervisor: Prof Monica Whitty

Privacy-preserving machine unlearning

Design efficient privacy-preserving methods for different machine learning tasks, including training, inference, and unlearning.

Supervisor: Dr Shujie Cui

Enhancing Privacy Preservation in Machine Learning

This research project aims to address the critical need for privacy-enhancing techniques in machine learning (ML) applications, particularly in scenarios involving sensitive or confidential data. With the widespread adoption of ML algorithms for data analysis and decision-making, preserving the privacy of individuals' data has become a paramount concern.

Supervisor: Dr Hui Cui

Securing Generative AI for Digital Trust

Project description: Generative AI models work by training large networks over vast quantities of unstructured data, which may then be specialised as needed via fine-tuning or prompt engineering. In this project, we will explore all aspects of this process with a focus on increasing trust in the model outputs by reducing or eliminating the incidence of bugs and errors.
Supervisor:

LLM models for learning and retrieving software knowledge

The primary objective of this project is to enhance Large Language Models (LLMs) by incorporating software knowledge documentation. Our approach involves utilizing existing LLMs and refining them using data extracted from software repositories. This fine-tuning process aims to enable the models to provide answers to queries related to software development tasks.

Supervisor: Aldeida Aleti

[Malaysia] AI meets Cybersecurity

AI is now trending and impacting diverse application domains beyond IT, from education (ChatGPT) to the natural sciences (protein analysis) to social media.

This PhD research focuses on fusing AI research and cybersecurity research. One current direction is advancing the latest generative AI models for cybersecurity, or vice versa: using cybersecurity techniques to attack AI.

The student is free to discuss with the supervisor which topic interests them most, in order to shape the PhD topic.

Privacy-Enhancing Technologies for the Social Good

Privacy-Enhancing Technologies (PETs) are a set of cryptographic tools that allow information processing in a privacy-respecting manner. As an example, imagine we have a user, say Alice, who wants to get a service from a service provider, say SerPro. To provide the service, SerPro requests Alice's private information such as a copy of her passport to validate her identity. In a traditional setting, Alice has no choice but to give away her highly sensitive information. 
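One minimal building block behind such PETs is a cryptographic commitment: Alice can fix a sensitive value now and reveal it only if and when necessary, while the other party learns nothing from the commitment alone. The sketch below illustrates this with a hash-based commitment; the passport value and names are purely illustrative, and a real identity scheme would use far richer tools (e.g. zero-knowledge proofs).

```python
import hashlib
import secrets

# Illustrative sketch of a hash-based commitment, one simple PET
# building block. Alice commits to a sensitive value (here a made-up
# passport number) without revealing it; SerPro cannot learn the value
# from the commitment, and Alice cannot later claim a different value.

def commit(value: bytes) -> tuple[bytes, bytes]:
    """Return (commitment, opening nonce). The random nonce hides the value."""
    nonce = secrets.token_bytes(32)
    c = hashlib.sha256(nonce + value).digest()
    return c, nonce

def verify(commitment: bytes, nonce: bytes, value: bytes) -> bool:
    """Check that (nonce, value) opens the given commitment."""
    return hashlib.sha256(nonce + value).digest() == commitment

# Alice commits without giving away her passport number.
c, nonce = commit(b"P1234567")

# Later, she opens the commitment only to an authorised verifier.
print(verify(c, nonce, b"P1234567"))  # True
print(verify(c, nonce, b"P7654321"))  # False: a different value fails
```

The hiding property comes from the random nonce; the binding property comes from SHA-256 collision resistance.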

Supervisor: Dr Muhammed Esgin

[NextGen] Secure and Privacy-Enhancing Federated Learning: Algorithms, Framework, and Applications to NLP and Medical AI

Federated learning (FL) is an emerging machine learning paradigm that enables distributed clients (e.g., mobile devices) to jointly train a machine learning model without pooling their raw data on a centralised server. Because data never leaves the clients, FL systematically mitigates the privacy risks of centralised machine learning and naturally complies with rigorous data privacy regulations, such as the GDPR and the Privacy Act 1988.
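The training loop described above can be sketched with federated averaging (FedAvg), the canonical FL aggregation rule: each client runs local updates on its own data, and the server averages only the resulting model weights. The linear model, toy data, and hyperparameters below are illustrative assumptions, not the project's actual setup.

```python
import numpy as np

# Minimal FedAvg sketch: three clients fit a shared linear model
# y = 2*x without ever sending raw data to the server.
rng = np.random.default_rng(0)

def local_train(w, X, y, lr=0.1, epochs=20):
    """One client's local gradient-descent update; raw data stays here."""
    w = w.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # least-squares gradient
        w -= lr * grad
    return w

# Three clients with disjoint local datasets drawn from y = 2*x + noise.
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 1))
    y = 2.0 * X[:, 0] + rng.normal(scale=0.1, size=50)
    clients.append((X, y))

w_global = np.zeros(1)
for _ in range(5):                              # communication rounds
    local_ws = [local_train(w_global, X, y) for X, y in clients]
    w_global = np.mean(local_ws, axis=0)        # server averages weights only

print(w_global)  # converges towards the true coefficient, ~2.0
```

Only model weights cross the network; privacy-enhancing variants additionally protect those updates (e.g. with secure aggregation or differential privacy).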

Supervisor:

Explainable AI (XAI) in Medical Imaging

Are you interested in applying your AI/DL knowledge to the medical domain?

This project focuses on the use of AI in Medical Imaging (e.g. CT, MRI, X-ray, Ultrasound). The work includes segmentation and classification; for example, segmenting a tumour from medical images and then classifying its grade. We will use various Deep Learning techniques, such as CNNs, and will experiment with a variety of architectures, such as U-Net and ResNet.
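The two-stage pipeline described above (segment, then classify the segmented region) can be sketched in miniature. In the actual project each stage would be a deep network (e.g. a U-Net segmenter feeding a CNN classifier); here a simple intensity threshold and an area rule stand in, purely to show the data flow, and all thresholds and the synthetic "scan" are made up.

```python
import numpy as np

# Toy two-stage medical-imaging pipeline: segment a lesion, then
# classify a grade from the segmentation. Stand-in rules only.

def segment(image: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Stage 1: binary mask of pixels brighter than the threshold."""
    return image > threshold

def grade(mask: np.ndarray) -> str:
    """Stage 2: classify grade from the segmented region's area."""
    area = mask.mean()  # fraction of the image covered by the lesion
    return "high" if area > 0.1 else "low"

# Synthetic 64x64 "scan" with a bright circular lesion at the centre.
yy, xx = np.mgrid[:64, :64]
image = 0.2 * np.ones((64, 64))
image[(yy - 32) ** 2 + (xx - 32) ** 2 < 15 ** 2] = 0.9

mask = segment(image)
print(grade(mask))  # "high": the lesion covers more than 10% of the image
```

Replacing `segment` with a trained U-Net and `grade` with a CNN classifier preserves exactly this interface: image in, mask out, grade out.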

Guarding On-device Machine Learning Models via Privacy-enhancing Techniques

On-device machine learning (ML) is rapidly gaining popularity on mobile devices. Mobile developers can use on-device ML to enable ML features on users' mobile devices, such as face recognition, augmented virtual reality, voice assistance, and medical diagnosis. This new paradigm is further accelerated by AI chips and ASICs embedded in mobile devices, e.g., Apple's Bionic neural engine. Compared to cloud-based machine learning services, on-device ML is privacy-friendly, has low latency, and can work offline. User data remains on the mobile device for ML inference.

Supervisor: