
Research projects in Information Technology

Displaying 21 - 30 of 38 projects.


[Malaysia] AI meets Cybersecurity

AI is now trending and is impacting diverse application domains beyond IT, from education (ChatGPT) to the natural sciences (protein analysis) to social media.

This PhD research focuses on fusing AI research with cybersecurity research. One current direction is advancing the latest generative AI models for cybersecurity; another works in the reverse direction, using cybersecurity techniques to attack AI.

The student is free to discuss with the supervisor which topic is of most interest to them, in order to shape the PhD topic.

Privacy-Enhancing Technologies for the Social Good

Privacy-Enhancing Technologies (PETs) are a set of cryptographic tools that allow information processing in a privacy-respecting manner. As an example, imagine we have a user, say Alice, who wants to get a service from a service provider, say SerPro. To provide the service, SerPro requests Alice's private information such as a copy of her passport to validate her identity. In a traditional setting, Alice has no choice but to give away her highly sensitive information. 
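
As a toy illustration of the flavour of these tools, the sketch below (our own simplified assumption, not a specific protocol studied in this project) uses salted hash commitments so that Alice can disclose a single attribute of a credential while keeping the rest hidden.

```python
# A minimal selective-disclosure sketch (illustrative assumption only): an issuer
# commits to each of Alice's attributes separately, and Alice later opens only
# the one attribute a verifier (SerPro) actually needs.
import hashlib
import os

def commit(attribute: str, salt: bytes) -> str:
    # Hiding, binding commitment to a single attribute value.
    return hashlib.sha256(salt + attribute.encode()).hexdigest()

# Issuer side: commit to each attribute; in practice this list of commitments
# would be embedded in a signed credential.
attributes = {"name": "Alice", "dob": "1990-01-01", "nationality": "AU"}
salts = {k: os.urandom(16) for k in attributes}
credential = {k: commit(v, salts[k]) for k, v in attributes.items()}

# Alice side: to prove her nationality she reveals only that attribute and its
# salt, keeping her name and date of birth hidden.
disclosed = ("nationality", attributes["nationality"], salts["nationality"])

# Verifier (SerPro) side: recompute the commitment and compare.
key, value, salt = disclosed
assert commit(value, salt) == credential[key]
print(f"Verified {key} = {value} without seeing the other attributes")
```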

Supervisor: Dr Muhammed Esgin

[NextGen] Secure and Privacy-Enhancing Federated Learning: Algorithms, Framework, and Applications to NLP and Medical AI

Federated learning (FL) is an emerging machine learning paradigm that enables distributed clients (e.g., mobile devices) to jointly train a machine learning model without pooling their raw data into a centralised server. Because data never leaves the clients, FL systematically mitigates the privacy risks of centralised machine learning and naturally complies with rigorous data privacy regulations, such as the GDPR and the Privacy Act 1988.
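
To make the idea concrete, here is a minimal federated-averaging sketch (a simplified assumption for illustration, not the project's framework): each client fits a linear model on its own data, and only model weights are ever shared with the server.

```python
# FedAvg-style toy example: three clients train locally on private data; the
# server only averages the returned weights and never sees the raw examples.
import numpy as np

def local_update(w, X, y, lr=0.1, epochs=5):
    # Plain gradient descent on a least-squares objective, run "on-device".
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + 0.1 * rng.normal(size=50)
    clients.append((X, y))          # this data never leaves the client

w_global = np.zeros(2)
for _ in range(20):
    # Each client trains locally starting from the current global model.
    local_ws = [local_update(w_global.copy(), X, y) for X, y in clients]
    # Server aggregates by averaging the returned weights only.
    w_global = np.mean(local_ws, axis=0)

print("global model after federated averaging:", w_global)
```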

Supervisor:

Explainable AI (XAI) in Medical Imaging

Are you interested in applying your AI/DL knowledge to the medical domain?

This project focuses on the use of AI in medical imaging (e.g. CT, MRI, X-ray, ultrasound). The work includes segmentation and classification; for example, segmenting a tumour in medical images and then classifying its grade. We will use various deep learning techniques, such as CNNs, and will experiment with a variety of network architectures, such as U-Net and ResNet.
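
As a concrete starting point, the sketch below (a toy model for illustration only; the project's actual data and architectures are not fixed here) shows a tiny U-Net-style encoder-decoder in PyTorch that maps a single-channel scan to per-pixel tumour logits.

```python
# Minimal segmentation sketch: encoder downsamples, decoder upsamples back to
# the input resolution and predicts a tumour/background logit per pixel.
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # downsample: 64 -> 32
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=2),          # upsample: 32 -> 64
            nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1),                  # per-pixel tumour logit
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = TinySegNet()
scan = torch.randn(4, 1, 64, 64)                 # dummy batch of scans
mask_logits = model(scan)                        # (4, 1, 64, 64) segmentation logits
loss = nn.BCEWithLogitsLoss()(mask_logits, torch.zeros_like(mask_logits))
print(mask_logits.shape, float(loss))
```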

Guarding On-device Machine Learning Models via Privacy-enhancing Techniques

On-device machine learning (ML) is rapidly gaining popularity on mobile devices. Mobile developers can use on-device ML to enable ML features on users’ mobile devices, such as face recognition, augmented reality, voice assistance, and medical diagnosis. This new paradigm is further accelerated by AI chips and ASICs embedded in mobile devices, e.g., Apple’s Bionic neural engine. Compared to cloud-based machine learning services, on-device ML is privacy-friendly, has low latency, and can work offline: user data remains on the mobile device for ML inference.

Supervisor:

Operations of Intelligent Software Systems

Nowadays more and more intelligent software solutions are emerging in our daily lives, for example face recognition, smart voice assistants, and autonomous vehicles. As data-driven solutions, intelligent components learn their decision logic from a data corpus in an end-to-end manner and act as black boxes. Without rigorous validation and verification, intelligent solutions are error-prone, especially when deployed in real-world environments. Monitoring, identifying, mitigating and fixing these defects is therefore extremely important to ensure service quality and user experience.
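
One simple operational safeguard, sketched below under our own assumptions (the project is by no means limited to this), is to wrap a deployed model and flag low-confidence predictions for logging or human review.

```python
# Toy runtime monitor: route low-confidence predictions of a black-box model to
# a review queue, one basic way to surface potential defects after deployment.
import numpy as np

def softmax(logits):
    z = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return z / z.sum(axis=-1, keepdims=True)

class MonitoredModel:
    def __init__(self, predict_fn, threshold=0.7):
        self.predict_fn = predict_fn      # the black-box intelligent component
        self.threshold = threshold
        self.flagged = []                 # inputs routed to human review / logging

    def __call__(self, x):
        probs = softmax(self.predict_fn(x))
        if probs.max() < self.threshold:
            self.flagged.append((x, float(probs.max())))
        return int(probs.argmax())

# Hypothetical stand-in for a deployed 3-class classifier.
rng = np.random.default_rng(0)
fake_model = lambda x: rng.normal(size=3)
monitor = MonitoredModel(fake_model)
decisions = [monitor(f"input-{i}") for i in range(100)]
print(f"{len(monitor.flagged)} of 100 predictions flagged as low-confidence")
```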

Supervisor: Dr Xiaoning Du

Privacy-Preserving Machine Learning

With success stories ranging from speech recognition to self-driving cars, machine learning (ML) has been one of the most impactful areas of computer science. ML’s versatility stems from the wealth of techniques it offers, making ML seem an excellent tool for any task that involves building a model from data. Nevertheless, ML makes an implicit overarching assumption that severely limits its applicability to a broad class of critical domains: the data owner is willing to disclose the data to the model builder/holder.

A Smart Software Vulnerability Detection Platform

Identifying vulnerabilities in real-world applications is challenging. Currently, static analysis tools suffer from false positives, while runtime detection tools are free of false positives but too inefficient to achieve full-spectrum examination. A generic, scalable and effective vulnerability detection platform that takes advantage of both static and dynamic techniques is desirable. To further overcome the shortcomings of these techniques, deep learning is increasingly being used for static vulnerability localisation and for improving fuzzing efficiency.
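
As a toy illustration of the dynamic side, the sketch below (a deliberately simplified assumption, not the platform itself) randomly fuzzes a small, intentionally buggy parser and reports the first unexpected crash.

```python
# Minimal random fuzzer: feed generated inputs to a target function and treat
# any exception other than the expected rejection as a candidate defect.
import random
import string

def parse_record(data: str) -> dict:
    # Deliberately buggy target: crashes when the value after '=' is empty.
    key, value = data.split("=", 1)
    return {"key": key, "length": 10 // len(value)}   # ZeroDivisionError if value == ""

def random_input(rng):
    chars = string.ascii_letters + "="
    return "".join(rng.choice(chars) for _ in range(rng.randint(1, 8)))

rng = random.Random(1234)
for i in range(10_000):
    sample = random_input(rng)
    try:
        parse_record(sample)
    except ValueError:
        pass                        # expected rejection of malformed input
    except Exception as exc:        # anything else is a candidate vulnerability
        print(f"crash #{i}: input={sample!r} -> {type(exc).__name__}: {exc}")
        break
```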

Supervisor: Dr Xiaoning Du

Towards secure and trustworthy deep learning systems

Over the past decades, we have witnessed the emergence and rapid development of deep learning. DL has been successfully deployed in many real-life applications, including face recognition, automatic speech recognition, and autonomous driving. However, due to intrinsic vulnerabilities and the lack of rigorous verification, DL systems suffer from quality and security issues, such as Alexa/Siri voice-command manipulation and autonomous-car accidents. Developing secure and trustworthy DL systems is challenging, especially given a limited time budget.
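
As a small taste of this attack surface, the sketch below (our own illustrative assumption, using an untrained toy model) applies a one-step FGSM-style perturbation that pushes an input in the direction that increases the classifier's loss; on a trained model such perturbations often flip the prediction.

```python
# Fast Gradient Sign Method (FGSM) mechanics on a toy classifier: perturb the
# input by epsilon times the sign of the loss gradient with respect to the input.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))
x = torch.randn(1, 20, requires_grad=True)
label = torch.tensor([0])

loss = nn.CrossEntropyLoss()(model(x), label)
loss.backward()                                   # gradient of the loss w.r.t. x

epsilon = 0.5
x_adv = x + epsilon * x.grad.sign()               # step that increases the loss

with torch.no_grad():
    print("clean prediction:      ", model(x).argmax(dim=1).item())
    print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
```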

Supervisor: Dr Xiaoning Du

Fairness testing of AI-based systems

Machine learning is being used to make important decisions affecting people's lives, such as filtering loan applicants, deploying police officers, and informing bail and parole decisions, among other things. Machine learning has been found to introduce and perpetuate discriminatory practices by unintentionally encoding existing human biases and introducing new ones. In this project, we will develop automated testing approaches that can be used to verify that machine learning models are not biased.
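
A minimal example of the kind of property such tests can check, shown below on assumed synthetic data (not the project's actual method or datasets), is the demographic parity gap between two groups of applicants.

```python
# Toy fairness check: compare positive-decision rates across a protected
# attribute and flag the model if the gap exceeds a chosen tolerance.
import numpy as np

rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=1000)             # protected attribute (0 or 1)
# Hypothetical model decisions: group 1 is approved slightly more often.
approved = rng.random(1000) < np.where(group == 1, 0.55, 0.45)

rate_0 = approved[group == 0].mean()
rate_1 = approved[group == 1].mean()
parity_gap = abs(rate_1 - rate_0)

print(f"approval rates: group0={rate_0:.2f}, group1={rate_1:.2f}, gap={parity_gap:.2f}")
# A simple test oracle: fail if demographic parity is violated beyond tolerance.
assert parity_gap < 0.2, "demographic parity violated beyond tolerance"
```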

Supervisor: Aldeida Aleti