
Research projects in Information Technology



Disentangled Representation Learning for Synthetic Data Generation and Privacy Protection

Synthetic data generation has drawn growing attention due to the lack of training data in many application domains. It is useful for privacy-sensitive applications, e.g. digital health applications based on electronic medical records. It is also attractive for novel applications, e.g. multimodal applications in the metaverse, which have little data for training and evaluation. This project focuses on synthetic data generation for audio and the corresponding multimodal applications, such as mental health chatbots and digital assistants for negotiations.
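
For illustration, one common route to disentangled representations is the β-VAE objective, which re-weights the KL term of a variational autoencoder so that individual latent dimensions tend to capture separate factors of variation. The sketch below is a minimal PyTorch version under that assumption; the function name and the choice of beta are illustrative, not part of the project description.

```python
import torch
import torch.nn.functional as F

def beta_vae_loss(x, x_recon, mu, logvar, beta=4.0):
    # Reconstruction term: how closely the decoded sample matches the input
    # (for audio, x could be a batch of spectrograms).
    recon = F.mse_loss(x_recon, x, reduction="sum")
    # KL term: pulls the approximate posterior q(z|x) toward N(0, I);
    # weighting it with beta > 1 pressures latent dimensions to disentangle.
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + beta * kl
```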

Supervisor: Dr Lizhen Qu

[NextGen] Secure and Privacy-Enhancing Federated Learning: Algorithms, Framework, and Applications to NLP and Medical AI

Federated learning (FL) is an emerging machine learning paradigm that enables distributed clients (e.g., mobile devices) to jointly train a machine learning model without pooling their raw data into a centralised server. Because data never leaves the clients, FL systematically mitigates the privacy risks of centralised machine learning and naturally complies with rigorous data privacy regulations, such as the GDPR and the Privacy Act 1988.
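
As a rough sketch of the training loop described above, here is a toy FedAvg round on a logistic-regression model in NumPy. All names and hyperparameters are illustrative; the point is that only model weights, never raw data, travel to the server.

```python
import numpy as np

def local_update(w_global, X, y, lr=0.1, epochs=5):
    # One client's local training on data that never leaves the device
    # (plain gradient descent on logistic regression as a toy stand-in).
    w = w_global.copy()
    for _ in range(epochs):
        preds = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (preds - y) / len(y)
    return w

def fedavg_round(w_global, clients):
    # Server step: average the returned weights, weighted by client data size.
    updates = [local_update(w_global, X, y) for X, y in clients]
    sizes = [len(y) for _, y in clients]
    return np.average(updates, axis=0, weights=sizes)
```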

Supervisor: Xingliang Yuan

Explainable AI (XAI) in Medical Imaging

Are you interested in applying your AI/DL knowledge to the medical domain?

Point of Care MRI - the PoCeMR project

Portable point-of-care medical devices have revolutionised the way in which people receive medical treatment. They can bring timely and adequate care to people in need, and also open up the opportunity to address healthcare inequality for rural and remote communities.

Machine Learning for faster and safer MRI and PET imaging

Machine learning has recently made significant progress in medical imaging applications, including image segmentation, enhancement, and reconstruction.

Funded as an Australian Research Council Discovery Project, this research aims to develop novel physics-informed deep learning methods for Magnetic Resonance Imaging (MRI) and Positron Emission Tomography (PET), with applications in image reconstruction and data analysis.
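
As one concrete example of what "physics-informed" can mean in this setting, many deep MRI reconstruction methods interleave a learned denoiser with a data-consistency step that re-imposes the acquired k-space samples through the Fourier-transform forward model. A minimal NumPy sketch of that step is below; the names are illustrative and this is not the project's specific method.

```python
import numpy as np

def data_consistency(x_net, k_measured, mask):
    # Map the network's image estimate to k-space via the MRI forward model.
    k_net = np.fft.fft2(x_net)
    # Where data was actually acquired (mask == True), trust the scanner;
    # elsewhere, keep the network's prediction.
    k_dc = np.where(mask, k_measured, k_net)
    # Back to the image domain (MRI images are complex-valued in general).
    return np.fft.ifft2(k_dc)
```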

Guarding On-device Machine Learning Models via Privacy-enhancing Techniques

On-device machine learning (ML) is rapidly gaining popularity on mobile devices. Mobile developers can use on-device ML to enable ML features on users' mobile devices, such as face recognition, augmented and virtual reality, voice assistance, and medical diagnosis. This new paradigm is further accelerated by AI chips and ASICs embedded in mobile devices, e.g., Apple's Bionic neural engine. Compared to cloud-based machine learning services, on-device ML is privacy-friendly, offers low latency, and can work offline.

Supervisor: Xingliang Yuan

Practical Attacks against Deep Learning Apps

With the growing computing power of mobile devices, deep learning (DL) models are deployed in mobile apps to enable more private, accurate, and fast services for end-users. To facilitate the deployment of DL to apps, major companies have been extending DL frameworks to mobile platforms, e.g., TensorFlow Lite, PyTorch Mobile, Caffe2 Mobile, etc. In particular, in November 2021 Google enabled training with TensorFlow Lite in mobile apps in addition to inference (https://blog.tensorflow.org/2021/11/on-device-training-in-tensorflow-lite.html).
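
To see why such deployments are attack-prone, note how little it takes to run a .tflite model once it has been extracted from an app package. A minimal sketch using the standard TensorFlow Lite Python interpreter follows; the file name and dummy input are placeholders.

```python
import numpy as np
import tensorflow as tf

# Load a model file pulled out of an app package (the path is hypothetical).
interpreter = tf.lite.Interpreter(model_path="extracted_model.tflite")
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Query the model with dummy data shaped to its input tensor; an attacker
# can probe it the same way to clone behaviour or craft adversarial inputs.
x = np.zeros(inp["shape"], dtype=inp["dtype"])
interpreter.set_tensor(inp["index"], x)
interpreter.invoke()
pred = interpreter.get_tensor(out["index"])
```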

Supervisor: Xingliang Yuan

Privacy-Aware Rewriting

Despite the popularity of text analysis as a service offered by high-tech companies, it is still challenging to develop and deploy NLP applications involving sensitive and demographic information, especially when the information is expected to be shared with transparency and legislative compliance. Differential privacy (DP) is widely applied to protect the privacy of individuals by achieving an attractive trade-off between the utility of information and confidentiality.
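
For intuition about that trade-off, the canonical DP building block is the Laplace mechanism, which buys confidentiality at the cost of utility through the privacy budget epsilon. A minimal sketch, with illustrative parameter names:

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    # Release true_value with epsilon-differential privacy by adding
    # Laplace noise scaled to sensitivity / epsilon. A smaller epsilon
    # means stronger privacy but a noisier, less useful answer.
    rng = rng or np.random.default_rng()
    return true_value + rng.laplace(scale=sensitivity / epsilon)
```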

Supervisor: Dr Lizhen Qu

Explainable Artificial Creativity

Explainable AI (XAI), a sub-field of AI, has highlighted the need for transparent AI models that can communicate important aspects of their process and decision making to their users. There is an important knowledge gap concerning the analysis, use and application of XAI techniques in creativity. Creative AI systems are passive participants in much of the creative process, partly because of the lack of mechanisms to give an account of the reasoning behind their operation. This is analogous to human-produced work being evaluated and discussed without giving a voice to its creator.

Training Safe Machine Learning Models Using Mathematical Optimization

Machine learning models have significantly improved the ability of autonomous systems to solve challenging tasks, such as image recognition, speech recognition and natural language processing. The rapid deployment of such models in safety-critical systems has resulted in increased interest in the development of machine learning models that are robust and interpretable.
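
One standard optimization view of robustness is the min-max objective, where training minimizes the loss under a worst-case bounded perturbation of each input. Below is a toy NumPy sketch for logistic regression, with the inner maximization approximated by a one-step signed-gradient (FGSM-style) perturbation; this illustrates the formulation, not the project's specific method.

```python
import numpy as np

def robust_grad(w, X, y, eps=0.1):
    # Inner maximization: worst-case L-infinity perturbation of the inputs,
    # approximated by one signed-gradient step (the gradient of the logistic
    # loss with respect to X is the outer product of (p - y) and w).
    p = 1.0 / (1.0 + np.exp(-X @ w))
    X_adv = X + eps * np.sign(np.outer(p - y, w))
    # Outer minimization: ordinary loss gradient at the perturbed inputs.
    p_adv = 1.0 / (1.0 + np.exp(-X_adv @ w))
    return X_adv.T @ (p_adv - y) / len(y)
```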

Supervisor: Dr Buser Say