Research projects in Information Technology

Displaying 11 - 20 of 37 projects.


[Malaysia] AI meets Cybersecurity

AI is now trending and impacting diverse application domains beyond IT, from education (ChatGPT) to the natural sciences (protein analysis) to social media.

This PhD research focuses on fusing AI research with cybersecurity research. One current direction is advancing the latest generative AI models for cybersecurity, or vice versa: using cybersecurity techniques to attack AI.

Personal Future Health Prediction

This is one of our CSIRO Next Generation AI graduate programme PhD projects with Future Wellness Group:

https://www.monash.edu/it/ssc/raise/projects/personal-future-health-prediction

Note:  *** Must be Domestic Student i.e. Australian or New Zealand Citizen or Australian Permanent Resident *** for RAISE programme

Project Description

Supervisor: Prof John Grundy

Privacy-Enhancing Technologies for the Social Good

Privacy-Enhancing Technologies (PETs) are a set of cryptographic tools that allow information processing in a privacy-respecting manner. As an example, imagine we have a user, say Alice, who wants to get a service from a service provider, say SerPro. To provide the service, SerPro requests Alice's private information such as a copy of her passport to validate her identity. In a traditional setting, Alice has no choice but to give away her highly sensitive information. 
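One of the simplest PET building blocks behind such privacy-respecting validation is a cryptographic commitment: Alice can bind herself to a value now and reveal it only if and when needed. The sketch below is a minimal hash-based commitment in Python; the passport value and function names are illustrative assumptions, not part of any specific PET deployed by the project.

```python
import hashlib
import secrets

def commit(value: bytes) -> tuple[bytes, bytes]:
    """Commit to a value: publish the digest, keep the nonce private."""
    nonce = secrets.token_bytes(32)  # random nonce hides even guessable values
    digest = hashlib.sha256(nonce + value).digest()
    return digest, nonce

def verify(digest: bytes, nonce: bytes, value: bytes) -> bool:
    """Check that a revealed (nonce, value) pair matches the commitment."""
    return hashlib.sha256(nonce + value).digest() == digest

# Alice commits to her passport number; SerPro learns nothing until she opens it.
digest, nonce = commit(b"P1234567")
assert verify(digest, nonce, b"P1234567")      # honest opening succeeds
assert not verify(digest, nonce, b"P7654321")  # a different value fails
```

Real PETs (zero-knowledge proofs, anonymous credentials) build far richer statements on top of primitives like this, e.g. "I hold a valid passport" without revealing the passport itself.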

Supervisor: Dr Muhammed Esgin

[NextGen] Secure and Privacy-Enhancing Federated Learning: Algorithms, Framework, and Applications to NLP and Medical AI

Federated learning (FL) is an emerging machine learning paradigm that enables distributed clients (e.g., mobile devices) to jointly train a machine learning model without pooling their raw data into a centralised server. Because data never leaves the clients, FL systematically mitigates the privacy risks of centralised machine learning and naturally complies with rigorous data privacy regulations, such as the GDPR and the Privacy Act 1988.
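The core FL loop can be sketched in a few lines. Below is a toy federated-averaging (FedAvg) round for a one-parameter linear model, assuming two hypothetical clients holding disjoint samples of y = 2x; only locally updated weights, never raw data, reach the server.

```python
def local_update(w: float, data: list[tuple[float, float]], lr: float = 0.1) -> float:
    """One local gradient-descent step for the 1-D linear model y = w * x."""
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad

def fedavg(global_w: float, client_datasets, lr: float = 0.1) -> float:
    """Server averages the clients' locally updated weights (FedAvg)."""
    updates = [local_update(global_w, d, lr) for d in client_datasets]
    return sum(updates) / len(updates)

# Two clients with private, disjoint samples of y = 2x.
clients = [[(1.0, 2.0), (2.0, 4.0)], [(3.0, 6.0)]]
w = 0.0
for _ in range(50):
    w = fedavg(w, clients)
# w converges toward the true slope 2.0 without either client sharing its data.
```

Production systems add sampling of clients per round, multiple local epochs, and the secure-aggregation and differential-privacy layers this project targets; the sketch above only shows the communication pattern.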

Supervisor:

Explainable AI (XAI) in Medical Imaging

Are you interested in applying your AI/DL knowledge to the medical domain?

Guarding On-device Machine Learning Models via Privacy-enhancing Techniques

On-device machine learning (ML) is rapidly gaining popularity on mobile devices. Mobile developers can use on-device ML to enable ML features on users' mobile devices, such as face recognition, augmented and virtual reality, voice assistance, and medical diagnosis. This new paradigm is further accelerated by AI chips and ASICs embedded in mobile devices, e.g., Apple's Bionic neural engine. Compared to cloud-based machine learning services, on-device ML is privacy-friendly, has low latency, and can work offline.

Supervisor:

Operations of Intelligent Software Systems

Nowadays more and more intelligent software solutions emerge in our daily lives, for example face recognition, smart voice assistants, and autonomous vehicles. As data-driven solutions, intelligent components learn their decision logic from data corpora in an end-to-end manner and act as black boxes. Without rigorous validation and verification, intelligent solutions are error-prone, especially when deployed in real-world environments. Monitoring, identifying, mitigating and fixing these defects is therefore extremely important to ensure their service quality and user experience.

Supervisor: Dr Xiaoning Du

Interoperability (using FHIR) in cutting-edge medical software systems

Critical work to the future of healthcare ... exploring the role of #FHIR in interoperability and #datascience

Also allowing exploration and usage of the #SMART on #FHIR software paradigm 

Involves working with various real world health services and health IT partners 

#digitalhealth #health #EMR #hospital #software

Supervisor: Chris Bain

Privacy-Preserving Machine Learning

With success stories ranging from speech recognition to self-driving cars, machine learning (ML) has been one of the most impactful areas of computer science. ML’s versatility stems from the wealth of techniques it offers, making ML seem an excellent tool for any task that involves building a model from data. Nevertheless, ML makes an implicit overarching assumption that severely limits its applicability to a broad class of critical domains: the data owner is willing to disclose the data to the model builder/holder.
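One way around that disclosure assumption is to let the model builder compute on data it never sees. The sketch below illustrates additive secret sharing, a standard primitive behind secure aggregation: each data owner splits its private value into random shares, and the server can recover only the sum, never any individual input. The party count and values are illustrative assumptions.

```python
import random

PRIME = 2**31 - 1  # all arithmetic is done modulo a public prime

def share(secret: int, n: int) -> list[int]:
    """Split a secret into n additive shares; any n-1 shares reveal nothing."""
    shares = [random.randrange(PRIME) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % PRIME)
    return shares

# Three data owners secret-share their private values across three parties.
values = [12, 7, 30]
all_shares = [share(v, 3) for v in values]

# Party i sums the shares it received and forwards only that partial sum.
partials = [sum(col) % PRIME for col in zip(*all_shares)]
total = sum(partials) % PRIME
assert total == sum(values) % PRIME  # the aggregate (49) is recovered exactly
```

Techniques such as secure multi-party computation and homomorphic encryption generalise this idea from sums to full model training, which is the setting this project studies.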

A Smart Software Vulnerability Detection Platform

Identifying vulnerabilities in real-world applications is challenging. Currently, static analysis tools suffer from false positives, while runtime detection tools are free of false positives but too inefficient to achieve full-spectrum examination. A generic, scalable and effective vulnerability detection platform, taking advantage of both static and dynamic techniques, is therefore desirable. To further overcome the shortcomings of these techniques, deep learning is increasingly used for static vulnerability localization and for improving fuzzing efficiency.
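To make the static side of this concrete, here is a toy static check in Python using the standard-library `ast` module: it flags calls to dangerous builtins such as `eval`. The sink list and snippet are illustrative assumptions; a real platform would combine many such analyses with dynamic confirmation to prune false positives.

```python
import ast

DANGEROUS = {"eval", "exec"}  # sinks this toy static analysis flags

def find_dangerous_calls(source: str) -> list[tuple[int, str]]:
    """Return (line, name) for every direct call to a known-dangerous builtin."""
    hits = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id in DANGEROUS):
            hits.append((node.lineno, node.func.id))
    return hits

snippet = "x = input()\nresult = eval(x)\n"
print(find_dangerous_calls(snippet))  # -> [(2, 'eval')]
```

A dynamic tool would then try to reach line 2 with attacker-controlled input (e.g., via fuzzing) to confirm the finding, which is exactly the static-dynamic interplay the project proposes.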

Supervisor: Dr Xiaoning Du