Research projects in Information Technology
Securing Generative AI for Digital Trust
LLMs for learning and retrieving software knowledge
The primary objective of this project is to enhance Large Language Models (LLMs) by incorporating software knowledge documentation. Our approach involves utilizing existing LLMs and refining them using data extracted from software repositories. This fine-tuning process aims to enable the models to provide answers to queries related to software development tasks.
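The fine-tuning pipeline described above starts from data mined out of software repositories. A minimal, hypothetical sketch of that data-preparation step is below; the record fields and prompt template are illustrative assumptions, not the project's actual pipeline:

```python
# Hypothetical sketch: turn mined repository items (e.g. an issue and its
# accepted answer) into prompt/completion pairs for supervised fine-tuning.
# Field names ("question", "answer") and the template are assumptions.

def to_instruction_pair(record):
    """Convert one mined Q&A item into a fine-tuning example."""
    prompt = (
        "You are a software development assistant.\n"
        f"Question: {record['question']}\n"
        "Answer:"
    )
    return {"prompt": prompt, "completion": " " + record["answer"].strip()}

mined = [
    {"question": "How do I revert the last commit?",
     "answer": "Run `git revert HEAD` to create an inverse commit."},
]
pairs = [to_instruction_pair(r) for r in mined]
print(pairs[0]["prompt"])
```

Pairs in this shape can then be fed to whichever supervised fine-tuning framework the project adopts.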
[Malaysia] AI meets Cybersecurity
AI is now trending and is impacting diverse application domains beyond IT, from education (ChatGPT) to the natural sciences (protein analysis) to social media.
This PhD research focuses on fusing AI research with cybersecurity research. One current direction is advancing the latest generative AI models for cybersecurity, or, conversely, using cybersecurity techniques to attack AI.
Privacy-Enhancing Technologies for the Social Good
Privacy-Enhancing Technologies (PETs) are a set of cryptographic tools that allow information processing in a privacy-respecting manner. As an example, imagine we have a user, say Alice, who wants to get a service from a service provider, say SerPro. To provide the service, SerPro requests Alice's private information such as a copy of her passport to validate her identity. In a traditional setting, Alice has no choice but to give away her highly sensitive information.
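The Alice/SerPro pattern above, validating information without handing over the raw document, can be hinted at with a simple hash-based commitment. This is a deliberately tiny sketch, not one of the project's actual techniques; real PETs such as anonymous credentials and zero-knowledge proofs are far more capable:

```python
import hashlib
import secrets

def commit(value: bytes):
    """Alice publishes only a digest of her sensitive value, keeping the
    value and a random nonce private until (and unless) she opens it."""
    nonce = secrets.token_bytes(16)
    digest = hashlib.sha256(nonce + value).digest()
    return digest, nonce

def verify(digest: bytes, nonce: bytes, value: bytes) -> bool:
    """SerPro checks an opened commitment against the earlier digest."""
    return hashlib.sha256(nonce + value).digest() == digest

# Alice commits without revealing; SerPro can later verify an opening.
digest, nonce = commit(b"passport-no: X1234567")
print(verify(digest, nonce, b"passport-no: X1234567"))  # True
print(verify(digest, nonce, b"passport-no: FORGED"))    # False
```

The point of the sketch: the binding between Alice and her data can be checked without the data sitting in SerPro's database in the clear.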
[NextGen] Secure and Privacy-Enhancing Federated Learning: Algorithms, Framework, and Applications to NLP and Medical AI
Federated learning (FL) is an emerging machine learning paradigm that enables distributed clients (e.g., mobile devices) to jointly train a machine learning model without pooling their raw data into a centralised server. Because data never leaves the user clients, FL systematically mitigates the privacy risks of centralised machine learning and naturally complies with rigorous data privacy regulations, such as the GDPR and the Privacy Act 1988.
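The core FL loop can be sketched with federated averaging (FedAvg): each client trains locally on its private data and sends only model weights to the server, which averages them. The one-parameter linear model below is an assumption kept deliberately tiny for illustration:

```python
def local_step(w, data, lr=0.1):
    """One local gradient step for the model y = w * x with squared error."""
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad

def fed_avg(w, client_datasets, rounds=50):
    """FedAvg: clients update the global model on private data; the
    server averages the returned weights. Raw data never moves."""
    for _ in range(rounds):
        local_ws = [local_step(w, d) for d in client_datasets]
        w = sum(local_ws) / len(local_ws)
    return w

# Two clients whose private datasets both follow y = 3x.
clients = [[(1.0, 3.0), (2.0, 6.0)], [(3.0, 9.0)]]
w = fed_avg(0.0, clients)
print(round(w, 2))  # converges to 3.0
```

Production FL systems add client sampling, secure aggregation, and differential privacy on top of this basic loop, which is exactly where the "secure and privacy-enhancing" aspects of the project come in.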
Explainable AI (XAI) in Medical Imaging
Are you interested in applying your AI/DL knowledge to the medical domain?
Guarding On-device Machine Learning Models via Privacy-enhancing Techniques
On-device machine learning (ML) is rapidly gaining popularity on mobile devices. Mobile developers can use on-device ML to enable ML features on users’ mobile devices, such as face recognition, augmented reality, voice assistance, and medical diagnosis. This new paradigm is further accelerated by AI chips and ASICs embedded in mobile devices, e.g., Apple’s Bionic neural engine. Compared to cloud-based machine learning services, on-device ML is privacy-friendly, has low latency, and can work offline.
Operations of Intelligent Software Systems
Nowadays, more and more intelligent software solutions emerge in our daily life, for example face recognition, smart voice assistants, and autonomous vehicles. As data-driven solutions, intelligent components learn their decision logic from data corpora in an end-to-end manner and act as black boxes. Without rigorous validation and verification, intelligent solutions are error-prone, especially when deployed in real-world environments. Monitoring, identifying, mitigating, and fixing these defects is therefore extremely important to ensure service quality and user experience.
Interoperability (using FHIR) in cutting-edge medical software systems
Critical work for the future of healthcare ... exploring the role of #FHIR in interoperability and #datascience
Also allowing exploration and usage of the #SMART on #FHIR software paradigm
Involves working with various real world health services and health IT partners
#digitalhealth #health #EMR #hospital #software
Privacy-Preserving Machine Learning
With success stories ranging from speech recognition to self-driving cars, machine learning (ML) has been one of the most impactful areas of computer science. ML’s versatility stems from the wealth of techniques it offers, making ML seem an excellent tool for any task that involves building a model from data. Nevertheless, ML makes an implicit overarching assumption that severely limits its applicability to a broad class of critical domains: the data owner is willing to disclose the data to the model builder/holder.