Research projects in Information Technology
NeuroDistSys (NDS): Optimized Distributed Training and Inference on Large-Scale Distributed Systems
In NeuroDistSys (NDS), we aim to design and implement cutting-edge techniques to optimize the training and inference of Machine Learning (ML) models across large-scale distributed systems. Leveraging advanced AI and distributed computing strategies, this project focuses on deploying ML models on real-world distributed infrastructures, improving system performance, scalability, and efficiency by optimizing the usage of resources such as GPUs and CPUs and reducing energy consumption.
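As a minimal illustration of one technique in this space, the sketch below simulates synchronous data-parallel training, a standard approach to distributed optimization: each worker computes a gradient on its own data shard, and an averaged (all-reduce-style) gradient drives every weight update. The toy linear model, function names, and hyperparameters are illustrative assumptions, not part of the NDS design.

```python
# Toy synchronous data-parallel SGD: workers hold disjoint data shards and
# their gradients are averaged (as an all-reduce would) before each update.

def local_gradient(w, shard):
    """Gradient of mean squared error for the model y = w * x on one shard."""
    g = 0.0
    for x, y in shard:
        g += 2 * (w * x - y) * x
    return g / len(shard)

def all_reduce_mean(grads):
    """Stand-in for an all-reduce collective: average the workers' gradients."""
    return sum(grads) / len(grads)

def train(shards, w=0.0, lr=0.01, steps=200):
    for _ in range(steps):
        grads = [local_gradient(w, s) for s in shards]  # parallel in practice
        w -= lr * all_reduce_mean(grads)                # synchronized update
    return w

# Two workers, data generated from y = 3x: training should recover w close to 3.
data = [(x, 3.0 * x) for x in range(1, 9)]
shards = [data[:4], data[4:]]
w = train(shards)
```

In a real deployment the per-worker gradient computation runs on separate GPUs and the averaging is a network collective; the synchronization pattern is the same.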
Autonomous Vehicles for Urban Transit Optimisation
Public transportation is vital for sustainable urban mobility, yet challenges such as inefficient first- and last-mile connectivity and over-reliance on private cars hinder its effectiveness. Autonomous vehicles (AVs) offer transformative potential by enabling diverse, on-demand mobility solutions tailored to specific trip needs, enhancing connectivity, and reducing emissions. However, current research often overlooks the complexities of mixed-vehicle environments and the development of optimal deployment, routing, and charging strategies.
SmartScaleSys (S3): AI-Driven Resource Management for Efficient and Sustainable Large-Scale Distributed Systems
In SmartScaleSys (S3), we aim to design and build resource management solutions that learn from usage patterns, predict future needs, and allocate resources to minimize the latency, energy consumption, and cost of running diverse applications in large-scale distributed systems. This project offers researchers and students the chance to explore cutting-edge concepts in AI-driven infrastructure management, distributed computing, and energy-aware computing, preparing them for impactful roles in industry and research.
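The predict-then-allocate loop described above can be sketched very simply: forecast demand from recent usage, then provision just enough capacity. Everything here is an illustrative assumption (the moving-average forecaster, the trace, and the per-instance capacity), not the S3 design.

```python
# Minimal prediction-driven autoscaling sketch: forecast demand from a
# usage trace, then allocate the fewest instances that can serve it.

def predict_next(history, window=3):
    """Forecast the next demand as a moving average of recent observations."""
    recent = history[-window:]
    return sum(recent) / len(recent)

def instances_needed(predicted_demand, capacity_per_instance=100):
    """Allocate just enough instances for the predicted demand (ceiling)."""
    return max(1, -(-int(predicted_demand) // capacity_per_instance))

trace = [120, 180, 240, 300, 330]  # requests/sec observed so far
forecast = predict_next(trace)     # (240 + 300 + 330) / 3 = 290.0
n = instances_needed(forecast)     # ceil(290 / 100) = 3 instances
```

A production system would replace the moving average with a learned model and would also weigh energy and cost, but the control loop (observe, predict, allocate) is the same.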
Investigating Security and Privacy Issues in Real-World Asset Tokenization
This project will explore the security and privacy challenges inherent in the tokenization of real-world assets (RWAs) using blockchain technology. As industries increasingly adopt tokenization to digitize and trade assets like real estate, commodities, and fine art, ensuring the security and privacy of these transactions becomes critical. The research will focus on identifying vulnerabilities in existing tokenization frameworks, analyzing potential risks, and developing novel security protocols to protect sensitive data and ensure the integrity of tokenized assets.
Understanding and detecting mis/disinformation
In the era of digital communication, mis/disinformation (also known as fake news) poses a significant challenge to society, affecting public opinion, decision-making processes, and even democratic systems. We still know little about the features of this communication, the manipulation techniques employed, and the types of people who are most susceptible to believing this information.
This project extends Prof Whitty's work in this field to address one of the issues above.
Human Factors in Cyber Security: Understanding Cyberscams
Online fraud, also referred to as cyberscams, is an increasingly serious cybersecurity problem that technical specialists are unable to detect effectively. Given the difficulty of detecting scams automatically, the onus of detection often falls back on humans. Gamification and awareness campaigns are regularly researched and implemented in workplaces to prevent people from being tricked by scams, which can lead to identity theft or to individuals being conned out of money.
Privacy-preserving machine unlearning
This project will design efficient privacy-preserving methods for different machine learning tasks, including training, inference, and unlearning.
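The project's specific methods are not described here; as background, one well-known exact-unlearning idea is SISA-style sharded training: train one sub-model per data shard, aggregate their predictions, and on a deletion request retrain only the shard that held the removed example. The toy "models" below (shard-label means) and all names are illustrative.

```python
# Toy SISA-style unlearning: deleting a training point retrains only the
# sub-model for the shard that contained it, not the whole ensemble.

def train_submodel(shard):
    """Toy 'model': predicts the mean label of its shard."""
    return sum(y for _, y in shard) / len(shard)

def predict(models):
    """Aggregate the ensemble by averaging sub-model outputs."""
    return sum(models) / len(models)

def unlearn(shards, models, point):
    """Remove one training point and retrain only its shard's sub-model."""
    for i, shard in enumerate(shards):
        if point in shard:
            shard.remove(point)
            models[i] = train_submodel(shard)  # cost: one shard, not all data
            break
    return models

shards = [[("a", 1.0), ("b", 3.0)], [("c", 5.0), ("d", 7.0)]]
models = [train_submodel(s) for s in shards]  # [2.0, 6.0]
models = unlearn(shards, models, ("b", 3.0))  # retrains shard 0 only
out = predict(models)
```

The efficiency question the project targets is visible even in this toy: unlearning touches one shard, so deletion cost scales with shard size rather than dataset size.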
Enhancing Privacy Preservation in Machine Learning
This research project aims to address the critical need for privacy-enhancing techniques in machine learning (ML) applications, particularly in scenarios involving sensitive or confidential data. With the widespread adoption of ML algorithms for data analysis and decision-making, preserving the privacy of individuals' data has become a paramount concern.
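One widely used privacy-enhancing technique in this area is differential privacy: perturb a query's answer with calibrated noise so that no single individual's record is identifiable. The sketch below is a generic illustration (the dataset, epsilon, and function names are invented), not the project's chosen method.

```python
# Laplace mechanism for a differentially private count. A count query has
# sensitivity 1, so Laplace noise with scale 1/epsilon suffices.

import random

def dp_count(records, predicate, epsilon=1.0):
    """Return a noisy count of records matching the predicate."""
    true_count = sum(1 for r in records if predicate(r))
    # Laplace(0, 1/epsilon) sampled as the difference of two exponentials.
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_count + noise

ages = [23, 35, 41, 29, 57, 62, 33, 48]
noisy = dp_count(ages, lambda a: a >= 40)  # true count is 4, answer is noisy
```

Smaller epsilon means stronger privacy but noisier answers; choosing that trade-off per task is exactly the kind of question such a project studies.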
Securing Generative AI for Digital Trust
LLM models for learning and retrieving software knowledge
The primary objective of this project is to enhance Large Language Models (LLMs) by incorporating software knowledge documentation. Our approach involves taking existing LLMs and refining them with data extracted from software repositories. This fine-tuning aims to enable the models to answer queries related to software development tasks.
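One data-preparation step such a pipeline might include is mining repository docstrings into (question, answer) pairs for supervised fine-tuning. The repository snippet, prompt template, and function names below are invented for illustration; the project's actual extraction pipeline is not specified here.

```python
# Turn function docstrings from a source file into fine-tuning Q&A pairs.

import ast

REPO_FILE = '''
def retry(fn, attempts=3):
    """Call fn up to `attempts` times, returning the first success."""

def parse_config(path):
    """Load a YAML configuration file and return it as a dict."""
'''

def mine_qa_pairs(source):
    """Extract (prompt, target) fine-tuning examples from docstrings."""
    pairs = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef):
            doc = ast.get_docstring(node)
            if doc:
                prompt = f"What does the function `{node.name}` do?"
                pairs.append((prompt, doc))
    return pairs

examples = mine_qa_pairs(REPO_FILE)
```

Pairs like these would then be formatted into the chosen model's instruction template and used as supervised fine-tuning data.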