Collecting and analysing social media content (e.g., from Reddit), together with Google Trends data, offers a valuable opportunity to develop social media epidemic intelligence. This approach can improve understanding of chronic conditions such as arthritis, back pain, and knee pain, and help track associated topics such as treatments and risk factors, including obesity, diet, physical activity, and exercise.
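A minimal data-collection sketch in Python is shown below. It assumes the third-party pytrends and praw libraries; the keyword list, subreddit, search query, and the Reddit API credentials are placeholders for illustration only, not part of any existing pipeline.

```python
from pytrends.request import TrendReq
import praw

# Google Trends: interest over time for a few illustrative condition keywords.
pytrends = TrendReq(hl="en-US", tz=360)
pytrends.build_payload(kw_list=["arthritis", "back pain", "knee pain"],
                       timeframe="today 5-y", geo="AU")
trends = pytrends.interest_over_time()
print(trends.tail())

# Reddit: fetch a handful of posts mentioning a condition (placeholder credentials).
reddit = praw.Reddit(client_id="YOUR_CLIENT_ID",
                     client_secret="YOUR_CLIENT_SECRET",
                     user_agent="epidemic-intelligence-demo")
for post in reddit.subreddit("ChronicPain").search("knee pain", limit=5):
    print(post.title)
```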
Honours and Minor Thesis projects
Current studies on diabetes recommender systems and apps mainly focus on the performance and personalisation of AI models and techniques, including machine learning and deep learning models that are trained on user data. These works often use a one-size-fits-all approach for presenting information to users. Yet, research shows that humans process information in different ways, and their attitudes towards an action depend on their attitude-function styles.
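To make the gap concrete, the sketch below tailors how the same recommendation is worded to a user's attitude-function style. The style labels follow the functional theory of attitudes (utilitarian, knowledge, value-expressive, ego-defensive); the message templates are purely illustrative assumptions, not the wording of any existing app.

```python
# Illustrative only: templates and mapping are assumptions, not an existing system.
FRAMING_TEMPLATES = {
    "utilitarian": "Doing this can lower your HbA1c and reduce medication costs: {action}.",
    "knowledge": "Here is how it works and the evidence behind it: {action}.",
    "value_expressive": "This fits the healthy, active lifestyle you care about: {action}.",
    "ego_defensive": "Many people find this hard at first; a small step is enough: {action}.",
}

def frame_recommendation(action: str, style: str) -> str:
    """Wrap the same recommended action in wording matched to the user's style."""
    template = FRAMING_TEMPLATES.get(style, "{action}")  # fall back to plain text
    return template.format(action=action)

print(frame_recommendation("take a 20-minute walk after lunch", "utilitarian"))
print(frame_recommendation("take a 20-minute walk after lunch", "ego_defensive"))
```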
This project aims to develop effective machine learning algorithms for detecting deepfake videos, which have become a significant concern for disinformation and cybersecurity. The objectives include pre-processing the data for feature extraction and training machine learning models to accurately classify videos as either real or manipulated. The methodology involves advanced techniques such as convolutional neural networks, recurrent neural networks, or video vision transformer models to analyse visual and temporal patterns in the videos.
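A minimal PyTorch sketch of one of the approaches mentioned above (a per-frame CNN feeding a recurrent network) is shown below; the architecture sizes and the untrained ResNet-18 backbone are assumptions for illustration, not the project's prescribed model.

```python
import torch
import torch.nn as nn
from torchvision import models

class DeepfakeDetector(nn.Module):
    """Per-frame CNN features followed by an LSTM over the frame sequence."""
    def __init__(self, hidden_dim=256):
        super().__init__()
        backbone = models.resnet18(weights=None)   # pretrained weights could be used instead
        backbone.fc = nn.Identity()                # expose 512-d frame features
        self.backbone = backbone
        self.lstm = nn.LSTM(512, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)       # real vs. manipulated logit

    def forward(self, clips):                      # clips: (B, T, 3, H, W)
        b, t, c, h, w = clips.shape
        feats = self.backbone(clips.view(b * t, c, h, w)).view(b, t, -1)
        _, (h_n, _) = self.lstm(feats)             # temporal summary of the clip
        return self.head(h_n[-1]).squeeze(-1)      # (B,) logits

model = DeepfakeDetector()
dummy = torch.randn(2, 8, 3, 224, 224)             # two clips of 8 frames each
print(model(dummy).shape)                          # torch.Size([2])
```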
Our groundbreaking research explores the intricate relationship between natural language processing (NLP) and electroencephalography (EEG) brain signals [1]. By leveraging advanced machine learning techniques, we aim to decode the neural patterns associated with language comprehension and production, ultimately enabling seamless communication between humans and machines. Our innovative approach has the potential to revolutionize brain-computer interfaces, speech recognition technologies, and assistive devices for individuals with communication impairments.
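As a toy illustration of decoding language-related labels from EEG windows, the Python sketch below uses a small 1-D CNN classifier; the channel count, window length, vocabulary size, and architecture are all assumptions, not the models used in the cited work.

```python
import torch
import torch.nn as nn

class EEGWordDecoder(nn.Module):
    """1-D CNN over multichannel EEG windows, predicting a word/class label."""
    def __init__(self, n_channels=64, n_classes=50):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(n_channels, 128, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(128, 128, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),   # pool over the time axis
        )
        self.classifier = nn.Linear(128, n_classes)

    def forward(self, x):              # x: (batch, channels, time)
        z = self.encoder(x).squeeze(-1)
        return self.classifier(z)

model = EEGWordDecoder()
eeg = torch.randn(4, 64, 512)          # 4 windows, 64 channels, 512 samples each
print(model(eeg).shape)                # torch.Size([4, 50])
```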
This project aims to develop foundation models for detecting anomalies in time series data. Anomalies, such as unusual patterns or unexpected events, can signal critical issues in systems like healthcare, finance, or cybersecurity. Current methods are often limited because they require long training before the model can be tested on a new time series, owing to the complexity and variability of real-world time series data. By leveraging advanced machine learning techniques, this project seeks to create robust and adaptable models that can generalize across diverse time series scenarios.
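One common building block for such models is reconstruction-based anomaly scoring: a transformer reconstructs patches of the series, and poorly reconstructed patches are flagged as anomalous. The PyTorch sketch below illustrates this idea only; patch length, model size, and the scoring rule are assumptions, and a real foundation model would be pretrained across many datasets.

```python
import torch
import torch.nn as nn

class PatchReconstructor(nn.Module):
    """Transformer that reconstructs time-series patches; high reconstruction
    error at test time is treated as a per-patch anomaly score."""
    def __init__(self, patch_len=16, d_model=64, n_layers=2):
        super().__init__()
        self.patch_len = patch_len
        self.embed = nn.Linear(patch_len, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.decode = nn.Linear(d_model, patch_len)

    def forward(self, series):                      # series: (B, L), L divisible by patch_len
        b, l = series.shape
        patches = series.view(b, l // self.patch_len, self.patch_len)
        recon = self.decode(self.encoder(self.embed(patches)))
        return ((recon - patches) ** 2).mean(-1)    # per-patch anomaly score

model = PatchReconstructor()
x = torch.randn(2, 128)                             # two series of length 128
print(model(x).shape)                               # torch.Size([2, 8]) -> 8 patch scores each
```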
Background:
Imagine the human brain as a complex electrical grid, with over 80 billion neurons (nerve cells) acting as power stations. These power stations need to send electrical signals to each other efficiently. Myelin, a special lipid sheath, wraps around the neuron processes (axons) like insulation around electrical wires. This insulation ensures that the signals travel quickly and without losing strength, giving the brain’s “white matter” its name (Figure 1A).
This project focuses on identifying and distinguishing between authentic audio recordings and those that have been artificially generated or manipulated. As voice cloning technology advances, creating realistic audio deepfakes has become easier, raising concerns about misinformation and privacy. To combat this, this project aims to develop machine learning models to analyse audio features such as pitch, tone, cadence, and spectral characteristics.
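A minimal feature-extraction and classification sketch in Python is shown below, assuming the librosa and scikit-learn libraries; the file paths, feature set, and classifier choice are illustrative, not the project's final design.

```python
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression

def audio_features(path, sr=16000):
    """Summarise pitch and spectral characteristics of one recording."""
    y, sr = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr)
    f0 = librosa.yin(y, fmin=65, fmax=400, sr=sr)          # rough pitch track
    return np.concatenate([
        mfcc.mean(axis=1), mfcc.std(axis=1),
        [centroid.mean(), centroid.std(), np.nanmean(f0), np.nanstd(f0)],
    ])

# Placeholder file lists: genuine speech vs. cloned/synthesised speech.
real_paths = ["real_001.wav"]
fake_paths = ["fake_001.wav"]

X = np.stack([audio_features(p) for p in real_paths + fake_paths])
y = np.array([0] * len(real_paths) + [1] * len(fake_paths))
clf = LogisticRegression(max_iter=1000).fit(X, y)
```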
This project aims to identify the geographical position where an audio clip was recorded by analysing sound patterns and audio signals from the surrounding environment. This approach leverages hand-crafted and/or deep features to distinguish between different soundscapes associated with specific locations, like train stations, shopping malls, classrooms, hospitals, parks, and so on. Deep learning models are trained on labelled audio datasets that capture diverse environments and their unique acoustic signatures.
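The sketch below shows one plausible starting point in PyTorch: a small CNN over log-mel spectrograms predicting an acoustic-scene label. The scene set, sample rate, and network size are assumptions for illustration only.

```python
import torch
import torch.nn as nn
import torchaudio

class SceneClassifier(nn.Module):
    """Small CNN over log-mel spectrograms, predicting an acoustic-scene label
    (e.g., train station, shopping mall, classroom)."""
    def __init__(self, n_scenes=6, sample_rate=16000):
        super().__init__()
        self.melspec = torchaudio.transforms.MelSpectrogram(sample_rate=sample_rate, n_mels=64)
        self.to_db = torchaudio.transforms.AmplitudeToDB()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, n_scenes)

    def forward(self, waveform):                                # waveform: (B, samples)
        spec = self.to_db(self.melspec(waveform)).unsqueeze(1)  # (B, 1, mels, frames)
        return self.head(self.cnn(spec).flatten(1))

model = SceneClassifier()
audio = torch.randn(2, 16000)                                   # two 1-second clips
print(model(audio).shape)                                       # torch.Size([2, 6])
```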
This project involves enhancing traditional object detection methods by incorporating human pose estimation to identify weapons in various contexts, especially in surveillance and security applications. This approach leverages computer vision techniques that analyse the positions and movements of individuals, allowing systems to recognise not just the presence of weapons but also the intent and behaviour of the person carrying them.
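A simple way to fuse the two signals is to relate detector output to pose keypoints, for example flagging weapon detections whose centre lies near a wrist. The Python sketch below illustrates this fusion step only; the keypoint names, distance threshold, and input formats are assumptions about the upstream detector and pose estimator.

```python
import math

def box_centre(box):
    """(x1, y1, x2, y2) -> centre point of a detection box."""
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2, (y1 + y2) / 2)

def weapon_in_hand(weapon_boxes, pose_keypoints, max_dist=60.0):
    """Flag weapon detections whose centre lies close to a wrist keypoint.

    weapon_boxes   : list of (x1, y1, x2, y2) boxes from an object detector
    pose_keypoints : one dict per person, e.g. {"left_wrist": (x, y), ...},
                     from a pose estimator (keypoint names are assumptions)
    """
    alerts = []
    for box in weapon_boxes:
        wx, wy = box_centre(box)
        for person_id, kps in enumerate(pose_keypoints):
            for name in ("left_wrist", "right_wrist"):
                if name in kps:
                    kx, ky = kps[name]
                    if math.hypot(wx - kx, wy - ky) <= max_dist:
                        alerts.append({"person": person_id, "box": box, "joint": name})
    return alerts

# Illustrative outputs from hypothetical detector and pose models.
print(weapon_in_hand([(100, 200, 140, 260)],
                     [{"left_wrist": (120, 230), "right_wrist": (300, 230)}]))
```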