This project aims to develop effective machine learning algorithms for detecting deepfake videos, which have become a significant concern for disinformation and cybersecurity. The objectives include pre-processing the data for feature extraction and training machine learning models to accurately classify videos as either real or manipulated. The methodology involves advanced techniques such as convolutional neural networks, recurrent neural networks, or video vision transformer models to analyse visual and temporal patterns in the videos.
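As an illustration of the CNN-plus-recurrent route, the sketch below extracts per-frame features with a small CNN and aggregates them over time with an LSTM to produce a real/fake score. It assumes PyTorch; the layer sizes, frame count, and class names are placeholder choices, not the project's design.

```python
# Minimal sketch: CNN feature extractor + LSTM over frames, binary real/fake output.
import torch
import torch.nn as nn

class FrameCNN(nn.Module):
    """Tiny per-frame feature extractor (illustrative sizes only)."""
    def __init__(self, out_dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(32, out_dim)

    def forward(self, x):                      # x: (batch, 3, H, W)
        return self.fc(self.features(x).flatten(1))

class DeepfakeClassifier(nn.Module):
    """Aggregates frame features over time and emits a fake-probability logit."""
    def __init__(self, feat_dim=128, hidden=64):
        super().__init__()
        self.cnn = FrameCNN(feat_dim)
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)       # logit: real (0) vs manipulated (1)

    def forward(self, video):                  # video: (batch, frames, 3, H, W)
        b, t = video.shape[:2]
        feats = self.cnn(video.flatten(0, 1)).view(b, t, -1)
        _, (h_n, _) = self.lstm(feats)
        return self.head(h_n[-1]).squeeze(-1)

# Example: classify a batch of two 16-frame clips at 112x112 resolution.
model = DeepfakeClassifier()
probs = torch.sigmoid(model(torch.randn(2, 16, 3, 112, 112)))  # probability each clip is fake
```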
This project aims to develop foundation models for detecting anomalies in time series data. Anomalies, such as unusual patterns or unexpected events, can signal critical issues in systems like healthcare, finance, or cybersecurity. Current methods are often limited because, owing to the complexity and variability of real-world time series data, they require lengthy training before a model can be tested on a new time series. By leveraging advanced machine learning techniques, this project seeks to create robust and adaptable models that can generalize across diverse time series scenarios.
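To make the scoring step concrete, here is a minimal reconstruction-error baseline, assuming PyTorch: a model trained once on a pool of series scores windows of an unseen series without per-series retraining. The window length and toy autoencoder are illustrative stand-ins for whatever foundation model the project develops.

```python
# Illustrative baseline only: per-window anomaly scores via reconstruction error.
import torch
import torch.nn as nn

WINDOW = 32  # placeholder window length

class WindowAutoencoder(nn.Module):
    """Toy autoencoder over fixed-length windows (stand-in for a pretrained model)."""
    def __init__(self, window=WINDOW, latent=8):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(window, 64), nn.ReLU(), nn.Linear(64, latent))
        self.dec = nn.Sequential(nn.Linear(latent, 64), nn.ReLU(), nn.Linear(64, window))

    def forward(self, x):
        return self.dec(self.enc(x))

def anomaly_scores(model, series):
    """Slide a window over an unseen series; higher reconstruction error = more anomalous."""
    windows = series.unfold(0, WINDOW, 1)          # (num_windows, WINDOW)
    with torch.no_grad():
        recon = model(windows)
    return ((recon - windows) ** 2).mean(dim=1)

model = WindowAutoencoder()                         # in practice: trained once on many series
scores = anomaly_scores(model, torch.randn(500))    # scores for a new, unseen series
```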
Background:
Imagine the human brain as a complex electrical grid, with over 80 billion neurons (nerve cells) acting as power stations. These power stations need to send electrical signals to each other efficiently. Myelin, a special lipid sheath, wraps around the neuron processes (axons) like insulation around electrical wires. This insulation ensures that the signals travel quickly and without losing strength, giving the brain’s “white matter” its name (Figure 1A).
This project focuses on distinguishing authentic audio recordings from those that have been artificially generated or manipulated. As voice cloning technology advances, creating realistic audio deepfakes has become easier, raising concerns about misinformation and privacy. To combat this threat, the project aims to develop machine learning models that analyse audio features such as pitch, tone, cadence, and spectral characteristics.
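A minimal sketch of that feature-based pipeline is shown below, assuming librosa and scikit-learn are available; the feature set, file lists, and logistic-regression classifier are illustrative placeholders rather than the project's final design.

```python
# Sketch: summarise each recording with simple spectral statistics, then fit a classifier.
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression

def audio_features(path):
    """Summarise a recording with MFCC, spectral-centroid, and zero-crossing statistics."""
    y, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr)
    zcr = librosa.feature.zero_crossing_rate(y)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1),
                           [centroid.mean(), zcr.mean()]])

# Hypothetical file lists; labels: 0 = authentic, 1 = generated/manipulated.
# real_files, fake_files = [...], [...]
# X = np.stack([audio_features(p) for p in real_files + fake_files])
# y = np.array([0] * len(real_files) + [1] * len(fake_files))
# clf = LogisticRegression(max_iter=1000).fit(X, y)
```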
This project aims to identify the geographical position where an audio clip was recorded by analysing sound patterns and audio signals from the surrounding environment. This approach leverages hand-crafted and/or deep features to distinguish between different soundscapes associated with specific locations, like train stations, shopping malls, classrooms, hospitals, parks, and so on. Deep learning models are trained on labelled audio datasets that capture diverse environments and their unique acoustic signatures.
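For the deep-feature route mentioned above, a common starting point is a log-mel spectrogram fed to a small CNN over a handful of scene classes. The sketch assumes librosa and PyTorch; the class list and architecture are illustrative only.

```python
# Sketch: log-mel spectrogram features + small CNN scene classifier.
import librosa
import torch
import torch.nn as nn

SCENES = ["train_station", "shopping_mall", "classroom", "hospital", "park"]  # example classes

def log_mel(path, sr=16000, n_mels=64):
    """Compute a log-scaled mel spectrogram for one recording."""
    y, _ = librosa.load(path, sr=sr)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    return librosa.power_to_db(mel)              # shape: (n_mels, time)

class SceneCNN(nn.Module):
    """Small CNN mapping a spectrogram to per-scene scores."""
    def __init__(self, n_classes=len(SCENES)):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(), nn.Linear(32, n_classes),
        )

    def forward(self, spec):                      # spec: (batch, 1, n_mels, time)
        return self.net(spec)

# Example forward pass on a spectrogram-shaped random tensor.
scores = SceneCNN()(torch.randn(1, 1, 64, 500))
```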
This project involves enhancing traditional object detection methods by incorporating human pose estimation to identify weapons in various contexts, especially in surveillance and security applications. This approach leverages computer vision techniques that analyse the positions and movements of individuals, allowing systems to recognize not just the presence of weapons but also the intent and behaviour of the person carrying them.
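The fusion step can be illustrated with a simple geometric check linking a (hypothetical) weapon detector's bounding boxes to a pose estimator's wrist keypoints; the coordinates and distance threshold below are made-up values purely for illustration.

```python
# Conceptual sketch: flag a weapon as "held" if its box centre lies near a wrist keypoint.
import math

def is_held(weapon_box, wrists, max_dist=50.0):
    """weapon_box: (x1, y1, x2, y2) in pixels; wrists: list of (x, y) wrist keypoints."""
    cx = (weapon_box[0] + weapon_box[2]) / 2
    cy = (weapon_box[1] + weapon_box[3]) / 2
    return any(math.hypot(cx - wx, cy - wy) <= max_dist for wx, wy in wrists)

# Example with made-up detections: one weapon box and one person's two wrists.
print(is_held((100, 120, 140, 160), [(118, 138), (300, 400)]))  # True
```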
This project aims to develop techniques that enable users to find relevant audio content by inputting textual queries. This process leverages machine learning models, particularly natural language processing and audio signal processing, to bridge the gap between text and audio. When a user submits a query, the system analyses the text to understand its intent and context.
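The retrieval step itself often reduces to nearest-neighbour search in a shared embedding space. The sketch below shows only that ranking step, assuming NumPy, with random vectors standing in for embeddings produced by trained text and audio encoders.

```python
# Sketch: rank audio clips by cosine similarity to a text-query embedding.
import numpy as np

def rank_clips(query_embedding, audio_embeddings):
    """Return clip indices sorted from most to least similar, plus the similarity scores."""
    q = query_embedding / np.linalg.norm(query_embedding)
    a = audio_embeddings / np.linalg.norm(audio_embeddings, axis=1, keepdims=True)
    sims = a @ q                          # cosine similarity per clip
    return np.argsort(-sims), sims

# Example with random stand-ins for real embeddings (256-d, 1000 indexed clips).
query = np.random.randn(256)
clips = np.random.randn(1000, 256)
order, scores = rank_clips(query, clips)
```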
This project involves the automated generation of textual descriptions for audio content, such as spoken language, sound events, or music. This process typically employs deep learning techniques, such as recurrent neural networks, transformer models, and so on, to analyse audio signals and generate coherent captions. By training on large datasets that include both audio recordings and corresponding textual descriptions, these models learn to recognize patterns and contextual meanings within the audio.
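To make the generation step concrete, here is a minimal greedy-decoding sketch, assuming PyTorch: a GRU decoder emits caption tokens conditioned on a pooled audio embedding. The vocabulary size, dimensions, token ids, and the pooled-feature input are placeholders; a trained system would use a stronger audio encoder and beam search.

```python
# Sketch: greedy caption generation conditioned on a pooled audio embedding.
import torch
import torch.nn as nn

class AudioCaptioner(nn.Module):
    def __init__(self, vocab_size=1000, audio_dim=64, hidden=128):
        super().__init__()
        self.audio_proj = nn.Linear(audio_dim, hidden)   # pooled audio features -> initial state
        self.embed = nn.Embedding(vocab_size, hidden)
        self.gru = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    @torch.no_grad()
    def generate(self, audio_feats, bos_id=1, eos_id=2, max_len=20):
        """Greedy decoding; audio_feats is a pooled (audio_dim,) feature vector."""
        h = torch.tanh(self.audio_proj(audio_feats)).view(1, 1, -1)
        token = torch.tensor([[bos_id]])
        caption = []
        for _ in range(max_len):
            out, h = self.gru(self.embed(token), h)
            token = self.out(out[:, -1]).argmax(dim=-1, keepdim=True)
            if token.item() == eos_id:
                break
            caption.append(token.item())
        return caption

caption_ids = AudioCaptioner().generate(torch.randn(64))  # token ids; map to words via a vocab
```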
Adaptively smoothing one-dimensional signals remains an important problem, with applications in time series analysis, additive modelling and forecasting. The trend filter provides a novel class of adaptive smoothers; however, it is usually implemented in a frequentist framework using tools like the lasso and cross-validation. Bayesian implementations tend to rely on posterior sampling and, as such, do not provide simple, sparse point estimates of the underlying curve.
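For reference, the usual frequentist trend-filtering estimate is an l1-penalised least-squares problem (a lasso on discrete differences), sketched below. Here y is the observed signal, beta the fitted curve, D^(k+1) the order-(k+1) discrete difference matrix, and lambda a smoothing parameter typically chosen by cross-validation; the notation is the standard one, not this project's specific formulation.

```latex
% Standard (frequentist) l1 trend-filtering objective:
\hat{\beta} = \operatorname*{arg\,min}_{\beta \in \mathbb{R}^{n}}
  \tfrac{1}{2}\,\lVert y - \beta \rVert_2^2
  + \lambda\,\lVert D^{(k+1)} \beta \rVert_1
```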
Learning appropriate prior distributions from replications of experiments is an important problem in hierarchical and empirical Bayes. In this problem, we exploit the fact that we have multiple repeats of similar experiments and pool these to learn an appropriate prior distribution for the unknown parameters across this set of problems. Standard solutions tend to mix Bayesian and non-Bayesian elements and are somewhat ad hoc in nature.
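One standard way to frame the pooling step is as a hierarchical model in which each experiment's parameter is drawn from a shared prior whose hyperparameters are estimated from the marginal likelihood of all replicates. The notation below is generic (m experiments, prior family G indexed by eta), not this project's specific formulation.

```latex
% Generic empirical-Bayes setup for m replicated experiments:
\theta_i \sim G_{\eta}, \qquad
y_i \mid \theta_i \sim p(y_i \mid \theta_i), \qquad i = 1, \dots, m,
\qquad
\hat{\eta} = \operatorname*{arg\,max}_{\eta}\;
  \prod_{i=1}^{m} \int p(y_i \mid \theta_i)\, \mathrm{d}G_{\eta}(\theta_i)
```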