Deepfakes, a term derived from "deep learning" and "fake," involve techniques that merge face images of a target person into a video of a different source person. This process creates videos in which the target person appears to perform the actions or speech of the source person. In a broader context, deepfakes encompass other categories such as lip-sync and puppet-master. Lip-sync deepfakes alter a video so that the mouth movements synchronize with a provided audio track, while puppet-master deepfakes animate a target's facial expressions and head movements to follow those of a performing actor.
Research projects in Information Technology
Fingerprint detection from images and videos using machine learning
This project aims to develop robust algorithms capable of identifying and analyzing fingerprints extracted from both static images and video footage. Machine learning techniques, particularly computer vision and pattern recognition methods, will be used to automate the process of fingerprint detection. These methods will be trained to learn patterns from fingerprint features and to detect them using object detection approaches. A dataset of fingerprint images and videos, annotated with ground-truth information, will be collected.
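The pipeline described above (score local fingerprint features, then localise them with a detector) can be sketched with a hand-crafted stand-in for the learned model. The gradient-energy score, window size, and threshold below are illustrative assumptions, not the project's actual method: a minimal sliding-window detector that flags image regions dominated by high-frequency ridge-like texture.

```python
import numpy as np

def ridge_energy(patch):
    """Score a patch by its mean squared gradient magnitude.

    Fingerprint regions are dominated by high-frequency ridge
    patterns, so gradient energy is a crude but cheap proxy for
    a learned feature detector (an illustrative assumption here).
    """
    gy, gx = np.gradient(patch.astype(float))
    return float(np.mean(gx**2 + gy**2))

def detect_fingerprint_windows(image, win=16, stride=8, thresh=0.02):
    """Slide a window over a grayscale image and return the
    (row, col) corners of windows whose ridge energy exceeds
    `thresh`."""
    hits = []
    h, w = image.shape
    for r in range(0, h - win + 1, stride):
        for c in range(0, w - win + 1, stride):
            if ridge_energy(image[r:r + win, c:c + win]) > thresh:
                hits.append((r, c))
    return hits

# Synthetic test image: flat background with one sinusoidal
# "ridge" patch standing in for a fingerprint region.
img = np.zeros((64, 64))
img[16:48, 16:48] = 0.5 + 0.5 * np.sin(np.arange(32) * 2.0)[:, None]

hits = detect_fingerprint_windows(img)  # hits cluster on the ridge patch
```

In the project itself, the hand-crafted score would be replaced by a trained object detector, and the synthetic image by the annotated dataset.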
Understanding and detecting mis/disinformation
In the era of digital communication, mis/disinformation (also known as fake news) poses a significant challenge to society, affecting public opinion, decision-making processes, and even democratic systems. We still know little about the features of this communication, the manipulation techniques employed, and the types of people most susceptible to believing this information.
This project builds on Prof Whitty's work in this field to address one of the issues above.
Human Factors in Cyber Security: Understanding Cyberscams
Online fraud, also referred to as cyberscams, is an increasingly serious cybersecurity problem that technical cybersecurity specialists struggle to detect effectively. Given the difficulty of automatic scam detection, the onus often falls back on humans. Gamification and awareness campaigns are regularly researched and implemented in workplaces to prevent people from being tricked by scams, which may lead to identity theft or to individuals being conned out of money.
Energy Informatics
The energy transition to net zero is in full swing! We at Monash University's Faculty of Information Technology (FIT) are in the unique position of supporting the transition across an immensely broad range of topics: from model-predictive building control and community battery integration to wind farm optimisation and multi-decade investment planning, we apply clever algorithms and data crunching to make decisions automatically and to help humans make informed decisions, too.
Effective analytics for real-life time series anomaly detection
Anomaly detection methods address the need for automatic detection of unusual events, with applications in cybersecurity. This project aims to assess the efficacy of existing models when applied to real-life data. The goal is to generate new knowledge in the field of time series anomaly detection [1,2] through the invention of methods that effectively learn to generalise patterns of normal behaviour from real-life data.
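As a toy illustration of the "patterns of normal" framing, a rolling z-score baseline (a standard textbook technique, not the project's method) can be sketched in a few lines. All parameter values here are illustrative assumptions:

```python
import numpy as np

def rolling_zscore_anomalies(series, window=20, z_thresh=4.0):
    """Flag indices whose value deviates from the trailing-window
    mean by more than z_thresh standard deviations."""
    flagged = []
    for i in range(window, len(series)):
        hist = series[i - window:i]
        mu, sigma = hist.mean(), hist.std()
        if sigma > 0 and abs(series[i] - mu) / sigma > z_thresh:
            flagged.append(i)
    return flagged

# Synthetic "normal" signal with one injected anomaly at t = 120.
rng = np.random.default_rng(0)
series = rng.normal(0.0, 0.1, size=200)
series[120] += 2.0

anomalies = rolling_zscore_anomalies(series)  # index 120 should be among the flagged points
```

Such a baseline assumes stationary, well-behaved data; real-life series with drift, seasonality, and noisy labels break this assumption, which is precisely the gap the project targets.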
Faithful and Salient Multimodal Data-to-Text Generation
While large multimodal models (LMMs) have obtained strong performance on many multimodal tasks, they may still hallucinate while generating text. Their performance in detecting salient features from visual data is also unclear. In this project, we develop a framework to generate faithful and salient text from mixed-modal data that includes images and structured data.
Self-aware neural networks
This project is similar in flavour to the Conscious AI project but, rather than coming from a Philosophical/Neuroscience/Math/Theory angle, it aims to build self-aware neural networks constructed in a way that is inspired by what we know about self-awareness circuits in the brain and by the field of self-aware computing. The project will advance state-of-the-art AI for NLP, vision, or both, and embed self-awareness modules within these systems.
Living AI
In collaboration with colleagues from Monash materials engineering, neuroscience, and biochemistry, we are developing living AI networks in which neurons in a dish are grown to form biological neural networks that can be trained to do machine learning and AI tasks in a similar way to artificial neural networks. In this project you will develop machine learning theory that is consistent with the learning that occurs within these biological neural networks, so that these networks can be leveraged for AI applications.
Explainable AI (XAI) as Model Reconciliation
Creating efficient and beneficial user-agent interaction is a challenging problem. Challenges include improving performance and trust, and reducing over- and under-reliance. We investigate the development of Explainable AI systems that can provide explanations of AI agent decisions to human users. Past work on plan explanations primarily focused on explaining the correctness and validity of plans.