Research projects in Information Technology

Displaying 21 - 30 of 187 projects.


Understanding and detecting mis/disinformation

In the era of digital communication, mis/disinformation (also known as fake news) poses a significant challenge to society, affecting public opinion, decision-making processes, and even democratic systems. We still know little about the features of this communication, the manipulation techniques employed, and the types of people who are most susceptible to believing this information.

This project builds on Prof Whitty's work in this field to address one of the issues above.

Supervisor: Prof Monica Whitty

Human Factors in Cyber Security: Understanding Cyberscams

Online fraud, also referred to as cyberscams, is an increasingly serious cybersecurity problem that technical specialists are unable to detect effectively. Given the difficulty of detecting scams automatically, the onus often falls back on humans. Gamification and awareness campaigns are regularly researched and implemented in workplaces to prevent people from being tricked by scams, which can lead to identity theft or to individuals being conned out of money.

Supervisor: Prof Monica Whitty

Energy Informatics

The energy transition to net zero is in full swing! At Monash University's Faculty of Information Technology (FIT), we are in the unique position of supporting the transition across an immensely broad range of topics: from model-predictive building control and community battery integration to wind farm optimisation and multi-decade investment planning, we apply clever algorithms and data crunching to make decisions automatically and to help humans make informed decisions, too.

Effective analytics for real-life time series anomaly detection

Anomaly detection methods address the need for automatic detection of unusual events, with applications in cybersecurity. This project examines the efficacy of existing models when applied to real-life data. The goal is to generate new knowledge in the field of time series anomaly detection [1,2] by inventing methods that effectively learn to generalise patterns of normal behaviour from real-life data.
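To make the task concrete, here is a minimal baseline sketch, not one of the project's methods: a sliding-window z-score detector that learns the local statistics of "normal" behaviour from the recent history of a series and flags points that deviate strongly from them. The function name, window size, and threshold are illustrative choices.

```python
import numpy as np

def sliding_window_anomalies(series, window=20, threshold=3.0):
    """Flag points whose deviation from the trailing window's mean
    exceeds `threshold` standard deviations (a classic baseline)."""
    series = np.asarray(series, dtype=float)
    flags = np.zeros(len(series), dtype=bool)
    for i in range(window, len(series)):
        hist = series[i - window:i]          # recent "normal" history
        mu, sigma = hist.mean(), hist.std()
        if sigma == 0:                       # constant history: skip
            continue
        flags[i] = abs(series[i] - mu) / sigma > threshold
    return flags

# A noisy but stationary signal with one injected spike
rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 200)
x[150] += 10.0                               # the anomaly
flags = sliding_window_anomalies(x)
```

Real-life data is exactly where such simple baselines break down (drift, seasonality, regime changes), which is the gap this project targets.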

Supervisor: Mahsa Salehi

Faithful and Salient Multimodal Data-to-Text Generation

While large multimodal models (LMMs) have obtained strong performance on many multimodal tasks, they may still hallucinate while generating text. Their performance on detecting salient features from visual data is also unclear. In this project, we develop a framework to generate faithful and salient text from mixed-modal data, which includes images and structured data.

Supervisor: Teresa Wang

Self-aware neural networks

This project is similar in flavour to the Conscious AI project, but rather than approaching the problem from a philosophy/neuroscience/maths/theory angle, it aims to build self-aware neural networks whose construction is inspired by what we know about self-awareness circuits in the brain and by the field of self-aware computing. The project will advance state-of-the-art AI for NLP, vision, or both, and embed self-awareness modules within these systems.

Supervisor: Dr Levin Kuhlmann

Living AI

In collaboration with people from Monash materials engineering, neuroscience, and biochemistry, we are developing living AI networks in which neurons in a dish are grown to form biological neural networks that can be trained to perform machine learning and AI tasks, much like artificial neural networks. In this project you will develop machine learning theory that is consistent with the learning that occurs within these biological neural networks, so that they can be leveraged for AI applications.

Supervisor: Dr Levin Kuhlmann

Explainable AI (XAI) as Model Reconciliation

Creating efficient and beneficial user-agent interaction is a challenging problem. Challenges include improving performance and trust, and reducing both over- and under-reliance. We investigate the development of explainable AI systems that can explain AI agents' decisions to human users. Past work on plan explanations has primarily focused on explaining the correctness and validity of plans.

Supervisor: Dr Mor Vered

Explainable AI (XAI) for Disobedient Robotics

As humans, we have the ability, and even the necessity, to distinguish between orders that are ethical, safe, and necessary and orders that may be harmful or unethical. Theoretically, this ability should also exist in robots. To quote Asimov's Second Law, "A robot must obey orders given by human beings except where such orders would conflict with the First Law", the First Law being that a robot must not injure a human being or allow any human to be injured.

Supervisor: Dr Mor Vered

Privacy-preserving machine unlearning

Design efficient privacy-preserving methods for different machine learning tasks, including training, inference, and unlearning.

Supervisor: Dr Shujie Cui