Research projects in Information Technology

Displaying 31 - 40 of 113 projects.


Explainable AI (XAI) for Disobedient Robotics

As humans, we have the ability, and even the necessity, to distinguish between orders that are ethical, safe and necessary and orders that may be harmful or unethical. In theory, robots should have this ability too. To quote Asimov's Second Law, "A robot must obey orders given by human beings except where such orders would conflict with the First Law", the First Law being that a robot must not injure a human being or allow one to be injured.
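As a toy illustration only (not the project's method), the Second Law's precedence rule can be sketched as an order filter that refuses, with an explanation, any order flagged as conflicting with the First Law. The Order class and its harm flag below are hypothetical stand-ins for whatever harm model a robot would actually use:

    from dataclasses import dataclass

    @dataclass
    class Order:
        description: str
        predicted_harm_to_human: bool  # hypothetical output of an upstream harm model

    def should_obey(order: Order) -> tuple[bool, str]:
        """Second Law: obey, except where obeying would conflict with the First Law."""
        if order.predicted_harm_to_human:
            return False, f"Refused '{order.description}': conflicts with the First Law."
        return True, f"Obeying '{order.description}'."

    obey, why = should_obey(Order("hand over the medication", predicted_harm_to_human=False))
    print(obey, why)  # True Obeying 'hand over the medication.'

Returning the explanation alongside the decision is what connects such a filter to explainable AI: the robot can justify its disobedience, not merely refuse.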

Supervisor: Dr Mor Vered

Conscious AI

What makes a machine conscious? This PhD would sit at the intersection of philosophy, AI and neuroscience. You would study the latest neuroscience-based theories of how consciousness emerges in the brain, as well as the latest AI methods, and examine what consciousness, if any, current AI methods might have, and how we might determine whether an AI is conscious based on what we know about consciousness in the brain. This would not be a typical machine learning PhD, as many aspects can only be examined at a philosophical and theoretical level.

Supervisor: Dr Levin Kuhlmann

Generative AI for Recommender Systems

A recommender system is a subclass of information filtering/retrieval system that suggests the items most pertinent to a particular user without an explicit query. Recommender systems have become particularly useful in this era of information overload and play an essential role in many industries, including medical/health, e-commerce, retail, media, banking, telecommunications and utilities (e.g., Amazon, Netflix, Spotify and LinkedIn).
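As a toy illustration of the underlying idea (not a method proposed by the project), a minimal item-based collaborative-filtering recommender can be sketched in a few lines of NumPy; the ratings matrix below is invented:

    # Rows are users, columns are items; 0 means unrated.
    import numpy as np

    ratings = np.array([
        [5, 4, 0, 1],
        [4, 5, 1, 0],
        [0, 1, 5, 4],
        [1, 0, 4, 5],
    ], dtype=float)

    # Cosine similarity between item columns.
    norms = np.linalg.norm(ratings, axis=0)
    item_sim = (ratings.T @ ratings) / np.outer(norms, norms)

    def recommend(user: int, top_k: int = 1) -> list[int]:
        """Score unrated items by similarity-weighted sums of the user's ratings."""
        scores = item_sim @ ratings[user]
        scores[ratings[user] > 0] = -np.inf  # never re-recommend rated items
        return list(np.argsort(scores)[::-1][:top_k])

    print(recommend(user=0))  # [2]: the one item user 0 has not yet rated

Generative approaches replace such fixed similarity scores with learned models of user preference, but the task, ranking unseen items without an explicit query, is the same.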

Supervisor: Teresa Wang

Causal Reasoning for Mental Health Support

This PhD project aims to combine causal analysis with deep learning for mental health support. As deep learning is vulnerable to spurious correlations, novel causal discovery and inference methods will be developed to identify, and reason over, the causal relationships among the associations found in data and in the literature. Because the number of causal relationships is usually much smaller than the number of associations, the proposed techniques will achieve explainability by making causes and effects interpretable to psychologists.
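To illustrate the kind of reasoning involved (a sketch on invented data, not the project's method), the snippet below shows how a partial-correlation test, one building block of constraint-based causal discovery such as the PC algorithm, exposes an association between two variables as spurious once their common cause is conditioned on:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 10_000
    z = rng.normal(size=n)             # common cause (e.g. an underlying condition)
    x = 2.0 * z + rng.normal(size=n)   # one symptom driven by z
    y = -1.5 * z + rng.normal(size=n)  # another symptom driven by z

    def partial_corr(a, b, c):
        """Correlation of a and b after regressing c out of both."""
        ra = a - np.polyval(np.polyfit(c, a, 1), c)
        rb = b - np.polyval(np.polyfit(c, b, 1), c)
        return np.corrcoef(ra, rb)[0, 1]

    print(np.corrcoef(x, y)[0, 1])  # strongly negative: spurious association
    print(partial_corr(x, y, z))    # near zero: no direct causal link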

Supervisor: Dr Lizhen Qu

Bayesian-network models for human-machine collaboration to protect pollinator-plant interactions in agriculture and natural ecosystems

Ecological systems are dynamic and complex. Many ecosystems support human food production and are in turn impacted by human food production activity. This creates feedback loops between ecosystems, human society and our agriculture that are typical of complex systems. Modelling of ecosystems and social systems, including simulation, can therefore play a key role in understanding the interactions between food production and ecosystems.
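As a toy illustration of the modelling approach named in the title (the structure and probabilities are invented, not from the project), a minimal discrete Bayesian network linking pesticide use, pollinator activity and fruit set can be queried by summing over the intermediate variable:

    # P(pollinators active | pesticide) and P(fruit set | pollinators active),
    # with made-up numbers; a real model would be elicited from experts and data.
    P_pollinators_given_pesticide = {True: {True: 0.2, False: 0.8},
                                     False: {True: 0.7, False: 0.3}}
    P_fruitset_given_pollinators = {True: {True: 0.9, False: 0.1},
                                    False: {True: 0.3, False: 0.7}}

    def p_fruitset(pesticide: bool) -> float:
        """P(fruit set | pesticide) by summing over pollinator activity."""
        return sum(
            P_pollinators_given_pesticide[pesticide][active]
            * P_fruitset_given_pollinators[active][True]
            for active in (True, False)
        )

    print(p_fruitset(True))   # 0.2*0.9 + 0.8*0.3 = 0.42
    print(p_fruitset(False))  # 0.7*0.9 + 0.3*0.3 = 0.72

The appeal of such networks for human-machine collaboration is that each conditional table is a unit an ecologist can inspect, challenge and update.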

Formal Explainability in Artificial Intelligence

Artificial Intelligence (AI) models are widely used in decision-making procedures in many real-world applications across important areas such as finance, healthcare, education, and safety-critical systems. The rapid growth, practical achievements and overall success of modern approaches to AI suggest that machine learning will prevail as a generic computing paradigm and will find an ever-growing range of practical applications, many of which bear on aspects of human life including privacy and safety.

Supervisor: Alexey Ignatiev

AI models for skin condition management and diagnosis

Problem:

Almost 1 million people in Australia suffer from a long-term skin condition. Without early intervention, skin conditions become chronic, with significant health, psychosocial and economic impacts, including anxiety, depression and social isolation. Access to safe, timely, high-quality specialist care leads to better outcomes for individuals. With roughly 2 dermatologists per 100,000 Australians, it is not surprising that access to dermatological expertise is difficult.

Solution:

Supervisor: Dr Yasmeen George

Large language models for detecting misinformation and disinformation

The proliferation of misinformation and disinformation on online platforms has become a critical societal issue. The rapid spread of false information poses significant threats to public discourse, decision-making processes, and even democratic institutions. Large language models (LLMs) have shown tremendous potential in natural language understanding and generation. This research aims to harness the power of LLMs to develop advanced computational methods for the detection and mitigation of misinformation and disinformation. More specific objectives are:

AI-augmented coaching, reporting and its assessment

This project will develop cutting-edge generative AI and natural language processing methods to advance AI-augmented, human-in-the-loop coaching and the associated training planning and outcome reporting.

Supervisor: Dr Levin Kuhlmann

Brain network mechanisms underlying anaesthetic-induced loss of consciousness

This project focuses on the brain network mechanisms underlying anaesthetic-induced loss of consciousness, through the application of simultaneous EEG/MEG together with neural inference and network analysis methods. In this work we study the effects of the putative NMDA antagonists xenon, a potent anaesthetic, and nitrous oxide, a weak anaesthetic, on anaesthetic-induced changes in brain mechanisms and networks. The goal is to find common brain mechanisms and networks that are affected by different kinds of anaesthetics, to see whether this points to a 'backbone' for the generation of consciousness.
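As a minimal sketch of one standard network-analysis step (run on synthetic data, not the project's pipeline), the snippet below estimates functional connectivity as the channel-by-channel correlation matrix of a multichannel recording and thresholds it into a binary network:

    import numpy as np

    rng = np.random.default_rng(1)
    n_channels, n_samples = 8, 2_000
    shared = rng.normal(size=n_samples)  # common driving signal across channels
    eeg = 0.5 * shared + rng.normal(size=(n_channels, n_samples))  # synthetic "EEG"

    connectivity = np.corrcoef(eeg)      # (8, 8) correlation matrix
    # Keep strong off-diagonal correlations as edges of a binary network.
    network = (np.abs(connectivity) > 0.15) & ~np.eye(n_channels, dtype=bool)
    print(network.sum() // 2, "edges in the thresholded functional network")

Comparing such networks across anaesthetised and awake states is one way to look for the common mechanisms the project targets.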
Supervisor: Dr Levin Kuhlmann