This PhD project is funded by a successful ARC Discovery Project grant: "Improving human reasoning with causal Bayesian networks: a user-centric, multimodal, interactive approach" and the successful applicant will work as part of a larger research team.
Reasoning and decision making under uncertainty are essential challenges in medicine, the law, and many other key domains. The best AI systems for helping humans meet these challenges are causal Bayesian networks, which can accurately model complex probabilistic systems. However, because people are notoriously deficient in probabilistic reasoning, they find it hard to understand and trust these models and their reasoning. This Discovery Project will explore new integrated visual and verbal ways of explaining these models and their reasoning, to reduce known human reasoning difficulties and fallacies. It will also investigate how to reduce human cognitive load by prioritising the most useful information for the user. Expected outcomes include novel AI enhancements that empower users to drive the reasoning process and strengthen trust in the system's reasoning. The Discovery Project will apply and evaluate these methods in two areas, medical and legal reasoning, where better and more transparent reasoning and decision making improve outcomes for end users, providing significant potential health, social and economic benefits.
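One of the reasoning fallacies mentioned above, base-rate neglect, can be illustrated with a minimal two-node causal network (Disease → Test) computed by hand. This is a sketch only; the probabilities are hypothetical, not real clinical data, and no Bayesian network library is assumed:

```python
# Hypothetical two-node network: Disease -> Test.
# All numbers are illustrative assumptions, not real clinical data.

prior = 0.01        # P(Disease): 1% of patients have the disease
sensitivity = 0.90  # P(Test positive | Disease)
false_pos = 0.05    # P(Test positive | no Disease)

# Bayes' rule: P(Disease | Test+) = P(Test+ | Disease) * P(Disease) / P(Test+)
p_test_pos = sensitivity * prior + false_pos * (1 - prior)
posterior = sensitivity * prior / p_test_pos

print(f"P(Disease | positive test) = {posterior:.3f}")
```

Despite the test's 90% sensitivity, the posterior probability of disease is only about 15%, because the low base rate dominates; people typically overestimate it badly. Explaining inferences like this clearly, visually and verbally, is the kind of problem the project addresses.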
The PhD project will involve designing and implementing new algorithms that produce visual aids to help people reason with causal Bayesian networks, as well as planning and conducting exploratory usability studies with human participants to assess the effectiveness of candidate visual or verbal aids.
Required knowledge
Strong programming skills are required. Some knowledge of Bayesian networks and/or human-computer interaction would be beneficial.