Primary supervisor

Wray Buntine

It has been said that ChatGPT is ‘not trying to be right, it’s just trying to be plausible.’  While the LLM community talks about hallucination, it is a complex phenomenon and a product of how these models are built.  The training and theory of LLMs have no notion of truth: the models generate text with no critical evaluation of content or sources.  Many text sources are opinions, some carry subtle or not-so-subtle propaganda, and some reflect misinformed views; LLMs reproduce this mess.  Moreover, the training and theory of LLMs have no notion of epistemic uncertainty, so the models have no sense of how unsure they should be about what they generate.  AI researchers promoting world models (for instance, Yann LeCun) address this problem by bringing actions and multi-modal data into their systems.  An alternative approach is to adapt Bayesian meta-reasoning [1] and work with the LLM.  We should attempt to understand sources and their viewpoint biases [2].  Content is provided to us as opinion; even Wikipedia sources are known to have bias (so-called "establishment bias").  Can we assess the biases of a diverse range of sources?
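
To make the idea concrete, here is a minimal sketch in Python of the kind of Bayesian meta-reasoning envisaged: independent reports from sources of differing assumed reliability are combined into a posterior belief that a claim is true, instead of taking any single source's text at face value.  The function name posterior_truth and all numbers are illustrative assumptions, not part of the project.

def posterior_truth(prior: float, reports: list[tuple[bool, float]]) -> float:
    """Posterior P(claim is true) given independent source reports.

    Each report is (asserts_claim, reliability), where reliability is the
    assumed probability that the source reports correctly.  Independence
    between sources is a simplifying assumption.
    """
    odds = prior / (1.0 - prior)
    for asserts_claim, reliability in reports:
        if asserts_claim:
            # A source asserting the claim multiplies the odds by r / (1 - r).
            odds *= reliability / (1.0 - reliability)
        else:
            # A source denying the claim shifts belief the other way.
            odds *= (1.0 - reliability) / reliability
    return odds / (1.0 + odds)

# Two fairly reliable sources assert a claim, one weaker source denies it.
print(posterior_truth(0.5, [(True, 0.8), (True, 0.7), (False, 0.55)]))  # ~0.88

Estimating each source's reliability, rather than assuming it, is exactly where bias analysis of the kind surveyed in [2] would enter.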

Aim/outline

Develop an agentic framework for Bayesian meta-reasoning about truthfulness that works with an LLM.  Construct a suitable test domain and demonstrate the approach on it.
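
One possible shape for such a framework, sketched under heavy assumptions: the agent gathers source texts bearing on a claim, uses an LLM to judge each source's stance, and folds those stances into the Bayesian posterior from the sketch above (posterior_truth).  Source, llm_stance, and assess_claim are hypothetical names; in particular, llm_stance is a stand-in for a real prompted LLM call.

from dataclasses import dataclass

@dataclass
class Source:
    name: str
    text: str
    reliability: float  # could come from media-bias analysis, per [2]

def llm_stance(claim: str, source: Source) -> bool:
    # Placeholder: a real agent would prompt an LLM to judge whether the
    # source's text asserts or denies the claim.
    return claim.lower() in source.text.lower()

def assess_claim(claim: str, sources: list[Source], prior: float = 0.5) -> float:
    # Reuses posterior_truth from the earlier sketch.
    reports = [(llm_stance(claim, s), s.reliability) for s in sources]
    return posterior_truth(prior, reports)

claim = "the vaccine was approved in 2023"
sources = [Source("site_a", "The vaccine was approved in 2023.", 0.8),
           Source("site_b", "No approval decision has been made.", 0.6)]
print(assess_claim(claim, sources))  # ~0.73 under these toy settings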

URLs/references

[1]  "Position: Agentic AI Systems should be making Bayes-Consistent Decisions" by T. Papamarkou, Pierre Alquieret, Matthias Bauer, Wray Buntine et al., to appear in ICML 2026, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=6143772

[2]  "A systematic review on media bias detection: What is media bias, how it is expressed, and how to detect it", Francisco-Javier Rodrigo-Ginés, Jorge Carrillo-de-Albornoz, Laura Plaza, Expert Systems With Applications, 2024

Required knowledge

Experience in Python, AI programming, and working with large language models.