Securing Generative AI for Digital Trust

Primary supervisor

Xingliang Yuan


Co-supervisors

  • Shuo Wang
  • Jason Xue

Project description:

  1. Generative AI models are trained as large networks over vast quantities of unstructured data, and may then be specialised as needed via fine-tuning or prompt engineering. In this project we will explore all aspects of this process, with a focus on increasing trust in model outputs by reducing or eliminating the incidence of bugs and errors. New technologies and methods will fuse the ever-increasing capabilities of generative AI with techniques such as lightweight sanity and security scanners, model patching and re-training, and output anomaly flagging. Instead of relying on a single AI, heterogeneous AI ensembles could co-produce outputs, with mechanisms to catch and merge dissimilar outputs and so enhance code quality. Multi-modal models, which can take additional context into account for better decision making, can also be explored. Overall, new techniques will be developed so that generative AI behaviour can be constrained to within known-good parameters. Trust in generative AI will then carry over to trust in the products and systems it produces.
  2. The students will be expected to: (a) produce application-representative data and scripts for AI training, tuning, evaluation, and patching, (b) perform a literature review to identify suitable target model architectures and methods, (c) develop novel mechanisms for constraining and controlling generative AI behaviours, and (d) disseminate their research through authorship and presentation of their work at respected peer-reviewed venues.
  3. Expected outcomes include collaboration with industry partners, effective work within the broader team (the project is expected to have at least 2 students plus collaborators), and artifacts such as peer-reviewed publications in Data61’s target conference venues, open-source AI datasets, training scripts, and trained models.
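As an illustration only (not part of the project specification), one way an ensemble could "catch and merge dissimilar outputs" is a similarity vote: each candidate output is scored by its average similarity to the other candidates, the most agreed-upon candidate is kept, and weak consensus is flagged for review. The function name and threshold below are hypothetical; a minimal sketch using standard-library string similarity:

```python
from difflib import SequenceMatcher

def merge_ensemble_outputs(outputs, agreement_threshold=0.8):
    """Return the candidate most similar to the rest, plus a flag
    indicating whether ensemble agreement fell below the threshold."""
    if not outputs:
        raise ValueError("no outputs to merge")

    def avg_similarity(i):
        # Average pairwise similarity of candidate i to every other candidate.
        others = [o for j, o in enumerate(outputs) if j != i]
        if not others:
            return 1.0
        return sum(
            SequenceMatcher(None, outputs[i], o).ratio() for o in others
        ) / len(others)

    scored = [(avg_similarity(i), o) for i, o in enumerate(outputs)]
    best_score, best = max(scored, key=lambda t: t[0])
    return best, best_score < agreement_threshold

# Usage: three hypothetical model outputs; two agree, one diverges.
merged, flagged = merge_ensemble_outputs([
    "def add(a, b): return a + b",
    "def add(a, b): return a + b",
    "def add(a, b): return a - b",
])
```

Real heterogeneous ensembles would likely use semantic or test-based equivalence rather than raw string similarity, but the flag-and-merge structure is the same.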


Required knowledge

  • Generative AI
  • NLP skills
  • System security
  • Software testing

To be eligible you must:

  • Hold a first-class honours (H1) Bachelor’s degree or equivalent in the relevant research area (completed or near completion);
  • Have applied for admission to a PhD program at an Australian university, or be enrolled in the first 12 months of a PhD program at an Australian university;
  • Have a university supervisor who confirms they are willing and able to supervise you;
  • Not already hold a PhD degree.
