Primary supervisor: Maria Teresa Llano
Explainable AI (XAI), a sub-field of AI, has highlighted the need for transparent AI models that can communicate important aspects of their processes and decision-making to their users. There is a significant knowledge gap concerning the analysis, use and application of XAI techniques in creativity. Creative AI systems are passive participants in much of the creative process, partly because they lack mechanisms to give an account of the reasoning behind their operation. This is analogous to human-produced work being evaluated and discussed without giving a voice to its creator. By better explaining their processes and means of production, creative AI systems can make a case for the value of their contributions and gain a voice in the creative process.
The aim of this project is to develop novel explainable methods and interaction methodologies for creative AI systems and creative practitioners. It will investigate new modes of interaction in which the machine takes a proactive role in ideation and generation by providing explanations in ongoing exchanges with users. The research forms part of an ARC-funded research project and will involve working in collaboration with other members of the interdisciplinary project team.
The successful candidate will contribute to the development of models of explanation, using machine learning approaches and placing dialogue and interrogation at the heart of the interaction.
Experience with Artificial Intelligence and Machine Learning models is required. Experience in applying AI and ML to some form of creative expression (e.g., visual art, design, music, poetry) is desirable but not required.