Explainable AI (XAI), a sub-field of AI, has highlighted the need for transparent AI models that can communicate important aspects of their processes and decision-making to their users. There is a significant knowledge gap concerning the analysis, use and application of XAI techniques in creative domains. Creative AI systems remain passive participants in much of the creative process, partly because they lack mechanisms to give an account of the reasoning behind their operation. This is analogous to human-produced work being evaluated and discussed without giving a voice to its creator.
This PhD scholarship is funded through a collaboration between the Faculty of Information Technology and the ARC Laureate project Global Encounters and First Nations Peoples: 1000 Years of Australian History, led by Professor Lynette Russell AM.
Recent advances in technology mean we can now reappraise the exploration of the past as a future-aligned endeavour. The definition of the ‘past’ here is broad: the reconstruction of a bygone world may derive from relatively recent written texts or photographic archives, from centuries-old remains uncovered in archaeological excavations, or even from far back in ‘deep time’, in the long-vanished ecologies evidenced in the fossil record.
Successful creative collaboration with an AI agent is both exciting and highly challenging. The experience of creating a novel artefact together with an AI agent is hindered by two limitations: the inability to understand the agent’s reasoning, and the lack of means to intervene and communicate during the creative process to unleash the full potential of an idea.
Contemporary filmmakers and visual artists alike are embracing the potential of immersive digital technology – such as Augmented and Virtual Reality – to tell stories in powerful, new and affective ways. By breaking the dictatorship of the frame that has defined the representational form of the moving image for the past 150 years, VR introduces a new paradigm for cinematic expression and viewing experience. This challenge marks a transformational moment in the evolution of the craft of “immersive storytelling”.
Museums are – and have always been – mixed reality spaces par excellence. Today, digital technologies extend the ways in which the wealth of material culture they contain can be interpreted and exhibited, presenting new and (previously) unimaginable ways of bringing their stories to life.
Technologies emerge from a society’s cultural imagination, sparking new ways to imagine the future. Today, Artificial Intelligence (AI) – as the technical capability of a system ‘to correctly interpret external data, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation’ – is embedded in virtually every digital system that we draw upon to interact with each other and the world around us.
This practice-based research involves further development of the AirSticks – a hardware/software package that allows the triggering and manipulation of sound and visuals in a 3D playing space – as a gestural instrument for live electronic music performance, music education, and general health and wellbeing, in collaboration with our interdisciplinary team at SensiLab. This can be done through new performances, new software or new hardware. How can we reinvent the connection between our bodies, our ears and our creativity, and what new applications for the AirSticks can be discovered?
In recent years, AI techniques such as generative adversarial networks (GANs) and associated deep learning methods have become popular tools in the production and creation of works of art. In 2018, AI art made headlines around the world when a “work of art created by an algorithm” was sold at auction by Christie’s for $432,500 – nearly 45 times its pre-auction estimate.