This project investigates how user and developer behaviour can be modelled as latent states underlying observable software-engineering and requirements-engineering artefacts, and how recovering these states can deliver actionable insight to practitioners — for example, early signals of requirement instability, indicators of stakeholder misalignment, or behavioural predictors of defect-prone modules.
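One common way to formalise "latent states underlying observable artefacts" is a hidden Markov model. The sketch below decodes the most likely latent-state sequence with the Viterbi algorithm; the states ("stable", "unstable"), observations, and probabilities are purely illustrative assumptions standing in for requirement-change signals, not project data.

```python
# Minimal Viterbi decoding for a two-state HMM.
# States, observations, and probabilities are illustrative assumptions.

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Return the most likely hidden-state sequence for the observations."""
    # V[t][s] = probability of the best path ending in state s at time t
    V = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    path = {s: [s] for s in states}
    for o in obs[1:]:
        V.append({})
        new_path = {}
        for s in states:
            prob, prev = max((V[-2][p] * trans_p[p][s] * emit_p[s][o], p)
                             for p in states)
            V[-1][s] = prob
            new_path[s] = path[prev] + [s]
        path = new_path
    best = max(states, key=lambda s: V[-1][s])
    return path[best]

# Hypothetical model: latent requirement stability inferred from edit activity.
states = ("stable", "unstable")
start_p = {"stable": 0.7, "unstable": 0.3}
trans_p = {"stable": {"stable": 0.8, "unstable": 0.2},
           "unstable": {"stable": 0.3, "unstable": 0.7}}
emit_p = {"stable": {"few_edits": 0.8, "many_edits": 0.2},
          "unstable": {"few_edits": 0.3, "many_edits": 0.7}}

print(viterbi(["few_edits", "many_edits", "many_edits"],
              states, start_p, trans_p, emit_p))
# → ['stable', 'unstable', 'unstable']
```

A run of many-edit observations pulls the decoded path into the "unstable" state, which is the kind of early instability signal the project description mentions.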
Honours and Masters project
Evaluating Large Language Model Accuracy for Clinical Document Parsing in Australian Healthcare Contexts
Background and Motivation
Evaluating Immersive Multiview Maps
The project aims to evaluate an immersive virtual reality system for visual exploration of global data. Visual exploration of maps often requires contextual understanding at multiple scales and locations. Multiview map layouts, which present a hierarchy of multiple views to reveal detail at various scales and locations, have been shown to support better performance than traditional single-view exploration on desktop displays. We created a virtual reality system, named Immersive Multiview Maps, that allows for visual exploration of global data across geographical and temporal scales.
Immersive Environmental Journalism
Online articles, including news and government reports, play a critical role in communicating environmental issues (e.g., Black Summer bushfires, water security, or renewable energy). Although charts and photographs on a 2D screen can communicate facts, they struggle to cultivate the deeper engagement and empathy that comes from direct presence in the affected environments.
Two converging technological shifts create a new opportunity.
EdgeVLMOpt (EVO): Optimizing Vision-Language Models for Resource-Constrained Edge Devices
In EdgeVLMOpt (EVO), we aim to develop efficient, scalable techniques for deploying advanced vision-language models (VLMs) on edge hardware. While VLMs have demonstrated strong capabilities in multimodal reasoning and understanding, their high computational and memory demands pose significant challenges for real-time, on-device applications.
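One standard technique for shrinking models to fit edge devices is post-training weight quantization. The sketch below shows symmetric per-tensor int8 quantization of a small weight list in plain Python; the weights are made-up values, and a real VLM deployment would use a runtime's own quantization toolchain rather than hand-rolled code.

```python
# Symmetric int8 post-training quantization of a weight tensor (illustrative
# sketch; the weight values are made up, not taken from any real VLM).

def quantize_int8(weights):
    """Map float weights to int8 with a single per-tensor scale."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 codes."""
    return [v * scale for v in q]

weights = [0.02, -0.54, 1.27, -1.27, 0.003]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
max_err = max(abs(a - b) for a, b in zip(weights, restored))
print(q)              # → [2, -54, 127, -127, 0]
print(max_err < 0.01) # → True
```

Each weight now occupies one byte instead of four (or eight), at the cost of a small, bounded reconstruction error; that trade-off is the core of the memory-footprint challenge the project targets.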
EdgeFusionAI (EFAI): Real-Time Multi-Sensor Multi-Modal Intelligence on Edge Devices
In EdgeFusionAI (EFAI), we aim to design and develop efficient techniques for fusing heterogeneous sensory data, including vision, LiDAR, radar, and other modalities, to enable robust, real-time decision-making on resource-constrained edge platforms. This project focuses on building intelligent systems that integrate diverse data sources while addressing the limited computation, memory, and energy available at the edge.
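A lightweight fusion strategy suited to constrained hardware is late fusion: each sensor produces per-class scores, and a weighted combination yields the final decision. The sketch below is an illustrative example; the modality names, class labels, and weights are assumptions, not the project's actual design.

```python
# Confidence-weighted late fusion of per-modality class scores (illustrative
# sketch; modalities, classes, and weights are hypothetical).

def fuse(scores_by_modality, weights):
    """Combine per-class score dicts from several sensors into one prediction."""
    classes = next(iter(scores_by_modality.values())).keys()
    total = sum(weights.values())
    fused = {c: sum(weights[m] * scores_by_modality[m][c]
                    for m in scores_by_modality) / total
             for c in classes}
    return max(fused, key=fused.get), fused

scores = {
    "camera": {"pedestrian": 0.6, "vehicle": 0.4},
    "lidar":  {"pedestrian": 0.2, "vehicle": 0.8},
    "radar":  {"pedestrian": 0.3, "vehicle": 0.7},
}
weights = {"camera": 0.5, "lidar": 0.3, "radar": 0.2}

label, fused = fuse(scores, weights)
print(label)  # → vehicle
```

Late fusion keeps each modality's model independent, so a cheap per-sensor network can run where the data arrives and only small score vectors cross the fusion boundary, which matters under edge bandwidth and memory limits.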
Databases and Medicine
Are you interested in applying your database knowledge to a medical domain? In this project, you will explore data curation, management, processing and analysis of medical data. You will explore various publicly available medical and patient datasets, such as the UK Biobank and the Cancer Atlas.
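A typical first curation task on such datasets is flagging missing readings before computing summary statistics. The sketch below uses Python's standard-library sqlite3 module with a hypothetical patient table; the schema and values are invented for illustration and are not drawn from the UK Biobank or Cancer Atlas.

```python
# Minimal medical-data curation sketch with the standard-library sqlite3 module.
# The schema and records are hypothetical, not from any real dataset.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE patients (
    id INTEGER PRIMARY KEY,
    age INTEGER,
    systolic_bp INTEGER  -- mmHg; NULL means the reading is missing
)""")
conn.executemany("INSERT INTO patients VALUES (?, ?, ?)",
                 [(1, 54, 130), (2, 61, None), (3, 47, 145), (4, 70, 150)])

# Curation step: count records with missing readings, then summarise the rest.
missing = conn.execute(
    "SELECT COUNT(*) FROM patients WHERE systolic_bp IS NULL").fetchone()[0]
avg_bp = conn.execute(
    "SELECT AVG(systolic_bp) FROM patients "
    "WHERE systolic_bp IS NOT NULL").fetchone()[0]
print(missing, round(avg_bp, 1))  # → 1 141.7
```

Real datasets add scale and governance constraints, but the pattern — isolate missing or malformed records, then analyse the clean subset — carries over directly.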
AI and Music
Are you interested in applying your AI knowledge to music, especially classical music? You will explore how AI (and Deep Learning) can be used in music, such as using Generative AI to create Theme & Variations in classical music, analysing classical music structures (e.g., sonata form and theme & variation form), and identifying instruments (e.g., violin vs. viola).
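Before any learning happens, a melody has to be encoded symbolically. The sketch below represents a theme as (MIDI pitch, duration) pairs and applies two classic rule-based transformations (transposition and rhythmic augmentation); this is an illustrative representation only — a generative model would learn such transformations rather than apply fixed rules.

```python
# A theme and two rule-based variations, with the melody encoded as
# (MIDI pitch, duration) pairs. Illustrative only: generative systems
# learn transformations instead of applying fixed rules like these.

theme = [(60, 1.0), (62, 1.0), (64, 1.0), (60, 1.0)]  # C D E C, quarter notes

def transpose(melody, semitones):
    """Variation 1: shift every pitch by a fixed interval."""
    return [(pitch + semitones, dur) for pitch, dur in melody]

def augment(melody, factor):
    """Variation 2: stretch every duration (rhythmic augmentation)."""
    return [(pitch, dur * factor) for pitch, dur in melody]

var1 = transpose(theme, 7)   # up a perfect fifth
var2 = augment(theme, 2.0)   # half speed
print(var1)  # → [(67, 1.0), (69, 1.0), (71, 1.0), (67, 1.0)]
print(var2)  # → [(60, 2.0), (62, 2.0), (64, 2.0), (60, 2.0)]
```

The same (pitch, duration) encoding also underpins structure analysis — recurring pitch-interval patterns are exactly what sonata- or variation-form detection looks for.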
AI in Medicine
AI is increasingly used in Medicine, and it offers major opportunities for medical research, including medical imaging diagnosis. AI and Deep Learning have been used to detect and classify lesions in various diseases, such as cancers.
Explainable AI (XAI) in Medical Imaging
Are you interested in applying your AI/DL knowledge to the medical domain? This project focuses on the use of AI in Medical Imaging (e.g., CT, MRI, X-ray, ultrasound). The work includes segmentation and classification; for example, segmenting a tumour in medical images and then classifying its grade. We will use various Deep Learning techniques, such as CNNs, and experiment with a variety of architectures, such as U-Net and ResNet.
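Whatever architecture produces the segmentation, its output is typically scored against expert ground truth with the Dice similarity coefficient. The sketch below computes Dice on toy 4x4 binary masks in plain Python; the masks are invented for illustration, not real imaging data.

```python
# Dice similarity coefficient for a binary segmentation mask vs. ground truth,
# a standard metric for tumour segmentation. The 4x4 masks are toy examples.

def dice(pred, truth):
    """Dice = 2|P ∩ T| / (|P| + |T|) over flattened binary masks."""
    p = [v for row in pred for v in row]
    t = [v for row in truth for v in row]
    inter = sum(a and b for a, b in zip(p, t))
    denom = sum(p) + sum(t)
    return 2 * inter / denom if denom else 1.0

truth = [[0, 1, 1, 0],
         [0, 1, 1, 0],
         [0, 0, 0, 0],
         [0, 0, 0, 0]]
pred  = [[0, 1, 1, 0],
         [0, 1, 0, 0],
         [0, 0, 0, 0],
         [0, 0, 0, 0]]

print(dice(pred, truth))  # → 0.8571428571428571  (i.e., 6/7)
```

Here the prediction misses one of the four tumour pixels, giving 2·3/(3+4) = 6/7; U-Net-style models are usually trained and compared on exactly this kind of overlap score.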