Semantic Extraction of Building Information for Mixed Reality Data Visualisation

Primary supervisor

Barrett Ens

Research area

Embodied Visualisation

This ambitious project combines the latest advances in Computer Vision and Immersive Data Visualisation to support people who rely on digital models of physical infrastructure. The construction and engineering industries are increasingly moving toward digital technologies, including digital models and virtual twins. Mixed Reality allows building managers, engineers, and maintenance workers to visualise digital information overlaid directly on physical infrastructure, supporting better understanding and decision making. However, these digital models are currently disconnected from the physical world and can contain costly discrepancies between the intended design and the built infrastructure. This disconnect is a missed opportunity to make use of the rich data captured by a Mixed Reality device’s built-in sensors.

This project will use Mixed Reality headsets to extract information from the wearer’s surroundings and enhance digital infrastructure models in real time. Recent advances in Computer Vision Scene Understanding allow a device not only to collect geometric data about its environment, but also to extract rich information about the environment’s state, the activities of its occupants, and the contextual relationships between objects. This information can be used to augment existing models and can be visualised in real time.
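As a minimal illustration of the final overlay step, the C# sketch below (Unity, which is listed under required knowledge) takes labelled detections from an upstream scene-understanding pipeline and floats a text annotation over each one in world space. The DetectedSceneObject struct and the UpdateAnnotations entry point are hypothetical placeholders for whatever output the headset SDK or vision model actually provides; the sketch only shows how such output could surface as an in-situ visualisation.

    using System.Collections.Generic;
    using UnityEngine;

    // Hypothetical data structure: one detected scene object from an
    // upstream scene-understanding pipeline (semantic label + world-space
    // position). The pipeline itself is assumed to exist elsewhere.
    public struct DetectedSceneObject
    {
        public string Label;      // e.g. "valve", "duct", "electrical panel"
        public Vector3 Position;  // world-space position from the sensors
    }

    // Spawns a floating text annotation over each detected object so that
    // model metadata appears overlaid on the physical infrastructure.
    public class SceneAnnotationOverlay : MonoBehaviour
    {
        // One annotation per label; a real system would track object
        // instances rather than labels.
        private readonly Dictionary<string, GameObject> _annotations =
            new Dictionary<string, GameObject>();

        // Called by the (assumed) scene-understanding pipeline with the
        // latest detections, e.g. once per frame or per sensor update.
        public void UpdateAnnotations(IEnumerable<DetectedSceneObject> detections)
        {
            foreach (var obj in detections)
            {
                if (!_annotations.TryGetValue(obj.Label, out var annotation))
                {
                    // Create a simple world-space text label on first sight.
                    annotation = new GameObject($"Annotation_{obj.Label}");
                    var text = annotation.AddComponent<TextMesh>();
                    text.text = obj.Label;
                    text.characterSize = 0.02f;
                    text.anchor = TextAnchor.MiddleCenter;
                    _annotations[obj.Label] = annotation;
                }

                // Place the label slightly above the object and turn it
                // toward the wearer's viewpoint.
                annotation.transform.position = obj.Position + Vector3.up * 0.1f;
                if (Camera.main != null)
                {
                    annotation.transform.rotation = Quaternion.LookRotation(
                        annotation.transform.position - Camera.main.transform.position);
                }
            }
        }
    }

In a full system the detections would come from the headset’s spatial sensors and a scene-understanding model, and the plain text labels would be replaced by richer visualisations anchored to the recovered geometry.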

This PhD project is suited to an enthusiastic student who is keen to bridge the gap between these areas of research.

Required knowledge

Experience in one or more of the following:

  • Computer Vision
  • Scene Understanding
  • Augmented/Mixed/Virtual Reality
  • Data Visualisation
  • C# and Unity

Project funding

Project-based scholarship

Learn more about minimum entry requirements.