Computational modelling of sign languages aims to develop technologies for understanding (i.e., recognising) and producing (i.e., generating) a particular sign language. This field of research aims to provide the technological means for two-way mediated communication between hearing and deaf people. However, research on the computational modelling of sign languages is still in its infancy.
This PhD project aims to design, formalise, develop and evaluate technologies for the automatic recognition of Australian Sign Language. The resulting technologies will be an essential step towards a wide range of applications that assist in Australian Sign Language communication and education (e.g., applications that support the acquisition and training of sign language skills), thereby enhancing awareness, facilitating inclusion, participation and equality, and improving access for people who are deaf or hard of hearing.
This PhD project will develop the first Australian Sign Language recogniser capable of distinguishing 1000+ signs. The project’s goal is two-fold: (1) to develop a general sign language recogniser, and (2) to provide mechanisms for personalising the recogniser. To achieve this goal, the project will address three tasks: (1) develop supervised learning methods for recognising Australian Sign Language from structured data; (2) develop transfer learning methods that transfer knowledge from structured data for other sign languages (e.g., New Zealand Sign Language and British Sign Language), given the limited amount of structured data available for Australian Sign Language; and (3) develop unsupervised learning methods to localise known signs and discover unknown signs in unstructured data.
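To illustrate the flavour of task (1), a supervised sign recogniser maps a sequence of body keypoints (e.g., from a pose estimator) to a sign label. The sketch below is a minimal, purely illustrative example: the sign labels, feature representation, and tiny synthetic dataset are all assumptions for demonstration, not project specifics, and a real recogniser would use a far richer sequence model.

```python
import numpy as np

def pool_features(seq):
    """Mean-pool a (frames, features) keypoint array into one vector."""
    return seq.mean(axis=0)

class NearestCentroidSignClassifier:
    """Toy supervised sign recogniser: one centroid per sign label."""

    def fit(self, sequences, labels):
        feats = np.stack([pool_features(s) for s in sequences])
        self.labels_ = sorted(set(labels))
        # Average the pooled features of all training clips per label.
        self.centroids_ = np.stack(
            [feats[[l == c for l in labels]].mean(axis=0) for c in self.labels_]
        )
        return self

    def predict(self, seq):
        # Assign the label of the nearest centroid in feature space.
        d = np.linalg.norm(self.centroids_ - pool_features(seq), axis=1)
        return self.labels_[int(d.argmin())]

# Synthetic demo: two hypothetical "signs" with distinct keypoint statistics,
# 20 frames x 10 features per clip.
rng = np.random.default_rng(0)
hello = [rng.normal(0.0, 0.1, (20, 10)) for _ in range(5)]
thanks = [rng.normal(1.0, 0.1, (20, 10)) for _ in range(5)]
clf = NearestCentroidSignClassifier().fit(
    hello + thanks, ["hello"] * 5 + ["thanks"] * 5
)
print(clf.predict(rng.normal(1.0, 0.1, (20, 10))))  # prints "thanks"
```

In practice the pooled-feature centroid would be replaced by a sequence model trained on annotated clips, but the structure (features extracted per frame, a learned mapping to sign labels) is the same.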
Please apply before 15 December 2022.
In addition to excellent numerical and programming skills, previous experience in Computer Vision, Machine Learning, Deep Learning, Computer Graphics, and Australian Sign Language would be highly advantageous.