Developing classifiers for offensive material

Primary supervisor

Campbell Wilson


This project seeks to advance the research into, and development of, machine learning techniques for triaging, classifying, and otherwise processing material of a distressing nature (such as child exploitation material). It will involve deep neural networks for image, video, audio, social network, and/or text classification.

The Faculty of Information Technology has a mission to advance social good through its research. Key to this mission is the AiLECS (Artificial Intelligence for Law Enforcement and Community Safety) research lab. The AiLECS lab is a joint initiative of Monash University and the Australian Federal Police, and researches the ethical application of AI theories and techniques to problems of interest to law enforcement agencies. The work of the lab is applied in nature: we seek to rapidly translate our research into real-world solutions to significant threats to community safety.

PhD students will not directly handle this material, as in many cases it is illegal to possess or transmit. A related project is developing infrastructure designed to shield researchers from harmful exposure to distressing and/or illegal material. This separation, however, presents challenges for research into technological solutions that deal with such data. This project will research improvements to classification models, as well as ways in which AI classification techniques can be developed in this restricted-data context.
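One common pattern in restricted-data settings of this kind is for researchers to work with precomputed feature representations (embeddings) exported by a secure pipeline, rather than the raw material itself. The sketch below is purely illustrative and not part of the project description: it trains a simple logistic-regression classifier on synthetic two-dimensional "embeddings" standing in for features a secure pipeline might supply. All names and numbers here are hypothetical.

```python
import math
import random

random.seed(0)

# Synthetic stand-ins for embeddings delivered by a secure pipeline
# (hypothetical data: two well-separated Gaussian clusters).
positive = [(random.gauss(2.0, 0.5), random.gauss(2.0, 0.5)) for _ in range(50)]
negative = [(random.gauss(-2.0, 0.5), random.gauss(-2.0, 0.5)) for _ in range(50)]
X = positive + negative
y = [1] * 50 + [0] * 50

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Train a logistic-regression classifier with stochastic gradient descent.
w = [0.0, 0.0]
b = 0.0
lr = 0.1
for _ in range(200):
    for (x1, x2), target in zip(X, y):
        p = sigmoid(w[0] * x1 + w[1] * x2 + b)
        grad = p - target  # derivative of log-loss w.r.t. the logit
        w[0] -= lr * grad * x1
        w[1] -= lr * grad * x2
        b -= lr * grad

# Training accuracy on the synthetic embeddings.
accuracy = sum(
    (sigmoid(w[0] * x1 + w[1] * x2 + b) > 0.5) == bool(target)
    for (x1, x2), target in zip(X, y)
) / len(y)
print(f"training accuracy: {accuracy:.2f}")
```

The point of the sketch is the workflow, not the model: the classifier never touches raw material, only the feature vectors, which is one way a researcher could iterate on models while an air-gapped pipeline handles the sensitive data.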


Required knowledge

This project will suit candidates with experience in machine learning, particularly deep learning techniques, and an interest in applied research.

Learn more about minimum entry requirements.