Towards secure and trustworthy deep learning systems

Primary supervisor

Xiaoning Du

Research area

Software Engineering

Over the past decades, we have witnessed the emergence and rapid development of deep learning (DL). DL has been successfully deployed in many real-life applications, including face recognition, automatic speech recognition, and autonomous driving. However, due to their intrinsic vulnerability and the lack of rigorous verification, DL systems suffer from quality and security issues, such as voice-command manipulation of Alexa/Siri and autonomous-car accidents. Developing secure and trustworthy DL systems is challenging, especially under a limited time budget. For traditional software, the software development lifecycle greatly improves quality and productivity; this calls for a similarly systematic development lifecycle for DL systems. Because the programming paradigm and logic representation of DL differ fundamentally from those of traditional software, the existing development lifecycle cannot fully suit the characteristics of DL systems. This project, from an engineering perspective, aims to explore a better development process for DL, covering requirement analysis, data collection and labeling, data cleaning, network design, training, testing, and operation.

Required knowledge

Deep learning, natural language processing, software testing
Learn more about minimum entry requirements.