Towards adversarially robust deep models

Primary supervisor

Munawar Hayat

Deep Neural Networks have shown remarkable performance across a wide range of computer vision tasks. However, they are vulnerable to carefully crafted, human-imperceptible perturbations which, once added to an input image, can easily flip a model's decision. Such adversarial perturbations therefore pose a serious concern for the deployment of deep learning models in real-life scenarios. This project aims to develop reliable and trustworthy deep networks by exploring, for example, robust training strategies, loss formulations, and architectural modifications.
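To make the idea concrete, a classic way such perturbations are crafted is the Fast Gradient Sign Method (FGSM): move each input feature a small step eps in the direction of the sign of the loss gradient with respect to the input. The following is a minimal illustrative sketch, not the method this project proposes; the toy logistic-regression weights, inputs, and the helper names (`fgsm_attack`, `dot`, `sign`) are all assumptions chosen for the example.

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def sign(t):
    return 1.0 if t > 0 else (-1.0 if t < 0 else 0.0)

def fgsm_attack(x, y, w, eps):
    """One-step FGSM against a logistic-regression model (toy sketch).

    x: input features, y: label in {-1, +1}, w: model weights, eps: budget.
    The logistic loss is L = log(1 + exp(-y * w.x)), so the gradient of L
    with respect to x is -y * sigmoid(-y * w.x) * w; FGSM adds
    eps * sign(gradient) to every coordinate of x.
    """
    margin = y * dot(w, x)
    sigma = 1.0 / (1.0 + math.exp(margin))          # sigmoid(-margin) > 0
    grad = [-y * sigma * wi for wi in w]            # dL/dx, coordinate-wise
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]

# Hypothetical trained weights and a correctly classified input (w.x = 0.5 > 0).
w = [1.0, -2.0, 3.0]
x = [0.5, 0.6, 0.4]
y = 1
x_adv = fgsm_attack(x, y, w, eps=0.2)
print(sign(dot(w, x)), sign(dot(w, x_adv)))  # prediction flips: 1.0 -1.0
```

Even though each coordinate of the input moves by at most 0.2, the model's decision flips, which is exactly the fragility that robust training strategies and loss formulations try to suppress.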