
Practical Attacks against Deep Learning Apps

Primary supervisor

Xingliang Yuan

Research area

Cybersecurity

With the growing computing power of mobile devices, deep learning (DL) models are deployed in mobile apps to provide more private, accurate, and responsive services for end-users. To facilitate the deployment of DL in apps, major companies have extended their DL frameworks to mobile platforms, e.g., TensorFlow Lite, PyTorch Mobile, and Caffe2 Mobile. In particular, in November 2021 Google enabled on-device training in TensorFlow Lite, in addition to inference (https://blog.tensorflow.org/2021/11/on-device-training-in-tensorflow-lite.html). This advancement further allows model fine-tuning and federated learning on mobile devices to achieve personalisation and overcome the data acquisition barrier for mobile apps.
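To illustrate how such an on-device model is packaged and invoked, the sketch below loads a bundled .tflite file with the TensorFlow Lite Interpreter; the Python API shown here mirrors the Java/Swift Interpreter APIs used inside apps. The model path "model.tflite" and the dummy input are placeholders for illustration, not artefacts of this project.

    # Minimal sketch: running a bundled .tflite model with the TensorFlow Lite
    # Interpreter. "model.tflite" and the dummy input are placeholders.
    import numpy as np
    import tensorflow as tf

    interpreter = tf.lite.Interpreter(model_path="model.tflite")
    interpreter.allocate_tensors()

    input_details = interpreter.get_input_details()
    output_details = interpreter.get_output_details()

    # Feed a dummy input matching the model's declared input tensor shape/type.
    dummy = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
    interpreter.set_tensor(input_details[0]["index"], dummy)
    interpreter.invoke()

    prediction = interpreter.get_tensor(output_details[0]["index"])
    print("output shape:", prediction.shape)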

Problems: Despite these promising benefits, DL models deployed in mobile apps also expose new attack surfaces and raise critical security concerns. First, models are the intellectual property of companies and can represent enormous investments of human labour and capital. Recent work [1] shows that current DL models can be extracted simply by standard static and dynamic analysis of mobile apps. Second, models normally embed private information about their training data, as demonstrated by well-known membership inference attacks [2], so exposing DL models would also harm the privacy of data owners. Third, DL apps such as object detection in traffic apps and image classification and behaviour prediction in clinical trial apps are critical to decision making. Adversarial attacks that compromise the integrity of such DL apps would threaten human safety and could even lead to loss of life.
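In the spirit of the static analysis reported in [1], a bundled on-device model can often be located simply by treating the APK as a ZIP archive and scanning for common model file extensions. The sketch below is a simplified illustration of that idea, not the tooling of [1]; the extension list and the "app.apk" path are assumptions for the example.

    # Simplified sketch: locating bundled on-device models by static analysis.
    # An APK is a ZIP archive, so candidate model files can be found by name.
    import zipfile

    # Extensions commonly used by mobile DL frameworks (assumed for this example).
    MODEL_EXTENSIONS = (".tflite", ".lite", ".pb", ".onnx", ".ptl", ".caffemodel")

    def find_models(apk_path):
        """Return archive entries inside the APK that look like DL model files."""
        with zipfile.ZipFile(apk_path) as apk:
            return [name for name in apk.namelist()
                    if name.lower().endswith(MODEL_EXTENSIONS)]

    if __name__ == "__main__":
        for entry in find_models("app.apk"):
            print("candidate model file:", entry)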

To comprehensively understand the security risks and issues of DL apps, this project will investigate practical attacks against real-world DL apps.

Research Task I: Analyse real-world attacker capabilities in the context of DL model deployment in mobile apps.

Research Task II: Explore and devise model extraction/stealing attacks against DL models during the runtime of mobile apps. 

Research Task III: Explore and devise membership inference attacks against mobile DL models, and model inversion attacks that could recover private user input to the DL apps.

Research Task IV: Design and implement adversarial attacks that can lead to misclassification or reduce the performance of mobile DL models (a baseline white-box sketch follows the task list).
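As a reference point for Task IV, the sketch below applies the standard fast gradient sign method (FGSM) to a Keras classifier: a minimal white-box baseline, not the attack this project will develop. The model, labels, input range, and epsilon are placeholders, and attacking a deployed mobile model would first require the extraction and analysis steps of Tasks I and II.

    # Baseline white-box adversarial example (fast gradient sign method, FGSM)
    # against a Keras classifier. Model, labels, and epsilon are placeholders;
    # a mobile DL model would first need to be extracted and loaded (Tasks I-II).
    import tensorflow as tf

    def fgsm_perturb(model, x, y_true, epsilon=0.01):
        """Shift x by epsilon in the direction that increases the model's loss."""
        x = tf.convert_to_tensor(x)
        loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()
        with tf.GradientTape() as tape:
            tape.watch(x)
            loss = loss_fn(y_true, model(x))
        gradient = tape.gradient(loss, x)
        x_adv = x + epsilon * tf.sign(gradient)
        return tf.clip_by_value(x_adv, 0.0, 1.0)  # assumes inputs normalised to [0, 1]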

Impact: The outcome of this project will provide the broader academic and industrial community with a comprehensive understanding of the security risks and threats involved in developing and deploying DL mobile apps. It will also serve as a guideline for designing secure DL mobile apps that can detect and mitigate these practical attacks, preventing financial loss for mobile app companies and protecting the data of the data owners and end-users involved in DL training and inference.

 

[1] Sun et al., "Mind Your Weight(s): A Large-scale Study on Insufficient Machine Learning Model Protection in Mobile Apps", USENIX Security, 2021.

[2] Shokri et al., "Membership Inference Attacks Against Machine Learning Models", IEEE S&P, 2017.

Required knowledge

Knowledge of deep learning and adversarial attacks on deep learning;

Knowledge of Android app development;

Skills in Python, Java, and machine learning frameworks.

Project funding

Other
