
Primary supervisor

Xingliang Yuan

Co-supervisors


Graph neural networks (GNNs) are widely used in many applications. Their training graph data and the models themselves are considered sensitive and face growing privacy threats.

Aim/outline

In this project, we aim to explore how GNN models may leak sensitive information during training and inference. We will investigate various attacks targeting the privacy of GNNs, e.g., (1) model extraction attacks, which aim to steal the GNN model, and (2) membership inference attacks, which determine whether a graph record or subgraph was used to train a GNN model. To mitigate this privacy leakage, we will then propose defence mechanisms to inform the future design of GNN systems. These mechanisms are expected to reduce attack performance while maintaining high model utility.
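As a concrete illustration of the second attack class, the sketch below shows a simple confidence-thresholding membership inference attack against a node-classification GNN. It assumes PyTorch and PyTorch Geometric, the Planetoid/Cora dataset, and an arbitrarily chosen confidence threshold; it is a minimal starting point only, not the project's intended methodology.

```python
# Minimal, illustrative sketch: confidence-thresholding membership inference
# against a node-classification GNN. The dataset (Planetoid/Cora), model size,
# and threshold value are assumptions made purely for illustration.
import torch
import torch.nn.functional as F
from torch_geometric.datasets import Planetoid
from torch_geometric.nn import GCNConv

dataset = Planetoid(root="data/Cora", name="Cora")
data = dataset[0]

class GCN(torch.nn.Module):
    def __init__(self, in_dim, hid_dim, out_dim):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hid_dim)
        self.conv2 = GCNConv(hid_dim, out_dim)

    def forward(self, x, edge_index):
        x = F.relu(self.conv1(x, edge_index))
        return self.conv2(x, edge_index)

model = GCN(dataset.num_features, 16, dataset.num_classes)
optimizer = torch.optim.Adam(model.parameters(), lr=0.01, weight_decay=5e-4)

# Train the target model on the designated training nodes only.
model.train()
for _ in range(200):
    optimizer.zero_grad()
    out = model(data.x, data.edge_index)
    loss = F.cross_entropy(out[data.train_mask], data.y[data.train_mask])
    loss.backward()
    optimizer.step()

# Membership inference: training nodes (members) tend to receive higher
# prediction confidence than unseen nodes (non-members). A simple attack
# thresholds the maximum softmax probability.
model.eval()
with torch.no_grad():
    probs = F.softmax(model(data.x, data.edge_index), dim=1)
    confidence = probs.max(dim=1).values

threshold = 0.9  # assumed value; in practice tuned, e.g., via shadow models
member_guess = confidence > threshold

tpr = member_guess[data.train_mask].float().mean()  # members correctly flagged
fpr = member_guess[data.test_mask].float().mean()   # non-members wrongly flagged
print(f"attack TPR: {tpr:.2f}, FPR: {fpr:.2f}")
```

A defence mechanism of the kind this project targets would aim to shrink the gap between the two rates reported above while preserving the model's classification accuracy.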

Required knowledge

  • Graph neural networks
  • Python
  • Machine Learning Frameworks