Graph-structured data plays a crucial role in a broad range of applications such as social networks, geographical maps, and web link data. Graph neural networks (GNNs), while effective for modelling graph data, are vulnerable to adversarial attacks.
In this project, we aim to explore the adversarial robustness of GNNs against misclassification attacks. We will first investigate attacks on GNNs for classification that perturb the graph topology or inject carefully designed perturbations into node features. Such attacks can target specific nodes or degrade the performance of the GNN as a whole. As a next step, we will develop defence mechanisms to improve the robustness of GNNs and demonstrate their efficacy against such misclassification attacks in real-world settings.
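To make the feature-perturbation attack concrete, below is a minimal sketch of a gradient-based (FGSM-style) attack on node features. The one-layer GCN with random weights stands in for a trained model, and the toy graph, target node, label, and perturbation budget are all illustrative assumptions, not part of the proposal:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy undirected graph: 4 nodes, edges 0-1, 1-2, 2-3 (illustrative).
A = np.zeros((4, 4))
for i, j in [(0, 1), (1, 2), (2, 3)]:
    A[i, j] = A[j, i] = 1.0

# Symmetrically normalised adjacency with self-loops, as in a GCN layer.
A_hat = A + np.eye(4)
d = A_hat.sum(axis=1)
S = A_hat / np.sqrt(np.outer(d, d))

X = rng.normal(size=(4, 3))   # node features
W = rng.normal(size=(3, 2))   # stand-in for trained GCN weights

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def node_loss(X, target, label):
    """Cross-entropy loss of a one-layer GCN at a single node."""
    probs = softmax(S @ X @ W)
    return -np.log(probs[target, label]), probs

target, label = 0, 0
loss_clean, probs = node_loss(X, target, label)

# Gradient of the target node's loss w.r.t. ALL node features:
# dLoss/dlogits is (probs - one_hot) at the target row, then chain
# rule back through logits = S @ X @ W.
G = np.zeros_like(probs)
G[target] = probs[target] - np.eye(2)[label]
grad_X = S.T @ G @ W.T

# FGSM-style step: move features in the sign of the gradient.
X_adv = X + 0.5 * np.sign(grad_X)   # epsilon = 0.5, illustrative
loss_adv, _ = node_loss(X_adv, target, label)

print(f"clean loss {loss_clean:.3f} -> adversarial loss {loss_adv:.3f}")
```

Because the logits are linear in the features, a step aligned with the gradient provably increases this convex loss, so the attack also corrupts neighbouring nodes' features through the aggregation matrix `S`, one reason topology-aware defences are needed.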
Graph Neural Network, Python, Machine Learning Frameworks