RoGAT: a robust GNN combining a revised GAT with adjusted graphs


Abstract

Graph Neural Networks (GNNs) are useful deep learning models for non-Euclidean data. However, recent works show that GNNs are vulnerable to adversarial attacks: small perturbations can severely degrade the performance of many GNNs, such as Graph Attention Networks (GATs). Enhancing the robustness of GNNs is therefore a critical problem. In this paper, we propose Robust GAT (RoGAT) to improve the robustness of GNNs. Although the original GAT uses an attention mechanism to weight different edges, it remains sensitive to perturbations; RoGAT therefore adjusts the edge weights to revise the attention scores progressively. First, RoGAT tunes the edge weights based on the assumption that adjacent nodes should have similar features. Second, RoGAT further tunes the node features to suppress feature noise, since even a clean graph contains some unreliable data. Then, the adjusted GAT model is trained to defend against adversarial attacks. Experiments against both targeted and untargeted attacks demonstrate that RoGAT significantly outperforms most state-of-the-art defense methods. The implementation of RoGAT is based on the DeepRobust repository for adversarial attacks.
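As a rough illustration of the two adjustment steps described above, the sketch below re-weights edges by the cosine similarity of their endpoint features and then smooths node features with a weighted neighbourhood average. This is a minimal sketch only: the function names, the choice of cosine similarity, and the mixing coefficient alpha are illustrative assumptions, not details taken from the paper or from the DeepRobust implementation.

    import torch
    import torch.nn.functional as F

    def adjust_edge_weights(x, edge_index):
        # Weight each edge by the cosine similarity of its endpoint features,
        # so edges between dissimilar nodes (likely adversarial) are down-weighted.
        src, dst = edge_index
        sim = F.cosine_similarity(x[src], x[dst], dim=1)
        return sim.clamp(min=0.0)

    def smooth_features(x, edge_index, edge_weight, alpha=0.8):
        # One step of weighted neighbourhood averaging: each node keeps a fraction
        # alpha of its own features and mixes in the weighted mean of its neighbours'.
        src, dst = edge_index
        agg = torch.zeros_like(x)
        agg.index_add_(0, dst, edge_weight.unsqueeze(1) * x[src])
        deg = torch.zeros(x.size(0), device=x.device)
        deg.index_add_(0, dst, edge_weight)
        return alpha * x + (1.0 - alpha) * agg / deg.clamp(min=1e-12).unsqueeze(1)

    # Toy usage: 4 nodes with 3-dimensional features and a small directed edge list.
    x = torch.randn(4, 3)
    edge_index = torch.tensor([[0, 1, 2, 3], [1, 0, 3, 2]])
    w = adjust_edge_weights(x, edge_index)
    x_denoised = smooth_features(x, edge_index, w)
    print(w.shape, x_denoised.shape)

The resulting edge weights could then be fed into an attention-based GNN layer in place of uniform weights; the abstract's progressive adjustment would repeat such steps during training.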
