EDITS: Modeling and Mitigating Data Bias for Graph Neural Networks


Abstract

Graph Neural Networks (GNNs) have recently demonstrated a superior capability for tackling graph analytical problems in various applications. Nevertheless, with the widespread adoption of GNNs in high-stakes decision-making processes, there is growing societal concern that GNNs could make discriminatory, and potentially illegal, decisions against certain demographic groups. Although some explorations have been made towards developing fair GNNs, existing approaches are tailored to a specific GNN model. In practical scenarios, however, myriad GNN variants have been proposed for different tasks, and it is costly to train and fine-tune existing debiasing models for each of them. Moreover, the bias in a trained model can originate from the training data, yet how to mitigate bias in the graph data itself is usually overlooked. In this work, unlike existing approaches, we first propose novel definitions and metrics to measure the bias in an attributed network, which lead to an optimization objective for mitigating bias. Based on this objective, we develop a framework named EDITS to mitigate the bias in attributed networks while preserving useful information. EDITS works in a model-agnostic manner: it is independent of the specific GNN applied for downstream tasks. Extensive experiments on both synthetic and real-world datasets demonstrate the validity of the proposed bias metrics and the superiority of EDITS in both bias mitigation and utility maintenance. Open-source implementation: https://github.com/yushundong/EDITS.
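The abstract does not spell out the bias metric itself, but a minimal sketch may help fix the idea: one natural way to quantify attribute bias in an attributed network is a distributional distance (here the 1-Wasserstein distance) between the per-dimension feature distributions of two demographic groups. The function `attribute_bias`, its signature, and the toy data below are illustrative assumptions for this sketch, not the paper's actual formulation or API.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def attribute_bias(X, s):
    """Illustrative attribute-bias score: average 1-Wasserstein distance
    between the per-dimension feature distributions of the two groups
    defined by the binary sensitive attribute `s`.

    X : (n_nodes, n_features) node attribute matrix
    s : (n_nodes,) binary sensitive-attribute vector
    """
    g0, g1 = X[s == 0], X[s == 1]
    # Compare the empirical distribution of each attribute dimension
    # across the two groups; a larger distance means that dimension
    # carries more information about group membership, i.e., more bias.
    dists = [wasserstein_distance(g0[:, j], g1[:, j])
             for j in range(X.shape[1])]
    return float(np.mean(dists))

# Toy usage: shifting one attribute between groups inflates the score.
rng = np.random.default_rng(0)
s = rng.integers(0, 2, size=200)
X = rng.normal(size=(200, 3))
X[s == 1, 0] += 2.0          # inject bias into dimension 0
print(attribute_bias(X, s))  # clearly larger than for unbiased data
```

Under such a formulation, debiasing the data reduces to editing attributes (and, analogously, graph structure) so that this group-wise distributional distance shrinks while task-relevant information is preserved, which is the optimization objective the abstract alludes to.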
