Soft-GNN: towards robust graph neural networks via self-adaptive data utilization
Yao WU 1,2, Hong HUANG 1 (✉), Yu SONG 3, Hai JIN 1
1. National Engineering Research Center for Big Data Technology and System, Service Computing Technology and System Lab, Cluster and Grid Computing Lab, School of Computer Science and Technology, Huazhong University of Science and Technology, Wuhan 430074, China
2. College of Information and Communication, National University of Defense Technology, Wuhan 430019, China
3. Department of Computer Science and Operations Research, Université de Montréal, Montreal H3C 3J7, Canada
Abstract
Graph neural networks (GNNs) deliver strong performance on a wide range of graph-based data analysis tasks and have consequently seen broad adoption. A major concern, however, is their robustness, particularly when the graph data have been deliberately or accidentally polluted with noise, which makes learning robust GNNs under noisy conditions challenging. To address this issue, we propose Soft-GNN, a novel framework that mitigates the influence of label noise by adapting the data utilized in training. Our approach employs a dynamic data utilization strategy that estimates adaptive sample weights from three signals: prediction deviation, local deviation, and global deviation. By emphasizing informative training samples and suppressing the impact of label noise through dynamic data selection, the framework trains GNNs that are more robust. We evaluate the performance, robustness, generality, and complexity of our model on five real-world datasets, and the experimental results demonstrate the superiority of our approach over existing methods.
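The adaptive weighting idea behind Soft-GNN can be illustrated with a small sketch. The abstract names three signals (prediction deviation, local deviation, and global deviation) but does not give their formulas, so the snippet below is a hypothetical illustration of the prediction-deviation component only: nodes whose predictions disagree strongly with their (possibly noisy) labels are downweighted during training. The function names and the exponential weighting scheme are assumptions for illustration, not the authors' actual method.

```python
import numpy as np

def prediction_deviation(probs, labels):
    # Deviation of each node's predicted class distribution from its
    # (possibly noisy) label: 1 minus the probability assigned to the
    # labelled class. Large deviation suggests a mislabelled node.
    return 1.0 - probs[np.arange(len(labels)), labels]

def sample_weights(probs, labels, temperature=1.0):
    # Hypothetical weighting: exponentially downweight high-deviation
    # samples, then normalise so the weights form a distribution.
    dev = prediction_deviation(probs, labels)
    w = np.exp(-dev / temperature)
    return w / w.sum()

# Toy example: three nodes, two classes, all labelled as class 0.
probs = np.array([[0.9, 0.1],   # confident, agrees with label -> high weight
                  [0.2, 0.8],   # confident, disagrees -> likely noisy, low weight
                  [0.6, 0.4]])  # uncertain -> intermediate weight
labels = np.array([0, 0, 0])
w = sample_weights(probs, labels)
```

In a full pipeline, such weights would multiply the per-node cross-entropy loss at each epoch, so the effective training set shifts away from suspect labels as the model's predictions sharpen.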
Keywords
graph neural networks
node classification
label noise
robustness
Corresponding Author(s):
Hong HUANG
Just Accepted Date: 22 March 2024
Issue Date: 14 May 2024