Frontiers of Computer Science

Front. Comput. Sci.    2025, Vol. 19 Issue (2) : 192104    https://doi.org/10.1007/s11704-023-3307-2
Architecture
A comprehensive survey on graph neural network accelerators
Jingyu LIU1,2, Shi CHEN1,2, Li SHEN1,2
1. School of Computer, National University of Defense Technology, Changsha 410073, China
2. Key Laboratory of Advanced Microprocessor Chips and Systems, Changsha 410073, China
Abstract

Deep learning has achieved superior accuracy on Euclidean-structured data. Non-Euclidean data such as graphs carry richer structural information and can likewise be processed by neural networks to address more complex and practical problems. However, real-world graph data typically follows a power-law distribution, so the adjacency matrix of a graph is random and sparse. Graph processing accelerators (GPAs) were designed to handle these characteristics, but traditional graph computing processes only one-dimensional data, whereas the data in graph neural networks (GNNs) is multi-dimensional. Consequently, GNN execution combines traditional graph processing, with its irregular memory accesses, and neural network computation, with its regular compute patterns. Moreover, to extract more information from graph data and achieve better generalization, GNN models are growing deeper, which makes the cost of memory access and computation considerable. GNN accelerators are now being designed to address this issue. In this paper, we present a systematic survey of the design and implementation of GNN accelerators. Specifically, we review the challenges GNN accelerators face and examine existing work on addressing them in detail. Finally, we evaluate previous designs and propose future directions for this booming field.
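To make the two execution phases concrete, here is a minimal sketch (ours, not taken from any surveyed accelerator) of one GCN-style layer, assuming numpy and scipy: aggregation is a sparse-dense product over the adjacency matrix, and combination is a dense product with the weight matrix.

```python
# Minimal illustrative GCN layer: H' = ReLU(A @ H @ W).
# The SpMM step inherits the graph's sparsity (irregular, memory-bound);
# the GEMM step is dense and regular (compute-bound).
import numpy as np
import scipy.sparse as sp

def gcn_layer(adj: sp.csr_matrix, features: np.ndarray, weight: np.ndarray) -> np.ndarray:
    aggregated = adj @ features        # sparse aggregation over neighbors
    combined = aggregated @ weight     # dense combination
    return np.maximum(combined, 0.0)   # ReLU

# Toy example: 4 nodes, 8-dim features, 16 hidden units.
rng = np.random.default_rng(0)
A = sp.random(4, 4, density=0.5, format="csr", random_state=0)
H = rng.standard_normal((4, 8))
W = rng.standard_normal((8, 16))
print(gcn_layer(A, H, W).shape)  # (4, 16)
```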

Keywords: graph neural network; accelerators; graph convolutional networks; design space exploration; deep learning; domain-specific architecture
Corresponding Author(s): Li SHEN   
Just Accepted Date: 22 November 2023   Issue Date: 18 March 2024
 Cite this article:   
Jingyu LIU, Shi CHEN, Li SHEN. A comprehensive survey on graph neural network accelerators[J]. Front. Comput. Sci., 2025, 19(2): 192104.
 URL:  
https://academic.hep.com.cn/fcs/EN/10.1007/s11704-023-3307-2
https://academic.hep.com.cn/fcs/EN/Y2025/V19/I2/192104
Fig.1  The entire process of GNN [50]
  
Name | Model supported | Stage optimized | Platform | Hybrid or uniform
HyGCN | GCNs | Inference | ASIC | Hybrid
Auten et al. [55] | GCNs, GATs | Inference | ASIC | Hybrid
GraphACT | GCNs | Training | CPU-FPGA | Hybrid
DeepBurning-GL [56] | GCNs | Inference | FPGA | Hybrid
AWB-GCN | GCNs | Inference | ASIC | Uniform
GCNAX [28] | GCNs | Inference | ASIC | Uniform
Cambricon-G [57] | GCN, GraphSAGE | Both | ASIC | Uniform
BlockGNN [58] | GCNs, GAT | Inference | FPGA | Uniform
ReGraphX [59] | GNNs | Training | PIM | Uniform
I-GCN [30] | GCNs | Inference | ASIC | Hybrid
GRIP [60] | GCN, GraphSAGE, GIN | Inference | ASIC | Hybrid
EnGN [54] | GCNs, GRN | Inference | ASIC | Uniform
Huang et al. [27] | GCNs | Inference | PIM | Uniform
GCoD [29] | GCNs | Inference | ASIC | Uniform
ReGNN [61] | GNNs | Inference | ASIC | Hybrid
ReGNN [62] | GCNs | Inference | PIM | Hybrid
Graphite [31] | GNNs | Both | CPU | Uniform
SmartSAGE [63] | GraphSAGE | Training | CPU-FPGA | Uniform
CoGNN [64] | GNNs | Training | GPU | Uniform
PASGCN [34] | GCNs | Inference | PIM | Uniform
FlowGNN [65] | GNNs | Inference | ASIC | Hybrid
SGCN [33] | GCNs | Inference | ASIC | Hybrid
GROW [32] | GCNs | Inference | ASIC | Uniform
GraNDe [66] | GCNs | Inference | ASIC | Hybrid
GNNAdvisor [25] | GNNs | Both | GPU | Uniform
GNNLab [20] | GNNs | Training | GPU | Uniform
Degree-Quant [22] | GNNs | Both | CPU-GPU | Uniform
FlexGraph [21] | GNNs | Training | CPU | Uniform
SGQuant [23] | GNNs | Both | Memory-constrained devices | Uniform
QGTC [24] | GNNs | Both | GPU | Uniform
Xu et al. [35] | GCNs | Training | GPU | Uniform
Tab.1  Current GNN acceleration architectures
Fig.2  The design aspects of GNN accelerators
Behavior | Aggregation | Combination
Access pattern | Indirect and irregular | Direct and regular
Data reusability | Low | High
Computation pattern | Dynamic and irregular | Static and regular
Computation intensity | Low | High
Execution bound | Memory | Compute
Tab.2  Execution behaviors in GNNs [49]
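The contrast in Tab.2 shows up directly in code. In this hedged sketch (ours; indptr and indices are assumed CSR adjacency arrays), aggregation gathers neighbor features through data-dependent indices, exactly the indirect, low-reuse pattern the table describes, while combination remains a single regular GEMM.

```python
import numpy as np

def aggregate_csr(indptr: np.ndarray, indices: np.ndarray, features: np.ndarray) -> np.ndarray:
    """Sum-aggregate neighbor features via CSR index arrays (illustrative only)."""
    out = np.zeros_like(features)
    for v in range(len(indptr) - 1):
        neighbors = indices[indptr[v]:indptr[v + 1]]  # data-dependent addresses
        out[v] = features[neighbors].sum(axis=0)      # irregular gather, low reuse
    return out

# Toy graph: 0 -> {1, 2}, 1 -> {2}, 2 -> {}.
indptr = np.array([0, 2, 3, 3])
indices = np.array([1, 2, 2])
features = np.arange(6.0).reshape(3, 2)
print(aggregate_csr(indptr, indices, features))
# Combination, by contrast, is one dense, regular product: out @ weight.
```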
Name | Nodes | Edges | Features | Storage | Classes
Pubmed (PB) | 19717 | 88648 | 500 | 38 MB | 3
Cora (CR) | 2708 | 10556 | 1433 | 15 MB | 7
Citeseer (CS) | 3327 | 9104 | 3703 | 47 MB | 6
Reddit (RD) | 232965 | 11465892 | 602 | 1.8 GB | 41
NELL (NE) | 65755 | 266144 | 5414 | 1.3 GB | 210
Tab.3  Common GNN datasets
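The Storage column is dominated by the fp32 feature matrix, roughly N × D × 4 bytes; a quick arithmetic check (ours, not from the paper) reproduces most of the entries.

```python
# Sanity check of Tab.3's Storage column: fp32 features take N * D * 4 bytes.
datasets = {
    "Pubmed":   (19717, 500),
    "Cora":     (2708, 1433),
    "Citeseer": (3327, 3703),
    "Reddit":   (232965, 602),
    "NELL":     (65755, 5414),
}
for name, (nodes, feats) in datasets.items():
    mib = nodes * feats * 4 / 2**20
    print(f"{name:9s} ~{mib:8.1f} MiB of fp32 features")
# Cora: 2708 * 1433 * 4 B ≈ 14.8 MiB, matching the ~15 MB entry; Reddit's
# 1.8 GB entry evidently also counts edges, labels, and other payload.
```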
Fig.3  The non-zero elements of a GNN adjacency matrix (a) and weight matrix (b) [26]
Fig.4  An example of inference on vertex B using a two-layer GCN. The nodeflow describes the propagation of features within a message-passing layer (MPL) [60]. (a) Input graph; (b) nodeflow; (c) inference dataflow
Fig.5  Execution time breakdown of the two phases [49]
Fig.6  Breakdown of the pipeline slots spent on retiring micro-ops or stalled by different bottlenecks during a full-batch training of GraphSAGE on a CPU [31]
Fig.7  Overview of GNNAdvisor [25]
Fig.8  A breakdown of memory usage and data similarity for different stages of the SET model when training OGB-Papers over multiple GPUs (G0, G1,...) with 16 GB of memory each [20]
Fig.9  Illustrative examples of EdgeUpdate redundancy (ER) and Aggregation redundancy (AR) [61]
Fig.10  Overview of ReGNN [61]
Fig.11  Overview of I-GCN [30]
Fig.12  Overview of GCoD [29]
Fig.13  Overview of GROW [32]
Fig.14  Overview of SGCN [33]
Fig.15  Per-minibatch scheduling between CPU and FPGA (top), and between FPGA computation modules (bottom) [52]
Fig.16  The number of operations for the five datasets (first layer) using the two execution orders [28]
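The two execution orders in Fig.16 differ only in how the chained product A·X·W is parenthesized, yet their operation counts diverge. A back-of-the-envelope model (ours; the output dimension F is an assumed value) makes the gap visible: with A an N×N sparse matrix holding M non-zeros, (A·X)·W costs about M·D + N·D·F multiply-accumulates, while A·(X·W) costs N·D·F + M·F, so computing X·W first wins whenever F < D.

```python
# MAC-count model for the two orders of A @ X @ W (ours, illustrative).
def macs(N: int, M: int, D: int, F: int) -> tuple[int, int]:
    ax_first = M * D + N * D * F   # SpMM on D-wide features, then GEMM
    xw_first = N * D * F + M * F   # GEMM first, then SpMM on F-wide features
    return ax_first, xw_first

# Cora-like numbers from Tab.3 with an assumed 16-unit output layer:
print(macs(N=2708, M=10556, D=1433, F=16))
# -> (77215772, 62257920): ~77.2M vs ~62.3M MACs, favoring X @ W first.
```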
Fig.17  FlexGraph architecture [21]
Fig.18  Execution paths of backward aggregations in two layers on the example graph [35]
Fig.19  Overview of GRIP [60]
Fig.20  The main framework of QGTC [24]
Fig.21  High-level view of the stochastic element of Degree-Quant. Masked (high in-degree) nodes, in green, operate at full precision, while unmasked nodes (red) operate at reduced precision. High in-degree nodes contribute most to poor gradient estimates, hence they are stochastically masked more often [22]
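A hypothetical sketch of the stochastic element (ours; the protection probabilities and scaling are assumptions, not Degree-Quant's exact scheme): each step, nodes are masked to full precision with a probability that grows with in-degree, so the nodes that most distort gradient estimates are protected most often.

```python
# Illustrative stochastic protective masking in the spirit of Fig.21.
import numpy as np

def protective_mask(in_degrees, p_min=0.0, p_max=0.2, rng=None):
    """True = keep this node at full precision for the current step."""
    rng = rng or np.random.default_rng()
    d = np.asarray(in_degrees, dtype=float)
    span = max(d.max() - d.min(), 1e-12)                # avoid divide-by-zero
    p = p_min + (p_max - p_min) * (d - d.min()) / span  # degree-scaled probability
    return rng.random(d.shape) < p

print(protective_mask([1, 2, 3, 100], rng=np.random.default_rng(0)))
```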
Fig.22  Multi-Granularity Quantization: (a) Component-wise, (b) Topology-aware, (c) Layer-wise, and (d) Uniform Quantization. NOTE: the same color represents the same quantization bit [23]
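To illustrate Fig.22's granularities, the sketch below (ours, not SGQuant's implementation) applies one uniform quantizer at different bit-widths; the layer-wise and topology-aware bit assignments are hypothetical policies.

```python
# One quantizer, many granularities: bits can vary per layer, per node
# (topology-aware), or be held uniform, as in Fig.22.
import numpy as np

def quantize(x: np.ndarray, bits: int) -> np.ndarray:
    """Uniform symmetric quantization to the given bit-width."""
    qmax = 2 ** (bits - 1) - 1
    scale = max(np.abs(x).max(), 1e-12) / qmax
    return np.round(x / scale).clip(-qmax, qmax) * scale

layer_bits = [8, 6, 4]                         # assumed layer-wise policy
def node_bits(degree, hi=8, lo=4, threshold=16):
    return hi if degree >= threshold else lo   # assumed topology-aware policy

x = np.random.default_rng(0).standard_normal(8)
print(quantize(x, layer_bits[0]))
```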
Fig.23  Overview of FlowGNN [65]
Fig.24  The ReRAM architecture [28]
Fig.25  The REFLIP architecture [27]
Fig.26  Comparing and contrasting the classic graph mapping and our flipped mapping for GCNs. (a) A sample graph, (b) traditional graph processing mapping scheme that maps sparse edge data into crossbar arrays, and (c) the flipped-mapping scheme in REFLIP that maps multi-dimensional vertex features into crossbar arrays. The notation e_{i,j} represents an edge pointing from vertex i to vertex j [27]
Fig.27  Overview of ReGNN [62]
Fig.28  GCN design with the two design patterns. (a) Adjacency matrix; (b) GCN design with general MM (GEMM) crossbars; (c) GCN design with CAM crossbars and MAC crossbars [34]
Fig.29  Overview of PASGCN [34]
Fig.30  Overview of GraNDe [66]

Notation : Description
G : A graph
|·| : The length of a set
V : The set of nodes in a graph
⊙ : Element-wise product
A^T : Transpose of the adjacency matrix A
v_i : A node, v_i ∈ V
E : The set of edges in a graph
N(v) : The neighbors of node v
D_v : The degree of node v
e_ij : An edge, e_ij ∈ E
N : The number of nodes, N = |V|
M : The number of edges, M = |E|
D : The dimension of a node feature vector
X ∈ ℝ^(N×D) : The feature matrix of a graph
x ∈ ℝ^N : The feature vector of a graph when D equals 1
X_i ∈ ℝ^D : The feature vector of node v_i
Table A1  Symbols and definitions
Models | Combine | Aggregate
GCN | h_v^{l-1} | c_u^l
GIN | MLP^l(h_v^{l-1}, W^l) | (1+ε) ⊙ c_v^l + c_u^l
GraphSAGE | h_v^{l-1} | Mean(c_u^l)
GAT | h_v^{l-1} | α_{v,u}^l ⊙ c_u^l
Table A2  Computational operations in the two phases of a GNN
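Table A2's two-phase template can be read as code. A minimal sketch for the GraphSAGE-mean row (ours; dense numpy with separate self/neighbor weights, an assumed parameterization):

```python
# GraphSAGE-mean as Combine + Aggregate:
#   h_v^l = ReLU([h_v^{l-1} W_self ; Mean_{u in N(v)}(h_u^{l-1}) W_neigh])
import numpy as np

def graphsage_mean_layer(neigh_lists, h_prev, w_self, w_neigh):
    out = []
    for v, neigh in enumerate(neigh_lists):
        c_u = h_prev[neigh].mean(axis=0) if len(neigh) else np.zeros(h_prev.shape[1])
        out.append(np.concatenate([h_prev[v] @ w_self, c_u @ w_neigh]))
    return np.maximum(np.array(out), 0.0)

rng = np.random.default_rng(0)
h = rng.standard_normal((3, 4))
print(graphsage_mean_layer([[1, 2], [0], []], h,
                           rng.standard_normal((4, 2)),
                           rng.standard_normal((4, 2))).shape)  # (3, 4)
```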
1 Cao S, Lu W, Xu Q. Deep neural networks for learning graph representations. In: Proceedings of the 30th AAAI Conference on Artificial Intelligence. 2016, 1145−1152
2 Velickovic P, Cucurull G, Casanova A, Romero A, Liò P, Bengio Y. Graph attention networks. In: Proceedings of the 6th International Conference on Learning Representations. 2018
3 You J, Ying R, Ren X, Hamilton W L, Leskovec J. GraphRNN: a deep generative model for graphs. 2018, arXiv preprint arXiv: 1802.08773
4 Xu K, Hu W, Leskovec J, Jegelka S. How powerful are graph neural networks? In: Proceedings of the 7th International Conference on Learning Representations. 2019
5 Wu Z, Pan S, Chen F, Long G, Zhang C, Yu P S. A comprehensive survey on graph neural networks. IEEE Transactions on Neural Networks and Learning Systems, 2021, 32(1): 4–24
6 Hamilton W L, Ying Z, Leskovec J. Inductive representation learning on large graphs. In: Proceedings of the 31st International Conference on Neural Information Processing Systems. 2017, 1025−1035
7 Ying R, He R, Chen K, Eksombatchai P, Hamilton W L, Leskovec J. Graph convolutional neural networks for web-scale recommender systems. In: Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. 2018, 974−983
8 Kipf T N, Welling M. Semi-supervised classification with graph convolutional networks. In: Proceedings of the 5th International Conference on Learning Representations. 2017
9 Gao H, Wang Z, Ji S. Large-scale learnable graph convolutional networks. In: Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. 2018, 1416−1424
10 Li R, Wang S, Zhu F, Huang J. Adaptive graph convolutional neural networks. In: Proceedings of the 32nd AAAI Conference on Artificial Intelligence, (AAAI 18), the 30th Innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI 18). 2018, 434
11 Xu K, Hu W, Leskovec J, Jegelka S. How powerful are graph neural networks? 2018, arXiv preprint arXiv: 1810.00826
12 Zhang M, Cui Z, Neumann M, Chen Y. An end-to-end deep learning architecture for graph classification. In: Proceedings of the 32nd AAAI Conference on Artificial Intelligence, (AAAI-18), the 30th Innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18). 2018, 4438−4445
13 Yin L, Wang J, Zheng H. Exploring architecture, dataflow, and sparsity for GCN accelerators: a holistic framework. In: Proceedings of the Great Lakes Symposium on VLSI 2023. 2023, 489−495
14 Garg R, Qin E, Munoz-Martinez F, Guirado R, Jain A, Abadal S, Abellan J L, Acacio M E, Alarcon E, Rajamanickam S, Krishna T. Understanding the design space of sparse/dense multiphase GNN dataflows on spatial accelerators. In: Proceedings of IEEE International Parallel and Distributed Processing Symposium. 2022, 571−582
15 Hamaguchi T, Oiwa H, Shimbo M, Matsumoto Y. Knowledge transfer for out-of-knowledge-base entities: a graph neural network approach. In: Proceedings of the 26th International Joint Conference on Artificial Intelligence. 2017, 1802−1808
16 Schlichtkrull M, Kipf T N, Bloem P, van den Berg R, Titov I, Welling M. Modeling relational data with graph convolutional networks. In: Proceedings of the 15th International Conference on the Semantic Web. 2018, 593−607
17 Zhou J, Cui G, Hu S, Zhang Z, Yang C, Liu Z, Wang L, Li C, Sun M. Graph neural networks: a review of methods and applications. AI Open, 2020, 1: 57–81
18 Ma L, Yang Z, Miao Y, Xue J, Wu M, Zhou L, Dai Y. NeuGraph: parallel deep neural network computation on large graphs. In: Proceedings of 2019 USENIX Annual Technical Conference. 2019, 443−458
19 Yan M, Chen Z, Deng L, Ye X, Zhang Z, Fan D, Xie Y. Characterizing and understanding GCNs on GPU. IEEE Computer Architecture Letters, 2020, 19(1): 22–25
20 Yang J, Tang D, Song X, Wang L, Yin Q, Chen R, Yu W, Zhou J. GNNLab: a factored system for sample based GNN training over GPUs. In: Proceedings of the 17th European Conference on Computer Systems. 2022, 417−434
21 Wang L, Yin Q, Tian C, Yang J, Chen R, Yu W, Yao Z, Zhou J. FlexGraph: a flexible and efficient distributed framework for GNN training. In: Proceedings of the 16th European Conference on Computer Systems. 2021, 67−82
22 Tailor S A, Fernández-Marqués J, Lane N D. Degree-quant: quantization-aware training for graph neural networks. In: Proceedings of the 9th International Conference on Learning Representations. 2021
23 Feng B, Wang Y, Li X, Yang S, Peng X, Ding Y. SGQuant: squeezing the last bit on graph neural networks with specialized quantization. In: Proceedings of the 32nd IEEE International Conference on Tools with Artificial Intelligence. 2020, 1044−1052
24 Wang Y, Feng B, Ding Y. QGTC: accelerating quantized graph neural networks via GPU tensor core. In: Proceedings of the 27th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming. 2022, 107−119
25 Wang Y, Feng B, Li G, Li S, Deng L, Xie Y, Ding Y. GNNAdvisor: an adaptive and efficient runtime system for GNN acceleration on GPUs. In: Proceedings of the 15th USENIX Symposium on Operating Systems Design and Implementation. 2021, 515−531
26 Geng T, Li A, Shi R, Wu C, Wang T, Li Y, Haghi P, Tumeo A, Che S, Reinhardt S, Herbordt M C. AWB-GCN: a graph convolutional network accelerator with runtime workload rebalancing. In: Proceedings of the 53rd Annual IEEE/ACM International Symposium on Microarchitecture. 2020, 922−936
27 Huang Y, Zheng L, Yao P, Wang Q, Liao X, Jin H, Xue J. Accelerating graph convolutional networks using crossbar-based processing-in-memory architectures. In: Proceedings of IEEE International Symposium on High-Performance Computer Architecture. 2022, 1029−1042
28 Li J, Louri A, Karanth A, Bunescu R C. GCNAX: a flexible and energy-efficient accelerator for graph convolutional neural networks. In: Proceedings of IEEE International Symposium on High-Performance Computer Architecture. 2021, 775−788
29 You H, Geng T, Zhang Y, Li A, Lin Y. GCoD: graph convolutional network acceleration via dedicated algorithm and accelerator co-design. In: Proceedings of IEEE International Symposium on High-Performance Computer Architecture. 2022, 460−474
30 Geng T, Wu C, Zhang Y, Tan C, Xie C, You H, Herbordt M, Lin Y, Li A. I-GCN: a graph convolutional network accelerator with runtime locality enhancement through islandization. In: Proceedings of the 54th Annual IEEE/ACM International Symposium on Microarchitecture. 2021, 1051−1063
31 Gong Z, Ji H, Yao Y, Fletcher C W, Hughes C J, Torrellas J. Graphite: optimizing graph neural networks on CPUs through cooperative software-hardware techniques. In: Proceedings of the 49th Annual International Symposium on Computer Architecture. 2022, 916−931
32 Hwang R, Kang M, Lee J, Kam D, Lee Y, Rhu M. GROW: a row-stationary sparse-dense GEMM accelerator for memory-efficient graph convolutional neural networks. In: Proceedings of IEEE International Symposium on High-Performance Computer Architecture. 2023, 42−55
33 Yoo M, Song J, Lee J, Kim N, Kim Y, Lee J. SGCN: exploiting compressed-sparse features in deep graph convolutional network accelerators. In: Proceedings of IEEE International Symposium on High-Performance Computer Architecture. 2023, 1−14
34 Yang T, Li D, Ma F, Song Z, Zhao Y, Zhang J, Liu F, Jiang L. PASGCN: an ReRAM-based PIM design for GCN with adaptively sparsified graphs. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 2023, 42(1): 150–163
35 Xu S, Shao Z, Yang C, Liao X, Jin H. Accelerating backward aggregation in GCN training with execution path preparing on GPUs. IEEE Transactions on Parallel and Distributed Systems, 2022, 33(12): 4891–4902
36 Gui C, Zheng L, He B, Liu C, Chen X, Liao X, Jin H. A survey on graph processing accelerators: challenges and opportunities. Journal of Computer Science and Technology, 2019, 34(2): 339–371
37 Roy A, Mihailovic I, Zwaenepoel W. X-Stream: edge-centric graph processing using streaming partitions. In: Proceedings of the 24th Symposium on Operating Systems Principles. 2013, 472−488
38 Perozzi B, Al-Rfou R, Skiena S. DeepWalk: online learning of social representations. In: Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. 2014, 701−710
39 Ozdal M M, Yesil S, Kim T, Ayupov A, Greth J, Burns S, Ozturk O. Energy efficient architecture for graph analytics accelerators. In: Proceedings of the 43rd ACM/IEEE Annual International Symposium on Computer Architecture. 2016, 166−177
40 Zhang M, Zhuo Y, Wang C, Gao M, Wu Y, Chen K, Kozyrakis C, Qian X. GraphP: reducing communication for PIM-based graph processing with efficient data partition. In: Proceedings of IEEE International Symposium on High Performance Computer Architecture. 2018, 544−557
41 Song L, Zhuo Y, Qian X, Li H, Chen Y. GraphR: accelerating graph processing using ReRAM. In: Proceedings of IEEE International Symposium on High Performance Computer Architecture. 2018, 531−543
42 Xie C, Yan L, Li W J, Zhang Z. Distributed power-law graph computing: theoretical and empirical analysis. In: Proceedings of the 27th International Conference on Neural Information Processing Systems. 2014, 1673−1681
43 Gonzalez J E, Low Y, Gu H, Bickson D, Guestrin C. PowerGraph: distributed graph-parallel computation on natural graphs. In: Proceedings of the 10th USENIX Conference on Operating Systems Design and Implementation. 2012, 17−30
44 Heidari S, Simmhan Y, Calheiros R N, Buyya R. Scalable graph processing frameworks: a taxonomy and open challenges. ACM Computing Surveys, 2019, 51(3): 60
45 Ham T J, Wu L, Sundaram N, Satish N, Martonosi M. Graphicionado: a high-performance and energy-efficient accelerator for graph analytics. In: Proceedings of the 49th Annual IEEE/ACM International Symposium on Microarchitecture. 2016, 1−13
46 Yan M, Hu X, Li S, Basak A, Li H, Ma X, Akgun I, Feng Y, Gu P, Deng L, Ye X, Zhang Z, Fan D, Xie Y. Alleviating irregularity in graph analytics acceleration: a hardware/software co-design approach. In: Proceedings of the 52nd Annual IEEE/ACM International Symposium on Microarchitecture. 2019, 615−628
47 Zhang S, Qin Z, Yang Y, Shen L, Wang Z. Transparent partial page migration between CPU and GPU. Frontiers of Computer Science, 2020, 14(3): 143101
48 Fan S, Fei J, Shen L. Accelerating deep learning with a parallel mechanism using CPU + MIC. International Journal of Parallel Programming, 2018, 46(4): 660–673
49 Yan M, Deng L, Hu X, Liang L, Feng Y, Ye X, Zhang Z, Fan D, Xie Y. HyGCN: a GCN accelerator with hybrid architecture. In: Proceedings of IEEE International Symposium on High Performance Computer Architecture. 2020, 15−29
50 Yang T, Li D, Han Y, Zhao Y, Liu F, Liang X, He Z, Jiang L. PIMGCN: a ReRAM-based PIM design for graph convolutional network acceleration. In: Proceedings of the 58th ACM/IEEE Design Automation Conference. 2021, 583−588
51 Yang H. AliGraph: a comprehensive graph neural network platform. In: Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. 2019, 3165−3166
52 Zeng H, Prasanna V K. GraphACT: accelerating GCN training on CPU-FPGA heterogeneous platforms. In: Proceedings of 2020 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays. 2020, 255−265
53 Kung H T. Why systolic architectures? Computer, 1982, 15(1): 37−46
54 Liang S, Wang Y, Liu C, He L, Li H, Xu D, Li X. EnGN: a high-throughput and energy-efficient accelerator for large graph neural networks. IEEE Transactions on Computers, 2021, 70(9): 1511–1525
55 Auten A, Tomei M, Kumar R. Hardware acceleration of graph neural networks. In: Proceedings of the 57th ACM/IEEE Design Automation Conference. 2020, 1−6
56 Liang S, Liu C, Wang Y, Li H, Li X. DeepBurning-GL: an automated framework for generating graph neural network accelerators. In: Proceedings of IEEE/ACM International Conference on Computer Aided Design. 2020, 72
57 Song X, Zhi T, Fan Z, Zhang Z, Zeng X, Li W, Hu X, Du Z, Guo Q, Chen Y. Cambricon-G: a polyvalent energy-efficient accelerator for dynamic graph neural networks. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 2022, 41(1): 116–128
58 Zhou Z, Shi B, Zhang Z, Guan Y, Sun G, Luo G. BlockGNN: towards efficient GNN acceleration using block-circulant weight matrices. In: Proceedings of the 58th ACM/IEEE Design Automation Conference. 2021, 1009−1014
59 Arka A I, Doppa J R, Pande P P, Joardar B K, Chakrabarty K. ReGraphX: NoC-enabled 3D heterogeneous ReRAM architecture for training graph neural networks. In: Proceedings of Design, Automation & Test in Europe Conference & Exhibition. 2021, 1667−1672
60 Kiningham K, Levis P, Ré C. GRIP: a graph neural network accelerator architecture. IEEE Transactions on Computers, 2023, 72(4): 914–925
61 Chen C, Li K, Li Y, Zou X. ReGNN: a redundancy-eliminated graph neural networks accelerator. In: Proceedings of IEEE International Symposium on High-Performance Computer Architecture. 2022, 429−443
62 Liu C, Liu H, Jin H, Liao X, Zhang Y, Duan Z, Xu J, Li H. ReGNN: a ReRAM-based heterogeneous architecture for general graph neural networks. In: Proceedings of the 59th ACM/IEEE Design Automation Conference. 2022, 469−474
63 Lee Y, Chung J, Rhu M. SmartSAGE: training large-scale graph neural networks using in-storage processing architectures. In: Proceedings of the 49th Annual International Symposium on Computer Architecture. 2022, 932−945
64 Sun Q, Liu Y, Yang H, Zhang R, Dun M, Li M, Liu X, Xiao W, Li Y, Luan Z, Qian D. CoGNN: efficient scheduling for concurrent GNN training on GPUs. In: Proceedings of International Conference on High Performance Computing, Networking, Storage and Analysis. 2022, 39
65 Sarkar R, Abi-Karam S, He Y, Sathidevi L, Hao C. FlowGNN: a dataflow architecture for real-time workload-agnostic graph neural network inference. In: Proceedings of the 29th IEEE International Symposium on High-Performance Computer Architecture. 2023, 1099−1112
66 Yun S, Kim B, Park J, Nam H, Ahn J H, Lee E. GraNDe: near-data processing architecture with adaptive matrix mapping for graph convolutional networks. IEEE Computer Architecture Letters, 2022, 21(2): 45–48
67 Gustavson F G. Two fast algorithms for sparse matrices: multiplication and permuted transposition. ACM Transactions on Mathematical Software, 1978, 4(3): 250–269