Frontiers of Computer Science

ISSN 2095-2228

ISSN 2095-2236(Online)

CN 10-1014/TP

Postal Subscription Code 80-970

2018 Impact Factor: 1.129

Volume 19, Issue 2

Architecture
A comprehensive survey on graph neural network accelerators
Jingyu LIU, Shi CHEN, Li SHEN
Front. Comput. Sci., 2025, 19(2): 192104.
https://doi.org/10.1007/s11704-023-3307-2

Abstract   HTML   PDF (16089KB)

Deep learning has achieved superior accuracy on Euclidean-structured data. Non-Euclidean data, such as graphs, carries richer structural information and can likewise be processed by neural networks to address more complex and practical problems. However, real-world graph data obeys a power-law distribution, so the adjacency matrix of a graph is random and sparse. Graph processing accelerators (GPAs) were designed to handle such workloads, but graph processing operates only on one-dimensional data, whereas the graph data in graph neural networks (GNNs) is multi-dimensional. Consequently, GNN execution combines traditional graph processing, with its irregular memory accesses, and neural network computation, which is regular. Moreover, to extract more information from graph data and achieve better generalization, GNN models are growing deeper, so the overhead of memory access and computation is considerable. GNN accelerators have been designed to address these issues. In this paper, we conduct a systematic survey of the design and implementation of GNN accelerators. Specifically, we review the challenges faced by GNN accelerators and examine the existing work that addresses them in detail. Finally, we evaluate previous works and propose future directions for this booming field.
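
To make the two execution phases concrete, here is a minimal NumPy/SciPy sketch (ours, not from the survey) of the workload a GNN accelerator targets: a sparse, irregular aggregation over a random sparse adjacency matrix followed by a dense, regular combination. All sizes and the density value are illustrative assumptions.

    import numpy as np
    import scipy.sparse as sp

    rng = np.random.default_rng(0)
    n, f_in, f_out = 1000, 64, 32                 # illustrative sizes

    # Sparse, random adjacency matrix standing in for a power-law graph.
    A = sp.random(n, n, density=0.005, format="csr", random_state=0)
    X = rng.standard_normal((n, f_in)).astype(np.float32)
    W = rng.standard_normal((f_in, f_out)).astype(np.float32)

    H_agg = A @ X       # aggregation phase: sparse gather, irregular memory access
    H_out = H_agg @ W   # combination phase: dense GEMM, regular computation
    print(H_out.shape)  # (1000, 32)

The imbalance between these two phases — memory-bound aggregation versus compute-bound combination — is precisely what dedicated GNN accelerators exploit.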

Figures and Tables | References | Related Articles | Metrics
Artificial Intelligence
Exploring & exploiting high-order graph structure for sparse knowledge graph completion
Tao HE, Ming LIU, Yixin CAO, Zekun WANG, Zihao ZHENG, Bing QIN
Front. Comput. Sci., 2025, 19(2): 192306.
https://doi.org/10.1007/s11704-023-3521-y

Abstract   HTML   PDF (17469KB)

Sparse Knowledge Graph (KG) scenarios pose a challenge for previous Knowledge Graph Completion (KGC) methods: completion performance degrades rapidly as graph sparsity increases. The problem is exacerbated by the widespread existence of sparse KGs in practical applications. To alleviate this challenge, we present a novel framework, LR-GCN, that automatically captures valuable long-range dependencies among entities to supplement insufficient structural features and distills logical reasoning knowledge for sparse KGC. The proposed approach comprises two main components: a GNN-based predictor and a reasoning path distiller. The reasoning path distiller explores high-order graph structures such as reasoning paths and encodes them as rich-semantic edges, explicitly injecting long-range dependencies into the predictor. This step also densifies the KG, effectively alleviating the sparsity issue. The distiller further transfers logical reasoning knowledge from the mined reasoning paths into the predictor. The two components are jointly optimized using a well-designed variational EM algorithm. Extensive experiments and analyses on four sparse benchmarks demonstrate the effectiveness of our proposed method.
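
As a rough illustration of the path-distilling idea, the sketch below (hypothetical toy data; not the authors' code) mines 2-hop reasoning paths in a tiny triple store and adds them back as composite edges — the densification step in spirit.

    from collections import defaultdict

    triples = [("a", "born_in", "b"), ("b", "city_of", "c"), ("a", "works_at", "d")]

    out_edges = defaultdict(list)
    for h, r, t in triples:
        out_edges[h].append((r, t))

    # Encode each 2-hop path h -r1-> m -r2-> t as a single rich-semantic edge.
    densified = list(triples)
    for h, r1, m in triples:
        for r2, t in out_edges[m]:
            densified.append((h, f"{r1}/{r2}", t))

    print(densified[-1])  # ('a', 'born_in/city_of', 'c'): a long-range dependency edge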

Figures and Tables | References | Supplementary Material | Related Articles | Metrics
Nonconvex and discriminative transfer subspace learning for unsupervised domain adaptation
Yueying LIU, Tingjin LUO
Front. Comput. Sci., 2025, 19(2): 192307.
https://doi.org/10.1007/s11704-023-3228-0

Abstract   HTML   PDF (15936KB)

Unsupervised transfer subspace learning is a challenging and important topic in domain adaptation, which aims to classify unlabeled target data by using source domain information. Traditional transfer subspace learning methods often impose low-rank constraints via the trace norm to preserve the structural information of data from different domains. However, the trace norm is only a convex surrogate for the ideal low-rank constraint and may make solutions deviate seriously from the original optima. In addition, traditional methods directly use the strict labels of the source domain, which makes it difficult to cope with label noise. To solve these problems, we propose a novel nonconvex and discriminative transfer subspace learning method, named NDTSL, that incorporates the Schatten-p norm and a soft label matrix. Specifically, the Schatten-p norm is imposed to approximate the ideal low-rank constraint and obtain a better low-rank representation. We then design and adopt a soft label matrix in the source domain to learn a more flexible classifier and enhance the discriminative ability on target data. Moreover, since the Schatten-p norm is nonconvex, we design an efficient iterative algorithm, IALM, to solve the resulting problem. Finally, experimental results on several public transfer tasks demonstrate the effectiveness of NDTSL compared with several state-of-the-art methods.
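
For readers unfamiliar with the Schatten-p penalty, the short sketch below (our illustration, not the paper's solver) computes it from singular values; p = 1 recovers the trace (nuclear) norm, and smaller p approximates the rank more tightly.

    import numpy as np

    def schatten_p(M, p):
        """||M||_{S_p}^p = sum_i sigma_i^p; tends to rank(M) as p -> 0."""
        s = np.linalg.svd(M, compute_uv=False)
        return np.sum(s ** p)

    M = np.outer(np.arange(1.0, 5.0), np.arange(1.0, 4.0))  # a rank-1 matrix
    print(schatten_p(M, 1.0))  # trace (nuclear) norm: sum of singular values
    print(schatten_p(M, 0.5))  # nonconvex penalty, a tighter surrogate for rank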

Figures and Tables | References | Supplementary Material | Related Articles | Metrics
Integrating element correlation with prompt-based spatial relation extraction
Feng WANG, Sheng XU, Peifeng LI, Qiaoming ZHU
Front. Comput. Sci., 2025, 19(2): 192308.
https://doi.org/10.1007/s11704-023-3305-4

Abstract   HTML   PDF (3418KB)

Spatial relations in text describe how a geographical entity is located in space relative to a reference entity. Extracting spatial relations from text is a fundamental task in natural language understanding. Previous studies have focused only on generic fine-tuning methods with additional classifiers, ignoring both the semantic correlation between different spatial elements and the large gap between the relation extraction task and the pre-training objectives of language models. To address these two issues, we propose a spatial relation extraction model based on Dual-view Prompt and Element Correlation (DPEC). Specifically, we first reformulate spatial relation extraction as a masked language modeling task with a Dual-view Prompt (i.e., a Link Prompt and a Confidence Prompt). The Link Prompt not only guides the model to incorporate more contextual information relevant to spatial relation extraction, but also aligns better with the original pre-training task of the language model. Meanwhile, the Confidence Prompt measures the confidence of candidate triplets in the Link Prompt and serves as a supplement for identifying easily confused examples. Moreover, we incorporate element correlation to measure the consistency between different spatial elements, which is an effective cue for judging the plausibility of spatial relations. Experimental results on the popular SpaceEval benchmark show that DPEC significantly outperforms the SOTA baselines.
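
To give a flavor of the prompt reformulation, the sketch below builds a hypothetical Link-Prompt-style template (the wording and function name are our assumptions, not the paper's exact template); a masked language model would then fill the mask with a spatial relation verbalizer.

    def link_prompt(sentence, trajector, landmark, mask_token="[MASK]"):
        # The MLM fills the mask with a spatial relation word, e.g., "on" or "in".
        return f'{sentence} The entity "{trajector}" is {mask_token} the entity "{landmark}".'

    print(link_prompt("The lamp stands on the table.", "lamp", "table"))
    # The lamp stands on the table. The entity "lamp" is [MASK] the entity "table".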

Figures and Tables | References | Supplementary Material | Related Articles | Metrics
Labeling-based centrality approaches for identifying critical edges on temporal graphs
Tianming ZHANG, Jie ZHAO, Cibo YU, Lu CHEN, Yunjun GAO, Bin CAO, Jing FAN, Ge YU
Front. Comput. Sci., 2025, 19(2): 192601.
https://doi.org/10.1007/s11704-023-3424-y

Abstract   HTML   PDF (8616KB)

Edge closeness and betweenness centralities are widely used path-based metrics for characterizing the importance of edges in networks. In general graphs, edge closeness centrality measures the importance of an edge by the shortest distances from that edge to all other vertices, while edge betweenness centrality ranks edges by the fraction of all-pairs shortest paths that pass through them. Extensive research has gone into centrality computation over general graphs that omit the time dimension. However, numerous real-world networks are modeled as temporal graphs, whose nodes relate to each other at different time instants. The temporal property is important and should not be neglected, because it governs how information flows through the network. This motivates our study of edge centrality computation on temporal graphs. We introduce the concepts of labels and the label dominance relation, and propose multi-threaded, labeling-based parallel methods on OpenMP to efficiently compute edge closeness and betweenness centralities with respect to three types of optimal temporal paths. For edge closeness centrality, a time segmentation strategy and two observations are presented to aggregate related temporal edges for uniform processing. For edge betweenness centrality, temporal edge dependency formulas, a labeling-based forward-backward scanning strategy, and a compression-based optimization are further proposed to iteratively accumulate centrality values. Extensive experiments on 13 real temporal graphs provide detailed insights into the efficiency and effectiveness of the proposed methods. Compared with state-of-the-art methods, the labeling-based methods achieve speedups of up to two orders of magnitude.
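
For intuition about the optimal temporal paths these centralities are defined over, here is a simplified sketch (ours; the paper's labeling index is far more involved) that computes earliest-arrival times with a single pass over edges sorted by departure time.

    def earliest_arrival(edges, source):
        """edges: (u, v, t_dep, t_arr) with t_dep <= t_arr; one pass in t_dep order."""
        arrival = {source: 0}
        for u, v, t_dep, t_arr in sorted(edges, key=lambda e: e[2]):
            # An edge is usable only if we can reach u before it departs.
            if u in arrival and arrival[u] <= t_dep and t_arr < arrival.get(v, float("inf")):
                arrival[v] = t_arr
        return arrival

    edges = [("a", "b", 1, 2), ("b", "c", 3, 4), ("a", "c", 5, 6)]
    print(earliest_arrival(edges, "a"))  # {'a': 0, 'b': 2, 'c': 4}

Note how the direct edge (a, c) is beaten by the two-hop journey because time ordering, not hop count, decides optimality — exactly why centrality over temporal paths differs from its static counterpart.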

Figures and Tables | References | Related Articles | Metrics
Image and Graphics
FPSMix: data augmentation strategy for point cloud classification
Taiyan CHEN, Xianghua YING
Front. Comput. Sci., 2025, 19(2): 192701.
https://doi.org/10.1007/s11704-023-3455-4

Abstract   HTML   PDF (3296KB)

Data augmentation is a widely used regularization strategy in deep neural networks to mitigate overfitting and enhance generalization. For point cloud data, mixing two samples to generate new training examples has proven effective. In this paper, we propose a novel and effective approach called Farthest Point Sampling Mix (FPSMix) for augmenting point cloud data. Our method leverages farthest point sampling, a standard technique in point cloud processing, to generate new samples by mixing points from two original point clouds. Another key innovation is a significance-based loss function that weights the soft labels of a mixed sample according to the classification loss of each part of the new sample, where the parts originate from the two source point clouds. In this way, our method accounts for the importance of different parts of the mixed sample during training, allowing the model to learn better global features. Experimental results demonstrate that FPSMix, combined with the significance-based loss function, improves the classification accuracy of point cloud models and achieves performance comparable to state-of-the-art data augmentation methods. Moreover, our approach is complementary to techniques that focus on local features, and combining them further enhances the classification accuracy of the baseline model.
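
The NumPy sketch below (our rendering of the standard primitive, not the authors' implementation) shows farthest point sampling and how two sampled halves could be concatenated into a mixed cloud; the sizes and the half-half split are illustrative assumptions.

    import numpy as np

    def farthest_point_sampling(points, k, start=0):
        """Greedily pick k indices, each farthest from those already chosen."""
        chosen = [start]
        dist = np.linalg.norm(points - points[start], axis=1)
        for _ in range(k - 1):
            nxt = int(np.argmax(dist))
            chosen.append(nxt)
            dist = np.minimum(dist, np.linalg.norm(points - points[nxt], axis=1))
        return np.asarray(chosen)

    rng = np.random.default_rng(0)
    a, b = rng.random((1024, 3)), rng.random((1024, 3))  # two toy point clouds
    k = 512                                              # half of each cloud
    mixed = np.vstack([a[farthest_point_sampling(a, k)],
                       b[farthest_point_sampling(b, k)]])
    print(mixed.shape)  # (1024, 3); a soft label would weight both classes, e.g. by k/1024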

Figures and Tables | References | Related Articles | Metrics
SSA: semantic structure aware inference on CNN networks for weakly pixel-wise dense predictions without cost
Yanpeng SUN, Zechao LI
Front. Comput. Sci., 2025, 19(2): 192702.
https://doi.org/10.1007/s11704-024-3571-9

Abstract   HTML   PDF (5384KB)

Weakly supervised pixel-wise dense prediction tasks currently use Class Attention Maps (CAMs) to generate pseudo masks as ground truth. However, existing methods often add trainable modules to expand the immature class activation maps, which incurs significant computational overhead and complicates the training process. In this work, we investigate the semantic structure information concealed within a CNN, and propose a semantic structure aware inference (SSA) method that exploits this information to obtain high-quality CAMs without any additional training cost. Specifically, a semantic structure modeling module (SSM) is first proposed to generate the class-agnostic semantic correlation representation, in which each item denotes the affinity between one category of objects and all the others. The immature CAMs are then refined through a dot-product operation that applies this semantic structure information. Finally, the polished CAMs from different backbone stages are fused as the output. The advantage of SSA lies in its parameter-free nature and the absence of additional training cost, making it suitable for various weakly supervised pixel-wise dense prediction tasks. Extensive experiments on weakly supervised object localization and weakly supervised semantic segmentation confirm the effectiveness of SSA.
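
To convey the parameter-free refinement step, here is a toy NumPy sketch (our reading, with made-up shapes): immature CAMs are propagated along a row-normalized affinity matrix derived from backbone features via a dot product.

    import numpy as np

    rng = np.random.default_rng(0)
    C, HW, D = 4, 196, 32            # classes, flattened positions, feature dim
    cams = rng.random((C, HW))       # immature class activation maps
    feats = rng.random((HW, D))      # features from one backbone stage (stand-in)

    affinity = feats @ feats.T                        # semantic correlation
    affinity /= affinity.sum(axis=1, keepdims=True)   # row-normalize
    refined = cams @ affinity                         # dot-product refinement
    print(refined.shape)                              # (4, 196)

Because every quantity is computed from the frozen backbone's own features, this kind of refinement adds no trainable parameters, matching the "without cost" claim.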

Figures and Tables | References | Supplementary Material | Related Articles | Metrics
Linkable and traceable anonymous authentication with fine-grained access control
Peng LI, Junzuo LAI, Dehua ZHOU, Lianguan HUANG, Meng SUN, Wei WU, Ye YANG
Front. Comput. Sci., 2025, 19(2): 192801.
https://doi.org/10.1007/s11704-023-3225-3

Abstract   HTML   PDF (11846KB)

To prevent the misuse of anonymity, numerous anonymous authentication schemes with linkability and/or traceability have been proposed to ensure different types of accountability. Previous schemes cannot simultaneously achieve public linking and tracing while enforcing access control. We therefore offer a new tool, linkable and traceable anonymous authentication with fine-grained access control (LTAA-FGAC), designed to satisfy: (i) access control, i.e., only authorized users who meet a designated authentication policy may authenticate messages; (ii) public linkability, i.e., anyone can tell whether two authentications with respect to a common identifier were created by the same user; and (iii) public traceability, i.e., anyone can deduce a double-authenticating user's identity from two linked authentications without help from other parties. We formally define the basic security requirements of the new tool and give a generic construction that satisfies them. We then present a formal security proof and an implementation of our proposed LTAA-FGAC scheme.
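
The public traceability property can be illustrated with the classic two-points-on-a-line trick; the toy below is our simplification, not the paper's construction. One authentication per identifier reveals nothing about the identity, but two linked ones let anyone solve for it.

    import hashlib
    import secrets

    P = 2**127 - 1  # toy prime field; a real scheme uses proper group parameters

    def h(identifier):
        return int.from_bytes(hashlib.sha256(identifier.encode()).digest(), "big") % P

    def authenticate(identity, identifier):
        c = secrets.randbelow(P - 1) + 1              # fresh nonzero challenge
        return c, (identity * c + h(identifier)) % P  # point on a secret-slope line

    def trace(identifier, auth1, auth2):
        (c1, y1), (c2, y2) = auth1, auth2
        identity = (y1 - y2) * pow(c1 - c2, -1, P) % P    # slope of the line
        assert (identity * c1 + h(identifier)) % P == y1  # public link check
        return identity

    uid = 424242
    a1 = authenticate(uid, "ballot-7")
    a2 = authenticate(uid, "ballot-7")       # double authentication, same identifier
    print(trace("ballot-7", a1, a2) == uid)  # True: anyone can trace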

Figures and Tables | References | Related Articles | Metrics
8 articles