Frontiers of Computer Science

ISSN 2095-2228

ISSN 2095-2236 (Online)

CN 10-1014/TP

Postal Subscription Code 80-970

2018 Impact Factor: 1.129


Volume 16 Issue 5

RESEARCH ARTICLE
GCSS: a global collaborative scheduling strategy for wide-area high-performance computing
Yao SONG, Limin XIAO, Liang WANG, Guangjun QIN, Bing WEI, Baicheng YAN, Chenhao ZHANG
Front. Comput. Sci. 2022, 16 (5): 165105.
https://doi.org/10.1007/s11704-021-0353-5

Abstract   HTML   PDF (15795KB)

Wide-area high-performance computing is widely used for large-scale parallel computing applications owing to its abundant computing and storage resources. However, the geographical distribution of these resources makes efficient task distribution and data placement more challenging. To achieve higher system performance, this study proposes a two-level global collaborative scheduling strategy for wide-area high-performance computing environments. The strategy integrates lightweight solution selection, redundant data placement and task-stealing mechanisms, optimizing task distribution and data placement to achieve efficient computing in wide-area environments. The experimental results indicate that, compared with the state-of-the-art collaborative scheduling algorithm HPS+, the proposed strategy reduces the makespan by 23.24%, improves computing and storage resource utilization by 8.28% and 21.73% respectively, and incurs similar global data migration costs.
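The task-stealing mechanism mentioned in the abstract can be illustrated with a minimal sketch (the queue layout and function names are our own illustration, not the paper's implementation): an idle worker pulls work from the busiest peer.

```python
from collections import deque

def steal_task(queues, idle_worker):
    """Hypothetical sketch of task stealing: an idle worker takes one
    task from the tail of the longest other queue, balancing load
    without central coordination.

    `queues` maps worker id -> deque of pending tasks.
    """
    victim = max((w for w in queues if w != idle_worker),
                 key=lambda w: len(queues[w]), default=None)
    if victim is None or not queues[victim]:
        return None
    task = queues[victim].pop()          # steal from the victim's tail
    queues[idle_worker].appendleft(task)
    return task
```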

Figures and Tables | References | Supplementary Material | Related Articles | Metrics
REVIEW ARTICLE
Prediction of job characteristics for intelligent resource allocation in HPC systems: a survey and future directions
Zhengxiong HOU, Hong SHEN, Xingshe ZHOU, Jianhua GU, Yunlan WANG, Tianhai ZHAO
Front. Comput. Sci. 2022, 16 (5): 165107.
https://doi.org/10.1007/s11704-022-0625-8

Abstract   HTML   PDF (2874KB)

Nowadays, high-performance computing (HPC) clusters are increasingly popular, and large volumes of job logs recording many years of operation traces have been accumulated. At the same time, the HPC cloud makes it possible to access HPC services remotely. To execute applications, both HPC end-users and cloud users need to request specific resources for different workloads by themselves. Because users are usually not familiar with the hardware details, software layers and performance behavior of the underlying HPC systems, it is hard for them to select optimal resource configurations in terms of performance, cost, and energy efficiency. Hence, how to provide on-demand services with intelligent resource allocation is a critical issue in the HPC community, and prediction of job characteristics plays a key role in intelligent resource allocation. This paper presents a survey of existing work and future directions for the prediction of job characteristics for intelligent resource allocation in HPC systems. We first review existing techniques for obtaining performance and energy-consumption data of jobs. Then we survey techniques for single-objective predictions of runtime, queue time, power and energy consumption, cost and optimal resource configuration for input jobs, as well as multi-objective predictions. We conclude by discussing future trends, research challenges and possible solutions towards intelligent resource allocation in HPC systems.

Figures and Tables | References | Supplementary Material | Related Articles | Metrics
RESEARCH ARTICLE
Self-corrected unsupervised domain adaptation
Yunyun WANG, Chao WANG, Hui XUE, Songcan CHEN
Front. Comput. Sci. 2022, 16 (5): 165323.
https://doi.org/10.1007/s11704-021-1010-8

Abstract   HTML   PDF (2345KB)

Unsupervised domain adaptation (UDA), which aims to use knowledge from a label-rich source domain to help learn an unlabeled target domain, has recently attracted much attention. UDA methods mainly concentrate on source classification and distribution alignment between domains in the expectation of correct target predictions. In this paper, by contrast, we attempt to learn the target prediction directly and end to end, and develop a Self-corrected unsupervised domain adaptation (SCUDA) method with probabilistic label correction. SCUDA adopts a probabilistic label corrector to learn and correct the target labels directly. Specifically, besides the model parameters, the target pseudo-labels are also updated during learning and corrected by an anchor variable, which preserves the class candidates for each sample. Experiments on real datasets show the competitiveness of SCUDA.

Figures and Tables | References | Supplementary Material | Related Articles | Metrics
Community detection with attributed random walk via seed replacement
Yang CHANG, Huifang MA, Liang CHANG, Zhixin LI
Front. Comput. Sci. 2022, 16 (5): 165324.
https://doi.org/10.1007/s11704-021-0482-x

Abstract   HTML   PDF (16591KB)

Community detection methods based on random walks are widely adopted in various network analysis tasks. They can capture structural and attribute information while alleviating the effects of noise. Although random walks on plain networks have been studied before, in real-world networks nodes are often not pure vertices but have different characteristics, described by the rich set of data associated with them. These node attributes contain plentiful information that often complements the network and brings opportunities to random-walk-based analysis. However, node attributes make node interactions more complicated and are heterogeneous with respect to topological structures. Accordingly, attributed community detection based on random walks is challenging, as it requires joint modelling of graph structure and node attributes.

To bridge this gap, we propose Community detection with Attributed random walk via Seed replacement (CAS). Our model overcomes the limitation of methods that directly use the original network topology while ignoring attribute information. In particular, the algorithm consists of four stages to better identify communities: (1) select initial seed nodes in the network; (2) capture a better-quality seed replacement path set; (3) generate the structure-attribute interaction transition matrix and perform the colored random walk; (4) use the parallel conductance to expand the communities. Experiments on synthetic and real-world networks demonstrate the effectiveness of CAS.
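The structure-attribute interaction transition matrix of stage (3) can be sketched as a convex mix of row-normalized adjacency and row-normalized attribute similarity (an illustrative simplification on our part; the paper's colored random walk is more involved):

```python
def transition_matrix(adj, attr_sim, alpha=0.5):
    """Sketch of a structure-attribute transition matrix: each row mixes
    normalized adjacency (structure) with normalized attribute similarity,
    weighted by `alpha`, then is re-normalized so rows sum to 1.
    """
    n = len(adj)
    P = []
    for i in range(n):
        deg = sum(adj[i]) or 1          # avoid division by zero
        sim = sum(attr_sim[i]) or 1
        row = [(1 - alpha) * adj[i][j] / deg + alpha * attr_sim[i][j] / sim
               for j in range(n)]
        s = sum(row) or 1
        P.append([v / s for v in row])
    return P
```

A random walker following P then favors neighbors that are both topologically connected and attribute-similar.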

Figures and Tables | References | Supplementary Material | Related Articles | Metrics
RESEARCH ARTICLE
Citywide package deliveries via crowdshipping: minimizing the efforts from crowdsourcers
Sijing CHENG, Chao CHEN, Shenle PAN, Hongyu HUANG, Wei ZHANG, Yuming FENG
Front. Comput. Sci. 2022, 16 (5): 165327.
https://doi.org/10.1007/s11704-021-0568-5

Abstract   HTML   PDF (32358KB)

Most current crowdsourced logistics systems aim to minimize system cost and maximize delivery capacity, but the efforts of crowdsourcers such as drivers are almost ignored: in hitchhiking-ride-based package deliveries, drivers usually need to take long-distance detours. In this paper, we propose an approach that integrates offline trajectory data mining and online route-and-schedule optimization in the hitchhiking-ride scenario to find optimal delivery routes for packages and drivers. Specifically, we propose a two-phase framework for delivery route planning and scheduling. In the first phase, historical trajectory data are mined offline to build the package transport network. In the second phase, we model delivery route planning and package-taxi matching as an integer linear programming problem and solve it with the Gurobi optimizer. After that, taxis are scheduled to deliver packages along optimal delivery paths via a newly designed scheduling strategy. We evaluate our approach with real-world datasets; the results show that it can complete citywide package deliveries with a high success rate and low extra effort from taxi drivers.
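The package-taxi matching objective can be illustrated on a toy instance. The paper solves the large-scale integer linear program with Gurobi; this hypothetical sketch just enumerates assignments (each taxi carrying at most one package) to minimize total driver detour:

```python
from itertools import permutations

def match_packages(detour):
    """Brute-force sketch of package-taxi matching for tiny instances.

    `detour[i][j]` is the extra driving effort if taxi i carries package j.
    Returns the assignment (taxi index per package) with minimum total
    detour; exhaustive search stands in for the paper's ILP formulation.
    """
    n_taxis, n_pkgs = len(detour), len(detour[0])
    best, best_cost = None, float("inf")
    for perm in permutations(range(n_taxis), n_pkgs):
        cost = sum(detour[i][j] for j, i in enumerate(perm))
        if cost < best_cost:
            best, best_cost = perm, cost
    return best, best_cost
```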

Figures and Tables | References | Supplementary Material | Related Articles | Metrics
Multiband decomposition and spectral discriminative analysis for motor imagery BCI via deep neural network
Pengpai WANG, Mingliang WANG, Yueying ZHOU, Ziming XU, Daoqiang ZHANG
Front. Comput. Sci. 2022, 16 (5): 165328.
https://doi.org/10.1007/s11704-021-0587-2

Abstract   HTML   PDF (13632KB)

Human limb movement imagery, which can be used in limb neural disorder rehabilitation and brain-controlled external devices, has become a significant control paradigm in the domain of brain-computer interfaces (BCI). Although numerous pioneering studies have been devoted to motor imagery classification based on electroencephalography (EEG) signals, their performance is somewhat limited due to insufficient analysis of the key effective frequency bands of EEG signals. In this paper, we propose a model of multiband decomposition and spectral discriminative analysis for motor imagery classification, called the variational sample-long short-term memory (VS-LSTM) network. Specifically, we first use a channel fusion operator to reduce the signal channels of the raw EEG signal. Then, we use the variational mode decomposition (VMD) model to decompose the EEG signal into six band-limited intrinsic mode functions (BIMFs) for further noise reduction. To select discriminative frequency bands, we calculate the sample entropy (SampEn) value of each frequency band and select the maximum. Finally, an LSTM model predicts the motor imagery class from the frequency band with the largest SampEn value. An open-access public dataset is used to evaluate the effectiveness of the proposed model. In this dataset, 15 subjects performed motor imagery tasks with elbow flexion/extension, forearm supination/pronation and hand open/close of the right upper limb. The experimental results show that the average accuracy over the seven kinds of motor imagery was 76.2%, and the average accuracy of binary classification (imagery vs. rest) was 96.6%, outperforming state-of-the-art deep-learning-based models. This framework significantly improves the accuracy of motor imagery classification by selecting effective frequency bands. This research is very meaningful for BCIs and inspiring for end-to-end learning research.
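The band-selection criterion, sample entropy, can be sketched in a few lines (a standard SampEn definition, not the paper's code; in practice the tolerance r is usually scaled by the signal's standard deviation):

```python
from math import log

def sample_entropy(x, m=2, r=0.2):
    """Sample entropy: -ln(A/B), where B counts template pairs of length m
    and A counts pairs of length m+1 that match within tolerance r
    (Chebyshev distance). A more irregular signal yields a larger value,
    which is why the BIMF with the largest SampEn is selected.
    """
    def matches(mm):
        n = len(x) - m            # same template count for lengths m and m+1
        hits = 0
        for i in range(n):
            for j in range(i + 1, n):
                if max(abs(x[i + k] - x[j + k]) for k in range(mm)) <= r:
                    hits += 1
        return hits

    b, a = matches(m), matches(m + 1)
    if a == 0 or b == 0:
        return float("inf")       # no matches: maximal irregularity
    return -log(a / b)
```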

Figures and Tables | References | Supplementary Material | Related Articles | Metrics
REVIEW ARTICLE
A survey of discourse parsing
Jiaqi LI, Ming LIU, Bing QIN, Ting LIU
Front. Comput. Sci. 2022, 16 (5): 165329.
https://doi.org/10.1007/s11704-021-0500-z

Abstract   HTML   PDF (2450KB)

Discourse parsing is an important research area in natural language processing (NLP), which aims to parse the discourse structure of coherent sentences. In this survey, we introduce several different kinds of discourse parsing tasks, mainly including RST-style discourse parsing, PDTB-style discourse parsing, and discourse parsing for multiparty dialogue. For these tasks, we introduce the classical and recent existing methods, especially neural network approaches. After that, we describe the applications of discourse parsing for other NLP tasks, such as machine reading comprehension and sentiment analysis. Finally, we discuss the future trends of the task.

Figures and Tables | References | Supplementary Material | Related Articles | Metrics
RESEARCH ARTICLE
Efficient policy evaluation by matrix sketching
Cheng CHEN, Weinan ZHANG, Yong YU
Front. Comput. Sci. 2022, 16 (5): 165330.
https://doi.org/10.1007/s11704-021-0354-4

Abstract   HTML   PDF (2745KB)

In reinforcement learning, policy evaluation aims to predict the long-term value of a state under a certain policy. Since high-dimensional representations are increasingly common in reinforcement learning, reducing the computational cost is a significant problem for policy evaluation. Many recent works adopt matrix sketching methods to accelerate least-squares temporal difference (TD) algorithms and quasi-Newton temporal difference algorithms. Among these sketching methods, truncated incremental SVD shows better performance because it is stable and efficient. However, the convergence properties of incremental SVD are still open. In this paper, we first show that conventional incremental SVD algorithms can have enormous approximation errors in the worst case. Then we propose a variant of incremental SVD with better theoretical guarantees that shrinks the singular values periodically. Moreover, we employ our improved incremental SVD to accelerate least-squares TD and quasi-Newton TD algorithms. The experimental results verify the correctness and effectiveness of our methods.
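The periodic singular-value shrinking can be sketched in the style of Frequent Directions (an assumption on our part; the paper's exact shrinking rule may differ):

```python
from math import sqrt

def shrink_spectrum(sigmas, k):
    """Sketch of periodic singular-value shrinking: keep the top-k
    singular values and subtract the energy of the (k+1)-th from each,
    which bounds the worst-case approximation error of the truncation.
    Names and rule are illustrative, not the paper's implementation.
    """
    s = sorted(sigmas, reverse=True)
    if len(s) <= k:
        return s
    delta = s[k] ** 2               # energy of the first discarded value
    return [sqrt(max(v * v - delta, 0.0)) for v in s[:k]]
```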

Figures and Tables | References | Supplementary Material | Related Articles | Metrics
Improving meta-learning model via meta-contrastive loss
Pinzhuo TIAN, Yang GAO
Front. Comput. Sci. 2022, 16 (5): 165331.
https://doi.org/10.1007/s11704-021-1188-9

Abstract   HTML   PDF (6396KB)

Recently, addressing the few-shot learning problem with the meta-learning framework has achieved great success. Regularization is a powerful technique widely used to improve machine learning algorithms, yet little research focuses on designing appropriate meta-regularizations to further improve the generalization of meta-learning models in few-shot learning. In this paper, we propose a novel meta-contrastive loss that can be regarded as such a regularization, filling this gap. The motivation of our method is that the limited data in few-shot learning is just a small sample from the whole data distribution, and different sampled parts can lead to differently biased representations of that distribution. Thus, the models trained on the few training data (support set) and test data (query set) may be misaligned in the model space, so that the model learned on the support set cannot generalize well to the query data. The proposed meta-contrastive loss is designed to align the models of the support and query sets to overcome this problem, improving the performance of the meta-learning model in few-shot learning. Extensive experiments demonstrate that our method improves the performance of different gradient-based meta-learning models in various learning problems, e.g., few-shot regression and classification.

Figures and Tables | References | Supplementary Material | Related Articles | Metrics
Endowing rotation invariance for 3D finger shape and vein verification
Hongbin XU, Weili YANG, Qiuxia WU, Wenxiong KANG
Front. Comput. Sci. 2022, 16 (5): 165332.
https://doi.org/10.1007/s11704-021-0475-9

Abstract   HTML   PDF (41164KB)

Finger vein biometrics have been extensively studied for their liveness-detection capability and the high security of intrinsic traits. However, vein-pattern distortion caused by finger rotation degrades the performance of CNNs in 2D finger vein recognition, especially in contactless mode. To address the finger posture variation problem, we propose a 3D finger vein verification system that extracts axial-rotation-invariant features. An efficient 3D finger vein reconstruction optimization model is proposed, and several acceleration strategies are adopted to achieve real-time 3D reconstruction on an embedded platform. The main contribution of this paper is that we are the first to propose a novel 3D point-cloud-based end-to-end neural network, 3DFVSNet, to extract deep axial-rotation-invariant features. In the network, the rotation problem is transformed into a permutation problem with the help of specially designed rotation groups. Finally, to validate the performance of the proposed network more rigorously and to enrich the database resources of the finger vein recognition community, we built the largest publicly available 3D finger vein dataset with different degrees of finger rotation: the Large-scale Finger Multi-Biometric Database-3D Pose Varied Finger Vein (SCUT LFMB-3DPVFV) Dataset. Experimental results on 3D finger vein datasets show that 3DFVSNet is strongly robust to axial rotation compared to other approaches.
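The idea of turning rotation into permutation and pooling over a group can be illustrated with cyclic shifts standing in for the paper's rotation groups (a deliberately simplified model, not 3DFVSNet itself): pooling any function over every group element makes the output identical for all rotated copies of the input.

```python
def group_invariant(x, h):
    """Invariance by group pooling: apply h to every cyclic shift of x
    and pool with max. Since rotating x only permutes the shift set,
    the pooled output is rotation-invariant by construction.
    """
    n = len(x)
    shifts = [tuple(x[(i + s) % n] for i in range(n)) for s in range(n)]
    return max(h(v) for v in shifts)
```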

Figures and Tables | References | Supplementary Material | Related Articles | Metrics
LETTER
Metric learning for domain adversarial network
Haifeng HU, Yan YANG, Yueming YIN, Jiansheng WU
Front. Comput. Sci. 2022, 16 (5): 165341.
https://doi.org/10.1007/s11704-022-1342-z

Abstract   HTML   PDF (651KB)
Figures and Tables | References | Supplementary Material | Related Articles | Metrics
RESEARCH ARTICLE
A mobile edge computing-based applications execution framework for Internet of Vehicles
Libing WU, Rui ZHANG, Qingan LI, Chao MA, Xiaochuan SHI
Front. Comput. Sci. 2022, 16 (5): 165506.
https://doi.org/10.1007/s11704-021-0425-6

Abstract   HTML   PDF (10709KB)

Mobile edge computing (MEC) is a promising technology for the Internet of Vehicles, especially for application offloading and resource allocation. Most existing offloading schemes are sub-optimal because they treat an application as a whole. In this paper, by contrast, we propose an application-centric framework and build a finer-grained offloading scheme based on application partitioning. In our framework, each application is modelled as a directed acyclic graph, where each node represents a subtask and each edge represents the data-flow dependency between a pair of subtasks. Both vehicles and the MEC server within communication range can serve as candidate offloading nodes. Offloading then consists of assigning these computing nodes to subtasks. In addition, the proposed scheme deals with the delay constraint of each subtask. The experimental evaluation shows that, compared to existing non-partitioning offloading schemes, the proposed scheme effectively improves application performance in terms of execution time and throughput.
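Assigning DAG subtasks to candidate nodes can be sketched with a greedy earliest-finish-time rule (illustrative only; it ignores the data-transfer delays and per-subtask delay constraints the paper handles):

```python
def schedule_dag(tasks, deps, exec_time):
    """Greedy sketch of subtask placement: process subtasks in topological
    order and place each on the candidate node with the earliest finish
    time, given when its predecessors complete and when the node is free.

    `tasks` is topologically sorted; `deps[t]` lists predecessors of t;
    `exec_time[t][node]` is t's execution time on each candidate node.
    Returns each subtask's finish time.
    """
    finish = {}
    node_free = {n: 0.0 for n in exec_time[tasks[0]]}
    for t in tasks:
        ready = max((finish[p] for p in deps.get(t, [])), default=0.0)
        best = min(node_free,
                   key=lambda n: max(node_free[n], ready) + exec_time[t][n])
        start = max(node_free[best], ready)
        finish[t] = start + exec_time[t][best]
        node_free[best] = finish[t]
    return finish
```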

Figures and Tables | References | Supplementary Material | Related Articles | Metrics
RESEARCH ARTICLE
Efficient and stable quorum-based log replication and replay for modern cluster-databases
Donghui WANG, Peng CAI, Weining QIAN, Aoying ZHOU
Front. Comput. Sci. 2022, 16 (5): 165612.
https://doi.org/10.1007/s11704-020-0210-y

Abstract   HTML   PDF (7276KB)

The modern in-memory database (IMDB) can support highly concurrent on-line transaction processing (OLTP) workloads and generate massive transactional logs per second. Quorum-based replication protocols such as Paxos and Raft have been widely used in distributed databases to offer higher availability and fault tolerance. However, replicating an IMDB is non-trivial, because the high transaction rate brings new challenges. First, the leader node in quorum replication should be adaptive to various transaction arrival rates and the processing capability of follower nodes. Second, followers are required to replay logs to catch up with the state of the leader in this highly concurrent setting, to reduce the visibility gap. Third, modern databases are often built on a cluster of commodity machines connected by low-end networks, in which network anomalies often happen. In this case, performance can be significantly affected because a follower node falls into a long exception-handling process (e.g., fetching lost logs from the leader). To this end, we build QuorumX, an efficient and stable quorum-based replication framework for IMDBs under heavy OLTP workloads. QuorumX combines critical-path-based batching and pipeline batching to provide an adaptive log-propagation scheme that obtains stable, high performance in various settings. Further, we propose a safe and coordination-free log-replay scheme to minimize the visibility gap between the leader and follower IMDBs, and we carefully design the process for the follower node to alleviate the influence of the unreliable network on replication performance. Our evaluation results with YCSB, TPC-C and a realistic micro-benchmark demonstrate that QuorumX achieves performance close to asynchronous primary-backup replication and always provides a stable service with data consistency and a low visibility gap.
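The quorum rule at the heart of such protocols can be sketched as follows: a log entry is committed once at least a quorum of replicas (leader included) have acknowledged it, so the commit index is the quorum-th largest acknowledged index. This is a generic Raft/Paxos-style illustration, not QuorumX's actual code:

```python
def commit_index(ack_index, quorum):
    """Given the highest log index each replica has acknowledged,
    return the highest index replicated on at least `quorum` replicas.
    Every entry up to that index is safe to commit.
    """
    acked = sorted(ack_index, reverse=True)
    if len(acked) < quorum:
        return 0
    return acked[quorum - 1]
```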

Figures and Tables | References | Supplementary Material | Related Articles | Metrics
LETTER
Fast and efficient parallel breadth-first search with power-law graph transformation
Zite JIANG, Tao LIU, Shuai ZHANG, Mengting YUAN, Haihang YOU
Front. Comput. Sci. 2022, 16 (5): 165613.
https://doi.org/10.1007/s11704-021-1004-6

Abstract   HTML   PDF (2243KB)
Figures and Tables | References | Supplementary Material | Related Articles | Metrics
RESEARCH ARTICLE
Line drawing via saliency map and ETF
Shiguang LIU, Ziqi LIU
Front. Comput. Sci. 2022, 16 (5): 165707.
https://doi.org/10.1007/s11704-021-1027-z

Abstract   HTML   PDF (9069KB)

Line drawing is a means of visual communication that can effectively convey useful information to viewers. Artists usually draw what they see rather than what exists: the most attractive object is emphasized while the rest are inhibited. Moreover, artists draw the whole object with coherent rather than fractured lines, which also contributes to the visual effect. From these perspectives, generating line drawings with saliency and coherence remains a great challenge. Existing line drawing generation methods ignore these important properties and cannot generate coherent lines in some cases, since they do not take into account how artists draw a picture. To this end, we propose a novel saliency-aware line drawing method to better capture the viewer's attention on an image. First, a saliency-enhanced line extraction method combining a saliency map and the edge tangent flow (ETF) is proposed to ensure the saliency and coherence of lines, especially in salient but low-contrast areas. Then, the salient information is highlighted while irrelevant details are eliminated by inhibiting lines in less salient areas. Finally, the transparency of lines is adjusted to further emphasize important information. Various results show that our method generates line drawings with better visual saliency and coherence than state-of-the-art methods.

Figures and Tables | References | Supplementary Material | Related Articles | Metrics
REVIEW ARTICLE
Challenges and future directions of secure federated learning: a survey
Kaiyue ZHANG, Xuan SONG, Chenhan ZHANG, Shui YU
Front. Comput. Sci. 2022, 16 (5): 165817.
https://doi.org/10.1007/s11704-021-0598-z

Abstract   HTML   PDF (10553KB)

Federated learning came into being with increasing concern for privacy, as people's sensitive information is being exposed in the era of big data. It is an approach that does not collect users' raw data but instead aggregates model parameters from each client, thereby protecting users' privacy. Nonetheless, due to its inherently distributed nature, federated learning is more vulnerable to attacks, since users may upload malicious data to break down the federated learning server. In addition, some recent studies have shown that attackers can recover information merely from the parameters. Hence, there is still much room to improve current federated learning frameworks. In this survey, we give a brief review of state-of-the-art federated learning techniques and discuss improvements to federated learning in detail. Several open issues and existing solutions in federated learning are discussed, and we point out future research directions.
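The aggregation step common to federated learning frameworks can be sketched as weighted parameter averaging (FedAvg-style; a generic illustration of the paradigm the survey covers, not a specific scheme from it):

```python
def fed_avg(client_params, client_sizes):
    """Server-side federated averaging: combine each client's parameter
    vector weighted by its local sample count. Only parameters cross the
    network, never raw data, which is the privacy premise of FL.
    """
    total = sum(client_sizes)
    dim = len(client_params[0])
    return [sum(p[i] * n for p, n in zip(client_params, client_sizes)) / total
            for i in range(dim)]
```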

Figures and Tables | References | Supplementary Material | Related Articles | Metrics
RESEARCH ARTICLE
(Full) Leakage resilience of Fiat-Shamir signatures over lattices
Yuejun LIU, Yongbin ZHOU, Rui ZHANG, Yang TAO
Front. Comput. Sci. 2022, 16 (5): 165819.
https://doi.org/10.1007/s11704-021-0586-3

Abstract   HTML   PDF (1340KB)

Fiat-Shamir is a mainstream construction paradigm for lattice-based signature schemes. While its theoretical security is well studied, its implementation security in the presence of leakage is a relatively under-explored topic. Specifically, even though several side-channel attacks on lattice-based Fiat-Shamir signature (FS-Sig) schemes have been proposed since 2016, little work on the leakage resilience of these schemes has appeared. Worse still, the proof idea behind the leakage resilience of FS-Sig schemes based on traditional number-theoretic assumptions does not apply to most lattice-based FS-Sig schemes. We therefore propose a framework to construct fully leakage-resilient lattice-based FS-Sig schemes in the bounded memory leakage (BML) model. The framework consists of two parts. The first part shows how to construct leakage-resilient FS-Sig schemes in the BML model from leakage-resilient versions of non-lossy or lossy identification schemes, which can be instantiated under lattice assumptions. The second part shows how to construct fully leakage-resilient FS-Sig schemes from leakage-resilient ones together with a new property called state reconstruction, and we show that almost all lattice-based FS-Sig schemes have this property. As a concrete application of our framework, we apply it to existing lattice-based FS-Sig schemes and analyze their security in the leakage setting.

Figures and Tables | References | Supplementary Material | Related Articles | Metrics
RESEARCH ARTICLE
Efficient protocols for heavy hitter identification with local differential privacy
Dan ZHAO, Suyun ZHAO, Hong CHEN, Ruixuan LIU, Cuiping LI, Wenjuan LIANG
Front. Comput. Sci. 2022, 16 (5): 165825.
https://doi.org/10.1007/s11704-021-0412-y

Abstract   HTML   PDF (5598KB)

Local differential privacy (LDP), a technique that employs unbiased statistical estimation instead of collecting real data, is widely adopted in data collection, as it can protect every user's privacy and prevent the leakage of sensitive information. The segment pairs method (SPM), the multiple-channel method (MCM) and the prefix extending method (PEM) are three known LDP protocols for heavy hitter identification, and more generally for the frequency oracle (FO) problem, over large domains. However, the low scalability of these three algorithms often limits their application: communication and computation costs strongly affect their efficiency, and excessive grouping or sharing of the privacy budget makes the results inaccurate. To address these problems, this study proposes independent channel (IC) and mixed independent channel (MIC), efficient LDP protocols for FO over large domains. We design a flexible method for splitting a large domain that reduces the number of sub-domains, and we exploit the false-positive rate with interaction to obtain accurate estimates. Numerical experiments demonstrate that IC outperforms all existing solutions under the same privacy guarantee, while MIC performs well under a small privacy budget with the lowest communication cost.
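As background, the basic LDP frequency oracle that protocols such as SPM, MCM and PEM build on can be sketched with generalized randomized response (a textbook mechanism, not the paper's IC/MIC protocols): each user reports the true value with probability p and a random other value otherwise, and the aggregator debiases the counts.

```python
import random
from math import exp

def grr_perturb(value, domain, eps, rng):
    """Generalized randomized response: report the true value with
    probability p = e^eps / (e^eps + k - 1), else a uniform other value.
    """
    k = len(domain)
    p = exp(eps) / (exp(eps) + k - 1)
    if rng.random() < p:
        return value
    return rng.choice([v for v in domain if v != value])

def grr_estimate(reports, domain, eps):
    """Unbiased count estimate from perturbed reports:
    c_hat(v) = (count(v) - n*q) / (p - q), with q = (1-p)/(k-1).
    """
    k, n = len(domain), len(reports)
    p = exp(eps) / (exp(eps) + k - 1)
    q = (1 - p) / (k - 1)
    return {v: (reports.count(v) - n * q) / (p - q) for v in domain}
```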

Figures and Tables | References | Supplementary Material | Related Articles | Metrics
Towards a better prediction of subcellular location of long non-coding RNA
Zhao-Yue ZHANG, Zi-Jie SUN, Yu-He YANG, Hao LIN
Front. Comput. Sci. 2022, 16 (5): 165903.
https://doi.org/10.1007/s11704-021-1015-3

Abstract   HTML   PDF (2939KB)

The spatial distribution pattern of long non-coding RNA (lncRNA) in the cell is tightly related to its function. With the increase of publicly available subcellular location data, a number of computational methods have been developed for recognizing the subcellular localization of lncRNAs. Unfortunately, these methods suffer from the low discriminative power of redundant features or from overfitting caused by oversampling. To address these issues and enhance prediction performance, we present a support vector machine-based approach incorporating a mutual information algorithm and an incremental feature selection strategy. The new predictor achieves an overall accuracy of 91.60%. The highly automated web tool is available at lin-group.cn/server/iLoc-LncRNA(2.0)/website. It will help researchers obtain knowledge of lncRNA subcellular localization.
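The mutual-information filter underlying such feature selection can be sketched for discrete features (illustrative; in incremental feature selection, features are then added in ranked order and the best-performing subset is kept):

```python
from math import log
from collections import Counter

def mutual_information(feature, labels):
    """Mutual information I(X;Y) = sum_{x,y} p(x,y) log(p(x,y)/(p(x)p(y)))
    between a discrete feature and class labels. Higher values indicate
    more discriminative features; constant features score zero.
    """
    n = len(labels)
    px = Counter(feature)
    py = Counter(labels)
    pxy = Counter(zip(feature, labels))
    return sum((c / n) * log((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())
```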

Figures and Tables | References | Supplementary Material | Related Articles | Metrics
24 articles