Frontiers of Computer Science

ISSN 2095-2228

ISSN 2095-2236(Online)

CN 10-1014/TP

Postal Subscription Code 80-970

2018 Impact Factor: 1.129


Top Read Articles
Fully distributed identity-based threshold signatures with identifiable aborts
Yan JIANG, Youwen ZHU, Jian WANG, Xingxin LI
Front. Comput. Sci.    2023, 17 (5): 175813-.   https://doi.org/10.1007/s11704-022-2370-4
Abstract   HTML   PDF (13073KB)

Identity-based threshold signature (IDTS) is a powerful primitive for protecting identity and data privacy, in which parties can collaboratively sign a given message as a signer without reconstructing the signing key. Nevertheless, most IDTS schemes rely on a trusted key generation center (KGC). Recently, some IDTS schemes have achieved escrow-free security against a corrupted KGC, but all of them are vulnerable to denial-of-service attacks in the dishonest-majority setting, where cheaters may force the protocol to abort without providing any feedback. In this work, we present a fully decentralized IDTS scheme that resists both a corrupted KGC and denial-of-service attacks. To this end, we design threshold protocols for distributed key generation, private key extraction, and signing that withstand collusion between KGCs and signers, and we propose an identification mechanism that can detect the identity of cheaters during key generation, private key extraction, and signing. Finally, we formally prove that the proposed scheme achieves threshold unforgeability against chosen-message attacks. The experimental results show that the computation time of both key generation and signing is under 1 s, and that of private key extraction is about 3 s, which is practical in distributed environments.
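The core threshold idea, that t-of-n parties can jointly use a signing key that no single party ever holds, can be illustrated with plain Shamir secret sharing. This is a generic sketch over a toy prime field, not the paper's IDTS protocol, and `share`/`reconstruct` are hypothetical helper names:

```python
import random

P = 2**61 - 1  # toy prime field for share arithmetic (illustrative choice)

def share(secret, t, n):
    """Split `secret` into n Shamir shares; any t of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

key = 123456789
s = share(key, t=3, n=5)
assert reconstruct(s[:3]) == key   # any 3 of the 5 shares suffice
assert reconstruct(s[1:4]) == key
```

In a threshold signature the reconstruction never happens in the clear; each party instead applies its share inside the signing protocol, which is where the paper's identifiable-abort machinery comes in.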

Table and Figures | Reference | Supplementary Material | Related Articles | Metrics
Cited: WebOfScience(1)
Energy inefficiency diagnosis for Android applications: a literature review
Yuxia SUN, Jiefeng FANG, Yanjia CHEN, Yepang LIU, Zhao CHEN, Song GUO, Xinkai CHEN, Ziyuan TAN
Front. Comput. Sci.    2023, 17 (1): 171201-.   https://doi.org/10.1007/s11704-021-0532-4
Abstract   HTML   PDF (3970KB)

Android applications have become increasingly powerful in recent years. While their functionality is still of paramount importance to users, the energy efficiency of these applications is also gaining more and more attention. Researchers have discovered various types of energy defects in Android applications, which can quickly drain the battery power of mobile devices. Such defects not only cause inconvenience to users, but also frustrate Android developers, as diagnosing the energy inefficiency of a software product is a non-trivial task. In this work, we perform a literature review to understand the state of the art of energy inefficiency diagnosis for Android applications. We identified 55 research papers published in recent years and classified existing studies from four different perspectives: power estimation method, hardware component, type of energy defect, and program analysis approach. We also performed a cross-perspective analysis to summarize and compare the studied techniques. We hope that our review can help structure and unify the literature and shed light on future research, as well as draw developers' attention to building energy-efficient Android applications.

Table and Figures | Reference | Supplementary Material | Related Articles | Metrics
Cited: WebOfScience(1)
Meta-BN Net for few-shot learning
Wei GAO, Mingwen SHAO, Jun SHU, Xinkai ZHUANG
Front. Comput. Sci.    2023, 17 (1): 171302-.   https://doi.org/10.1007/s11704-021-1237-4
Abstract   HTML   PDF (7851KB)

In this paper, we propose a lightweight network with an adaptive batch normalization module, called Meta-BN Net, for few-shot classification. Unlike existing few-shot learning methods, which consist of complex models or algorithms, our approach extends batch normalization, an essential part of current deep neural network training, whose potential has not been fully explored. In particular, a meta-module is introduced that learns to adaptively generate more powerful affine transformation parameters, known as γ and β, in the batch normalization layer, so that the representation ability of batch normalization can be activated. Experimental results on miniImageNet demonstrate that Meta-BN Net not only outperforms the baseline methods by a large margin but is also competitive with recent state-of-the-art few-shot learning methods. We also conduct experiments on the Fewshot-CIFAR100 and CUB datasets, and the results show that our approach is effective at boosting the performance of weak baseline networks. We believe our findings can motivate the exploration of the undiscovered capacity of basic components in neural networks, as well as more efficient few-shot learning methods.
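The idea of generating the affine parameters γ and β from a meta-module, rather than learning them as fixed per-layer weights, can be sketched in a few lines. The task embedding and the generator matrices `W_gamma`/`W_beta` below are hypothetical stand-ins for the paper's meta-module:

```python
import numpy as np

rng = np.random.default_rng(0)

def meta_bn(x, task_embedding, W_gamma, W_beta, eps=1e-5):
    """Batch-normalize x, but let a meta-module generate the affine
    parameters gamma and beta from a task embedding (names hypothetical)."""
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mu) / np.sqrt(var + eps)
    gamma = 1.0 + task_embedding @ W_gamma   # generated, not fixed per layer
    beta = task_embedding @ W_beta
    return gamma * x_hat + beta

x = rng.normal(size=(8, 4))          # batch of 8 samples, 4 features
z = rng.normal(size=(3,))            # task embedding
Wg = rng.normal(size=(3, 4)) * 0.1   # meta-module weights (toy values)
Wb = rng.normal(size=(3, 4)) * 0.1
out = meta_bn(x, z, Wg, Wb)
assert out.shape == (8, 4)
```

A real implementation would train the generator with episodic meta-learning; the sketch only shows where the generated γ and β enter the normalization.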

Table and Figures | Reference | Supplementary Material | Related Articles | Metrics
Cited: WebOfScience(1)
Intellectual property protection for deep semantic segmentation models
Hongjia RUAN, Huihui SONG, Bo LIU, Yong CHENG, Qingshan LIU
Front. Comput. Sci.    2023, 17 (1): 171306-.   https://doi.org/10.1007/s11704-021-1186-y
Abstract   HTML   PDF (13894KB)

Deep neural networks have achieved great success in a variety of artificial intelligence fields. Since training a good deep model is often challenging and costly, such models are of great value and may even be key commercial intellectual property. Recently, deep model intellectual property protection has drawn great attention from both academia and industry, and numerous works have been proposed. However, most of them focus on the classification task. In this paper, we present the first attempt to protect deep semantic segmentation models from potential infringement. In detail, we design a new hybrid intellectual property protection framework that combines trigger-set based and passport based watermarking. Within it, the trigger-set based watermarking mechanism forces the network to output copyright watermarks for a pre-defined trigger image set, which enables black-box remote ownership verification. The passport based watermarking mechanism eliminates the ambiguity-attack risk of trigger-set based watermarking by adding an extra passport layer to the target model. Through extensive experiments, the proposed framework not only demonstrates its effectiveness on existing segmentation models, but also shows strong robustness against different attack techniques.

Table and Figures | Reference | Supplementary Material | Related Articles | Metrics
Cited: WebOfScience(4)
Spreadsheet quality assurance: a literature review
Pak-Lok POON, Man Fai LAU, Yuen Tak YU, Sau-Fun TANG
Front. Comput. Sci.    2024, 18 (2): 182203-.   https://doi.org/10.1007/s11704-023-2384-6
Abstract   HTML   PDF (1637KB)

Spreadsheets are very commonly used for information processing to support decision making by both professional developers and non-technical end users. Moreover, business intelligence and artificial intelligence are increasingly popular in industry, where spreadsheets have been used as, or integrated into, intelligent or expert systems in various application domains. However, it has been repeatedly reported that faults often exist in operational spreadsheets, which can severely compromise the quality of conclusions and decisions based on them. With a view to systematically examining this problem via a survey of existing work, we conducted a comprehensive literature review on the quality issues and related techniques of spreadsheets over a 35.5-year period (January 1987 to June 2022) for target journals and a 10.5-year period (January 2012 to June 2022) for target conferences. Among our findings, two major ones are: (a) spreadsheet quality is best addressed throughout the whole spreadsheet life cycle, rather than at just a few specific stages; and (b) relatively more studies focus on spreadsheet testing and debugging (related to fault detection and removal) than on spreadsheet specification, modeling, and design (related to development). As prevention is better than cure, more research should be performed on the early stages of the spreadsheet life cycle. Enlightened by our comprehensive review, we have identified the major research gaps and highlighted key research directions for future work in this area.

Table and Figures | Reference | Supplementary Material | Related Articles | Metrics
Cited: WebOfScience(1)
A survey of music emotion recognition
Donghong HAN, Yanru KONG, Jiayi HAN, Guoren WANG
Front. Comput. Sci.    2022, 16 (6): 166335-.   https://doi.org/10.1007/s11704-021-0569-4
Abstract   HTML   PDF (888KB)

Music is the language of emotions. In recent years, music emotion recognition has attracted widespread attention in academia and industry, since it can be widely used in fields such as recommendation systems, automatic music composition, psychotherapy, and music visualization. Especially with the rapid development of artificial intelligence, deep learning-based music emotion recognition is gradually becoming mainstream. This paper gives a detailed survey of music emotion recognition. Starting with some preliminary knowledge of music emotion recognition, the paper first introduces some commonly used evaluation metrics. Then a three-part research framework is put forward. Based on this framework, the knowledge and algorithms involved in each part are introduced with detailed analysis, including commonly used datasets, emotion models, feature extraction, and emotion recognition algorithms. After that, the challenging problems and development trends of music emotion recognition technology are discussed, and finally the whole paper is summarized.

Table and Figures | Reference | Supplementary Material | Related Articles | Metrics
Cited: Crossref(14) WebOfScience(21)
Scene-adaptive crowd counting method based on meta learning with dual-input network DMNet
Haoyu ZHAO, Weidong MIN, Jianqiang XU, Qi WANG, Yi ZOU, Qiyan FU
Front. Comput. Sci.    2023, 17 (1): 171304-.   https://doi.org/10.1007/s11704-021-1207-x
Abstract   HTML   PDF (6952KB)

Crowd counting, which aims to count the number of people in different crowded scenes, has recently become a hot research topic. Existing methods are mainly based on a training-testing pattern and rely on large training data, and they fail to accurately count crowds in real-world scenes because of the limited generalization capability of the models. To alleviate this issue, a scene-adaptive crowd counting method based on meta-learning with a Dual-illumination Merging Network (DMNet) is proposed in this paper. The proposed method, based on learning-to-learn and few-shot learning, is able to adapt to different scenes that contain only a few labeled images. To generate high-quality density maps and count crowds in low-lighting scenes, the DMNet is proposed, which contains a Multi-scale Feature Extraction module and an Element-wise Fusion module. The Multi-scale Feature Extraction module extracts image features with multi-scale convolutions, which helps to improve network accuracy. The Element-wise Fusion module fuses the low-lighting feature and the illumination-enhanced feature, which supplements the missing illumination in low-lighting environments. Experimental results on the WorldExpo'10, DISCO, UCSD, and Mall benchmarks show that the proposed method outperforms existing state-of-the-art methods in accuracy and achieves satisfactory results.
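The Element-wise Fusion module can be pictured as a per-channel merge of the low-lighting features with the illumination-enhanced ones. The convex-combination weight below is a hypothetical stand-in for the module's learned fusion:

```python
import numpy as np

def elementwise_fusion(f_low, f_enh, w):
    """Fuse a low-lighting feature map with an illumination-enhanced one
    via a per-channel weight w in [0, 1] (a simplified stand-in for
    DMNet's Element-wise Fusion module)."""
    return w * f_enh + (1.0 - w) * f_low

f_low = np.zeros((2, 4, 4))   # dark-scene features (channels, H, W)
f_enh = np.ones((2, 4, 4))    # illumination-enhanced features
w = np.array([0.25, 0.75]).reshape(2, 1, 1)   # toy per-channel weights
fused = elementwise_fusion(f_low, f_enh, w)
assert fused[0, 0, 0] == 0.25 and fused[1, 0, 0] == 0.75
```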

Table and Figures | Reference | Supplementary Material | Related Articles | Metrics
Cited: WebOfScience(3)
VulLoc: vulnerability localization based on inducing commits and fixing commits
Lili BO, Yue LI, Xiaobing SUN, Xiaoxue WU, Bin LI
Front. Comput. Sci.    2023, 17 (3): 173207-.   https://doi.org/10.1007/s11704-022-1729-x
Abstract   HTML   PDF (662KB)
Table and Figures | Reference | Supplementary Material | Related Articles | Metrics
Cited: Crossref(1)
A subgraph matching algorithm based on subgraph index for knowledge graph
Yunhao SUN, Guanyu LI, Jingjing DU, Bo NING, Heng CHEN
Front. Comput. Sci.    2022, 16 (3): 163606-.   https://doi.org/10.1007/s11704-020-0360-y
Abstract   HTML   PDF (12660KB)

The problem of subgraph matching is a fundamental issue in graph search and is NP-complete. Recently, subgraph matching has become a popular research topic in the field of knowledge graph analysis, with a wide range of applications including question answering and semantic search. In this paper, we study the problem of subgraph matching on knowledge graphs. Specifically, given a query graph q and a data graph G, the problem of subgraph matching is to find all possible subgraph-isomorphic mappings of q onto G. A knowledge graph is formed as a directed labeled multi-graph, with multiple edges between a pair of vertices, and it has denser semantic and structural features than a general graph. To accelerate subgraph matching on knowledge graphs, we propose a novel subgraph matching algorithm based on a subgraph index, called FGqT-Match. The algorithm consists of two key designs. One is a subgraph index of matching-driven flow graphs (FGqT), which reduces redundant calculations in advance. The other is a multi-label weight matrix, which evaluates a near-optimal matching tree to minimize the intermediate candidates. With the aid of these two key designs, all subgraph-isomorphic mappings can be found quickly by traversing only the FGqT. Extensive empirical studies on real and synthetic graphs demonstrate that our techniques outperform the state-of-the-art algorithms.
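For intuition, the task being accelerated, enumerating all subgraph-isomorphic mappings of q on G, can be written as a brute-force search over vertex assignments; index structures such as the paper's FGqT exist precisely to prune this enumeration:

```python
from itertools import permutations

def subgraph_mappings(q_nodes, q_edges, g_nodes, g_edges):
    """Enumerate all subgraph-isomorphic mappings of a query graph into a
    data graph by brute force (the baseline that subgraph indexes avoid)."""
    g_edge_set = set(g_edges)
    results = []
    for combo in permutations(g_nodes, len(q_nodes)):
        phi = dict(zip(q_nodes, combo))
        # keep the assignment only if every query edge maps to a data edge
        if all((phi[u], phi[v]) in g_edge_set for (u, v) in q_edges):
            results.append(phi)
    return results

# query: a -> b; data graph: 1 -> 2 -> 3
maps = subgraph_mappings(['a', 'b'], [('a', 'b')],
                         [1, 2, 3], [(1, 2), (2, 3)])
assert maps == [{'a': 1, 'b': 2}, {'a': 2, 'b': 3}]
```

The permutation loop is exponential in the query size, which is why matching-driven indexes pay off on large knowledge graphs.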

Table and Figures | Reference | Supplementary Material | Related Articles | Metrics
Cited: Crossref(8) WebOfScience(15)
Towards a better prediction of subcellular location of long non-coding RNA
Zhao-Yue ZHANG, Zi-Jie SUN, Yu-He YANG, Hao LIN
Front. Comput. Sci.    2022, 16 (5): 165903-.   https://doi.org/10.1007/s11704-021-1015-3
Abstract   HTML   PDF (2939KB)

The spatial distribution pattern of a long non-coding RNA (lncRNA) in the cell is tightly related to its function. With the increase of publicly available subcellular location data, a number of computational methods have been developed to recognize the subcellular localization of lncRNAs. Unfortunately, these computational methods suffer from the low discriminative power of redundant features or from overfitting caused by oversampling. To address these issues and enhance prediction performance, we present a support vector machine-based approach that incorporates a mutual information algorithm and an incremental feature selection strategy. As a result, the new predictor achieves an overall accuracy of 91.60%. The highly automated web tool is available at lin-group.cn/server/iLoc-LncRNA(2.0)/website. It will help advance knowledge of lncRNA subcellular localization.
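The feature-selection loop described above, ranking features by mutual information with the label and then growing the feature set incrementally, can be sketched as follows; the `evaluate` callback is a placeholder for the paper's SVM cross-validation:

```python
from math import log2
from collections import Counter

def mutual_information(xs, ys):
    """I(X;Y) for two discrete sequences (e.g., k-mer style features)."""
    n = len(xs)
    px, py = Counter(xs), Counter(ys)
    pxy = Counter(zip(xs, ys))
    return sum((c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

def incremental_selection(features, labels, evaluate):
    """Rank features by MI with the label, then keep the best-scoring
    prefix under `evaluate` (a stand-in for SVM cross-validation)."""
    ranked = sorted(features,
                    key=lambda f: mutual_information(features[f], labels),
                    reverse=True)
    best_score, best_set = float('-inf'), []
    for k in range(1, len(ranked) + 1):
        score = evaluate(ranked[:k])
        if score > best_score:
            best_score, best_set = score, ranked[:k]
    return best_set

labels = [0, 0, 1, 1]
features = {'informative': [0, 0, 1, 1], 'noise': [0, 1, 0, 1]}
chosen = incremental_selection(features, labels,
                               evaluate=lambda fs: -len(fs))  # toy: prefer fewer
assert chosen == ['informative']
```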

Table and Figures | Reference | Supplementary Material | Related Articles | Metrics
Cited: Crossref(12) WebOfScience(22)
Challenges and future directions of secure federated learning: a survey
Kaiyue ZHANG, Xuan SONG, Chenhan ZHANG, Shui YU
Front. Comput. Sci.    2022, 16 (5): 165817-.   https://doi.org/10.1007/s11704-021-0598-z
Abstract   HTML   PDF (10553KB)

Federated learning came into being with increasing concern for privacy and security, as people's sensitive information is exposed in the era of big data. It is an algorithm that does not collect users' raw data, but instead aggregates model parameters from each client, thereby protecting users' privacy. Nonetheless, due to its inherently distributed nature, federated learning is more vulnerable to attacks, since users may upload malicious data to break down the federated learning server. In addition, some recent studies have shown that attackers can recover information merely from the parameters. Hence, there is still much room to improve current federated learning frameworks. In this survey, we give a brief review of the state-of-the-art federated learning techniques and discuss their improvements in detail. Several open issues and existing solutions in federated learning are discussed. We also point out future research directions for federated learning.
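The aggregation step described above, combining client parameters without touching raw data, is commonly realized as a data-size-weighted average in the style of FedAvg; this minimal sketch is generic, not a scheme from the survey:

```python
def fed_avg(client_params, client_sizes):
    """Weighted average of client model parameters: the aggregation step
    that lets the server update a global model without seeing raw data."""
    total = sum(client_sizes)
    n_params = len(client_params[0])
    return [sum(p[i] * s for p, s in zip(client_params, client_sizes)) / total
            for i in range(n_params)]

# two clients, two parameters each; client 2 holds 3x the data of client 1
params = fed_avg([[1.0, 0.0], [5.0, 4.0]], client_sizes=[1, 3])
assert params == [4.0, 3.0]
```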

Table and Figures | Reference | Supplementary Material | Related Articles | Metrics
Cited: Crossref(24) WebOfScience(39)
Rehearsal: learning from prediction to decision
Zhi-Hua ZHOU
Front. Comput. Sci.    2022, 16 (4): 164352-.   https://doi.org/10.1007/s11704-022-2900-0
Abstract   HTML   PDF (614KB)
Table and Figures | Reference | Related Articles | Metrics
Cited: WebOfScience(9)
DeepM6ASeq-EL: prediction of human N6-methyladenosine (m6A) sites with LSTM and ensemble learning
Juntao CHEN, Quan ZOU, Jing LI
Front. Comput. Sci.    2022, 16 (2): 162302-.   https://doi.org/10.1007/s11704-020-0180-0
Abstract   HTML   PDF (7889KB)

N6-methyladenosine (m6A) is a prevalent methylation modification that plays a vital role in various biological processes, such as metabolism, mRNA processing, synthesis, and transport. Recent studies have suggested that m6A modification is related to common diseases such as cancer, tumours, and obesity. Therefore, accurate prediction of methylation sites in RNA sequences has emerged as a critical issue in bioinformatics. However, traditional high-throughput sequencing and wet-bench experimental techniques have the disadvantages of high cost, significant time requirements, and inaccurate identification of sites. Through the use of these traditional experimental methods, however, researchers have produced many large databases of m6A sites. With the support of these databases and existing deep learning methods, we developed an m6A site predictor named DeepM6ASeq-EL, which integrates an ensemble of five LSTM and CNN classifiers with a hard-voting combination strategy. Compared to the state-of-the-art prediction method WHISTLE (average AUC 0.948 and 0.880), DeepM6ASeq-EL had lower accuracy in m6A site prediction (average AUC: 0.861 for the full-transcript models and 0.809 for the mature messenger RNA models) when tested on six independent datasets.
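The hard-voting strategy used to combine the five LSTM/CNN classifiers reduces to a majority vote per site; the predictions below are made up for illustration:

```python
from collections import Counter

def hard_vote(classifier_outputs):
    """Combine per-classifier predictions by majority vote, as in an
    ensemble with a hard-voting combination strategy."""
    n_samples = len(classifier_outputs[0])
    return [Counter(clf[i] for clf in classifier_outputs).most_common(1)[0][0]
            for i in range(n_samples)]

# 3 classifiers, 4 candidate m6A sites (1 = predicted methylated)
preds = hard_vote([[1, 0, 1, 0],
                   [1, 1, 0, 0],
                   [0, 1, 1, 0]])
assert preds == [1, 1, 1, 0]
```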

Table and Figures | Reference | Supplementary Material | Related Articles | Metrics
Cited: Crossref(50) WebOfScience(68)
Cancer classification with data augmentation based on generative adversarial networks
Kaimin WEI, Tianqi LI, Feiran HUANG, Jinpeng CHEN, Zefan HE
Front. Comput. Sci.    2022, 16 (2): 162601-.   https://doi.org/10.1007/s11704-020-0025-x
Abstract   HTML   PDF (8634KB)

Accurate diagnosis is a significant step in cancer treatment. Machine learning can support doctors in prognosis decision-making, but its performance is often weakened by the high dimensionality and small quantity of genetic data. Fortunately, deep learning can effectively process high-dimensional data as the amount of data grows. However, the problem of inadequate data remains unsolved and has lowered the performance of deep learning. To address it, we propose a generative adversarial model that uses non-target cancer data to help train the target generator. We use a reconstruction loss to further stabilize model training and improve the quality of generated samples. We also present a cancer classification model to optimize classification performance. Experimental results show that the mean absolute error of cancer genes generated by our model is 19.3% lower than that of DC-GAN, and the classification accuracy of our generated data is higher than that of data created by a GAN. As for the classification model, its accuracy reaches 92.6%, which is 7.6% higher than that of the model without any generated data.

Table and Figures | Reference | Supplementary Material | Related Articles | Metrics
Cited: Crossref(10) WebOfScience(20)
SCENERY: a lightweight block cipher based on Feistel structure
Jingya FENG, Lang LI
Front. Comput. Sci.    2022, 16 (3): 163813-.   https://doi.org/10.1007/s11704-020-0115-9
Abstract   HTML   PDF (2691KB)

In this paper, we propose a new lightweight block cipher called SCENERY. The SCENERY design targets both hardware and software platforms. SCENERY is a 64-bit block cipher supporting 80-bit keys, and its data processing consists of 28 rounds. The round function of SCENERY consists of eight 4 × 4 S-boxes in parallel and a 32 × 32 binary matrix, and SCENERY can be implemented with some basic logic instructions. The hardware implementation of SCENERY requires only 1438 GE based on 0.18 μm CMOS technology, and in software, encrypting or decrypting a block takes approximately 1516 clock cycles on 8-bit microcontrollers and 364 clock cycles on 64-bit processors. Compared with other encryption algorithms, the performance of SCENERY is well balanced between hardware and software. According to our security analyses, SCENERY achieves a sufficient security margin against known attacks, such as differential cryptanalysis, linear cryptanalysis, impossible differential cryptanalysis, and related-key attacks.
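SCENERY's Feistel structure has the classic property that decryption reuses the round function without inverting it. A generic sketch with a toy round function (not SCENERY's actual S-box and matrix layer):

```python
def feistel_encrypt(left, right, round_keys, f):
    """Generic Feistel network: SCENERY instantiates f with 4x4 S-boxes
    and a 32x32 binary matrix; here f is any keyed mixing function."""
    for k in round_keys:
        left, right = right, left ^ f(right, k)
    return left, right

def feistel_decrypt(left, right, round_keys, f):
    """Decryption runs the rounds in reverse; no inverse of f is needed."""
    for k in reversed(round_keys):
        left, right = right ^ f(left, k), left
    return left, right

f = lambda x, k: ((x * 0x9E37) ^ k) & 0xFFFFFFFF   # toy round function
keys = list(range(28))                             # 28 rounds, as in SCENERY
ct = feistel_encrypt(0xDEADBEEF, 0x01234567, keys, f)
assert feistel_decrypt(*ct, keys, f) == (0xDEADBEEF, 0x01234567)
```

The swap-and-XOR structure is why each decryption round exactly undoes the matching encryption round, whatever f does.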

Table and Figures | Reference | Supplementary Material | Related Articles | Metrics
Cited: Crossref(5) WebOfScience(12)
New development of cognitive diagnosis models
Yingjie LIU, Tiancheng ZHANG, Xuecen WANG, Ge YU, Tao LI
Front. Comput. Sci.    2023, 17 (1): 171604-.   https://doi.org/10.1007/s11704-022-1128-3
Abstract   HTML   PDF (1078KB)

Cognitive diagnosis, the assessment of a student's cognitive ability, is a widespread concern in educational science. The cognitive diagnosis model (CDM) is an essential method for realizing cognitive diagnosis measurement. This paper presents new research on cognitive diagnosis models and introduces four individual aspects of probability-based CDMs and deep learning-based CDMs: higher-order latent traits, polytomous responses, polytomous attributes, and multilevel latent traits. The paper also sorts out the underlying ideas, model structures, and respective characteristics, and provides direction for the future development of cognitive diagnosis.

Table and Figures | Reference | Related Articles | Metrics
Cited: Crossref(2) WebOfScience(5)
A survey of Intel SGX and its applications
Wei ZHENG, Ying WU, Xiaoxue WU, Chen FENG, Yulei SUI, Xiapu LUO, Yajin ZHOU
Front. Comput. Sci.    2021, 15 (3): 153808-.   https://doi.org/10.1007/s11704-019-9096-y
Abstract   PDF (432KB)

This paper presents a comprehensive survey of the development of Intel SGX (Software Guard Extensions) processors and their applications. With the advent of SGX in 2013 and its subsequent development, the corresponding research works have also increased rapidly. In order to obtain a more comprehensive literature review related to SGX, we made a systematic analysis of the related papers in this area. We first searched five large-scale paper retrieval libraries by keyword (ACM Digital Library, IEEE/IET Electronic Library, SpringerLink, Web of Science, and Elsevier ScienceDirect). We read and analyzed a total of 128 SGX-related papers. A first round of extensive study was conducted to classify them, and a second round of intensive study was carried out to complete a comprehensive analysis of the papers from various aspects. We start with the working environment of SGX and give a conclusive summary of the trusted execution environment (TEE). We then focus on the applications of SGX. We also review and study multifarious attack methods against the SGX framework and some recent security improvements made to SGX. Finally, we summarize the advantages and disadvantages of SGX together with some future research opportunities. We hope this review can help existing and future research on SGX and its applications, for both developers and users.

Reference | Related Articles | Metrics
Cited: Crossref(13) WebOfScience(18)
Accountable attribute-based authentication with fine-grained access control and its application to crowdsourcing
Peng LI, Junzuo LAI, Yongdong WU
Front. Comput. Sci.    2023, 17 (1): 171802-.   https://doi.org/10.1007/s11704-021-0593-4
Abstract   HTML   PDF (1556KB)

We introduce a new notion called accountable attribute-based authentication with fine-grained access control (AccABA), which achieves (i) fine-grained access control that prevents ineligible users from authenticating; (ii) anonymity, such that no one can recognize the identity of a user; and (iii) public accountability, i.e., once a user authenticates two different messages, the corresponding authentications can easily be identified and linked, and anyone can reveal the user's identity without any help from a trusted third party. We then formalize the security requirements in terms of unforgeability, anonymity, linkability, and traceability, and give a generic construction that fulfills these requirements. Based on AccABA, we further present the first attribute-based, fair, anonymous, and publicly traceable crowdsourcing scheme on blockchain, which is designed to filter qualified workers to participate in tasks, ensures the fairness of the competition between workers, and balances the tension between anonymity and accountability.

Table and Figures | Reference | Supplementary Material | Related Articles | Metrics
Cited: WebOfScience(3)
A survey on federated learning: a perspective from multi-party computation
Fengxia LIU, Zhiming ZHENG, Yexuan SHI, Yongxin TONG, Yi ZHANG
Front. Comput. Sci.    2024, 18 (1): 181336-.   https://doi.org/10.1007/s11704-023-3282-7
Abstract   HTML   PDF (1635KB)

Federated learning is a promising learning paradigm that allows collaborative training of models across multiple data owners without sharing their raw datasets. To enhance privacy in federated learning, multi-party computation can be leveraged for secure communication and computation during model training. This survey provides a comprehensive review of how to integrate mainstream multi-party computation techniques into diverse federated learning setups for guaranteed privacy, as well as of the corresponding optimization techniques that improve model accuracy and training efficiency. We also pinpoint future directions for deploying federated learning in a wider range of applications.
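One mainstream multi-party computation building block in this setting is additive secret sharing, which lets aggregators sum client updates without seeing any individual one; a minimal sketch with an arbitrary modulus, not a specific protocol from the survey:

```python
import random

M = 2**32  # working modulus (illustrative choice)

def additive_shares(value, n):
    """Split a (quantized) model update into n random shares that sum to
    the value mod M; any n-1 shares reveal nothing about it."""
    parts = [random.randrange(M) for _ in range(n - 1)]
    parts.append((value - sum(parts)) % M)
    return parts

# each client secret-shares its update among 3 aggregators;
# every aggregator only ever sees sums of random-looking shares
updates = [7, 12, 5]
shares = [additive_shares(u, 3) for u in updates]
per_aggregator = [sum(col) % M for col in zip(*shares)]
assert sum(per_aggregator) % M == sum(updates)   # total recovered: 24
```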

Table and Figures | Reference | Supplementary Material | Related Articles | Metrics
Improving fault localization with pre-training
Zhuo ZHANG, Ya LI, Jianxin XUE, Xiaoguang MAO
Front. Comput. Sci.    2024, 18 (1): 181205-.   https://doi.org/10.1007/s11704-023-2597-8
Abstract   HTML   PDF (315KB)
Table and Figures | Reference | Supplementary Material | Related Articles | Metrics
Certificateless network coding proxy signatures from lattice
Huifang YU, Ning WANG
Front. Comput. Sci.    2023, 17 (5): 175810-.   https://doi.org/10.1007/s11704-022-2128-z
Abstract   HTML   PDF (1904KB)

Network coding can improve information transmission efficiency and reduce network resource consumption, so it is a very good platform for information transmission. Certificateless proxy signatures are widely applied in information security. However, certificateless proxy signatures based on classical number theory are not suitable for the network coding environment and cannot resist quantum computing attacks. In view of this, we construct certificateless network coding proxy signatures from lattices (LCL-NCPS). LCL-NCPS is a new multi-source signature scheme with anti-quantum, anti-pollution, and anti-forgery properties. In LCL-NCPS, each source node can output a message vector to intermediate and sink nodes, and the message vectors from different source nodes are linearly combined to improve the network transmission rate and robustness. In terms of space efficiency, LCL-NCPS obtains lower computation complexity by reducing the dimension of the proxy key. In terms of time efficiency, LCL-NCPS has higher computation efficiency in signing and verification.
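The operation the signature scheme must remain valid under is the linear combination of message vectors at intermediate nodes; a small sketch over an illustrative prime field:

```python
P = 257  # small prime field for coding coefficients (illustrative)

def combine(vectors, coeffs):
    """An intermediate node outputs a linear combination of the message
    vectors it receives: the operation a network coding signature must
    stay verifiable under."""
    return [sum(c * v[i] for c, v in zip(coeffs, vectors)) % P
            for i in range(len(vectors[0]))]

m1 = [1, 2, 3]   # message vector from source node 1
m2 = [4, 5, 6]   # message vector from source node 2
out = combine([m1, m2], coeffs=[2, 3])
assert out == [14, 19, 24]
```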

Table and Figures | Reference | Supplementary Material | Related Articles | Metrics
Side-channel analysis attacks based on deep learning network
Yu OU, Lang LI
Front. Comput. Sci.    2022, 16 (2): 162303-.   https://doi.org/10.1007/s11704-020-0209-4
Abstract   HTML   PDF (6444KB)

There has been growing interest in side-channel analysis (SCA) based on deep learning (DL). Various DL networks and models have been developed to improve the efficiency of SCA. However, few studies have investigated the impact of different models on attack results or the exact relationship between power consumption traces and intermediate values. Based on the convolutional neural network and the autoencoder, this paper proposes a Template Analysis Pre-trained DL Classification model named TAPDC, which contains three sub-networks. The TAPDC model detects the periodicity of power traces, relates power to the intermediate values, and mines deeper features with a multi-layer convolutional network. We implement the TAPDC model and compare it with two classical models in a fair experiment. The evaluation results show that the TAPDC model, with its autoencoder and deep convolutional feature-extraction structure, can more effectively extract information from power consumption traces. Using the classifier layer, the model links power information to the probability of an intermediate value, completing the conversion from power traces to intermediate values and greatly improving the efficiency of the power attack.

Table and Figures | Reference | Supplementary Material | Related Articles | Metrics
Cited: Crossref(5) WebOfScience(9)
Large sequence models for sequential decision-making: a survey
Muning WEN, Runji LIN, Hanjing WANG, Yaodong YANG, Ying WEN, Luo MAI, Jun WANG, Haifeng ZHANG, Weinan ZHANG
Front. Comput. Sci.    2023, 17 (6): 176349-.   https://doi.org/10.1007/s11704-023-2689-5
Abstract   HTML   PDF (2853KB)

Transformer architectures have facilitated the development of large-scale and general-purpose sequence models for prediction tasks in natural language processing and computer vision, e.g., GPT-3 and Swin Transformer. Although originally designed for prediction problems, it is natural to inquire about their suitability for sequential decision-making and reinforcement learning problems, which are typically beset by long-standing issues involving sample efficiency, credit assignment, and partial observability. In recent years, sequence models, especially the Transformer, have attracted increasing interest in the RL communities, spawning numerous approaches with notable effectiveness and generalizability. This survey presents a comprehensive overview of recent works aimed at solving sequential decision-making tasks with sequence models such as the Transformer, by discussing the connection between sequential decision-making and sequence modeling, and categorizing them based on the way they utilize the Transformer. Moreover, this paper puts forth various potential avenues for future research intending to improve the effectiveness of large sequence models for sequential decision-making, encompassing theoretical foundations, network architectures, algorithms, and efficient training systems.

Table and Figures | Reference | Supplementary Material | Related Articles | Metrics
Cited: WebOfScience(4)
The mass, fake news, and cognition security
Bin GUO, Yasan DING, Yueheng SUN, Shuai MA, Ke LI, Zhiwen YU
Front. Comput. Sci.    2021, 15 (3): 153806-.   https://doi.org/10.1007/s11704-020-9256-0
Abstract   PDF (338KB)

The widespread fake news in social networks poses threats to social stability, economic development, and political democracy. Numerous studies have explored effective detection approaches for online fake news, while few works study the intrinsic propagation and cognition mechanisms of fake news. Since the development of cognitive science paves a promising way for the prevention of fake news, we present a new research area called Cognition Security (CogSec), which studies the potential impacts of fake news on human cognition, ranging from misperception, untrusted knowledge acquisition, and targeted opinion/attitude formation to biased decision making, and investigates effective ways of debunking fake news. CogSec is a multidisciplinary research field that leverages knowledge from social science, psychology, cognitive science, neuroscience, AI, and computer science. We first propose related definitions to characterize CogSec and review the literature history. We further investigate the key research challenges and techniques of CogSec, including human-content cognition mechanisms, social influence and opinion diffusion, fake news detection, and malicious bot detection. Finally, we summarize open issues and future research directions, such as the cognition mechanism of fake news, influence maximization of fact-checking information, early detection of fake news, and fast refutation of fake news.

Reference | Supplementary Material | Related Articles | Metrics
Cited: Crossref(8) WebOfScience(8)
A comprehensive perspective of contrastive self-supervised learning
Songcan CHEN, Chuanxing GENG
Front. Comput. Sci.    2021, 15 (4): 154332.   https://doi.org/10.1007/s11704-021-1900-9
Abstract   PDF (296KB)
Reference | Related Articles | Metrics
Cited: WebOfScience(1)
Mean estimation over numeric data with personalized local differential privacy
Qiao XUE, Youwen ZHU, Jian WANG
Front. Comput. Sci.    2022, 16 (3): 163806.   https://doi.org/10.1007/s11704-020-0103-0
Abstract   HTML   PDF (10273KB)

The fast development of the Internet and mobile devices has given rise to the crowdsensing business model, in which individuals (users) contribute their data to help an institution (the data collector) analyze and release useful information. However, revealing personal data brings serious privacy threats to users, which impedes the wide application of the crowdsensing model. To address this problem, the notion of local differential privacy (LDP) was proposed. To accommodate the varied privacy preferences of users, researchers later proposed a new model, personalized local differential privacy (PLDP), which allows each user to specify their own privacy parameters. In this paper, we focus on the basic task of estimating the mean of a single numeric attribute under PLDP. Building on previous schemes for mean estimation under LDP, we employ the PLDP model to design novel schemes (LAP, DCP, PWP) that provide personalized privacy for each user. We then theoretically analyze the worst-case variance of the three proposed schemes and conduct experiments on synthetic and real datasets to evaluate their performance. The theoretical and experimental results show the optimality of PWP in the low privacy regime and a slight advantage of DCP in the high privacy regime.
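
To make the personalized setting concrete, here is a minimal sketch of Laplace-based mean estimation where each user perturbs with their own budget. This is a generic textbook mechanism used for illustration only, not the paper's LAP, DCP, or PWP schemes; all names and parameter choices below are assumptions.

```python
import numpy as np

def perturb(value, epsilon, rng):
    """One user's report: a value in [-1, 1] plus Laplace noise of scale
    2/epsilon (the sensitivity of the range [-1, 1] is 2)."""
    return value + rng.laplace(0.0, 2.0 / epsilon)

def estimate_mean(values, epsilons, seed=0):
    """Collector side: average the noisy reports; the Laplace noise is
    zero-mean, so the estimate is unbiased."""
    rng = np.random.default_rng(seed)
    reports = [perturb(v, e, rng) for v, e in zip(values, epsilons)]
    return float(np.mean(reports))

true_values = np.clip(np.random.default_rng(1).normal(0.3, 0.2, 20000), -1, 1)
epsilons = np.random.default_rng(2).uniform(0.5, 2.0, 20000)  # per-user budgets
est = estimate_mean(true_values, epsilons)
```

With many users the noise averages out, while each individual report satisfies that user's chosen epsilon; the paper's schemes refine this basic idea to reduce worst-case variance.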

Table and Figures | Reference | Supplementary Material | Related Articles | Metrics
Cited: Crossref(9) WebOfScience(10)
Compositional metric learning for multi-label classification
Yan-Ping SUN, Min-Ling ZHANG
Front. Comput. Sci.    2021, 15 (5): 155320.   https://doi.org/10.1007/s11704-020-9294-7
Abstract   PDF (577KB)

Multi-label classification aims to assign a set of proper labels to each instance, where distance metric learning can help improve the generalization ability of instance-based multi-label classification models. Existing multi-label metric learning techniques utilize pairwise constraints to enforce that examples with similar label assignments have close distances in the embedded feature space. In this paper, a novel distance metric learning approach for multi-label classification is proposed by modeling structural interactions between the instance space and the label space. On one hand, a compositional distance metric is employed, represented as a weighted sum of rank-1 positive semidefinite (PSD) matrices built from component bases. On the other hand, the compositional weights are optimized by exploiting triplet similarity constraints derived from both the instance and label spaces. Due to the compositional nature of the employed distance metric, the resulting problem admits a quadratic programming formulation with linear optimization complexity with respect to the number of training examples. We also derive a generalization bound for the proposed approach based on algorithmic robustness analysis of the compositional metric. Extensive experiments on sixteen benchmark data sets clearly validate the usefulness of the compositional metric in yielding effective distance metrics for multi-label classification.
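
The compositional form has a convenient computational consequence: a weighted sum of rank-1 PSD matrices M = Σ_k w_k b_k b_kᵀ with w_k ≥ 0 is itself PSD, and the squared distance can be evaluated without ever materializing M. The sketch below illustrates only that property; the bases and weights are arbitrary, and the triplet-constraint learning from the paper is not shown.

```python
import numpy as np

# Illustrative sketch of a compositional Mahalanobis-style metric: with
# nonnegative weights over rank-1 bases b_k b_k^T, the squared distance
# d(x, y)^2 = (x - y)^T M (x - y) is guaranteed nonnegative.

def compositional_distance(x, y, bases, weights):
    d = x - y
    # (d^T b_k)^2 expands (x-y)^T (b_k b_k^T) (x-y) without forming M.
    return float(sum(w * np.dot(d, b) ** 2 for w, b in zip(weights, bases)))

rng = np.random.default_rng(0)
bases = [rng.standard_normal(4) for _ in range(3)]   # component bases (arbitrary)
weights = [0.5, 1.0, 0.2]                            # nonnegative -> M is PSD
x, y = rng.standard_normal(4), rng.standard_normal(4)
dist2 = compositional_distance(x, y, bases, weights)
```

Because only the weights are optimized while the bases stay fixed, learning reduces to a quadratic program in w, which is what gives the linear complexity in the number of training examples.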

Reference | Supplementary Material | Related Articles | Metrics
Cited: Crossref(16) WebOfScience(24)
GCSS: a global collaborative scheduling strategy for wide-area high-performance computing
Yao SONG, Limin XIAO, Liang WANG, Guangjun QIN, Bing WEI, Baicheng YAN, Chenhao ZHANG
Front. Comput. Sci.    2022, 16 (5): 165105.   https://doi.org/10.1007/s11704-021-0353-5
Abstract   HTML   PDF (15795KB)

Wide-area high-performance computing is widely used for large-scale parallel computing applications owing to its abundant computing and storage resources. However, the geographical distribution of these resources makes efficient task distribution and data placement challenging. To achieve higher system performance, this study proposes a two-level global collaborative scheduling strategy for wide-area high-performance computing environments. The strategy integrates lightweight solution selection, redundant data placement, and task stealing mechanisms, optimizing task distribution and data placement for efficient computing in wide-area environments. The experimental results indicate that, compared with the state-of-the-art collaborative scheduling algorithm HPS+, the proposed strategy reduces the makespan by 23.24%, improves computing and storage resource utilization by 8.28% and 21.73% respectively, and achieves similar global data migration costs.
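
Of the three mechanisms, task stealing is the easiest to illustrate in isolation: an idle site rebalances load by taking queued tasks from the most loaded site. The toy sketch below shows only that idea under assumed names; it is not the paper's GCSS strategy, which also covers solution selection and redundant data placement.

```python
from collections import deque

# Toy task-stealing sketch: an idle site steals half of the queue of the
# most loaded site. Site names and the "steal half" policy are assumptions.

def steal(queues, idle_site):
    """Move half of the longest queue to the idle site; return tasks moved."""
    donor = max(queues, key=lambda s: len(queues[s]))
    if donor == idle_site or len(queues[donor]) < 2:
        return 0
    n = len(queues[donor]) // 2
    for _ in range(n):
        # Steal from the tail so the donor keeps working its queue head.
        queues[idle_site].append(queues[donor].pop())
    return n

queues = {"siteA": deque(["t1", "t2", "t3", "t4"]), "siteB": deque()}
moved = steal(queues, "siteB")
```

In a wide-area setting, a real implementation would additionally weigh data migration cost before stealing, since moving a task may also mean moving its input data.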

Table and Figures | Reference | Supplementary Material | Related Articles | Metrics
Cited: Crossref(1) WebOfScience(4)
Graph convolution machine for context-aware recommender system
Jiancan WU, Xiangnan HE, Xiang WANG, Qifan WANG, Weijian CHEN, Jianxun LIAN, Xing XIE
Front. Comput. Sci.    2022, 16 (6): 166614.   https://doi.org/10.1007/s11704-021-0261-8
Abstract   HTML   PDF (13815KB)

Recent advances in recommendation show that better user and item representations can be learned by performing graph convolutions on the user-item interaction graph. However, this finding is mostly restricted to the collaborative filtering (CF) scenario, where interaction contexts are not available. In this work, we extend the advantages of graph convolutions to context-aware recommender systems (CARS), a generic class of models that can handle various side information. We propose the Graph Convolution Machine (GCM), an end-to-end framework that consists of three components: an encoder, graph convolution (GC) layers, and a decoder. The encoder projects users, items, and contexts into embedding vectors, which are passed to the GC layers that refine user and item embeddings via context-aware graph convolutions on the user-item graph. The decoder digests the refined embeddings and outputs the prediction score by considering the interactions among user, item, and context embeddings. We conduct experiments on three real-world datasets from Yelp and Amazon, validating the effectiveness of GCM and the benefits of performing graph convolutions for CARS.
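
The encoder → GC layers → decoder pipeline can be sketched numerically as follows. The embedding sizes, the simple mean-aggregation rule, and the sum-of-inner-products decoder are illustrative assumptions for this sketch, not GCM's exact layer definitions.

```python
import numpy as np

# Minimal numeric sketch of an encoder / graph-convolution / decoder
# pipeline in the spirit of GCM; all dimensions and rules are assumed.

rng = np.random.default_rng(0)
d = 8
user_emb = rng.standard_normal((3, d))   # encoder output: 3 users
item_emb = rng.standard_normal((4, d))   # 4 items
ctx_emb = rng.standard_normal((5, d))    # 5 context values

# (user, item, context) triples forming the user-item interaction graph
interactions = [(0, 1, 2), (0, 3, 0), (1, 0, 4), (2, 2, 1)]

def gc_layer(user_emb, item_emb, ctx_emb, interactions):
    """Refine each user by averaging neighbor items fused with their
    interaction contexts, and symmetrically for items."""
    new_u, new_i = user_emb.copy(), item_emb.copy()
    for u in range(len(user_emb)):
        msgs = [item_emb[i] + ctx_emb[c] for uu, i, c in interactions if uu == u]
        if msgs:
            new_u[u] = user_emb[u] + np.mean(msgs, axis=0)
    for i in range(len(item_emb)):
        msgs = [user_emb[u] + ctx_emb[c] for u, ii, c in interactions if ii == i]
        if msgs:
            new_i[i] = item_emb[i] + np.mean(msgs, axis=0)
    return new_u, new_i

u1, i1 = gc_layer(user_emb, item_emb, ctx_emb, interactions)

def decode(u, i, c):
    # Decoder: pairwise inner products among user, item, context embeddings.
    return float(u @ i + u @ c + i @ c)

score = decode(u1[0], i1[1], ctx_emb[2])
```

The key point the sketch captures is that context enters the graph convolution itself, so the refined user and item embeddings already reflect the circumstances of past interactions before the decoder scores a new one.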

Table and Figures | Reference | Supplementary Material | Related Articles | Metrics
Cited: Crossref(11) WebOfScience(24)
Robust watermarking of databases in order-preserving encrypted domain
Shijun XIANG, Guanqi RUAN, Hao LI, Jiayong HE
Front. Comput. Sci.    2022, 16 (2): 162804.   https://doi.org/10.1007/s11704-020-0112-z
Abstract   HTML   PDF (16432KB)

Security of databases has always been a hot topic in the field of information security. Privacy can be protected by encrypting data, while data copyright can be protected with digital watermarking technology. By combining these two technologies, a database's copyright and privacy problems in the cloud can be effectively solved. Based on an order-preserving encryption scheme (OPES), circular histograms and digital watermarking technology, this paper proposes a new robust watermarking scheme for protecting databases in the encrypted domain. First, the OPES is used to encrypt data so that they are not exposed in the cloud. Then, the encrypted data are grouped and modified by means of a circular histogram to embed a digital watermark. Common database query operations remain available on the encrypted, watermarked database. At the receiver side, the digital watermark and the original data can be recovered with a secret key and a key table. Experimental results show that the proposed algorithm is robust against common database attacks in the encrypted domain.
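
The property that makes this combination work is that order-preserving encryption keeps plaintext order in the ciphertexts, so grouping and range queries (and histogram-based embedding) still operate correctly on encrypted values. The toy mapping below demonstrates only that property; it is a didactic stand-in, not the paper's OPES and not a secure construction.

```python
import random

# Toy order-preserving "encryption": map each plaintext in a small domain
# to a strictly increasing random ciphertext. Domain size, seed, and gap
# range are arbitrary assumptions for illustration.

def build_ope_table(domain_size, seed=42):
    rng = random.Random(seed)
    table, cipher = {}, 0
    for plaintext in range(domain_size):
        cipher += rng.randint(1, 100)   # strictly positive gap keeps order
        table[plaintext] = cipher
    return table

table = build_ope_table(256)
encrypted = [table[v] for v in (5, 17, 4)]   # order among 4 < 5 < 17 survives
```

Because ciphertext order mirrors plaintext order, the cloud can sort, group, and answer range queries without seeing the data, which is exactly what the encrypted-domain watermark embedding relies on.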

Table and Figures | Reference | Supplementary Material | Related Articles | Metrics
Cited: Crossref(4) WebOfScience(4)