Frontiers of Computer Science

ISSN 2095-2228

ISSN 2095-2236(Online)

CN 10-1014/TP

Postal Subscription Code 80-970

2018 Impact Factor: 1.129

Top Read Articles
Fully distributed identity-based threshold signatures with identifiable aborts
Yan JIANG, Youwen ZHU, Jian WANG, Xingxin LI
Front. Comput. Sci.    2023, 17 (5): 175813-.   https://doi.org/10.1007/s11704-022-2370-4
Abstract   HTML   PDF (13073KB)

Identity-based threshold signature (IDTS) is a powerful primitive for protecting identity and data privacy, in which parties can collaboratively sign a given message as a signer without reconstructing the signing key. Nevertheless, most IDTS schemes rely on a trusted key generation center (KGC). Recently, some IDTS schemes have achieved escrow-free security against a corrupted KGC, but all of them are vulnerable to denial-of-service attacks in the dishonest-majority setting, where cheaters may force the protocol to abort without providing any feedback. In this work, we present a fully decentralized IDTS scheme that resists both a corrupted KGC and denial-of-service attacks. To this end, we design threshold protocols for distributed key generation, private key extraction, and signing that withstand collusion between KGCs and signers, and we propose an identification mechanism that detects cheaters during key generation, private key extraction, and signing. Finally, we formally prove that the proposed scheme achieves threshold unforgeability against chosen-message attacks. The experimental results show that the computation time of both key generation and signing is under 1 s, while private key extraction takes about 3 s, which is practical in distributed environments.
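The (t, n)-threshold idea underlying such schemes can be sketched with Shamir secret sharing (an illustrative toy, not the paper's escrow-free protocol; the field prime and parameters are ours): any t parties can reconstruct the shared value, while fewer learn nothing.

```python
# Toy Shamir sharing over GF(P): split a signing-key-like secret into n
# shares so that any t of them reconstruct it (illustration only).
import random

P = 2**127 - 1  # a Mersenne prime serving as the field modulus

def share(secret, t, n):
    """Split `secret` into n shares; any t of them recover it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(P)."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

key = 123456789
shares = share(key, t=3, n=5)
assert reconstruct(shares[:3]) == key   # any 3 of the 5 shares suffice
assert reconstruct(shares[1:4]) == key
```

In a real threshold signature the key is never reconstructed in one place; each party signs with its share and the partial signatures are combined.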

Table and Figures | Reference | Supplementary Material | Related Articles | Metrics
Cited: WebOfScience(1)
Energy inefficiency diagnosis for Android applications: a literature review
Yuxia SUN, Jiefeng FANG, Yanjia CHEN, Yepang LIU, Zhao CHEN, Song GUO, Xinkai CHEN, Ziyuan TAN
Front. Comput. Sci.    2023, 17 (1): 171201-.   https://doi.org/10.1007/s11704-021-0532-4
Abstract   HTML   PDF (3970KB)

Android applications have become increasingly powerful in recent years. While their functionality remains of paramount importance to users, the energy efficiency of these applications is also gaining more and more attention. Researchers have discovered various types of energy defects in Android applications, which can quickly drain the battery power of mobile devices. Such defects not only inconvenience users but also frustrate Android developers, as diagnosing the energy inefficiency of a software product is a non-trivial task. In this work, we perform a literature review to understand the state of the art of energy inefficiency diagnosis for Android applications. We identified 55 research papers published in recent years and classified existing studies from four perspectives: power estimation method, hardware component, type of energy defect, and program analysis approach. We also conducted a cross-perspective analysis to summarize and compare the studied techniques. We hope that our review can help structure and unify the literature, shed light on future research, and draw developers' attention to building energy-efficient Android applications.

Table and Figures | Reference | Supplementary Material | Related Articles | Metrics
Cited: WebOfScience(1)
Meta-BN Net for few-shot learning
Wei GAO, Mingwen SHAO, Jun SHU, Xinkai ZHUANG
Front. Comput. Sci.    2023, 17 (1): 171302-.   https://doi.org/10.1007/s11704-021-1237-4
Abstract   HTML   PDF (7851KB)

In this paper, we propose a lightweight network with an adaptive batch normalization module, called Meta-BN Net, for few-shot classification. Unlike existing few-shot learning methods, which rely on complex models or algorithms, our approach extends batch normalization, an essential part of current deep neural network training whose potential has not been fully explored. In particular, a meta-module is introduced to learn to adaptively generate more powerful affine transformation parameters, known as γ and β, in the batch normalization layer, so that the representation ability of batch normalization can be activated. The experimental results on miniImageNet demonstrate that Meta-BN Net not only outperforms the baseline methods by a large margin but is also competitive with recent state-of-the-art few-shot learning methods. We also conduct experiments on the Fewshot-CIFAR100 and CUB datasets, and the results show that our approach is effective in boosting the performance of weak baseline networks. We believe our findings can motivate exploration of the untapped capacity of basic components in neural networks, as well as more efficient few-shot learning methods.
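The core idea, batch normalization whose affine parameters are generated from batch statistics rather than being fixed learned constants, can be sketched numerically (our simplification: the real meta-module is a learned network, and `meta_module` here is a hypothetical stand-in):

```python
# Numeric sketch of adaptive batch normalization: normalize a 1-D batch,
# then apply affine parameters gamma/beta produced by a "meta" function
# of the batch statistics (toy stand-in for a learned meta-module).
import math

def batch_norm(xs, gamma, beta, eps=1e-5):
    """Normalize a 1-D batch, then apply the affine transform."""
    mean = sum(xs) / len(xs)
    var = sum((x - mean) ** 2 for x in xs) / len(xs)
    return [gamma * (x - mean) / math.sqrt(var + eps) + beta for x in xs]

def meta_module(mean, var):
    """Hypothetical stand-in for the learned meta-module: maps batch
    statistics to affine parameters (the real one is a small network)."""
    return 1.0 + 0.1 * math.tanh(var), 0.1 * math.tanh(mean)

xs = [0.5, 1.5, 2.0, 4.0]
mean = sum(xs) / len(xs)
var = sum((x - mean) ** 2 for x in xs) / len(xs)
gamma, beta = meta_module(mean, var)
out = batch_norm(xs, gamma, beta)
assert abs(sum(out) / len(out) - beta) < 1e-6  # output mean equals beta
```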

Table and Figures | Reference | Supplementary Material | Related Articles | Metrics
Cited: WebOfScience(1)
Intellectual property protection for deep semantic segmentation models
Hongjia RUAN, Huihui SONG, Bo LIU, Yong CHENG, Qingshan LIU
Front. Comput. Sci.    2023, 17 (1): 171306-.   https://doi.org/10.1007/s11704-021-1186-y
Abstract   HTML   PDF (13894KB)

Deep neural networks have achieved great success in a variety of artificial intelligence fields. Since training a good deep model is often challenging and costly, such models are of great value and may even be key commercial intellectual property. Recently, deep model intellectual property protection has drawn great attention from both academia and industry, and numerous works have been proposed. However, most of them focus on the classification task. In this paper, we present the first attempt at protecting deep semantic segmentation models from potential infringement. In detail, we design a new hybrid intellectual property protection framework that combines trigger-set-based and passport-based watermarking. Within it, the trigger-set-based watermarking mechanism forces the network to output copyright watermarks for a pre-defined trigger image set, which enables black-box remote ownership verification. The passport-based watermarking mechanism eliminates the ambiguity-attack risk of trigger-set-based watermarking by adding an extra passport layer to the target model. Through extensive experiments, the proposed framework not only demonstrates its effectiveness on existing segmentation models but also shows strong robustness against different attack techniques.

Table and Figures | Reference | Supplementary Material | Related Articles | Metrics
Cited: WebOfScience(4)
Spreadsheet quality assurance: a literature review
Pak-Lok POON, Man Fai LAU, Yuen Tak YU, Sau-Fun TANG
Front. Comput. Sci.    2024, 18 (2): 182203-.   https://doi.org/10.1007/s11704-023-2384-6
Abstract   HTML   PDF (1637KB)

Spreadsheets are very common for information processing to support decision making by both professional developers and non-technical end users. Moreover, business intelligence and artificial intelligence are increasingly popular in industry, where spreadsheets have been used as, or integrated into, intelligent or expert systems in various application domains. However, it has been repeatedly reported that faults often exist in operational spreadsheets, which could severely compromise the quality of conclusions and decisions based on them. With a view to systematically examining this problem via a survey of existing work, we conducted a comprehensive literature review on the quality issues and related techniques of spreadsheets over a 35.5-year period (January 1987 to June 2022) for target journals and a 10.5-year period (January 2012 to June 2022) for target conferences. Two major findings are: (a) spreadsheet quality is best addressed throughout the whole spreadsheet life cycle, rather than at just a few specific stages; and (b) relatively more studies focus on spreadsheet testing and debugging (related to fault detection and removal) than on spreadsheet specification, modeling, and design (related to development). As prevention is better than cure, more research should target the early stages of the spreadsheet life cycle. Informed by our comprehensive review, we identify the major research gaps and highlight key research directions for future work in the area.

Table and Figures | Reference | Supplementary Material | Related Articles | Metrics
Cited: WebOfScience(1)
A survey of music emotion recognition
Donghong HAN, Yanru KONG, Jiayi HAN, Guoren WANG
Front. Comput. Sci.    2022, 16 (6): 166335-.   https://doi.org/10.1007/s11704-021-0569-4
Abstract   HTML   PDF (888KB)

Music is the language of emotions. In recent years, music emotion recognition has attracted widespread attention in academia and industry, since it can be widely used in fields such as recommendation systems, automatic music composition, psychotherapy, and music visualization. Especially with the rapid development of artificial intelligence, deep learning-based music emotion recognition is gradually becoming mainstream. This paper gives a detailed survey of music emotion recognition. Starting with some preliminary knowledge, it first introduces commonly used evaluation metrics. Then a three-part research framework is put forward, and the knowledge and algorithms involved in each part are analyzed in detail, including commonly used datasets, emotion models, feature extraction methods, and emotion recognition algorithms. Finally, the paper discusses the open challenges and development trends of music emotion recognition technology, and concludes with a summary.

Table and Figures | Reference | Supplementary Material | Related Articles | Metrics
Cited: Crossref(14) WebOfScience(21)
Scene-adaptive crowd counting method based on meta learning with dual-input network DMNet
Haoyu ZHAO, Weidong MIN, Jianqiang XU, Qi WANG, Yi ZOU, Qiyan FU
Front. Comput. Sci.    2023, 17 (1): 171304-.   https://doi.org/10.1007/s11704-021-1207-x
Abstract   HTML   PDF (6952KB)

Crowd counting, which aims to count the number of people in different crowded scenes, has recently become a hot research topic. Existing methods are mainly based on a training-testing pattern and rely on large training datasets, so they fail to accurately count crowds in real-world scenes because of the limited generalization capability of their models. To alleviate this issue, this paper proposes a scene-adaptive crowd counting method based on meta-learning with a Dual-illumination Merging Network (DMNet). The proposed method, built on learning-to-learn and few-shot learning, can adapt to different scenes given only a few labeled images. To generate high-quality density maps and count crowds in low-lighting scenes, DMNet contains a Multi-scale Feature Extraction module and an Element-wise Fusion module. The Multi-scale Feature Extraction module extracts image features with multi-scale convolutions, which helps improve network accuracy. The Element-wise Fusion module fuses the low-lighting feature and the illumination-enhanced feature, which compensates for the missing illumination in low-lighting environments. Experimental results on the WorldExpo'10, DISCO, UCSD, and Mall benchmarks show that the proposed method outperforms existing state-of-the-art methods in accuracy and achieves satisfactory results.

Table and Figures | Reference | Supplementary Material | Related Articles | Metrics
Cited: WebOfScience(3)
VulLoc: vulnerability localization based on inducing commits and fixing commits
Lili BO, Yue LI, Xiaobing SUN, Xiaoxue WU, Bin LI
Front. Comput. Sci.    2023, 17 (3): 173207-.   https://doi.org/10.1007/s11704-022-1729-x
Abstract   HTML   PDF (662KB)
Table and Figures | Reference | Supplementary Material | Related Articles | Metrics
Cited: Crossref(1)
A subgraph matching algorithm based on subgraph index for knowledge graph
Yunhao SUN, Guanyu LI, Jingjing DU, Bo NING, Heng CHEN
Front. Comput. Sci.    2022, 16 (3): 163606-.   https://doi.org/10.1007/s11704-020-0360-y
Abstract   HTML   PDF (12660KB)

Subgraph matching is a fundamental problem in graph search and is NP-complete. Recently, it has become a popular research topic in knowledge graph analysis, with a wide range of applications including question answering and semantic search. In this paper, we study subgraph matching on knowledge graphs. Specifically, given a query graph q and a data graph G, the problem is to find all subgraph-isomorphic mappings of q on G. A knowledge graph is a directed labeled multigraph that may have multiple edges between a pair of vertices, and it has denser semantic and structural features than a general graph. To accelerate subgraph matching on knowledge graphs, we propose a novel subgraph-index-based matching algorithm, called FGqT-Match. The algorithm has two key designs. One is a subgraph index over a matching-driven flow graph (FGqT), which eliminates redundant calculations in advance. The other is a multi-label weight matrix, which evaluates a near-optimal matching tree to minimize the intermediate candidates. With these two designs, all subgraph-isomorphic mappings can be found quickly by traversing the FGqT alone. Extensive empirical studies on real and synthetic graphs demonstrate that our techniques outperform the state-of-the-art algorithms.
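For context, the brute-force baseline that subgraph indexes like FGqT are designed to accelerate is a backtracking enumeration of injective node mappings that preserve labeled edges (a toy sketch; the graph data below is invented):

```python
# Naive backtracking subgraph isomorphism on small directed, edge-labeled
# graphs: enumerate injective query-node -> data-node mappings and keep
# those under which every labeled query edge exists in the data graph.
def subgraph_matches(q_edges, g_edges, q_nodes, g_nodes):
    """Yield all injective mappings q_node -> g_node preserving labeled edges."""
    def ok(mapping):
        return all((mapping[u], mapping[v], lbl) in g_edges
                   for (u, v, lbl) in q_edges
                   if u in mapping and v in mapping)

    def extend(mapping, remaining):
        if not remaining:
            yield dict(mapping)
            return
        u, rest = remaining[0], remaining[1:]
        for g in g_nodes:
            if g not in mapping.values():
                mapping[u] = g
                if ok(mapping):        # prune as soon as an edge fails
                    yield from extend(mapping, rest)
                del mapping[u]

    yield from extend({}, list(q_nodes))

# data graph: a -knows-> b -knows-> c, a -likes-> c
G = {("a", "b", "knows"), ("b", "c", "knows"), ("a", "c", "likes")}
Q = {("x", "y", "knows")}  # query: a single 'knows' edge
matches = list(subgraph_matches(Q, G, ["x", "y"], ["a", "b", "c"]))
assert {(m["x"], m["y"]) for m in matches} == {("a", "b"), ("b", "c")}
```

An index-based algorithm avoids most of this enumeration by precomputing which data-graph regions can possibly match.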

Table and Figures | Reference | Supplementary Material | Related Articles | Metrics
Cited: Crossref(8) WebOfScience(15)
Towards a better prediction of subcellular location of long non-coding RNA
Zhao-Yue ZHANG, Zi-Jie SUN, Yu-He YANG, Hao LIN
Front. Comput. Sci.    2022, 16 (5): 165903-.   https://doi.org/10.1007/s11704-021-1015-3
Abstract   HTML   PDF (2939KB)

The spatial distribution pattern of long non-coding RNA (lncRNA) in the cell is tightly related to its function. With the increase of publicly available subcellular localization data, a number of computational methods have been developed to recognize the subcellular localization of lncRNA. Unfortunately, these methods suffer from the low discriminative power of redundant features or from overfitting caused by oversampling. To address these issues and enhance prediction performance, we present a support vector machine-based approach that incorporates a mutual information algorithm and an incremental feature selection strategy. As a result, the new predictor achieves an overall accuracy of 91.60%. The highly automated web tool is available at lin-group.cn/server/iLoc-LncRNA(2.0)/website. It will help advance knowledge of lncRNA subcellular localization.
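The mutual-information ranking step of such a pipeline can be sketched as follows (our simplification with made-up data: in a full incremental strategy, features are then added one at a time in this order and the prefix with the best cross-validated classifier accuracy is kept):

```python
# Score each discrete feature by its mutual information with the class
# label, then rank features in decreasing order of score.
import math
from collections import Counter

def mutual_info(xs, ys):
    """I(X;Y) for two equal-length discrete sequences, in nats."""
    n = len(xs)
    px, py = Counter(xs), Counter(ys)
    pxy = Counter(zip(xs, ys))
    return sum((c / n) * math.log((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

labels = [0, 0, 1, 1, 0, 1]
features = {
    "a": [0, 0, 1, 1, 0, 1],   # mirrors the label: maximally informative
    "b": [0, 1, 0, 1, 0, 1],   # only weakly related to the label
}
ranked = sorted(features, key=lambda f: mutual_info(features[f], labels),
                reverse=True)
assert ranked == ["a", "b"]   # the informative feature is selected first
```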

Table and Figures | Reference | Supplementary Material | Related Articles | Metrics
Cited: Crossref(12) WebOfScience(22)
Challenges and future directions of secure federated learning: a survey
Kaiyue ZHANG, Xuan SONG, Chenhan ZHANG, Shui YU
Front. Comput. Sci.    2022, 16 (5): 165817-.   https://doi.org/10.1007/s11704-021-0598-z
Abstract   HTML   PDF (10553KB)

Federated learning came into being with growing concern for privacy and security, as people's sensitive information is increasingly exposed in the era of big data. It is an approach that does not collect users' raw data, but instead aggregates model parameters from each client, thereby protecting users' privacy. Nonetheless, due to its inherently distributed nature, federated learning is more vulnerable to attacks, since users may upload malicious data to break down the federated learning server. In addition, some recent studies have shown that attackers can recover information merely from the parameters. Hence, there is still much room to improve current federated learning frameworks. In this survey, we give a brief review of state-of-the-art federated learning techniques and discuss improvements to federated learning in detail. Several open issues and existing solutions are discussed, and we also point out future research directions for federated learning.

Table and Figures | Reference | Supplementary Material | Related Articles | Metrics
Cited: Crossref(24) WebOfScience(39)
Rehearsal: learning from prediction to decision
Zhi-Hua ZHOU
Front. Comput. Sci.    2022, 16 (4): 164352-.   https://doi.org/10.1007/s11704-022-2900-0
Abstract   HTML   PDF (614KB)
Table and Figures | Reference | Related Articles | Metrics
Cited: WebOfScience(9)
SCENERY: a lightweight block cipher based on Feistel structure
Jingya FENG, Lang LI
Front. Comput. Sci.    2022, 16 (3): 163813-.   https://doi.org/10.1007/s11704-020-0115-9
Abstract   HTML   PDF (2691KB)

In this paper, we propose a new lightweight block cipher called SCENERY, designed to suit both hardware and software platforms. SCENERY is a 64-bit block cipher supporting 80-bit keys, and its data processing consists of 28 rounds. The round function consists of eight 4 × 4 S-boxes in parallel and a 32 × 32 binary matrix, and SCENERY can be implemented with basic logic instructions. The hardware implementation requires only 1438 GE based on 0.18 μm CMOS technology, and the software implementation encrypts or decrypts a block in approximately 1516 clock cycles on 8-bit microcontrollers and 364 clock cycles on 64-bit processors. Compared with other encryption algorithms, the performance of SCENERY is well balanced between hardware and software. Security analyses show that SCENERY achieves an adequate security margin against known attacks, such as differential cryptanalysis, linear cryptanalysis, impossible differential cryptanalysis, and related-key attacks.
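A toy Feistel network (not SCENERY's actual round function or key schedule, which are placeholders here) shows why the structure is attractive for lightweight ciphers: decryption reuses the same rounds with reversed round keys, so no inverse round function is needed.

```python
# Minimal 32-bit Feistel cipher with 16-bit halves. The round function f
# is an arbitrary placeholder: Feistel structure is invertible regardless.
def feistel_round(left, right, k):
    f = (right * 31 + k) & 0xFFFF      # placeholder round function
    return right, left ^ f

def encrypt(block, keys):
    left, right = block >> 16, block & 0xFFFF
    for k in keys:
        left, right = feistel_round(left, right, k)
    return (left << 16) | right

def decrypt(block, keys):
    left, right = block & 0xFFFF, block >> 16   # swap halves
    for k in reversed(keys):                    # same rounds, reversed keys
        left, right = feistel_round(left, right, k)
    return (right << 16) | left                 # swap back

keys = [0x1A2B, 0x3C4D, 0x5E6F, 0x7081]
pt = 0xDEADBEEF
assert decrypt(encrypt(pt, keys), keys) == pt
```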

Table and Figures | Reference | Supplementary Material | Related Articles | Metrics
Cited: Crossref(5) WebOfScience(12)
New development of cognitive diagnosis models
Yingjie LIU, Tiancheng ZHANG, Xuecen WANG, Ge YU, Tao LI
Front. Comput. Sci.    2023, 17 (1): 171604-.   https://doi.org/10.1007/s11704-022-1128-3
Abstract   HTML   PDF (1078KB)

Cognitive diagnosis, the assessment of a student's cognitive ability, is a widespread concern in educational science. The cognitive diagnosis model (CDM) is an essential method for realizing cognitive diagnosis measurement. This paper reviews new research on cognitive diagnosis models and introduces four aspects of probability-based and deep learning-based CDMs: higher-order latent traits, polytomous responses, polytomous attributes, and multilevel latent traits. The paper also examines the underlying ideas, model structures, and respective characteristics, and provides directions for the future development of cognitive diagnosis.

Table and Figures | Reference | Related Articles | Metrics
Cited: Crossref(2) WebOfScience(5)
Accountable attribute-based authentication with fine-grained access control and its application to crowdsourcing
Peng LI, Junzuo LAI, Yongdong WU
Front. Comput. Sci.    2023, 17 (1): 171802-.   https://doi.org/10.1007/s11704-021-0593-4
Abstract   HTML   PDF (1556KB)

We introduce a new notion called accountable attribute-based authentication with fine-grained access control (AccABA), which achieves (i) fine-grained access control that prevents ineligible users from authenticating; (ii) anonymity, such that no one can recognize the identity of a user; and (iii) public accountability, i.e., once a user authenticates two different messages, the corresponding authentications can easily be identified and linked, and anyone can reveal the user's identity without help from a trusted third party. We then formalize the security requirements in terms of unforgeability, anonymity, linkability and traceability, and give a generic construction that fulfills them. Based on AccABA, we further present the first attribute-based, fair, anonymous and publicly traceable crowdsourcing scheme on blockchain, which is designed to filter qualified workers for tasks, ensure fair competition between workers, and balance the tension between anonymity and accountability.

Table and Figures | Reference | Supplementary Material | Related Articles | Metrics
Cited: WebOfScience(3)
A survey on federated learning: a perspective from multi-party computation
Fengxia LIU, Zhiming ZHENG, Yexuan SHI, Yongxin TONG, Yi ZHANG
Front. Comput. Sci.    2024, 18 (1): 181336-.   https://doi.org/10.1007/s11704-023-3282-7
Abstract   HTML   PDF (1635KB)

Federated learning is a promising learning paradigm that allows collaborative training of models across multiple data owners without sharing their raw datasets. To enhance privacy in federated learning, multi-party computation can be leveraged for secure communication and computation during model training. This survey provides a comprehensive review on how to integrate mainstream multi-party computation techniques into diverse federated learning setups for guaranteed privacy, as well as the corresponding optimization techniques to improve model accuracy and training efficiency. We also pinpoint future directions to deploy federated learning to a wider range of applications.
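One MPC building block commonly used for secure aggregation in federated learning is pairwise additive masking: each pair of clients agrees on a random mask that cancels in the sum, so the server learns only the aggregate update. A minimal single-scalar sketch (real protocols also handle client dropouts and derive masks via key agreement):

```python
# Pairwise additive masking: client i adds mask m_ij and client j
# subtracts it, so all masks cancel when the server sums the reports.
import random

def mask_updates(updates, modulus=2**32):
    n = len(updates)
    masked = list(updates)
    for i in range(n):
        for j in range(i + 1, n):
            m = random.randrange(modulus)          # shared mask for pair (i, j)
            masked[i] = (masked[i] + m) % modulus
            masked[j] = (masked[j] - m) % modulus
    return masked

updates = [17, 42, 99]            # each client's (scalar) model update
masked = mask_updates(updates)
# server sums the masked reports; the pairwise masks cancel exactly
assert sum(masked) % 2**32 == sum(updates)
```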

Table and Figures | Reference | Supplementary Material | Related Articles | Metrics
Improving fault localization with pre-training
Zhuo ZHANG, Ya LI, Jianxin XUE, Xiaoguang MAO
Front. Comput. Sci.    2024, 18 (1): 181205-.   https://doi.org/10.1007/s11704-023-2597-8
Abstract   HTML   PDF (315KB)
Table and Figures | Reference | Supplementary Material | Related Articles | Metrics
Certificateless network coding proxy signatures from lattice
Huifang YU, Ning WANG
Front. Comput. Sci.    2023, 17 (5): 175810-.   https://doi.org/10.1007/s11704-022-2128-z
Abstract   HTML   PDF (1904KB)

Network coding improves information transmission efficiency and reduces network resource consumption, making it an attractive platform for information transmission. Certificateless proxy signatures are widely applied in information security. However, certificateless proxy signatures based on classical number theory are not suitable for the network coding environment and cannot resist quantum attacks. In view of this, we construct certificateless network coding proxy signatures from lattices (LCL-NCPS). LCL-NCPS is a new multi-source signature scheme with anti-quantum, anti-pollution, and anti-forgery properties. In LCL-NCPS, each source node can output a message vector to the intermediate and sink nodes, and the message vectors from different source nodes are linearly combined to improve the network transmission rate and robustness. In terms of space efficiency, LCL-NCPS achieves lower computation complexity by reducing the dimension of the proxy key. In terms of time efficiency, it achieves higher computation efficiency in signing and verification.
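The linear network-coding operation that such signatures must remain valid under can be sketched over a small prime field (toy field size, messages, and coefficients are ours): intermediate nodes forward linear combinations of incoming vectors, and a sink with enough independent combinations can solve for the originals.

```python
# Linear network coding over GF(P): combine message vectors with chosen
# coefficients, then recover a source message at the sink by elimination.
P = 257  # small prime field for illustration

def combine(vectors, coeffs):
    """Coefficient-weighted sum of equal-length message vectors over GF(P)."""
    return [sum(c * v[k] for c, v in zip(coeffs, vectors)) % P
            for k in range(len(vectors[0]))]

m1, m2 = [3, 1, 4], [1, 5, 9]        # two source messages
out1 = combine([m1, m2], [1, 2])     # node emits m1 + 2*m2
out2 = combine([m1, m2], [3, 1])     # and 3*m1 + m2
# sink eliminates m1:  out2 - 3*out1 = (3*m1 + m2) - 3*(m1 + 2*m2) = -5*m2
inv = pow(-5 % P, -1, P)             # multiplicative inverse of -5 mod P
recovered = [(o2 - 3 * o1) * inv % P for o1, o2 in zip(out1, out2)]
assert recovered == m2
```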

Table and Figures | Reference | Supplementary Material | Related Articles | Metrics
Large sequence models for sequential decision-making: a survey
Muning WEN, Runji LIN, Hanjing WANG, Yaodong YANG, Ying WEN, Luo MAI, Jun WANG, Haifeng ZHANG, Weinan ZHANG
Front. Comput. Sci.    2023, 17 (6): 176349-.   https://doi.org/10.1007/s11704-023-2689-5
Abstract   HTML   PDF (2853KB)

Transformer architectures have facilitated the development of large-scale and general-purpose sequence models for prediction tasks in natural language processing and computer vision, e.g., GPT-3 and Swin Transformer. Although originally designed for prediction problems, it is natural to inquire about their suitability for sequential decision-making and reinforcement learning (RL) problems, which are typically beset by long-standing issues involving sample efficiency, credit assignment, and partial observability. In recent years, sequence models, especially the Transformer, have attracted increasing interest in the RL community, spawning numerous approaches with notable effectiveness and generalizability. This survey presents a comprehensive overview of recent works aimed at solving sequential decision-making tasks with sequence models such as the Transformer, discussing the connection between sequential decision-making and sequence modeling and categorizing existing approaches by how they utilize the Transformer. Moreover, this paper puts forth various potential avenues for future research intended to improve the effectiveness of large sequence models for sequential decision-making, encompassing theoretical foundations, network architectures, algorithms, and efficient training systems.

Table and Figures | Reference | Supplementary Material | Related Articles | Metrics
Cited: WebOfScience(4)
Mean estimation over numeric data with personalized local differential privacy
Qiao XUE, Youwen ZHU, Jian WANG
Front. Comput. Sci.    2022, 16 (3): 163806-.   https://doi.org/10.1007/s11704-020-0103-0
Abstract   HTML   PDF (10273KB)

The rapid development of the Internet and mobile devices has given rise to a crowdsensing business model, in which individuals (users) are willing to contribute their data to help an institution (the data collector) analyze and release useful information. However, revealing personal data brings huge privacy threats to users, which impedes the wide application of the crowdsensing model. To settle this problem, local differential privacy (LDP) was proposed. Afterwards, to accommodate users' varied privacy preferences, researchers proposed a new model, personalized local differential privacy (PLDP), which allows each user to specify their own privacy parameter. In this paper, we focus on the basic task of estimating the mean value of a single numeric attribute under PLDP. Building on previous schemes for mean estimation under LDP, we employ the PLDP model to design novel schemes (LAP, DCP, and PWP) that provide personalized privacy for each user. We then theoretically analyze the worst-case variance of the three proposed schemes and conduct experiments on synthetic and real datasets to evaluate their performance. The theoretical and experimental results show the optimality of PWP in the low-privacy regime and a slight advantage of DCP in the high-privacy regime.
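The flavor of mean estimation under personalized LDP can be sketched with a Laplace-mechanism baseline (our illustration, not necessarily the paper's LAP scheme): each user perturbs a value in [-1, 1] with noise scaled to their own privacy budget ε_i, and the analyst simply averages the noisy reports.

```python
# Personalized-LDP mean estimation sketch: per-user Laplace noise with
# scale 2/eps_i (sensitivity of a value in [-1, 1] is 2). The noise is
# zero-mean, so the average of the reports is an unbiased mean estimate.
import random

def perturb(x, eps):
    """Laplace mechanism: Laplace(b) sampled as Exp(b) - Exp(b), b = 2/eps."""
    b = 2 / eps
    return x + random.expovariate(1 / b) - random.expovariate(1 / b)

random.seed(7)
values = [random.uniform(-1, 1) for _ in range(50000)]
epsilons = [random.choice([0.5, 1.0, 2.0]) for _ in values]  # per-user budgets
estimate = sum(perturb(x, e) for x, e in zip(values, epsilons)) / len(values)
true_mean = sum(values) / len(values)
assert abs(estimate - true_mean) < 0.15   # noise averages out over many users
```

The schemes compared in the paper differ in how they perturb and weight reports to reduce this estimator's variance.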

Table and Figures | Reference | Supplementary Material | Related Articles | Metrics
Cited: Crossref(9) WebOfScience(10)
GCSS: a global collaborative scheduling strategy for wide-area high-performance computing
Yao SONG, Limin XIAO, Liang WANG, Guangjun QIN, Bing WEI, Baicheng YAN, Chenhao ZHANG
Front. Comput. Sci.    2022, 16 (5): 165105-.   https://doi.org/10.1007/s11704-021-0353-5
Abstract   HTML   PDF (15795KB)

Wide-area high-performance computing is widely used for large-scale parallel computing applications owing to its abundant computing and storage resources. However, the geographical distribution of these resources makes efficient task distribution and data placement challenging. To achieve higher system performance, this study proposes a two-level global collaborative scheduling strategy for wide-area high-performance computing environments. The strategy integrates lightweight solution selection, redundant data placement, and task-stealing mechanisms, optimizing task distribution and data placement for efficient computing in wide-area environments. The experimental results indicate that, compared with the state-of-the-art collaborative scheduling algorithm HPS+, the proposed strategy reduces the makespan by 23.24%, improves computing and storage resource utilization by 8.28% and 21.73% respectively, and achieves similar global data migration costs.

Table and Figures | Reference | Supplementary Material | Related Articles | Metrics
Cited: Crossref(1) WebOfScience(4)
Graph convolution machine for context-aware recommender system
Jiancan WU, Xiangnan HE, Xiang WANG, Qifan WANG, Weijian CHEN, Jianxun LIAN, Xing XIE
Front. Comput. Sci.    2022, 16 (6): 166614-.   https://doi.org/10.1007/s11704-021-0261-8
Abstract   HTML   PDF (13815KB)

The latest advances in recommendation show that better user and item representations can be learned by performing graph convolutions on the user-item interaction graph. However, this finding is mostly restricted to the collaborative filtering (CF) scenario, where interaction contexts are not available. In this work, we extend the advantages of graph convolutions to context-aware recommender systems (CARS, a generic type of model that can handle various side information). We propose Graph Convolution Machine (GCM), an end-to-end framework consisting of three components: an encoder, graph convolution (GC) layers, and a decoder. The encoder projects users, items, and contexts into embedding vectors, which are passed to the GC layers that refine user and item embeddings with context-aware graph convolutions on the user-item graph. The decoder digests the refined embeddings to output the prediction score by considering the interactions among user, item, and context embeddings. We conduct experiments on three real-world datasets from Yelp and Amazon, validating the effectiveness of GCM and the benefits of performing graph convolutions for CARS.
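The propagation idea of a GC layer on the user-item bipartite graph can be sketched as follows (a drastic simplification of GCM's actual rule; the embeddings and interaction data below are invented): each user embedding is refined by mixing it with the mean embedding of the items the user interacted with.

```python
# One toy propagation step on a user-item graph: refine each user
# embedding by averaging it with its interacted items' mean embedding.
def gc_layer(user_emb, item_emb, interactions):
    new_user = {}
    for u, vec in user_emb.items():
        items = [item_emb[i] for i in interactions.get(u, [])]
        # mean of neighbor embeddings (fall back to self if no neighbors)
        neigh = [sum(col) / len(items) for col in zip(*items)] if items else vec
        new_user[u] = [(a + b) / 2 for a, b in zip(vec, neigh)]
    return new_user

user_emb = {"u1": [1.0, 0.0], "u2": [0.0, 1.0]}
item_emb = {"i1": [0.5, 0.5], "i2": [1.0, 1.0]}
interactions = {"u1": ["i1", "i2"], "u2": ["i1"]}
refined = gc_layer(user_emb, item_emb, interactions)
assert refined["u1"] == [0.875, 0.375]   # (own emb + neighbor mean) / 2
```

GCM's real layers additionally carry context embeddings into the propagation and learn the mixing weights.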

Table and Figures | Reference | Supplementary Material | Related Articles | Metrics
Cited: Crossref(11) WebOfScience(24)
Collaborative eye tracking based code review through real-time shared gaze visualization
Shiwei CHENG, Jialing WANG, Xiaoquan SHEN, Yijian CHEN, Anind DEY
Front. Comput. Sci.    2022, 16 (3): 163704-.   https://doi.org/10.1007/s11704-020-0422-1
Abstract   HTML   PDF (3221KB)

Code review is intended to find bugs in early development phases, improving code quality for later integration and testing. However, due to a lack of experience with algorithm design or software development, individual novice programmers face challenges while reviewing code. In this paper, we utilize collaborative eye tracking to record gaze data from multiple reviewers and share gaze visualizations among them during the code review process. The visualizations, such as borders highlighting the currently reviewed code lines and transition lines connecting related reviewed code lines, reveal visual attention on program functions, which facilitates understanding and bug tracing. This can help novice reviewers confirm potential bugs, avoid repeatedly reviewing the same code, and potentially even improve their reviewing skills. We built a prototype system and conducted a user study with paired reviewers. The results showed that the shared real-time visualization allowed the reviewers to find bugs more efficiently.

Table and Figures | Reference | Supplementary Material | Related Articles | Metrics
Cited: Crossref(3) WebOfScience(7)
Unsupervised statistical text simplification using pre-trained language modeling for initialization
Jipeng QIANG, Feng ZHANG, Yun LI, Yunhao YUAN, Yi ZHU, Xindong WU
Front. Comput. Sci.    2023, 17 (1): 171303-.   https://doi.org/10.1007/s11704-022-1244-0
Abstract   HTML   PDF (20653KB)

Unsupervised text simplification has attracted much attention due to the scarcity of high-quality parallel text simplification corpora. Recently, an unsupervised statistical text simplification system based on phrase-based machine translation (UnsupPBMT) achieved good performance; it initializes its phrase tables using similar words obtained by word embedding modeling. Since word embedding modeling only considers the relevance between words, the phrase tables in UnsupPBMT contain many dissimilar words. In this paper, we propose an unsupervised statistical text simplification that uses the pre-trained language model BERT for initialization. Specifically, we use BERT as a general linguistic knowledge base for predicting similar words. Experimental results show that our method outperforms state-of-the-art unsupervised text simplification methods on three benchmarks, and even outperforms some supervised baselines.

Table and Figures | Reference | Supplementary Material | Related Articles | Metrics
Cited: WebOfScience(2)
VenomAttack: automated and adaptive activity hijacking in Android
Pu SUN, Sen CHEN, Lingling FAN, Pengfei GAO, Fu SONG, Min YANG
Front. Comput. Sci.    2023, 17 (1): 171801-null.   https://doi.org/10.1007/s11704-021-1126-x
Abstract   HTML   PDF (4539KB)

Activity hijacking is one of the most powerful attacks on Android. Though promising, all prior activity hijacking attacks suffer from limitations and have restricted attack capabilities; they no longer pose security threats on recent Android versions due to the presence of effective defense mechanisms. In this work, we propose the first automated and adaptive activity hijacking attack, named VenomAttack, enabling a spectrum of customized attacks (e.g., phishing, spoofing, and DoS) at a large scale on recent Android, even when state-of-the-art defense mechanisms are deployed. Specifically, we propose using hotpatch techniques to identify vulnerable devices and update the attack payload without re-installation or re-distribution, hence bypassing offline detection. We present a newly discovered flaw in Android and a bug in Android derivatives, each of which allows us to check whether a target app is running in the background, so that we can determine the right attack timing via a specially designed transparent activity. We also propose an automated fake-activity generation approach that enables large-scale attacks. Requiring only the common INTERNET permission, we can hijack activities at the right timing without destroying the GUI integrity of the foreground app. We conducted proof-of-concept attacks, showing that VenomAttack poses severe security risks on recent Android versions. A user study demonstrates the effectiveness of VenomAttack in real-world scenarios, achieving a high success rate (95%) without users’ awareness. These results should draw more attention from stakeholders such as Google.

Table and Figures | Reference | Supplementary Material | Related Articles | Metrics
Underwater image enhancement by maximum-likelihood based adaptive color correction and robust scattering removal
Bo WANG, Zitong KANG, Pengwei DONG, Fan WANG, Peng MA, Jiajing BAI, Pengwei LIANG, Chongyi LI
Front. Comput. Sci.    2023, 17 (2): 172702-null.   https://doi.org/10.1007/s11704-022-1205-7
Abstract   HTML   PDF (29491KB)

Underwater images often exhibit severe color deviations and degraded visibility, which limits many practical applications in ocean engineering. Although extensive research has been conducted on underwater image enhancement, little of it demonstrates significant robustness and generalization across diverse real-world underwater scenes. In this paper, we propose an adaptive color correction algorithm based on maximum-likelihood estimation of Gaussian parameters, which effectively removes the color casts of a variety of underwater images. We also propose a novel algorithm for accurate background light estimation that uses a weighted combination of gradient maps in HSV color space and absolute intensity differences, circumventing the influence of white or bright regions that challenges existing physical-model-based methods. To enhance the contrast of the resulting images, a piecewise affine transform is applied to the transmission map estimated via the background light differential. Finally, with the estimated background light and transmission map, the scene radiance is recovered by solving an inverse problem of the image formation model. Extensive experiments reveal that our results are characterized by natural appearance and genuine color, and that our method achieves performance competitive with state-of-the-art methods in terms of objective evaluation metrics, further validating the robustness and generalization ability of our enhancement model.
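The color-correction idea can be sketched minimally: for a Gaussian model, the maximum-likelihood estimates of the parameters are simply the sample mean and (biased) standard deviation of each channel, after which a channel can be shifted and rescaled toward a target distribution. This is a simplified sketch, not the paper's algorithm; the target mean and standard deviation below are illustrative values.

```python
# Minimal sketch: MLE Gaussian fit per color channel, then an affine
# remapping of the channel toward assumed target statistics, clipped
# to the valid intensity range [0, 1].

import math

def mle_gaussian(values):
    """ML estimates (mean, std) of a Gaussian fit to the samples."""
    n = len(values)
    mu = sum(values) / n
    var = sum((v - mu) ** 2 for v in values) / n  # MLE uses 1/n, not 1/(n-1)
    return mu, math.sqrt(var)

def correct_channel(values, target_mu=0.5, target_sigma=0.2):
    """Shift/rescale a channel to the target Gaussian; clip to [0, 1]."""
    mu, sigma = mle_gaussian(values)
    if sigma == 0:
        return [target_mu] * len(values)
    return [min(1.0, max(0.0, (v - mu) / sigma * target_sigma + target_mu))
            for v in values]

blue = [0.7, 0.8, 0.75, 0.85]   # underwater images are often blue-shifted
corrected = correct_channel(blue)  # mean moves to ~0.5
```

Applying this independently per channel removes a global color cast in the same spirit as the adaptive correction described above, though the actual method adapts the targets to the image content.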

Table and Figures | Reference | Related Articles | Metrics
A survey of discourse parsing
Jiaqi LI, Ming LIU, Bing QIN, Ting LIU
Front. Comput. Sci.    2022, 16 (5): 165329-null.   https://doi.org/10.1007/s11704-021-0500-z
Abstract   HTML   PDF (2450KB)

Discourse parsing is an important research area in natural language processing (NLP) that aims to parse the discourse structure of coherent sentences. In this survey, we introduce several kinds of discourse parsing tasks, mainly including RST-style discourse parsing, PDTB-style discourse parsing, and discourse parsing for multiparty dialogue. For these tasks, we introduce classical and recent methods, especially neural network approaches. After that, we describe applications of discourse parsing to other NLP tasks, such as machine reading comprehension and sentiment analysis. Finally, we discuss future trends for the task.
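For readers new to the area, the output of RST-style parsing can be pictured as a simple data structure: elementary discourse units (EDUs) at the leaves, with each internal node labeling a rhetorical relation and marking which child is the nucleus. The sketch below is a generic illustration, not taken from the survey; the class names are my own.

```python
# Illustrative data structure for an RST-style discourse tree:
# leaves are EDUs; internal nodes carry a relation label and
# distinguish the nucleus (central unit) from the satellite.

from dataclasses import dataclass

@dataclass
class EDU:
    text: str

@dataclass
class Relation:
    label: str        # e.g., "Elaboration", "Contrast"
    nucleus: object   # an EDU or another Relation
    satellite: object

def edus(node):
    """Collect leaf EDU texts (nucleus-first in this toy traversal)."""
    if isinstance(node, EDU):
        return [node.text]
    return edus(node.nucleus) + edus(node.satellite)

tree = Relation(
    "Elaboration",
    nucleus=EDU("The committee approved the plan."),
    satellite=EDU("It had debated the proposal for weeks."),
)
```

A discourse parser's job is to build such a tree from raw text: segment it into EDUs, then attach them with relation labels and nuclearity.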

Table and Figures | Reference | Supplementary Material | Related Articles | Metrics
Cited: Crossref(4) WebOfScience(8)
DNACDS: Cloud IoE big data security and accessing scheme based on DNA cryptography
Ashish SINGH, Abhinav KUMAR, Suyel NAMASUDRA
Front. Comput. Sci.    2024, 18 (1): 181801-null.   https://doi.org/10.1007/s11704-022-2193-3
Abstract   HTML   PDF (9570KB)

The Internet of Everything (IoE) based cloud computing is one of the most prominent areas in the digital big data world. This approach provides an efficient infrastructure for storing and accessing big real-time data and smart IoE services from the cloud. IoE-based cloud computing services are located at remote sites beyond the data owner's control. Data owners must mostly rely on an untrusted Cloud Service Provider (CSP) and do not know what security capabilities are implemented. This lack of knowledge about security capabilities and control over data raises several security issues. Deoxyribonucleic Acid (DNA) computing is a biological concept that can improve the security of IoE big data. The proposed IoE big data security scheme combines the Station-to-Station Key Agreement Protocol (StS KAP) and Feistel cipher algorithms. This paper proposes a DNA-based cryptographic scheme and access control model (DNACDS) to solve IoE big data security and access issues. Experimental results illustrate that DNACDS performs better than other DNA-based security schemes, and a theoretical security analysis shows that DNACDS offers strong resistance capabilities.

Table and Figures | Reference | Supplementary Material | Related Articles | Metrics
Cited: WebOfScience(2)
Verifiable searchable symmetric encryption for conjunctive keyword queries in cloud storage
Qingqing GAN, Joseph K. LIU, Xiaoming WANG, Xingliang YUAN, Shi-Feng SUN, Daxin HUANG, Cong ZUO, Jianfeng WANG
Front. Comput. Sci.    2022, 16 (6): 166820-null.   https://doi.org/10.1007/s11704-021-0601-8
Abstract   HTML   PDF (27346KB)

Searchable symmetric encryption (SSE) has been introduced for securely outsourcing encrypted databases to cloud storage while maintaining searchable features. Most SSE schemes assume the server is honest-but-curious, whereas the server may be untrusted in the real world. Considering a malicious server that does not honestly perform queries, verifiable SSE (VSSE) schemes have been constructed to ensure the verifiability of search results. However, existing VSSE constructions either focus only on single-keyword search or incur heavy computational cost during verification. To address this challenge, we present an efficient VSSE scheme, built on the OXT protocol (Cash et al., CRYPTO 2013), for conjunctive keyword queries with sublinear search overhead. The proposed VSSE scheme is based on a privacy-preserving hash-based accumulator that leverages a well-established cryptographic primitive, Symmetric Hidden Vector Encryption (SHVE). Our scheme enables both correctness and completeness verification of results without pairing operations, thus greatly reducing the computational cost of verification. In addition, the proposed scheme can still produce a proof when the search result is empty. Finally, a security analysis and an experimental evaluation demonstrate the security and practicality of the proposed scheme.
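The correctness/completeness check at the heart of result verification can be conveyed with a deliberately simplified sketch: the client recomputes a digest over the claimed result set and compares it with a proof. The paper's actual accumulator is a privacy-preserving construction built on SHVE; the plain SHA-256 digest below is a toy stand-in with none of those privacy properties.

```python
# Toy sketch of result verification: an order-independent digest over
# the result set serves as the "proof"; any dropped, added, or altered
# document identifier changes the digest and is detected.

import hashlib

def digest(result_ids):
    """Order-independent digest over a search-result set."""
    h = hashlib.sha256()
    for doc_id in sorted(result_ids):
        h.update(doc_id.encode())
    return h.hexdigest()

def verify(claimed_results, proof_digest):
    """Results are complete and untampered iff digests match."""
    return digest(claimed_results) == proof_digest

proof = digest(["doc1", "doc7"])        # computed over the true result set
ok = verify(["doc7", "doc1"], proof)    # order does not matter
bad = verify(["doc1"], proof)           # a withheld result is detected
```

Note that a digest like this also works for the empty result set, which mirrors the property, highlighted above, that a proof can still be produced when a query matches nothing.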

Table and Figures | Reference | Supplementary Material | Related Articles | Metrics
Cited: Crossref(2) WebOfScience(6)
Improving meta-learning model via meta-contrastive loss
Pinzhuo TIAN, Yang GAO
Front. Comput. Sci.    2022, 16 (5): 165331-null.   https://doi.org/10.1007/s11704-021-1188-9
Abstract   HTML   PDF (6396KB)

Recently, addressing the few-shot learning problem within the meta-learning framework has achieved great success. As is well known, regularization is a powerful technique widely used to improve machine learning algorithms. However, little research has focused on designing appropriate meta-regularizations to further improve the generalization of meta-learning models in few-shot learning. In this paper, we propose a novel meta-contrastive loss that can be regarded as such a regularization to fill this gap. Our method is motivated by the observation that the limited data in few-shot learning is only a small sample drawn from the whole data distribution, and different sampled parts can lead to different biased representations of that distribution. Thus, the models trained on the few training data (support set) and test data (query set) may be misaligned in the model space, so that a model learned on the support set cannot generalize well to the query data. The proposed meta-contrastive loss is designed to align the models of the support and query sets to overcome this problem, thereby improving the performance of the meta-learning model in few-shot learning. Extensive experiments demonstrate that our method improves the performance of different gradient-based meta-learning models on various learning problems, e.g., few-shot regression and classification.
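The alignment idea can be sketched with a generic InfoNCE-style contrastive loss: pull a task's support-set representation toward its own query-set representation and push it away from others. This is a pure-Python toy under my own naming; the actual meta-contrastive loss operates on the models induced by support and query sets inside a gradient-based meta-learner, not on raw embedding vectors.

```python
# Hedged sketch: an InfoNCE-style contrastive loss. Aligned
# support/query pairs yield a small loss; misaligned pairs a large one.

import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def contrastive_loss(support_emb, query_emb, negatives, temperature=0.1):
    """-log of the positive pair's softmax weight among all pairs."""
    pos = math.exp(cosine(support_emb, query_emb) / temperature)
    neg = sum(math.exp(cosine(support_emb, n) / temperature) for n in negatives)
    return -math.log(pos / (pos + neg))

aligned = contrastive_loss([1.0, 0.0], [0.9, 0.1], negatives=[[0.0, 1.0]])
misaligned = contrastive_loss([1.0, 0.0], [0.0, 1.0], negatives=[[0.9, 0.1]])
```

Minimizing such a loss pushes the two views of the same task together, which is the regularizing effect the paper seeks between support-set and query-set models.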

Table and Figures | Reference | Supplementary Material | Related Articles | Metrics
Cited: Crossref(1) WebOfScience(3)