Frontiers of Computer Science

ISSN 2095-2228

ISSN 2095-2236 (Online)

CN 10-1014/TP

Postal Subscription Code 80-970

2018 Impact Factor: 1.129

Top Read Articles
Fully distributed identity-based threshold signatures with identifiable aborts
Yan JIANG, Youwen ZHU, Jian WANG, Xingxin LI
Front. Comput. Sci.    2023, 17 (5): 175813.   https://doi.org/10.1007/s11704-022-2370-4
Abstract   HTML   PDF (13073KB)

Identity-based threshold signature (IDTS) is a powerful primitive for protecting identity and data privacy, in which parties can collaboratively sign a given message as a signer without reconstructing the signing key. Nevertheless, most IDTS schemes rely on a trusted key generation center (KGC). Recently, some IDTS schemes have achieved escrow-free security against a corrupted KGC, but all of them are vulnerable to denial-of-service attacks in the dishonest-majority setting, where cheaters may force the protocol to abort without providing any feedback. In this work, we present a fully decentralized IDTS scheme that resists both a corrupted KGC and denial-of-service attacks. To this end, we design threshold protocols for distributed key generation, private key extraction, and signature generation that withstand collusion between KGCs and signers, and we propose an identification mechanism that can detect the identities of cheaters during key generation, private key extraction, and signature generation. Finally, we formally prove that the proposed scheme achieves threshold unforgeability against chosen-message attacks. The experimental results show that the computation time of both key generation and signature generation is less than 1 s, and that of private key extraction is about 3 s, which is practical in a distributed environment.

Table and Figures | Reference | Supplementary Material | Related Articles | Metrics
Spreadsheet quality assurance: a literature review
Pak-Lok POON, Man Fai LAU, Yuen Tak YU, Sau-Fun TANG
Front. Comput. Sci.    2024, 18 (2): 182203.   https://doi.org/10.1007/s11704-023-2384-6
Abstract   HTML   PDF (1637KB)

Spreadsheets are very common for information processing to support decision making by both professional developers and non-technical end users. Moreover, business intelligence and artificial intelligence are increasingly popular in industry nowadays, where spreadsheets have been used as, or integrated into, intelligent or expert systems in various application domains. However, it has been repeatedly reported that faults often exist in operational spreadsheets, which could severely compromise the quality of conclusions and decisions based on them. With a view to systematically examining this problem through a survey of existing work, we have conducted a comprehensive literature review of the quality issues and related techniques of spreadsheets over a 35.5-year period (January 1987 to June 2022) for target journals and a 10.5-year period (January 2012 to June 2022) for target conferences. Among other findings, two major ones are: (a) spreadsheet quality is best addressed throughout the whole spreadsheet life cycle, rather than at just a few specific stages of it; and (b) relatively more studies focus on spreadsheet testing and debugging (related to fault detection and removal) than on spreadsheet specification, modeling, and design (related to development). As prevention is better than cure, more research should be performed on the early stages of the spreadsheet life cycle. Informed by our comprehensive review, we identify the major research gaps and highlight key research directions for future work in the area.

Table and Figures | Reference | Supplementary Material | Related Articles | Metrics
Cited: WebOfScience(1)
VulLoc: vulnerability localization based on inducing commits and fixing commits
Lili BO, Yue LI, Xiaobing SUN, Xiaoxue WU, Bin LI
Front. Comput. Sci.    2023, 17 (3): 173207.   https://doi.org/10.1007/s11704-022-1729-x
Abstract   HTML   PDF (662KB)
Table and Figures | Reference | Supplementary Material | Related Articles | Metrics
Cited: Crossref(1)
Improving fault localization with pre-training
Zhuo ZHANG, Ya LI, Jianxin XUE, Xiaoguang MAO
Front. Comput. Sci.    2024, 18 (1): 181205.   https://doi.org/10.1007/s11704-023-2597-8
Abstract   HTML   PDF (315KB)
Table and Figures | Reference | Supplementary Material | Related Articles | Metrics
A survey on federated learning: a perspective from multi-party computation
Fengxia LIU, Zhiming ZHENG, Yexuan SHI, Yongxin TONG, Yi ZHANG
Front. Comput. Sci.    2024, 18 (1): 181336.   https://doi.org/10.1007/s11704-023-3282-7
Abstract   HTML   PDF (1635KB)

Federated learning is a promising learning paradigm that allows collaborative training of models across multiple data owners without sharing their raw datasets. To enhance privacy in federated learning, multi-party computation can be leveraged for secure communication and computation during model training. This survey provides a comprehensive review on how to integrate mainstream multi-party computation techniques into diverse federated learning setups for guaranteed privacy, as well as the corresponding optimization techniques to improve model accuracy and training efficiency. We also pinpoint future directions to deploy federated learning to a wider range of applications.
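
As one concrete illustration of how multi-party computation can protect model updates in federated learning (a generic additive-secret-sharing sketch, not a scheme from any particular surveyed paper; all names and numbers are illustrative), each client can split its quantized update into random shares so that only the aggregate ever becomes visible:

```python
import random

MOD = 2**32  # public modulus; updates are assumed quantized to integers

def share(value, n):
    """Split one integer-quantized update into n additive shares modulo MOD."""
    shares = [random.randrange(MOD) for _ in range(n - 1)]
    shares.append((value - sum(shares)) % MOD)
    return shares

def aggregate(per_client_shares):
    """Sum the shares index-by-index, then combine the partial sums.
    No partial sum reveals any individual client's update."""
    partials = [sum(col) % MOD for col in zip(*per_client_shares)]
    return sum(partials) % MOD

updates = [3, 7, 12]                                  # one toy scalar per client
shares = [share(u, n=len(updates)) for u in updates]
assert aggregate(shares) == sum(updates) % MOD        # server learns only the sum (22)
```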

Table and Figures | Reference | Supplementary Material | Related Articles | Metrics
Certificateless network coding proxy signatures from lattice
Huifang YU, Ning WANG
Front. Comput. Sci.    2023, 17 (5): 175810.   https://doi.org/10.1007/s11704-022-2128-z
Abstract   HTML   PDF (1904KB)

Network coding can improve information transmission efficiency and reduce network resource consumption, so it is a very good platform for information transmission. Certificateless proxy signatures are widely applied in information security. However, certificateless proxy signatures based on classical number theory are not suitable for the network coding environment and cannot resist quantum computing attacks. In view of this, we construct certificateless network coding proxy signatures from lattices (LCL-NCPS). LCL-NCPS is a new multi-source signature scheme with the characteristics of quantum resistance, anti-pollution, and anti-forgery. In LCL-NCPS, each source node can output a message vector to the intermediate and sink nodes, and the message vectors from different source nodes are linearly combined to improve the network transmission rate and network robustness. In terms of the space dimension, LCL-NCPS obtains lower computation complexity by reducing the dimension of the proxy key. In terms of the time dimension, LCL-NCPS has higher computation efficiency in signing and verification.

Table and Figures | Reference | Supplementary Material | Related Articles | Metrics
Large sequence models for sequential decision-making: a survey
Muning WEN, Runji LIN, Hanjing WANG, Yaodong YANG, Ying WEN, Luo MAI, Jun WANG, Haifeng ZHANG, Weinan ZHANG
Front. Comput. Sci.    2023, 17 (6): 176349.   https://doi.org/10.1007/s11704-023-2689-5
Abstract   HTML   PDF (2853KB)

Transformer architectures have facilitated the development of large-scale and general-purpose sequence models for prediction tasks in natural language processing and computer vision, e.g., GPT-3 and Swin Transformer. Although originally designed for prediction problems, it is natural to inquire about their suitability for sequential decision-making and reinforcement learning problems, which are typically beset by long-standing issues involving sample efficiency, credit assignment, and partial observability. In recent years, sequence models, especially the Transformer, have attracted increasing interest in the RL communities, spawning numerous approaches with notable effectiveness and generalizability. This survey presents a comprehensive overview of recent works aimed at solving sequential decision-making tasks with sequence models such as the Transformer, by discussing the connection between sequential decision-making and sequence modeling, and categorizing them based on the way they utilize the Transformer. Moreover, this paper puts forth various potential avenues for future research intending to improve the effectiveness of large sequence models for sequential decision-making, encompassing theoretical foundations, network architectures, algorithms, and efficient training systems.

Table and Figures | Reference | Supplementary Material | Related Articles | Metrics
Cited: WebOfScience(4)
DNACDS: Cloud IoE big data security and accessing scheme based on DNA cryptography
Ashish SINGH, Abhinav KUMAR, Suyel NAMASUDRA
Front. Comput. Sci.    2024, 18 (1): 181801.   https://doi.org/10.1007/s11704-022-2193-3
Abstract   HTML   PDF (9570KB)

The Internet of Everything (IoE) based cloud computing is one of the most prominent areas in the digital big data world. This approach provides an efficient infrastructure to store and access big real-time data and smart IoE services from the cloud. IoE-based cloud computing services are located at remote sites outside the control of the data owner. Data owners mostly depend on an untrusted Cloud Service Provider (CSP) and do not know what security capabilities are implemented. This lack of knowledge about security capabilities and of control over data raises several security issues. Deoxyribonucleic Acid (DNA) computing is a biological concept that can improve the security of IoE big data. The proposed IoE big data security scheme combines the Station-to-Station Key Agreement Protocol (StS KAP) and Feistel cipher algorithms. This paper proposes a DNA-based cryptographic scheme and access control model (DNACDS) to solve IoE big data security and access issues. The experimental results illustrate that DNACDS performs better than other DNA-based security schemes, and the theoretical security analysis of DNACDS shows better resistance capabilities.

Table and Figures | Reference | Supplementary Material | Related Articles | Metrics
Cited: WebOfScience(2)
An improved master-apprentice evolutionary algorithm for minimum independent dominating set problem
Shiwei PAN, Yiming MA, Yiyuan WANG, Zhiguo ZHOU, Jinchao JI, Minghao YIN, Shuli HU
Front. Comput. Sci.    2023, 17 (4): 174326.   https://doi.org/10.1007/s11704-022-2023-7
Abstract   HTML   PDF (5010KB)

The minimum independent dominating set (MIDS) problem is an important variant of the dominating set problem with a number of applications. In this work, we present an improved master-apprentice evolutionary algorithm, called MAE-PB, for solving the MIDS problem based on a path-breaking strategy. The proposed MAE-PB algorithm combines a construction function for initial solution generation with candidate solution restarting. It is a multiple-neighborhood local search algorithm that improves solution quality using a path-breaking strategy for solution recombination between master and apprentice solutions, and a perturbation strategy for disturbing the solution when the algorithm cannot improve the solution quality within a certain number of steps. We show the competitiveness of MAE-PB by presenting computational results on classical benchmarks from the literature and a suite of massive graphs from real-world applications. The results show that MAE-PB achieves high performance: for the classical benchmarks, it obtains the best-known results for seven instances, while for the massive graphs, it improves the best-known results for 62 instances. We also investigate the proposed key ingredients to determine their impact on the performance of the algorithm.
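
For readers unfamiliar with the problem, the sketch below shows a plain greedy construction of an independent dominating set (an illustrative baseline, not the paper's construction function or path-breaking strategy):

```python
def greedy_ids(adj):
    """Greedy construction of an independent dominating set: repeatedly pick an
    undominated vertex covering the most undominated vertices, add it to the
    solution, and mark its closed neighborhood as dominated. The result is a
    maximal independent set, hence also a dominating set."""
    undominated = set(adj)
    solution = set()
    while undominated:
        v = max(undominated, key=lambda u: sum(w in undominated for w in adj[u]))
        solution.add(v)
        undominated -= {v} | set(adj[v])
    return solution

# toy graph: a 7-cycle given as an adjacency dict
adj = {i: [(i - 1) % 7, (i + 1) % 7] for i in range(7)}
print(greedy_ids(adj))   # an independent dominating set of the cycle, e.g., of size 3
```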

Table and Figures | Reference | Supplementary Material | Related Articles | Metrics
Cited: Crossref(1) WebOfScience(5)
An efficient deep learning-assisted person re-identification solution for intelligent video surveillance in smart cities
Muazzam MAQSOOD, Sadaf YASMIN, Saira GILLANI, Maryam BUKHARI, Seungmin RHO, Sang-Soo YEO
Front. Comput. Sci.    2023, 17 (4): 174329.   https://doi.org/10.1007/s11704-022-2050-4
Abstract   HTML   PDF (17519KB)

Innovations in Internet of Everything (IoE) enabled systems are driving a change in the settings where we interact, recognized globally as smart city environments. Intelligent video-surveillance systems are critical to increasing the security of these smart cities. More precisely, in today's world of smart video surveillance, person re-identification (Re-ID) has gained increased consideration from researchers. Many researchers have designed deep learning-based algorithms for person Re-ID because of the substantial breakthroughs deep learning has achieved in computer vision. In this line of research, we design an adaptive feature refinement-based deep learning architecture for person Re-ID. In the proposed architecture, spatial and channel attention are learned to capture the inter-channel and inter-spatial relationships between features of images of the same individual taken from different camera viewpoints. In addition, a spatial pyramid pooling layer is inserted to extract multiscale, fixed-dimension feature vectors irrespective of the size of the feature maps. Furthermore, the model's effectiveness is validated on the CUHK01 and CUHK02 datasets. Compared with existing approaches, the approach presented in this paper achieves encouraging Rank-1 and Rank-5 scores of 24.6% and 54.8%, respectively.
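
The spatial pyramid pooling layer mentioned above yields a fixed-length descriptor regardless of the input feature-map size; a minimal NumPy sketch of the idea (not the authors' implementation; the pyramid levels are illustrative) is:

```python
import numpy as np

def spatial_pyramid_pool(feature_map, levels=(1, 2, 4)):
    """Max-pool a (C, H, W) feature map over 1x1, 2x2 and 4x4 grids and
    concatenate the results, giving a fixed-length vector for any H, W."""
    c, h, w = feature_map.shape
    pooled = []
    for n in levels:
        # cell boundaries of an n x n grid over the spatial dimensions
        hs = np.linspace(0, h, n + 1, dtype=int)
        ws = np.linspace(0, w, n + 1, dtype=int)
        for i in range(n):
            for j in range(n):
                cell = feature_map[:, hs[i]:hs[i + 1], ws[j]:ws[j + 1]]
                pooled.append(cell.max(axis=(1, 2)))
    return np.concatenate(pooled)            # length = C * (1 + 4 + 16)

fmap = np.random.rand(32, 37, 53)             # arbitrary spatial size
print(spatial_pyramid_pool(fmap).shape)       # (672,) regardless of H and W
```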

Table and Figures | Reference | Supplementary Material | Related Articles | Metrics
Cited: Crossref(2) WebOfScience(4)
circ2CBA: prediction of circRNA-RBP binding sites combining deep learning and attention mechanism
Yajing GUO, Xiujuan LEI, Lian LIU, Yi PAN
Front. Comput. Sci.    2023, 17 (5): 175904.   https://doi.org/10.1007/s11704-022-2151-0
Abstract   HTML   PDF (9749KB)

Circular RNAs (circRNAs) are RNAs with a closed circular structure that are involved in many biological processes through key interactions with RNA binding proteins (RBPs). Existing methods for predicting these interactions have limitations in feature learning. In view of this, we propose a method named circ2CBA, which uses only the sequence information of circRNAs to predict circRNA-RBP binding sites. We construct a data set that includes eight sub-datasets. First, circ2CBA encodes circRNA sequences using the one-hot method. Next, a two-layer convolutional neural network (CNN) is used to extract initial features. After the CNN, circ2CBA uses a bidirectional long short-term memory (BiLSTM) layer and a self-attention mechanism to further learn the features. The AUC value of circ2CBA reaches 0.8987. A comparison of circ2CBA with three other methods on our data set and an ablation experiment confirm that circ2CBA is an effective method for predicting binding sites between circRNAs and RBPs.
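
For reference, one-hot encoding of an RNA sequence can be done as below; this is a generic preprocessing sketch rather than circ2CBA's exact code, and the handling of ambiguous bases is an assumption:

```python
import numpy as np

def one_hot_rna(seq, alphabet="ACGU"):
    """Map an RNA sequence to a (len(seq), 4) binary matrix, one row per base.
    Characters outside the alphabet (e.g., 'N') are left as all-zero rows."""
    index = {base: i for i, base in enumerate(alphabet)}
    mat = np.zeros((len(seq), len(alphabet)), dtype=np.float32)
    for pos, base in enumerate(seq.upper()):
        if base in index:
            mat[pos, index[base]] = 1.0
    return mat

x = one_hot_rna("AUGGCUAACG")
print(x.shape)   # (10, 4) -- ready to feed into a CNN/BiLSTM encoder
```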

Table and Figures | Reference | Supplementary Material | Related Articles | Metrics
Cited: Crossref(2) WebOfScience(6)
Jointly beam stealing attackers detection and localization without training: an image processing viewpoint
Yaoqi YANG, Xianglin WEI, Renhui XU, Weizheng WANG, Laixian PENG, Yangang WANG
Front. Comput. Sci.    2023, 17 (3): 173704.   https://doi.org/10.1007/s11704-022-1550-6
Abstract   HTML   PDF (10873KB)

Recently revealed beam stealing attacks could greatly threaten the security and privacy of IEEE 802.11ad communications. The premise for restoring normal network service is detecting and locating beam stealing attackers without their cooperation. Current consistency-based methods are only valid for a single attacker and are parameter-sensitive. From an image processing viewpoint, this paper proposes an algorithm to jointly detect and locate multiple beam stealing attackers based on an RSSI (Received Signal Strength Indicator) map, without the training process involved in deep learning-based solutions. First, an RSSI map is constructed by interpolating the raw RSSI data, enabling high-resolution localization while reducing monitoring cost. Second, three image processing steps, including edge detection and segmentation, are conducted on the constructed RSSI map to detect and locate multiple attackers without any prior knowledge about them. To evaluate the proposal's performance, a series of experiments are conducted on the collected data. Experimental results show that, under typical parameter settings, the algorithm's positioning error does not exceed 0.41 m with a detection rate of no less than 91%.
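
A rough sketch of the first step, interpolating sparse monitor readings into a dense RSSI map whose sharp transitions can then be picked up by edge detection, is shown below (assuming SciPy is available; positions and RSSI values are made up):

```python
import numpy as np
from scipy.interpolate import griddata

# sparse monitor positions (x, y) in metres and their raw RSSI readings in dBm
points = np.array([[0, 0], [10, 0], [0, 10], [10, 10], [5, 5]], dtype=float)
rssi   = np.array([-40.0, -55.0, -52.0, -60.0, -35.0])

# interpolate onto a fine grid to obtain a high-resolution RSSI "image"
xs, ys = np.meshgrid(np.linspace(0, 10, 101), np.linspace(0, 10, 101))
rssi_map = griddata(points, rssi, (xs, ys), method="cubic")

# a gradient-magnitude map highlights sharp RSSI transitions, which the
# later edge-detection and segmentation steps of the paper build on
gy, gx = np.gradient(rssi_map)
edges = np.hypot(gx, gy)
print(rssi_map.shape, np.nanmax(edges))
```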

Table and Figures | Reference | Supplementary Material | Related Articles | Metrics
Cited: Crossref(3) WebOfScience(6)
Non-interactive SM2 threshold signature scheme with identifiable abort
Huiqiang LIANG, Jianhua CHEN
Front. Comput. Sci.    2024, 18 (1): 181802.   https://doi.org/10.1007/s11704-022-2288-x
Abstract   HTML   PDF (9042KB)

A threshold signature is a special digital signature in which N signers share the private key and any subset of at least t signers can construct a valid signature, while fewer than t signers cannot obtain any information about the key. Despite the breakthrough achievements of threshold ECDSA and threshold Schnorr signatures, existing threshold SM2 signatures are still limited to the two-party case or the honest-majority setting, and there is no effective solution for the general multiparty case. To give the SM2 signature more flexible application scenarios and promote its adoption in blockchain systems and secure cryptocurrency wallets, this paper designs a non-interactive threshold SM2 signature scheme based on partially homomorphic encryption and zero-knowledge proofs. Only the last round requires the message as input, which makes our scheme non-interactive, and the pre-signing process takes two rounds of communication after key generation. We allow an arbitrary threshold t ≤ n and design a key update strategy. The scheme achieves security with identifiable abort under a malicious majority, meaning that if the signature process fails, the party responsible for the failure can be identified. Performance analysis shows that the computation and communication costs of the pre-signing process grow linearly with the number of parties, and they are only one third of those of Canetti et al.'s threshold ECDSA (CCS '20).

Table and Figures | Reference | Supplementary Material | Related Articles | Metrics
Vehicle color recognition based on smooth modulation neural network with multi-scale feature fusion
Mingdi HU, Long BAI, Jiulun FAN, Sirui ZHAO, Enhong CHEN
Front. Comput. Sci.    2023, 17 (3): 173321.   https://doi.org/10.1007/s11704-022-1389-x
Abstract   HTML   PDF (8165KB)

Vehicle Color Recognition (VCR) plays a vital role in intelligent traffic management and criminal investigation assistance. However, existing vehicle color datasets cover only 13 classes, which cannot meet current practical demands. Besides, although much effort has been devoted to VCR, existing methods suffer from class imbalance in the datasets. To address these challenges, we propose a novel VCR method based on a Smooth Modulation Neural Network with Multi-Scale Feature Fusion (SMNN-MSFF). Specifically, to construct a benchmark for model training and evaluation, we first present a new VCR dataset with 24 vehicle color classes, Vehicle Color-24, consisting of 10,091 vehicle images from 100 hours of urban road surveillance video. Then, to tackle the long-tail distribution and improve recognition performance, we propose the SMNN-MSFF model with multi-scale feature fusion and smooth modulation. The former extracts feature information from local to global scales, and the latter increases the loss of tail-class instances during training under class imbalance. Finally, comprehensive experimental evaluations on Vehicle Color-24 and three previous representative datasets demonstrate that the proposed SMNN-MSFF outperforms state-of-the-art VCR methods. Extensive ablation studies also demonstrate that each module of our method is effective; in particular, the smooth modulation efficiently helps feature learning of the minority or tail classes. Vehicle Color-24 and the code of SMNN-MSFF are publicly available and can be obtained by contacting the authors.

Table and Figures | Reference | Supplementary Material | Related Articles | Metrics
Cited: Crossref(1) WebOfScience(3)
Meaningful image encryption algorithm based on compressive sensing and integer wavelet transform
Xiaoling HUANG, Youxia DONG, Guodong YE, Yang SHI
Front. Comput. Sci.    2023, 17 (3): 173804.   https://doi.org/10.1007/s11704-022-1419-8
Abstract   HTML   PDF (13152KB)

A new meaningful image encryption algorithm based on compressive sensing (CS) and integer wavelet transform (IWT) is proposed in this study. First of all, the initial values of the chaotic system are encrypted by the RSA algorithm and then published as public keys. To make the chaotic sequence more random, a mathematical model is constructed to improve its randomness. Then, the plain image is compressed and encrypted to obtain the secret image. Secondly, the secret image is padded with zeros to extend it to the same size as the plain image. After applying IWT to the carrier image and discrete wavelet transform (DWT) to the padded image, the secret image is embedded into the carrier image. Finally, a meaningful carrier image embedding the secret plain image is obtained by inverse IWT. Here, the measurement matrix is built from both the chaotic system and a Hadamard matrix, which not only retains the characteristics of the Hadamard matrix but also has the control and synchronization properties of the chaotic system. In particular, the information entropy of the plain image is employed to produce the initial conditions of the chaotic system. As a result, the proposed algorithm can resist known-plaintext attacks (KPA) and chosen-plaintext attacks (CPA). With the help of the asymmetric RSA algorithm, no extra transmission is needed in the communication. Experimental simulations show that the normalized correlation (NC) values between the host image and the cipher image are high; that is, the proposed encryption algorithm is imperceptible and has a good hiding effect.
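
As a toy illustration of how a secret initial value of a chaotic system together with a Hadamard matrix can drive the measurement matrix (a generic logistic-map construction, not necessarily the exact model or matrix used in the paper):

```python
import numpy as np

def logistic_sequence(x0, length, mu=3.99, discard=500):
    """Iterate the logistic map x <- mu*x*(1-x); discard a transient so the
    output depends sensitively on the secret initial value x0."""
    x = x0
    for _ in range(discard):
        x = mu * x * (1 - x)
    seq = np.empty(length)
    for i in range(length):
        x = mu * x * (1 - x)
        seq[i] = x
    return seq

def chaotic_measurement_matrix(x0, m, n):
    """Use the chaotic sequence to select rows of an n x n Hadamard matrix,
    yielding an m x n compressive-sensing measurement matrix."""
    assert n & (n - 1) == 0, "n must be a power of two for a Hadamard matrix"
    h = np.array([[1.0]])
    while h.shape[0] < n:                      # Sylvester construction
        h = np.block([[h, h], [h, -h]])
    seq = logistic_sequence(x0, n)
    rows = np.argsort(seq)[:m]                 # chaotic row selection
    return h[rows] / np.sqrt(n)

phi = chaotic_measurement_matrix(x0=0.347, m=64, n=256)
print(phi.shape)    # (64, 256)
```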

Table and Figures | Reference | Supplementary Material | Related Articles | Metrics
Cited: Crossref(16) WebOfScience(34)
FPGA sharing in the cloud: a comprehensive analysis
Jinyang GUO, Lu ZHANG, José ROMERO HUNG, Chao LI, Jieru ZHAO, Minyi GUO
Front. Comput. Sci.    2023, 17 (5): 175106.   https://doi.org/10.1007/s11704-022-2127-0
Abstract   HTML   PDF (3020KB)

Cloud vendors are actively adopting FPGAs (field-programmable gate arrays) into their infrastructures to enhance performance and efficiency. As cloud services continue to evolve, FPGA systems will play an even more important role in the future. In this context, FPGA sharing in multi-tenancy scenarios is crucial for the wide adoption of FPGAs in the cloud. Recently, much work has been done toward effective FPGA sharing at different layers of the cloud computing stack.

In this work, we provide a comprehensive survey of recent works on FPGA sharing. We examine prior art from different aspects and encapsulate relevant proposals on a few key topics. On the one hand, we discuss representative papers on FPGA resource sharing schemes; on the other hand, we also summarize important SW/HW techniques that support effective sharing. Importantly, we further analyze the system design cost behind FPGA sharing. Finally, based on our survey, we identify key opportunities and challenges of FPGA sharing in future cloud scenarios.

Table and Figures | Reference | Supplementary Material | Related Articles | Metrics
Cited: Crossref(1) WebOfScience(3)
IXT: Improved searchable encryption for multi-word queries based on PSI
Yunbo YANG, Xiaolei DONG, Zhenfu CAO, Jiachen SHEN, Shangmin DOU
Front. Comput. Sci.    2023, 17 (5): 175811.   https://doi.org/10.1007/s11704-022-2236-9
Abstract   HTML   PDF (6166KB)

Oblivious Cross-Tags (OXT) [1] is the first efficient searchable encryption (SE) protocol for conjunctive queries in a single-writer single-reader framework. However, it trades security for efficiency by leaking partial database information to the server. Recent attacks on such SE schemes show that these leakages can be used to recover the content of queried keywords. To solve this problem, Lai et al. [2] propose Hidden Cross-Tags (HXT), which reduces the access pattern leakage from the Keyword Pair Result Pattern (KPRP) to the Whole Result Pattern (WRP). However, the WRP leakage can still be used to recover some additional content of queried keywords. This paper proposes Improved Cross-Tags (IXT), an efficient searchable encryption protocol that achieves access and search pattern hiding based on labeled private set intersection (PSI). We prove that the proposed labeled PSI protocol is secure against semi-honest adversaries and that IXT is L-semi-honest secure, where L is the leakage function. Finally, we conduct experiments to compare IXT with HXT. The experimental results show that the storage and computation overheads of the search phase at the client side in IXT are much lower than those in HXT, and that IXT is scalable and can be applied to datasets of various sizes.

Table and Figures | Reference | Supplementary Material | Related Articles | Metrics
Cited: WebOfScience(1)
FragDPI: a novel drug-protein interaction prediction model based on fragment understanding and unified coding
Zhihui YANG, Juan LIU, Xuekai ZHU, Feng YANG, Qiang ZHANG, Hayat Ali SHAH
Front. Comput. Sci.    2023, 17 (5): 175903.   https://doi.org/10.1007/s11704-022-2163-9
Abstract   HTML   PDF (3426KB)

Prediction of drug-protein binding is critical for virtual drug screening. Many deep learning methods have been proposed to predict drug-protein binding based on protein sequences and drug representation sequences. However, most existing methods extract features from protein and drug sequences separately and therefore cannot learn the features that characterize drug-protein interactions. In addition, existing methods usually encode the protein (drug) sequence under the assumption that each amino acid (atom) contributes equally to the binding, ignoring the different impacts of different amino acids (atoms). However, drug-protein binding usually occurs between conserved residue fragments in the protein sequence and atom fragments of the drug molecule. Therefore, a more comprehensive encoding strategy is required to extract information from these conserved fragments.

In this paper, we propose a novel model, named FragDPI, to predict drug-protein binding affinity. Unlike other methods, we encode the sequences based on conserved fragments and encode the protein and drug into a unified vector. Moreover, we adopt a novel two-step training strategy: a pre-training step learns the interactions between different fragments using unsupervised learning, and a fine-tuning step predicts the binding affinities using supervised learning. The experimental results illustrate the superiority of FragDPI.

Table and Figures | Reference | Supplementary Material | Related Articles | Metrics
Cited: WebOfScience(2)
Three-dimensional quantum wavelet transforms
Haisheng LI, Guiqiong LI, Haiying XIA
Front. Comput. Sci.    2023, 17 (5): 175905.   https://doi.org/10.1007/s11704-022-1639-y
Abstract   HTML   PDF (13043KB)

Wavelet transforms are widely used in information processing. One-dimensional and two-dimensional quantum wavelet transforms have been investigated as important tool algorithms, but three-dimensional quantum wavelet transforms have not been reported. This paper proposes a multi-level three-dimensional quantum wavelet transform theory to implement wavelet transforms for quantum videos. We construct iterative formulas for the multi-level three-dimensional Haar and Daubechies D4 quantum wavelet transforms, and then design quantum circuits for the two transforms using iterative methods. Complexity analysis shows that the proposed wavelet transforms offer exponential speed-up over their classical counterparts. Finally, the proposed quantum wavelet transforms are applied to quantum video compression as a primary application. Simulation results reveal that they achieve better compression performance for quantum videos than two-dimensional quantum wavelet transforms.
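
For comparison with the quantum circuits, the classical single-level three-dimensional Haar transform simply applies the pairwise averaging/differencing step along each axis in turn; the NumPy sketch below is illustrative only and is not the paper's quantum implementation:

```python
import numpy as np

def haar_1d(a, axis):
    """One level of the orthonormal Haar transform along one axis:
    pairwise averages (low-pass) followed by pairwise differences (high-pass)."""
    a = np.moveaxis(a, axis, 0)
    even, odd = a[0::2], a[1::2]
    low, high = (even + odd) / np.sqrt(2), (even - odd) / np.sqrt(2)
    return np.moveaxis(np.concatenate([low, high]), 0, axis)

def haar_3d(volume):
    """Single-level 3D Haar transform: apply the 1D step along x, y and z.
    Each dimension must be even (powers of two in the multi-level case)."""
    out = volume.astype(float)
    for axis in range(3):
        out = haar_1d(out, axis)
    return out

video = np.random.rand(8, 8, 8)      # e.g., an 8-frame 8x8 video cube
coeffs = haar_3d(video)
print(coeffs.shape)                  # (8, 8, 8): one low-pass and seven detail subbands
```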

Table and Figures | Reference | Supplementary Material | Related Articles | Metrics
Cited: Crossref(1) WebOfScience(4)
VIS+AI: integrating visualization with artificial intelligence for efficient data analysis
Xumeng WANG, Ziliang WU, Wenqi HUANG, Yating WEI, Zhaosong HUANG, Mingliang XU, Wei CHEN
Front. Comput. Sci.    2023, 17 (6): 176709.   https://doi.org/10.1007/s11704-023-2691-y
Abstract   HTML   PDF (13671KB)

Visualization and artificial intelligence (AI) are widely applied approaches to data analysis. On one hand, visualization facilitates human understanding of data through intuitive visual representation and interactive exploration. On the other hand, AI is able to learn from data and carry out laborious tasks for humans. In complex data analysis scenarios, such as epidemic traceability and city planning, humans need to understand large-scale data and make decisions, which requires complementing the strengths of both visualization and AI. Existing studies have introduced AI-assisted visualization as AI4VIS and visualization-assisted AI as VIS4AI. However, a systematic account of how AI and visualization can complement each other and be integrated into data analysis processes is still missing. In this paper, we define three integration levels of visualization and AI. The highest integration level is described as the framework of VIS+AI, which allows AI to learn human intelligence from interactions and communicate with humans through visual interfaces. We also summarize future directions of VIS+AI to inspire related studies.

Table and Figures | Reference | Supplementary Material | Related Articles | Metrics
Cited: WebOfScience(3)
Attribute augmentation-based label integration for crowdsourcing
Yao ZHANG, Liangxiao JIANG, Chaoqun LI
Front. Comput. Sci.    2023, 17 (5): 175331.   https://doi.org/10.1007/s11704-022-2225-z
Abstract   HTML   PDF (4214KB)

Crowdsourcing provides an effective and low-cost way to collect labels from crowd workers. Due to the lack of professional knowledge, the quality of crowdsourced labels is relatively low. A common approach to addressing this issue is to collect multiple labels for each instance from different crowd workers and then a label integration method is used to infer its true label. However, to our knowledge, almost all existing label integration methods merely make use of the original attribute information and do not pay attention to the quality of the multiple noisy label set of each instance. To solve these issues, this paper proposes a novel three-stage label integration method called attribute augmentation-based label integration (AALI). In the first stage, we design an attribute augmentation method to enrich the original attribute space. In the second stage, we develop a filter to single out reliable instances with high-quality multiple noisy label sets. In the third stage, we use majority voting to initialize integrated labels of reliable instances and then use cross-validation to build multiple component classifiers on reliable instances to predict all instances. Experimental results on simulated and real-world crowdsourced datasets demonstrate that AALI outperforms all the other state-of-the-art competitors.
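
The third stage starts from majority voting over each instance's multiple noisy label set; in its simplest generic form (a sketch of that single step, not AALI's full pipeline):

```python
from collections import Counter

def majority_vote(noisy_labels):
    """Return the most frequent label in an instance's multiple noisy label set;
    ties are broken by first occurrence."""
    return Counter(noisy_labels).most_common(1)[0][0]

# each inner list is the multiple noisy label set one instance received from workers
crowd = [["cat", "cat", "dog"], ["dog", "dog", "dog"], ["cat", "dog", "cat", "cat"]]
print([majority_vote(lbls) for lbls in crowd])   # ['cat', 'dog', 'cat']
```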

Table and Figures | Reference | Supplementary Material | Related Articles | Metrics
Cited: Crossref(3) WebOfScience(8)
Unsupervised spectral feature selection algorithms for high dimensional data
Mingzhao WANG, Henry HAN, Zhao HUANG, Juanying XIE
Front. Comput. Sci.    2023, 17 (5): 175330.   https://doi.org/10.1007/s11704-022-2135-0
Abstract   HTML   PDF (20445KB)

Detecting informative features to support explainable analysis of high-dimensional data is a significant and challenging task, especially for data with a very small number of samples. Feature selection, especially unsupervised feature selection, is the right way to address this challenge. Therefore, two unsupervised spectral feature selection algorithms are proposed in this paper. They group features using an advanced self-tuning spectral clustering algorithm based on local standard deviation, so as to detect globally optimal feature clusters as far as possible. Two feature ranking techniques, cosine-similarity-based ranking and entropy-based ranking, are then proposed so that the representative feature of each cluster can be selected to form the feature subset on which an explainable classification system is built. The effectiveness of the proposed algorithms is tested on high-dimensional benchmark omics datasets and compared with peer methods, and statistical tests are conducted to determine whether the proposed spectral feature selection algorithms differ significantly from the peer methods. Extensive experiments demonstrate that the proposed unsupervised spectral feature selection algorithms outperform the peer methods, especially the one based on the cosine-similarity feature ranking technique, while the statistical test results show that the entropy-based spectral feature selection algorithm performs best. The detected features show strong discriminative capability in downstream classifiers for omics data, so that the AI systems built on them are reliable and explainable. This is especially significant for building transparent and trustworthy medical diagnostic systems from an interpretable AI perspective.
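
The cosine-similarity-based ranking can be read as picking, within each feature cluster, the feature most similar on average to its cluster peers; the sketch below is our simplified reading of that idea, not the authors' code:

```python
import numpy as np

def representative_by_cosine(X, cluster_idx):
    """X: (n_samples, n_features) data matrix; cluster_idx: column indices of one
    feature cluster. Return the index of the feature whose mean cosine similarity
    to the other features in the cluster is highest."""
    F = X[:, cluster_idx]                         # columns = features in this cluster
    F = F / (np.linalg.norm(F, axis=0, keepdims=True) + 1e-12)
    sim = F.T @ F                                 # pairwise cosine similarities
    np.fill_diagonal(sim, 0.0)
    mean_sim = sim.sum(axis=1) / (len(cluster_idx) - 1)
    return cluster_idx[int(np.argmax(mean_sim))]

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 6))
print(representative_by_cosine(X, [0, 2, 5]))     # representative feature of this cluster
```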

Table and Figures | Reference | Supplementary Material | Related Articles | Metrics
Cited: WebOfScience(2)
A novel dense retrieval framework for long document retrieval
Jiajia WANG, Weizhong ZHAO, Xinhui TU, Tingting HE
Front. Comput. Sci.    2023, 17 (4): 174609.   https://doi.org/10.1007/s11704-022-2041-5
Abstract   HTML   PDF (644KB)
Table and Figures | Reference | Supplementary Material | Related Articles | Metrics
Cited: WebOfScience(1)
DBST: a lightweight block cipher based on dynamic S-box
Liuyan YAN, Lang LI, Ying GUO
Front. Comput. Sci.    2023, 17 (3): 173805.   https://doi.org/10.1007/s11704-022-1677-5
Abstract   HTML   PDF (6943KB)

IoT devices have been widely used with the advent of 5G, and they carry a large amount of private data during transmission, so it is critically important to ensure their security. Therefore, we propose a lightweight block cipher based on a dynamic S-box, named DBST. It is designed for devices with limited hardware resources and high throughput requirements. DBST is a 128-bit block cipher supporting a 64-bit key and is based on a new generalized Feistel variant structure, which retains the consistency of the traditional Feistel structure while significantly boosting its diffusion. The SubColumns step of the round function is implemented by combining bit-slice technology with the subkeys, and the S-box is dynamically associated with the key. It has been demonstrated that DBST has a good avalanche effect, low hardware area, and high throughput. Our S-box has been proven to have fewer differential characteristics than the RECTANGLE S-box. The security analysis of DBST reveals that it can resist impossible differential attacks, differential attacks, linear attacks, and other types of attacks.
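
To make the structural claim concrete, the sketch below shows a textbook balanced Feistel network, the building block that DBST's generalized Feistel variant extends; the round function, keys, and word size here are toy placeholders, not DBST's actual design:

```python
def feistel_encrypt(left, right, round_keys, f):
    """Balanced Feistel network: each round passes the right half through the
    keyed round function f, XORs the result into the left half, and swaps."""
    for k in round_keys:
        left, right = right, left ^ f(right, k)
    return left, right

def feistel_decrypt(left, right, round_keys, f):
    """Decryption runs the same rounds with the key schedule reversed."""
    for k in reversed(round_keys):
        right, left = left, right ^ f(left, k)
    return left, right

# toy 32-bit round function; a real cipher like DBST uses S-boxes and diffusion here
f = lambda x, k: ((x * 0x9E3779B1) ^ k) & 0xFFFFFFFF
keys = [0x0F0F0F0F, 0x12345678, 0xCAFEBABE, 0x0BADF00D]
l, r = feistel_encrypt(0xDEADBEEF, 0x01234567, keys, f)
assert feistel_decrypt(l, r, keys, f) == (0xDEADBEEF, 0x01234567)
```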

Table and Figures | Reference | Related Articles | Metrics
Cited: Crossref(1) WebOfScience(4)
Iterative Android automated testing
Yi ZHONG, Mengyu SHI, Youran XU, Chunrong FANG, Zhenyu CHEN
Front. Comput. Sci.    2023, 17 (5): 175212.   https://doi.org/10.1007/s11704-022-1658-8
Abstract   HTML   PDF (9737KB)

With the benefits of reducing time and workforce, automated testing has been widely used for the quality assurance of mobile applications (apps). Compared with automated testing, manual testing can achieve higher coverage in complex interactive Activities, and its effectiveness depends highly on the user operation processes (UOPs) of experienced testers. Based on UOPs, we propose an iterative Android automated testing (IAAT) method that automatically records, extracts, and integrates UOPs to guide the tool's test logic through complex Activities iteratively. The fed-back test results are used to train the UOPs to achieve higher coverage in each iteration. We extracted 50 UOPs and conducted experiments on 10 popular mobile apps to demonstrate IAAT's effectiveness compared with Monkey and the initial automated tests. The experimental results show a noticeable improvement of IAAT over test logic without human knowledge: within a 60-minute test time, the average code coverage improves by 13.98% to 37.83%, higher than the 27.48% achieved by Monkey under the same conditions.

Table and Figures | Reference | Supplementary Material | Related Articles | Metrics
Cited: WebOfScience(2)
Representation learning via an integrated autoencoder for unsupervised domain adaptation
Yi ZHU, Xindong WU, Jipeng QIANG, Yunhao YUAN, Yun LI
Front. Comput. Sci.    2023, 17 (5): 175334.   https://doi.org/10.1007/s11704-022-1349-5
Abstract   HTML   PDF (4841KB)

The purpose of unsupervised domain adaptation is to use knowledge from a source domain, whose data distribution differs from that of the target domain, to promote the learning task in the target domain. The key bottleneck in unsupervised domain adaptation is obtaining higher-level, more abstract feature representations between the source and target domains that can bridge the chasm of domain discrepancy. Recently, deep learning methods based on autoencoders have achieved sound performance in representation learning, and many dual or serial autoencoder-based methods take different characteristics of the data into consideration to improve the effectiveness of unsupervised domain adaptation. However, most existing autoencoder-based methods just serially connect the features generated by different autoencoders, which poses challenges for discriminative representation learning and fails to find the real cross-domain features. To address this problem, we propose a novel representation learning method based on integrated autoencoders for unsupervised domain adaptation, called IAUDA. To capture the inter- and inner-domain features of the raw data, two different autoencoders, a marginalized autoencoder with maximum mean discrepancy (mAEMMD) and a convolutional autoencoder (CAE), are proposed to learn different feature representations. After higher-level features are obtained by these two autoencoders, a sparse autoencoder is introduced to compact these inter- and inner-domain representations. In addition, a whitening layer is embedded to process features before the mAEMMD to reduce redundant features inside a local area. Experimental results demonstrate the effectiveness of our proposed method compared with several state-of-the-art baseline methods.
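
The maximum mean discrepancy used in the mAEMMD component measures the distance between source and target feature distributions; a standard RBF-kernel estimator (a generic biased estimate, not the paper's code) looks like this:

```python
import numpy as np

def mmd_rbf(Xs, Xt, gamma=1.0):
    """Squared maximum mean discrepancy between two samples under an RBF kernel
    k(a, b) = exp(-gamma * ||a - b||^2); smaller values mean better alignment."""
    def kernel(A, B):
        d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
        return np.exp(-gamma * d2)
    return kernel(Xs, Xs).mean() + kernel(Xt, Xt).mean() - 2 * kernel(Xs, Xt).mean()

rng = np.random.default_rng(1)
source = rng.normal(0.0, 1.0, size=(100, 16))
target = rng.normal(0.5, 1.0, size=(120, 16))
print(mmd_rbf(source, target))   # > 0; shrinks as the two domains are aligned
```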

Table and Figures | Reference | Supplementary Material | Related Articles | Metrics
Cited: WebOfScience(5)
Software approaches for resilience of high performance computing systems: a survey
Jie JIA, Yi LIU, Guozhen ZHANG, Yulin GAO, Depei QIAN
Front. Comput. Sci.    2023, 17 (4): 174105.   https://doi.org/10.1007/s11704-022-2096-3
Abstract   HTML   PDF (8331KB)

With the scaling up of high-performance computing (HPC) systems in recent years, their reliability has been declining continuously. Therefore, system resilience has been regarded as one of the critical challenges for large-scale HPC systems. Various techniques and systems have been proposed to ensure the correct execution and completion of parallel programs. This paper provides a comprehensive survey of existing software resilience approaches. Firstly, a classification of software resilience approaches is presented; then we introduce the major approaches and techniques, including checkpointing, replication, soft error resilience, algorithm-based fault tolerance, and fault detection and prediction. In addition, challenges posed by system scale and heterogeneous architectures are also discussed.

Table and Figures | Reference | Supplementary Material | Related Articles | Metrics
Exploiting blockchain for dependable services in zero-trust vehicular networks
Min HAO, Beihai TAN, Siming WANG, Rong YU, Ryan Wen LIU, Lisu YU
Front. Comput. Sci.    2024, 18 (2): 182805.   https://doi.org/10.1007/s11704-023-2495-0
Abstract   HTML   PDF (13863KB)

The sixth-generation (6G) wireless communication system is envisioned to be capable of providing highly dependable services by integrating natively reliable and trustworthy functionalities. Zero-trust vehicular networks are one of the typical scenarios for 6G dependable services. Under the technical framework of vehicle-and-roadside collaboration, more and more on-board devices and roadside infrastructures will communicate to exchange information. The reliability and security of this vehicle-and-roadside collaboration will directly affect transportation safety. Considering a zero-trust vehicular environment, to prevent malicious vehicles from uploading false or invalid information, we propose a malicious vehicle identity disclosure approach based on the Shamir secret sharing scheme. Meanwhile, a two-layer consortium blockchain architecture and smart contracts are designed to protect the identity and privacy of benign vehicles as well as the security of their private data. After that, to improve the efficiency of vehicle identity disclosure, we present an inspection policy based on zero-sum game theory and a roadside unit incentive mechanism that jointly uses contract theory and a subjective logic model. We verify the performance of the entire zero-trust solution through extensive simulation experiments. On the premise of protecting vehicle privacy, our solution is demonstrated to significantly improve the reliability and security of 6G vehicular networks.
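
The identity-disclosure step builds on Shamir's (t, n) secret sharing: the identity is hidden in the constant term of a random degree-(t-1) polynomial, and any t shares recover it by Lagrange interpolation. A minimal textbook sketch over a prime field follows; the field size and identity encoding are illustrative only:

```python
import random

P = 2**61 - 1   # a public prime defining the finite field

def split(secret, t, n):
    """Shamir (t, n) sharing: shares are points (i, f(i)) of a random
    degree-(t-1) polynomial with f(0) = secret."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(i, sum(c * pow(i, j, P) for j, c in enumerate(coeffs)) % P)
            for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret from any t shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

vehicle_id = 123456789                       # e.g., an encoded vehicle identity
shares = split(vehicle_id, t=3, n=5)
assert reconstruct(random.sample(shares, 3)) == vehicle_id
```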

Table and Figures | Reference | Supplementary Material | Related Articles | Metrics
Incorporating metapath interaction on heterogeneous information network for social recommendation
Yanbin JIANG, Huifang MA, Xiaohui ZHANG, Zhixin LI, Liang CHANG
Front. Comput. Sci.    2024, 18 (1): 181302.   https://doi.org/10.1007/s11704-022-2438-1
Abstract   HTML   PDF (5772KB)

Heterogeneous information networks (HINs) have recently been widely adopted to describe complex graph structure in recommendation systems, proving effective in modeling complex graph data. Although existing HIN-based recommendation studies have achieved great success by performing message propagation between connected nodes on defined metapaths, they have two major limitations. First, existing works mainly convert heterogeneous graphs into homogeneous graphs by defining metapaths, which are not expressive enough to capture the more complicated dependency relationships along a metapath. Second, the heterogeneous information is mostly provided by item attributes, while social relations between users are not adequately considered. To tackle these limitations, we propose a novel social recommendation model, MPISR, which models MetaPath Interaction for Social Recommendation on heterogeneous information networks. Specifically, our model first learns initial node representations through a pretraining module, and then identifies potential social friends and item relations based on their similarity to construct a unified HIN. We then develop a two-way encoder module, with a similarity encoder and an instance encoder, to capture the similarity collaborative signals and relational dependencies on different metapaths. Extensive experiments on five real datasets demonstrate the effectiveness of our method.

Table and Figures | Reference | Supplementary Material | Related Articles | Metrics
Quantifying predictability of sequential recommendation via logical constraints
En XU, Zhiwen YU, Nuo LI, Helei CUI, Lina YAO, Bin GUO
Front. Comput. Sci.    2023, 17 (5): 175612.   https://doi.org/10.1007/s11704-022-2223-1
Abstract   HTML   PDF (3055KB)

Sequential recommendation is a compelling technology for predicting users' next interaction from their historical behaviors. Prior studies have proposed various methods to optimize recommendation accuracy on different datasets but have not yet explored the intrinsic predictability of sequential recommendation. To this end, we consider applying the popular predictability theory of human movement behavior to this recommendation context. Applied directly, however, it would incur serious bias when measuring the candidate set size at the next moment, resulting in inaccurate predictability. Determining the size of the candidate set is therefore the key to quantifying the predictability of sequential recommendations. Here, unlike the traditional approach that utilizes topological constraints, we first propose a method that learns inter-item associations from historical behaviors to restrict the candidate set size via logical constraints. We then extend it with ten strong recommendation algorithms to learn deeper associations between user behaviors. Our two methods show significant improvement over existing methods in scenarios with few repeated behaviors and large behavior sets. Finally, a predictability of 64% to 80% is obtained by testing on five classical datasets from three recommender system domains. This provides a guideline for optimizing the recommendation algorithm for a given dataset.
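
The predictability theory referenced here upper-bounds prediction accuracy by solving Fano's inequality for the entropy of the behavior sequence. The sketch below shows that final step with a simple plug-in entropy estimate and a fixed candidate-set size; the paper instead restricts the candidate set via logical constraints, so this is illustrative only:

```python
import math
from collections import Counter

def sequence_entropy(seq):
    """Plug-in (zeroth-order) entropy estimate of a behavior sequence, in bits."""
    counts = Counter(seq)
    n = len(seq)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def max_predictability(entropy, n_candidates):
    """Solve Fano's inequality  H = H_b(p) + (1 - p) * log2(N - 1)  for the
    maximum achievable prediction accuracy p by bisection."""
    def fano(p):
        hb = (-p * math.log2(p) - (1 - p) * math.log2(1 - p)) if 0 < p < 1 else 0.0
        return hb + (1 - p) * math.log2(n_candidates - 1) - entropy
    lo, hi = 1.0 / n_candidates, 1.0 - 1e-12
    for _ in range(100):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if fano(mid) > 0 else (lo, mid)
    return lo

history = ["a", "b", "a", "a", "c", "a", "b", "a"]
S = sequence_entropy(history)
print(round(max_predictability(S, n_candidates=3), 3))
```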

Table and Figures | Reference | Supplementary Material | Related Articles | Metrics
Cited: WebOfScience(3)