A revisit to MacKay algorithm and its application to deep network compression
Chune LI, Yongyi MAO, Richong ZHANG, Jinpeng HUAI
Front. Comput. Sci. 2020, 14 (4): 144304.
https://doi.org/10.1007/s11704-019-8390-z
An iterative procedure introduced in MacKay’s evidence framework is often used for estimating the hyperparameter in empirical Bayes. Together with a particular form of prior, the estimation of the hyperparameter reduces to an automatic relevance determination model, which provides a soft way of pruning model parameters. Despite the effectiveness of this estimation procedure, it has to date remained primarily a heuristic, and its application to deep neural networks has not yet been explored. This paper formally investigates the mathematical nature of this procedure and justifies it as a well-principled algorithmic framework, which we call the MacKay algorithm. As an application, we demonstrate its use in deep neural networks, which typically have complicated structures with millions of parameters and can be pruned to reduce memory requirements and boost computational efficiency. In experiments, we adopt the MacKay algorithm to prune the parameters of simple networks such as LeNet, deep convolutional VGG-like networks, and residual networks for large-scale image classification tasks. Experimental results show that the algorithm can compress neural networks to a high level of sparsity with little loss of prediction accuracy, comparable with the state of the art.
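For context, the iterative hyperparameter update at the heart of the evidence framework is well documented for Bayesian linear models. The sketch below implements MacKay's classic fixed-point updates for Bayesian ridge regression — the textbook procedure the abstract alludes to, not the paper's deep-network extension; the function name and parameter defaults are illustrative assumptions:

```python
import numpy as np

def mackay_evidence(Phi, t, alpha=1.0, beta=1.0, iters=100):
    """MacKay's evidence-framework fixed point for Bayesian ridge regression.

    alpha: prior precision of the weights; beta: noise precision.
    Both are re-estimated from gamma, the effective number of
    well-determined parameters.
    """
    N, M = Phi.shape
    for _ in range(iters):
        A = alpha * np.eye(M) + beta * Phi.T @ Phi   # posterior precision
        m = beta * np.linalg.solve(A, Phi.T @ t)     # posterior mean of weights
        lam = beta * np.linalg.eigvalsh(Phi.T @ Phi) # eigenvalues of beta*Phi^T Phi
        gamma = np.sum(lam / (alpha + lam))          # well-determined parameter count
        alpha = gamma / (m @ m)                      # MacKay's alpha update
        beta = (N - gamma) / np.sum((t - Phi @ m) ** 2)  # noise precision update
    return alpha, beta, m
```

The automatic relevance determination variant discussed in the paper assigns one such alpha per parameter (or group), so parameters whose alpha diverges are effectively pruned.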
Adaptive sparse and dense hybrid representation with nonconvex optimization
Xuejun WANG, Feilong CAO, Wenjian WANG
Front. Comput. Sci. 2020, 14 (4): 144306.
https://doi.org/10.1007/s11704-019-7200-y
Sparse representation has been widely used in signal processing, pattern recognition, computer vision, and other areas, with excellent achievements in both theoretical research and practical applications. However, its application to classification has two limitations: sufficient training samples are required for each class, and the samples should be uncorrupted. To alleviate these problems, a sparse and dense hybrid representation (SDR) framework has been proposed, in which the training dictionary is decomposed into a class-specific dictionary and a non-class-specific dictionary. SDR constrains the coefficients of the class-specific dictionary. Nevertheless, it over-emphasizes sparsity and overlooks the correlation information in the class-specific dictionary, which may lead to poor classification results. To overcome this disadvantage, an adaptive sparse and dense hybrid representation with nonconvex optimization (ASDR-NO) is proposed in this paper. Unlike general approaches, the trace norm is adopted for the class-specific dictionary, which makes the dictionary structure adaptive and improves the representation ability of the dictionary. Meanwhile, a nonconvex surrogate is used to approximate the rank function in the dictionary decomposition in order to avoid a suboptimal solution of the original rank minimization; the resulting problem can be solved by the iteratively reweighted nuclear norm (IRNN) algorithm. Extensive experiments conducted on benchmark data sets have verified the effectiveness and superiority of the proposed algorithm compared with state-of-the-art sparse representation methods.
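As an illustration of the IRNN idea the abstract refers to, the sketch below applies iteratively reweighted singular value thresholding to a simple low-rank denoising problem, using a nonconvex log surrogate for the rank function. The surrogate choice, parameter values, and the denoising objective are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def irnn_lowrank(Y, lam=0.5, eps=1e-2, iters=20):
    """Solve min_X 0.5*||X - Y||_F^2 + sum_i g(sigma_i(X)) by IRNN,
    with the nonconvex log surrogate g(s) = lam*log(1 + s/eps).

    Its supergradient g'(s) = lam/(eps + s) shrinks small singular
    values strongly while barely penalizing large ones, unlike the
    convex nuclear norm, which penalizes all of them equally.
    """
    U, s_y, Vt = np.linalg.svd(Y, full_matrices=False)
    s = s_y.copy()
    for _ in range(iters):
        w = lam / (eps + s)            # weights from the current estimate
        s = np.maximum(s_y - w, 0.0)   # weighted singular value thresholding
    return U @ np.diag(s) @ Vt
```

Run on a noisy rank-1 matrix, the iteration keeps the dominant singular direction and drives the noise-level singular values to zero.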
A novel classifier for multivariate instance using graph class signatures
Parnika PARANJAPE, Meera DHABU, Parag DESHPANDE
Front. Comput. Sci. 2020, 14 (4): 144307.
https://doi.org/10.1007/s11704-019-8263-5
Applications such as identifying different customers from their unique buying behaviours, or determining the ratings of a product given by users based on different sets of features, require classification using class-specific subsets of features. Most existing state-of-the-art classifiers for multivariate data use the complete feature set for classification regardless of the class labels. Decision tree classifiers can produce class-wise subsets of features, but none of these classifiers model the relationships between features, which may enhance classification accuracy. We call the class-specific subsets of features, together with the features’ interrelationships, class signatures. In this work, we propose to map the original input space of multivariate data to a feature space characterized by connected graphs, as graphs can easily model entities, their attributes, and relationships among attributes. Entities are usually modeled using graphs where graphs occur naturally, for example, chemical compounds; however, graphs do not occur naturally in multivariate data, so extracting class signatures from multivariate data is a challenging task. We propose several feature selection heuristics to obtain class-specific prominent subgraph signatures, and two variants of a class-signature-based classifier, namely 1) maximum matching signature (gMM) and 2) score and size of matched signatures (gSM). The effectiveness of the proposed approach has been studied on real-world and synthetic datasets and compared with other established classifiers. Experimental results confirm the superiority of the proposed class-signature-based classifier on most of the datasets.
An efficient multipath routing schema in multi-homing scenario based on protocol-oblivious forwarding
Pufang MA, Jiali YOU, Jinlin WANG
Front. Comput. Sci. 2020, 14 (4): 144501.
https://doi.org/10.1007/s11704-019-8397-5
With the advent of 5G, multi-homing will be an increasingly common scenario, which is expected to increase transmission rates, improve transmission reliability, and reduce costs for users. However, current routing methods are unable to fully utilize network resources to achieve high-performance data transmission for multi-homed devices. In the current routing mechanism, a packet forwarded to a multi-homed host carries only one destination address, so it is difficult for the packet to adjust its path on the fly according to the status of the network. In this paper, we present an efficient routing schema for the multi-homing scenario based on protocol-oblivious forwarding (POF). In the proposed schema, the packet forwarded to the multi-homed host carries multiple destination addresses, gaining the ability to switch transmission paths; meanwhile, the router dynamically adjusts the path of the packet according to its perception of the network status. Experimental results show that our schema can utilize the alternative paths properly and significantly improve transmission efficiency.
Diversification on big data in query processing
Meifan ZHANG, Hongzhi WANG, Jianzhong LI, Hong GAO
Front. Comput. Sci. 2020, 14 (4): 144607.
https://doi.org/10.1007/s11704-019-8324-9
Recently, in the area of big data, popular applications such as web search engines and recommendation systems face the problem of diversifying results during query processing. It is therefore both significant and essential to propose methods for big data that increase the diversity of the result set. In this paper, we first define the diversity of a set and the ability of an element to improve the overall diversity. Based on these definitions, we propose a diversification framework with good performance in terms of both effectiveness and efficiency, as well as a theoretical guarantee on the probability of success. Second, we design implementation algorithms based on this framework for both numerical and string data. Third, for numerical and string data respectively, we carry out extensive experiments on real data to verify the performance of the proposed framework, and also perform scalability experiments on synthetic data.
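As a reference point for result diversification, the following sketch shows a generic greedy max-min heuristic, in which each added element is the one farthest from the set chosen so far. This is a standard illustrative baseline under Euclidean distance, not the probabilistic framework proposed in the paper:

```python
import numpy as np

def greedy_diversify(points, k):
    """Greedy max-min diversification: starting from the first point,
    repeatedly pick the element whose distance to its nearest already
    chosen point is largest, so each pick maximally improves the set's
    minimum pairwise distance. Returns the indices of the k chosen points."""
    points = np.asarray(points, dtype=float)
    chosen = [0]                                   # seed with the first point
    for _ in range(k - 1):
        # distance from every point to its nearest chosen point
        d = np.min(np.linalg.norm(points[:, None] - points[chosen], axis=2),
                   axis=1)
        d[chosen] = -1.0                           # never re-pick a chosen point
        chosen.append(int(np.argmax(d)))
    return chosen
```

The same greedy template applies to string data by swapping the Euclidean distance for, e.g., an edit distance.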
Practical continuous leakage-resilient CCA secure identity-based encryption
Yanwei ZHOU, Bo YANG
Front. Comput. Sci. 2020, 14 (4): 144804.
https://doi.org/10.1007/s11704-019-8140-2
Leakage of private information, including users’ private keys, has become a threat to the security of computing systems, and it is now a common requirement that a cryptographic scheme should withstand various leakage attacks. In real life, an adversary can break the security of a cryptographic primitive by performing continuous leakage attacks. Although some research on leakage-resilient cryptography has been carried out, issues remain in previous attempts: identity-based encryption (IBE) constructions were designed in the bounded-leakage model and might not meet their claimed security under continuous leakage attacks. In real applications the leakage is unbounded, so a practical cryptographic scheme should keep its original security in the continuous leakage setting. Previous continuous leakage-resilient IBE schemes either achieve only chosen-plaintext attack security, or their chosen-ciphertext attack (CCA) security is proved in the selective-identity model. To solve these problems, in this paper we show how to construct a continuous leakage-resilient IBE scheme whose adaptive CCA security is proved in the standard model based on the hardness of the decisional bilinear Diffie-Hellman exponent assumption. For any adversary, all elements in the ciphertext are random, so an adversary cannot obtain any leakage on the user’s private key from the corresponding ciphertext. Moreover, the leakage parameter of our proposal is independent of the plaintext space and has a constant size.
A topology and risk-aware access control framework for cyber-physical space
Yan CAO, Zhiqiu HUANG, Yaoshen YU, Changbo KE, Zihao WANG
Front. Comput. Sci. 2020, 14 (4): 144805.
https://doi.org/10.1007/s11704-019-8454-0
Cyber-physical space is a spatial environment that integrates the cyber world and the physical world, aiming to provide an intelligent environment for users to conduct their day-to-day activities. The interplay between the cyber space and the physical space raises specific security requirements that are not captured by traditional access control frameworks. On one hand, the security of both the physical space and the cyber space must be considered in a cyber-physical space; on the other hand, failure to provide secure policy enforcement may directly affect the controlled physical world. In this paper, we propose an effective access control framework for cyber-physical space. First, a topology-aware access control (TAAC) model is proposed, which can express cyber access control, physical access control, and interaction access control simultaneously. Second, a risk assessment approach is proposed for the policy enforcement phase; it evaluates user behavior and ensures that suspicious behaviors executed by authorized users are handled correctly. Third, we propose a role activation algorithm to ensure that objects are accessed only by legal and honest users. Finally, we evaluate our approach with an illustrative example and a performance analysis. The results demonstrate the feasibility of our approach.
Zero-pole cancellation for identity-based aggregators: a constant-size designated verifier-set signature
E CHEN, Yan ZHU, Changlu LIN, Kewei LV
Front. Comput. Sci. 2020, 14 (4): 144806.
https://doi.org/10.1007/s11704-019-8320-0
In this paper we present a designated verifier-set signature (DVSS), in which the signer is allowed to designate many verifiers rather than one, and each designated verifier can verify the validity of the signature by himself. Our research starts from an identity-based aggregator (IBA) that compresses a designated set of verifiers’ identities into a constant-size random string in a cryptographic space. The IBA is constructed by mapping the hash of a verifier’s identity into a zero or a pole of a target curve, and extracting one point of the curve as the result of aggregation according to a specific secret. Depending on the type of target curve, the two IBAs are called the zeros-based aggregator and the poles-based aggregator, respectively. Based on them, we propose a practical DVSS scheme constructed from the zero-pole cancellation method, which eliminates the common elements between the zeros-based aggregator and the poles-based aggregator. Owing to this design, our DVSS scheme has some distinct advantages: (1) the signature supports arbitrary dynamic verifiers extracted from a large number of users; and (2) the signature has a short and constant length. We rigorously prove that our DVSS scheme satisfies the security properties of correctness, consistency, unforgeability, and exclusivity.
A survey of current trends in computational predictions of protein-protein interactions
Yanbin WANG, Zhuhong YOU, Liping LI, Zhanheng CHEN
Front. Comput. Sci. 2020, 14 (4): 144901.
https://doi.org/10.1007/s11704-019-8232-z
Proteomics became an important research area in the life sciences after the completion of the Human Genome Project. This discipline studies the characteristics of proteins at the large-scale data level in order to gain a holistic and comprehensive understanding of disease occurrence and cell metabolism at the protein level. A key issue in proteomics is how to efficiently analyze the massive amounts of protein data produced by high-throughput technologies. Low-cost, short-cycle computational technologies are becoming the preferred methods for solving important problems in the post-genome era, such as detecting protein-protein interactions (PPIs). In this review, we focus on computational methods for PPI detection and show recent advancements in this critical area from multiple aspects. First, we analyze in detail the challenges facing computational methods for predicting PPIs and summarize the available PPI data sources. Second, we describe the state-of-the-art computational methods recently proposed on this topic. Finally, we discuss some important technologies that can promote PPI prediction and the development of computational proteomics.