Semantic theories of programs with nested interrupts
Yanhong HUANG, Jifeng HE, Huibiao ZHU, Yongxin ZHAO, Jianqi SHI, Shengchao QIN
Front. Comput. Sci., 2015, 9 (3): 331-345.
https://doi.org/10.1007/s11704-015-3251-x
In the design of dependable software for embedded and real-time operating systems, time analysis is a crucial but extremely difficult issue, and the challenge is exacerbated by the randomness and nondeterminism of interrupt handling behaviors. A theory that integrates interrupt behaviors with time analysis is therefore both important and challenging. In this paper, we present a programming language for describing programs with interrupts, which comprise two essential parts: a main program and interrupt handling programs. We also explore a timed operational semantics and a denotational semantics to specify the meaning of our language. Furthermore, we provide a strategy for deriving a denotational semantics from the timed operational semantics, and demonstrate the soundness of our operational semantics by showing the consistency between the derived denotational semantics and the original one.
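The two-part structure described above (a main program preempted by interrupt handling programs) can be illustrated with a toy discrete-time scheduler. This is a minimal sketch with made-up names and a simplified timing model, not the paper's formal semantics:

```python
def run(main_steps, interrupts):
    """main_steps: list of (label, duration) for the main program.
    interrupts: arrival time -> handler step list; an arriving handler
    preempts whatever is running, so handlers can nest."""
    trace, clock = [], 0
    stack = [list(main_steps)]        # top of stack = currently running program
    pending = dict(interrupts)
    while stack:
        if clock in pending:          # checked only at step boundaries (simplification)
            stack.append(list(pending.pop(clock)))
        frame = stack[-1]
        if not frame:
            stack.pop()               # handler (or main) done: resume the outer program
            continue
        label, dur = frame.pop(0)
        trace.append((clock, label))
        clock += dur
    return trace

trace = run([("m1", 2), ("m2", 2)], {2: [("h1", 1)]})
# At t=2 the handler preempts the main program; m2 resumes at t=3.
```

Interrupts arriving after the main program finishes, or between step boundaries, are ignored here; the paper's semantics is what pins down exactly such corner cases.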
Efficient image representation for object recognition via pivots selection
Bojun XIE, Yi LIU, Hui ZHANG, Jian YU
Front. Comput. Sci., 2015, 9 (3): 383-391.
https://doi.org/10.1007/s11704-015-4182-7
Patch-level features are essential for achieving good performance in computer vision tasks. Besides well-known pre-defined patch-level descriptors such as the scale-invariant feature transform (SIFT) and the histogram of oriented gradients (HOG), the kernel descriptor (KD) method [1] offers a new way to “grow up” features from a match kernel defined over image patch pairs using kernel principal component analysis (KPCA), and yields impressive results. In this paper, we present the efficient kernel descriptor (EKD) and the efficient hierarchical kernel descriptor (EHKD), which are built upon incomplete Cholesky decomposition. EKD automatically selects a small number of pivot features for generating patch-level features, achieving better computational efficiency. EHKD recursively applies EKD to form image-level features layer by layer. Perhaps due to parsimony, we find, surprisingly, that EKD and EHKD achieve results competitive with other state-of-the-art methods on several public datasets, at an efficiency improved over KD.
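The pivot-selection idea can be sketched as a greedy pivoted (incomplete) Cholesky factorization of the kernel matrix: at each step, pick the point whose kernel value is worst approximated so far. A minimal NumPy sketch (function names are ours, not the authors' code):

```python
import numpy as np

def pivot_select(K, m):
    """Greedy pivoted incomplete Cholesky on kernel matrix K: pick m pivots
    with the largest residual diagonal, returning (pivots, G) with K ~= G @ G.T."""
    n = K.shape[0]
    d = np.diag(K).astype(float).copy()   # residual diagonal = per-point approx. error
    G = np.zeros((n, m))
    pivots = []
    for j in range(m):
        i = int(np.argmax(d))             # point worst-approximated so far
        pivots.append(i)
        G[:, j] = (K[:, i] - G @ G[i]) / np.sqrt(d[i])
        d -= G[:, j] ** 2                 # update residual diagonal
    return pivots, G
```

With m much smaller than n, the factor G plays the role of a compact patch-level feature map; m = n reproduces K exactly.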
A hybrid biometric identification framework for high security applications
Xuzhou LI, Yilong YIN, Yanbin NING, Gongping YANG, Lei PAN
Front. Comput. Sci., 2015, 9 (3): 392-401.
https://doi.org/10.1007/s11704-014-4070-1
Research on biometrics for high security applications has not attracted as much attention as civilian or forensic applications. Limited research and deficient analysis so far have led to a lack of general solutions and leave this as a challenging issue. This work provides a systematic analysis and identification of the problems that must be solved to meet the performance requirements of high security applications: the “double low” problem of achieving a low false acceptance rate (FAR) and a low false rejection rate (FRR) simultaneously. A hybrid ensemble framework is proposed to solve this problem. Setting an adequately high threshold for each matcher guarantees a zero FAR; the hybrid ensemble framework then makes the FRR as low as possible. Three experiments are performed to verify the effectiveness and generalization of the framework. First, two fingerprint verification algorithms are fused. In this test, only 10.55% of fingerprints are falsely rejected at zero FAR, significantly lower than other state-of-the-art methods. Second, in face verification, the framework also yields a large reduction in incorrect classifications. Finally, assessing the framework on a combination of face and gait verification using a heterogeneous database shows that it can achieve 0% false rejection and 0% false acceptance simultaneously.
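The “threshold high, then fuse” idea can be sketched as follows: calibrate each matcher's threshold just above its highest observed impostor score (zero FAR on that data), then accept a claim if any matcher clears its own threshold. A toy sketch with made-up scores, not the paper's fusion rule in detail:

```python
def zero_far_thresholds(impostor_scores):
    """Per-matcher threshold just above the highest impostor score,
    so each matcher alone accepts no impostor from this set."""
    return [max(scores) + 1e-9 for scores in impostor_scores]

def fused_accept(sample, thresholds):
    """Accept if ANY matcher clears its zero-FAR threshold: FAR stays at
    zero on the calibration set, while genuine samples rejected by one
    matcher can be recovered by another (lowering FRR)."""
    return any(s >= t for s, t in zip(sample, thresholds))

ths = zero_far_thresholds([[0.10, 0.40], [0.20, 0.30]])
genuine_ok = fused_accept((0.35, 0.50), ths)   # matcher 2 recovers this genuine user
impostor_ok = fused_accept((0.40, 0.30), ths)  # both matchers stay below threshold
```

On unseen impostors a zero FAR is of course not guaranteed; the calibration only enforces it on the data used to set the thresholds.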
Extracting viewer interests for automated bookmarking in video-on-demand services
Yang ZHAO, Ye TIAN, Yong LIU
Front. Comput. Sci., 2015, 9 (3): 415-430.
https://doi.org/10.1007/s11704-014-3490-2
Video-on-demand (VoD) services have become popular on the Internet in recent years. In VoD, it is challenging to support the VCR functionality, especially jumps, while maintaining a smooth streaming quality. Previous studies propose to solve this problem by predicting the jump target locations and prefetching the contents. However, through our analysis of traces from a real-world VoD service, we find that it is fundamentally difficult to improve a viewer’s VCR experience by simply predicting his future jumps while ignoring the intentions behind these jumps. Instead of the prediction-based approach, in this paper we seek to support the VCR functionality by bookmarking the videos. There are two key techniques in our proposed methodology. First, we infer and differentiate viewers’ intentions in VCR jumps by decomposing the inter-seek times using an expectation-maximization (EM) algorithm, and combine the decomposed inter-seek times with the VCR jumps to compute a numerical interest score for each video segment. Second, based on the interest scores, we propose an automated video bookmarking algorithm. The algorithm employs the time-series change detection techniques CUSUM and MB-GT, and bookmarks videos by detecting abrupt changes in their interest score sequences. We evaluate our proposed techniques using real-world VoD traces from dozens of videos. Experimental results suggest that with our methods, viewers’ interests within a video can be precisely extracted, and bookmarks can be positioned accurately on the video’s highlight events. Our proposed video bookmarking methodology does not require any knowledge of video type, contents, or semantics, and can be applied to various types of videos.
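The bookmarking step can be sketched with a one-sided CUSUM detector over the per-segment interest scores. Parameter values here are illustrative and MB-GT is omitted:

```python
from statistics import mean

def cusum_bookmarks(scores, drift=0.0, threshold=3.0):
    """Flag segments where the cumulative upward deviation from the mean
    interest score exceeds `threshold` (candidate bookmark positions)."""
    mu = mean(scores)
    s, marks = 0.0, []
    for i, x in enumerate(scores):
        s = max(0.0, s + (x - mu) - drift)   # accumulate only upward drift
        if s > threshold:
            marks.append(i)
            s = 0.0                          # restart after each detection
    return marks

scores = [1.0] * 10 + [5.0] * 5 + [1.0] * 10
marks = cusum_bookmarks(scores)
# The first bookmark lands where the interest level jumps (index 10).
```

In practice `drift` and `threshold` would be tuned (or a second detector such as MB-GT combined) to suppress the repeated detections inside a single sustained high-interest run.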
Memory bandwidth optimization of SpMV on GPGPUs
Chenggang Clarence YAN, Hui YU, Weizhi XU, Yingping ZHANG, Bochuan CHEN, Zhu TIAN, Yuxuan WANG, Jian YIN
Front. Comput. Sci., 2015, 9 (3): 431-441.
https://doi.org/10.1007/s11704-014-4127-1
Improving the performance of sparse matrix-vector multiplication (SpMV) is important but difficult because of its irregular memory access. General purpose GPUs (GPGPUs) provide high computing ability and substantial bandwidth that SpMV cannot fully exploit due to this irregularity. In this paper, we propose two novel methods to optimize the memory bandwidth of SpMV on GPGPUs. First, a new storage format is proposed that packs as many non-zeros as possible into a layout suited to exploiting the memory bandwidth of the GPU architecture. Second, we propose a cache blocking method to improve the performance of SpMV on the GPU architecture. The sparse matrix is partitioned into sub-blocks that are stored in CSR format. With the blocking method, the corresponding part of the vector x can be reused in the GPU cache, so the time spent accessing global memory for x is greatly reduced. Experiments are carried out on three GPU platforms: GeForce 9800 GX2, GeForce GTX 480, and Tesla K40. Experimental results show that both methods efficiently improve the utilization of GPU memory bandwidth and the performance of the GPU.
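The CSR layout used by the sub-blocks stores the non-zeros row by row; a reference scalar SpMV over it looks like the following, with the cache-blocking idea noted in the comments. A CPU-side sketch, not the authors' GPU kernels:

```python
def spmv_csr(vals, cols, row_ptr, x):
    """y = A @ x for A in CSR form: vals/cols hold the non-zeros row by row,
    and row_ptr[r]:row_ptr[r+1] delimits row r. The paper's cache blocking
    partitions A by column ranges, so each sub-block touches only a slice
    of x, which can then stay resident in the GPU cache."""
    y = [0.0] * (len(row_ptr) - 1)
    for r in range(len(y)):
        for k in range(row_ptr[r], row_ptr[r + 1]):
            y[r] += vals[k] * x[cols[k]]
    return y

# A = [[1, 0, 2],
#      [0, 3, 0]]
y = spmv_csr([1.0, 2.0, 3.0], [0, 2, 1], [0, 2, 3], [1.0, 1.0, 1.0])
```

The irregularity the abstract mentions is visible in `x[cols[k]]`: the accesses to x follow the column pattern of the matrix rather than a contiguous stride.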
A balanced decomposition approach to real-time visualization of large vector maps in CyberGIS
Mingqiang GUO, Ying HUANG, Zhong XIE
Front. Comput. Sci., 2015, 9 (3): 442-455.
https://doi.org/10.1007/s11704-014-3498-7
With the dramatic development of spatial data infrastructure, CyberGIS has become significant for geospatial data sharing. Due to the large number of concurrent users and the large volume of vector data, CyberGIS faces a great challenge in improving performance. Real-time visualization of vector maps is the most common function in CyberGIS applications, and it is time-consuming, especially when the data volume becomes large. How to improve the efficiency of visualizing large vector maps thus remains a significant research direction for GIScience. In this research, we review three existing optimization strategies and determine that the third (parallel optimization) is appropriate for real-time visualization of large vector maps. A key issue in parallel optimization is how to decompose real-time visualization tasks into balanced subtasks while taking the spatially heterogeneous characteristics of the data into consideration. We put forward rules that the decomposition should conform to, and design a real-time visualization framework for large vector maps. We focus on a balanced decomposition approach that assures efficiency and effectiveness. Considering the spatial heterogeneity of vector data, we use a “horizontal grid, vertical multistage” approach to construct a spatial point distribution information grid. The load balancer analyzes the spatial characteristics of map requests and decomposes the real-time viewshed into multiple balanced sub-viewsheds. All the sub-viewsheds are then distributed to multiple server nodes and executed in parallel, improving the real-time visualization efficiency of large vector maps. We conducted a group of experiments; the analysis results demonstrate that the proposed approach achieves balanced decomposition and is efficient and effective for all geometry types of vector data.
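The balanced decomposition can be sketched in one dimension: given per-column point counts from a grid over the requested viewshed, cut the columns into strips of roughly equal workload. An illustrative simplification of the paper's approach:

```python
def balanced_split(grid_counts, parts):
    """grid_counts: point count per grid column over the viewshed.
    Greedily cut the columns into `parts` vertical strips of roughly
    equal point count (workload); returns [start, end) column ranges."""
    total = sum(grid_counts)
    target = total / parts
    strips, acc, start = [], 0, 0
    for i, c in enumerate(grid_counts):
        acc += c
        if acc >= target and len(strips) < parts - 1:
            strips.append((start, i + 1))   # close this strip at the target load
            start, acc = i + 1, 0
    strips.append((start, len(grid_counts)))
    return strips

strips = balanced_split([5, 5, 10, 10, 5, 5], 2)
# Each strip carries 20 of the 40 points: (0, 3) and (3, 6).
```

Each strip would then be rendered by a separate server node; the paper's two-dimensional, multistage grid refines this greedy one-dimensional cut.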
Irradiance regression for efficient final gathering in global illumination
Xuezhen HUANG, Xin SUN, Zhong REN, Xin TONG, Baining GUO, Kun ZHOU
Front. Comput. Sci., 2015, 9 (3): 456-465.
https://doi.org/10.1007/s11704-014-4211-6
Photon mapping is widely used for global illumination rendering because of its high computational efficiency, but its efficiency is still limited, mainly by the intensive sampling required in final gathering, a process that is critical for removing the low frequency artifacts of density estimation. In this paper, we propose a method to predict the final gathering estimation from direct density estimation, thereby achieving high quality global illumination by photon mapping with high efficiency. We first sample the irradiance of a subset of shading points by both final gathering and direct radiance estimation. Then we use these samples as a training set to predict the final gathered irradiance of the other shading points through regression. Consequently, we achieve about a three-times overall speedup compared with straightforward final gathering in global illumination computation, at the same rendering quality.
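The sample-then-regress pipeline can be sketched with a plain linear model: fit a map from the cheap direct estimates (plus any per-point features) to the expensive final-gathered values on the sampled subset, then predict every shading point. A linear least-squares stand-in for the paper's regressor, with made-up names:

```python
import numpy as np

def regress_irradiance(cheap, features, sample_idx, gathered):
    """cheap: direct density/radiance estimates at all shading points;
    features: one extra per-point feature (illustrative); sample_idx: the
    subset where expensive final gathering was run, yielding `gathered`.
    Fits a linear map on the subset and predicts all points."""
    X = np.column_stack([cheap, features, np.ones(len(cheap))])
    coef, *_ = np.linalg.lstsq(X[sample_idx], gathered, rcond=None)
    return X @ coef
```

The speedup comes from running final gathering only at `sample_idx` while every other shading point pays just the cost of the direct estimate plus a dot product.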
A hierarchical ontology context model for work-based learning
Chuantao YIN, Bingxue ZHANG, Bertrand DAVID, Zhang XIONG
Front. Comput. Sci., 2015, 9 (3): 466-473.
https://doi.org/10.1007/s11704-015-4200-4
Context modelling involves a) characterizing a situation with related information, and b) representing and storing that information in a computer-understandable form. It is the keystone that enables a system to perceive its situation and adapt its functionality appropriately. However, a context model focusing on the characteristics of work-based learning has not been well studied. To address this issue, we first analyze several existing context models to identify the essentials of context modelling, and on that basis propose a hierarchical ontology context model to characterize work-based learning. Subsequently, we present an application of the proposed model in a work-based learning scenario to provide adapted learning support to professionals. Hence, this work has significance in both theory and practice.
A sequential model of bargaining in logic programming
Wu CHEN, Dongmo ZHANG, Maonian WU
Front. Comput. Sci., 2015, 9 (3): 474-484.
https://doi.org/10.1007/s11704-015-3308-x
This paper proposes a sequential model of bargaining that specifies the reasoning processes of an agent behind bargaining procedures. We encode agents’ background knowledge, demands, and bargaining constraints in logic programs and represent bargaining outcomes in answer sets. We assume that in each bargaining situation, each agent has a set of goals to achieve, which are normally unachievable without an agreement among all the agents involved in the bargaining. Through an alternating-offers procedure, an agreement among bargaining agents may be reached by abductive reasoning. We show that the procedure converges to a Nash equilibrium if each agent makes rational offers/counteroffers in each round. In addition, the sequential model has a number of desirable properties, such as mutual commitments, individual rationality, satisfactoriness, and honesty.
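The alternating-offers procedure can be sketched numerically: two agents demand shares of a fixed surplus and concede by a step each round until their demands become compatible. A toy stand-in for the paper's logic-programming encoding (no answer sets or abductive reasoning here):

```python
def alternating_offers(pie, demand_a, demand_b, concession=1):
    """Agents A and B alternate: the proposer concedes `concession` units
    and re-offers; bargaining ends once the demands fit within the pie.
    Returns the final shares and the number of rounds taken."""
    rounds, proposer = 0, "A"
    while demand_a + demand_b > pie:        # demands incompatible: no agreement yet
        if proposer == "A":
            demand_a -= concession
        else:
            demand_b -= concession
        proposer = "B" if proposer == "A" else "A"
        rounds += 1
    return demand_a, demand_b, rounds

outcome = alternating_offers(10, 8, 8)
# Symmetric demands and concessions converge to an even 5/5 split.
```

In the paper, each concession would instead be computed by abductive reasoning over the agent's logic program, with the agreement read off from an answer set.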
14 articles