Frontiers of Computer Science

ISSN 2095-2228

ISSN 2095-2236 (Online)

CN 10-1014/TP

Postal Subscription Code 80-970

2018 Impact Factor: 1.129

Volume 9, Issue 3

RESEARCH ARTICLE
Semantic theories of programs with nested interrupts
Yanhong HUANG, Jifeng HE, Huibiao ZHU, Yongxin ZHAO, Jianqi SHI, Shengchao QIN
Front. Comput. Sci. 2015, 9(3): 331-345.
https://doi.org/10.1007/s11704-015-3251-x

In the design of dependable software for embedded and real-time operating systems, time analysis is a crucial but extremely difficult issue, and the challenge is exacerbated by the randomness and nondeterminism of interrupt handling behaviors. Research into a theory that integrates interrupt behaviors with time analysis is therefore both important and challenging. In this paper, we present a programming language for describing programs with interrupts that comprises two essential parts: a main program and interrupt handling programs. We also explore a timed operational semantics and a denotational semantics to specify the meanings of our language. Furthermore, we provide a strategy for deriving a denotational semantics from the timed operational semantics, and demonstrate the soundness of our operational semantics by showing the consistency between the derived denotational semantics and the original denotational semantics.
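
As a toy illustration of such a timed operational semantics (a hedged sketch, not the paper's calculus), the Python fragment below runs a main program under nested, prioritized interrupts and records a timed trace. The names `Handler` and `run`, and the simplification that interrupt arrivals are only observed at instruction boundaries, are assumptions for illustration.

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class Handler:
    priority: int                 # a higher value preempts a lower one
    body: List[Tuple[str, int]]   # (action, duration) pairs

def run(main: List[Tuple[str, int]], handlers: List[Handler],
        arrivals: Dict[int, int]):
    """arrivals maps a time instant to the index of the handler raised
    then; arrivals are only observed at instruction boundaries here."""
    clock, trace = 0, []
    stack, prios = [list(main)], [-1]   # program stack; top runs first
    while stack:
        h = arrivals.get(clock)
        if h is not None and handlers[h].priority > prios[-1]:
            stack.append(list(handlers[h].body))   # nested preemption
            prios.append(handlers[h].priority)
        while stack and not stack[-1]:
            stack.pop(); prios.pop()    # handler done: resume program below
        if not stack:
            break
        action, dur = stack[-1].pop(0)
        trace.append((clock, action))   # timed observation
        clock += dur
    return trace

# run([("a", 1), ("b", 2)], [Handler(1, [("h", 1)])], {1: 0})
# -> [(0, 'a'), (1, 'h'), (2, 'b')]: the handler nests inside the main program.
```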

A novel strategy for automatic test data generation using soft computing technique
Priyanka CHAWLA, Inderveer CHANA, Ajay RANA
Front. Comput. Sci. 2015, 9(3): 346-363.
https://doi.org/10.1007/s11704-014-3496-9

Software testing is one of the most crucial and analytical aspects of assuring that developed software meets prescribed quality standards. The software development process spends at least 50% of its total cost on testing. Designing optimal and efficacious test data is an important and challenging activity because of the nonlinear structure of software; moreover, the type and scope of the test cases determine the quality of the test data. To address this issue, software testing tools should employ intelligence-based soft computing techniques such as particle swarm optimization (PSO) and genetic algorithms (GA) to generate smart and efficient test data automatically. This paper presents a hybrid PSO- and GA-based heuristic for the automatic generation of test suites. We describe the design and implementation of the proposed strategy and evaluate our model in experiments on ten container classes from the Java standard library, analyzing the algorithm statistically with branch coverage as the test adequacy criterion. Performance is measured as percentage coverage per unit time and as the percentage of faults detected by the generated test data. We compare our work with heuristics based on GA, on PSO, on existing GA/PSO hybrids, and on a memetic algorithm; the results show that our approach generates test cases efficiently.
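
The abstract does not spell out the hybrid's operators, so the sketch below is one plausible PSO+GA combination under stated assumptions: a canonical PSO update, one-point crossover over the better half of the swarm, and `branch_fitness` as a toy branch-distance stand-in for real branch coverage instrumentation.

```python
import random

def branch_fitness(x):
    # Toy stand-in for branch coverage: reward inputs that drive the
    # hard-to-hit predicate `a == b` in the unit under test toward true.
    a, b = x
    return -abs(a - b)

def hybrid_pso_ga(fitness, dim=2, pop=30, iters=100, lo=-100.0, hi=100.0):
    xs = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
    vs = [[0.0] * dim for _ in range(pop)]
    pbest = [x[:] for x in xs]                      # per-particle memory
    gbest = max(xs, key=fitness)[:]
    for _ in range(iters):
        for i, x in enumerate(xs):
            for d in range(dim):                    # canonical PSO update
                vs[i][d] = (0.7 * vs[i][d]
                            + 1.5 * random.random() * (pbest[i][d] - x[d])
                            + 1.5 * random.random() * (gbest[d] - x[d]))
                x[d] = min(hi, max(lo, x[d] + vs[i][d]))
            if fitness(x) > fitness(pbest[i]):
                pbest[i] = x[:]
        gbest = max(pbest, key=fitness)[:]
        # GA phase: breed elites into the worse half to escape stagnation.
        ranked = sorted(range(pop), key=lambda j: fitness(xs[j]), reverse=True)
        elite = [xs[j] for j in ranked[:pop // 2]]
        for i in ranked[pop // 2:]:
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, dim) if dim > 1 else 0
            child = a[:cut] + b[cut:]
            if random.random() < 0.1:               # mutation
                child[random.randrange(dim)] = random.uniform(lo, hi)
            xs[i] = child
    return gbest

# best = hybrid_pso_ga(branch_fitness)  # converges toward inputs with a == b
```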

Verifying specifications with associated attributes in graph transformation systems
Yu ZHOU, Yankai HUANG, Ou WEI, Zhiqiu HUANG
Front. Comput. Sci. 2015, 9(3): 364-374.
https://doi.org/10.1007/s11704-015-4290-4

Graph transformation is an important modeling and analysis technique that is widely applied in software engineering, and attributes can naturally capture the properties of the modeled artifacts. However, ordinary propositional temporal logics cannot specify such attribute correspondences, which hampers their application during verification. In contrast to theoretical investigations of quantified second-order logics, we propose a practical, lightweight approach to verifying this kind of temporal specification with associated attributes. In particular, we apply our approach by extending the general-purpose graph transformation modeling tool GROOVE. Moreover, symmetry reduction techniques are exploited to reduce the number of states. Experiments with performance evaluations complement our discussion and demonstrate the feasibility and efficiency of our approach.
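
The paper's logic and its GROOVE extension are not reproduced here; as a hedged toy in the same spirit, the sketch below checks one attribute-associated reachability property over an explicit state space: the attribute value bound in the initial state must recur in a matching later state. All names and the state encoding are illustrative assumptions.

```python
from collections import deque

def holds_eventually(init, succs, bind, match):
    """Bind v = bind(init); search for a reachable state s with
    match(s, v), i.e., the *same* attribute value recurs later."""
    v = bind(init)
    seen, frontier = {init}, deque([init])
    while frontier:
        s = frontier.popleft()
        if match(s, v):
            return True
        for t in succs(s):          # explicit-state successor relation
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return False

# Toy usage: states are (phase, amount) pairs; the property asks whether
# the amount observed initially is eventually echoed in phase "done".
succ = {("start", 7): [("work", 7)], ("work", 7): [("done", 7)]}
print(holds_eventually(("start", 7),
                       lambda s: succ.get(s, []),
                       bind=lambda s: s[1],
                       match=lambda s, v: s[0] == "done" and s[1] == v))
```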

Hybrid fusion and interpolation algorithm with near-infrared image
Xiaoyan LUO, Jun ZHANG, Qionghai DAI
Front. Comput. Sci. 2015, 9(3): 375-382.
https://doi.org/10.1007/s11704-014-4230-3

Silicon-based digital cameras can record visible and near-infrared (NIR) information, where the full-color visible (RGB) image must be restored by color filter array (CFA) interpolation. In this paper, we propose a unified framework for CFA interpolation and visible/NIR image combination. To obtain a high-quality color image, the traditional color interpolation from raw CFA data is improved at each pixel by constraining its gradient against that of the corresponding monochromatic NIR image. The experiments indicate the effectiveness of this hybrid scheme for acquiring joint color and NIR information in real time, and show that this hybrid process generates a better color image than treating interpolation and fusion separately.
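
A minimal sketch of the idea for the green channel, under assumptions that are ours rather than the paper's: a plain bilinear CFA estimate is corrected so that its local gradients follow those of the registered NIR image, with `alpha` an assumed guidance weight.

```python
import numpy as np
from scipy.signal import convolve2d

def demosaic_green_nir(raw, green_mask, nir, alpha=0.5):
    """raw: Bayer mosaic; green_mask: True where raw sampled green;
    nir: registered monochromatic NIR image; alpha: guidance weight."""
    g = np.where(green_mask, raw, 0.0)
    k = np.array([[0, .25, 0], [.25, 0, .25], [0, .25, 0]])
    est = convolve2d(g, k, mode="same", boundary="symm")
    cnt = convolve2d(green_mask.astype(float), k, mode="same",
                     boundary="symm")
    bilinear = np.where(green_mask, raw, est / np.maximum(cnt, 1e-6))
    # Guidance: nudge interpolated pixels so their gradients follow NIR.
    gy_b, gx_b = np.gradient(bilinear)
    gy_n, gx_n = np.gradient(nir)
    corr = alpha * ((gx_n - gx_b) + (gy_n - gy_b))
    return np.where(green_mask, raw, bilinear + corr)  # keep measured samples
```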

Efficient image representation for object recognition via pivots selection
Bojun XIE, Yi LIU, Hui ZHANG, Jian YU
Front. Comput. Sci. 2015, 9(3): 383-391.
https://doi.org/10.1007/s11704-015-4182-7

Patch-level features are essential for achieving good performance in computer vision tasks. Besides well-known pre-defined patch-level descriptors such as the scale-invariant feature transform (SIFT) and the histogram of oriented gradients (HOG), the kernel descriptor (KD) method [1] offers a new way to “grow up” features from a match kernel defined over image patch pairs using kernel principal component analysis (KPCA), and yields impressive results.

In this paper, we present the efficient kernel descriptor (EKD) and the efficient hierarchical kernel descriptor (EHKD), which are built upon incomplete Cholesky decomposition. EKD automatically selects a small number of pivot features for generating patch-level features, achieving better computational efficiency; EHKD recursively applies EKD to form image-level features layer by layer. Somewhat surprisingly, perhaps owing to their parsimony, we find that EKD and EHKD achieve results competitive with other state-of-the-art methods on several public datasets, at an efficiency improved over KD.
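
EKD's building block, incomplete Cholesky decomposition with greedy diagonal pivoting, can be sketched compactly; the kernel choice, tolerance, and usage below are illustrative assumptions.

```python
import numpy as np

def incomplete_cholesky(K, tol=1e-3, max_pivots=None):
    """Greedy pivoted incomplete Cholesky: K ~= G @ G.T from few pivots."""
    n = K.shape[0]
    max_pivots = max_pivots or n
    d = np.diag(K).astype(float)            # residual diagonal
    G = np.zeros((n, 0))
    pivots = []
    while d.sum() > tol and len(pivots) < max_pivots:
        j = int(np.argmax(d))               # column with largest residual
        pivots.append(j)
        col = (K[:, j] - G @ G[j]) / np.sqrt(d[j])
        G = np.hstack([G, col[:, None]])
        d = np.maximum(d - col ** 2, 0.0)   # shrink the residual diagonal
    return G, pivots

# Toy usage on an RBF kernel matrix: a handful of pivots often suffices.
X = np.random.randn(200, 8)
K = np.exp(-0.5 * ((X[:, None] - X[None]) ** 2).sum(-1))
G, pivots = incomplete_cholesky(K, tol=1e-2)
```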

A hybrid biometric identification framework for high security applications
Xuzhou LI, Yilong YIN, Yanbin NING, Gongping YANG, Lei PAN
Front. Comput. Sci. 2015, 9(3): 392-401.
https://doi.org/10.1007/s11704-014-4070-1

Research on biometrics for high-security applications has not attracted as much attention as civilian or forensic applications. Limited research and deficient analysis so far have led to a lack of general solutions and leave this as a challenging issue. This work provides a systematic analysis and identification of the problems that must be solved to meet the performance requirements of high-security applications: a "double low" problem, in which both error rates must be kept low. A hybrid ensemble framework is proposed to solve this problem: setting an adequately high threshold for each matcher guarantees a zero false acceptance rate (FAR), and the hybrid ensemble framework then makes the false rejection rate (FRR) as low as possible. Three experiments verify the effectiveness and generalization of the framework. First, two fingerprint verification algorithms are fused; only 10.55% of fingerprints are falsely rejected at a zero false acceptance rate, significantly lower than other state-of-the-art methods. Second, in face verification, the framework also yields a large reduction in incorrect classifications. Finally, an assessment of the framework on combined face and gait verification using a heterogeneous database shows that it can achieve 0% false rejection and 0% false acceptance simultaneously.
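
A minimal sketch of the zero-FAR thresholding idea as the abstract states it: each matcher's threshold is set just above its highest impostor score, and the ensemble accepts when any matcher fires, which is what drives the FRR down. The function names and the union (any-matcher) fusion rule are assumptions.

```python
import numpy as np

def fit_zero_far_thresholds(impostor_scores):
    """impostor_scores: (n_matchers, n_impostor_pairs) similarity scores.
    Each threshold sits just above that matcher's worst impostor, so no
    impostor in the data is ever accepted (zero FAR)."""
    eps = 1e-9
    return impostor_scores.max(axis=1) + eps

def ensemble_accept(scores, thresholds):
    """scores: (n_matchers,) for one probe. Accept if ANY matcher is
    confident: each threshold alone already guarantees zero FAR, and the
    union of matchers rejects fewer genuine probes (lower FRR)."""
    return bool((scores > thresholds).any())
```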

Pedestrian detection algorithm based on video sequences and laser point cloud
Hui LI, Yun LIU, Shengwu XIONG, Lin WANG
Front. Comput. Sci. 2015, 9(3): 402-414.
https://doi.org/10.1007/s11704-014-3413-2

Pedestrian detection is a critical problem in computer vision. Although most existing algorithms detect pedestrians well in controlled environments, it is often difficult to achieve accurate detection from video sequences alone, especially in pedestrian-intensive scenes where mutual occlusion leads to incomplete detections. To surmount these difficulties, this paper presents a pedestrian detection algorithm based on video sequences and laser point clouds. First, the laser point cloud is interpreted and classified to separate pedestrian data from vehicle data. Then video image data and laser point cloud data are fused through calibration. The region of interest after fusion is determined from the feature information in the video image and the three-dimensional information of the laser point cloud, removing false pedestrian detections and thus achieving pedestrian detection in intensive scenes. Experimental verification and analysis on video sequences demonstrate that fusing the two data sources improves the performance of pedestrian detection and gives better detection results.
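
As a hedged sketch of the calibration-based fusion step (the classification and detection stages are omitted), the fragment below projects pedestrian-classified laser points through an assumed 3x4 camera matrix `P` and turns each point cluster's footprint into an image region of interest.

```python
import numpy as np

def project_points(P, pts3d):
    """P: 3x4 camera matrix from calibration; pts3d: (n, 3) laser points
    in the camera frame -> (n, 2) pixel coordinates."""
    homo = np.hstack([pts3d, np.ones((len(pts3d), 1))])
    uvw = homo @ P.T
    return uvw[:, :2] / uvw[:, 2:3]      # perspective division

def cluster_roi(P, cluster, pad=10):
    """Bounding box (u0, v0, u1, v1) of one projected point cluster,
    padded so the image detector sees some context around it."""
    uv = project_points(P, cluster)
    u0, v0 = uv.min(axis=0) - pad
    u1, v1 = uv.max(axis=0) + pad
    return int(u0), int(v0), int(u1), int(v1)
```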

Extracting viewer interests for automated bookmarking in video-on-demand services
Yang ZHAO, Ye TIAN, Yong LIU
Front. Comput. Sci. 2015, 9(3): 415-430.
https://doi.org/10.1007/s11704-014-3490-2

Video-on-demand (VoD) services have become popular on the Internet in recent years. In VoD, it is challenging to support the VCR functionality, especially jumps, while maintaining a smooth streaming quality. Previous studies propose to solve this problem by predicting the jump target locations and prefetching the contents. However, through our analysis of traces from a real-world VoD service, we find that it is fundamentally difficult to improve a viewer's VCR experience by simply predicting future jumps while ignoring the intentions behind them.

Instead of the prediction-based approach, in this paper we seek to support the VCR functionality by bookmarking the videos. There are two key techniques in our proposed methodology. First, we infer and differentiate viewers' intentions in VCR jumps by decomposing the inter-seek times using an expectation-maximization (EM) algorithm, and combine the decomposed inter-seek times with the VCR jumps to compute a numerical interest score for each video segment. Second, based on the interest scores, we propose an automated video bookmarking algorithm. The algorithm employs the time-series change detection techniques CUSUM and MB-GT, and bookmarks videos by detecting abrupt changes in their interest score sequences. We evaluate our proposed techniques using real-world VoD traces from dozens of videos. Experimental results suggest that our methods precisely extract viewers' interests within a video and position bookmarks accurately on the video's highlight events. Our video bookmarking methodology requires no knowledge of video type, contents, or semantics, and can be applied to various types of videos.
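
The EM decomposition and MB-GT are omitted here; as a hedged sketch of the second step, the fragment below places bookmarks by CUSUM change detection over a per-segment interest-score sequence. The drift and threshold parameters, and the running-mean baseline, are assumptions.

```python
def cusum_bookmarks(scores, drift=0.5, threshold=4.0):
    """scores: one interest score per video segment, in playback order.
    Returns the segment indices where the score changes abruptly."""
    mean = scores[0]
    pos = neg = 0.0
    marks = []
    for i, s in enumerate(scores):
        mean += (s - mean) / (i + 1)          # running mean as baseline
        pos = max(0.0, pos + s - mean - drift)  # upward-shift statistic
        neg = max(0.0, neg + mean - s - drift)  # downward-shift statistic
        if pos > threshold or neg > threshold:
            marks.append(i)                   # abrupt interest change here
            pos = neg = 0.0                   # restart detection
    return marks

# cusum_bookmarks([1, 1, 1, 9, 9, 9, 1, 1]) flags the jumps around the peak.
```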

Memory bandwidth optimization of SpMV on GPGPUs
Chenggang Clarence YAN, Hui YU, Weizhi XU, Yingping ZHANG, Bochuan CHEN, Zhu TIAN, Yuxuan WANG, Jian YIN
Front. Comput. Sci. 2015, 9(3): 431-441.
https://doi.org/10.1007/s11704-014-4127-1

Improving the performance of sparse matrix-vector multiplication (SpMV) is important but difficult because of SpMV's irregular memory access. General-purpose GPUs (GPGPUs) provide high computing ability and substantial bandwidth that SpMV cannot fully exploit due to this irregularity. In this paper, we propose two novel methods to optimize the memory bandwidth of SpMV on GPGPUs. First, a new storage format is proposed that packs as many non-zeros as possible into a layout suited to exploiting the memory bandwidth of the GPU architecture. Second, we propose a cache blocking method to improve the performance of SpMV on GPU architectures: the sparse matrix is partitioned into sub-blocks stored in CSR format, so the corresponding part of the vector x can be reused in the GPU cache, greatly reducing the time spent accessing global memory for x. Experiments are carried out on three GPU platforms: GeForce 9800 GX2, GeForce GTX 480, and Tesla K40. Experimental results show that both methods efficiently improve the utilization of GPU memory bandwidth and the performance of the GPU.
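
A hedged sketch of the cache-blocking idea (in plain Python rather than CUDA, and not the paper's storage format): the matrix is split into column blocks so that each pass reuses one slice of x, the slice that would sit in cache or shared memory on the GPU. Block-local column indices are an assumption of this sketch.

```python
def spmv_blocked(n_rows, blocks, x, block_size):
    """blocks[b] is a CSR triple (indptr, indices, data) holding only the
    columns [b*block_size, (b+1)*block_size) of the matrix; `indices`
    are local to the block. Computes y = A @ x one column block at a time."""
    y = [0.0] * n_rows
    for b, (indptr, indices, data) in enumerate(blocks):
        # The slice of x reused by every row in this pass; on a GPU it
        # would stay resident in cache/shared memory instead of DRAM.
        x_slice = x[b * block_size:(b + 1) * block_size]
        for row in range(n_rows):
            acc = 0.0
            for k in range(indptr[row], indptr[row + 1]):
                acc += data[k] * x_slice[indices[k]]
            y[row] += acc                 # accumulate partial row results
    return y
```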

A balanced decomposition approach to real-time visualization of large vector maps in CyberGIS
Mingqiang GUO, Ying HUANG, Zhong XIE
Front. Comput. Sci. 2015, 9(3): 442-455.
https://doi.org/10.1007/s11704-014-3498-7

With the dramatic development of spatial data infrastructures, CyberGIS has become significant for geospatial data sharing. Because of the large number of concurrent users and the large volume of vector data, CyberGIS faces a great challenge in improving performance. Real-time visualization of vector maps is the most common function in CyberGIS applications, and it is time-consuming when the data volume becomes large, so improving the efficiency of visualizing large vector maps remains a significant research direction for GIScience. In this research, we review the three existing optimization strategies and determine that the third category, parallel optimization, is appropriate for the real-time visualization of large vector maps. A key issue of parallel optimization is how to decompose the real-time visualization tasks into balanced subtasks while taking spatial heterogeneity into consideration. We put forward rules that the decomposition should conform to, and design a real-time visualization framework for large vector maps, focusing on a balanced decomposition approach that assures efficiency and effectiveness. Considering the spatial heterogeneity of vector data, we use a "horizontal grid, vertical multistage" approach to construct a spatial point distribution information grid. The load balancer analyzes the spatial characteristics of the map requests and decomposes the real-time viewshed into multiple balanced sub-viewsheds, which are then distributed to multiple server nodes and executed in parallel to improve the real-time visualization efficiency of large vector maps. A group of experiments demonstrates that the proposed approach achieves balanced decomposition and is efficient and effective for all geometry types of vector data.
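
As a hedged one-dimensional sketch of balanced decomposition (the paper's "horizontal grid, vertical multistage" structure is richer than this), the fragment below cuts a viewshed into vertical strips whose point totals are as even as possible, one strip per server node.

```python
def balanced_strips(grid_counts, n_workers):
    """grid_counts: per-column feature-point counts across the requested
    viewshed, from the point-distribution grid. Returns one
    (first_col, last_col_exclusive) strip per worker."""
    total = sum(grid_counts)
    target = total / n_workers          # fair share of the rendering load
    strips, start, acc = [], 0, 0
    for col, c in enumerate(grid_counts):
        acc += c
        # Close the strip once it carries roughly its fair share.
        if acc >= target and len(strips) < n_workers - 1:
            strips.append((start, col + 1))
            start, acc = col + 1, 0
    strips.append((start, len(grid_counts)))  # last strip takes the rest
    return strips

# balanced_strips([9, 1, 1, 1, 8, 8], 3) -> [(0, 1), (1, 5), (5, 6)]
```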

Irradiance regression for efficient final gathering in global illumination
Xuezhen HUANG, Xin SUN, Zhong REN, Xin TONG, Baining GUO, Kun ZHOU
Front. Comput. Sci. 2015, 9(3): 456-465.
https://doi.org/10.1007/s11704-014-4211-6

Photon mapping is widely used for global illumination rendering because of its high computational efficiency, but that efficiency is still limited, mainly by the intensive sampling required in final gathering, a process that is critical for removing the low-frequency artifacts of density estimation. In this paper, we propose a method to predict the final gathering estimate from direct density estimation, thereby achieving high-quality global illumination by photon mapping with high efficiency. We first sample the irradiance of a subset of shading points by both final gathering and direct radiance estimation. We then use the samples as a training set to predict the final gathered irradiance of the other shading points through regression. Consequently, we achieve about a three-fold overall speedup over straightforward final gathering in global illumination computation at the same rendering quality.
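
A minimal sketch of the regression step under our own assumptions about features and model: a subset of shading points receives both the cheap direct estimate and the expensive final gather, a generic regressor learns the mapping, and the remaining points are predicted. Any smooth regressor could stand in; the paper's exact regression method is not reproduced here.

```python
from sklearn.ensemble import RandomForestRegressor

def predict_final_gather(feats_train, fg_train, feats_rest):
    """feats_*: one feature row per shading point (e.g., direct radiance
    estimate, surface normal, position); fg_train: final-gathered
    irradiance measured on the training subset only."""
    model = RandomForestRegressor(n_estimators=100)  # generic regressor
    model.fit(feats_train, fg_train)
    # Predicted irradiance stands in for final gathering at every other
    # shading point, which is where the claimed speedup comes from.
    return model.predict(feats_rest)
```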

A hierarchical ontology context model for work-based learning
Chuantao YIN, Bingxue ZHANG, Bertrand DAVID, Zhang XIONG
Front. Comput. Sci. 2015, 9(3): 466-473.
https://doi.org/10.1007/s11704-015-4200-4

Context modelling involves (a) characterizing a situation with related information, and (b) representing and storing that information in a computer-understandable form. It is the keystone that enables a system to perceive its situation and adapt its functionality accordingly. However, a context model focusing on the characteristics of work-based learning has not been well studied. To address this issue, we first analyze several existing context models to identify the essentials of context modelling, and on that basis propose a hierarchical ontology context model to characterize work-based learning. We then present an application of the proposed model in a work-based learning scenario that provides adapted learning support to professionals. This work thus has significance in both theory and practice.
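
As a hedged toy of what a two-layer model can look like (class and field names are illustrative assumptions, not the paper's ontology), the sketch below specializes generic upper-layer context dimensions into a work-based-learning layer.

```python
from dataclasses import dataclass, field

@dataclass
class ContextEntity:                  # upper layer: generic dimensions
    who: str                          # the learner
    where: str                        # workplace location
    when: str                         # time of the activity

@dataclass
class LearningContext(ContextEntity): # lower layer: work-based learning
    task: str = ""                    # current professional task
    competence_level: str = "novice"  # drives the adaptation
    resources: list = field(default_factory=list)  # suggested materials

# A system would fill this from sensors/logs, then adapt its support.
ctx = LearningContext(who="engineer-42", where="assembly line 3",
                      when="2015-03-02T10:00", task="calibrate sensor")
```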

A sequential model of bargaining in logic programming
Wu CHEN, Dongmo ZHANG, Maonian WU
Front. Comput. Sci. 2015, 9(3): 474-484.
https://doi.org/10.1007/s11704-015-3308-x

This paper proposes a sequential model of bargaining that specifies the reasoning processes of agents behind bargaining procedures. We encode agents' background knowledge, demands, and bargaining constraints in logic programs and represent bargaining outcomes in answer sets. We assume that in each bargaining situation, each agent has a set of goals to achieve, which are normally unachievable without an agreement among all agents involved in the bargaining. Through an alternating-offers procedure, an agreement among bargaining agents may be reached by abductive reasoning. We show that the procedure converges to a Nash equilibrium if each agent makes rational offers/counteroffers in each round. In addition, the sequential model has a number of desirable properties, such as mutual commitment, individual rationality, satisfactoriness, and honesty.
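
The answer-set encoding is not reproduced here; as a hedged toy of the alternating-offers procedure, the sketch below has two agents take turns proposing outcomes from their ranked demand lists, conceding over time, and accepting as soon as an offer satisfies their own constraints.

```python
def bargain(demands_a, demands_b, acceptable_a, acceptable_b, rounds=10):
    """demands_*: outcomes ranked best-first; acceptable_*: predicates
    standing in for the goals/constraints an answer set would encode."""
    for r in range(rounds):
        proposer, accepts = ((demands_a, acceptable_b) if r % 2 == 0
                             else (demands_b, acceptable_a))
        # Concede gradually: later rounds offer lower-ranked outcomes.
        offer = proposer[min(r // 2, len(proposer) - 1)]
        if accepts(offer):
            return offer               # agreement reached
    return None                        # disagreement after all rounds

# bargain(["100/0", "60/40"], ["0/100", "40/60"],
#         lambda o: o == "60/40", lambda o: o == "60/40") -> "60/40"
```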

Research on self-adaptive decision-making mechanism for competition strategies in robot soccer
Haobin SHI, Lincheng XU, Lin ZHANG, Wei PAN, Genjiu XU
Front. Comput. Sci. 2015, 9(3): 485-494.
https://doi.org/10.1007/s11704-015-4210-7

On the robot soccer competition platform, current confrontation decision-making systems suffer from difficulties in optimization and adaptability. We therefore propose a new self-adaptive decision-making (SADM) strategy. SADM compensates for the restrictions of robot physical movement control by updating the task assignment and role assignment modules using situation assessment techniques. It designs a self-adaptive role assignment model that helps the soccer robot adapt to competition situations in real time, much as humans do, and it also builds an accurate motion model for the robot to improve the competitive ability of the individual soccer robot. Experimental results show that SADM adapts quickly and positively to new competition situations and performs excellently in actual competition.
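
As a hedged sketch of self-adaptive role assignment (the scoring rules and field coordinates below are assumptions, not SADM's model), each (robot, role) pair is scored from the assessed situation and roles are assigned greedily, so they shift as the situation changes.

```python
import math

ROLES = ["attacker", "defender", "goalkeeper"]

def role_score(robot, role, ball):
    """Toy situation assessment; positions are (x, y) in field metres."""
    d = math.dist(robot["pos"], ball)
    if role == "attacker":
        return -d                              # closest to ball attacks
    if role == "defender":
        return -abs(robot["pos"][0] - (-1.0))  # hold the assumed half line
    return -abs(robot["pos"][0] - (-2.0))      # keeper near assumed goal

def assign_roles(robots, ball):
    """Greedy one-role-per-robot assignment, best-scoring pairs first."""
    pairs = sorted(((role_score(r, role, ball), i, role)
                    for i, r in enumerate(robots) for role in ROLES),
                   reverse=True)
    taken_robots, taken_roles, out = set(), set(), {}
    for _, i, role in pairs:
        if i not in taken_robots and role not in taken_roles:
            out[i] = role
            taken_robots.add(i)
            taken_roles.add(role)
    return out

# assign_roles([{"pos": (0.5, 0)}, {"pos": (-1.1, 0)}, {"pos": (-2.0, 0)}],
#              ball=(1.0, 0.0)) -> {0: 'attacker', 1: 'defender', 2: 'goalkeeper'}
```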

14 articles