Frontiers of Computer Science

ISSN 2095-2228

ISSN 2095-2236(Online)

CN 10-1014/TP

Postal Subscription Code 80-970

2018 Impact Factor: 1.129


Volume 4, Issue 2

RESEARCH ARTICLE
Documenting and verifying systems assembled from components
Zhiying LIU, David Lorge PARNAS, Baltasar Trancon y WIDEMANN
Front Comput Sci Chin. 2010, 4 (2): 151-161.  
https://doi.org/10.1007/s11704-010-0026-2

This paper presents an approach to documenting the design of a network of components and to verifying that its structure is complete and consistent (i.e., that the components, functioning together, will satisfy the requirements of the complete product) before the components are implemented. Our approach differs from others in that both hardware and software components are viewed as hardware-like devices, in which an output value can change instantaneously when input values change and all components operate synchronously rather than in sequence. We define what we mean by completeness and consistency and illustrate how the documents can be used to verify a design before it is implemented.
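
To make the synchronous, hardware-like view concrete, here is a minimal Python sketch (our illustration, not the authors' notation): components are pure functions over named wires, and the network settles by repeated synchronous steps until a fixpoint is reached.

```python
# A minimal sketch (an illustration, not the authors' notation) of
# components as hardware-like devices: each output is a pure function
# of the current wire values, and all components are re-evaluated
# synchronously until the network settles.

def adder(a, b):           # combinational component: output follows inputs
    return a + b

def comparator(x, limit):  # another combinational component
    return x > limit

def step(wires):
    """One synchronous step: every output is recomputed from the same
    snapshot of wire values, as if all devices switched at once."""
    return {
        "sum":  adder(wires["a"], wires["b"]),
        "over": comparator(wires["sum"], wires["limit"]),
    }

def settle(wires, max_steps=10):
    """Iterate synchronous steps until no wire changes, modeling
    instantaneous propagation through chains of components."""
    for _ in range(max_steps):
        new = {**wires, **step(wires)}
        if new == wires:
            return wires
        wires = new
    raise RuntimeError("network did not settle (combinational loop?)")

print(settle({"a": 2, "b": 3, "sum": 0, "limit": 4, "over": False}))
# -> {'a': 2, 'b': 3, 'sum': 5, 'limit': 4, 'over': True}
```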

A concern-based approach to generating formal requirements specifications
Ying JIN, Jing ZHANG, Weiping HAO, Pengfei MA, Yan ZHANG, Haiyan ZHAO, Hong MEI
Front Comput Sci Chin. 2010, 4 (2): 162-172.  
https://doi.org/10.1007/s11704-010-0151-y

Document-driven requirements analysis, as proposed by Prof. David Parnas, has had some success in practice; it focuses on creating concise and complete formal requirements documents to serve as references for formal verification, software design, implementation, testing, inspection, and so on. However, at present a large number of requirements documents are still written in natural language, so generating formal requirements specifications from informal textual requirements descriptions has become a major challenge. In this paper, a concern-based approach to generating formal requirements specifications from textual requirements documents is proposed; it applies separation of concerns during requirements analysis and uses concerns and their relationships to bridge the gap between textual requirements statements and formal requirements documentation. A tool suite has been developed to support the approach, and a case study has been performed to illustrate the process. The results indicate that concerns and their relationships effectively guide the process of formal requirements documentation.

On the computation of quotients and factors of regular languages
Mircea MARIN, Temur KUTSIA
Front Comput Sci Chin. 2010, 4 (2): 173-184.  
https://doi.org/10.1007/s11704-010-0154-8

Quotients and factors are important notions in the design of various computational procedures for regular languages and in the analysis of their logical properties. We propose a new representation of regular languages, by linear systems of language equations, which is suitable for the following computations: language reversal, left quotients and factors, right quotients and factors, and factor matrices. We present algorithms for the computation of all these notions and indicate an application of the factor matrix to the computation of solutions of a particular language reconstruction problem. The advantage of these algorithms is that they all operate only on linear systems of language equations, whereas designing the same algorithms for other representations often requires translation between representations.
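
For readers new to the terminology, the standard definitions of the left and right quotients of a language L by a word u, and the shape of a linear system of language equations, are given below (background only; the paper's algorithms on such systems are not reproduced here).

```latex
\[
  u^{-1}L \;=\; \{\, v \mid uv \in L \,\}, \qquad
  L\,u^{-1} \;=\; \{\, v \mid vu \in L \,\}.
\]
% A regular language arises as a component of the least solution of a
% linear system of language equations; for example, L = (ab)^* is the
% X_1-component of the least solution of
\[
  X_1 = \{a\}\,X_2 \cup \{\varepsilon\}, \qquad X_2 = \{b\}\,X_1 .
\]
```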

EDITORIAL
Financial information processing and development of emerging financial markets
Shuo BAI, Shouyang WANG, Lean YU, Aoying ZHOU
Front Comput Sci Chin. 2010, 4 (2): 185-186.  
https://doi.org/10.1007/s11704-010-0502-8

RESEARCH ARTICLE
Modeling default risk via a hidden Markov model of multiple sequences
Wai-Ki CHING, Ho-Yin LEUNG, Zhenyu WU, Hao JIANG
Front Comput Sci Chin. 2010, 4 (2): 187-195.  
https://doi.org/10.1007/s11704-010-0501-9

Default risk in commercial lending is one of the major concerns of creditors. In this article, we introduce a new hidden Markov model (HMM) with multiple observable sequences (MHMM), assuming that all the observable sequences are driven by a common hidden sequence, and use it to analyze default data in a network of sectors. An efficient estimation method is then adopted to estimate the model parameters. To further illustrate the advantages of the MHMM, we compare the hidden risk state process obtained by the MHMM with that from traditional HMMs using credit default data. We then consider two applications of our MHMM: the calculation of two important risk measures, value-at-risk (VaR) and expected shortfall (ES), and the prediction of the global risk state. We first compare the performance of the MHMM and the HMM in calculating VaR and ES for a portfolio of default-prone bonds. A logistic regression model is then considered for the prediction of global economic risk using our MHMM with default data. Numerical results indicate that our model is effective for both applications.
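
Both risk measures named in the abstract have standard textbook definitions, reproduced here for orientation (the paper evaluates them under the MHMM-driven default distribution): for a loss variable L and confidence level α in (0, 1),

```latex
\[
  \mathrm{VaR}_\alpha(L) \;=\; \inf\{\, x \in \mathbb{R} \mid P(L > x) \le 1 - \alpha \,\},
  \qquad
  \mathrm{ES}_\alpha(L) \;=\; \frac{1}{1-\alpha}\int_{\alpha}^{1} \mathrm{VaR}_u(L)\,\mathrm{d}u .
\]
```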

Developing an SVM-based ensemble learning system for customer risk identification collaborating with customer relationship management
Lean YU, Shouyang WANG, Kin Keung LAI
Front Comput Sci Chin. 2010, 4 (2): 196-203.  
https://doi.org/10.1007/s11704-010-0508-2

In this study, we propose a support vector machine (SVM)-based ensemble learning system for customer relationship management (CRM) to help enterprise managers manage customer risks effectively from a risk-aversion perspective. This system differs from classical CRM, which focuses on retaining and targeting profitable customers; the main purpose of the proposed system is to identify high-risk customers in CRM so as to avoid possible losses. To build an effective SVM-based ensemble learning system, the effects of ensemble member diversity, ensemble member selection and different ensemble strategies on the performance of the system are each investigated in a practical CRM case. Through experimental analysis, we find that the Bayesian-based SVM ensemble learning system with diverse components and the "choose from space" selection strategy shows the best performance over the various testing samples.
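
As a rough illustration of such an ensemble (a sketch under stated assumptions: scikit-learn's SVC, bootstrap resampling for diversity, and plain majority voting standing in for the paper's Bayesian fusion strategy):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.utils import resample

def train_ensemble(X, y, kernels=("linear", "rbf", "poly"), seed=0):
    """Train one SVM per kernel on a bootstrap sample, so members
    differ both in kernel and in training data (diversity)."""
    rng = np.random.RandomState(seed)
    members = []
    for kernel in kernels:
        Xb, yb = resample(X, y, random_state=int(rng.randint(1 << 30)))
        members.append(SVC(kernel=kernel).fit(Xb, yb))
    return members

def predict_majority(members, X):
    """Majority vote across members; labels are assumed to be 0/1,
    with 1 meaning 'high-risk customer'."""
    votes = np.array([m.predict(X) for m in members])  # (n_members, n_samples)
    return (votes.mean(axis=0) >= 0.5).astype(int)

# Toy usage on synthetic data (a stand-in for real CRM features):
X = np.random.RandomState(1).normal(size=(200, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
members = train_ensemble(X, y)
print(predict_majority(members, X[:5]))
```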

A class of life insurance reserve model and risk analysis in a stochastic interest rate environment
Niannian JIA, Changqing JIA, Wei QIU
Front Comput Sci Chin. 2010, 4 (2): 204-211.  
https://doi.org/10.1007/s11704-010-0512-6

Actuarial theory in a stochastic interest rate environment is an active research area in the life insurance business, and life insurance reserves are one of the key topics in actuarial theory. In this study, an interest force accumulation function model with a Gauss process and a Poisson process is proposed as the basis for the reserve model. With the proposed model, the net premium reserve model, which is based on the semi-continuous variable payment life insurance policy, is approximated. Based on this reserve model, the future loss variance model is proposed, and the risk caused by drawing on the reserve is analyzed and evaluated. Subsequently, assuming a uniform distribution of deaths (UDD), the reserve and future loss variance models are also provided. Finally, a numerical example is presented for illustration and verification purposes. Using the numerical calculation, the relationships between the reserve, the future loss variance and the model parameters are analyzed. The conclusions fit real life insurance practice well.
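
One common parametrization of such an accumulation function (an assumption for illustration; the paper's exact specification may differ) adds a Wiener term and a Poisson term to a deterministic force of interest:

```latex
% Assumed illustrative form: W(t) a standard Wiener process, N(t) a
% Poisson process with intensity \lambda, assumed independent.
\[
  Y(t) \;=\; \delta t + \sigma W(t) + \beta N(t),
\]
% so that one unit accumulates, in expectation, to
\[
  E\!\left[e^{Y(t)}\right]
  \;=\; \exp\!\Big(\delta t + \tfrac{1}{2}\sigma^{2} t + \lambda t\,(e^{\beta}-1)\Big).
\]
```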

N-person credibilistic strategic game
Rui LIANG, Yueshan YU, Jinwu GAO, Zhi-Qiang LIU
Front Comput Sci Chin. 2010, 4 (2): 212-219.  
https://doi.org/10.1007/s11704-010-0511-7

This paper enlarges the scope of fuzzy-payoff games from the previous two-person form to the n-person form. Based on credibility theory, three credibilistic approaches are introduced to model the behaviors of players in different decision situations. Accordingly, three new definitions of Nash equilibrium are proposed for n-person credibilistic strategic games. Moreover, existence theorems are proved to support further research into credibilistic equilibrium strategies. Finally, two numerical examples are given to illustrate the significance of credibilistic equilibria in practical strategic games.
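
For background (standard definitions from credibility theory; the paper builds its three behavioral criteria on functionals of this kind): the expected value of a fuzzy payoff ξ under the credibility measure Cr, and the corresponding expected-value equilibrium condition, can be written as follows.

```latex
\[
  E[\xi] \;=\; \int_0^{+\infty} \mathrm{Cr}\{\xi \ge r\}\,\mathrm{d}r
          \;-\; \int_{-\infty}^{0} \mathrm{Cr}\{\xi \le r\}\,\mathrm{d}r ,
\]
% and a strategy profile x^* is an (expected-value) Nash equilibrium if,
% for every player i and every feasible strategy x_i,
\[
  E\big[f_i(x_i, x_{-i}^{*})\big] \;\le\; E\big[f_i(x^{*})\big].
\]
```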

RESEARCH ARTICLE
Corporate financial distress diagnosis model and application in credit rating for listing firms in China
Ling ZHANG, Edward I. ALTMAN, Jerome YEN
Front Comput Sci Chin. 2010, 4 (2): 220-236.  
https://doi.org/10.1007/s11704-010-0505-5

With the enforcement of the removal system for distressed firms and the new Bankruptcy Law in China's securities market in June 2007, the development of a bankruptcy process for firms in China is expected to have a huge impact. Therefore, identifying potential corporate distress and offering early warnings to investors, analysts and regulators has become important. There are distinct differences in accounting procedures and in the quality of financial documents between firms in China and those in the Western world, so it may not be practical to directly apply models or methodologies developed elsewhere to identify such potential distress situations. Moreover, localized models are commonly superior to ones imported from other environments.

Based on the Z-score, we have developed a model called the ZChina score to support the identification of potentially distressed firms in China. Our four-variable model is similar to the four-variable Z″-score version, the Emerging Market Scoring model, developed in 1995. We found that our model is robust, with high accuracy: it has a forecasting range of up to three years, with 80 percent accuracy for firms categorized as special treatment (ST), i.e., problematic firms. Applications of our model to determine a Chinese firm's Credit Rating Equivalent are also demonstrated.
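
For reference, the cited 1995 four-variable Z″-score takes the form below (coefficients as published in Altman's emerging-market scoring work, to the best of our knowledge; the ZChina score re-estimates such coefficients on Chinese data):

```latex
\[
  Z'' \;=\; 6.56\,X_1 + 3.26\,X_2 + 6.72\,X_3 + 1.05\,X_4 ,
\]
% where X_1 = working capital / total assets,
%       X_2 = retained earnings / total assets,
%       X_3 = EBIT / total assets,
%       X_4 = book value of equity / total liabilities;
% in the emerging-market application a constant of 3.25 is added to
% standardize the scores.
```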

RESEARCH ARTICLE
Evaluation of mutual funds using multi-dimensional information
Xiujuan ZHAO, Jianmin SHI
Front Comput Sci Chin. 2010, 4 (2): 237-253.  
https://doi.org/10.1007/s11704-010-0503-7

To make better use of mutual fund information for decision-making, we propose a coned-context data envelopment analysis (DEA) model with expected shortfall (ES), modeled under an asymmetric Laplace distribution, as the risk measure for evaluating the performance of mutual funds. Unlike traditional models, this model not only measures the attractiveness of mutual funds relative to the performance of other funds, but also takes the decision makers' preferences and expert knowledge/judgment into full consideration. The model avoids the unsatisfying and impractical outcomes that sometimes occur with traditional measures, and it also provides more management information for decision-making. Determining input and output variables is very important in DEA evaluation. Using statistical tests and theoretical analysis, we demonstrate that ES under an asymmetric Laplace distribution is reliable, and we therefore adopt it as the major risk measure for mutual funds. At the same time, we consider a fund's performance over different time horizons (e.g., one, three and five years) in order to determine the persistence of fund performance. Using the coned-context DEA model with the ES value under an asymmetric Laplace distribution, we also present the results of an empirical study of mutual funds in China, which provides significant insights into the management of mutual funds. The analysis suggests that the coned-context measure will help investors to select the best funds and fund managers and to identify the funds with the most potential.
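
For orientation, the input-oriented CCR envelopment model, the textbook starting point for DEA, is shown below; roughly speaking, the coned-context variant further constrains this program with a preference cone supplied by the decision maker (this summary is our gloss, not the paper's formulation).

```latex
% Evaluating decision-making unit o among j = 1, ..., n units with
% inputs x_{ij} (i = 1, ..., m) and outputs y_{rj} (r = 1, ..., s):
\[
\begin{aligned}
  \min_{\theta,\,\lambda}\quad & \theta\\
  \text{s.t.}\quad & \sum_{j=1}^{n} \lambda_j x_{ij} \le \theta\, x_{io}, \qquad i = 1,\dots,m,\\
  & \sum_{j=1}^{n} \lambda_j y_{rj} \ge y_{ro}, \qquad r = 1,\dots,s,\\
  & \lambda_j \ge 0, \qquad j = 1,\dots,n.
\end{aligned}
\]
```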

Neural network methods for forecasting turning points in economic time series: an asymmetric verification to business cycles
Dabin ZHANG, Lean YU, Shouyang WANG, Haibin XIE
Front Comput Sci Chin. 2010, 4 (2): 254-262.  
https://doi.org/10.1007/s11704-010-0506-4

This paper examines the relevance of various financial and economic indicators in forecasting business cycle turning points using neural network (NN) models. A three-layer feed-forward neural network is used to forecast turning points in the business cycle of China. The NN model uses 13 indicators of economic activity as inputs and produces the probability of a recession as its output. The indicators are ranked in terms of their effectiveness in predicting recessions in China. Out-of-sample results show that some financial and economic indicators, such as steel output, M2, pig iron yield, and the freight volume of the entire society, are useful for predicting recession in China using neural networks. The asymmetry of the business cycle can also be verified using the NN method.
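
A minimal numpy sketch of the kind of network described here (13 indicator inputs, one hidden layer, a single sigmoid output read as a recession probability; the hidden layer size and all training details are our assumptions):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
n_inputs, n_hidden = 13, 8                 # 8 hidden units is an assumption
W1 = rng.normal(scale=0.1, size=(n_inputs, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.1, size=(n_hidden, 1))
b2 = np.zeros(1)

def recession_probability(x):
    """Forward pass: x is a vector of 13 indicator readings."""
    h = np.tanh(x @ W1 + b1)               # hidden layer
    return float(sigmoid(h @ W2 + b2)[0])  # scalar probability of recession

x = rng.normal(size=n_inputs)              # stand-in for one period's indicators
print(recession_probability(x))
```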

Partially funded public pension, human capital and endogenous growth
Zaigui YANG
Front Comput Sci Chin. 2010, 4 (2): 271-279.  
https://doi.org/10.1007/s11704-010-0510-8

Employing an endogenous growth model with human capital, this paper investigates China's partially funded public pension system. We examine the effects of the firm's contribution rate and the individual's contribution rate on the per capita income growth rate, population growth rate, saving rate, education expense ratio and net wealth transfer ratio. Raising the firm's contribution rate increases the per capita income growth rate, saving rate, net wealth transfer ratio and education expense ratio, whereas it decreases the population growth rate. Raising the individual's contribution rate decreases the per capita income growth rate, saving rate and education expense ratio and increases the population growth rate, but has no effect on the net wealth transfer ratio. Weighing China's essential policies and current economic goals against the large difference between the effects of the two contribution rates on the endogenous variables, raising the individual's contribution rate by a large margin and the firm's contribution rate by a small margin has more advantages than disadvantages. The real contribution rates can be raised as long as the government verifies the number of employees and the payroll in practice.

REVIEW ARTICLE
Knowledge discovery through directed probabilistic topic models: a survey
Ali DAUD, Juanzi LI, Lizhu ZHOU, Faqir MUHAMMAD
Front Comput Sci Chin. 2010, 4 (2): 280-301.  
https://doi.org/10.1007/s11704-009-0062-y

Graphical models have become the basic framework for topic-based probabilistic modeling. In particular, models with latent variables have proved effective in capturing hidden structures in data. In this paper, we survey an important subclass, directed probabilistic topic models (DPTMs), with soft clustering abilities, and their applications to knowledge discovery in text corpora. From an unsupervised learning perspective, "topics are semantically related probabilistic clusters of words in text corpora; and the process for finding these topics is called topic modeling". In topic modeling, a document consists of different hidden topics, and the topic probabilities provide an explicit representation of the document that smooths data at the semantic level. Topic modeling has been an active area of research during the last decade, and many models have been proposed to handle text corpora with different characteristics, for applications such as document classification, hidden association finding, expert finding, community discovery and temporal trend analysis. We give basic concepts, and the advantages and disadvantages of existing models in chronological order, classify the models into different categories, and describe their parameter estimation and inference algorithms together with measures for evaluating model performance. We also discuss their applications, open challenges and future directions in this dynamic area of research.
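
The canonical DPTM is latent Dirichlet allocation (LDA); its standard generative process, shown below, illustrates the directed latent-variable structure that the surveyed models share and extend.

```latex
% Generative process of LDA (standard formulation, for orientation):
% K topics, Dirichlet priors \alpha and \beta.
\begin{align*}
  \phi_k   &\sim \mathrm{Dirichlet}(\beta)        && k = 1,\dots,K \ \text{(topic--word distributions)}\\
  \theta_d &\sim \mathrm{Dirichlet}(\alpha)       && \text{for each document } d \ \text{(topic proportions)}\\
  z_{d,n}  &\sim \mathrm{Multinomial}(\theta_d)   && \text{topic assignment of word } n \text{ in } d\\
  w_{d,n}  &\sim \mathrm{Multinomial}(\phi_{z_{d,n}}) && \text{observed word.}
\end{align*}
```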

RESEARCH ARTICLE
ID-based authenticated group key agreement from bilinear maps
Xixiang LV, Hui LI
Front Comput Sci Chin. 2010, 4 (2): 302-307.  
https://doi.org/10.1007/s11704-009-0063-x

Authenticated group key agreement (GKA) is an important cryptographic mechanism underlying many collaborative and distributed applications. Recently, identity (ID)-based authenticated GKA has been increasingly researched because of the authentication capability and simplicity of the ID-based cryptosystem. However, there are two disadvantages with this kind of mechanism: 1) private key escrow is inherent, and 2) the private key generator (PKG) must send client private keys over secure channels, making private key distribution difficult. These two disadvantages, particularly the need for secure channels, may be unacceptable for secure group communication applications. Fortunately, both can be avoided. In this paper, using bilinear maps on ECC, we present a new authenticated group key agreement protocol that does not require secure channels. The basic idea is the usual way of circumventing escrow: double keys and double encryption (verification). The secret key of a user is generated collaboratively by a key generation center (KGC) and the user; each of them holds "half" of the secret information about the user's secret key, so there is no secret key distribution problem. In addition, the computation cost of the protocol is very low because the main computation is binary addition.
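
The algebraic tool named in the title has one defining property, quoted here for context (standard pairing notation; the protocol's message flows are not reproduced):

```latex
% A bilinear pairing e : G_1 x G_1 -> G_2, with G_1 additive and G_2
% multiplicative, both of prime order q, satisfies
\[
  e(aP,\, bQ) \;=\; e(P,\, Q)^{ab}
  \qquad \text{for all } P, Q \in G_1,\ a, b \in \mathbb{Z}_q ,
\]
% together with non-degeneracy (e(P, P) \neq 1 for a generator P) and
% efficient computability.
```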

14 articles