Formal verification of synchronous data-flow program transformations toward certified compilers
Van Chan NGO, Jean-Pierre TALPIN, Thierry GAUTIER, Paul Le GUERNIC, Loïc BESNARD
Front. Comput. Sci. 2013, 7 (5): 598-616.
https://doi.org/10.1007/s11704-013-3910-8
Translation validation was introduced in the 1990s by Pnueli et al. as a technique to formally verify the correctness of code generators. Rather than certifying the code generator or exhaustively qualifying it, translation validators attempt to verify that program transformations preserve semantics. In this work, we adopt this approach to formally verify that clock semantics and data dependence are preserved during compilation by the Signal compiler. Translation validation is implemented for every compilation phase, from the initial phase to the final phase where the executable code is generated, by proving that the transformation performed in each phase preserves the semantics. We represent the clock semantics and the data dependence of a program and of its transformed counterpart as first-order formulas, called clock models, and as synchronous dependence graphs (SDGs), respectively. We then introduce clock refinement and dependence refinement relations, which express the preservation of clock semantics and of dependence as relations on clock models and on SDGs, respectively. Our validator requires neither instrumentation nor modification of the compiler, nor any rewriting of the source program.
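For illustration, a minimal sketch of the kind of refinement check the abstract describes, assuming clock models are encoded as Boolean formulas over invented presence variables and checked with the Z3 SMT solver; the variables, formulas, and refinement direction here are examples, not the paper's encoding:

```python
# Hypothetical sketch: checking that a transformed program's clock model
# refines the source's, phrased as validity of an implication.
from z3 import Bool, And, Implies, Not, Solver, unsat

# Invented clock variables: presence of signals x, y, z at an instant.
cx, cy, cz = Bool("cx"), Bool("cy"), Bool("cz")

# Toy clock models (formulas over clock variables).
source_model = And(Implies(cz, cx), Implies(cz, cy))        # z present only when x and y are
transformed_model = And(Implies(cz, cx), Implies(cz, cy), Implies(cy, cx))

def refines(transformed, source):
    """The transformed model refines the source if every instant it admits
    is also admitted by the source, i.e., transformed -> source is valid."""
    s = Solver()
    s.add(Not(Implies(transformed, source)))
    return s.check() == unsat

print(refines(transformed_model, source_model))  # True for this toy example
```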
Multi-threaded code generation from Signal program to OpenMP
Kai HU, Teng ZHANG, Zhibin YANG
Front. Comput. Sci. 2013, 7 (5): 617-626.
https://doi.org/10.1007/s11704-013-3906-4
The use of multi-core processors will become a trend in safety-critical systems. For the safe execution of multi-threaded code, automatic code generation from a formal specification is a desirable approach. Signal, a synchronous language dedicated to the functional description of safety-critical systems, provides a sound semantics for deterministic concurrency. Although sequential code generation for Signal has been implemented in the Polychrony compiler, deterministic multi-threaded code generation strategies are still far from mature. Moreover, existing code generation methods rely on a particular multi-threading library, which limits cross-platform execution. OpenMP is an application program interface (API) standard for parallel programming, supported by several mainstream compilers on different platforms. This paper presents a methodology for translating Signal programs into OpenMP-based multi-threaded C code. First, an intermediate representation of the core syntax of Signal, based on synchronous guarded actions, is defined. Then, according to the compositional semantics of Signal equations, the Signal program is synthesized into a dependency graph (DG). After parallel tasks are extracted from the dependency graph, the Signal program is finally translated into OpenMP-based C code that can be executed on multiple platforms.
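The abstract does not spell out the task-extraction step; the sketch below, invented for illustration, shows one common way to derive parallel tasks from a dependency graph by grouping nodes into topological levels. Nodes in the same level have no mutual dependencies, so a code generator could emit each level as one OpenMP parallel region:

```python
# Illustrative sketch: extract parallel "levels" from a dependency graph.
from collections import defaultdict

def parallel_levels(edges, nodes):
    """edges: iterable of (u, v) meaning u must run before v."""
    preds = defaultdict(set)
    for u, v in edges:
        preds[v].add(u)
    remaining = set(nodes)
    levels = []
    while remaining:
        ready = {n for n in remaining if not (preds[n] & remaining)}
        if not ready:
            raise ValueError("dependency graph has a cycle")
        levels.append(sorted(ready))
        remaining -= ready
    return levels

# Toy equation graph: a and b feed c; c feeds d and e.
print(parallel_levels([("a", "c"), ("b", "c"), ("c", "d"), ("c", "e")],
                      ["a", "b", "c", "d", "e"]))
# [['a', 'b'], ['c'], ['d', 'e']]
```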
Exploring system architectures in AADL via Polychrony and SynDEx
Huafeng YU, Yue MA, Thierry GAUTIER, Loïc BESNARD, Jean-Pierre TALPIN, Paul Le GUERNIC, Yves SOREL
Front. Comput. Sci. 2013, 7 (5): 627-649.
https://doi.org/10.1007/s11704-013-2307-z
The Architecture Analysis & Design Language (AADL) has been increasingly adopted in the design of embedded systems, and the corresponding scheduling and formal verification have been well studied. However, little work takes code distribution and architecture exploration into account, particularly considering clock constraints, for distributed multi-processor systems. In this paper, we present an overview of our approach to handle these concerns, together with the associated toolchain, AADL-Polychrony-SynDEx. First, in order to avoid semantic ambiguities of AADL, the polychronous/multi-clock semantics of AADL, based on a polychronous model of computation, is considered. Clock synthesis is then carried out in Polychrony, which bridges the gap between the polychronous semantics and the synchronous semantics of SynDEx. The same timing semantics is preserved throughout, in order to ensure the correctness of the transformations between the different formalisms. Code distribution and the corresponding scheduling are carried out on the obtained SynDEx model in the last step, which enables the exploration of architectures originally specified in AADL. Our contribution provides a fast yet effective architecture exploration approach for the design of distributed real-time and embedded systems. An avionics case study illustrates the approach.
A comparative study of two formal semantics of the SIGNAL language
Zhibin YANG, Jean-Paul BODEVEIX, Mamoun FILALI
Front. Comput. Sci. 2013, 7 (5): 673-693.
https://doi.org/10.1007/s11704-013-3908-2
SIGNAL is a member of the synchronous language family, which is widely used in the design of safety-critical real-time systems such as avionics, space systems, and nuclear power plants. Several semantics exist for SIGNAL, such as a denotational semantics based on traces (the trace semantics), a denotational semantics based on tags (the tagged model semantics), an operational semantics presented in a structural style through an inductive definition of the set of possible transitions, and an operational semantics defined by synchronous transition systems (STS), among others. However, there is little research on the equivalence between these semantics. In this work, we prove the equivalence between the trace semantics and the tagged model semantics, in order to obtain a determined and precise semantics of the SIGNAL language. Each of these two semantics has several different definitions; we select appropriate ones and mechanize them in Coq, together with Coq expressions of the abstract syntax of SIGNAL and of the two semantic domains, i.e., the trace model and the tagged model. The distance between the two semantics discourages a direct proof of equivalence; instead, we transform them to an intermediate model that mixes features of both the trace semantics and the tagged model semantics. In this way we obtain a determined and precise semantics of SIGNAL.
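As a toy, non-Coq illustration of the two semantic domains the abstract contrasts (the representations below are invented for the example, not the paper's definitions): a trace records, instant by instant, each signal's value or its absence, while a tagged behavior records, per signal, the (tag, value) pairs at which it is present. Converting one view into the other hints at how the two can be related through an intermediate model:

```python
# A trace: one dict per logical instant, mapping signal -> value (None = absent).
trace = [
    {"x": 1, "y": None},   # instant 0: x present, y absent
    {"x": 2, "y": 7},      # instant 1: both present
    {"x": None, "y": 8},   # instant 2: only y present
]

def trace_to_tagged(trace):
    """Tagged view: signal -> list of (tag, value) pairs, where tags are
    the indices of the instants at which the signal is present."""
    tagged = {}
    for tag, instant in enumerate(trace):
        for sig, val in instant.items():
            if val is not None:
                tagged.setdefault(sig, []).append((tag, val))
    return tagged

print(trace_to_tagged(trace))
# {'x': [(0, 1), (1, 2)], 'y': [(1, 7), (2, 8)]}
```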
Algorithms for checking channel passing in web service choreography
Hongli YANG, Chao CAI, Liyang PENG, Xiangpeng ZHAO, Zongyan QIU, Shengchao QIN
Front. Comput. Sci. 2013, 7 (5): 710-728.
https://doi.org/10.1007/s11704-013-1238-z
Web service choreography describes global models of service interactions among a set of participants. For an interaction to be executed, the participants must know the channel(s) required by the interaction; otherwise the execution gets stuck. Since channels are composed dynamically, the initial channel set of each participant is often insufficient to meet the requirements. It is the responsibility of the participants to pass channels owned (known) by one participant to others. Since a service choreography may involve many participants and complex channel constraints, it is hard for designers to specify channel passing in a choreography exactly as required. We address the problem of checking whether a service choreography lacks channels or has redundant channels, and of automatically generating channel passing, based on the interaction flows of the choreography, when channels are absent. Concretely, we propose Chorc, a simple channel interaction sub-language for modeling the channel-passing aspect of service choreography. Based on the formal operational semantics of Chorc, algorithms for statically checking a service choreography and for generating channel passing are studied, and the complexity of these algorithms is discussed. Moreover, several illustrative service choreography examples show how to formalize and analyze service choreography with channel passing in Chorc.
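The paper's checking algorithms are not reproduced here; the following sketch, invented for illustration, conveys the basic idea of propagating each participant's known channel set along a linear interaction flow and flagging an interaction whose sender does not yet know the channel it must use (participant names, channels, and the data layout are made up):

```python
# Illustrative sketch: detect missing channels in a linear interaction flow.
# Each interaction: (sender, receiver, channel_used, channels_passed),
# where channels_passed are handed from sender to receiver.

def check_channels(initial, interactions):
    known = {p: set(chs) for p, chs in initial.items()}
    missing = []
    for i, (sender, receiver, channel, passed) in enumerate(interactions):
        if channel not in known[sender]:
            missing.append((i, sender, channel))        # sender cannot use this channel
        known[receiver] |= set(passed) & known[sender]  # only known channels can be passed
    return missing

initial = {"Buyer": {"s", "b"}, "Seller": set(), "Bank": set()}
flow = [
    ("Buyer", "Seller", "s", ["b"]),   # Buyer contacts Seller on s, passes channel b
    ("Seller", "Bank", "k", []),       # Seller needs channel k but never learned it
]
print(check_channels(initial, flow))   # [(1, 'Seller', 'k')]
```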
An ACO-RFD hybrid method to solve NP-complete problems
Pablo RABANAL, Ismael RODRÍGUEZ, Fernando RUBIO
Front. Comput. Sci. 2013, 7 (5): 729-744.
https://doi.org/10.1007/s11704-013-2302-4
In this paper we hybridize ant colony optimization (ACO) and river formation dynamics (RFD), two related swarm intelligence methods. In ACO, ants form paths (problem solutions) by following each other's pheromone trails and reinforcing the trails of the best paths, until eventually a single path is followed. RFD, on the other hand, is based on copying how drops form rivers by eroding the ground and depositing sediments. In a rough sense, RFD can be seen as a gradient-oriented version of ACO. Several previous experiments have shown that the gradient orientation of RFD makes this method solve problems in a different way from ACO. In particular, RFD typically performs deeper searches, which in general makes it find worse solutions than ACO in the first execution steps, though RFD solutions surpass ACO solutions after some more time passes. In this paper we try to get the best of both worlds by hybridizing RFD and ACO. We use a kind of ant-drop hybrid and consider both pheromone trails and altitudes in the environment. We apply the hybrid method, as well as ACO and RFD, to two NP-hard problems to which ACO and RFD are suited in different ways: the traveling salesman problem (TSP) and the problem of the minimum distances tree in a variable-cost graph (MDV). We compare the results of each method and analyze the advantages of using the hybrid approach in each case.
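The abstract does not give the hybrid's transition rule; the sketch below, invented for illustration, shows one plausible way an ant-drop agent could pick its next node by weighting both the pheromone on an edge and the downhill altitude gradient. The exponents alpha and beta and all data are made-up tuning parameters, not the paper's formulation:

```python
# Illustrative hybrid ACO/RFD move: the next node is chosen with probability
# proportional to pheromone^alpha * (positive downhill gradient)^beta.
import random

def choose_next(current, neighbors, pheromone, altitude, alpha=1.0, beta=2.0):
    weights = []
    for n in neighbors:
        tau = pheromone[(current, n)]
        gradient = max(altitude[current] - altitude[n], 1e-9)  # favor going downhill
        weights.append((tau ** alpha) * (gradient ** beta))
    total = sum(weights)
    return random.choices(neighbors, weights=[w / total for w in weights])[0]

# Toy data: three neighbors of node "a" with pheromones and altitudes.
pheromone = {("a", "b"): 0.5, ("a", "c"): 1.5, ("a", "d"): 1.0}
altitude = {"a": 10.0, "b": 9.0, "c": 4.0, "d": 11.0}
print(choose_next("a", ["b", "c", "d"], pheromone, altitude))
```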
Dimensionality reduction with adaptive graph
Lishan QIAO, Limei ZHANG, Songcan CHEN
Front. Comput. Sci. 2013, 7 (5): 745-753.
https://doi.org/10.1007/s11704-013-2234-z
Graph-based dimensionality reduction (DR) methods have been applied successfully in many practical problems, such as face recognition, where graphs play a crucial role in modeling the data distribution or structure. In practice, however, the ideal graph is difficult to discover. Usually, one needs to construct a graph empirically according to various motivations, priors, or assumptions; this construction is independent of the subsequent calculation of the DR mapping. Different from previous works, in this paper we attempt to learn a graph closely linked with the DR process, and propose an algorithm called dimensionality reduction with adaptive graph (DRAG), whose idea is to learn, while seeking the projection matrix, a graph in the neighborhood of a pre-specified one. Moreover, the pre-specified graph is treated as a noisy observation of the ideal one, and the squared Frobenius divergence is used to measure their difference in the objective function. As a result, we obtain an elegant graph update formula that naturally fuses the original and transformed data information. In particular, the optimal graph is shown to be a weighted sum of the pre-defined graph in the original space and a new graph depending on the transformed space. Empirical results on several face datasets demonstrate the effectiveness of the proposed algorithm.
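A generic objective of the kind described, written here only to fix ideas and not the paper's exact formulation: a projection P and a graph W are sought jointly, with W kept close to a pre-specified graph W_0 via a squared Frobenius term.

```latex
% Generic joint objective (illustrative, not the paper's exact formulation):
% P is the projection matrix, W the learned graph, W_0 the pre-specified graph.
\min_{P,\, W} \;\; \sum_{i,j} W_{ij}\, \bigl\lVert P^{\top}x_i - P^{\top}x_j \bigr\rVert^{2}
\;+\; \lambda\, \lVert W - W_0 \rVert_{F}^{2}
```

With an objective of this shape, minimizing over W for a fixed P yields entries that combine W_0 with a term built from pairwise distances in the transformed space, which is consistent with the weighted-sum update the abstract describes.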
Reinforcement learning models for scheduling in wireless networks
Kok-Lim Alvin YAU, Kae Hsiang KWONG, Chong SHEN
Front. Comput. Sci. 2013, 7 (5): 754-766.
https://doi.org/10.1007/s11704-013-2291-3
The dynamicity of available resources and of network conditions, such as channel capacity and traffic characteristics, has posed major challenges to scheduling in wireless networks. Reinforcement learning (RL) enables wireless nodes to observe their respective operating environments, learn, and make optimal or near-optimal scheduling decisions. Learning, the main intrinsic characteristic of RL, enables wireless nodes to adapt over time to most forms of dynamicity in the operating environment. This paper presents an extensive review of the application of traditional and enhanced RL approaches to various types of scheduling schemes in wireless networks, namely packet, sleep-wake, and task schedulers, as well as the advantages and performance enhancements brought about by RL. Additionally, it presents how various challenges associated with scheduling schemes have been approached using RL. Finally, we discuss open issues related to RL-based scheduling schemes in wireless networks in order to explore new research directions in this area. The discussion is presented in a tutorial manner in order to establish a foundation for further research in this field.
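As background for the RL models surveyed, a minimal, generic Q-learning sketch for a toy packet scheduler follows; the state, actions, and reward signal are invented for illustration and are not tied to any scheme reviewed in the paper:

```python
# Minimal, generic Q-learning loop for a toy packet scheduler.
# State: which queues are backlogged; action: which queue to serve.
import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1
q_table = defaultdict(float)                # (state, action) -> estimated value
ACTIONS = [0, 1]                            # serve queue 0 or queue 1

def choose_action(state):
    if random.random() < EPSILON:           # explore occasionally
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q_table[(state, a)])   # otherwise exploit

def update(state, action, reward, next_state):
    best_next = max(q_table[(next_state, a)] for a in ACTIONS)
    q_table[(state, action)] += ALPHA * (reward + GAMMA * best_next
                                         - q_table[(state, action)])

# One illustrative step: both queues backlogged, serving queue 1 pays off more.
s = (1, 1)
a = choose_action(s)
r = 1.0 if a == 1 else 0.2                  # invented reward signal
update(s, a, r, (1, 0) if a == 1 else (0, 1))
print(a, q_table[(s, a)])
```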