
Table of contents

Background
Introduction
Reasons To use Model Checking
How to perform Model Checking
Algorithms of Model Checking
Benefits of Model Checking
Conclusion

ABSTRACT

Model checking is a technique for exhaustively searching a model's state space for possible errors. Testing is a common method for enhancing the quality of a software product by checking for errors in program executions sampled according to some criterion, called a coverage criterion. Testing is a costly process, especially if it is not supported by an appropriate method (and tool) for generating test suites, i.e. sets of test cases that fulfil a specific coverage criterion. This work introduces a method based on model checking that supports the generation of test cases for coverage-based unit testing. As a proof of concept, we provide results obtained from the developed prototype tool support. Our method relies on automatically deriving from the source code a SPIN model with injected breakpoints. Test cases are obtained as counterexamples of violated Linear Temporal Logic (LTL) formulae that are automatically produced based on the selected coverage criterion.

BACKGROUND

INTRODUCTION

GERARD J. HOLZMANN and MARGARET H. SMITH

In theory, software applications can be designed either top-down or bottom-up. It has been debated for decades which of these two methods, applied in isolation, would be better. In practice, the two methods are often used in combination. An industrial software design project normally starts with a top-down system design phase, where requirements and constraints are explored and defined. This is followed by a bottom-up phase, where developers try to meet the stated objectives. The design is then concluded with a system validation phase, where testers probe the code to check if the top-down and the bottom-up phases meet in the middle.
If code were developed purely top-down, we could apply systematic design refinement techniques to guide the process, to ensure that each new level preserves all top-level requirements and continues to satisfy all constraints. This assumes, however, that the requirements and constraints do not change much, or that one would be willing to return to the first design phase each time they do. It also assumes that we can successfully carry through refinement proofs from the top-level design to implementation-level code. At present, these assumptions are not very realistic for larger applications.

A verification problem is typically presented to a formal methods person in the same way that it is presented to a conventional software tester. In many cases a detailed implementation of the design exists and is available. The requirements for the design are often harder to find. Often, the requirements are not accurately documented and have to be reconstructed by the tester or verifier. If the problem is sufficiently important, a verification expert can undertake to formalize the requirements, and then to construct a formal model of the code that can be shown to either satisfy or violate those requirements. The formal methods expert can make mistakes both in the formalization of the requirements and in the formalization of the code. The effort to construct and to check the accuracy of a formal model for any application of substance can be significant, often consuming weeks or months. Only in rare cases, where design and development have ceased and no maintenance is performed, will the code remain unchanged in the period in which model construction and verification is attempted. This has several important implications:

– It is difficult to perform this type of verification during the design and development phases, when verification results are most beneficial.
For this to be possible, the effort to construct an abstract model must be considerably smaller than the development effort itself.
– When verification results become available, chances are that they apply only to older versions of the code, which may no longer be of interest to the designers.
– Manually constructed models can themselves be flawed, possibly hiding real errors and often introducing artificial errors. The latter can be detected and corrected; the former cannot.

REASONS TO USE MODEL CHECKING

How to perform Model Checking

MODEL EXTRACTION FROM CODE

For any software artifact there are at least two types of entities that are of interest when it comes to verification: the final version of the design requirements and the ultimate code that implements them. It is the task of the verifier to prove that the latter conforms to the former. There can be many hurdles to overcome before the verification itself can start. First, the requirements may be partially unstated (i.e., incomplete), or they may be available only in their original form, before the long process started in which idealized requirements meet real-world constraints and are trued up. Ideally, the requirements are captured in a sufficiently formal notation, so that it is possible to check them directly for their logical consistency, independently of the implementation. In practice, we often have to interpret and disambiguate informally stated textual requirements.

A second hurdle to overcome is that the correctness properties of the full implementation are almost certainly intractable. In general, only a subset of all possible software applications can have decidable correctness properties (cf. the general unsolvability of the halting problem). One such subset, of particular interest for verification, is the set of finite-state programs. The hurdle to overcome, then, is to devise a way to map a given program into an element of this subset by some well-defined process.
That well-defined process is called abstraction, and the result of this process can be called a verification model.

A purely top-down design process starts with a high-level abstraction that can trivially be proven correct. By a disciplined process of refinement the designer then tries to obtain an implementation-level description of the application that preserves the essence of the high-level abstraction. Specific types of functionality may need to be proven at each refinement level, in addition to the preservation of properties across levels. The process requires such care and discipline that examples of its use in large-scale applications are scarce.

Verification During Code Development

The alternative that we will explore here is to allow the design and coding phases to proceed normally, but to devise a method that can extract a verification model from the current version of the code at each point in time. The model can then be shown to either satisfy or violate the current expression of all correctness requirements. Requirements and code are thus developed hand in hand. In some cases, evidence of a violation of a requirement will necessitate a refinement of the requirements. In other cases, it will call for an adjustment of the code.

Data and Control

We will assume in what follows that the application describes a dynamic system of asynchronously executed processes, communicating via message passing and/or via access to shared data. That is, we specifically target the verification of distributed-systems software in this work. For each process there is a piece of code in the implementation that describes its behavior. We would like to extract a model from each such piece of code, by applying predefined, carefully selected abstractions.
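One such abstraction keeps only the control structure of a process and leaves all data statements uninterpreted. A minimal sketch of this idea follows (a hypothetical illustration, not the authors' tool; the keyword set and the use of skip as the uninterpreted placeholder are assumptions):

```python
import re

# Hypothetical sketch: keep only the control-flow skeleton of a C-like
# function body, replacing ordinary statements with an uninterpreted 'skip;'.
CONTROL = re.compile(r'^\s*(if|else|while|for|do|switch|case|goto|default)\b')

def skeleton(lines):
    out = []
    for line in lines:
        stripped = line.strip()
        if not stripped:
            continue  # drop blank lines
        if CONTROL.match(stripped) or stripped in ('{', '}'):
            out.append(stripped)   # control construct: keep verbatim
        else:
            out.append('skip;')    # data statement: leave uninterpreted
    return out

# Example: the loop structure survives, the assignments do not.
print(skeleton(['x = 0;', 'while (x < 3)', '{', 'x = x + 1;', '}']))
# -> ['skip;', 'while (x < 3)', '{', 'skip;', '}']
```

The point of the sketch is that such an extractor needs almost no knowledge of the source language, which is exactly the property exploited in the next paragraph.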
At the minimum, we have to extract the control structure from the code. We need only a minimal parser for the language in which the code is written to extract such a control-flow skeleton. For applications written in C, for instance, the parser needs to know that statements are terminated by semicolons, and it needs to know the syntax of selection and iteration constructs such as if, while, until, goto, case, and switch, but it needs to know very little else about the language. In particular, the text of statements (assignments, conditions, function calls, etc.) can remain uninterpreted at this point.

In PROMELA, the specification language used by SPIN, message send and receive operations are applied to channels (i.e., message queues). These operations typically include only those message parameters that are deemed relevant to the correctness properties of the application. Sequence numbers and congestion-control information, for instance, would normally qualify for inclusion in such a mapping, but random user data would not.

Both the source code and the PROMELA model represent all message-passing primitives in a specific, but distinct, notation. This consistent and context-independent representation style in both the detailed and the abstracted version of the code allows for a particularly simple and elegant method of conversion. All that is needed is the definition of a lookup table that is populated with the required conversions. A critical observation at this point is, however, that we can use the same discipline to define an abstraction for the remainder of the source statements as well.

Model Templates

To produce the final model in PROMELA, every abstract data object must be formally declared, defining its type and initial value.
This can be done with the help of a model template that contains the required data declarations and an outline of the required process declarations, to be filled in with detailed behavior specifications by the model extractor.

    chan Q[2] = [1] of { mtype, bit };

    active [2] proctype state()
    {   mtype op, par1;
        bit seqno; byte who;
        chan qin, qout;

        who = _pid;
        qin = Q[_pid];
        qout = Q[1 - _pid];
        @P
    }

Figure 1. A Sample Model Template

A sample model template is shown in Figure 1. In this case the template is for a simple protocol system. The template defines the containers for two processes, and a number of local data objects that are used in the abstracted version of the code (not further detailed here). The model extractor replaces the placeholder @P with the abstracted behavior that it generates from the source code.

Checking Abstractions

The adequacy of the abstractions defined manually in the lookup tables can be checked as follows. We can check, for instance, that all updates of data objects referenced at least once in the abstract model are represented in the abstraction. We can also check that all data-dependency relations are properly preserved by the abstraction. If, for instance, data object D is part of the abstraction, we can check that all data objects on which D depends in the source code are also represented, and so on recursively. A check for these types of completeness can trivially be automated. One step further, one could also consider methods for mechanically deriving the lookup-table contents from a given body of requirements.
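The recursive dependency check described above amounts to a transitive-closure computation over the data-dependency relation. A minimal sketch (hypothetical; the function name, the dictionary encoding of dependencies, and the object names are invented for the example):

```python
# Hypothetical sketch of the completeness check: every data object on
# which an abstracted object depends, directly or transitively, must
# itself be represented in the abstraction.
def missing_dependencies(abstracted, depends_on):
    """abstracted: set of object names kept in the model.
    depends_on: dict mapping each object to the objects it reads."""
    needed, frontier = set(), list(abstracted)
    while frontier:
        obj = frontier.pop()
        for dep in depends_on.get(obj, ()):
            if dep not in needed:
                needed.add(dep)       # dep is required by the abstraction
                frontier.append(dep)  # and its own dependencies are too
    return needed - abstracted        # objects the abstraction forgot

# Example: D depends on x, x depends on y, but only D was abstracted.
deps = {'D': ['x'], 'x': ['y']}
print(missing_dependencies({'D'}, deps))  # -> {'x', 'y'}
```

An empty result means the abstraction is closed under data dependencies; a nonempty result lists candidates that should be added to the lookup tables.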
Algorithms For Model Checking

– state space enumeration
– symbolic state space enumeration
– abstract interpretation
– symbolic simulation
– symbolic trajectory evaluation
– symbolic execution

Benefits of Model Checking

– It is a general approach, with applications to hardware verification, software engineering, multi-agent systems, communication protocols, embedded systems, and so forth.
– It supports partial verification: a design can be verified against a partial specification by considering only a subset of all requirements. This can result in improved efficiency, since one can restrict the validation to checking only the most relevant requirements while ignoring less important, though possibly computationally expensive, requirements.
– Case studies have shown that incorporating model checking in the design process does not delay this process more than simulation and testing do. For several case studies the use of model checking has led to shorter development times. In addition, due to several advanced techniques, model checkers are able to deal with rather large state spaces (an example with 10^1300 states has been reported in the literature).
– Model checkers can potentially be used routinely by system designers with as much ease as they use, for instance, compilers; this is basically because model checking does not require a high degree of interaction with the user.
– There is rapidly increasing interest from industry: jobs are offered that require skills in applying model checkers; companies are building their own model checkers (e.g. Siemens and Lucent Technologies), have initiated their own research groups on model checking (e.g. Intel and Cadence), or are using existing tools (Fujitsu and Dutch Railways, to mention a few); finally, the first model checkers are becoming commercially available.
– Sound and interesting mathematical foundations: modeling, semantics, concurrency theory, logic and automata theory, data structures, graph algorithms, etc.
constitute the basis of model checking.

CONCLUSION

We introduced a method for the generation of test cases in coverage-based unit testing. The method relies on the transformation of the source code into a SPIN model with injected breakpoints. The breakpoints differentiate the program's basic blocks and at the same time provide sufficient information for the generation of test cases by model checking simple LTL formulae. Automation has been achieved not only in the program-to-model transformation, but also in post-processing the breakpoints of the model program for the production of a test suite for some preferred coverage criterion. We provided results obtained from the prototype tool support.

Our work demonstrates the feasibility of automated generation of test suites by the use of model checking. Many of the problems encountered in related work have been successfully addressed, in an attempt to provide a higher degree of automation in unit testing. We experienced problems such as the need for fine-tuning the source-to-model abstraction in a way that eliminates the possibility of state-space explosion, while at the same time the model retains the necessary fidelity for the generation of input values representing complete definitions of test cases. The program's execution environment and its role in the source-to-model transformation is an additional problem that has to be properly addressed. Last but not least, an interesting future research prospect is the automated generation of test cases for selected program variables, based on the well-known dataflow coverage criteria.
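To illustrate how simple LTL formulae can drive such test generation, consider the following sketch: for each injected breakpoint the model checker is asked to verify that the breakpoint is never reached, and a counterexample to that claim is precisely an execution reaching the breakpoint, i.e. a test case covering the corresponding basic block. The formula shape and the helper names below are assumptions for illustration, not the exact output of the authors' tool:

```python
# Hypothetical sketch: emit one SPIN LTL claim per breakpoint flag bp_i.
# The claim asserts bp_i is never set; a counterexample reaches bp_i.
def ltl_claims(breakpoints):
    return {bp: f'ltl cover_{bp} {{ !<> ({bp} == 1) }}' for bp in breakpoints}

claims = ltl_claims(['bp_0', 'bp_1'])
print(claims['bp_0'])  # -> ltl cover_bp_0 { !<> (bp_0 == 1) }
```

Running SPIN on each claim in turn and decoding the resulting counterexample traces yields one test case per reachable basic block; unreachable blocks are reported as claims that hold.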