Package nz.ac.waikato.modeljunit

Overview


Interface Summary
FsmModel Interface for FSM models for model-based testing.
ModelListener An interface for objects that listen for model events.
 

Class Summary
AbstractListener An implementation of ModelListener that ignores all events.
AllRoundTester A wrapper around another test generation algorithm that terminates each test sequence once a given number of loops is detected.
GraphListener This ModelListener builds a graph of the observed parts of the model.
GreedyTester Test a system by making greedy walks through an EFSM model of the system.
ListenerFactory This singleton object defines all the pre-defined model listeners (and coverage metrics).
LookaheadTester A test generator that looks N-levels ahead in the graph.
Model This class is a wrapper around a user-supplied EFSM model.
ModelTestCase Deprecated. Use one of the subclasses of Tester instead.
RandomTester Test a system by making random walks through an EFSM model of the system.
ResultExtractor This class runs several random and greedyRandom walks and outputs them to a text file.
StopOnFailureListener An implementation of ModelListener that throws an exception when the first test failure is detected.
Tester An abstract superclass for all the test generation algorithms.
Transition A transition represents a triple (StartState,Action,EndState).
TransitionPair A transition pair is a pair of transitions (incoming,outgoing).
VerboseListener An implementation of ModelListener that prints event messages to the Model's getOutput() stream.
 

Exception Summary
TestFailureException Exceptions related to failed tests.
 

Error Summary
FsmException Exceptions related to malformed Finite State Machines.
 

Annotation Types Summary
Action Indicates that the annotated method is a transition of an FSM.
 

Package nz.ac.waikato.modeljunit Description

Overview

ModelJUnit is a Java library for model-based testing. The basic idea is that you write an abstract model of your system under test (SUT), then you generate lots of tests from that model. ModelJUnit is usually used to do online (on-the-fly) testing, where the tests are executed on the SUT as they are generated.

Some advantages of model-based testing are that it can be quicker than writing a test suite by hand, it can give systematic coverage of the behaviours of the model, and it can make it easier to support requirements evolution (update the model and regenerate the tests).

How to do Model-Based Testing

There are four basic steps to using model-based testing:

  1. Design a model of your SUT. In ModelJUnit, this model is written as a Java class that implements the FsmModel interface. The current state of the SUT is modelled by the private data variables of this class and the operations of the SUT are modelled by the action methods of this class. For example, see the SimpleSet model.
  2. Connect your Model to your SUT. Add a reference to an SUT object to your model class. Then add some 'adaptor' code to the action methods of your model, so that each action method tests one or more of the SUT operations. This means that as the automatic test generator traverses your model, it will be calling SUT operations and checking their results. For example, see the SimpleSetWithAdaptor and SmartSetAdaptor classes, which add adaptor code to the SimpleSet model.
  3. Generate some Tests. Create a test suite by passing an instance of your model to one of the Tester classes, for example, GreedyTester. Then you can generate online (on-the-fly) test sequences of varying length using its generate(N) method. You can also connect various model coverage metrics to the tester so that you can see how thoroughly the model has been tested. See the examples in the examples package for more detail.
  4. Analyze any Test Failures. Differences between your model and your SUT are reported as test failures. You should analyze these to determine if they are caused by an SUT error, or a model/adaptor error.
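Step 1 above can be sketched as a small Java class. This sketch is in the spirit of the SimpleSet example: the FsmModel interface is reproduced locally so the code compiles on its own, but in real use you would import nz.ac.waikato.modeljunit.FsmModel and mark each action method with the @Action annotation. The class and method names here are illustrative, not part of ModelJUnit.

```java
/** A minimal model of a set that can hold at most the elements "s1" and "s2".
 * The FsmModel interface is copied here so the sketch is self-contained;
 * normally it comes from nz.ac.waikato.modeljunit. */
interface FsmModel {
    Object getState();          // a printable abstraction of the current state
    void reset(boolean testing);
}

class SimpleSetModel implements FsmModel {
    private boolean s1, s2;     // model state: which elements are in the set

    public Object getState() {
        return (s1 ? "T" : "F") + (s2 ? "T" : "F");
    }

    public void reset(boolean testing) {
        s1 = false;
        s2 = false;
    }

    // Action methods: in ModelJUnit these would carry the @Action annotation,
    // and (step 2) each would also call and check the corresponding SUT operation.
    public void addS1()    { s1 = true; }
    public void addS2()    { s2 = true; }
    public void removeS1() { s1 = false; }
    public void removeS2() { s2 = false; }
}
```

The state is deliberately coarse (just two booleans), which keeps the model's graph small enough to explore exhaustively.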

Test Generation Algorithms

Here is a brief overview of most of the test generation algorithms that ModelJUnit provides. These are all subclasses of the Tester class. Many of them use randomness to explore the graph of the model, but by default the random number generator is created with a fixed seed, so test generation is predictable and the same test results are obtained in each session. Use the Tester.setRandom(Random) method if you want to use different seeds. For example, tester.setRandom(new Random()) will make tester generate different tests (which may expose different bugs) in each test generation session. (If you do this, I suggest printing out the seed, since it is quite annoying to see a test failure and then not know how to reproduce it later.)
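The print-the-seed suggestion above can be sketched with plain java.util.Random; you would then pass the resulting Random to Tester.setRandom(...). The helper names here are illustrative, not ModelJUnit APIs.

```java
import java.util.Random;

/** Illustrative helpers for the "print the seed" pattern: pick a fresh seed
 * per session, but record it so a failing run can be replayed exactly. */
public class SeedExample {
    /** Creates a Random from an explicit seed, logging the seed for replay. */
    public static Random loggedRandom(long seed) {
        System.out.println("random seed = " + seed);  // record for later replay
        return new Random(seed);                      // deterministic given the seed
    }

    /** Picks a different seed each session (here from the clock). */
    public static Random freshLoggedRandom() {
        return loggedRandom(System.nanoTime());
    }
}
```

Rerunning with the logged seed (via loggedRandom) reproduces the exact same test sequence that exposed a failure.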

RandomTester does a random walk around the graph: at each state it randomly chooses one of the enabled transitions. GreedyTester is a subclass of RandomTester that also does a random walk, but gives preference to unexplored transitions. This means it covers the transitions of the model more quickly than RandomTester, although its behaviour becomes identical to RandomTester once all transitions out of each state have been tested.
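The greedy idea can be illustrated with a toy walk over a fixed two-state graph: prefer transitions that have not been taken yet, and fall back to a uniform random choice once everything from the current state is explored. The graph, class, and method names here are illustrative sketches, not the ModelJUnit implementation.

```java
import java.util.*;

/** A toy greedy random walk, sketching the idea behind GreedyTester:
 * at each state, choose randomly among the not-yet-taken transitions,
 * falling back to all enabled transitions once they are exhausted. */
public class GreedyWalkSketch {
    // adjacency: state -> list of (action, nextState) pairs
    static final Map<String, List<String[]>> GRAPH = Map.of(
        "A", List.of(new String[]{"x", "B"}, new String[]{"y", "A"}),
        "B", List.of(new String[]{"z", "A"}));

    /** Walks `steps` transitions from state "A", returning the action trace. */
    public static List<String> walk(int steps, Random rand) {
        Set<String> taken = new HashSet<>();   // transitions already explored
        List<String> trace = new ArrayList<>();
        String state = "A";
        for (int i = 0; i < steps; i++) {
            List<String[]> options = GRAPH.get(state);
            // prefer unexplored transitions out of the current state
            List<String[]> fresh = new ArrayList<>();
            for (String[] t : options)
                if (!taken.contains(state + ":" + t[0])) fresh.add(t);
            List<String[]> pool = fresh.isEmpty() ? options : fresh;
            String[] choice = pool.get(rand.nextInt(pool.size()));
            taken.add(state + ":" + choice[0]);
            trace.add(choice[0]);              // record the action
            state = choice[1];                 // move to the next state
        }
        return trace;
    }
}
```

On this tiny graph the greedy preference guarantees that all three transitions are covered within the first three steps, whichever seed is used.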

LookaheadTester is like GreedyTester, but more sophisticated, because it looks ahead several transitions to see where there are unexplored areas and then tries to go towards those areas. It allows the lookahead depth and several other parameters to be set to give some control over the search.

AllRoundTester can be used as a wrapper around any other test generation algorithm. It terminates each generated test sequence once a given number of loops has been detected (one by default), so it is helpful for generating all-round-trips test sets.
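The loop-detection part of this idea can be sketched in a few lines: watch a sequence of states and cut it off once a state has been revisited a given number of times. This is only an illustration of the stopping criterion; the names are not the ModelJUnit API.

```java
import java.util.*;

/** Illustrative loop counting: truncate a walk once the number of revisited
 * states reaches maxLoops, the kind of stopping rule AllRoundTester applies
 * to the sequences produced by the wrapped tester. */
public class LoopCountSketch {
    /** Returns the prefix of `states` up to and including the step at which
     *  the maxLoops-th revisit of an already-seen state occurs. */
    public static List<String> truncateAtLoops(List<String> states, int maxLoops) {
        Set<String> seen = new HashSet<>();
        List<String> prefix = new ArrayList<>();
        int loops = 0;
        for (String s : states) {
            prefix.add(s);
            if (!seen.add(s)) {            // state already visited: a loop closed
                loops++;
                if (loops >= maxLoops) break;
            }
        }
        return prefix;
    }
}
```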


marku@cs.waikato.ac.nz
Last modified: Sat Dec 15 22:13:29 NZDT 2007



Copyright © 2009 ModelJUnit Project. All Rights Reserved.