Project Assignment

The practical part of the course will be done in the form of mini-projects performed in groups. Each group will consist of 3 persons (exceptionally a group of 2 might be allowed). Each group will choose a software system to be tested (among those proposed by the teachers, or proposed by the group).

Besides the lectures, which present the theoretical content of the course and serve as background knowledge for working on the mini-projects, there will be some tutorials on specific topics. There will also be meetings to discuss each group's progress on their mini-project, as well as a few mandatory meetings with each group. These meetings will happen in the slots marked as “Group work and consultation” (see the “Lectures” tab in the menu above). The course assistants will communicate the specific meeting dates to each group in advance.

Introduction

You’ll take the role of a Quality Assurance team for a software project. Your team will have to do the following:

  • build/install the project
  • do some exploratory testing
  • write some test scripts
  • model (part of) the software
  • and finally, generate more tests from your model and write an adapter to execute those tests on the SUT

This mini-project is split into different assignments. You’ll have to submit the result of each assignment to the [fire] system. All submissions follow a common scheme: for each, you will have to submit the artifacts developed in the assignment (test cases, scripts, etc.) and additionally a report describing these artifacts. Check the assignment descriptions for details. In addition, one assignment requires you to present your findings to the class. The presentation doesn’t have to be submitted.

The remainder of this page first lists important dates and deadlines regarding the assignments. This is followed by descriptions of each assignment in sections. You can access the sections via the menu to the left, as well.

Project Description

The project that we will be working with this year is IntelliJ-IDEA, an open-source integrated development environment (IDE) written in Java. Throughout the course this program will serve as our System Under Test (SUT). We will split up its functionalities among the groups, so each group will test specific parts of the IDE’s source code and the corresponding functionality.

The source code can be found here.

The main source of documentation can be found here.

Important dates

Forming groups

First, you need to form groups of 3 students. If you cannot find other people to join, ask on the [forum]. If this doesn’t work either, contact the lab assistant. Groups cannot have more than 3 members, and groups of fewer than 3 will only be accepted under exceptional circumstances (contact the lab assistant if you think you qualify).

Important

All 3 members of your group must create an account in [fire]. The first one to register creates a new group and shares the group password with the other two.

To complete this task, you must create a (possibly empty) submission of the “Forming groups” assignment.

Deadline: Tuesday March 22

Distribution of the System Under Test (SUT)

On Wednesday March 23, we will provide each group with a specific SUT through the Fire system.

We will provide you with a detailed description of the system when the course starts.

It has not yet been decided whether the SUTs will be assigned by the course responsible or whether a poll will be published so that the student groups can choose.

Important

No [fire] submission for this part.

Assignment 1: Exploratory and Automated Testing

Part 1:

Ensure that you are able to build the SUT from the source code and install it on your machines. Each member of your group has to get familiar with the project.

Part 2:

Start doing exploratory testing. That is

“a style of software testing that emphasizes the personal freedom and responsibility of the individual tester to continually optimize the quality of his/her work by treating test-related learning, test design, test execution, and test result interpretation as mutually supportive activities that run in parallel throughout the project.”

Test as many functions as you can and try to come up with difficult or unusual use cases that the software might not handle well. Read the documentation and check that the software behaves as described.

Take some notes:

  • was the SUT easy to install?
  • is the documentation up to date?
  • is the documentation complete?
  • did you find some corner cases that aren’t properly handled?

Important

No [fire] submission for the first two parts.

Part 3:

Write 5 test cases as JUnit scripts against your SUT. Your test cases should test different functionalities of the SUT and be able to run without manual intervention. In particular, they should be able to decide automatically whether the test is a success or a failure.

You’ll need to find a way for your tests to control your SUT.
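
For illustration only, a test like the following would satisfy these requirements. The class ProjectNameValidator and its isValid method are hypothetical placeholders, not real IntelliJ-IDEA classes; replace them with whatever part of your assigned SUT functionality your tests drive (through its API, test fixtures, or a UI driver).

    import static org.junit.Assert.assertFalse;
    import static org.junit.Assert.assertTrue;

    import org.junit.Test;

    // Hypothetical sketch: ProjectNameValidator stands in for a real class in
    // your part of the SUT. Each test drives the SUT programmatically and
    // decides success or failure on its own through assertions.
    public class ProjectNameValidatorTest {

        @Test
        public void acceptsSimpleName() {
            assertTrue(new ProjectNameValidator().isValid("my-project"));
        }

        @Test
        public void rejectsEmptyName() {
            // A corner case found during exploratory testing.
            assertFalse(new ProjectNameValidator().isValid(""));
        }
    }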

Part 4:

Inject 5 faults into the existing system. A simple example would be changing the instruction var = var + 1 into var = var - 1 or if(var > 5) into if(var < 5). Be creative!

The injected faults should make the previously created automated tests fail. Take notes on what you changed and how it influenced your tests.
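
As a purely hypothetical illustration (continuing the placeholder class from Part 3), an injected fault could flip a boundary check so that one of your automated tests starts failing:

    // Hypothetical SUT method with an injected fault: the original condition
    //   name.length() > 0
    // was changed to
    //   name.length() >= 0
    // so the empty string is now wrongly accepted and rejectsEmptyName() fails.
    public boolean isValid(String name) {
        return name != null && name.length() >= 0;
    }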

Important

Submit the following documents to [fire]:

  • The test script source files

  • A report containing the following:

    • A description of the tested functionalities
    • A description (in English) of your tests
    • A description of the injected faults and their influence on the tests
    • An explanation about how you control your SUT from your test scripts
    • Your findings and comments

Deadline: Friday April 15

Redistribution of the SUT

On Wednesday April 20, we will provide each group with a specific SUT.

We will provide you with a detailed description of the system when the course starts.

Assignment 2: Modeling

In this assignment, you will create a model of your SUT that you will then use in the last assignment to generate tests. Your model should be an extended finite state machine (EFSM) that allows you to generate interesting, arbitrarily long test sequences against your software under test. Your EFSM must have about 20 inputs, at least one internal variable, and be able to generate infinite test sequences.

Your model doesn’t need to cover the whole SUT; since the SUT has many functionalities, you can model only part of them. Start by modeling the most basic functionalities and progressively add more detail until you reach the desired number of inputs.
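
For illustration only (this tiny example is hypothetical and not tied to IntelliJ-IDEA’s actual behavior), an EFSM for a single editor tab with an internal counter of unsaved edits could look like this:

  • States: Closed, CleanOpen, DirtyOpen; internal variable: edits (number of unsaved edits, initially 0)
  • open [guard: in Closed]: Closed → CleanOpen, edits := 0
  • type [guard: not in Closed]: → DirtyOpen, edits := edits + 1
  • save [guard: in DirtyOpen]: DirtyOpen → CleanOpen, edits := 0
  • close [guard: in CleanOpen]: CleanOpen → Closed

Because type and save can alternate forever, this model can generate infinite test sequences; your own model will of course need many more inputs (about 20) covering your part of the SUT.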

Important

Submit the following documents to [fire]:

  • A report containing the following:

    • The specifications (in English) of the behavior of the modeled part of your SUT
    • The list of system inputs in your model with their description
    • A description of the state space of your model
    • The transition table
    • A graphical representation of your model

See [ROS2000] for an example of how you can format your report (only the modeling part, not the implementation part).

Deadline: Friday May 6

Assignment 3: Test generation and execution

In this assignment, you will implement the model that you developed in Assignment 2: Modeling, using ModelJUnit.

Start by implementing the model alone as a class that implements the FsmModel interface; then connect each action in your model to your SUT in a separate adapter class (see the ModelJUnit tutorial for an example).
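
As a rough sketch only, the resulting structure could look like the following (assuming ModelJUnit 2.x, where the classes live in the nz.ac.waikato.modeljunit package; older releases use a different package prefix). EditorTabModel revisits the tiny illustrative EFSM sketched in Assignment 2 above, and EditorTabAdapter is a hypothetical placeholder for your own adapter class that forwards each action to the real SUT and checks its responses.

    import nz.ac.waikato.modeljunit.Action;
    import nz.ac.waikato.modeljunit.FsmModel;

    // Sketch of an FsmModel for the tiny editor-tab EFSM used as an example
    // in Assignment 2. EditorTabAdapter is a hypothetical placeholder for
    // your own adapter class.
    public class EditorTabModel implements FsmModel {

        private enum TabState { CLOSED, CLEAN_OPEN, DIRTY_OPEN }

        private TabState state = TabState.CLOSED;
        private int edits = 0;                       // internal EFSM variable
        private final EditorTabAdapter sut = new EditorTabAdapter();

        public Object getState() {
            // Abstract the counter so the visible state space stays small.
            return state + "/" + Math.min(edits, 1);
        }

        public void reset(boolean testing) {
            state = TabState.CLOSED;
            edits = 0;
            sut.reset(testing);                      // also reset the real SUT
        }

        public boolean openGuard() { return state == TabState.CLOSED; }
        @Action public void open() {
            sut.open();
            state = TabState.CLEAN_OPEN;
            edits = 0;
        }

        public boolean typeGuard() { return state != TabState.CLOSED; }
        @Action public void type() {
            sut.type();
            state = TabState.DIRTY_OPEN;
            edits++;
        }

        public boolean saveGuard() { return state == TabState.DIRTY_OPEN; }
        @Action public void save() {
            sut.save();
            state = TabState.CLEAN_OPEN;
            edits = 0;
        }

        public boolean closeGuard() { return state == TabState.CLEAN_OPEN; }
        @Action public void close() {
            sut.close();
            state = TabState.CLOSED;
        }
    }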

ModelJUnit offers several testing strategies and coverage metrics. Read more about them in the ModelJUnit documentation (Testing strategies: AllRoundTester, GreedyTester, LookaheadTester, RandomTester ; Coverage metrics: ActionCoverage, StateCoverage, TransitionCoverage, TransitionPairCoverage).

Before running the test generator, think about which strategy will work best. Which metric do you think is the most interesting for evaluating your model?

Generate tests using your model with the different testers available in ModelJUnit. Collect coverage metrics for the generated tests and report the collected values in a table like the one below (a sketch of how to collect them follows the table).

                  ActionCoverage   StateCoverage   TransitionCoverage   TransitionPairCoverage
  AllRoundTester
  GreedyTester
  LookaheadTester
  RandomTester
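
To give an idea of how the values for this table can be collected, here is a rough sketch based on the ModelJUnit tutorial (same package assumption as above; exact listener and method names may differ slightly between releases):

    import nz.ac.waikato.modeljunit.GreedyTester;
    import nz.ac.waikato.modeljunit.Tester;
    import nz.ac.waikato.modeljunit.VerboseListener;
    import nz.ac.waikato.modeljunit.coverage.ActionCoverage;
    import nz.ac.waikato.modeljunit.coverage.StateCoverage;
    import nz.ac.waikato.modeljunit.coverage.TransitionCoverage;
    import nz.ac.waikato.modeljunit.coverage.TransitionPairCoverage;

    public class GenerateTests {
        public static void main(String[] args) {
            // Repeat with AllRoundTester, LookaheadTester and RandomTester
            // to fill in the remaining rows of the table.
            Tester tester = new GreedyTester(new EditorTabModel());
            tester.addListener(new VerboseListener());   // print each generated step

            tester.buildGraph();                         // explore the model so coverage totals are known
            tester.addCoverageMetric(new ActionCoverage());
            tester.addCoverageMetric(new StateCoverage());
            tester.addCoverageMetric(new TransitionCoverage());
            tester.addCoverageMetric(new TransitionPairCoverage());

            tester.generate(200);                        // experiment with the sequence length
            tester.printCoverage();                      // prints each metric, e.g. "transition coverage: 7/8"
        }
    }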

Note

You might need to experiment with the length of the generated test sequence: obviously, if your model has 30 transitions, ModelJUnit can’t possibly cover it with a 20-step test!

Report the test length you are using.

Important

Submit the following documents to [fire]:

  • The Java source files for your model and your adapter
  • A report addressing the following:
    • difficulties and remarks about the implementation of your model and adapter using ModelJUnit
    • how did you implement reset?
    • an evaluation of how well the model created in Assignment 2 fit, what needed to be changed, and why (abstraction, functionality, added/removed/changed transitions/states)
    • an explanation, in your own words, of each test generation strategy available in ModelJUnit (AllRoundTester, GreedyTester, LookaheadTester, RandomTester)
    • an explanation, in your own words, of each coverage metric available in ModelJUnit (ActionCoverage, StateCoverage, TransitionCoverage, TransitionPairCoverage)
    • the above coverage table
    • discussion and conclusion

Deadline: Friday May 20

Presentation: Final Presentation

Information regarding the final presentation will be added when the course starts.

The presentations are on Wednesday May 25. Attendance is mandatory. To pass the labs, you need to take part in the presentation session. If you are not able to come, you need to notify us at least 24 hours beforehand.

Note

The order of the presentations will be chosen at random, and we’ll check your attendance at the end of the session.

Each group must give an 8-minute presentation, reporting on your results from Assignments 2 and 3. After each presentation, there is some time for the audience to ask questions.

As a guideline, you can structure the presentation as:

  • 1 title-page slide, introductions
  • 1-2 slides describing your part of the SUT
  • 1-2 slides presenting what you tested (including your EFSM)
  • 2 slides with reflections and findings

The presentation session will start at 10:15; both slots, before and after lunch, are mandatory. After the presentation session is finished there are opportunities for further consultation regarding the final report.

Final Report

The final report is intended as a high-level document summarizing your work in the assignments. In a normal testing process, such a report is the usual outcome. Therefore, when writing the report, imagine that you are reporting to the manager of the fictional QA team that you are part of.

Important

Submit your report to [fire]. It should:

  • Include a brief description of the SUT
  • Describe your model of the SUT
  • Describe what is tested in the unit tests already in IntelliJ
  • Describe what value, on top of the already existing unit tests, you’ve added with your model-based tests
  • Describe your findings: was the SUT correct?

Guideline: 3 pages (excluding images)

Deadline: Thursday May 26