Project Assignment¶
The practical part of the course will be done in the form of mini-projects performed in groups. Each group will consist of 3 persons (exceptionally a group of 2 might be allowed).
Each group will be assigned a number of mandatory meetings to discuss the progress of the group on their project work. These meetings will happen in the slots marked as “Group work and consultation” (see Lectures). The course assistants will communicate the specific meeting dates in advance to each group. Even if your group is not scheduled for a meeting, you can attend these sessions to work on your project together with your group or get help from the teaching assistants.
Introduction¶
You’ll take the role of a Quality Assurance team for a software project. Your team will have to do the following:
- build/install the project
- do some exploratory testing
- write a few unit tests
- model (part of) the software
- generate tests from your model and write an adapter to execute those tests on the SUT
This project is split into different assignments. You’ll have to submit the result of each assignment to the [fire] system. All submissions follow a common scheme. For each, you will have to submit the artifacts developed in the assignment (test cases, scripts, etc.). Check the assignment descriptions for details. In addition, you are required to present your findings to the class by the end of the course, see Lectures for time and date.
The remainder of this page first lists important dates and deadlines regarding the assignments. This is followed by descriptions of each assignment in sections. You can access the sections via the menu to the left, as well.
SUT Description¶
The project that we will be working with this year is IntelliJ IDEA. It is an open-source integrated development environment (IDE) written in Java. Throughout the course, this program will serve as our System Under Test (SUT). We will not work with the entire SUT, but with one specific functionality.
The source code can be found here. The main source of documentation can be found here.
Functionality to test¶
We will be looking at the Viewing Reference Information functionality this year. This functionality lets you view API documentation for any part of the code in the current document.
Important dates¶
- Deadline for Forming groups: Wednesday March 22
- Deadline for Assignment 1: Exploratory and Automated Testing: Wednesday April 7
- Deadline for Assignment 2: Modeling: Wednesday April 26
- Deadline for Assignment 3: Controlling IntelliJ with ModelJUnit: Wednesday May 3
- Deadline for Assignment 4: Implementing your model: Monday May 15
- Final Presentation: Wednesday May 17
- Deadline for Final Report: Monday May 22
Forming groups¶
First, you need to form groups of 3 students. This is best done in the classroom, during the breaks of lectures or at the tutorial session. If you cannot be there physically to find other people to form a group with, ask on the [forum]. If this doesn’t work either, contact the lab assistant. Groups cannot be more than 3 people and groups of less than 3 people will only be accepted under exceptional circumstances (contact the lab assistant if you think you qualify).
Important
All 3 members of your group must create an account in [fire]. The first one to register creates a new group and shares the group password with the other two.
To complete this task, you must create a submission of the “Forming groups” assignment (it can be empty).
Deadline: Wednesday March 22
Assignment 1: Exploratory and Automated Testing¶
Part 1:
Ensure that you are able to build the SUT from the source code and install it on your machines. Each member of your group has to get familiar with the project and should be able to compile it on their machine. Don’t rely on one specific computer for all the work, as this creates a single point of failure in case one of you gets sick, or loses their backpack, etc.
Part 2:
Start doing exploratory testing. That is
“a style of software testing that emphasizes the personal freedom and responsibility of the individual tester to continually optimize the quality of his/her work by treating test-related learning, test design, test execution, and test result interpretation as mutually supportive activities that run in parallel throughout the project.”
Test as many functions as you can and try to come up with difficult or unusual use cases that the software might not handle well. Read the documentation and check that the software behaves as described.
Take some notes:
- was the SUT easy to install?
- is the documentation up to date?
- is the documentation complete?
- did you find some corner cases that aren’t properly handled?
Part 3:
Write 5 test cases as JUnit scripts against your SUT. Your test cases should test different methods of the SUT and be able to run without manual intervention. In particular, they should be able to decide automatically whether the test is a success or a failure.
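The key requirement is that each test decides its own verdict. A minimal self-checking sketch (plain Java assertions standing in for JUnit's @Test and assertEquals; wordCount is a hypothetical method under test, not part of IntelliJ):

```java
public class WordCountTest {
    // Hypothetical method under test; in the assignment this would be
    // a real method of the SUT.
    static int wordCount(String s) {
        String trimmed = s.trim();
        return trimmed.isEmpty() ? 0 : trimmed.split("\\s+").length;
    }

    // Self-checking test: compares actual output to expected values
    // and reports a verdict without any manual inspection.
    static boolean testWordCount() {
        return wordCount("model based testing") == 3
            && wordCount("   ") == 0;
    }

    public static void main(String[] args) {
        System.out.println(testWordCount() ? "PASS" : "FAIL");
    }
}
```

In a real JUnit test class, the boolean check would instead be an assertEquals inside a @Test method, and the JUnit runner would report the verdict.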
Part 4:
Inject 5 faults into the existing system, each of which causes a single unit test from Part 3 to fail. A simple example would be changing the instruction var = var + 1 into var = var - 1, or if (var > 5) into if (var < 5). Be creative!
Make a comment close to the modified part with notes on what you changed, how it worked originally, and how the changes influenced your tests.
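An injected fault with that kind of comment might look like the following sketch (isKeyword is an invented example method, not actual IntelliJ code):

```java
public class KeywordChecker {
    static final String[] KEYWORDS = {"class", "void", "int"};

    static boolean isKeyword(String word) {
        for (String k : KEYWORDS) {
            // INJECTED FAULT: originally `k.equals(word)`; changed to
            // equalsIgnoreCase, so "CLASS" is now wrongly accepted.
            // This makes the unit test that checks case sensitivity fail.
            if (k.equalsIgnoreCase(word)) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(isKeyword("CLASS")); // faulty version prints true
    }
}
```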
Important
Submit the following four artifacts to [fire]:
Your notes from part 2
The test code for your JUnit tests. If you use multiple files, submit a zip file
The modified sources, i.e. the files you changed to cause a test to fail. If you use multiple files, submit a zip file
A report in PDF format containing the following:
- A description of the tested methods
- A description (in English) of your tests
- A description of the injected faults and their influence on the tests (you need to describe where the files are located in the original project)
- Your findings and comments
Deadline: Wednesday April 7
Assignment 2: Modeling¶
In this assignment, you will create a model of the SUT functionality, which you will use in the last assignment to generate tests. Your model should be an extended finite state machine (EFSM) that allows you to generate interesting, arbitrarily long test sequences against your software under test. Your EFSM must have about 20 inputs, at least one internal variable, and be able to generate infinite test sequences.
Your model doesn’t need to cover the whole part of the SUT that we handle; you can model only parts of it, as long as you adhere to the above criteria. Start by modeling the most basic parts and add more detail progressively until you reach the desired number of inputs.
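As a rough illustration of what "an EFSM with an internal variable" means, here is a generic sketch (states, inputs, and the counter are invented for illustration, not tied to IntelliJ):

```java
// Sketch of an EFSM: two control states plus an internal counter.
// The internal variable lets a small diagram describe many concrete
// states, and guards make transitions depend on its value.
public class PopupEfsm {
    enum State { EDITOR, DOC_POPUP }

    State state = State.EDITOR;
    int openCount = 0;   // internal variable of the EFSM

    // Input: request documentation. Guarded: only enabled in EDITOR.
    void showDoc() {
        if (state == State.EDITOR) {
            state = State.DOC_POPUP;
            openCount++;
        }
    }

    // Input: dismiss the popup. Guarded: only enabled in DOC_POPUP.
    void closeDoc() {
        if (state == State.DOC_POPUP) {
            state = State.EDITOR;
        }
    }
}
```

Because the machine can alternate showDoc/closeDoc forever, it can generate infinite test sequences while the counter keeps track of history.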
Important
Submit the following artifact to [fire]:
A report in PDF format containing the following:
- The specifications (in English) of the behavior of the modeled part of your SUT
- The list of system inputs in your model with their description
- A description of the state space of your model
- The transition table
- A graphical representation of your model
See [ROS2000] for an example of how you can format your report (only the modeling part, not the implementation part).
Deadline: Wednesday April 26
Assignment 3: Controlling IntelliJ with ModelJUnit¶
In this assignment, we create an adapter with which we can control the SUT. You will be provided with a model to generate tests, and you will see how to use Model-Based Testing with IntelliJ. In Assignment 4: Implementing your model, you will extend your adapter to execute tests for the model you created in Assignment 2: Modeling.
The model is provided to you in its entirety; it models a small part of the SUT (not a part of the Viewing Reference Information functionality).
Your task is to complete an adapter class, com.intellij.codeInsight.completion.CodeCompletionAdapter. The basic structure of the adapter is provided to you, and your code should go between the <student code></student code> markers. The adapter class should be used by the provided test program to test the SUT with the given model. You also need to provide a graphical representation of the ModelJUnit model.
Note
To get the provided sources, you can either use this zip file, or install them via git. From a clone of the official IntelliJ repository, you can add them as:
git remote add mbt-17 https://github.com/zut/intellij-community.git
git fetch mbt-17
git cherry-pick mbt-17/lab3-starterCode
Warning
git cherry-pick will add files to your local copy of the repository. Make sure to commit all your pending changes before running the above. It could also be a good idea to start a new branch at this stage, before cherry-picking.
Note
The structure of the provided files is
.
├── java
│   └── java-tests
│       ├── java-tests.iml
│       ├── model-tests
│       │   ├── com
│       │   │   └── intellij
│       │   │       └── codeInsight
│       │   │           └── completion
│       │   │               └── CodeCompletionAdapter.java
│       │   └── se
│       │       └── chalmers
│       │           └── dat261
│       │               ├── adapter
│       │               │   └── BaseAdapter.java
│       │               ├── CodeCompletionTest.java
│       │               └── model
│       │                   └── CodeCompletionModel.java
│       └── testData
│           └── model-based
│               └── CodeCompletion.java
└── lib
    └── modeljunit-2.5-jar-with-dependencies.jar
(15 directories, 7 files)
The interesting parts are under java/java-tests/model-tests, where the test program, the provided model, and a stub of the adapter all reside. The modifications to modeljunit-2.5-jar-with-dependencies.jar and java-tests.iml simply add modeljunit as a dependency. The file testData/model-based/CodeCompletion.java just provides scaffolding to start the tests.
Important
Submit the following two artifacts to [fire]:
- The adapter source files. If you use multiple files, submit a zip file. The adapter must enable all tests in se.chalmers.dat261.CodeCompletionTest to pass.
- An EFSM of the CodeCompletionModel, as a picture or a PDF file.
Deadline: Wednesday May 3
Assignment 4: Implementing your model¶
In this assignment, you will implement the model that you developed in Assignment 2: Modeling using ModelJUnit.
Start by implementing the model alone, by implementing the interface FsmModel; then connect each action in your model to the SUT through your adapter from Assignment 3: Controlling IntelliJ with ModelJUnit, extending it as needed.
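A minimal sketch of the shape such an implementation takes. The real interface is ModelJUnit's FsmModel (where actions are @Action-annotated methods with optional ...Guard() guard methods); here a stripped-down stand-in interface is inlined so the sketch compiles without the modeljunit jar, and the model itself is invented for illustration:

```java
// Stand-in for ModelJUnit's FsmModel (simplified assumption; the real
// interface ships in the modeljunit jar).
interface FsmModel {
    Object getState();
    void reset(boolean testing);
}

public class DocPopupModel implements FsmModel {
    private boolean popupOpen;

    // The current model state, used by ModelJUnit to track coverage.
    public Object getState() {
        return popupOpen ? "DOC_POPUP" : "EDITOR";
    }

    // reset(true) should also reset the SUT through the adapter;
    // reset(false) resets only the model.
    public void reset(boolean testing) {
        popupOpen = false;
    }

    // In real ModelJUnit these would be @Action methods, enabled only
    // when the matching guard returns true.
    public boolean showDocGuard() { return !popupOpen; }
    public void showDoc() { popupOpen = true; }   // + drive the SUT via adapter

    public boolean closeDocGuard() { return popupOpen; }
    public void closeDoc() { popupOpen = false; } // + drive the SUT via adapter
}
```

The guards keep the generated sequences sensible (e.g. never closing a popup that is not open), while the action bodies are where the adapter calls go.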
ModelJUnit offers several testing strategies and coverage metrics. Read more about them in the ModelJUnit documentation (Testing strategies: AllRoundTester, GreedyTester, LookaheadTester, RandomTester ; Coverage metrics: ActionCoverage, StateCoverage, TransitionCoverage, TransitionPairCoverage).
Before running the test generator, think about which strategy will work best, and which metric is the most interesting for evaluating your model.
Generate tests from your model with the different testers available in ModelJUnit. Collect coverage metrics for the generated tests and report the collected values in a table like the following.
|                 | ActionCoverage | StateCoverage | TransitionCoverage | TransitionPairCoverage |
|-----------------|----------------|---------------|--------------------|------------------------|
| AllRoundTester  |                |               |                    |                        |
| GreedyTester    |                |               |                    |                        |
| LookaheadTester |                |               |                    |                        |
| RandomTester    |                |               |                    |                        |
Note
You might need to experiment with the length of the generated test sequence: obviously, if your model has 30 transitions, ModelJUnit can’t possibly cover them all with a 20-step test!
Report the test length you are using.
Important
Submit the following two artifacts to [fire]:
- The java source files for your model and your adapter as a zip file
- A report in PDF format addressing the following:
- difficulties and remarks about the implementation of your model and adapter using ModelJUnit
- how did you implement reset?
- an evaluation of how well the model created in Assignment 2 fit, what needed to be changed, and why (abstraction, functionality, added/removed/changed transitions/states)
- an explanation, in your own words, of each test generation strategy available in ModelJUnit (AllRoundTester, GreedyTester, LookaheadTester, RandomTester)
- an explanation, in your own words, of each coverage metric available in ModelJUnit (ActionCoverage, StateCoverage, TransitionCoverage, TransitionPairCoverage)
- the above coverage table
- discussion and conclusion
Deadline: Monday May 15
Final Presentation¶
The presentations are on Wednesday May 17. Attendance is mandatory. To pass the labs, you need to take part in the presentation session. If you are not able to come, you need to notify us at least 24 hours beforehand.
Note
The order of the presentations will be chosen at random, and we’ll check your attendance at the end of the session.
Each group must give an 8 minute presentation, reporting on your results from Assignment 2 and 4. After each presentation, there is some time for the audience to ask questions.
As a guideline, you can structure the presentation as:
- 1 slide title-page, introductions
- 1-2 slides describing your part of the SUT
- 1-2 slides presenting what you tested (including your EFSM)
- 2 slides reflections and findings
The presentation session will start at 10:15, both slots before and after lunch are mandatory. After the presentation session is finished there are opportunities for further consultation regarding the final report.
Final Report¶
The final report is intended as a high-level document, summarizing your work in the assignments. In a normal testing process, a report is the usual outcome. Therefore, when writing the report, imagine that you are reporting to the manager of the fictional QA team that you are a part of.
Important
Submit your report to [fire], which should:
- Include a brief description of the SUT
- Describe your model of the SUT
- Describe what is tested in the unit tests already in IntelliJ
- Describe what value, on top of the already existing unit tests, you’ve added with your model-based tests
- Describe your findings: was the SUT correct?
Guideline: 3 pages (excluding images)
Deadline: Monday May 22