.. |deadline1| replace:: **Tuesday March 22**
.. |deadline2| replace:: **Wednesday March 23**
.. |deadline3| replace:: **Friday April 15**
.. |deadlineRedistribution| replace:: **Wednesday April 20**
.. |deadline4| replace:: **Friday May 6**
.. |deadline5| replace:: **Friday May 20**
.. |presentation2| replace:: **Wednesday May 25**
.. |deadline6| replace:: **Thursday May 26**

.. toctree::
   :hidden:

==================
Project Assignment
==================

The practical part of the course will be done in the form of mini-projects
performed in groups. Each group will consist of 3 persons (exceptionally, a
group of 2 might be allowed). Each group will choose a software system to be
tested (among those proposed by the teachers, or proposed by the group).

Besides the lectures, which present the theoretical content of the course and
serve as background knowledge for the mini-projects, there will be tutorials
on specific topics. There will also be meetings to discuss each group's
progress on the mini-projects, as well as a few mandatory meetings with each
group. These meetings will take place in the slots marked as "Group work and
consultation" (see the "Lectures" tab in the menu above). The course
assistants will communicate the specific meeting dates to each group in
advance.

Introduction
============

You'll take the role of a Quality Assurance team for a software project. Your
team will have to do the following:

- build/install the project
- do some exploratory testing
- write some test scripts
- model (part of) the software
- and finally, generate more tests from your model and write an adapter to
  execute those tests on the SUT

This mini-project is split into different assignments. You'll have to submit
the result of each assignment to the [fire]_ system. All submissions follow a
common scheme: for each, you will have to submit the artifacts developed in
the assignment (test cases, scripts, etc.) together with a report describing
these artifacts. Check the assignment descriptions for details. In addition,
one assignment requires you to present your findings to the class. The
presentation doesn't have to be submitted.

The remainder of this page first lists important dates and deadlines
regarding the assignments, followed by a description of each assignment. You
can access the individual sections via the menu to the left, as well.

Project Description
===================

The project that we will be working with this year is `IntelliJ IDEA
<https://www.jetbrains.com/idea/>`_. It is an open-source integrated
development environment (IDE) written in Java. Throughout the course this
program will serve as our System Under Test (SUT). We will split up
functionalities among the groups, so each group is going to test specific
parts of the IDE's source code and the respective functionality.

The source code can be found
`here <https://github.com/JetBrains/intellij-community>`_. The main source of
documentation can be found `here <https://www.jetbrains.com/help/idea/>`_.

Important dates
===============

* Deadline for :ref:`groups`: |deadline1|
* Deadline for :ref:`exploratory`: |deadline3|
* Deadline for :ref:`modeling`: |deadline4|
* Deadline for :ref:`testGenAndExec`: |deadline5|
* Presentation :ref:`finalPresentation`: |presentation2|
* Deadline for :ref:`finalReport`: |deadline6|

.. _groups:

Forming groups
==============

First, you need to form groups of 3 students. If you cannot find other people
to join, ask on the [forum]_. If this doesn't work either, contact the lab
assistant.
Groups cannot have more than 3 people, and groups of fewer than 3 people will
only be accepted under exceptional circumstances (contact the lab assistant
if you think you qualify).

.. IMPORTANT::
   All 3 members of your group must create an account in [fire]_. The first
   one to register creates a new group and shares the group password with the
   other two. To complete this task, you **must** create a (possibly empty)
   submission of the "Forming groups" assignment.

**Deadline:** |deadline1|

.. _sut:

Distribution of the System Under Test (SUT)
===========================================

On |deadline2|, we will provide each group with a specific SUT through the
Fire system. We will provide you with a detailed description of the system
when the course starts. It has not yet been decided whether the SUTs will be
distributed by the course responsible or whether a poll will be published so
that the student groups can choose.

.. NOTE::
   **Contributing back:** All the suggested projects are open-source projects
   with a public issue tracker. While it is not mandatory for this course
   that you contribute back to the project, writing a bug report can be a
   great exercise, and the developers would most likely appreciate feedback
   from your testing effort. Be friendly and polite, and if you have any
   doubts, ask the course assistant for help in crafting your issue report.

.. IMPORTANT::
   No Fire submission in this part.

.. _exploratory:

Assignment 1: Exploratory and Automated Testing
===============================================

**Part 1:** Ensure that you are able to build the SUT from the source code
and install it on your machines. Each member of your group has to get
familiar with the project.

**Part 2:** Start doing `exploratory testing`_, that is,

   "a style of software testing that emphasizes the personal freedom and
   responsibility of the individual tester to continually optimize the
   quality of his/her work by treating test-related learning, test design,
   test execution, and test result interpretation as mutually supportive
   activities that run in parallel throughout the project."

.. _exploratory testing: https://en.wikipedia.org/wiki/Exploratory_testing

Test as many functions as you can and try to come up with difficult or
unusual use cases that the software might not handle well. Read the
documentation and check that the software behaves as described. Take some
notes:

- Was the SUT easy to install?
- Is the documentation up to date?
- Is the documentation complete?
- Did you find some corner cases that aren't properly handled?

.. IMPORTANT::
   No Fire submission for the first two parts.

**Part 3:** Write 5 test cases as JUnit scripts against your SUT. Your test
cases should test different functionalities of the SUT and be able to run
without manual intervention. In particular, they should be able to decide
automatically whether the test is a success or a failure. You'll need to find
a way for your tests to control your SUT. A minimal sketch is shown after
Part 4 below.

**Part 4:** Inject 5 faults into the existing system. A simple example would
be changing the instruction :code:`var = var + 1` into :code:`var = var - 1`,
or :code:`if(var > 5)` into :code:`if(var < 5)`. `Be creative!` The injected
faults should make the previously created automated tests fail. Take notes on
what you changed and how it influenced your tests.
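For illustration, here is a minimal sketch of what such a test script could
look like, assuming JUnit 4 on the classpath. The ``BracketMatcher`` class is
a *hypothetical* stand-in for whatever part of the IDE your group is
assigned; your real tests would call into the IntelliJ code base instead. The
comments also point out where a Part 4 fault could be injected.

.. code:: java

   import org.junit.Test;
   import static org.junit.Assert.*;

   // Hypothetical stand-in for an IDE feature: checks whether the
   // brackets in a code fragment are balanced.
   class BracketMatcher {
       static boolean isBalanced(String text) {
           int depth = 0;
           for (char c : text.toCharArray()) {
               if (c == '(') depth++;
               if (c == ')') depth--;       // Part 4 fault idea: flip to depth++
               if (depth < 0) return false; // closing bracket before any opening one
           }
           return depth == 0;
       }
   }

   public class BracketMatcherTest {
       @Test
       public void balancedInputIsAccepted() {
           assertTrue(BracketMatcher.isBalanced("f(g(x), y)"));
       }

       @Test
       public void unbalancedInputIsRejected() {
           // The assertion decides pass/fail automatically,
           // with no manual intervention.
           assertFalse(BracketMatcher.isBalanced("f(g(x), y"));
       }
   }

The same pattern applies to the real SUT: call the functionality under test,
then let the assertions decide the verdict.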
.. IMPORTANT::
   Submit the following documents to [fire]_:

   - **The test script** source files
   - **A report** containing the following:

     - A description of the tested functionalities
     - A description (in English) of your tests
     - A description of the injected faults and their influence on the tests
     - An explanation of how you control your SUT from your test scripts
     - Your findings and comments

**Deadline:** |deadline3|

.. _sut2:

Redistribution of the SUT
=========================

On |deadlineRedistribution|, we will provide each group with a specific SUT.
We will provide you with a detailed description of the system when the course
starts.

.. _modeling:

Assignment 2: Modeling
======================

In this assignment, you will create a model of your SUT that you will then
use in the last assignment to generate tests. Your model should be an
*extended finite state machine* (EFSM) that allows you to generate
interesting, arbitrarily long test sequences against your software under
test.

Your EFSM must have about 20 inputs, have at least one internal variable, and
be able to generate infinite test sequences. Your model doesn't need to cover
the whole SUT; in particular, since the SUT has many functionalities, you can
model only part of them. Start by modeling the most basic functionalities and
add more details progressively until you reach the desired number of inputs.

.. IMPORTANT::
   Submit the following documents to [fire]_:

   - **A report** containing the following:

     - The specification (in English) of the behavior of the modeled part of
       your SUT
     - The list of system inputs in your model, with their descriptions
     - A description of the state space of your model
     - The transition table
     - A graphical representation of your model

   See [ROS2000]_ for an example of how you can format your report (only the
   modeling part, not the implementation part).

**Deadline:** |deadline4|

.. _testGenAndExec:

Assignment 3: Test generation and execution
===========================================

In this assignment, you will implement the model that you developed in
:ref:`modeling` using ModelJUnit_. Start by implementing the model alone, by
implementing the interface ``FsmModel``; then connect each action in your
model to your SUT in a separate adapter class (see the
:doc:`../tutorials/modeljunit-tutorial` for an example). A minimal sketch is
shown below.

ModelJUnit offers several testing strategies and coverage metrics. Read more
about them in the `ModelJUnit documentation`_ (*testing strategies:*
AllRoundTester_, GreedyTester_, LookaheadTester_, RandomTester_; *coverage
metrics:* ActionCoverage_, StateCoverage_, TransitionCoverage_,
TransitionPairCoverage_). Before running the test generator, think about
which strategy will work best and which metric is the most interesting for
evaluating your model.
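As a rough illustration, here is a minimal sketch of an ``FsmModel``
implementation, assuming ModelJUnit 2 on the classpath. The ``EditorModel``
below is *hypothetical* (a toy editor that can have 0 to 2 tabs open, with
``openTabs`` as the internal variable); the commented-out adapter calls mark
where your separate adapter class would drive the real SUT.

.. code:: java

   import nz.ac.waikato.modeljunit.Action;
   import nz.ac.waikato.modeljunit.FsmModel;
   import nz.ac.waikato.modeljunit.GreedyTester;
   import nz.ac.waikato.modeljunit.Tester;
   import nz.ac.waikato.modeljunit.VerboseListener;
   import nz.ac.waikato.modeljunit.coverage.CoverageMetric;
   import nz.ac.waikato.modeljunit.coverage.TransitionCoverage;

   public class EditorModel implements FsmModel {
       private int openTabs;                  // internal variable of the EFSM

       public Object getState() { return openTabs; }

       public void reset(boolean testing) {
           openTabs = 0;
           // adapter.resetSut();             // bring the real SUT back to its initial state
       }

       public boolean openTabGuard() { return openTabs < 2; }
       @Action public void openTab() {
           openTabs++;
           // adapter.openTab();              // drive the SUT and check its observed state
       }

       public boolean closeTabGuard() { return openTabs > 0; }
       @Action public void closeTab() {
           openTabs--;
           // adapter.closeTab();
       }

       public static void main(String[] args) {
           Tester tester = new GreedyTester(new EditorModel());
           tester.buildGraph();                       // explore the model's graph first
           CoverageMetric transitions = new TransitionCoverage();
           tester.addCoverageMetric(transitions);
           tester.addListener(new VerboseListener()); // print the generated steps
           tester.generate(50);                       // a 50-step test sequence
           System.out.println("Transition coverage: " + transitions);
       }
   }

Swapping ``GreedyTester`` for the other testers, and ``TransitionCoverage``
for the other metrics, gives you the entries for the coverage table below.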
Generate tests from your model with the different testers available in
ModelJUnit. Collect coverage metrics for the generated tests and report the
collected values in a table like the following.

+-----------------+----------------+---------------+--------------------+------------------------+
|                 | ActionCoverage | StateCoverage | TransitionCoverage | TransitionPairCoverage |
+-----------------+----------------+---------------+--------------------+------------------------+
| AllRoundTester  |                |               |                    |                        |
+-----------------+----------------+---------------+--------------------+------------------------+
| GreedyTester    |                |               |                    |                        |
+-----------------+----------------+---------------+--------------------+------------------------+
| LookaheadTester |                |               |                    |                        |
+-----------------+----------------+---------------+--------------------+------------------------+
| RandomTester    |                |               |                    |                        |
+-----------------+----------------+---------------+--------------------+------------------------+

.. NOTE::
   You might need to experiment with the length of the generated test
   sequence: obviously, if your model has 30 transitions, ModelJUnit can't
   possibly cover it with a 20-step test! *Report the test length you are
   using.*

.. IMPORTANT::
   Submit the following documents to [fire]_:

   - **The Java source files** for your model and your adapter
   - **A report** addressing the following:

     - difficulties and remarks about the implementation of your model and
       adapter using ModelJUnit
     - how did you implement reset?
     - an evaluation of how well the model created in Assignment 2 fit, what
       needed to be changed, and why (abstraction, functionality,
       added/removed/changed transitions/states)
     - an explanation, in your own words, of each test generation strategy
       available in ModelJUnit (AllRoundTester, GreedyTester,
       LookaheadTester, RandomTester)
     - an explanation, in your own words, of each coverage metric available
       in ModelJUnit (ActionCoverage, StateCoverage, TransitionCoverage,
       TransitionPairCoverage)
     - the coverage table above
     - discussion and conclusion

**Deadline:** |deadline5|

.. _ModelJUnit documentation: ../_static/modeljunit2_docs/index.html
.. _AllRoundTester: ../_static/modeljunit2_docs/nz/ac/waikato/modeljunit/AllRoundTester.html
.. _GreedyTester: ../_static/modeljunit2_docs/nz/ac/waikato/modeljunit/GreedyTester.html
.. _LookaheadTester: ../_static/modeljunit2_docs/nz/ac/waikato/modeljunit/LookaheadTester.html
.. _RandomTester: ../_static/modeljunit2_docs/nz/ac/waikato/modeljunit/RandomTester.html
.. _ActionCoverage: ../_static/modeljunit2_docs/nz/ac/waikato/modeljunit/coverage/ActionCoverage.html
.. _StateCoverage: ../_static/modeljunit2_docs/nz/ac/waikato/modeljunit/coverage/StateCoverage.html
.. _TransitionCoverage: ../_static/modeljunit2_docs/nz/ac/waikato/modeljunit/coverage/TransitionCoverage.html
.. _TransitionPairCoverage: ../_static/modeljunit2_docs/nz/ac/waikato/modeljunit/coverage/TransitionPairCoverage.html

.. _finalPresentation:

Final Presentation
==================

Information regarding the final presentation will be added when the course
starts. The presentations are on |presentation2|. Attendance is mandatory: to
pass the labs, you need to take part in the presentation session. If you are
not able to come, you need to notify us at least 24 hours beforehand.

.. NOTE::
   The order of the presentations will be chosen at random, and we'll check
   your attendance at the end of the session.

Each group must give an 8-minute presentation, reporting on your results from
Assignments 2 and 3. After each presentation, there is some time for the
audience to ask questions.
As a guideline, you can structure the presentation as:

* 1 slide: title page, introductions
* 1-2 slides: describing your part of the SUT
* 1-2 slides: presenting what you tested (including your EFSM)
* 2 slides: reflections and findings

The presentation session will start at 10:15; both the slot before and the
slot after lunch are mandatory. After the presentation session is finished,
there are opportunities for further consultation regarding the final report.

.. _finalReport:

Final Report
============

The final report is intended as a high-level document summarizing your work
in the assignments. In a normal testing process, a report is the usual
outcome. Therefore, when writing the report, imagine that you are reporting
to the manager of the fictional QA team that you are a part of.

.. IMPORTANT::
   Submit your report to [fire]_. It should:

   - include a brief description of the SUT
   - describe your model of the SUT
   - describe what is tested in the unit tests already present in IntelliJ
   - describe what value your model-based tests add on top of the already
     existing unit tests
   - describe your findings: was the SUT correct?

   Guideline: 3 pages (excluding images).

**Deadline:** |deadline6|

Useful links
============

.. [ROS2000] Rosaria, Steven, and Harry Robinson. “Applying Models in Your
   Testing Process.” Information and Software Technology 42, no. 12 (2000):
   815–24.

.. _ModelJUnit: http://www.cs.waikato.ac.nz/~marku/mbt/modeljunit/