This page is maintained for older versions of Spira only. The latest documentation can be found at: https://spiradoc.inflectra.com

5.4. Execute Test Case(s)

This section describes how a tester can follow the steps defined for a series of test cases and record what actually happened in the process. In addition, recorded failures of test cases can be used to automatically generate new incidents that will be added to the incident tracking module (see section 6).

You start test case execution in SpiraTeam either by selecting test cases or test sets on their respective page(s) and clicking the <Execute> button, or by clicking the “Execute” link next to the test cases / test sets listed on your personalized home page under “My Test Cases” or “My Test Sets”. If you execute a test set, the selected release and the custom list properties for the test run are populated automatically from the test set; if you execute a test case directly, the tester chooses those values.

Regardless of the route taken to launch the test execution module, the first screen displayed will look like the following:

Before actually executing the test scripts, you need to select the release (and optionally the specific build) of the system that you will be testing against and any test run custom properties that have been defined by the project owner. This ensures that the resulting test runs and incidents are associated with the correct release of the system, and that the test runs are mapped to the appropriate custom properties (e.g. operating system, platform, browser, etc.).

If you have not configured any releases for the project, the release drop-down list will be disabled and the test runs/incidents will not be associated with any particular release. If the test run was launched from a test set, the release and any list custom properties will be pre-populated from the test set itself and cannot be changed on this screen (unless the test set left them unset).
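To make these pre-population rules concrete, here is a minimal sketch in Python; resolve_run_settings and the dictionary shapes it works with are hypothetical illustrations of the logic described above, not SpiraTeam code:

    def resolve_run_settings(test_set, project_releases,
                             tester_release, tester_properties):
        """Work out the effective release and custom properties for a
        test run, following the rules above: with no releases defined
        there is nothing to associate; values carried on the test set
        are locked in; anything the test set left unset falls back to
        the tester's own choices."""
        if not project_releases:
            release = None  # release drop-down is disabled
        elif test_set is not None and test_set.get("release") is not None:
            release = test_set["release"]  # fixed by the test set
        else:
            release = tester_release  # chosen by the tester
        properties = dict(tester_properties)
        if test_set is not None:
            # properties set on the test set override the tester's
            properties.update(test_set.get("custom_properties", {}))
        return release, properties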

Once you have chosen the appropriate release name and/or custom properties, click the <Next> button to begin executing test steps:

The screen is divided up into four main elements:

  • The left-hand navigation pane contains the list of test cases and the test steps for the currently executing test case. You can click the various links to move between the test cases and/or test steps. In addition, each test case and test step has a colored square next to its name that indicates its status in the current test run (green = “Passed”, yellow = “Blocked”, orange = “Caution”, red = “Failed”, gray = “Not Run”). If any of the steps are marked as “Failed”, “Blocked”, or “Caution”, the overall test case is marked with that status; if all the test steps passed, the overall test case is marked as “Passed”; any other combination results in the test case being marked as “Not Run” (see the sketch after this list).
  • The main pane displays the details of the test case together with the current test step. As the tester you would read the name and description of the test case, then read the description of the test step, carry out the instructions on the system you are testing, and then compare the results with those listed as expected. As described below, depending on how the actual system responds, you will use the buttons on the page to record what actually happened.
  • Below the main pane there are two optional sections. The first allows you to log an incident in the system associated with the test step. For failures this will typically be used to log a bug relating to the failure. However, even if you pass a step you can still log an incident, which may be useful for recording non-critical cosmetic items that are not serious enough to warrant a failure. This section also displays any pre-existing incidents associated with the test step being viewed.
  • The second section displays a list of attachments related to the current test case and/or test step. This list initially contains any documents attached to either the test case in general or the test step in particular. However, as you perform the testing, you can attach additional documents that are relevant to the test results (e.g. screenshots of an error page); these attached documents will be associated with both the test run itself and any incidents that are created.
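To make the status roll-up rule concrete, here is a minimal sketch in Python; roll_up_status is a hypothetical helper, and the precedence among Failed, Blocked, and Caution when several occur together is an assumption, since the manual does not specify one:

    def roll_up_status(step_statuses):
        """Derive the overall test case status from its step statuses.
        Any Failed/Blocked/Caution step stamps the whole case (the
        Failed > Blocked > Caution precedence here is an assumption);
        all steps Passed means Passed; anything else, e.g. some steps
        still Not Run, leaves the case as Not Run."""
        for status in ("Failed", "Blocked", "Caution"):
            if status in step_statuses:
                return status
        if step_statuses and all(s == "Passed" for s in step_statuses):
            return "Passed"
        return "Not Run"

    roll_up_status(["Passed", "Failed", "Not Run"])  # -> "Failed"
    roll_up_status(["Passed", "Passed"])             # -> "Passed"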

If the expected results are indeed observed, simply click the <Pass> button to mark the test step as passed and advance to the next test step; or, if all the steps have passed, click <Pass All> to pass all the steps at once. This is illustrated in the screenshot below:

This will change the icon in the left-hand navigation bar into a green square with a check mark in it. Once all the test steps have passed, you will automatically be taken to the first step in the next test case; if this is the last test case being executed, the <Finish> button will be displayed instead.

If the actual results differ from those expected, you need to enter a description of the result observed and click the <Fail>, <Blocked>, or <Caution> button; this is illustrated in the screenshot below:

Unlike the <Pass> button, if you don’t enter a description of the actual result, the system will display an error message and re-prompt you for input. In the case of a failure, both the individual test step and the overall test case will be marked with a red square containing a cross. Similarly, in the case of a blocked test case they will be marked with a yellow square, and in the case of a caution they will be marked with an orange square. Once you mark a test step as <Failed>, <Caution>, or <Blocked>, you will automatically be taken to the first test step in the next test case; if it is the last test case being executed, the <Finish> button will be displayed instead.
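The description requirement can be expressed as a simple validation rule. The following Python sketch (record_step_result is a hypothetical helper, not SpiraTeam code) mirrors the behavior described above:

    def record_step_result(status, actual_result=""):
        """Record a result for the current test step. <Pass> needs no
        text, but Failed/Blocked/Caution require a description of the
        actual result observed, mirroring the on-screen validation."""
        if status != "Passed" and not actual_result.strip():
            raise ValueError(
                "Please enter a description of the actual result "
                "before recording a %s status." % status)
        return {"status": status, "actual_result": actual_result}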

In addition to logging the failure, you can optionally choose to have the failure automatically result in the creation of a new incident. This is achieved by opening the Incidents section and entering a name, type, priority, severity (and any custom properties) for the new incident before clicking the <Fail>, <Caution>, or <Blocked> button.

The other information needed for the new incident is automatically populated from the test step details. The newly created incident will also be linked to the test step, allowing traceability from within the incidents module. The functionality for managing incidents is described in more detail in section 6.
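As an illustration of how such an incident might be assembled, here is a Python sketch; build_incident_from_step and its field names are hypothetical, chosen to mirror the description above rather than SpiraTeam’s actual API:

    def build_incident_from_step(step, name, incident_type, priority,
                                 severity, custom_properties=None):
        """Combine the tester-supplied fields (name, type, priority,
        severity, custom properties) with details taken from the failed
        test step, including a traceability link back to the step."""
        return {
            "name": name,
            "type": incident_type,
            "priority": priority,
            "severity": severity,
            "description": ("Step: %s\nExpected: %s\nActual: %s"
                            % (step["description"],
                               step["expected_result"],
                               step["actual_result"])),
            "linked_test_step_id": step["id"],  # traceability link
            "custom_properties": custom_properties or {},
        }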

If you need to attach documents to the test run (e.g. screenshots of the error message), you just need to open the Attachments section and then choose the option to upload the necessary documents, attach the appropriate URLs, or paste in the appropriate screen capture.

Note that the entire test run is saved once you first start execution, so you can always step away from your computer and then resume testing at a later date by locating the test run on your ‘My Page’ under ‘My Pending Test Runs’ and choosing to resume testing.

If you click the <Pause> button, you can resume the paused test run by going to the ‘My Pending Test Runs’ section of your ‘My Page’ and clicking ‘Resume’.