Introduction

The TestRunner public module in Rapise introduces a streamlined way to orchestrate the execution of multiple test sets. This is highly beneficial for scheduling daily (nightly) test set runs or executing a large batch of tests on demand.

This tutorial focuses purely on the test execution capabilities of the TestRunner module. It will guide you through installing the module, grouping multiple test sets, executing them, and automatically triggering reruns for test sets that failed or did not run.


Prerequisites & Installation

Before creating your orchestration logic, you must include the TestRunner module in your Rapise testing framework.

  1. Open your main testing framework in Rapise.

  2. Navigate to the Files view or Object Tree.

  3. Right-click on the Modules / Pages folder.

  4. Select Import Public Module > TestRunner.

Note: In previous versions of Rapise, installing the TestRunner module required you to additionally install the AiTester module. Starting with Rapise 9, the AI testing engine is built-in. Therefore, there is no need to manually install AiTester—your TestRunner module is ready to work right out of the box.


Step 1: Orchestrating Multiple Test Sets

The most effective way to handle daily runs is to create a dedicated master test that triggers multiple Spira.RunTestSet actions.

  1. Create a new test case within your framework (e.g., named Nightly Orchestrator).

  2. Open the RVL (Rapise Visual Language) editor for this test.

  3. Organize your test sets into logical groups using RVL Sheets (Tabs). For example, you can create one sheet for "API Tests" and another for "UI Tests".

  4. In each sheet, use the Spira.RunTestSet method to trigger your test sets:

    • Type: Action

    • Object: Spira

    • Action: RunTestSet

    • Parameter (nameOrId): The name or ID of the Spira Test Set you want to execute.

    • Parameter (projectNameOrId): (Optional) The name or ID of the Spira project containing the Test Set.

When this master test is executed, Rapise will communicate with Spira to schedule the designated test sets to run as soon as possible.
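In script mode, the RVL rows above correspond to plain JavaScript calls on Rapise's built-in Spira object. The sketch below uses a local stub in place of that object so it can run standalone; the test set and project names are illustrative, not part of the module:

```javascript
// Stub standing in for Rapise's built-in Spira object (illustration only).
// In a real Rapise script, Spira is provided by the framework at runtime.
var scheduled = [];
var Spira = {
    // Mirrors the action's two parameters: nameOrId (required)
    // and projectNameOrId (optional; defaults to the current project).
    RunTestSet: function (nameOrId, projectNameOrId) {
        scheduled.push({ testSet: nameOrId, project: projectNameOrId || "(current)" });
    }
};

// Equivalent of the RVL rows: one action per test set.
Spira.RunTestSet("API Tests");
Spira.RunTestSet("UI Tests", "Sample Project");

scheduled.forEach(function (s) {
    console.log("Scheduled: " + s.testSet + " in " + s.project);
});
```

Each call simply asks Spira to queue the named test set; execution itself happens on the Spira side.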


Step 2: Rerunning Failed and Pending Test Sets

When executing a large batch of test sets, environmental issues or temporary outages might cause some test sets to fail or block them from running at all. You can use the TestRunner module to seamlessly find and rerun these specific test sets.

The TestRunner.DoRerunFailedAndNotRun action parses an RVL file, identifies all the test sets mentioned in its sheets, checks their current-day execution status, and immediately schedules a rerun for any test set that failed or hasn't executed yet.

  1. In your RVL script, add a new step after your initial execution flow (or in a separate validation script).

  2. Create an Action with the following parameters:

    • Type: Action

    • Object: TestRunner

    • Action: DoRerunFailedAndNotRun

    • Parameter: Provide the path to the RVL file containing your original Spira.RunTestSet calls (e.g., %WORKDIR%\Main.rvl.xlsx).

Example RVL Workflow:

  1. Spira.RunTestSet ("Web Login Tests")

  2. Spira.RunTestSet ("Checkout Flow")

  3. [Wait/Sleep logic to allow execution to finish]

  4. TestRunner.DoRerunFailedAndNotRun ("%WORKDIR%\Main.rvl.xlsx")

By pointing the action to your master RVL file, TestRunner automatically figures out which of those test sets need a second chance and handles the re-dispatching.
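Conceptually, the rerun step boils down to three operations: collect the test set names referenced in the RVL file, look up today's execution status for each, and re-dispatch any that failed or never ran. The sketch below illustrates only that decision logic; the status values, test set names, and data shape are assumptions for illustration, not the module's actual internals:

```javascript
// Hypothetical status map, as if read back from Spira for today's runs.
// Real statuses would come from the Spira API; these are placeholders.
var todaysStatus = {
    "Web Login Tests": "Passed",
    "Checkout Flow": "Failed",
    "Reporting Tests": "NotRun"
};

var rerun = [];

// Re-dispatch anything that failed or has not executed yet.
Object.keys(todaysStatus).forEach(function (testSet) {
    var status = todaysStatus[testSet];
    if (status === "Failed" || status === "NotRun") {
        rerun.push(testSet); // a real run would call Spira.RunTestSet(testSet)
    }
});

console.log("Test sets to rerun: " + rerun.join(", "));
```

Passed test sets are left alone, so only the problem cases consume a second execution slot.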


Step 3: Triggering Daily and On-Demand Executions

Once your orchestrator test is built using Spira and TestRunner methods, you can decide how and when to run it:

  • On-Demand Execution: To launch your whole suite on demand, simply open the Nightly Orchestrator test in Rapise and click the Play button.

  • Daily / Nightly Execution: Save the Nightly Orchestrator test back to Spira. Go to Spira's Test Sets tab and schedule this specific orchestrator test to run at a designated time (e.g., 2:00 AM) using RapiseLauncher.

When RapiseLauncher picks up the scheduled orchestrator test, it sets off a domino effect: the single master run triggers all the test sets grouped in your RVL sheets, evaluates the final results, and reruns any failures, all completely unattended.