The relevant API endpoints are in the Manual Test Run section. The steps to create test runs manually through the API are as follows:

  1. Create a Test Run shell with a POST request. The request body is an array of the Test Case IDs you want to instantiate test runs for, for example: "[290, 283, 304]". The response will be an array of "shell" test run objects, which you can populate as needed - see the PUT body example below. Note: there is also an endpoint for creating a test run shell for a single Test Set.
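As a sketch, the shell-creation call can be built like this in Python. The base URL and the "/test-runs/create" route shown here are assumptions for illustration - verify the exact path against your Spira instance's REST API reference. The helper only builds the request; sending it (and adding authentication headers) is left to your HTTP client.

```python
import json

def build_shell_request(base_url, project_id, test_case_ids):
    """Build the (url, body) pair for the POST that creates test run shells.

    The '/projects/{id}/test-runs/create' route is an assumption based on
    this guide; check your Spira REST API documentation for the real path.
    """
    url = f"{base_url}/projects/{project_id}/test-runs/create"
    body = json.dumps(test_case_ids)  # e.g. '[290, 283, 304]'
    return url, body

# Example: request shells for three test cases in project 24
url, body = build_shell_request(
    "https://example/Services/v6_0/RestService.svc", 24, [290, 283, 304])
```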

  2. Populate the test run shell object with the relevant data for the test run. Each test step supports an "ActualResult" and an "ExecutionStatusId". The shell is prepopulated with an ExecutionStatusId of 3 on every step, which means Not Run - this must be changed. Please read and follow the specific points below about how to correctly populate the shell, to ensure the test run is created properly.

    1. Correctly set the test run step ExecutionStatusId values. The possible values are below:

      • 1 = Failed 

      • 2 = Passed

      • 3 = Not Run (creation will fail if a test run's steps are 100% Not Run, or a combination of Not Run, Passed, and/or Not Applicable)

      • 4 = Not Applicable

      • 5 = Blocked

      • 6 = Caution

      • Any other value will block test run creation with a "Database foreign key violation occurred" error.

    2. Exclude the test run ExecutionStatusId.  At the test run level (above the test step), there is an overall ExecutionStatusId field. We recommend deleting this property and only using ExecutionStatusId on individual steps. The application will calculate the proper execution status based on the values set for the steps.

    3. Make sure to set each test run's EndDate. Test Runs without an end date value cannot be accessed properly through the application.
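The three rules above can be sketched as one helper that updates a shell object in place. The field names follow the JSON example in this guide; the `populate_shell` helper and its status-map arguments are hypothetical conveniences, not part of the Spira API.

```python
from datetime import datetime, timezone

# Execution status IDs from this guide
FAILED, PASSED, NOT_RUN, NOT_APPLICABLE, BLOCKED, CAUTION = 1, 2, 3, 4, 5, 6

def populate_shell(shell, step_statuses, actual_results=None):
    """Apply the three rules above to one shell dict from the create endpoint.

    step_statuses  -- {TestStepId: ExecutionStatusId} for each step
    actual_results -- optional {TestStepId: actual result text}
    """
    actual_results = actual_results or {}
    # Rule 2: remove the run-level ExecutionStatusId so Spira computes it
    shell.pop("ExecutionStatusId", None)
    # Rule 1: set each step's status (the shell default is 3 = Not Run)
    for step in shell["TestRunSteps"]:
        step_id = step["TestStepId"]
        step["ExecutionStatusId"] = step_statuses[step_id]
        if step_id in actual_results:
            step["ActualResult"] = actual_results[step_id]
    # Rule 3: an EndDate is required for the run to be accessible
    shell["EndDate"] = datetime.now(timezone.utc).isoformat()
    return shell
```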

  3. PUT the updated test run into Spira. Once you have populated the test run(s) with the data you want, create the new test run(s) by sending a PUT request to the relevant endpoint.
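A minimal sketch of step 3 using only the standard library. The "/test-runs" route is an assumption - substitute the endpoint and authentication headers from your Spira REST API reference. The request is built but intentionally not sent here.

```python
import json
import urllib.request

def build_put_request(base_url, project_id, test_run):
    """Build (but do not send) the PUT request that records a test run.

    The '/projects/{id}/test-runs' route is an assumption based on this
    guide; authentication headers are omitted and must be added per your
    Spira instance's documentation.
    """
    url = f"{base_url}/projects/{project_id}/test-runs"
    data = json.dumps(test_run).encode("utf-8")
    req = urllib.request.Request(url, data=data, method="PUT")
    req.add_header("Content-Type", "application/json")
    return req

# To actually create the run(s): urllib.request.urlopen(req)
```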


A JSON example of the data you will send to the PUT endpoint in step 3 looks as follows (shown with a single test run):



        "TestRunId": 0,

        "Name": "Sample Test Case",

        "TestCaseId": 290,

        "TestRunTypeId": 1,

        "TesterId": 1,

        "ReleaseId": 155,

        "TestSetId": null,

        "TestSetTestCaseId": null,

        "StartDate": "2022-10-14T15:07:47.932Z",

        "EndDate": “2022-10-14T15:08:47.932Z”,

        "BuildId": null,

        "EstimatedDuration": null,

        "ActualDuration": null,

        "TestConfigurationId": null,

        "ProjectId": 24,

        "ArtifactTypeId": 5,

        "ConcurrencyDate": "2022-10-14T15:07:47.932Z",

        "CustomProperties": null,

        "IsAttachments": false,

        "Tags": null,

"TestRunSteps": [


                "TestRunStepId": 0,

                "TestRunId": 0,

                "TestStepId": 314,

                "TestCaseId": 290,

                "ExecutionStatusId": 2,

                "Position": 1,

                "Description": "Description",

                "ExpectedResult": "Works as expected.",

                "SampleData": null,

                "ActualResult": "Actual result",

                "ActualDuration": null,

                "StartDate": null,

                "EndDate": null





The TestRunId and TestRunStepId(s) will be populated upon completion - these should be 0 in both the shell you get from step 1 and the body you send in step 3.
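Putting the constraints together, a hypothetical pre-flight check run before the PUT might look like the following. It is a partial sketch covering only the rules stated in this guide, not an exhaustive validation of what the API accepts.

```python
VALID_STATUSES = {1, 2, 3, 4, 5, 6}  # per the status list above
NOT_RUN = 3

def validate_test_run(run):
    """Return a list of problems that would make the PUT fail or produce
    an inaccessible run, per the rules described in this guide."""
    problems = []
    if run.get("TestRunId", 0) != 0:
        problems.append("TestRunId should be 0; Spira assigns the real ID")
    if "ExecutionStatusId" in run:
        problems.append("delete the run-level ExecutionStatusId")
    if not run.get("EndDate"):
        problems.append("EndDate must be set")
    steps = run.get("TestRunSteps", [])
    statuses = [s.get("ExecutionStatusId") for s in steps]
    if any(s not in VALID_STATUSES for s in statuses):
        problems.append("invalid ExecutionStatusId (foreign key violation)")
    if steps and all(s == NOT_RUN for s in statuses):
        problems.append("all steps are Not Run; creation will fail")
    return problems
```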