1. Model your hardware & sensors in SpiraTest

A. Create custom fields / artifacts for hardware context

In SpiraTest, you can add custom properties to requirements, test cases, and test runs. Use these to capture the hardware/IoT context you currently track in Excel, for example:

On Test Cases or Test Runs:

  • Device ID / Serial Number

  • ESP32 Board Type (e.g., WROOM, WROVER, custom PCB)

  • Sensor Type (e.g., temperature, IMU, pressure)

  • Sensor Location (board position or channel)

  • Firmware Version

  • Hardware Revision

  • Environment (lab, field, thermal chamber, etc.)

That way every test run in Spira includes the same metadata you’re currently putting in spreadsheet columns.
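
If you later script any of this, that same metadata maps naturally onto a simple record. A minimal sketch (the class and field names are illustrative, not anything Spira-specific):

from dataclasses import dataclass

@dataclass
class HardwareContext:
    """Per-run hardware metadata, mirroring the custom fields above."""
    device_serial: str       # e.g. "ESP32-BOARD-0007"
    board_type: str          # e.g. "WROOM", "WROVER", "custom PCB"
    sensor_type: str         # e.g. "temperature", "IMU", "pressure"
    sensor_channel: str      # board position or bus address, e.g. "I2C-0x48"
    firmware_version: str    # e.g. "v1.3.2"
    hardware_revision: str   # e.g. "Rev B"
    environment: str         # "lab", "field", "thermal chamber", ...

ctx = HardwareContext("ESP32-BOARD-0007", "WROOM", "temperature",
                      "I2C-0x48", "v1.3.2", "Rev B", "thermal chamber")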


2. Convert your spreadsheet structure into Test Cases

Right now your spreadsheet probably looks something like:

Device | Sensor | Test Name | Condition | Expected Result | Measured | Pass/Fail | Notes

You can map that to SpiraTest like this:

A. Create Test Cases for each type of sensor test

Example test cases:

  • “TEMP-001: Temperature sensor accuracy at 25°C”

  • “TEMP-002: Temperature sensor accuracy at 0°C”

  • “ACC-010: Accelerometer noise characterization”

  • “PWR-005: Current draw in deep sleep”

Each test case has test steps that mirror what the tester does:

  1. Set chamber to 25°C and wait 10 minutes.

  2. Read the sensor value from the device (UART, web interface, etc.).

  3. Record measured temperature.

  4. Compare to reference thermometer.

Use Expected Result for the acceptance criteria (e.g., “Measured temp within ±0.5°C of reference”).
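
If you ever script that comparison, the acceptance criterion is a one-liner. A minimal sketch, assuming a symmetric ± tolerance band:

def within_tolerance(measured: float, nominal: float, tol: float) -> bool:
    """True if the measured value lies within ±tol of the nominal value."""
    return abs(measured - nominal) <= tol

# Example: the ±0.5°C band around 25°C from the expected result above
print(within_tolerance(24.7, 25.0, 0.5))  # True, so the step passes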

B. Parameterize by device / sensor using Test Sets or custom fields

Instead of duplicating the test case for each device:

  • Use a Test Set called “ESP32 Sensor Qualification – Build 1.2” that includes all relevant sensor test cases.

  • Use custom properties on the Test Set or Test Run to tag:

    • Device Serial

    • Batch/lot

    • Board revision

    • Sensor part number

Each time you execute, you get a separate Test Run record with those values, instead of a new row in Excel.
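
Scripted, that parameterization is just one run record per device against the same test case. A sketch with hypothetical field names (these aren't Spira's API names):

# One Test Run per device, all executing the same test case.
devices = [
    {"serial": "ESP32-0007", "batch": "BATCH-2025-11-01", "board_rev": "B"},
    {"serial": "ESP32-0008", "batch": "BATCH-2025-11-01", "board_rev": "B"},
]

runs = [
    {
        "test_case": "TEMP-001",          # shared test case
        "device_serial": dev["serial"],   # per-run custom properties
        "batch": dev["batch"],
        "board_rev": dev["board_rev"],
    }
    for dev in devices
]
print(f"{len(runs)} runs for 1 test case")  # 2 runs, zero duplicated cases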


3. Recording sensor readings inside SpiraTest

There are a few good patterns here depending on how much data you collect.

Option 1: Use test step fields for key numeric values

For small sets of measurements (e.g., min/max/average):

  • Add Custom Fields on Test Runs like:

    • Measured_Min

    • Measured_Max

    • Measured_Avg

    • Delta_vs_Spec

  • Or use the Actual Result field on each step to record the measured value:

    • Step 2 Expected: “Measured temp within ±0.5°C of 25°C”

    • Actual: “24.7°C (PASS)”

This keeps everything human-readable and reportable.
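
When the harness takes several samples, those summary fields are one statistics call away. A minimal sketch using only the standard library:

import statistics

samples = [24.6, 24.7, 24.8, 24.7, 24.6]   # example readings in °C
nominal = 25.0

measured_min = min(samples)
measured_max = max(samples)
measured_avg = statistics.mean(samples)
delta_vs_spec = measured_avg - nominal      # maps to the Delta_vs_Spec field

print(f"min={measured_min} max={measured_max} "
      f"avg={measured_avg:.2f} delta={delta_vs_spec:+.2f}")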

Option 2: Attach logs / CSVs for dense data

If you’re streaming lots of samples from the ESP32:

  • Export the raw measurements to a CSV or log file.

  • Attach the file to the Test Run in SpiraTest.

  • Store summary statistics (avg, stdev, pass/fail) in custom fields or step actuals.

  • This gives you both:

    • A high-level view for dashboards.

    • Raw data attached for later analysis.
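
The export step itself is trivial with the standard library; a sketch (the file name and sample format are illustrative):

import csv

samples = [(0.0, 24.6), (0.5, 24.7), (1.0, 24.8)]   # (elapsed s, °C)

# Write the raw samples to a CSV you can attach to the Test Run.
with open("temp_run_ESP32-0007.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["elapsed_s", "temperature_c"])
    writer.writerows(samples)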


4. Automating sensor tests & pushing results into SpiraTest

If you want to go beyond manual entry and have the ESP32 (or a test harness) push results automatically:

A. Use a PC-side test harness that talks to ESP32 and SpiraTest

Typical architecture:

  1. Test harness on a PC (Python script, C#, Node, etc.) communicates with the ESP32:

    • Serial (UART)

    • Wi-Fi / HTTP / MQTT

    • USB bridge

  2. The harness collects data, computes pass/fail.

  3. The harness calls the SpiraTest REST API to:

    • Create a Test Run for the relevant test case.

    • Set status (Pass/Fail/Blocked).

    • Fill in custom fields (firmware version, sensor readings, etc.).

    • Optionally attach raw logs.

So your workflow is:

Run test → harness talks to device → harness pushes results into Spira → engineers view metrics in Spira instead of Excel.
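
The device side of that chain can be as simple as a line-based serial exchange. A sketch using pyserial, assuming a hypothetical firmware command READ TEMP1 that replies with a bare number:

import serial  # pip install pyserial

# Port name, baud rate, and protocol are assumptions for your setup.
with serial.Serial("/dev/ttyUSB0", 115200, timeout=2) as ser:
    ser.write(b"READ TEMP1\n")               # hypothetical firmware command
    reply = ser.readline().decode().strip()  # firmware replies e.g. "24.7"
    measured_temp = float(reply)
    print(f"Measured: {measured_temp} °C")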

B. Use Rapise (if you’re also using Inflectra’s test automation tool)

If you’re using Rapise alongside Spira:

  • Build a Rapise test that:

    • Talks to your device (serial/TCP/REST/MQTT).

    • Captures measurements and checks thresholds.

  • Link the Rapise test to a SpiraTest automation test case.

  • When Rapise runs (manually or via CI), it automatically creates Test Runs in SpiraTest with Pass/Fail and logs attached.

This is great if you later want CI/CD-style regression testing of your firmware and sensors.


5. Using SpiraTest reporting instead of manual spreadsheet pivots

Once your sensor tests live in SpiraTest, you can use built-in dashboards instead of handmade Excel charts.

Examples of useful reports:

  • Pass/Fail by sensor type (Are we seeing more failures on humidity vs temperature sensors?)

  • Defects/Incidents by hardware revision (Does Rev B have fewer sensor issues than Rev A?)

  • Test coverage by requirement (Which hardware requirements for sensors don’t have complete tests or have failing tests?)

  • Trend over time (Sensor accuracy or yield improving across builds?)

Because Test Runs, Requirements, and Incidents are all linked:

  • You can click from a failing sensor test → associated incident (bug) → which hardware build/ESP32 board/firmware caused it.

  • Management gets traceability; engineers get quick diagnostics.


6. Migration strategy: moving from spreadsheets to SpiraTest

Here’s a realistic way to transition without disrupting everything:

  1. Define the schema first

    • Decide what columns from your spreadsheet must become:

      • Custom fields (Device, Firmware, Sensor ID…)

      • Test Cases vs Test Steps

      • Test Set attributes

  2. Import existing tests

    • Use SpiraTest’s Excel import (or CSV → Excel) to bulk import (a conversion sketch follows this list):

      • Requirements (sensor performance specs).

      • Test Cases (each spreadsheet “test” becomes a Spira test case).

  3. Start with new builds only

    • For builds going forward, log all testing in SpiraTest only.

    • Keep the old spreadsheets as legacy reference, but stop adding new rows.

  4. Progressively automate

    • Start with manual test runs (enter key results by hand).

    • Then, for the most repetitive sensor tests, build a harness and push results into Spira via API / Rapise.
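
For step 2 above, here’s a sketch of reshaping a legacy sheet into an import-ready file with pandas. The legacy column names are assumptions; the output layout must match whatever template your SpiraTest Excel import add-in expects:

import pandas as pd  # pip install pandas openpyxl

# Assumed legacy columns: Device, Sensor, Test Name, Condition, Expected Result
legacy = pd.read_excel("sensor_tests_legacy.xlsx")

test_cases = pd.DataFrame({
    "Test Case Name": legacy["Test Name"],
    "Description": "Sensor: " + legacy["Sensor"].astype(str)
                   + "; Condition: " + legacy["Condition"].astype(str),
    "Expected Result": legacy["Expected Result"],
}).drop_duplicates(subset="Test Case Name")  # one case per test, not per device

test_cases.to_excel("spira_import_test_cases.xlsx", index=False)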


7. Concrete example: one sensor test end-to-end

Let’s say you want to test a temperature sensor on an ESP32 board at 0°C, 25°C, and 50°C.

In SpiraTest:

  • Requirement:
    “REQ-123: Temperature sensor accuracy shall be within ±0.5°C from 0–50°C.”

  • Test Case: TC-456: Temperature sensor accuracy verification
    Steps:

    1. Set chamber to 0°C, wait 10 minutes.

    2. Read sensor output from ESP32.

    3. Compare to calibrated thermometer.

    4. Repeat for 25°C.

    5. Repeat for 50°C.

  • Custom fields on test run:

    • Device Serial = “ESP32-0007”

    • Firmware Version = “v1.3.2”

    • Board Rev = “B”

  • During execution:

    • Tester enters actual measured values in each step’s Actual Result:

      • “0°C test: 0.3°C vs 0.0°C (PASS)”

      • “25°C test: 25.4°C vs 25.0°C (PASS)”

      • “50°C test: 50.7°C vs 50.0°C (PASS)”

  • If out of spec:

    • Mark Test Run = Failed.

    • Raise an Incident linked to this test run and requirement.

Now that run becomes a permanent, searchable record in Spira, not just a row in some tab of an Excel file.
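
Scripted, TC-456 collapses into a loop over the three setpoints that produces exactly those per-step actual-result strings. A sketch, assuming you supply the chamber control and the reads (read_sensor / read_reference are hypothetical callables):

SETPOINTS = [0.0, 25.0, 50.0]   # °C, per REQ-123
TOLERANCE = 0.5                 # ±0.5°C

def run_tc456(read_sensor, read_reference):
    results, overall_pass = [], True
    for nominal in SETPOINTS:
        # (Set the chamber to `nominal` and soak 10 minutes here.)
        measured, reference = read_sensor(), read_reference()
        ok = abs(measured - reference) <= TOLERANCE
        overall_pass = overall_pass and ok
        results.append(f"{nominal:g}°C test: {measured:.1f}°C vs "
                       f"{reference:.1f}°C ({'PASS' if ok else 'FAIL'})")
    return overall_pass, results   # run status + per-step Actual Results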


8. Suggested custom field configuration for ESP32 / sensor testing

Below is a practical starter schema you can implement in SpiraTest. You don’t have to use all of these — you can start with the core ones and expand later.

A. Custom Properties on Test Runs

These capture the specific instance of a hardware test (like columns in your spreadsheet):

  1. Device Serial Number

    • Type: Text

    • Artifact: Test Run

    • Example: ESP32-BOARD-0007

  2. ESP32 Module / Board Type

    • Type: List (single-select)

    • Values: ESP32-WROOM, ESP32-WROVER, Custom Board, Other

    • Artifact: Test Run

  3. Hardware Revision

    • Type: Text (or List if you have defined revs)

    • Example: Rev A, Rev B

    • Artifact: Test Run

  4. Firmware Version

    • Type: Text

    • Example: v1.3.2

    • Artifact: Test Run

  5. Sensor Type

    • Type: List (single-select)

    • Values: Temperature, Humidity, Pressure, IMU, Proximity, Light, Other

    • Artifact: Test Run

  6. Sensor Identifier / Channel

    • Type: Text

    • Example: TEMP1, I2C-0x48, ADC_CH3

    • Artifact: Test Run

  7. Test Environment

    • Type: List

    • Values: Lab Bench, Thermal Chamber, Field, EMI Chamber, Other

    • Artifact: Test Run

  8. Nominal Condition

    • Type: Text

    • Example: 25°C, 0°C, 50°C, 3.3V, etc.

    • Artifact: Test Run

  9. Measured Value (Primary)

    • Type: Decimal

    • Example: 24.7

    • Artifact: Test Run

  10. Measurement Units

    • Type: List

    • Values: °C, %RH, g, m/s², Pa, Lux, V, A, Custom

    • Artifact: Test Run

  11. Delta from Spec

    • Type: Decimal

    • Example: -0.3 (meaning -0.3°C from nominal)

    • Artifact: Test Run

  12. Aggregation (optional if you take multiple samples)

    • Measured_Min (Type: Decimal)

    • Measured_Max (Type: Decimal)

    • Measured_Avg (Type: Decimal)

  13. Batch / Lot ID

    • Type: Text

    • Example: BATCH-2025-11-ESP32-01

You can then build Test Run filters and dashboards based on any of these (e.g., show all temperature tests on Rev B boards with Firmware v1.3+).


B. Custom Properties on Test Cases (optional but useful)

These describe what kind of test it is, not the specific execution:

  1. Sensor Requirement ID

    • Type: Text

    • Example: REQ-SENS-001

  2. Test Category

    • Type: List

    • Values: Accuracy, Linearity, Drift, Noise, Power Consumption, Functional, Stress

  3. Applies To Hardware Revisions

    • Type: Text

    • Example: Rev A–C

9. Example JSON: pushing a test run into SpiraTest via REST API

Let’s imagine you have a Python harness that:

  • Talks to the ESP32 over serial or Wi-Fi,

  • Measures the temperature at 25°C,

  • Decides pass/fail,

  • Then posts the results into SpiraTest.

⚠️ Note: The exact REST endpoints and field names can vary by SpiraTest version and configuration, but this shows the shape of what you’d send. You’d adjust ProjectId, TestCaseId, the custom property IDs, and URLs for your actual Spira instance.

A. Example JSON payload for a test run

This is the conceptual body your harness would send via HTTP POST (the // comments are annotations for readability; strip them out, since real JSON doesn’t allow comments):

{
  "ProjectId": 5,
  "TestCaseId": 123, 
  "ReleaseId": null,
  "TestSetId": 42,
  "ExecutionStatusId": 2,
  "TesterId": 7,
  "StartDate": "2025-12-03T10:15:00",
  "EndDate": "2025-12-03T10:16:30",
  "Description": "Automated ESP32 sensor accuracy test at 25°C.",
  "CustomProperties": [
    {
      "PropertyId": 101,
      "StringValue": "ESP32-BOARD-0007"        // Device Serial Number
    },
    {
      "PropertyId": 102,
      "IntegerValue": 1                        // ESP32-WROOM (list value)
    },
    {
      "PropertyId": 103,
      "StringValue": "Rev B"                   // Hardware Revision
    },
    {
      "PropertyId": 104,
      "StringValue": "v1.3.2"                  // Firmware Version
    },
    {
      "PropertyId": 105,
      "IntegerValue": 1                        // Sensor Type = Temperature
    },
    {
      "PropertyId": 106,
      "StringValue": "TEMP1"                   // Sensor Identifier / Channel
    },
    {
      "PropertyId": 107,
      "IntegerValue": 2                        // Test Environment = Thermal Chamber
    },
    {
      "PropertyId": 108,
      "StringValue": "25°C"                    // Nominal Condition
    },
    {
      "PropertyId": 109,
      "DecimalValue": 24.7                     // Measured_Value_Primary
    },
    {
      "PropertyId": 110,
      "IntegerValue": 1                        // Units = °C
    },
    {
      "PropertyId": 111,
      "DecimalValue": -0.3                     // Delta from Spec
    }
  ],
  "TestRunSteps": [
    {
      "TestStepId": 1,
      "ExecutionStatusId": 2,
      "ActualResult": "Measured 24.7°C vs nominal 25.0°C (Δ = -0.3°C, within ±0.5°C)."
    }
  ]
}

Where:

  • ExecutionStatusId follows Spira’s standard mapping:

    • 1 = Failed

    • 2 = Passed

    • 3 = Not Run

    • 4 = N/A, 5 = Blocked, 6 = Caution

    • (Confirm these IDs against your Spira version’s API documentation.)

  • PropertyId values are the IDs of your custom properties (you configure these in Spira, then look up their IDs).

  • IntegerValue is used for list-type properties, StringValue for text, DecimalValue for numeric.
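
A small helper keeps that typed-value convention in one place; a sketch that follows the payload shape above (the property IDs still come from your own Spira configuration):

def custom_prop(property_id: int, value):
    """Wrap a value in the typed field Spira expects for custom properties."""
    if isinstance(value, bool):                      # bools are ints in Python,
        raise ValueError("map booleans explicitly")  # so catch them first
    if isinstance(value, int):
        return {"PropertyId": property_id, "IntegerValue": value}  # list values
    if isinstance(value, float):
        return {"PropertyId": property_id, "DecimalValue": value}  # numerics
    return {"PropertyId": property_id, "StringValue": str(value)}  # text

props = [
    custom_prop(101, "ESP32-BOARD-0007"),  # Device Serial Number
    custom_prop(109, 24.7),                # Measured Value (Primary)
]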


B. Example Python snippet (requests-based harness)

Here’s a minimal Python example of posting that JSON to SpiraTest:

import requests
from datetime import datetime, timezone

SPIRA_URL = "https://yourcompany.spiraservice.net"
PROJECT_ID = 5
TEST_CASE_ID = 123
TEST_SET_ID = 42

USERNAME = "your.spira.username"
API_KEY = "your-spira-api-key"

def post_esp32_temperature_run(device_serial, firmware, measured_temp, nominal=25.0):
    delta = measured_temp - nominal
    status_id = 2 if abs(delta) <= 0.5 else 1  # 2 = Passed, 1 = Failed (standard Spira IDs; verify in your instance)

    now = datetime.now(timezone.utc).isoformat(timespec="seconds")  # utcnow() is deprecated; also confirm the date format your Spira version expects

    payload = {
        "ProjectId": PROJECT_ID,
        "TestCaseId": TEST_CASE_ID,
        "TestSetId": TEST_SET_ID,
        "ExecutionStatusId": status_id,
        "TesterId": None,  # Or a specific user ID if you want
        "StartDate": now,
        "EndDate": now,
        "Description": f"Automated ESP32 temperature test at {nominal}°C.",
        "CustomProperties": [
            {"PropertyId": 101, "StringValue": device_serial},        # Device Serial
            {"PropertyId": 104, "StringValue": firmware},             # Firmware Version
            {"PropertyId": 108, "StringValue": f"{nominal}°C"},       # Nominal Condition
            {"PropertyId": 109, "DecimalValue": measured_temp},       # Measured_Value_Primary
            {"PropertyId": 111, "DecimalValue": delta}                # Delta from Spec
        ],
        "TestRunSteps": [
            {
                "TestStepId": 1,
                "ExecutionStatusId": status_id,
                "ActualResult": (
                    f"Measured {measured_temp:.2f}°C vs nominal {nominal:.2f}°C "
                    f"(Δ = {delta:.2f}°C)."
                )
            }
        ]
    }

    url = f"{SPIRA_URL}/Services/v6_0/RestService.svc/projects/{PROJECT_ID}/test-runs"
    response = requests.post(url, json=payload, auth=(USERNAME, API_KEY))

    response.raise_for_status()
    return response.json()

# Example usage
if __name__ == "__main__":
    result = post_esp32_temperature_run(
        device_serial="ESP32-BOARD-0007",
        firmware="v1.3.2",
        measured_temp=24.7
    )
    print("Created test run:", result)

You’d plug this into your existing test harness that reads from the ESP32 (serial/Wi-Fi/etc.), then call post_esp32_temperature_run() with the real measured values.
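
If you also want the raw CSV attached (section 3, Option 2), Spira’s REST API exposes a documents endpoint. The path, the body fields, and the Test Run artifact-type ID in this sketch are assumptions to verify against your Spira version’s REST documentation (it reuses the constants and requests import from the snippet above):

import base64

def attach_csv_to_test_run(test_run_id, csv_path):
    # Endpoint and field names are assumptions; check your Spira REST docs.
    url = (f"{SPIRA_URL}/Services/v6_0/RestService.svc/"
           f"projects/{PROJECT_ID}/documents/file")
    with open(csv_path, "rb") as f:
        encoded = base64.b64encode(f.read()).decode()
    body = {
        "FilenameOrUrl": csv_path,
        "BinaryData": encoded,
        "AttachedArtifacts": [
            # ArtifactTypeId 5 is assumed to mean Test Run; verify it.
            {"ArtifactId": test_run_id, "ArtifactTypeId": 5}
        ],
    }
    response = requests.post(url, json=body, auth=(USERNAME, API_KEY))
    response.raise_for_status()
    return response.json()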