1. Model your hardware & sensors in SpiraTest
A. Create custom fields / artifacts for hardware context
In SpiraTest, you can add custom properties to requirements, test cases, and test runs. Use these to capture the hardware/IoT context you currently track in Excel, for example:
On Test Cases or Test Runs:
Device ID / Serial Number
ESP32 Board Type (e.g., WROOM, WROVER, custom PCB)
Sensor Type (e.g., temperature, IMU, pressure)
Sensor Location (board position or channel)
Firmware Version
Hardware Revision
Environment (lab, field, thermal chamber, etc.)
That way every test run in Spira includes the same metadata you’re currently putting in spreadsheet columns.
2. Convert your spreadsheet structure into Test Cases
Right now your spreadsheet probably looks something like:
| Device | Sensor | Test Name | Condition | Expected Result | Measured | Pass/Fail | Notes |
|---|---|---|---|---|---|---|---|
You can map that to SpiraTest like this:
A. Create Test Cases for each type of sensor test
Example test cases:
“TEMP-001: Temperature sensor accuracy at 25°C”
“TEMP-002: Temperature sensor accuracy at 0°C”
“ACC-010: Accelerometer noise characterization”
“PWR-005: Current draw at deep sleep”
Each test case has test steps that mirror what the tester does:
Set chamber to 25°C and wait 10 minutes.
Read sensor value from device (UART/Web, etc.).
Record measured temperature.
Compare to reference thermometer.
Use Expected Result for the acceptance criteria (e.g., “Measured temp within ±0.5°C of reference”).
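That acceptance check is just a tolerance comparison, which matters later when you automate: a harness can compute pass/fail exactly the way the tester does. A minimal sketch (the function name and default tolerance are illustrative):

```python
def within_tolerance(measured: float, reference: float, tol: float = 0.5) -> bool:
    """Acceptance check: measured value within ±tol of the reference."""
    return abs(measured - reference) <= tol
```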
B. Parameterize by device / sensor using Test Sets or custom fields
Instead of duplicating the test case for each device, keep a single test case and supply the device- and sensor-specific values (serial number, sensor channel, firmware version) through Test Sets or the Test Run custom fields from section 1.
Each time you execute, you get a separate Test Run record with those values, instead of a new row in Excel.
3. Recording sensor readings inside SpiraTest
There are a few good patterns here depending on how much data you collect.
Option 1: Use test step fields for key numeric values
For small sets of measurements (e.g., min/max/average), record the numbers directly in each test step's Actual Result or in numeric custom fields on the Test Run.
This keeps everything human-readable and reportable.
Option 2: Attach logs / CSVs for dense data
If you’re streaming lots of samples from the ESP32:
Export the raw measurements to a CSV or log file.
Attach the file to the Test Run in SpiraTest.
Store summary statistics (avg, stdev, pass/fail) in custom fields or step actuals.
This gives you both: human-readable summary values you can filter and report on in Spira, and the full raw dataset attached for deeper analysis later.
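For the summary step, a small sketch like the following works, assuming the harness writes one sample per line to a CSV with a `value` column (both the file layout and the function name are illustrative):

```python
import csv
import statistics

def summarize_samples(csv_path: str, nominal: float, tol: float = 0.5) -> dict:
    """Reduce a raw ESP32 sample log to the summary values stored in Spira."""
    with open(csv_path, newline="") as f:
        samples = [float(row["value"]) for row in csv.DictReader(f)]
    avg = statistics.mean(samples)
    return {
        "min": min(samples),
        "max": max(samples),
        "avg": avg,
        "stdev": statistics.stdev(samples) if len(samples) > 1 else 0.0,
        "passed": abs(avg - nominal) <= tol,
    }
```

Attach the CSV itself to the run, and put these values in the Measured_Min / Measured_Max / Measured_Avg custom fields described in section 1) below.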
4. Automating sensor tests & pushing results into SpiraTest
If you want to go beyond manual entry and have the ESP32 (or a test harness) push results automatically:
A. Use a PC-side test harness that talks to ESP32 and SpiraTest
Typical architecture:
Test harness on a PC (Python script, C#, Node, etc.) communicates with the ESP32:
Serial (UART)
Wi-Fi / HTTP / MQTT
USB bridge
The harness collects the data and computes pass/fail.
The harness calls the SpiraTest REST API to:
Create a Test Run for the relevant test case.
Set status (Pass/Fail/Blocked).
Fill in custom fields (firmware version, sensor readings, etc.).
Optionally attach raw logs.
So your workflow is:
Run test → harness talks to device → harness pushes results into Spira → engineers view metrics in Spira instead of Excel.
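The device-side half of that loop can be very small. Here is a sketch of a serial read using pyserial, assuming (hypothetically) that your firmware answers a `READ TEMP1` command with a single numeric line:

```python
import serial  # pyserial

def read_temperature(port: str = "/dev/ttyUSB0", baud: int = 115200) -> float:
    """Request one temperature sample from the ESP32 over UART and parse it."""
    with serial.Serial(port, baud, timeout=2) as esp:
        esp.write(b"READ TEMP1\n")  # hypothetical firmware command
        line = esp.readline().decode("ascii").strip()
    return float(line)  # firmware assumed to reply with e.g. "24.70"
```

The harness then feeds that value into the pass/fail logic and the Spira REST call shown in section 2) below.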
B. Use Rapise (if you’re also using Inflectra’s test automation tool)
If you’re using Rapise alongside Spira:
Build a Rapise test that talks to the device (e.g., over serial or HTTP), collects the sensor readings, and applies the pass/fail criteria.
Link the Rapise test to a SpiraTest automation test case.
When Rapise runs (manually or via CI), it automatically creates Test Runs in SpiraTest with Pass/Fail and logs attached.
This is great if you later want CI/CD style regression testing of your firmware and sensors.
5. Using SpiraTest reporting instead of manual spreadsheet pivots
Once your sensor tests live in SpiraTest, you can use built-in dashboards instead of handmade Excel charts.
Examples of useful reports:
Pass/Fail by sensor type (Are we seeing more failures on humidity vs temperature sensors?)
Defects/Incidents by hardware revision (Does Rev B have fewer sensor issues than Rev A?)
Test coverage by requirement (Which hardware requirements for sensors don’t have complete tests or have failing tests?)
Trend over time (Sensor accuracy or yield improving across builds?)
Because Test Runs, Requirements, and Incidents are all linked, you can trace a failing sensor reading back to the requirement it violates and to any defect raised against it, which is traceability a spreadsheet can't give you.
6. Migration strategy: moving from spreadsheets to SpiraTest
Here’s a realistic way to transition without disrupting everything:
Define the schema first (the custom fields and test case structure described above).
Import existing tests (SpiraTest's Excel import add-in can turn your current spreadsheet rows into test cases and steps).
Start with new builds only
For builds going forward, log all testing in SpiraTest only.
Keep the old spreadsheets as legacy reference, but stop adding new rows.
Progressively automate
Start with manual test runs (enter key results by hand).
Then, for the most repetitive sensor tests, build a harness and push results into Spira via API / Rapise.
7. Concrete example: one sensor test end-to-end
Let’s say you want to test a temperature sensor on an ESP32 board at 0°C, 25°C, 50°C.
In SpiraTest:
Requirement:
“REQ-123: Temperature sensor accuracy shall be within ±0.5°C from 0–50°C.”
Test Case: TC-456: Temperature sensor accuracy verification
Steps:
Set chamber to 0°C, wait 10 minutes.
Read sensor output from ESP32.
Compare to calibrated thermometer.
Repeat for 25°C.
Repeat for 50°C.
Custom fields on the test run: device serial number, firmware version, hardware revision, nominal condition, and measured value/delta.
During execution: record the measured temperature at each setpoint in the step's Actual Result, and mark each step Pass or Fail against the ±0.5°C criterion.
If out of spec: fail the step and raise an Incident directly from the test run; it stays linked to TC-456 and, through coverage, to REQ-123.
Now that run becomes a permanent, searchable record in Spira, not just a row in some tab of an Excel file.
1) Suggested custom field configuration for ESP32 / sensor testing
Below is a practical starter schema you can implement in SpiraTest. You don’t have to use all of these — you can start with the core ones and expand later.
A. Custom Properties on Test Runs
These capture the specific instance of a hardware test (like columns in your spreadsheet):
Device Serial Number
Type: Text
Example: ESP32-BOARD-0007
Workflow: Test Run
ESP32 Module / Board Type
Type: List (single-select)
Values: ESP32-WROOM, ESP32-WROVER, Custom Board, Other
Workflow: Test Run
Hardware Revision
Type: Text
Example: Rev B
Workflow: Test Run
Firmware Version
Type: Text
Example: v1.3.2
Workflow: Test Run
Sensor Type
Type: List (single-select)
Values: Temperature, Humidity, Pressure, IMU, Proximity, Light, Other
Workflow: Test Run
Sensor Identifier / Channel
Type: Text
Example: TEMP1, ADC0
Workflow: Test Run
Test Environment
Type: List
Values: Lab Bench, Thermal Chamber, Field, EMI Chamber, Other
Workflow: Test Run
Nominal Condition
Type: Text
Example: 25°C, 0°C, 50°C, 3.3V, etc.
Workflow: Test Run
Measured Value (Primary)
Type: Decimal
Example: 24.7
Workflow: Test Run
Measurement Units
Type: List
Values: °C, %RH, g, m/s², Pa, Lux, V, A, Custom
Workflow: Test Run
Delta from Spec
Type: Decimal
Example: -0.3
Workflow: Test Run
Aggregation (optional if you take multiple samples)
Measured_Min – Decimal
Measured_Max – Decimal
Measured_Avg – Decimal
Batch / Lot ID
Type: Text
Example: LOT-2025-48
Workflow: Test Run
You can then build Test Run filters and dashboards based on any of these (e.g., show all temperature tests on Rev B boards with Firmware v1.3+).
B. Custom Properties on Test Cases (optional but useful)
These describe what kind of test it is, not the specific execution:
Sensor Requirement ID
Type: Text
Example: REQ-SENS-001
Test Category
Type: List
Values: Accuracy, Linearity, Drift, Noise, Power Consumption, Functional, Stress
Applies To Hardware Revisions
Type: Text
Example: Rev A–C
2) Example JSON: pushing a test run into SpiraTest via REST API
Let’s imagine you have a Python harness that:
Talks to the ESP32 over serial or Wi-Fi,
Measures the temperature at 25°C,
Decides pass/fail,
Then posts the results into SpiraTest.
⚠️ Note: The exact REST endpoints/field names can vary by SpiraTest version and configuration, but this shows the shape of what you’d send. You’d adjust project_id, test_case_id, custom property IDs, and URLs for your actual Spira instance.
A. Example JSON payload for a test run
This is the conceptual body your harness would send via HTTP POST (the // comments are annotations for this article, not valid JSON):

{
  "ProjectId": 5,
  "TestCaseId": 123,
  "ReleaseId": null,
  "TestSetId": 42,
  "ExecutionStatusId": 2,
  "TesterId": 7,
  "StartDate": "2025-12-03T10:15:00",
  "EndDate": "2025-12-03T10:16:30",
  "Description": "Automated ESP32 sensor accuracy test at 25°C.",
  "CustomProperties": [
    { "PropertyId": 101, "StringValue": "ESP32-BOARD-0007" },  // Device Serial Number
    { "PropertyId": 102, "IntegerValue": 1 },                  // ESP32-WROOM (list value)
    { "PropertyId": 103, "StringValue": "Rev B" },             // Hardware Revision
    { "PropertyId": 104, "StringValue": "v1.3.2" },            // Firmware Version
    { "PropertyId": 105, "IntegerValue": 1 },                  // Sensor Type = Temperature
    { "PropertyId": 106, "StringValue": "TEMP1" },             // Sensor Identifier / Channel
    { "PropertyId": 107, "IntegerValue": 2 },                  // Test Environment = Thermal Chamber
    { "PropertyId": 108, "StringValue": "25°C" },              // Nominal Condition
    { "PropertyId": 109, "DecimalValue": 24.7 },               // Measured_Value_Primary
    { "PropertyId": 110, "IntegerValue": 1 },                  // Units = °C
    { "PropertyId": 111, "DecimalValue": -0.3 }                // Delta from Spec
  ],
  "TestRunSteps": [
    {
      "TestStepId": 1,
      "ExecutionStatusId": 2,
      "ActualResult": "Measured 24.7°C vs nominal 25.0°C (Δ = -0.3°C, within ±0.5°C)."
    }
  ]
}
Where:
ExecutionStatusId is typically 1 = Failed, 2 = Passed, 3 = Not Run, 5 = Blocked, but verify the IDs against your Spira version's API documentation.
PropertyId values are the IDs of your custom properties (you configure these in Spira, then look up their IDs).
IntegerValue is used for list-type properties, StringValue for text, DecimalValue for numeric.
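Since picking the right value key per property is easy to get wrong, your harness can centralize it in a small helper (a sketch; it assumes your list-type properties take plain integers, as in the payload above):

```python
def custom_prop(property_id: int, value) -> dict:
    """Wrap a value in Spira's custom-property shape based on its Python type."""
    if isinstance(value, float):
        return {"PropertyId": property_id, "DecimalValue": value}
    if isinstance(value, int):
        return {"PropertyId": property_id, "IntegerValue": value}  # list selections
    return {"PropertyId": property_id, "StringValue": str(value)}

# custom_prop(109, 24.7) -> {"PropertyId": 109, "DecimalValue": 24.7}
# custom_prop(105, 1)    -> {"PropertyId": 105, "IntegerValue": 1}
```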
B. Example Python snippet (requests-based harness)
Here’s a minimal Python example of posting that JSON to SpiraTest:
import requests
from datetime import datetime

SPIRA_URL = "https://yourcompany.spiraservice.net"
PROJECT_ID = 5
TEST_CASE_ID = 123
TEST_SET_ID = 42
USERNAME = "your.spira.username"
API_KEY = "your-spira-api-key"

def post_esp32_temperature_run(device_serial, firmware, measured_temp, nominal=25.0):
    delta = measured_temp - nominal
    status_id = 2 if abs(delta) <= 0.5 else 1  # 2 = Passed, 1 = Failed (verify for your version)
    now = datetime.utcnow().isoformat(timespec="seconds")

    payload = {
        "ProjectId": PROJECT_ID,
        "TestCaseId": TEST_CASE_ID,
        "TestSetId": TEST_SET_ID,
        "ExecutionStatusId": status_id,
        "TesterId": None,  # Or a specific user ID if you want
        "StartDate": now,
        "EndDate": now,
        "Description": f"Automated ESP32 temperature test at {nominal}°C.",
        "CustomProperties": [
            {"PropertyId": 101, "StringValue": device_serial},   # Device Serial
            {"PropertyId": 104, "StringValue": firmware},        # Firmware Version
            {"PropertyId": 108, "StringValue": f"{nominal}°C"},  # Nominal Condition
            {"PropertyId": 109, "DecimalValue": measured_temp},  # Measured_Value_Primary
            {"PropertyId": 111, "DecimalValue": delta},          # Delta from Spec
        ],
        "TestRunSteps": [
            {
                "TestStepId": 1,
                "ExecutionStatusId": status_id,
                "ActualResult": (
                    f"Measured {measured_temp:.2f}°C vs nominal {nominal:.2f}°C "
                    f"(Δ = {delta:.2f}°C)."
                ),
            }
        ],
    }

    url = f"{SPIRA_URL}/Services/v6_0/RestService.svc/projects/{PROJECT_ID}/test-runs"
    # Spira instances differ in how they expect credentials (basic auth vs.
    # username / api-key headers or query parameters); adjust to match yours.
    response = requests.post(url, json=payload, auth=(USERNAME, API_KEY))
    response.raise_for_status()
    return response.json()

# Example usage
if __name__ == "__main__":
    result = post_esp32_temperature_run(
        device_serial="ESP32-BOARD-0007",
        firmware="v1.3.2",
        measured_temp=24.7,
    )
    print("Created test run:", result)
You’d plug this into your existing test harness that reads from the ESP32 (serial/Wi-Fi/etc.), then call post_esp32_temperature_run() with the real measured values.