The DevOps Research and Assessment (DORA) metrics revolutionized how the software industry measures organizational performance and delivery capabilities. In this article we show you how to create a custom graph in Spira that displays the standard DORA metric: Mean Time to Recover.
Mean Time to Recover (MTTR) measures how quickly you restore normal service after a production incident, defined as the elapsed time from incident start or detection (e.g., alert fired, SLO breach began) to service restoration (impact ended/SLO back in compliance).
Compute it per service over a recent window as a distribution (median/P90 plus counts), using incident-management timestamps or monitoring data; include incidents tied to deployments as well as other causes unless you explicitly scope to change-related failures. MTTR highlights the effectiveness of detection, rollback/roll-forward, and on-call practices—short MTTR paired with a low Change Failure Rate indicates strong resilience and recovery discipline.
The DevOps Research and Assessment (DORA) metrics revolutionized how the software industry measures organizational performance and delivery capabilities. In this article we show you how to create a custom graph in Spira that displays the standard DORA metric: Change Failure Rate.
Change Failure Rate is the percentage of production deployments that cause a service degradation and require remediation—such as a rollback, hotfix, or incident—within a defined window. Compute it as: failures ÷ total successful production deployments (e.g., in the last 30 or 90 days), where “failure” is operationally defined up front (sev-1/2 incidents, rollbacks, emergency patches, feature flags forced off, etc.).
Measure and report it per service/team to avoid averaging away hotspots, and show both the rate and the underlying counts. Track alongside Mean Time to Recover (MTTR): a low CFR with fast restore times indicates healthy quality and recovery; a high CFR suggests issues in testing, change size, approvals, or release practices.
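The formula above (failures ÷ total production deployments, reported per service with the underlying counts) can be sketched as follows. The deployment log and field names are illustrative assumptions, not a Spira schema; what counts as "failed" should be defined up front as the article notes.

```python
# Hypothetical deployment log; "failed" means the change required
# remediation (rollback, hotfix, incident) per your agreed definition.
deployments = [
    {"service": "checkout", "failed": False},
    {"service": "checkout", "failed": True},   # required a rollback
    {"service": "checkout", "failed": False},
    {"service": "checkout", "failed": False},
    {"service": "search",   "failed": False},
]

def change_failure_rate(deployments, service):
    # CFR = failed changes / total production deployments, as a percentage.
    # Return the counts too, so hotspots are not averaged away.
    deploys = [d for d in deployments if d["service"] == service]
    failures = sum(1 for d in deploys if d["failed"])
    total = len(deploys)
    rate = 100.0 * failures / total if total else 0.0
    return {"failures": failures, "total": total, "cfr_pct": rate}

print(change_failure_rate(deployments, "checkout"))
# {'failures': 1, 'total': 4, 'cfr_pct': 25.0}
```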
The DevOps Research and Assessment (DORA) metrics revolutionized how the software industry measures organizational performance and delivery capabilities. In this article we show you how to create a custom graph in Spira that displays the standard DORA metric: Deployment Frequency.
Deployment Frequency is how often your organization successfully deploys code to production, typically counted as the number of production releases per service (or product) over a standard interval (e.g., per day or per week). It reflects delivery cadence and should be normalized by system/team to avoid masking variation; only successful production deployments are counted, while rollbacks are excluded or tracked separately.
Report it as a time series (e.g., weekly counts and moving averages) and pair with Lead Time for Changes: elite teams ship many small releases frequently (often daily or more), while lower frequencies can signal batching, manual gates, or pipeline friction.
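The weekly time series described above can be sketched as a simple bucketing of successful production deployments by ISO week, per service. The deployment list is a made-up example; only successful deployments appear in it, matching the counting rule above.

```python
from collections import Counter
from datetime import date

# Hypothetical successful production deployments: (service, deploy_date).
deploys = [
    ("checkout", date(2024, 5, 6)),
    ("checkout", date(2024, 5, 7)),
    ("checkout", date(2024, 5, 14)),
    ("search",   date(2024, 5, 8)),
]

def weekly_deploy_counts(deploys, service):
    # Bucket by (ISO year, ISO week) so the series is normalized
    # per service and comparable across year boundaries.
    weeks = Counter(
        d.isocalendar()[:2] for svc, d in deploys if svc == service
    )
    return dict(sorted(weeks.items()))

print(weekly_deploy_counts(deploys, "checkout"))
# {(2024, 19): 2, (2024, 20): 1}
```

Plotting these counts (and a moving average over them) as a custom graph gives the cadence view the article describes.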
The DevOps Research and Assessment (DORA) metrics revolutionized how the software industry measures organizational performance and delivery capabilities. In this article we show you how to create a custom graph in Spira that displays the standard DORA metric: Lead Time to Change.
The Lead Time to Change measures how long it takes a code change to reach users, defined as the elapsed time from when a change is integrated (typically a PR is merged to the main branch) to when a successful production deployment that includes that change finishes. It captures the speed of your delivery pipeline.
Shorter lead times generally indicate smoother, more automated paths to production and tighter feedback loops, especially when paired with healthy deployment frequency; longer times can reveal bottlenecks in reviews, builds, approvals, or release practices.
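The definition above (time from merge to the finish of the production deployment containing the change) can be sketched as below. The change records are hypothetical; in a real pipeline the merge and deployment timestamps would come from your source-control and release tooling.

```python
from datetime import datetime
from statistics import median

# Hypothetical change records: (merged_at, deployed_at).
changes = [
    (datetime(2024, 5, 1, 10, 0), datetime(2024, 5, 1, 14, 0)),  # 4 h
    (datetime(2024, 5, 2, 9, 0),  datetime(2024, 5, 3, 9, 0)),   # 24 h
    (datetime(2024, 5, 3, 16, 0), datetime(2024, 5, 3, 18, 0)),  # 2 h
]

def median_lead_time_hours(changes):
    # Lead time = elapsed time from merge to the completion of the
    # successful production deployment that includes the change.
    hours = [
        (deployed - merged).total_seconds() / 3600
        for merged, deployed in changes
    ]
    return median(hours)

print(median_lead_time_hours(changes))  # 4.0
```

Using the median rather than the mean keeps one slow release train from dominating the metric.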
The Cost of Quality (CoQ) is about both product quality and process quality. As part of these considerations, quality assurance can focus on additional patterns. In this article, we will discuss how to graphically show the regression test efficiency.
The Cost of Quality (CoQ) is about both product quality and process quality. As part of these considerations, quality assurance can focus on additional patterns. In this article, we will discuss how to graphically show the defect resolution time metric.
The Cost of Quality (CoQ) is about both product quality and process quality. As part of these considerations, quality assurance can focus on additional patterns. In this article, we will discuss how to graphically show the software stability metric.
The Cost of Quality (CoQ) is about both product quality and process quality. As part of these considerations, quality assurance can focus on additional patterns. In this article, we will discuss how to graphically show the defect detection rate.
The Cost of Quality (CoQ) is about both product quality and process quality. As part of these considerations, quality assurance can focus on additional patterns. In this article, we will discuss how to graphically show the test case coverage metric.
The Cost of Quality (CoQ) is about both product quality and process quality. As part of these considerations, quality assurance can focus on additional patterns. In this article, we will discuss how to graphically show the summary of test case reusability.
The Cost of Quality (CoQ) is about both product quality and process quality. As part of these considerations, quality assurance can focus on additional patterns. In this article, we will discuss how to graphically show the summary information on these metrics.
A customer wanted to get a report of the average test execution duration per test case in the system. Now the test case automatically gets updated with the most recent test execution duration when you run the test case. However, instead of the duration of the most recent run, we want the average of all the runs of the test case. That's where a custom report comes in handy.
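The aggregation the custom report performs can be sketched as below: group all test runs by test case and average the durations, rather than taking only the most recent run. The run records and IDs are illustrative only; in Spira the equivalent data would come from the test run table via the custom report's query.

```python
from collections import defaultdict

# Hypothetical test-run history: (test_case_id, duration_seconds).
runs = [
    ("TC-1", 30), ("TC-1", 50), ("TC-1", 40),
    ("TC-2", 120),
]

def average_durations(runs):
    # Accumulate per-test-case sum and count, then divide.
    totals = defaultdict(lambda: [0, 0])  # test case -> [sum, count]
    for tc, duration in runs:
        totals[tc][0] += duration
        totals[tc][1] += 1
    return {tc: s / n for tc, (s, n) in totals.items()}

print(average_durations(runs))  # {'TC-1': 40.0, 'TC-2': 120.0}
```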
A customer wanted to get some specific requirements' test coverage reports covering the following two key metrics:
You could run the out-of-the-box test case detailed report and manually filter the data in Excel, but with the power of Spira custom reporting you can get exactly what you want in a single document.
A customer asked us if we could provide a report of all the test cases and test results across all projects and programs. In the future we plan on having built-in screens for quality managers to see test results and test metrics across all projects without needing to run a report. However, this report will give you the information in the meantime.