Rapise comes with powerful and flexible reporting capabilities that let you quickly and easily see the results of your testing activities. You can even customize the data being reported by adding instructions to your test script.
Each time you play back a test, Rapise automatically generates a report detailing the steps of the test, the data values used, and the outcome of each step:
The first row (with a white background) is used for Report Filtering. The rows below that each represent a step in the test. The rows with green text represent success; the rows with red text represent failure. You can reposition the columns by dragging and dropping the column names.
In addition to the standard report data, you can write to individual columns, create columns, and add data to the report by adding commands to your test script:
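As a minimal sketch of what such script commands look like: Rapise test scripts are JavaScript, and its scripting API exposes a global Tester object with reporting calls such as Tester.SetReportAttribute and Tester.Message. Treat the exact call names and the column/message values below as illustrative assumptions; the stub only exists so the snippet can run outside Rapise.

```javascript
// Sketch of custom reporting calls inside a Rapise test script.
// Inside Rapise, Tester is a built-in global; this stub stands in
// for it so the snippet is runnable outside Rapise for illustration.
var Tester = (typeof Tester !== "undefined") ? Tester : {
    attributes: {},
    messages: [],
    SetReportAttribute: function (name, value) { this.attributes[name] = value; },
    Message: function (text) { this.messages.push(text); }
};

// Write a value into a custom report column (hypothetical column name).
Tester.SetReportAttribute("Environment", "Windows 11 / Chrome");

// Add a free-form informational row to the report.
Tester.Message("Logged in as test user before running checkout steps");
```

Values set this way appear as extra columns and rows alongside the standard report data for the step being executed.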
Report Filtering lets you specify criteria to filter your view of the test execution report. Rows that do not match your criteria are hidden.
You can filter the report view while the report is open. Directly above the first row of the report is a row of filter cells. Each cell has a matching-criteria button, a text box for specifying a filter value, a drop-down menu of predefined filter values, and a clear button.
Rapise reports are natively stored as XML with an open schema, so they can be programmatically parsed by other tools and systems. In addition, Rapise can report in the lightweight plain-text Perl TAP (Test Anything Protocol) format, which is understood by many automation systems and build servers.
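To show what consuming that TAP output might look like in another tool, here is a minimal parser sketch. The line format (`ok N - description` / `not ok N - description`) follows the TAP specification; the sample lines are made up for illustration and are not actual Rapise output.

```javascript
// Minimal sketch of consuming TAP-format results in another tool.
// Parses "ok"/"not ok" lines into structured pass/fail records.
function parseTap(text) {
    var results = [];
    text.split("\n").forEach(function (line) {
        var m = line.match(/^(not ok|ok)\s+(\d+)(?:\s*-\s*(.*))?$/);
        if (m) {
            results.push({
                passed: m[1] === "ok",
                number: Number(m[2]),
                description: m[3] || ""
            });
        }
    });
    return results;
}

// Illustrative sample, not real Rapise output.
var sample = [
    "1..3",
    "ok 1 - Login succeeds",
    "not ok 2 - Checkout total matches",
    "ok 3 - Logout succeeds"
].join("\n");

var results = parseTap(sample);
// results[1].passed is false for the failing step
```

A build server would apply the same kind of parsing to decide whether a test run should mark the build as failed.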
For management reporting, Rapise lets you quickly and easily export the formatted test reports to either Microsoft Excel or Adobe Acrobat PDF.
Being able to report on individual test cases is just the start. When you use Rapise in conjunction with our SpiraTest test management system, you can feed results from Rapise into your enterprise reporting of test metrics.
When you use Rapise and SpiraTest together, you can view the results of individual test executions, track testing metrics per release and per iteration/sprint, and analyze results across different platforms and technology combinations to reveal trends and patterns of failure.