Wish you could manage your automated testing in the same environment as your manual testing? With RemoteLaunch for SpiraTest you can set up automated test cases in SpiraTest and schedule them against a master list of automation hosts for execution on either the local computer or remote machines. Available plug-ins include UFT, TestComplete, Ranorex, NeoLoad, and Selenium.
Rapise is our powerful test automation tool, designed to work seamlessly with SpiraTest. Rapise can test web, mobile, and desktop applications as well as APIs. You can store, manage, and version your Rapise test cases within SpiraTest and use RapiseLauncher to execute the tests across a globally distributed test lab.
Automated test scripts are a valuable way to perform regression testing on applications to ensure that new features or bug fixes don’t break existing functionality. You can use RemoteLaunch with SpiraTest to manage the automated testing process using different tools from different vendors, all reporting back into your central SpiraTest instance:
You can store and version your automated test scripts inside SpiraTest. SpiraTest supports a wide variety of test automation engines (both commercial and open-source) including Rapise, UFT, Ranorex, TestComplete and Selenium.
The automated test scripts managed in SpiraTest can either be executed on the local machine or scheduled for execution on a series of remote hosts. Using either Rapise or RemoteLaunch, you can manage an entire global test lab from a central SpiraTest server, with test sets executed 24/7 using a variety of automation technologies.
You can organize the test cases into test sets, which are assigned to specific automation hosts for execution. You can either assign a unique host name to each computer or share a host name across several machines, in which case SpiraTest will simply use the first available machine.
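To make the host-assignment behavior concrete, here is a minimal sketch of the dispatch logic described above: pinned test sets go to their named host, while unpinned ones go to the first available machine. This is an illustrative model only, not SpiraTest's actual scheduler, and the field names (`name`, `host`) are assumptions for the example.

```python
from collections import deque


def dispatch(test_sets, hosts):
    """Assign each test set to an automation host.

    Test sets pinned to a specific host name run on that host; the rest
    take the first available host in order. Illustrative only -- this
    mirrors the behavior described in the text, not a real SpiraTest API.
    """
    available = deque(hosts)            # hosts that are idle and ready
    assignments = {}
    for ts in test_sets:
        pinned = ts.get("host")         # explicit host name, if any
        if pinned:
            assignments[ts["name"]] = pinned
        elif available:
            assignments[ts["name"]] = available.popleft()
    return assignments


plan = dispatch(
    [
        {"name": "Smoke"},
        {"name": "Regression", "host": "WIN-QA-01"},
        {"name": "Perf"},
    ],
    ["LAB-01", "LAB-02"],
)
# → {'Smoke': 'LAB-01', 'Regression': 'WIN-QA-01', 'Perf': 'LAB-02'}
```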
With SpiraTest managing your automated testing you can see at a glance the execution status of each test set in one consolidated view:
When the automated tests fail, you can drill down to the individual test run to get the complete record of what passed/failed including any screenshots or log files captured:
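The drill-down described above can be modeled as filtering a test-run record for its failed steps and their captured attachments. The record shape below (`steps`, `status`, `attachments`) is a simplified illustration, not SpiraTest's actual API schema.

```python
def failed_steps(test_run):
    """Return the failed steps of a test-run record with their attachments.

    Hypothetical record layout for illustration -- real SpiraTest data
    would come from its reporting views or API.
    """
    return [
        {"step": s["name"], "attachments": s.get("attachments", [])}
        for s in test_run["steps"]
        if s["status"] == "Failed"
    ]


run = {
    "test_set": "Nightly Regression",
    "steps": [
        {"name": "Login", "status": "Passed"},
        {"name": "Checkout", "status": "Failed",
         "attachments": ["checkout_error.png", "driver.log"]},
    ],
}
# failed_steps(run) → one entry: the Checkout step with its screenshot and log
```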