August 12th, 2016 by inflectra
Sometimes we get this question from prospective and existing customers: how do I use SpiraTest when I don't have time to create test cases? How do I best use my experienced testers, who don't want to be treated like automatons? This is a great question, and one that we struggled with internally!
When planning how you will test a system or piece of software, there is a danger of getting trapped in a single, myopic approach. For example, if your code is written to be easily unit tested, you may run only unit tests and think you are done. If you have a great keyword-driven functional testing framework that tests every screen in your application, there is a similar tendency to think, "we're testing every screen, there can't be any bugs now?!"
Another problem we often run into: we have a new system under development with some requirements and user stories but no formal test cases, or we are upgrading an existing system that has no test cases already written. How on earth do we go about testing it, and how do we know what we've tested and what is left to test?
We ran into both of these issues during the development and testing of SpiraTeam 5.0. We had a good set of automated unit tests (NUnit, in case you're wondering what we use) that run every day using Jenkins and report back into our own internal SpiraTeam instance. We also have API tests written using Rapise (for REST) and C# (for SOAP), and automated UI functional tests developed using Rapise.
However, our testers are very experienced with SpiraTeam (having worked here for the past 8-10 years) and don't need to write detailed test cases with step-by-step instructions. They know the system very well, and writing prescriptive test cases would cost them more time than the value we'd get back. They find more issues by following their intuition and using their intelligence, experience, and understanding to dig into the issues and cases that our automated tests miss.
So the question we posed to our team was: how do we get the most value out of our human testers, maximizing the time spent testing and finding issues and minimizing the time spent documenting things they already know and that our automated tests are already catching and verifying? Furthermore, how do we do that and still know what is left to test and which requirements are covered and not covered?
Well, the solution turned out to be session-based testing. If you are not sure what this is, take a look at http://www.satisfice.com/sbtm/, which provides a great overview.
In our next blog entry we'll discuss our recommended approach for using SpiraTest to do session-based testing, with some real examples from our recent SpiraTeam 5 release...