Configurations: doubts about effectiveness

Friday, October 28, 2022

Hello.

I have some doubts about the benefits of Test Configurations as they are currently managed in Spira.

Let's use a simple example: I want to test the home page of a web application, returned after login. The page differs based on the user profile, and I have 10 profiles.

I set up the Test Configurations based on 3 parameters: username, password, and the name of the returned home page (the profile is implied by the username/password pair). Out of all the possible combinations of values, I selected only the 10 meaningful ones.
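For instance (the values below are purely illustrative), the resulting configuration set has one row per profile:

    username      password    homepage
    ----------    --------    ------------------
    admin01       ******      home_admin.aspx
    operator01    ******      home_operator.aspx
    auditor01     ******      home_auditor.aspx
    ...           ...         (7 more rows, one per remaining profile)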

At the same time, I wrote a Test Case that uses the same 3 parameters in the steps that perform the login and check the returned home page.
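Concretely, the steps reference the parameters with Spira's ${...} token syntax (the parameter names here are just the ones from my example):

    Step 1 - Action: log in with username ${username} and password ${password}
             Expected Result: login succeeds
    Step 2 - Action: inspect the page returned after login
             Expected Result: the page ${homepage} is displayed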

Then I linked the Test Case to a Test Set containing only this Test Case (to simplify the context) and ran it. All 10 instances were correctly executed with the proper parameter values.

Where is the problem?

1) Suppose that in my run all the instances pass except the last one, which fails: the execution statuses of the Test Set and of the contained Test Case are both Failed. Suppose instead that all the instances fail except the last one, which passes: the statuses of the Test Set and of the contained Test Case are both Passed. This does not make sense: in practice all the instances are treated like multiple runs of the same test, so only the last execution status is reflected on the Test Case, and the result hides the truth.

2) Because of the previous point, it is not possible to distinguish the different instances: in the Test Runs tab, all the runs are reported with the same name. Some are Passed, some are Failed, but there is no way to tell which configuration each one refers to unless I open every single Test Run's details: that is the only way to read the parameter values.

3) All of the above impacts the reporting capabilities: in fact I ran 10 different Test Cases, but Spira treats them as a single Test Case, so the built-in reports show an untrustworthy execution status and a lower number of executed Test Cases. If, instead, I base my reports on Test Runs, I have the problem of isolating the last set of executions from the previous ones. Suppose I run the Test Set twice: how can I recognize the Test Runs belonging to the second execution and ignore those belonging to the first? To me it seems impossible, at least at the form level (a query in a custom report might manage it; see my sketch below, which I have not verified).
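For completeness, this is the kind of custom report query I have in mind. It is only a sketch: I am assuming the standard reportable entity SpiraTestEntities.R_TestRuns and its TEST_SET_ID, END_DATE and EXECUTION_STATUS_NAME columns (check the entity documentation of your own instance), plus a hypothetical test set with ID 1. It would at least list the runs of that test set newest-first, so the latest execution comes out on top:

    select value R from SpiraTestEntities.R_TestRuns as R
    where R.PROJECT_ID = ${ProjectId} and R.TEST_SET_ID = 1
    order by R.END_DATE desc

Even so, nothing in the result separates the second execution of the set from the first other than the timestamps, which is exactly the problem.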

To come to an end: my opinion is that this feature would be very useful if managed in a different way (like other tools do): every configuration should be treated as a standalone artefact (or similar), with its own name, execution status and runs, and the execution status of the Test Set and of the related Test Case should be the combined result of all the linked Test Configurations.

 

Thanks,

Daniele

3 Replies
Saturday, October 29, 2022
re: dterragni Friday, October 28, 2022

Hi Daniele

Thanks for the very insightful feedback on the Test Configurations feature.

You can always create multiple Test Sets with different names and parameter values instead of using Test Configurations, though that is a more manual process.

I have logged an enhancement to change the name of the recorded Test Run so that correlating the results is easier. Another option would be to auto-generate a new Test Set for each run; I have logged both as options.

Regards

David

Wednesday, November 2, 2022
re: inflectra.david Saturday, October 29, 2022

Thank you David.

At present I'm using a different workaround. Based on the example I described, I designed a 'template' test case containing the parameters. This test case will never be linked to a test set: it will remain in a folder dedicated to 'template test cases'.

Then I designed 10 test cases, each made of a single step of type 'link' pointing to my template test case. While creating each link, I specified the desired combination of parameter values. This way I obtained 10 distinct test cases (different names, different sample data).
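Schematically (all names and values here are just examples), the structure is:

    Template Test Cases (folder)
        TC-TEMPLATE 'Login and verify home page'  -- parameters: username, password, homepage

    Home Page Test Cases (folder)
        TC-01 'Home page - Admin profile'     -> link step to TC-TEMPLATE (admin01, ******, home_admin.aspx)
        TC-02 'Home page - Operator profile'  -> link step to TC-TEMPLATE (operator01, ******, home_operator.aspx)
        ...
        TC-10 'Home page - Guest profile'     -> link step to TC-TEMPLATE (guest01, ******, home_guest.aspx)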

Then I linked my 10 test cases to one test set and executed it. As a result, I can count the effective test cases and distinguish them by name and sample data. Each one also keeps its own execution status, and the test set's execution status is correct as well.

Best Regards,

Daniele

Saturday, November 5, 2022
re: dterragni Wednesday, November 2, 2022

Hi Daniele

Yes, that is another approach we have suggested to customers; thanks for writing it up here.

Regards
David


Statistics
  • Started: Friday, October 28, 2022
  • Last Reply: Saturday, November 5, 2022
  • Replies: 3
  • Views: 474