<rss version="2.0" xmlns:a10="http://www.w3.org/2005/Atom"><channel><title>Inflectra Customer Forums: Configurations: doubts about effectiveness (Thread)</title><description> Hello.  I have some doubts about the benefits of CONFIGURATIONS as they are currently managed in Spira.  Let's use a simple example: I want to test the home page of a web application that is returned after login. The page is different based on the user profile, and I have 10 profiles.  I set up the CONFIGURATIONS based on 3 parameters: username, password, and the name of the returned home page (the profile is implied by the username/password). From all possible data combinations, I selected only the 10 meaningful ones.  At the same time, I wrote a Test Case that contains the same 3 parameters in the steps representing the login action and the check of the returned home page.  Then I linked the Test Case to a Test Set that contains only this Test Case (to simplify the context) and ran it. All 10 instances were correctly run with the proper parameter values.   Where is the problem?  1) Let's suppose that in my run all the instances pass except the last one (failed): the execution statuses of the Test Set and the contained Test Case are both Failed. Let's suppose, instead, that all the test instances fail except the last one (passed): the statuses of the Test Set and the contained Test Case are Passed. This does not make sense: in practice, all the instances are treated like multiple runs of the same test, hence only the last execution status is reflected in the Test Case, and the result hides the truth.  2) Because of the previous point, it is not possible to distinguish the different instances: in the Test Runs tab, all the runs are reported with the same name. 
Some are Passed, some are Failed, but it is not possible to understand which case they refer to unless I open each single Test Run's details: that's the only way to read the parameter values.  3) The above impacts the reporting capabilities: in fact, I ran 10 Test Cases that are different, but Spira treats them as a single Test Case, so the built-in reports show me an untrustworthy execution status and a lower number of executed Test Cases. If, instead, I base my reports on Test Runs, I have the problem of isolating the last set of executions from the previous ones. I mean, let's suppose I run the Test Set twice: how can I recognize and isolate the Test Runs related to the last run and ignore the Test Runs related to the first run? To me it seems to be impossible, at least at form level (I don't know about using a query in a custom report).  In conclusion, my opinion is that this feature would be very useful if managed in a different way (as other tools do): every configuration should be treated as a standalone artefact (or similar), with its own name, execution status, and runs; and the execution status of the Test Set and the related Test Case should be the result of all the linked Test Configurations.     
Thanks,  Daniele </description><language>en-US</language><copyright>(C) Copyright 2006-2026 Inflectra Corporation.</copyright><managingEditor>support@inflectra.com</managingEditor><category domain="http://www.dmoz.org">/Computers/Software/Project_Management/</category><category domain="http://www.dmoz.org">/Computers/Software/Quality_Assurance/</category><generator>KronoDesk</generator><a10:contributor><a10:email>support@inflectra.com</a10:email></a10:contributor><a10:id>http://www.inflectra.com/kronodesk/forums/threads</a10:id><ttl>120</ttl><link>/Support/Forum/spirateam/issues-questions/2702.aspx</link><item><guid isPermaLink="false">threadId=2702</guid><author>Daniele Terragni (d.terragni@quence.it)</author><category domain="http://www.inflectra.com/kronodesk/thread/tag">configurations</category><title>Configurations: doubts about effectiveness</title><description> Hello.  I have some doubts about the benefits of CONFIGURATIONS as they are currently managed in Spira.  Let's use a simple example: I want to test the home page of a web application that is returned after login. The page is different based on the user profile, and I have 10 profiles.  I set up the CONFIGURATIONS based on 3 parameters: username, password, and the name of the returned home page (the profile is implied by the username/password). From all possible data combinations, I selected only the 10 meaningful ones.  At the same time, I wrote a Test Case that contains the same 3 parameters in the steps representing the login action and the check of the returned home page.  Then I linked the Test Case to a Test Set that contains only this Test Case (to simplify the context) and ran it. All 10 instances were correctly run with the proper parameter values.   Where is the problem?  1) Let's suppose that in my run all the instances pass except the last one (failed): the execution statuses of the Test Set and the contained Test Case are both Failed. 
Let's suppose, instead, that all the test instances fail except the last one (passed): the statuses of the Test Set and the contained Test Case are Passed. This does not make sense: in practice, all the instances are treated like multiple runs of the same test, hence only the last execution status is reflected in the Test Case, and the result hides the truth.  2) Because of the previous point, it is not possible to distinguish the different instances: in the Test Runs tab, all the runs are reported with the same name. Some are Passed, some are Failed, but it is not possible to understand which case they refer to unless I open each single Test Run's details: that's the only way to read the parameter values.  3) The above impacts the reporting capabilities: in fact, I ran 10 Test Cases that are different, but Spira treats them as a single Test Case, so the built-in reports show me an untrustworthy execution status and a lower number of executed Test Cases. If, instead, I base my reports on Test Runs, I have the problem of isolating the last set of executions from the previous ones. I mean, let's suppose I run the Test Set twice: how can I recognize and isolate the Test Runs related to the last run and ignore the Test Runs related to the first run? To me it seems to be impossible, at least at form level (I don't know about using a query in a custom report).  In conclusion, my opinion is that this feature would be very useful if managed in a different way (as other tools do): every configuration should be treated as a standalone artefact (or similar), with its own name, execution status, and runs; and the execution status of the Test Set and the related Test Case should be the result of all the linked Test Configurations.     
Thanks,  Daniele </description><pubDate>Fri, 28 Oct 2022 09:03:49 -0400</pubDate><a10:updated>2025-02-28T14:49:19-05:00</a10:updated><link>/Support/Forum/spirateam/issues-questions/2702.aspx</link></item><item><guid isPermaLink="false">messageId=5789</guid><author>Daniele Terragni (d.terragni@quence.it)</author><title> Thank you David.  At present I'm using a different workaround: based on the example I described, I d</title><description> Thank you David.  At present I'm using a different workaround: based on the example I described, I designed a template test case containing parameters. This test case will never be linked to a test set: it will remain in a folder dedicated to template test cases.  Then, I designed 10 test cases, each made of one step of type Link: the link points to my template test case. While linking, I specified the desired combination of parameter values. This way I obtained 10 distinct test cases (different names, different sample data).  Then I linked my 10 test cases to one test set and executed it. The result is that I can count the effective test cases and distinguish them by name and sample data. I can also keep their execution statuses distinct and have a correct test set execution status as well.  Best Regards,  Daniele </description><pubDate>Wed, 02 Nov 2022 15:50:00 -0400</pubDate><a10:updated>2022-11-02T15:50:00-04:00</a10:updated><link>/Support/Forum/spirateam/issues-questions/2702.aspx#reply5789</link></item><item><guid isPermaLink="false">messageId=5796</guid><author>David J (adam.sandman+support@inflectra.com)</author><title> Hi Daniele  Yes that is another approach we have suggested to customers, thanks for writing it here</title><description> Hi Daniele  Yes, that is another approach we have suggested to customers; thanks for writing it up here.  
Regards  David </description><pubDate>Sat, 05 Nov 2022 16:01:05 -0400</pubDate><a10:updated>2022-11-05T16:01:05-04:00</a10:updated><link>/Support/Forum/spirateam/issues-questions/2702.aspx#reply5796</link></item><item><guid isPermaLink="false">messageId=7077</guid><author>David J (adam.sandman+support@inflectra.com)</author><title> Hi Daniele (and for others reading this post)  We have just released a   new SpiraApp called TestCo</title><description> Hi Daniele (and for others reading this post)  We have just released a new SpiraApp called TestConfigurations+ that solves this problem.  Thanks again for the product feedback.  Regards  David </description><pubDate>Sun, 05 Jan 2025 17:35:50 -0500</pubDate><a10:updated>2025-01-05T17:35:50-05:00</a10:updated><link>/Support/Forum/spirateam/issues-questions/2702.aspx#reply7077</link></item><item><guid isPermaLink="false">messageId=5786</guid><author>David J (adam.sandman+support@inflectra.com)</author><title> Hi Daniele  Thanks for the very insightful feedback on the Test Configurations feature.  You can al</title><description> Hi Daniele  Thanks for the very insightful feedback on the Test Configurations feature.  You can always just create multiple Test Sets with different names and parameter values (instead of using Test Configurations), though that is a more manual process.  I have logged an enhancement to change the name of the recorded Test Run to make correlating results easier. Another option would be to auto-generate a new Test Set for each run. I have logged both of those as options.   [Update] We have released a new SpiraApp called Test Configurations+ that implements this enhancement.   Regards  David </description><pubDate>Sat, 29 Oct 2022 16:09:11 -0400</pubDate><a10:updated>2025-01-05T17:36:58-05:00</a10:updated><link>/Support/Forum/spirateam/issues-questions/2702.aspx#reply5786</link></item></channel></rss>