Overview
The following diagram outlines the system architecture typically employed when you use the Cloud / SaaS version of Spira (hosted by Inflectra in AWS) together with an on-premise installation of our Rapise test automation platform and artificial intelligence Large Language Models (LLMs) hosted inside a customer's Microsoft Azure account:

The diagram outlines two integrations: Spira with Azure OpenAI, and Rapise with Azure OpenAI. We shall discuss each of these in turn. Before we do, however, we should note that all communication between the different services (Spira, Rapise, OpenAI) is encrypted and secured using the HTTPS / TLS protocol.
In addition, when you use Azure OpenAI as your LLM, you have a choice of the various OpenAI models, including (but not limited to):
- GPT-3.5 Turbo
- GPT-4o
- GPT-4o mini
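One practical detail worth noting: Azure OpenAI exposes models through named deployments that you create in your own Azure resource, so an integration refers to your deployment name rather than the raw model identifier. A minimal sketch of that mapping (all deployment names below are hypothetical examples, not values from this document):

```python
# Hypothetical mapping from an OpenAI model to the deployment name you
# created in your own Azure OpenAI resource. Azure OpenAI addresses
# models via these deployment names, not the raw model identifiers.
AZURE_DEPLOYMENTS = {
    "gpt-35-turbo": "spira-gpt35-deployment",      # hypothetical name
    "gpt-4o": "spira-gpt4o-deployment",            # hypothetical name
    "gpt-4o-mini": "spira-gpt4o-mini-deployment",  # hypothetical name
}

def deployment_for(model: str) -> str:
    """Resolve the Azure deployment name configured for a given model."""
    return AZURE_DEPLOYMENTS[model]
```

Whichever model you choose, the integration settings would point at the corresponding deployment in your Azure subscription.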
Spira Integration with Azure OpenAI
As an alternative to using the built-in Inflectra.ai functionality, you can install and enable the Azure OpenAI SpiraApp in your instance of Spira. The detailed instructions can be found in our documentation.
Once this is enabled, users can ask the SpiraApp to read the current requirement, test case, or other open Spira artifact and, using the connected OpenAI LLM, generate downstream artifacts such as test cases, test steps, risks, tasks, and mitigations. Although the SpiraApp itself runs locally in the browser that is accessing Spira, the communication is made directly from the Spira web server back-end to the OpenAI model, using an HTTPS request initiated by Spira and handled by the Azure OpenAI REST API. Authentication is performed using an Azure API key or equivalent personal access token.
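To make the shape of that HTTPS request concrete, here is a hedged sketch of how such a call to the Azure OpenAI chat-completions REST endpoint could be constructed (the resource name, deployment name, API version, and prompt text are all hypothetical placeholders; the request is built but deliberately not sent):

```python
import json

def build_azure_openai_request(resource: str, deployment: str, api_key: str,
                               system_prompt: str, user_prompt: str):
    """Construct (but do not send) an Azure OpenAI chat-completions request.

    The URL targets a named deployment inside the customer's Azure
    resource; the api-version query parameter varies by setup.
    """
    url = (f"https://{resource}.openai.azure.com/openai/deployments/"
           f"{deployment}/chat/completions?api-version=2024-02-01")
    headers = {
        "Content-Type": "application/json",
        "api-key": api_key,  # Azure OpenAI API key authentication
    }
    body = json.dumps({
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
    })
    return url, headers, body
```

Because the endpoint lives inside the customer's own Azure resource, the request never leaves the Azure trust boundary other than over TLS.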
The only data that is sent from Spira to OpenAI is the name and description of the current artifact, and it is used purely for prompt creation. None of the data in Spira is used for training the model, and it is completely transient: no data is persisted in the LLM during inference.
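As an illustration of that data minimization, a prompt built this way contains nothing beyond the artifact's name and description (the prompt wording below is a hypothetical example, not the actual SpiraApp prompt):

```python
def build_generation_prompt(artifact_type: str, name: str, description: str) -> str:
    """Assemble a user prompt; only the artifact name and description
    are included -- no other Spira project data crosses the boundary."""
    return (
        f"You are given a Spira {artifact_type}.\n"
        f"Name: {name}\n"
        f"Description: {description}\n"
        f"Generate a set of test cases that verify this {artifact_type}."
    )
```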
Rapise Integration with Azure OpenAI
Rapise supports connecting to multiple different AI LLM platforms, including Amazon Bedrock, OpenAI, Google Gemini, and Azure OpenAI. The detailed instructions can be found in our documentation.
Rapise is a desktop application that is typically installed in your environment (though it can also run on an Azure Virtual Machine (VM)) and includes its own lightweight AI server and embedded vector database. All of this lives securely inside your environment. The only data that is sent from Rapise to OpenAI is the generated system and user prompts. None of the data in Rapise is used for training the model, and it is completely transient: no data is persisted in the LLM during inference.
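Conceptually, this means any retrieval against the embedded vector database happens entirely inside your environment, and only the assembled prompt string crosses the boundary to the LLM. The following is a heavily simplified, hypothetical sketch of that pattern (it is not Rapise's actual implementation; the stored snippets and similarity scoring are illustrative only):

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors (runs locally)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def retrieve_context(query_vec, store):
    """store: list of (embedding, text) pairs held in the local vector DB.
    Retrieval never leaves the customer environment."""
    best = max(store, key=lambda item: cosine(query_vec, item[0]))
    return best[1]

# Hypothetical local store of recorded test-step snippets:
store = [
    ([1.0, 0.0], "Click the Login button"),
    ([0.0, 1.0], "Open the Reports page"),
]
context = retrieve_context([0.9, 0.1], store)

# Only this assembled prompt string would be sent to the LLM:
prompt = f"Using this recorded step as context: {context}\nGenerate a test."
```

The design choice here is the key security property: embeddings and source data stay in the local vector database, while the LLM sees only the final prompt text.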