1. Define Snowflake Test Cases in SpiraTest

In SpiraTest, model Snowflake tests as data validation or analytics tests.

Test Case Structure

  • Test Type: Manual or Automated

  • Component: Snowflake / Data Warehouse

  • Steps:

    • SQL query to execute

    • Expected result (row count, checksum, aggregation value, schema match, etc.)

  • Test Attributes:

    • Warehouse

    • Database / Schema

    • Role

    • Environment (DEV / QA / PROD)

Example Test Step

Step: Execute aggregation query on SALES_FACT
Expected Result: SUM(total_sales) = 1,245,300

This creates traceability to requirements and data contracts.
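
To make automation easier later, the SQL and expected result from each test step can also be captured as structured data. Here is a minimal sketch, assuming a hypothetical SNOWFLAKE_TESTS registry keyed by SpiraTest test case ID (the ID, query, and expected value below are illustrative only):

# Hypothetical registry mapping SpiraTest test case IDs to
# the SQL to execute and the value we expect back.
SNOWFLAKE_TESTS = {
    123: {
        "name": "Aggregation check on SALES_FACT",
        "sql": "SELECT SUM(total_sales) FROM SALES_FACT",
        "expected": 1245300,
    },
}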


2. Choose an Execution Mechanism

SpiraTest does not execute SQL directly, so Snowflake execution must be external.

Option A — Python + Snowflake Connector (Most Common)

Use:

  • snowflake-connector-python

  • pytest or unittest

  • SpiraTest REST API

If you use pytest or unittest, you can also take advantage of our pre-built SpiraTest connectors for those frameworks.

Execution Flow

  1. Test runner pulls test case parameters from SpiraTest (optional)

  2. Executes SQL against Snowflake

  3. Evaluates results

  4. Posts pass/fail + logs back to SpiraTest
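
As a concrete illustration of steps 2 and 3, here is a minimal pytest sketch using snowflake-connector-python. Connection parameters are read from environment variables; the warehouse, database, table, and expected value are assumptions carried over from the earlier example:

import os
import snowflake.connector

def run_query(sql):
    # Connection parameters are assumed to live in environment variables
    conn = snowflake.connector.connect(
        account=os.environ["SNOWFLAKE_ACCOUNT"],
        user=os.environ["SNOWFLAKE_USER"],
        password=os.environ["SNOWFLAKE_PASSWORD"],
        warehouse="QA_WH",
        database="ANALYTICS",
        schema="PUBLIC",
        role="QA_ROLE",
    )
    try:
        cursor = conn.cursor()
        cursor.execute(sql)
        return cursor.fetchone()
    finally:
        conn.close()

def test_sales_fact_aggregation():
    # Expected value taken from the SpiraTest test step
    (total,) = run_query("SELECT SUM(total_sales) FROM SALES_FACT")
    assert total == 1245300

Reporting (step 4) can then be handled by the pytest connector mentioned above, or by a helper such as the post_result_to_spira() function shown in section 4.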


Option B — dbt Tests Integrated with SpiraTest

If you use dbt:

  • Define tests (not_null, unique, custom SQL tests)

  • Execute dbt test

  • Parse results

  • Push execution status into SpiraTest using REST API (see Python code below)

This is ideal for analytics engineering teams.
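
For the parse-and-push steps, here is a minimal sketch, assuming dbt has written target/run_results.json and that you maintain your own mapping from dbt test unique_ids to SpiraTest test case IDs (the DBT_TO_SPIRA mapping below is hypothetical):

import json
from datetime import datetime, timezone

# Hypothetical mapping from dbt test unique_ids to Spira test case IDs
DBT_TO_SPIRA = {
    "test.analytics.not_null_sales_fact_total_sales": 123,
}

with open("target/run_results.json") as f:
    run_results = json.load(f)

for result in run_results["results"]:
    test_case_id = DBT_TO_SPIRA.get(result["unique_id"])
    if test_case_id is None:
        continue  # dbt test not tracked in SpiraTest
    # dbt test statuses include: pass, fail, warn, error, skipped
    status = "Passed" if result["status"] == "pass" else "Failed"
    now = datetime.now(timezone.utc)
    post_result_to_spira(
        test_case_id=test_case_id,
        status=status,
        test_name=result["unique_id"],
        message=result["status"],
        assert_count=0 if status == "Passed" else 1,
        actual_result=str(result.get("message", "")),
        start_date=now,
        end_date=now,
    )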


Option C — CI/CD Orchestrated Execution

Execute Snowflake tests via your CI/CD pipeline (for example, Jenkins, GitHub Actions, Azure DevOps, or GitLab CI).

SpiraTest remains the test management hub, not the runner. In this case you can use the matching pre-built pipeline integration for your CI tool.


3. Reporting Results Back to SpiraTest

Automated Test Execution Upload

Use SpiraTest’s Automated Test Execution API.

Key Fields

  • testCaseId

  • executionStatus (Passed / Failed / Blocked)

  • actualResult

  • startTime / endTime

  • Attachments (SQL output or CSVs, if needed)

Result Mapping

Snowflake Outcome                   SpiraTest Status
Query result matches expected       Passed
Mismatch / empty set                Failed
Warehouse unavailable               Blocked
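
In code, the Blocked case typically comes from catching a connection or warehouse error rather than from comparing results. A minimal sketch, assuming an open snowflake-connector-python cursor and its exception types:

import snowflake.connector

try:
    cursor.execute(sql)
    rows = cursor.fetchall()
    status = "Passed" if rows == expected else "Failed"
except snowflake.connector.errors.Error:
    # Could not reach Snowflake / warehouse unavailable
    status = "Blocked"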

4. Sample Python Execution Pattern (High-Level)

from datetime import datetime, timezone

# Assumes an open snowflake-connector-python cursor, plus `sql` and
# `expected` pulled from the SpiraTest test case definition.
start_time = datetime.now(timezone.utc)
cursor.execute(sql)
result = cursor.fetchall()
end_time = datetime.now(timezone.utc)
assert_count = 0

if result == expected:
    status = "Passed"
else:
    status = "Failed"
    assert_count = 1  # one failed assertion

post_result_to_spira(
    test_case_id=123,
    status=status,
    test_name="Snowflake Test 1",
    message=f"Query returned {result}",
    assert_count=assert_count,
    actual_result=str(result),
    start_date=start_time,
    end_date=end_time,
)

This keeps Snowflake logic decoupled from SpiraTest.

A sample implementation of this function is as follows:

import requests
from datetime import datetime, timezone

SPIRA_BASE_URL = "https://yourcompany.spiraservice.net"
SPIRA_USERNAME = "your.username"
PROJECT_ID = 12
API_KEY = "YOUR_SPIRA_API_KEY"  # the user's RSS token / API key

# Spira execution status IDs
SPIRA_STATUS_MAP = {
    "Failed": 1,
    "Passed": 2,
    "Not Run": 3,
    "Blocked": 5,
    "Caution": 6
}

def post_result_to_spira(
    test_case_id: int,
    status: str,
    test_name: str,
    message: str,
    assert_count: int,
    actual_result: str,
    start_date: datetime,
    end_date: datetime
):
    """
    Posts an automated test execution result to SpiraTest.

    :param test_case_id: Spira test case ID
    :param status: Passed | Failed | Blocked | Caution | Not Run
    :param actual_result: Execution output or assertion message
    :param message: Short message of output
    """

    url = (
        f"{SPIRA_BASE_URL}/Services/v7_0/RestService.svc/"
        f"projects/{PROJECT_ID}/test-runs/record"
    )

    # Spira's REST service authenticates with the username and
    # API key (RSS token) passed as request headers
    headers = {
        "username": SPIRA_USERNAME,
        "api-key": API_KEY,
        "Content-Type": "application/json"
    }

    payload = {
        "TestCaseId": test_case_id,
        "ExecutionStatusId": SPIRA_STATUS_MAP[status],
        "ActualResult": actual_result,
        # datetime objects are not JSON-serializable; send ISO 8601 strings
        # (some Spira versions expect the WCF /Date(ms)/ format instead)
        "StartDate": start_date.isoformat(),
        "EndDate": end_date.isoformat(),
        "RunnerName": "Snowflake Python Runner",
        "TestRunFormatId": 1,  # 1 = plain-text results
        "RunnerTestName": test_name,
        "RunnerMessage": message,
        "RunnerStackTrace": actual_result,
        "RunnerAssertCount": assert_count
    }

    response = requests.post(url, json=payload, headers=headers)

    if response.status_code not in (200, 201):
        raise RuntimeError(
            f"Failed to post result to SpiraTest "
            f"(HTTP {response.status_code}): {response.text}"
        )

    return response.json()

5. Manual Execution (When Automation Isn’t Ready)

For exploratory or ad-hoc validation:

  1. Execute SQL in Snowflake UI

  2. Copy results

  3. Mark test step as Passed/Failed in SpiraTest

  4. Attach screenshots or CSV outputs

This is common during early data pipeline development.


6. Best Practices for Snowflake + SpiraTest

Test Design

  • Favor assertions with a clear pass/fail outcome, not raw queries (see the sketch after this list)

  • Test:

    • Row counts

    • Null handling

    • Referential integrity

    • Schema drift

    • Aggregation correctness

    • Incremental loads
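
A common pattern is to write each check so that the query returns rows only when something is wrong, then assert that zero rows come back. A minimal sketch (table and column names are illustrative):

# Assumes an open snowflake-connector-python cursor.
# Each query returns violating rows; an empty result means the check passes.
CHECKS = {
    "no_null_sale_ids":
        "SELECT sale_id FROM SALES_FACT WHERE sale_id IS NULL",
    "orders_reference_valid_customers": """
        SELECT o.order_id
        FROM ORDERS o
        LEFT JOIN CUSTOMERS c ON o.customer_id = c.customer_id
        WHERE c.customer_id IS NULL
    """,
}

for name, check_sql in CHECKS.items():
    cursor.execute(check_sql)
    violations = cursor.fetchall()
    assert len(violations) == 0, f"{name}: {len(violations)} violating rows"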

Governance

  • Map Snowflake tests to:

    • Business requirements

    • Data SLAs

    • Regulatory controls (SOX, GxP, HIPAA)

Performance Testing

  • Capture:

    • Query duration

    • Warehouse credits used

  • Store metrics as custom properties or attachments
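
To capture these metrics, one approach is to look the query up by ID after it completes. A minimal sketch using the connector's sfqid attribute and Snowflake's INFORMATION_SCHEMA.QUERY_HISTORY table function (verify column availability against your account):

# Assumes an open snowflake-connector-python cursor.
cursor.execute("SELECT SUM(total_sales) FROM SALES_FACT")
query_id = cursor.sfqid  # ID of the query that just ran

cursor.execute(
    """
    SELECT total_elapsed_time, credits_used_cloud_services
    FROM TABLE(INFORMATION_SCHEMA.QUERY_HISTORY())
    WHERE query_id = %s
    """,
    (query_id,),
)
elapsed_ms, credits = cursor.fetchone()
# Record elapsed_ms and credits as custom properties, or attach them
# as a CSV, when posting the test run to SpiraTest.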


7. When to Add Rapise

If your Snowflake testing includes:

  • UI-driven BI tools (Tableau, Power BI, Looker)

  • End-to-end data validation

Use Rapise to:

  • Trigger Snowflake-backed dashboards

  • Validate rendered outputs

  • Push results automatically to SpiraTest


Summary Architecture

SpiraTest (Test Management)
        ↓
CI / Python / dbt Runner
        ↓
Snowflake Execution
        ↓
Results → SpiraTest API