Responsible AI Starts with Respect for Customer Data

April 21st, 2026 by Adam Sandman

As highlighted in a recent article in The Register, Atlassian's newly announced data contribution changes are a reminder of a hard truth in the AI era: not every vendor defines "responsible AI" the same way. Beginning August 17, 2026, Atlassian says it will begin using customer metadata and, depending on plan level and settings, in-app data from products like Jira and Confluence to improve apps and AI experiences for all customers. As described in the article, these changes will apply across supported cloud products, with different defaults by tier and some exclusions for certain regulated or specially configured environments.

That policy may work for Atlassian. It is not the path Inflectra has chosen.

Inflectra’s Position Is Clear: Customer Data Should Not Be Repurposed for Shared Model Training

At Inflectra, we believe AI should help customers work faster, test smarter, and reduce risk without turning their operational data into shared training fuel. Our Responsible Artificial Intelligence Statement is explicit: we prioritize ethical standards, privacy, and security. In addition, our policy protects user data in accordance with applicable laws and regional restrictions; encrypts data at rest and in transit; and does not use user input or processed data for base AI model training. Our policy also states that strict internal controls are in place so sensitive, confidential, and personally identifiable information is not stored in AI systems in ways that expose it to misuse or unauthorized access.

Why This Distinction Matters for Enterprise Teams

For many organizations, project artifacts are not generic content. They are roadmaps, requirements, defects, release discussions, internal priorities, customer commitments, test evidence, and audit-sensitive records. Even when vendors describe contributed data as de-identified or aggregated, many enterprises still see governance, sovereignty, contractual, and trust issues whenever customer activity is repurposed to improve services for everyone else. By comparison, Atlassian's published materials say contributed data may include both metadata and in-app data, and that some contributed data may be retained for up to seven years.

AI Value Should Not Come at the Cost of Data Control

Our position is simpler: your data should be used to serve you, not mined to improve a shared model for the broader market. Our approach to AI is grounded in transparency, access control, privacy protection, and accountable governance. We use AI to enhance customer outcomes, but we do not believe customers should have to trade away control of their data to receive those benefits.

The Right Questions to Ask Any AI Software Vendor

This is especially important for organizations operating in regulated, security-conscious, or IP-sensitive environments. When teams evaluate AI-enabled application lifecycle management, testing, and quality platforms, they should be asking more than whether a feature is impressive. They should ask:

  • What data is being collected?
  • What is the default setting?
  • Can it be fully disabled?
  • How long is it retained?
  • Is it used for model training?
  • And who ultimately benefits from that use?

Atlassian’s new policy makes those questions impossible to ignore.

Inflectra’s Commitment to Responsible AI

At Inflectra, our answer is clear. We are committed to responsible AI that strengthens quality, compliance, and productivity without compromising customer trust. We believe AI should be powerful, practical, and privacy-respecting. Customers should get the benefits of innovation without wondering whether their own data is quietly being repurposed behind the scenes.

That is the standard we believe enterprise software vendors should meet. And it is the standard Inflectra intends to keep.


About the Author

Adam Sandman

Adam Sandman is a visionary entrepreneur and a respected thought leader in the enterprise software industry, currently serving as the CEO of Inflectra. He spearheads Inflectra’s suite of ALM and software testing solutions, from test automation (Rapise) to enterprise program management (SpiraPlan). Adam has dedicated his career to revolutionizing how businesses approach software development, testing, and lifecycle management.
