

AI Policy

How OST builds, operates, and governs AI features inside the software we deliver. Customer data is not used to train general-purpose AI models. Specifics are scoped per engagement.

Last reviewed: TBD · Pending legal review · OpenSource Technologies, Inc., a Pennsylvania corporation

Draft notice. This document is a structural draft pending legal review. The framework, sections, and OST's general approach are accurate. Specifics (jurisdictions, regulators, exact data-handling language, legal definitions, governing law) are determined per engagement and reviewed by counsel before any production deployment. Use the contact form for engagement-specific compliance questions.

Section 01

Our approach to AI

OST builds AI features into client software when there is a clear use case and a clear data-handling story. We do not deploy AI for novelty.

Our principles:

  • Customer use cases lead, not AI capabilities: AI is engineered into a platform because it solves a customer problem, not because the technology exists
  • Customer data stays in customer scope: Engagement data is not used to train general-purpose AI models. Period.
  • Configurable per organization policy: AI behavior, content guardrails, and data retention are configurable to match your governance posture
  • Auditable: AI feature activity is logged for review, debugging, and compliance demonstration
  • Honest about limitations: AI features are positioned as assistants and recommendations, not as authoritative or autonomous decision-makers, unless explicitly contracted otherwise

Section 02

AI features in our deliverables

OST has shipped or is shipping AI features in the following categories across active engagements:

  • Conversational AI assistants: Public-facing chatbots that answer eligibility, scheduling, and program questions. Live in our Council on Aging Martin County engagement.
  • Product discovery and recommendations: Conversational product discovery, AI-driven recommendations for e-commerce platforms
  • Content automation: Document analysis, structured data extraction (such as our automated OM-to-Property data extraction for commercial real estate)
  • Search and matching: Smart search, semantic matching, similarity ranking
  • Operational automation: Routing, triage, and workflow automation in customer service contexts

For engagement-specific AI scope, the contract and statement of work govern what gets built and how.

Section 03

Data used in AI features

AI features inherit the data scope of the engagement they live in. We do not introduce new data flows for AI without explicit contractual agreement.

What's typically used

  • Customer-provided content: The product catalog, knowledge base, FAQ, or program information that powers the AI feature
  • User input: Questions, queries, and interactions submitted by end users in real time
  • Feedback loops: Per-engagement, opt-in feedback that improves the feature for that engagement only

What's not used

  • Customer data is not contributed to general-purpose model training
  • End user data is not shared across engagements
  • PII is not exposed to AI systems beyond what the contract authorizes

Section 04

Training and learning policies

OST distinguishes between three patterns of AI use, each with its own data-handling treatment:

  1. Inference-only models (no learning): Most engagements use AI in inference mode. Customer data flows through the model to produce a response, then is not retained for model improvement. This is the default.
  2. Per-engagement fine-tuning: Some engagements benefit from a model fine-tuned on the engagement's domain content. The fine-tuned model is dedicated to that engagement only and not shared across customers.
  3. Aggregate, anonymized improvement: Where contractually agreed, fully de-identified aggregate signals (no PII, no customer-identifying information) may inform OST's general approach to similar use cases. Opt-in only.

The default is inference-only. Other patterns require explicit contractual authorization.
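The default-to-inference-only rule above can be sketched in code. This is a minimal illustration, not OST's actual implementation; the class, field, and mode names are hypothetical.

```python
from dataclasses import dataclass
from enum import Enum


class LearningMode(Enum):
    INFERENCE_ONLY = "inference_only"              # default: nothing retained for model improvement
    ENGAGEMENT_FINE_TUNE = "engagement_fine_tune"  # dedicated model, contract-authorized only
    AGGREGATE_ANONYMIZED = "aggregate_anonymized"  # opt-in, de-identified signals only


@dataclass
class EngagementAIPolicy:
    engagement_id: str
    learning_mode: LearningMode = LearningMode.INFERENCE_ONLY
    contract_authorizes_learning: bool = False

    def effective_mode(self) -> LearningMode:
        # Non-default modes require explicit contractual authorization;
        # without it, the policy falls back to the inference-only default.
        if (self.learning_mode is not LearningMode.INFERENCE_ONLY
                and not self.contract_authorizes_learning):
            return LearningMode.INFERENCE_ONLY
        return self.learning_mode
```

The point of the sketch is the fail-safe direction: a misconfigured or unauthorized engagement degrades to the most restrictive data-handling pattern, never the least.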

Section 05

Customer control over AI features

Customers retain control over AI features deployed in their engagement.

  • Disable AI features: Each AI feature can be disabled at the platform admin level without breaking the surrounding workflow
  • Configure guardrails: Content filters, allowed topics, escalation paths, and refusal behaviors are configurable per organization policy
  • Review AI activity: Audit logs of AI interactions are available for review
  • Provide feedback: End users can flag AI responses; flagged interactions feed into engagement-specific improvement (or are simply discarded, per your preference)
  • Roll back: Model changes that affect customer-facing behavior are versioned; rollback is available
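As an illustration of the controls above, here is a minimal sketch of per-organization guardrail configuration and graceful degradation when a feature is disabled. The configuration keys, topics, and return values are hypothetical, not OST's actual schema.

```python
# Hypothetical per-engagement guardrail configuration; names are illustrative.
GUARDRAILS = {
    "chatbot": {
        "enabled": True,
        "allowed_topics": ["eligibility", "scheduling", "programs"],
        "refusal_message": "I can't help with that, but a staff member can.",
        "escalate_to_human_on": ["complaint", "medical_advice"],
    },
}


def handle_query(feature: str, topic: str) -> str:
    cfg = GUARDRAILS.get(feature, {})
    if not cfg.get("enabled", False):
        # A disabled feature degrades gracefully rather than breaking the workflow.
        return "fallback:contact_form"
    if topic in cfg.get("escalate_to_human_on", []):
        return "escalate:human_agent"
    if topic not in cfg.get("allowed_topics", []):
        return cfg.get("refusal_message", "refused")
    return "answer:ai"
```

Flipping `enabled` to `False`, or editing the topic lists, changes behavior per organization policy without a code change.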

Section 06

Audit and transparency

OST treats AI features as audit-relevant infrastructure. Activity is logged. Behavior is documented. Changes are versioned.

  • Activity logs: AI interactions (queries, responses, escalations) are logged with the same retention and access controls as the rest of the platform
  • Model documentation: The AI features OST deploys are documented (what model family, what data, what configuration) for engagement records
  • Behavioral testing: AI features are tested against representative scenarios before deployment and on each significant change
  • Disclosure to end users: When a user is interacting with an AI feature, that fact is disclosed (the chatbot is labeled as such; AI-generated content is identified where appropriate)

Section 07

Regulatory context

The regulatory landscape for AI is evolving quickly. OST tracks emerging frameworks and adapts engagement practices accordingly.

  • EU AI Act: Risk-tiered AI regulation in the EU. Applicable to engagements serving EU users.
  • State-level AI laws: California, Colorado, Illinois, and other states have passed or are passing AI-specific privacy and disclosure rules
  • Sector-specific guidance: Healthcare (FDA, HHS), education (FERPA implications), financial services (CFPB, banking regulators)
  • NIST AI Risk Management Framework: A useful reference structure OST applies to engagement-level AI risk assessment