AI TRANSPARENCY

Last Updated: February 19, 2026
Effective Date: February 19, 2026

QuestWorks Games, LLC ("QuestWorks," "we," "our," or "us") uses artificial intelligence to power our gamified team-development platform. This page describes the AI systems we deploy, what data they use, how we manage risk, and how to reach us with questions or concerns. It supplements our Privacy Policy, Terms of Service, and Statement on AI Use.

This disclosure is provided in accordance with Colorado's Consumer Protections for Artificial Intelligence Act (SB 24‑205), which takes effect June 30, 2026, and reflects our commitment to transparency about how AI is used across our platform.

1. AI SYSTEMS WE DEPLOY

QuestWorks does not develop its own AI models. We deploy commercially available AI services from third-party providers to power the features described below.

1.1. AI Game Facilitator (QuestRooms)

Our core product feature. A large language model generates adaptive, real-time narrative content that guides teams through collaborative role-playing scenarios designed for professional development.

  • What it does: Generates story prompts, responds to player actions, adapts scenarios to group dynamics, and facilitates team interaction during live sessions.
  • Data inputs: Player text and transcribed speech during active sessions; session context (scenario type, team size, prior session history within the same quest).
  • Providers: OpenAI, Google (Gemini), Anthropic.

1.2. Speech-to-Text Transcription

Converts spoken player input into text so the AI facilitator can respond to voice interactions.

  • What it does: Transcribes words spoken during active AI interaction moments (indicated by "🎤 Listening..." in the interface). Does not run continuously.
  • What it does NOT do: Does not create voiceprints, analyze voice characteristics, or perform speaker identification. Identity is established through Slack authentication, not voice analysis.
  • Providers: Deepgram (speech-to-text only), OpenAI (Whisper).

1.3. AI-Generated Art and Avatars

Creates cartoon-style character avatars and visual assets used throughout the platform.

  • What it does: Generates stylized cartoon avatars from Slack profile pictures (opt-in only); produces visual assets for the game experience.
  • Data inputs: User's Slack profile picture (when avatar feature is opted into).
  • Provider: OpenAI (DALL-E / images endpoint).

1.4. Session Insights and Behavioral Analytics

The platform generates observations about collaboration patterns at both team and individual levels.

  • Team-level: High-level observations about team dynamics and communication patterns during gameplay, intended as professional development conversation starters.
  • Individual-level (admin reports): Two features surface AI-generated individual insights to the customer's designated administrator:
    • Soft Skill Spotlight: Identifies positive behavioral traits (e.g., leadership, communication, conflict resolution) observed in named individuals during sessions.
    • Rising Leaders: Tracks individuals who demonstrate consistent growth in collaborative behaviors over multi-week periods.
  • Positive-only: Both individual-level features surface positive signals only. The system does not generate negative assessments, warnings, or criticism of any individual.
  • What it does NOT do: Does not generate performance reviews, fitness-for-role assessments, or any output intended for employment decisions. Individual insights are AI-generated observations from in-game behavior, not validated evaluations.
  • Providers: OpenAI, Google (Gemini), Anthropic.

1.5. HeroSystem / HeroGPT

A standalone GPT-based tool that helps individuals reflect on their professional strengths and communication style.

  • What it does: Guides users through a conversational self-assessment to surface development insights.
  • Provider: OpenAI (ChatGPT custom GPT).

2. INTENDED USE AND LIMITATIONS

2.1. What Our AI Is For

Every AI system listed above exists for one purpose: facilitating team-building experiences through collaborative gameplay. Our AI generates stories, transcribes speech, creates art, and surfaces team- and individual-level observations — all in support of professional development and team cohesion.

2.2. What Our AI Is NOT For

QuestWorks AI is not designed, intended, or marketed to make or substantially factor into consequential decisions about individuals. Specifically, our AI outputs must not be used for:

  • Performance evaluations or reviews
  • Hiring, firing, promotion, or compensation decisions
  • Disciplinary actions
  • Individual assessment beyond collaborative skill development
  • Any employment decision as defined under applicable law

Our Terms of Service (Section 4.1) contractually prohibit customers from using QuestWorks outputs for these purposes. Our Master Subscription Agreement (Section 3.4(a)) reinforces this restriction for enterprise customers: misuse of Platform data for employment purposes constitutes a material breach that may result in immediate termination, and the customer indemnifies QuestWorks against any resulting employment law claims (Sections 9.2(e), 12.18).

2.3. AI Content Accuracy

AI-generated content — including game narratives, feedback observations, and visual assets — may occasionally be inaccurate, incomplete, or contextually inappropriate. All AI output should be treated as a starting point for conversation, not as authoritative assessment. We implement content safety measures and moderation, but no AI system produces flawless output in every situation.

3. HOW WE MANAGE RISK

3.1. Algorithmic Discrimination

Because our AI facilitates games rather than making decisions about individuals, the primary risk surface for algorithmic discrimination is narrow. That said, we take it seriously:

  • Content safety rails: Every AI interaction runs through content moderation filters designed to catch biased, inappropriate, or harmful outputs before they reach users.
  • Provider selection: We select AI providers (OpenAI, Google, Anthropic) that publish their own safety research, red-teaming practices, and bias mitigation efforts. We evaluate provider safety practices as part of our vendor selection process.
  • Human review: Our team monitors sessions and reviews flagged content. Users can report concerning AI-generated content at any time (see Section 5).
  • Use restrictions: By contractually prohibiting use of our outputs for employment decisions, we limit the pathway through which AI bias could cause consequential harm to individuals.

3.2. Data Protection

  • Session transcripts and voice recordings are not used to train external AI models.
  • AI model improvement uses only anonymized, aggregated data from which individual users cannot be re-identified.
  • Voice recordings are retained for 90 days (trial users) or per enterprise MSA terms, then permanently deleted.
  • Full data practices are described in our Privacy Policy.

3.3. Human Oversight

QuestWorks maintains both automated and human safeguards over AI outputs:

  • Automated: Content filters, safety rails, and moderation systems screen AI output in real time.
  • Human: Team members review flagged sessions, investigate reported content, and can intervene in or terminate sessions producing problematic output.

AI-generated insights about team dynamics require human interpretation. We advise all customers that AI observations are conversation starters, not conclusions, and that final judgments about team development should always involve human review.

3.4. Risk Management Framework

We are in the process of formally aligning our AI risk management practices with the NIST AI Risk Management Framework (AI RMF). As an early-stage company, our current practices include provider due diligence, content safety monitoring, user reporting mechanisms, and contractual use restrictions. We are working to document these practices in a structured framework ahead of the June 30, 2026 compliance date.

4. THIRD-PARTY AI PROVIDERS

We do not build or train our own AI models. The following third-party providers power QuestWorks AI features:

  • OpenAI, L.P. — Language generation, transcription (Whisper), image generation (DALL-E)
  • Google LLC — Language generation (Gemini)
  • Anthropic, PBC — Language generation (Claude)
  • Deepgram, Inc. — Speech-to-text transcription
  • LiveKit, Inc. — Real-time audio/video infrastructure

Each provider is subject to a Data Processing Agreement. Enterprise customers may request copies of applicable agreements and transfer safeguards. We cannot control or guarantee the outputs of third-party AI models, and our liability is subject to the limitations described in our Terms of Service (Section 10.3).

5. CONTACT US

If you have questions about how QuestWorks uses AI, want to report concerning AI-generated content, or want to exercise your rights regarding AI-processed data:

AI Transparency Contact: Asa Reilkoff, Founder
Email: asa@questworks.games
Mail: QuestWorks Games, LLC, 3745 Canfield St, Unit 304, Boulder, CO 80301

For data privacy requests, contact privacy@questworks.games. For general support, contact support@questworks.games.

We aim to respond to all AI-related inquiries within 10 business days.

6. CHANGES TO THIS DISCLOSURE

We will update this page as our AI systems, practices, or legal obligations change. Material changes will be reflected in the "Last Updated" date above. We encourage you to review this page periodically.

7. RELATED DOCUMENTS

  • Privacy Policy
  • Terms of Service
  • Statement on AI Use
  • Master Subscription Agreement (enterprise customers)