At Helpshift, we are committed to building responsible, security-conscious AI solutions designed to empower your support teams while prioritizing the safety of your players and your brand. Our Care AI Agent architecture is built on foundational principles of Privacy, Security, Accuracy, and Customer Control. By operating under a "Shared Responsibility Model," Helpshift provides a robust, security-focused platform, while you retain control over your specific business rules, acceptable knowledge bases, and brand voice.

Here is how Helpshift integrates modern AI technology while working to keep your data and your customers safe.

1. Privacy

Protecting customer data is fundamental to responsible AI. Helpshift’s architecture is designed with privacy at the forefront:

  • Data Masking: Before data is processed by an external Large Language Model (LLM), sensitive data is automatically identified and replaced with secure tokens. Please note that automated masking systems are subject to inherent technical limitations.
  • AI Model Training: Our agreements with third-party LLM providers stipulate that customer data is not used to train or improve their foundational models.
  • Privacy Controls: For specific regulatory requirements, Helpshift features a "Privacy" flag for designated end-users to support compliance efforts

2. Security

Unlike traditional software, generative AI requires dynamic security measures. Modeled on the OWASP Top 10 for LLMs and Agentic Applications and the NIST AI Risk Management Framework, Helpshift utilizes a "Defense in Depth" guardrail strategy. These programmatic monitors are designed to check conversations during the processing phase, before the AI "thinks" and before it "speaks":

  • Input Guardrails (Threat Prevention): We utilize Instruction Isolation, intended to block "Prompt Injections" attempts to override system instructions or extract hidden logic.
  • Safety & Moderation Filters: Incoming queries are automatically filtered to block offensive language, harassment, and dangerous content.
  • Compliance by Design: The Care AI Agent is engineered to restrict the provision of advice in highly regulated or sensitive domains.

3. Transparency

Transparency is a cornerstone of AI trust, and you need visibility into how your AI provides its answers:

  • Knowledge Grounding (RAG): The agent is explicitly instructed to answer only from your approved knowledge base. If an answer cannot be found, it is programmed to state that it does not know, rather than guessing.
  • The "LLM Judge": To mitigate AI "hallucinations," a secondary model verifies that the generated response is factually supported by your source documentation.
  • Audit Logs: Interactions, including the user input and final output, are securely logged in an immutable repository to provide a forensic trail for compliance.

4. Control

You maintain control of how generative AI interacts with your players. Helpshift is designed to accommodate your specific operational needs with automated fallback systems. When the AI reaches a functional, technical, or security limit, it automatically triggers a fallback, ensuring a seamless transition back to human support agents or safe defaults:

  • Guardrail Violations: Triggered if a user's input violates custom security layers or restricted topics.
  • Sentiment Detection: Activated upon the detection of high negative sentiment or user frustration, escalating the ticket to a human with full context.
  • Outside of Capability: Triggered when a question falls outside the scope of your mapped knowledge resources.
  • Least Privilege Access: The Care AI agent is restricted from performing critical database actions (such as deleting user data or accounts) without explicit human approval.

5. Infrastructure

Helpshift maintains a rigorous technical environment to support these guardrails:

  • Secure Architecture: The Helpshift Care AI Agent is architected on a secure, enterprise-grade cloud platform, utilizing advanced AI ecosystems to deliver reliable, secure-by-design LLM operations.
  • Strategic Collaborations: Our security posture is continuously monitored and reinforced through strategic collaborations with specialized technology and security partners.
  • Continuous Vulnerability Scanning: We conduct continuous, automated vulnerability scanning across our repositories to identify and address third-party dependencies.

Helpshift understands that the regulatory landscape surrounding AI is evolving rapidly. By grounding our development in foundational principles of data minimization, transparency, and access controls, we continuously strive to ensure that our platform and our customers are prepared for the future of responsible customer service.