To support your compliance efforts with privacy regulations such as the GDPR and CCPA, Helpshift provides robust tools to manage Data Subject Requests (DSRs). This helps ensure your users can effectively exercise their rights when interacting with our AI products.

1. How Helpshift Processes DSRs

When a user submits a request to view or delete their data, Helpshift’s architecture is designed to manage these requests across our AI products.

  • Auditing & Access: Every AI interaction is logged in an immutable repository to support forensic analysis. This log serves as the primary source for retrieving conversation history.
  • Data Masking: Where applicable, automated masking protocols identify and replace sensitive data with secure tokens prior to external processing (see the illustrative sketch after this list). Please note that automated masking systems are subject to inherent technical limitations and serve as a defense-in-depth measure rather than an absolute guarantee of anonymity.
  • Deletion Requests: A deletion request initiates a process to remove the user's identifiable data from applicable Helpshift systems, including context memory used by Care AI Agents, in accordance with the standard data retention and deletion policies mentioned here.
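
Helpshift's masking pipeline itself is proprietary, but the tokenization concept is easy to picture. The minimal Python sketch below is hypothetical (the patterns and the `mask` helper are ours, not a Helpshift API): it replaces matched email addresses and phone numbers with opaque tokens while the token-to-value map stays inside the trusted boundary. Its reliance on predefined patterns is also why automated redaction carries the inherent limitations noted above.

```python
import re
import uuid

# Hypothetical, illustrative patterns only. Real PII detection needs far
# broader coverage (names, addresses, national IDs) and still has blind spots.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s()-]{7,}\d"),
}

def mask(text: str) -> tuple[str, dict[str, str]]:
    """Replace detected PII with opaque tokens before external processing.

    Returns the masked text plus a token -> original-value map that is
    retained inside the trusted boundary, never sent to the LLM provider.
    """
    vault: dict[str, str] = {}
    for label, pattern in PII_PATTERNS.items():
        def _tokenize(match: re.Match) -> str:
            token = f"<{label}_{uuid.uuid4().hex[:8]}>"
            vault[token] = match.group(0)
            return token
        text = pattern.sub(_tokenize, text)
    return text, vault

masked, vault = mask("Contact me at jane@example.com or +1 415 555 0100")
print(masked)  # e.g. "Contact me at <EMAIL_3f2a9c1d> or <PHONE_8b0e42aa>"
```

Anything a pattern does not match passes through unmasked, which is why masking complements, rather than replaces, the contractual and isolation controls described in Section 3.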

2. Best Practice: Configuring AI Features for DSRs

To facilitate a seamless user experience, we recommend configuring your Care AI Agent to handle DSR-related inquiries. By defining the boundaries below, the Care AI Agent can guide users to the correct compliance channels.

  • Recognition Rules: You can instruct your Care AI Agent using the “Procedures” module to identify common DSR keywords (e.g., "Delete my data," "Privacy request," or "GDPR") and flag such tickets by adding specific Tags, Custom Issue Fields (CIFs), etc., as sketched after this list. Please note that the keywords provided here are not an exhaustive list. It is your responsibility to define and maintain a comprehensive list of trigger terms applicable to your specific regulatory requirements and jurisdictions.
  • Direction & Routing: Instruct the AI feature to automatically trigger a fallback for human review as a “Sensitive Topic”.
  • Security Guardrails: Ensure your configurations adhere to Instruction Isolation, preventing the AI from revealing the internal logic of how it processes these requests to the end-user.
  • Testing Your Care AI Agent: We strongly recommend testing your configurations in a trial environment before deploying them to your live environment. This step ensures your Care AI Agent delivers the exact experience you intended and handles customer inquiries accurately from day one.
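
Procedures are configured in the Helpshift dashboard rather than in code, but the decision logic such a configuration encodes can be sketched. The Python example below is purely illustrative (the `Ticket` shape, tag name, and route names are our assumptions, not Helpshift identifiers): it detects DSR trigger phrases, tags the ticket, and falls back to human review as a sensitive topic.

```python
from dataclasses import dataclass, field

# Illustrative trigger list only; maintain your own comprehensive list per
# the regulations and jurisdictions that apply to you.
DSR_KEYWORDS = ("delete my data", "privacy request", "gdpr", "ccpa")

@dataclass
class Ticket:
    message: str
    tags: list[str] = field(default_factory=list)
    route: str = "ai_agent"

def apply_dsr_procedure(ticket: Ticket) -> Ticket:
    """Mirror of a dashboard Procedure: tag DSR tickets and escalate them.

    Matching here is deliberately simple (case-insensitive substring); a real
    configuration should also cover synonyms, misspellings, and other languages.
    """
    text = ticket.message.lower()
    if any(keyword in text for keyword in DSR_KEYWORDS):
        ticket.tags.append("dsr-request")   # flag for the compliance team
        ticket.route = "human_review"       # fallback: Sensitive Topic
    return ticket

ticket = apply_dsr_procedure(Ticket("Please delete my data under GDPR."))
print(ticket.tags, ticket.route)  # ['dsr-request'] human_review
```

Testing in a trial environment, as recommended above, is the place to verify that this kind of rule fires on your real trigger terms without over-matching routine conversations.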

3. AI Model Training

A common concern during DSRs is whether user data has been absorbed into a large language model.

  • No Model Training: Our agreements with third-party LLM providers stipulate that customer data is not used to train or improve their foundational models.
  • Data Masking: As noted above, while data masking protocols are applied before data is sent to external providers, customers should be aware of the inherent technical limitations of automated redaction systems.
  • Contextual Isolation: User data is used solely to provide context for the active support session and remains subject to strict data boundaries; it is not used by third-party LLM providers for their own model development (see the conceptual sketch below).
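
For readers who find a concrete model helpful, the conceptual sketch below pictures contextual isolation as a session-scoped store: context exists only for the active session, nothing flows into training pipelines, and a deletion request (Section 1) purges a user's context memory. Every name here (`SessionContextStore`, `process_deletion_request`) is a hypothetical illustration, not Helpshift's implementation.

```python
from collections import defaultdict

class SessionContextStore:
    """Conceptual sketch: context lives only per active session and can be
    purged on a deletion request; nothing here feeds model training."""

    def __init__(self) -> None:
        self._contexts: dict[str, list[str]] = defaultdict(list)

    def append(self, user_id: str, utterance: str) -> None:
        # Context is used only to answer within the active session.
        self._contexts[user_id].append(utterance)

    def end_session(self, user_id: str) -> None:
        # Session ends: context is not retained as training material.
        self._contexts.pop(user_id, None)

    def process_deletion_request(self, user_id: str) -> None:
        # A DSR deletion removes identifiable data from context memory.
        self._contexts.pop(user_id, None)
```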

Our Commitment to Data Rights

Security and privacy are ongoing commitments at Helpshift. We continuously improve our DSR workflows so that, as AI technology evolves, you have the tools necessary to support your users' data rights effectively.