The AI Agent Copilot feature helps operators enhance support agent efficiency through a suite of AI-powered tools. To use it effectively while maintaining high standards of quality and compliance, operators should follow these best practices:

1. Avoid personal data disclosure

Helpshift implements internal masking mechanisms for personal data before sending it to AI systems. If irrelevant personal data is identified in AI-generated content, operators are advised to report it using the feedback option within the feature.

2. Verify AI responses for accuracy

AI-generated content relies on the quality of the operator’s past issue data. To ensure accuracy:

  • Always review AI-generated content before sending it to the customer.
  • Flag any irrelevant or inaccurate content for AI system improvement.

3. Review and mitigate biased responses

AI-generated content may occasionally require adjustments to fully align with brand guidelines. To ensure this:

  • Review all content before sharing it with customers, and identify any inappropriate, biased, discriminatory, or offensive content.
  • Immediately report any biased or unethical AI-generated content using the feedback option in the feature window of the Agent dashboard.
  • Stay updated on the limitations of the AI system, which will be outlined during training.

4. Manage high-impact queries responsibly

For complex or high-impact topics like legal or financial issues:

  • AI-generated content can serve as a helpful starting point, but operators should always draft the final content manually based on approved internal guidelines and policies.
  • High-impact queries require thorough verification, so double-check any AI-generated content.
  • Ensure that you have undergone the relevant training and have access to the right resources to identify such scenarios and respond to them appropriately.

Examples of high-impact query management

  • Legal proceedings threat: If an end user emails a threat to initiate legal proceedings, operators must understand the implications and ensure the response is carefully crafted without relying on AI.
  • Requests from law enforcement: For support tickets received from law enforcement authorities requesting specific data, operators should follow approved processes and avoid using AI-generated content.
  • Vulnerability reports or application issues: When a ticket highlights vulnerabilities or reports application issues, operators must escalate appropriately and avoid relying on AI-generated content.

5. Follow customer specific guidelines

Some situations, like legal claims or data rights requests, may require strict adherence to customer templates. Use the pre-existing templates provided for handling such scenarios, and avoid AI-generated content wherever the use of templates is advised.

6. Report errors related to AI features

Operators should report inaccurate AI-generated content or errors to their team lead or manager and seek further guidance. They can also use the in-app feedback option to report any inaccuracies or errors encountered in the AI features.

7. Handle Data Subject Access Requests (DSARs) appropriately

Use predefined templates for DSARs to ensure compliance with customer-specific protocols, without relying on AI-generated content.

8. Exercise caution with children’s data

If an operator identifies that an end user is a child, they should not use AI-generated content to respond to the query.

9. Limitations of AI

AI-generated content may be inaccurate because it is drawn from a limited pool of knowledge, and the AI features may provide incorrect information. Therefore, review all content generated by the AI features before using it.