Some support organizations rely on customer service agents in addition to bots. This blended approach has many demonstrated benefits. However, it can also obscure certain important business metrics.

While bots can respond almost instantly to each new support request, human agents cannot. When bot and agent performance data are commingled, it becomes difficult to assess agent responsiveness realistically.

To solve this problem, you must assess agent responsiveness separately from bot performance. For exactly this reason, in the summer of 2021, we updated the Helpshift Support Analytics template app for Microsoft Power BI.

Reports in the updated app differ in an important way from the reports in earlier releases.
  • Previously, the report visualizations for issue management defaulted to one of the time-to-first-response (TTFR) metrics.
  • In contrast, the updated reports for issue management default to one of the time-to-first-human-response (TTFHR) metrics.
  • Time to First Human Response — The current default. Measures how much time elapses, on average, before an issue receives its first reply from a human being. The starting point for this measured interval can be either of the following:
    • The moment when your user requested support, if no bot ever responded to the request
    • The moment when a bot finished its work on an issue and reassigned the issue to a human agent
  • Time to First Response — Still available but no longer the default. Measures how much time elapses before a ticket receives its first reply from either a bot or an agent.
Pre-built reports in the support analytics template app default to “Time to First Human Response” metrics. However, the “Time to First Response” metrics are still available to you when you edit a visualization.
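As a rough illustration, the starting-point logic described above can be sketched in a few lines of Python. The field names here are hypothetical, not the template app's actual schema:

```python
from datetime import datetime

def ttfhr_seconds(issue):
    """Time to first human response for one issue, in seconds.

    Hypothetical fields:
      requested_at         -- when the user requested support
      bot_handoff_at       -- when a bot reassigned the issue to a human (or None)
      first_human_reply_at -- the first reply from a human agent (or None)
    """
    if issue["first_human_reply_at"] is None:
        return None  # fully automated issue: no human response to measure
    # Start the clock at the bot handoff if a bot worked the issue first;
    # otherwise start at the original support request.
    start = issue["bot_handoff_at"] or issue["requested_at"]
    return (issue["first_human_reply_at"] - start).total_seconds()

issue = {
    "requested_at": datetime(2021, 7, 1, 9, 0, 0),
    "bot_handoff_at": datetime(2021, 7, 1, 9, 2, 0),
    "first_human_reply_at": datetime(2021, 7, 1, 9, 5, 0),
}
print(ttfhr_seconds(issue))  # 180.0 -- three minutes after the bot handoff
```

For an issue that no bot ever touched, `bot_handoff_at` would be `None`, so the clock starts at `requested_at` and TTFHR equals TTFR.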

Options to edit a visualization

An issue management visualization can use a TTFHR-based or a TTFR-based metric.

All five TTFHR-based metrics are identical in essence; they differ only in the scale of their rounding. For example:
  • “Time to First Human Response (seconds)” rounds its measurements to the nearest second.
  • “Time to First Human Response (minutes)” rounds its measurements to the nearest minute.
All five TTFR-based metrics are likewise identical in essence; they, too, differ only in the scale of their rounding.
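The relationship between the rounding scales can be sketched as follows; the function name is illustrative, not part of the template app:

```python
def round_to_scale(raw_seconds, scale_seconds):
    """Round a raw measurement to the nearest multiple of the given scale."""
    return round(raw_seconds / scale_seconds) * scale_seconds

raw = 187.0  # a raw measurement of 3 minutes, 7 seconds
print(round_to_scale(raw, 1))   # 187 -- nearest second
print(round_to_scale(raw, 60))  # 180 -- nearest minute
```

The coarser the scale, the more detail is rounded away, which is why the appropriate scale depends on the time windows your SLOs and SLAs specify.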

Your brand’s service-level objectives (SLOs) and service-level agreements (SLAs) dictate which scale of rounding is most appropriate per report or per visualization.

Even if you previously downloaded and installed an earlier release of the Support Analytics template app for Helpshift, you must now download and install the current release. The pre-built reports in earlier releases do not isolate human responses from bot responses.

Use of Power BI template apps requires a paid professional license for Microsoft Power BI.


Even though a TTFHR report is not directly comparable to a TTFR report, comparing them can still be instructive. Keep the following in mind:
  • When your issue is fully automated, its TTFHR values are null. You cannot measure the responsiveness of agents where no agents are involved.
  • When your issue is partially automated, a direct TTFR-TTFHR comparison may appear to show less responsiveness in the TTFHR visualization than in the TTFR visualization. This appearance is misleading, however: the two visualizations measure different things, and TTFHR numbers are never diluted by the superhuman speed of bots.
  • In contrast, when your issue is fully manual, any TTFR-TTFHR visualization differences are likely to be small. No bots are involved in either case.
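A minimal sketch of the three cases above, using hypothetical per-issue timings expressed as minutes after the support request:

```python
def first_response_metrics(issue):
    """Return (TTFR, TTFHR) in minutes for one hypothetical issue record."""
    replies = [t for t in (issue.get("bot_reply"), issue.get("human_reply"))
               if t is not None]
    ttfr = min(replies) if replies else None  # first reply from bot OR human
    if issue.get("human_reply") is None:
        ttfhr = None  # fully automated: no human responsiveness to measure
    else:
        # Clock starts at the bot handoff if there was one, else at minute 0.
        ttfhr = issue["human_reply"] - (issue.get("bot_handoff") or 0)
    return ttfr, ttfhr

fully_automated  = {"bot_reply": 0, "human_reply": None}
partly_automated = {"bot_reply": 0, "bot_handoff": 2, "human_reply": 5}
fully_manual     = {"human_reply": 4}

print(first_response_metrics(fully_automated))   # (0, None)
print(first_response_metrics(partly_automated))  # (0, 3)
print(first_response_metrics(fully_manual))      # (4, 4)
```

Note how the partially automated issue shows a near-instant TTFR of 0 minutes but a TTFHR of 3 minutes, while the fully manual issue yields the same value for both metrics.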