The following glossary defines the terms and metrics used in SensAI Predict. To learn more about Predict, see What Is SensAI Predict, And How Does It Work?

Each entry below explains what the term is, what it does, how it works, and where to learn more.
SensAI Predict: Our AI engine, which uses machine learning to predict the category of an incoming Issue and classify it accordingly. To learn more about what Predict can do, read: What Is SensAI Predict, And How Does It Work?
Model: A mapping of related Issue and Label data of the same language. Models learn from the data that you provide to associate Labels with certain message content. Models are language-specific, so you will create a separate Model for each language you support. For example, you may need to set up an English Model and a Spanish Model to use Predict for those two languages. To learn more about how Models work, read: What Is SensAI Predict, And How Does It Work?
Label: The category Predict assigns to an incoming Issue based on the user’s first message. You can then use these Labels in your workflows to route the new Issue to the right Agent, instantly reply to the user via Automations, and more. Common Label types include account, billing, and connection. To learn how to plan your Labels, read: How Do I Prepare My Data For The Predict Model?
Dataset: A zip file containing a CSV of your users’ first messages and their corresponding Labels. Models learn from the dataset you upload to start classifying Issues. To learn how to prepare your dataset, read: How Do I Prepare My Data For The Predict Model?
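A minimal sketch of assembling such a dataset: a CSV of first messages and their Labels, packaged as a zip. The column names "message" and "label" and the sample rows are illustrative assumptions; check the data-preparation article linked above for the exact format Predict expects.

```python
# Sketch of building a Predict-style dataset: CSV of (first message, Label)
# pairs, zipped for upload. Column names and rows are hypothetical.
import csv
import zipfile

rows = [
    ("I can't log in to my account", "account"),
    ("Why was I charged twice this month?", "billing"),
    ("The app keeps disconnecting", "connection"),
]

with open("dataset.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["message", "label"])  # header row (assumed names)
    writer.writerows(rows)

with zipfile.ZipFile("dataset.zip", "w") as zf:
    zf.write("dataset.csv")
```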
Accuracy: The percentage of times that an Agent did not mark the predicted Label as wrong.

Accuracy reflects the Model’s performance. Your Accuracy calculation includes all Labels within the Model.

The formula for Accuracy is: (N - r) / N

N: Number of Issues for which any Label was predicted
r: Number of Issues for which any predicted Label was changed to another Label or marked as wrong

For an example of how this formula is applied to Labels, see How Are Accuracy And Precision Calculated In Predict?
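The Accuracy formula above can be sketched as a short function; the counts in the example are invented for illustration.

```python
# Worked example of the Accuracy formula from this glossary: (N - r) / N.
def accuracy(n_predicted: int, n_rejected: int) -> float:
    """n_predicted: N, Issues for which any Label was predicted.
    n_rejected:  r, Issues whose predicted Label was changed or marked wrong.
    """
    if n_predicted == 0:
        return 0.0  # no predictions yet; a convention, not from the docs
    return (n_predicted - n_rejected) / n_predicted

# 200 Issues received a predicted Label; Agents corrected 30 of them.
print(f"{accuracy(200, 30):.0%}")  # 85%
```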
Precision: The percentage of times an Agent did not mark a specific Label as wrong when it was predicted. Precision is calculated per Label, so it can differ from one Label to another. The formula for Precision is: (L - w) / L

L: Number of Issues for which a specific Label was predicted
w: Number of Issues for which a specific predicted Label was changed to another Label or marked as wrong

For an example of how this formula is applied to Labels, see How Are Accuracy And Precision Calculated In Predict?
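Because Precision is computed per Label, it is natural to tally L and w separately for each Label. A small sketch with invented tallies:

```python
# Per-Label Precision, (L - w) / L, computed from hypothetical tallies.
from typing import Dict, Tuple

# label -> (L: times predicted, w: times changed or marked wrong); sample data
tallies: Dict[str, Tuple[int, int]] = {
    "billing": (120, 6),
    "account": (80, 20),
}

def precision(predicted: int, wrong: int) -> float:
    return (predicted - wrong) / predicted if predicted else 0.0

for label, (pred, wrong) in tallies.items():
    print(f"{label}: {precision(pred, wrong):.0%}")
# billing: 95%
# account: 75%
```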
Confidence: A numerical value assigned to each Label while Predict evaluates an Issue. Predict determines the Confidence value from the likelihood that a given Label applies to the Issue, based on the data available to that Model.

While an Issue is being evaluated by Predict, a Confidence value is assigned to all Labels in the Model. If the highest Confidence value exceeds the Confidence Threshold as defined by your team, then the Label associated with that Confidence value is assigned to the Issue.

To learn more, read: What Is The Confidence Threshold, And What Value Should I Set For It?
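The thresholding step described above can be sketched as follows: score every Label, then assign the top Label only if its Confidence exceeds the team's Confidence Threshold. The scores and Label names here are invented.

```python
# Sketch of Confidence thresholding: assign the highest-Confidence Label
# only if it exceeds the Confidence Threshold; otherwise assign nothing.
from typing import Dict, Optional

def pick_label(confidences: Dict[str, float], threshold: float) -> Optional[str]:
    top = max(confidences, key=confidences.get)  # Label with highest Confidence
    return top if confidences[top] > threshold else None

scores = {"billing": 0.91, "account": 0.06, "connection": 0.03}
print(pick_label(scores, threshold=0.80))  # billing
print(pick_label(scores, threshold=0.95))  # None (no Label assigned)
```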
Confidence Threshold: A value set by the Admins of your team. If an Issue’s highest Confidence value does not exceed this threshold, Predict does not assign a Label to the Issue. To learn more, read: What Is The Confidence Threshold, And What Value Should I Set For It?
Issues labeled in last 30 days: The percentage of Issues labeled by this Model, per the set Confidence Threshold, over the last 30 days up until yesterday. For a given Model: % of Issues classified in the last 30 days = number of Issues labeled / total number of Issues for the Model
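The arithmetic behind this metric is a simple ratio; the counts below are made up for illustration.

```python
# "Issues labeled in last 30 days": labeled Issues divided by all Issues
# the Model saw in the window. Counts are hypothetical.
labeled = 450  # Issues the Model labeled (Confidence cleared the threshold)
total = 600    # all Issues routed to the Model in the last 30 days

pct_labeled = labeled / total
print(f"{pct_labeled:.0%}")  # 75%
```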