View detailed reports on your AI Agent's performance
Measure your AI Agent's performance with a variety of detailed reports. This article outlines the individual reports, what they measure, and how the various filters work. Unless otherwise noted, you can view these reports by going to Performance > Reports in your Ada dashboard.
By default, these reports don't include data from test users. That means that when you're testing your AI Agent, you don't have to worry about skewing your report results.
Learn about each report
Click a report name below to expand it and learn about the metrics it includes. Note that the reports that are available in your AI Agent may vary based on your Ada subscription. If you have any questions, don't hesitate to contact your Ada team.
For more information on the filters you can use to refine your report data, see the Filter the data that appears in a report section of this page.
Automated Resolution Rate
The automated resolution rate is an analysis of how many conversations your AI Agent was able to resolve automatically.
To calculate the automated resolution rate, your AI Agent analyzes each conversation to understand both the customer's intent and the AI Agent's response. Based on that analysis, it then assigns a classification of either Resolved or Not Resolved to each conversation.
For a conversation to be considered automatically resolved, the conversation must be:
-
Relevant - Ada effectively understood the customer's inquiry, and provided directly related information or assistance.
-
Accurate - Ada provided correct, up-to-date information.
-
Safe - Ada interacted with the customer in a respectful manner and avoided engaging in topics that caused danger or harm.
-
Contained - Ada addressed the customer's inquiry without having to hand them off to a human agent.
While Containment Rate can be a useful metric for a quick look at the proportion of AI Agent conversations that didn't escalate to a human agent, automated resolution rate takes it a step further. By measuring the success of those conversations and the content they contain, you can get a much better idea of how helpful your AI Agent really is.
In the Conversations portion of the Automated Resolution Rate page, you can view a summary of what each customer was looking for, how your AI Agent classified the conversation, and its reasoning. If you need more information, you can click a row to view the entire conversation transcript.
Metric | Definition |
---|---|
Automated Resolution Rate | The percentage of conversations that your AI Agent determined were automatically resolved, out of all conversations where customers engaged with your AI Agent. |
Containment Rate | The percent of conversations that did not result in a handoff to a human agent. |
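To make the relationship between these two metrics concrete, here's a minimal sketch of how they could be computed from per-conversation classifications. The data shape and field names (`resolved`, `escalated`) are illustrative only, not Ada's actual export schema.

```python
# Hypothetical sketch: computing automated resolution rate and containment
# rate from per-conversation classifications. Field names are illustrative.

def resolution_metrics(conversations):
    """Return (automated_resolution_rate, containment_rate) as percentages."""
    total = len(conversations)
    if total == 0:
        return 0.0, 0.0
    resolved = sum(1 for c in conversations if c["resolved"])
    contained = sum(1 for c in conversations if not c["escalated"])
    return 100 * resolved / total, 100 * contained / total

convos = [
    {"resolved": True,  "escalated": False},
    {"resolved": False, "escalated": False},  # contained, but not resolved
    {"resolved": False, "escalated": True},
    {"resolved": True,  "escalated": False},
]
arr, containment = resolution_metrics(convos)
print(arr, containment)  # 50.0 75.0
```

Note how the second conversation is contained but not resolved; this is exactly why containment rate alone can overstate how helpful your AI Agent is.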
Average Handle Time
View the average amount of time customers spent talking with your AI Agent, for conversations that didn't end in handoffs to human support.
This report uses winsorization on all of its metrics. To handle outliers, your AI Agent calculates the 90th percentile of all handle times. If a handle time is higher than the 90th percentile limit, your AI Agent replaces it with the 90th percentile limit instead.
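The winsorization step described above can be sketched as follows. This is an illustrative implementation, using a simple nearest-rank percentile for clarity; Ada's exact percentile method isn't documented in this article.

```python
# Illustrative sketch of winsorized averaging: handle times above the 90th
# percentile are replaced with the 90th-percentile value before averaging.

def winsorized_mean(handle_times_seconds):
    ordered = sorted(handle_times_seconds)
    # Nearest-rank 90th percentile: ceil(0.9 * n)-th smallest value.
    idx = max(0, -(-90 * len(ordered) // 100) - 1)
    cap = ordered[idx]
    capped = [min(t, cap) for t in ordered]
    return sum(capped) / len(capped)

avg = winsorized_mean([30, 45, 60, 60, 75, 90, 90, 120, 150, 3600])
print(avg)  # 87.0 — the 3600-second outlier is capped at 150 seconds
```

Without the cap, the single hour-long outlier would pull the average from 87 seconds up to well over 400, which is why this report winsorizes all of its metrics.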
Metric | Definition |
---|---|
Avg handle time when contained | The average amount of time customers spent talking with your AI Agent, for conversations that didn't end in handoffs to human support. |
Avg handle time before escalation | The average amount of time customers spent talking to your AI Agent before handoff, for conversations where customers escalated to human support. |
Avg handle time with agents | The average amount of time customers spent talking to live support agents. |
Containment Rate
View how often customers were able to self-serve instead of escalating to human support.
Metric | Definition |
---|---|
Containment rate | The percent of conversations that did not result in a handoff to human support. |
Conversational Messages Volume
View the number of AI Agent, customer, and human agent messages per conversation.
Example conversation:
AI Agent
- Hello! (1)
- Hello! How can I be of assistance today? (2)
Customer
- Hello [1]
- What is the status of my order? [2]
AI Agent
- I can check on that for you. (3)
- What is your order number? (4)
Customer
- abc123 [3]
AI Agent
- Let me fetch that information for you... (5)
- Your order is currently being packaged for shipping. (6)
- Your estimated delivery date is Dec 25. (7)
Customer
- that is too long. let me speak to an agent [4]
AI Agent
- Understood. Connecting you to the next available agent (8)
Human agent
- Hello my name is Sonia. How can I further help you? {1}
Customer
- I need my order sooner. please cancel it [5]
Human agent
- Sorry about the delay. I will cancel your order {2}
- Your order has been cancelled {3}
Customer
- Thank you [6]
Metric | Definition |
---|---|
Number of conversations | The number of conversations where a customer sent at least one message to your AI Agent. |
Messages sent | The number of conversations (y-axis) that contained a given number of messages your AI Agent sent (x-axis). In the example above, where AI Agent messages are counted in parentheses, the conversation counts as one conversation with 8 messages sent. |
Customer messages received | The number of conversations (y-axis) that contained a given number of messages customers sent (x-axis). In the example above, where customer messages are counted in square brackets, the conversation counts as one conversation with 6 customer messages received. |
Agent messages | The number of conversations (y-axis) that contained a given number of messages agents sent (x-axis). In the example above, where agent messages are counted in curly brackets, the conversation counts as one conversation with 3 agent messages. |
Number of messages (x-axis) | The number of each type of message per conversation. Roughly 95% of conversations have fewer than 45 messages of any one type, which is why the upper end of the scale groups all conversations with 45 or more of any one type of message. |
Number of conversations (y-axis) | The number of conversations that fall in each quantity of messages. |
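The bucketing described above — where all conversations with 45 or more of one message type share a single bucket — can be sketched like this. The data shape is illustrative only.

```python
# Sketch of the x-axis bucketing: count one sender's messages per
# conversation, grouping 45 or more into a single "45+" bucket.
from collections import Counter

def message_histogram(per_conversation_counts, cap=45):
    """Map message count -> number of conversations, capping at `cap`."""
    return Counter(min(n, cap) for n in per_conversation_counts)

# Example: the sample conversation above has 8 AI Agent messages.
hist = message_histogram([8, 8, 3, 50, 47])
print(hist[8], hist[45])  # 2 conversations with 8 messages; 2 in the 45+ bucket
```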
Conversations Breakdown
View the number of conversations initiated, engaged, and escalated in your AI Agent.
Metric | Definition |
---|---|
Opens | The number of conversations where a customer opened your AI Agent and was presented with a greeting. Every conversation contains one greeting; even if the greeting is a series of messages, it counts as a single greeting, and only one of those messages needs to be sent for the conversation to count as an open. |
Engaged | The number of conversations where a customer sent at least one message to your AI Agent. A conversation counts as engaged once a customer sends a message, regardless of whether your AI Agent understands the message. |
Escalated | The number of conversations where a customer requested an escalation to human support. |
Automatically Resolved | The number of conversations that your AI Agent automatically resolved. Note: Before July 31, 2024, this number was approximated based on the automated resolution rate, so it was subject to that rate's error margin. For more information, see Understand and improve your AI Agent's automated resolution rate. |
Customer Satisfaction Score
View the percent of your AI Agent's conversations that customers reviewed positively. For more information, see Collect and analyze customer satisfaction data with Satisfaction Surveys.
There are four ways you can set up customer satisfaction reviews, each with different scales for recording feedback:
Rating type | Negative review | Positive review |
---|---|---|
Numeric (5-point scale) | 1, 2, or 3 | 4 or 5 |
Numeric (10-point scale) | 1, 2, 3, 4, 5, or 6 | 7, 8, 9, or 10 |
Emoji (5-point scale) | 😞, 🙁, or 😐 | 🙂 or 😀 |
Thumbs up/down (binary) | 👎 | 👍 |
Metric | Definition |
---|---|
Overall score | The percent of conversations customers reviewed positively, out of all conversations they reviewed. |
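The rating-to-review mapping in the table above can be sketched as a small function. The rating-type labels come from this article; the threshold encoding and function itself are hypothetical.

```python
# Illustrative sketch: map raw ratings to positive/negative using the
# thresholds from the table above, then compute the overall score.

# Minimum value that counts as a positive review, per rating type.
# For the binary type, thumbs up is recorded as 1 and thumbs down as 0.
POSITIVE_THRESHOLDS = {"numeric_5": 4, "numeric_10": 7, "emoji_5": 4, "binary": 1}

def overall_score(ratings):
    """ratings: list of (rating_type, value) pairs. Returns percent positive."""
    if not ratings:
        return 0.0
    positive = sum(1 for kind, value in ratings
                   if value >= POSITIVE_THRESHOLDS[kind])
    return 100 * positive / len(ratings)

score = overall_score([("numeric_5", 5), ("numeric_10", 6),
                       ("binary", 1), ("numeric_5", 3)])
print(score)  # 50.0 — a 6 on the 10-point scale still counts as negative
```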
Satisfaction Survey Results
View the results of your customer satisfaction (CSAT) survey. For more information, see Collect and analyze customer satisfaction data with Satisfaction Surveys.
When you filter this report by date, it uses the date that the user submitted their satisfaction survey, rather than the date the conversation started. As a result, the number of conversations that appear in this report may vary from other reports.
There are four ways you can set up customer satisfaction reviews, each with different scales for recording feedback:
Rating type | Negative review | Positive review |
---|---|---|
Numeric (5-point scale) | 1, 2, or 3 | 4 or 5 |
Numeric (10-point scale) | 1, 2, 3, 4, 5, or 6 | 7, 8, 9, or 10 |
Emoji (5-point scale) | 😞, 🙁, or 😐 | 🙂 or 😀 |
Thumbs up/down (binary) | 👎 | 👍 |
Metric | Definition |
---|---|
Last submitted | The most recent time a customer submitted a satisfaction survey. |
Agent | The agent, if any, who participated in the conversation. If multiple agents participated in the conversation, this is the agent who participated closest to the end of the chat. |
Survey type | The type of survey the customer responded to. |
Rated | The satisfaction rating the customer selected. |
Reason for rating | The reason(s) that the customer selected in the survey follow-up question, if any. |
Resolution | The customer's response, if any, to whether your AI Agent was able to resolve their issue. This can either be yes or no. |
Comments | Additional comments, if any, that the customer wanted to include in the survey about their experience. |
Filter the data that appears in a report
Filter data by date
To filter a report by date:
-
Click the date filter drop-down.
-
Define your date range by one of the following:
-
Select a predefined range from the list on the left.
-
Type the filter start date in the Starting field. Type the filter end date in the Ending field.
-
Click the starting date on the calendar on the left, and the ending date on the calendar on the right.
-
-
Click Apply.
The date filter drop-down lets you specify the date range you want to filter the report's data by. You can select from a list of preset date ranges, or select Custom… to specify your own using a calendar selector.
Filter data by additional criteria
The list of available filters differs for each report, depending on the data the report includes. Clicking the Add Filter drop-down menu gives you access to the filters relevant to the report you're viewing.
-
Compare to past: Compare your current selection against the immediately preceding timeframe of the same length. Graphs also display a figure representing the delta (difference) between the two ranges (i.e., how much your AI Agent's volume rose or dropped between timeframes)
-
Browser: Isolate users from specific internet browsers (for example, Chrome, Firefox, Safari, etc.)
-
Channel: Isolate different platforms that your AI Agent is visible in or interacts with (for example, Ada Web Chat, SMS, WhatsApp, etc.)
-
Device: Isolate users from specific devices and operating systems (for example, Windows, iPhone, Android, etc.)
-
Language (if Multilingual feature enabled): Include or exclude volume by language, if your AI Agent has content in other languages.
-
Include Test User: Include conversations originating from the Ada dashboard test AI Agent. Test conversations are excluded by default.
-
Filter by Variable: View only the conversations which include one or more variables. For each variable, you can define specific content the variable must contain, or simply whether the variable Is Set or Is Not Set with any data.
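The Compare to past delta described in the list above amounts to a simple period-over-period calculation. Here's a hypothetical sketch, with made-up volume numbers:

```python
# Hypothetical sketch of the "Compare to past" delta: compare a metric for
# the current date range against the immediately preceding range of equal
# length, reporting the absolute and percent change.

def compare_to_past(current, previous):
    delta = current - previous
    pct = (100 * delta / previous) if previous else None
    return delta, pct

delta, pct = compare_to_past(current=1200, previous=1000)
print(delta, pct)  # 200 20.0 — volume rose 20% versus the prior period
```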
Additional information
-
Report data is updated approximately every hour (but may take up to three hours).
-
Reports are in the time zone set in your profile.
Printing
We recommend viewing your AI Agent's data in the dashboard for the best experience. However, if you need to save the report as a PDF or print it physically, use the following recommendations to limit rendering issues:
-
Click Print.
-
In the Print window that appears, beside Destination, select either Save as PDF or a printer.
-
Click More settings to display additional print settings.
-
Set Margins to Minimum.
-
Set Scale to Custom, then change the value to 70.
- Alternatively, you can set the Paper size to A3 (11-3/4 x 16-1/2 in) or Legal (8.5 x 14 in).
-
Under Options, select the Background graphics checkbox.
-
Right before saving or printing, scroll through your print preview, and beside Pages, change the number of pages you want to include in your PDF or printout. The settings you changed above may affect how these pages render.
-
If your destination is Save as PDF, click Save. If your destination is a printer, click Print.