Performance reports
Measure your AI Agent's performance with a variety of detailed reports. This article outlines the individual reports, what they measure, and how the various filters work. Unless otherwise noted, you can view these reports by going to Performance > Reports in your Ada dashboard.
By default, these reports don't include data from test users. That means that when you're testing your AI Agent, you don't have to worry about skewing your report results.
Learn about each report
Click a report name below to expand it and learn about the metrics it includes. Note that the reports that are available in your AI Agent may vary based on your Ada subscription. If you have any questions, don't hesitate to contact your Ada team.
For more information on the filters you can use to refine your report data, see the Filter the data that appears in a report section of this page.
Automated resolution and containment
The automated resolution rate measures the proportion of conversations your AI Agent was able to resolve automatically.
To calculate the automated resolution rate, your AI Agent analyzes each completed conversation to understand both the customer's intent and the AI Agent's response. Based on that analysis, it then assigns a classification of either Resolved or Not Resolved to each conversation.
For a conversation to be considered automatically resolved, the conversation must be:
- Relevant: Ada effectively understood the customer's inquiry, and provided directly related information or assistance.
- Accurate: Ada provided correct, up-to-date information.
- Safe: Ada interacted with the customer in a respectful manner and avoided engaging in topics that caused danger or harm.
- Contained: Ada addressed the customer's inquiry without having to hand them off to a human agent.
While the containment rate can be a useful metric for a quick view of the proportion of AI Agent conversations that didn't escalate to a human agent, the automated resolution rate takes it a step further. By measuring whether those conversations actually succeeded, not just whether they avoided escalation, it gives you a much better idea of how helpful your AI Agent really is.
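To make the relationship between the two metrics concrete, here's a minimal Python sketch. The `Conversation` fields and both functions are hypothetical, not Ada's actual implementation; they only illustrate how the two rates relate.

```python
from dataclasses import dataclass

@dataclass
class Conversation:
    """Hypothetical record of a conversation, for illustration only."""
    ended: bool        # only ended conversations are assessed
    handed_off: bool   # escalated to a human agent
    resolved: bool     # classified Resolved: relevant, accurate, safe, and contained

def containment_rate(conversations: list[Conversation]) -> float:
    """Share of ended conversations that never reached a human agent."""
    ended = [c for c in conversations if c.ended]
    if not ended:
        return 0.0
    return sum(not c.handed_off for c in ended) / len(ended)

def automated_resolution_rate(conversations: list[Conversation]) -> float:
    """Share of ended conversations classified as Resolved."""
    ended = [c for c in conversations if c.ended]
    if not ended:
        return 0.0
    return sum(c.resolved for c in ended) / len(ended)
```

Because Contained is one of the four criteria for Resolved, every resolved conversation is also contained, so the automated resolution rate can never exceed the containment rate.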
Your AI Agent only assesses a conversation for automated resolution after the conversation has ended. On the automated resolution rate graph, a dotted line may appear to indicate that recent conversations may not have ended yet; once those conversations are analyzed, the automated resolution rate can fluctuate. For more information on how the conversation lifecycle impacts automated resolution, see automated resolution rate.
In this list, you can view a summary of what each customer was looking for, how your AI Agent classified the conversation, and its reasoning. If you need more information, you can click a row to view the entire conversation transcript.

API usage
View how often Ada performs each action, with errors highlighted and full logs available for download, so your team can troubleshoot effectively. You can access this report through the Reports tab (under Performance) in the left navigation menu, or directly through the report icon at the top of the Actions Hub.

Agent satisfaction score
View customer satisfaction (CSAT) surveys where the scores are attributed to human support, available if the "Automatically survey after chat" option is turned on.
When you filter this report by date, it uses the date that the user submitted their satisfaction survey, rather than the date the conversation started. As a result, the number of conversations that appear in this report may vary from other reports.
There are four ways you can set up customer satisfaction reviews, each with different scales for recording feedback:
Average handle time
View the average amount of time customers spent talking with your AI Agent, for conversations that didn't end in handoffs to human support.
This report uses winsorization on all of its metrics. To handle outliers, your AI Agent calculates the 90th percentile of all handle times. If a handle time is higher than the 90th percentile limit, your AI Agent replaces it with the 90th percentile limit instead.
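As an illustration, here's a minimal Python sketch of that capping step. The function name and the nearest-rank percentile method are assumptions for the example; Ada doesn't document the exact percentile calculation it uses.

```python
import math

def winsorize_handle_times(handle_times, upper_percentile=0.90):
    """Cap each handle time at the upper-percentile limit (illustrative sketch)."""
    if not handle_times:
        return []
    ordered = sorted(handle_times)
    # Nearest-rank percentile: the smallest value such that at least
    # upper_percentile of all observations fall at or below it.
    rank = max(0, math.ceil(upper_percentile * len(ordered)) - 1)
    limit = ordered[rank]
    return [min(t, limit) for t in handle_times]

# Hypothetical handle times in seconds; one extreme 2-hour outlier.
times = [120, 180, 240, 300, 360, 420, 480, 540, 600, 7200]
capped = winsorize_handle_times(times)
average_handle_time = sum(capped) / len(capped)  # 384.0 seconds
```

In this example, the single 2-hour outlier is capped at the 90th-percentile limit (600 seconds) before averaging, so one extreme conversation can't dominate the metric.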

Conversational messages volume
View the number of AI Agent, customer, and human agent messages per conversation.
Example conversation:
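In a hypothetical conversation, a customer sends 3 messages, the AI Agent replies with 4 messages, and, after a handoff, a human agent sends 2 messages. That conversation would contribute 3 customer messages, 4 AI Agent messages, and 2 human agent messages to this report.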

Conversations breakdown
View the number of conversations initiated, engaged, and escalated in your AI Agent.

Customer satisfaction score
View the percent of your AI Agent's conversations that customers reviewed positively. For more information, see Collect and analyze customer satisfaction data with Satisfaction Surveys.
There are four ways you can set up customer satisfaction reviews, each with different scales for recording feedback:

Knowledge usage
View which articles Ada uses most frequently in customer responses, and which articles are correlated with high or low automated resolution rates, as well as other performance metrics. The report includes conversation drill-throughs to support improvement workflows. You can access it through the Reports tab (under Performance) in the left navigation menu, or directly through the report icon at the top of the Knowledge Hub.

Satisfaction survey results
View the results of your customer satisfaction (CSAT) survey. For more information, see Collect and analyze customer satisfaction data with Satisfaction Surveys.
When you filter this report by date, it uses the date that the user submitted their satisfaction survey, rather than the date the conversation started. As a result, the number of conversations that appear in this report may vary from other reports.
There are four ways you can set up customer satisfaction reviews, each with different scales for recording feedback:

Proactive conversations
View a detailed breakdown of how your AI Agent is engaging customers through Proactive conversations and how effectively those interactions contribute to automated resolutions and customer satisfaction.
At the top of the report, you'll see a graph that compares:
- All Conversations: The total number of conversations that occurred within the selected date range.
- Proactive Conversations: The number of conversations initiated by your AI Agent through Proactive messages that received at least one customer response.
This graph allows you to understand the reach and uptake of Proactive messaging in the context of your broader customer engagement volume. By comparing trends in Proactive conversations to overall volume, you can assess how actively customers are engaging with proactive outreach efforts and identify opportunities to refine your messaging strategy.
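As a rough sketch of what the graph counts, consider the following Python snippet. The field names are hypothetical, not Ada's data model; the point is that a conversation only counts as Proactive here if it was started by a Proactive message and received at least one customer response.

```python
def proactive_graph_series(conversations):
    """Count the two series the graph compares, for one date range (sketch)."""
    all_conversations = len(conversations)
    proactive_conversations = sum(
        1 for c in conversations
        if c["started_by_proactive_message"] and c["customer_replies"] >= 1
    )
    return {
        "all_conversations": all_conversations,
        "proactive_conversations": proactive_conversations,
    }
```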
Beneath the graph, a table gives you a detailed breakdown of how each individual Proactive conversation is performing. Each row represents a specific Proactive conversation, with the following metrics displayed:

Filter the data that appears in a report
Filter data by date
To filter a report by date:
- Click the date filter drop-down.
- Define your date range by one of the following:
  - Select a predefined range from the list on the left.
  - Type the filter start date in the Starting field, and the filter end date in the Ending field.
  - Click the starting date on the calendar on the left, and the ending date on the calendar on the right.
- Click Apply.
The date filter drop-down lets you specify the date range to filter the report's data by. You can select from a list of preset date ranges, or select Custom… to specify your own using a calendar selector.
Filter data by additional criteria
The list of available filters differs for each report, depending on the data the report includes. Clicking the Add Filter drop-down menu gives you access to the filters relevant to the report you're viewing.
Use these options to control which data appears in a report.
- Include test user: Include data from conversations originating from the Ada dashboard test AI Agent. Test conversations are excluded by default.
- AR classification: The automated resolution classification for the conversation.
- Coaching: Conversations where one or more Coaching instructions were applied.
- CSAT: Customer satisfaction (CSAT) ratings submitted by end users.
- Article: Conversations that referenced one or more specific articles.
- Action: Conversations associated with one or more Actions.
- Playbook: Conversations associated with one or more Playbooks.
- Conversation category: Conversations whose assigned topics have been grouped under one or more categories.
- Generated topic: Conversations your AI Agent automatically assigned to one or more topics.
- Engaged: Conversations where an end user sent at least one message.
- Handoff: Conversations that were handed off to a human agent.
- Language (Multilingual feature required): Conversations that took place in one or more specific languages.
- Channel: The channel where the conversation took place. For example, Ada Web Chat, SMS, WhatsApp, and so on.
- Browser: Conversations where end users used specific browsers. For example, Chrome, Firefox, Safari, and so on.
- Device: Conversations where end users used a specific device or operating system. For example, Windows, iPhone, Android, and so on.
- Live agent: Conversations that involved one or more human agents.
- Status code: Conversations that include API calls that resulted in one or more specific status code types. For example, 1xx, 2xx, 3xx, and so on.
- Agent review: Conversations that include a human agent's review.
- Reason for rating: Conversations where end users selected one or more specific reasons when submitting a CSAT rating.
- Variable: Conversations that include one or more variables. You can filter by specific values, or by whether a variable Is Set or Is Not Set.
Additional information
- Report data is updated approximately every hour (but may take up to three hours).
- Reports are in the time zone set in your profile.
Printing
We recommend viewing your AI Agent's data in the dashboard for the best experience. However, if you need to save the report as a PDF or print it physically, use the following recommendations to limit rendering issues:
- Click Print.
- In the Print window that appears, beside Destination, select either Save as PDF or a printer.
- Click More settings to display additional print settings.
- Set Margins to Minimum.
- Set Scale to Custom, then change the value to 70.
  - Alternatively, you can set the Paper size to A3 (11-3/4 x 16-1/2 in) or Legal (8.5 x 14 in).
- Under Options, select the Background graphics checkbox.
- Right before saving or printing, scroll through your print preview, and beside Pages, change the number of pages you want to include in your PDF or printout. The settings you changed above may affect how these pages render.
- If your destination is Save as PDF, click Save. If your destination is a printer, click Print.