View detailed reports on your bot's performance
Overview
Ada offers a variety of detailed reports you can use to measure your bot's performance. This article outlines the individual reports, what they measure, and how the various filters work. Unless otherwise noted, you can view these reports by going to Measure > Reports in your Ada dashboard.
Generally, these reports don't include data from test users. That means that when you're testing your bot, you don't have to worry about skewing your report results. The exception to this, as noted below, is SMS campaigns, because there's no way to mark SMS message recipients as test users.
Learn about each report
Click a report name below to expand it and learn about the metrics it includes. Note that the reports that are available in your bot will vary based on your Ada subscription. If you have any questions, don't hesitate to contact your Ada team.
For more information on the filters you can use to refine your report data, see the Filter the data that appears in a report section of this page.
View the results from comparing Answer content variants, so you can choose the option that performs best. For more information on A/B testing, see Run an A/B test.
Metric | Definition |
---|---|
Shown | The total number of times the test was presented to chatters. |
Testing | The event selected to measure the test against. This was configured upon test setup. |
Result | The status of the test. |

View a detailed breakdown of how your Answer variants performed. This breakdown appears when you click a specific A/B test to see how it performed. For more information on A/B testing, see Run an A/B test.
Metric | Definition |
---|---|
Significance test results | A rating of whether the test results are conclusive or inconclusive, based on a two-sided significance test at a 95% confidence level. This is a statistical analysis based on the normal distribution of the data: a z-score represents a given variant's mean versus the overall population's mean, and that z-score determines the probability that a given variant falls within two standard deviations of the mean. (Two standard deviations corresponds to a 95% confidence level.) Possible results: Conclusive or Inconclusive. |
Control | The performance of the control version of the Answer content, including monetary value and the percent of conversations where this variant triggered the Event. The control's success rate is the number of conversations where the control triggered the Event, divided by the number of conversations where the control appeared. |
Variant(s) | The performance of the Answer content variants, including monetary value and the percent of conversations where each variant triggered the Event. Each variant's success rate is the number of conversations where the variant triggered the Event, divided by the number of conversations where the variant appeared. |
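To make the significance test concrete, here is a minimal sketch of a pooled two-proportion z-test, assuming the success metric is a conversion proportion (conversations that triggered the Event, out of conversations where the variant appeared). Ada's exact computation isn't documented here, so treat this as illustrative rather than the product's implementation:

```python
from math import erf, sqrt

def two_sided_p_value(successes_a: int, trials_a: int,
                      successes_b: int, trials_b: int) -> float:
    """Two-sided p-value for the difference between two conversion
    rates, using a pooled two-proportion z-test."""
    p_a = successes_a / trials_a
    p_b = successes_b / trials_b
    # Pooled proportion under the null hypothesis that both rates are equal.
    pooled = (successes_a + successes_b) / (trials_a + trials_b)
    se = sqrt(pooled * (1 - pooled) * (1 / trials_a + 1 / trials_b))
    z = (p_a - p_b) / se
    # Standard normal CDF via erf; two-sided tail probability.
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Control: 120 of 1,000 conversations triggered the Event; variant: 150 of 1,000.
p = two_sided_p_value(120, 1000, 150, 1000)
print(p < 0.05)  # True: conclusive at 95% confidence
```

Under this kind of test, a result would be rated conclusive at 95% confidence when the p-value falls below 0.05.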

View customer satisfaction (CSAT) surveys where the scores are attributed to human support, available if the "Automatically survey after chat" option is turned on.
Note
When you filter this report by date, it uses the date that the user submitted their satisfaction survey, rather than the date the conversation started. As a result, the number of conversations that appear in this report may vary from other reports.
There are three ways you can set up customer satisfaction reviews (a sketch of how ratings are bucketed follows the lists below):
A numeric scale from 1 to 5
An emoji scale, from the most negative emoji to the most positive
A binary 👍🏼 or 👎🏼 rating
Your bot counts the following as positive reviews:
4 or 5
Either of the two most positive emoji
👍🏼
Your bot counts the following as negative reviews:
1, 2, or 3
Any of the three most negative emoji
👎🏼
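Here is a minimal sketch of that bucketing, assuming the emoji and thumbs scales map onto the same 1-to-5 range (treating thumbs up as 5 and thumbs down as 1 is an assumption for illustration):

```python
def classify_review(rating: int) -> str:
    """Bucket a 1-5 CSAT rating the way this report does:
    4 or 5 is positive; 1, 2, or 3 is negative."""
    if rating not in range(1, 6):
        raise ValueError("rating must be between 1 and 5")
    return "positive" if rating >= 4 else "negative"

ratings = [5, 3, 4, 1, 5]
positive = sum(classify_review(r) == "positive" for r in ratings)
print(f"Live chat score: {positive / len(ratings):.0%}")  # 60%
```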
Metric | Definition |
---|---|
Live chat score | The percent of agent reviews that were positive. Your bot calculates this by dividing the number of positive reviews by the total number of reviews. |
Agent name | The name of the agent who spoke with the chatter immediately before the chatter provided the review. If multiple agents interacted with the chatter in the same conversation, all of those agents are assigned the chatter's CSAT score, even if only one agent's name appears in this list. Agent names appear in this list if they have at least one review in the time periods selected for either data display or comparison. |
Avg score | The percent of agent reviews that were positive. |
# of positive | The number of agent reviews that were positive. |
# of negative | The number of agent reviews that were negative. |
Total # of surveys | The total number of agent reviews. |

View feedback your chatters have given your Answers via thumbs up or down responses, or via the Positive Review or Negative Review locked Answers. For more information, see Improve Answer training using chatter feedback.
Metric | Definition |
---|---|
Feedback rate | The percent of reviewable Answers that chatters reviewed. Your bot calculates this by dividing the number of reviews chatters submitted by the number of reviewable Answers shown. |
Total thumbs up | The number of reviewable Answers that chatters gave positive reviews. |
Total thumbs down | The number of reviewable Answers that chatters gave negative reviews. |
Reviewable Answers | A list of reviewable Answers that received chatter reviews in the selected timeframe. |
Frequency | The number of times this Answer appeared to chatters. |
Thumbs up | The number of positive reviews chatters gave this Answer. |
Positive review rate | The percent of time chatters gave this Answer a positive review, out of all the times it appeared. Your bot calculates this by dividing Thumbs up by Frequency. |
Thumbs down | The number of negative reviews chatters gave this Answer. |
Negative review rate | The percent of time chatters gave this Answer a negative review, out of all the times it appeared. Your bot calculates this by dividing Thumbs down by Frequency. |
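As a rough sketch of how these per-Answer rates relate to each other (the denominators follow from the definitions above; this is illustrative, not a documented implementation):

```python
def answer_feedback_rates(frequency: int, thumbs_up: int, thumbs_down: int) -> dict:
    """Per-Answer review rates, using the number of times the Answer
    appeared (Frequency) as the denominator."""
    return {
        "positive_review_rate": thumbs_up / frequency,
        "negative_review_rate": thumbs_down / frequency,
        "feedback_rate": (thumbs_up + thumbs_down) / frequency,
    }

print(answer_feedback_rates(frequency=200, thumbs_up=30, thumbs_down=10))
# {'positive_review_rate': 0.15, 'negative_review_rate': 0.05, 'feedback_rate': 0.2}
```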

View a list of Answers that most often preceded a chatter's request for human support.
Metric | Definition |
---|---|
Answer name | The last Answer to appear before the chatter requested a handoff to human support. |
Frequency | The number of times the Answer appeared to chatters. |
Total handoffs | The number of times the Answer appeared to a chatter immediately before they requested a handoff to human support. Your bot counts all handoff attempts as handoffs. Therefore, if your bot attempts to hand a chatter off to human support twice, and only the second attempt is successful, this report still counts it as two handoffs. However, the platform that handles your human support handoffs may count these differently. |
Handoff rate | The percent of time chatters requested a handoff to human support after seeing this Answer, out of all the times it appeared. Your bot calculates this by dividing Total handoffs by Frequency. |
Percent of total handoffs | The percent of escalated conversations where the handoff occurred directly after this Answer, out of all Answers that were followed by handoffs. Your bot calculates this by dividing this Answer's Total handoffs by the total handoffs across all Answers. |
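The two rates differ only in their denominators, as this sketch with hypothetical Answer names and counts illustrates:

```python
# Hypothetical per-Answer counts: how often each Answer appeared, and how
# often a handoff immediately followed it.
answers = {
    "Reset password": {"frequency": 500, "handoffs": 25},
    "Billing question": {"frequency": 200, "handoffs": 50},
}

total_handoffs = sum(a["handoffs"] for a in answers.values())
for name, a in answers.items():
    handoff_rate = a["handoffs"] / a["frequency"]    # out of appearances
    share_of_total = a["handoffs"] / total_handoffs  # out of all handoffs
    print(f"{name}: handoff rate {handoff_rate:.0%}, "
          f"{share_of_total:.0%} of total handoffs")
```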

View how your knowledge base article links performed after your bot suggested them. For more information, see Let chatters search your Zendesk or Salesforce knowledge base content.
Metric | Definition |
---|---|
Article name | The name of the knowledge base article. |
Suggestion | The number of times your bot suggested the article to chatters. |
Clicks | The number of times chatters clicked on a unique message that contained a knowledge base link. If a chatter clicked multiple times on the same message, it only counts as one click. However, if your bot suggests the same link multiple times, and a chatter clicks those links in different messages, those count as separate clicks. |
Click rate | The percent of time chatters clicked on suggested article links. |

The automated resolution rate is an analysis of how many conversations your bot was able to resolve automatically.
Note
This feature is still in Early Access, so you may not see it yet in your bot. For more information, contact your Ada team.
To calculate the automated resolution rate, Ada takes a random sample of your bot's conversations, then analyzes each conversation in the sample to understand both the chatter's intent and the bot's response. Based on that analysis, it assigns a classification of either Resolved or Not Resolved to each conversation in the sample.
For a conversation to be considered automatically resolved, the conversation must be:
Relevant - Ada effectively understood the chatter's inquiry, and provided directly related information or assistance.
Accurate - Ada provided correct, up-to-date information.
Safe - Ada interacted with the chatter in a respectful manner and avoided engaging in topics that could cause danger or harm.
Contained - Ada addressed the chatter's inquiry without having to hand them off to a human agent.
While Containment Rate is useful for a quick glance at the proportion of bot conversations that didn't escalate to a human agent, automated resolution rate takes it a step further. By measuring the success of those conversations and the content they contain, you can get a much better idea of how helpful your bot content really is.
In the Conversations portion of the Automated Resolution Rate page, you can view a summary of what each chatter was looking for, how your bot classified the conversation, and its reasoning. If you need more information, you can click a row to view the entire conversation transcript.
Metric | Definition |
---|---|
Automated Resolution Rate | The percentage of conversations in your sample that were classified as automatically resolved. Your bot calculates this by dividing the number of Resolved conversations in the sample by the total number of conversations in the sample. |
Error margin | Because automated resolutions are measured by sampling, the error margin tells you how much you can expect the sampled results to differ from the actual rate if you were to measure automated resolutions on every single conversation. For example, if your bot lists your automated resolution rate as 40% with an error margin of ±3%, then repeating the same sampling over and over would yield results that fluctuate between 37% (40% − 3%) and 43% (40% + 3%). |
Containment Rate | The percent of conversations that did not result in a handoff to human support. |
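The error margin behaves like the standard margin of error for a proportion measured on a random sample. Here is a minimal sketch; the sample size and the exact formula Ada uses are assumptions for illustration:

```python
from math import sqrt

def error_margin(p: float, sample_size: int, z: float = 1.96) -> float:
    """Margin of error for a sampled proportion; z = 1.96 corresponds
    to a 95% confidence level."""
    return z * sqrt(p * (1 - p) / sample_size)

rate = 0.40  # 40% of sampled conversations classified as Resolved
print(f"{rate:.0%} ± {error_margin(rate, sample_size=1000):.1%}")  # 40% ± 3.0%
```

Note how the margin shrinks as the sample grows: with a sample of 4,000 conversations, the same 40% rate carries a margin of roughly ±1.5%.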
View the average amount of time chatters spent talking with your bot, for conversations that didn't end in handoffs to human support.
This report applies winsorization to all of its metrics to handle outliers: your bot calculates the 90th percentile of all handle times, and replaces any handle time above that limit with the 90th percentile value, as sketched below.
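As an illustration of the winsorization step (the exact percentile method is an assumption; handle times here are in seconds):

```python
def winsorize_90(handle_times: list[float]) -> list[float]:
    """Cap every handle time at the 90th percentile of the data,
    using the nearest-rank percentile method."""
    ordered = sorted(handle_times)
    limit = ordered[max(0, round(0.9 * len(ordered)) - 1)]
    return [min(t, limit) for t in handle_times]

times = [30, 45, 60, 50, 40, 55, 35, 65, 70, 3600]  # one extreme outlier
capped = winsorize_90(times)
print(sum(capped) / len(capped))  # 52.0 -- no longer dominated by the 3600 outlier
```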
Metric | Definition |
---|---|
Avg handle time when contained | The average amount of time chatters spent talking with your bot, for conversations that didn't end in handoffs to human support. |
Avg handle time before escalation | The average amount of time chatters spent talking to your bot before handoff, for conversations where chatters escalated to human support. |
Avg handle time with agents | The average amount of time chatters spent talking to live support agents. |

View the proactive campaign messages you have configured to start via SMS, and how often those messages were attempted, delivered successfully, and replied to. For more information, see Start text conversations using proactive campaigns for SMS.
Unlike web content, there is no way to mark SMS conversations as test content. As a result, this report may include data from your internal tests.
Metric | Definition |
---|---|
Campaign name | The name of the campaign. |
Attempted | The number of campaign messages Ada attempted to send chatters via SMS. |
Delivered | The number of campaign messages Ada attempted to send chatters via SMS that didn't result in delivery errors. |
Engaged | The number of campaign messages chatters replied to via SMS. |

View how a specific SMS campaign has performed. For more information, see Start text conversations using proactive campaigns for SMS.
Unlike web content, there is no way to mark SMS conversations as test content. As a result, this report may include data from your internal tests.
Metric | Definition |
---|---|
Attempted | The number of campaign messages Ada attempted to send chatters via SMS. |
Delivered | The number of campaign messages Ada successfully delivered to chatters via SMS, and the percent of successful message deliveries out of all delivery attempts. |
Engaged | The number of SMS campaign messages chatters replied to, and the percent of messages replied to out of all successful deliveries. |

View the proactive campaign messages you have configured to appear on web, and how often those messages have been shown, opened, and replied to. For more information, see Start conversations using basic proactive campaigns and Start customizable interactions using advanced proactive campaigns.
Metric | Definition |
---|---|
Campaign name | The name of the campaign. |
Shown | The number of times your bot showed chatters the campaign message. |
Opened | The percent of campaign messages shown that chatters opened. Your bot calculates this by dividing the number of opened campaign messages by the number shown. |
Engaged | The percent of campaign messages shown that chatters responded to. Your bot calculates this by dividing the number of campaign messages responded to by the number shown. |

View how a specific web campaign has performed. For more information, see Start conversations using basic proactive campaigns and Start customizable interactions using advanced proactive campaigns.
Metric | Definition |
---|---|
Shown | The number of times your bot showed chatters the campaign message. |
Opened | The number and percent of campaign messages shown that chatters opened. |
Engaged | The number and percent of campaign messages shown that chatters responded to. |

View the percent of conversations where your bot required at least one clarification. For more information, see Understand the Needs Clarification and Not Understood Answers.
Metric | Definition |
---|---|
Clarification rate | The percent of conversations in which the Needs Clarification Answer appeared at least once. |

View how often chatters were able to self-serve instead of escalating to human support.
Metric | Definition |
---|---|
Containment rate | The percent of conversations that did not result in a handoff to human support. |

View the number of bot, chatter, and agent messages per conversation. In the example below, bot messages are numbered in parentheses, chatter messages in square brackets, and agent messages in curly brackets:
Bot:
- Hello! (1)
- Hello! How can I be of assistance today? (2)
Chatter:
- [1] Hello
- [2] What is the status of my order?
Bot:
- I can check on that for you. (3)
- What is your order number? (4)
Chatter:
- [3] abc123
Bot:
- Let me fetch that information for you... (5)
- Your order is currently being packaged for shipping. (6)
- Your estimated delivery date is Dec 25. (7)
Chatter:
- [4] that is too long. let me speak to an agent
Bot:
- Understood. Connecting you to the next available Agent (8)
Agent:
- Hello my name is Ada. How can I further help you? {1}
Chatter:
- [5] I need my order sooner. please cancel it
Agent:
- Sorry about the delay. I will cancel your order {2}
- Your order has been cancelled {3}
Chatter:
- [6] Thank you
Metric | Definition |
---|---|
Number of conversations | The number of conversations where a chatter sent at least one message to your bot. |
Bot messages | The number of conversations (y-axis) that contained a given number of messages your bot sent (x-axis). The example above, where bot messages are counted in parentheses, counts as one conversation with 8 bot messages. |
Agent messages | The number of conversations (y-axis) that contained a given number of messages agents sent (x-axis). The example above, where agent messages are counted in curly brackets, counts as one conversation with 3 agent messages. |
Chatter messages | The number of conversations (y-axis) that contained a given number of messages chatters sent (x-axis). The example above, where chatter messages are counted in square brackets, counts as one conversation with 6 chatter messages. |
Number of messages (x-axis) | The number of each type of message per conversation. Roughly 95% of conversations have fewer than 45 messages of any one type, which is why the upper end of the scale groups all conversations with 45 or more of any one type of message. |
Number of conversations (y-axis) | The number of conversations that fall in each quantity of messages. |
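A minimal sketch of how conversations might be bucketed into these histograms, assuming one record per conversation with per-party message counts (the example conversation above contributes bot=8, chatter=6, agent=3):

```python
from collections import Counter

conversations = [
    {"bot": 8, "chatter": 6, "agent": 3},  # the example conversation above
    {"bot": 4, "chatter": 2, "agent": 0},
    {"bot": 8, "chatter": 5, "agent": 0},
]

def histogram(party: str, cap: int = 45) -> Counter:
    """Bucket conversations by message count; 45 or more share one bucket."""
    return Counter(min(c[party], cap) for c in conversations)

print(histogram("bot"))  # Counter({8: 2, 4: 1})
```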

View the number of conversations initiated, engaged, and escalated in your bot.
Metric | Definition |
---|---|
Opens | The number of conversations where a chatter opened your bot and was presented with a greeting. Every conversation contains one greeting; even if the greeting consists of a series of messages, it counts as a single greeting, and only one of those messages needs to be sent for the conversation to count as an open. |
Engaged | The number of conversations where a chatter sent at least one message to your bot. A conversation counts as engaged once a chatter sends a message, regardless of whether your bot understands the message. |
Escalated | The number of conversations where a chatter requested an escalation to human support. |

View a list of topics your chatters talk about. For more information, see Track conversation topics.
This report isn't listed with the other reports; instead, you can see it if you go to Conversations > Topics in your Ada dashboard.
Metric | Definition |
---|---|
Topics | A list of conversation topics that bot builders in your organization have configured. |
Volume | The number of conversations that contain the conversation topic keywords. |
Handoffs | The number of conversations that contain the conversation topic keywords and that were escalated to human support. |
Updated by | The last bot builder who updated the conversation topic. |

View how a particular conversation topic performed. For more information, see Track conversation topics.
This report isn't listed with the other reports; instead, you can see it if you go to Conversations > Topics in your Ada dashboard and click on a topic to see more detail.
There are three ways you can set up customer satisfaction reviews:
A numeric scale from 1 to 5
An emoji scale, from the most negative emoji to the most positive
A binary 👍🏼 or 👎🏼 rating
Your bot counts the following as positive reviews:
4 or 5
Either of the two most positive emoji
👍🏼
Your bot counts the following as negative reviews:
1, 2, or 3
Any of the three most negative emoji
👎🏼
Metric | Definition |
---|---|
Volume | The number of conversations that contain the conversation topic keywords. |
Handoffs | The number of conversations that contain the conversation topic keywords and that were escalated to human support. |
Customer satisfaction score | Of all conversations that contained this topic's keywords, the percent that received positive customer satisfaction reviews. |

View the percent of your bot's conversations that chatters reviewed positively. For more information, see Collect and analyze chatter satisfaction data with Satisfaction Surveys.
Chatters can rate conversations in surveys at the end of the chat, in the Anytime Survey, or in a survey triggered by an Answer with a Satisfaction Survey block. If a survey appears more than once in a single conversation, the bot shows the chatter their previous selection so they can update their feedback; only the most recent rating is recorded per conversation.
There are three ways you can set up customer satisfaction reviews:
A numeric scale from 1 to 5
An emoji scale, from the most negative emoji to the most positive
A binary 👍🏼 or 👎🏼 rating
Your bot counts the following as positive reviews:
4 or 5
Either of the two most positive emoji
👍🏼
Your bot counts the following as negative reviews:
1, 2, or 3
Any of the three most negative emoji
👎🏼
Metric | Definition |
---|---|
Overall score | The percent of conversations chatters reviewed positively, out of all conversations they reviewed. |
Bot only score | The percent of conversations chatters reviewed positively, out of all conversations that did not escalate to a live agent. |
Live chat score | The percent of conversations chatters reviewed positively, out of all conversations that escalated to a live agent. |

View how often customers chose to chat with your bot.
Metric | Definition |
---|---|
Engagement rate | The percent of conversations where chatters sent at least one message or quick reply to your bot. |

View your bot's tracked events, how often they occurred, and the monetary values associated with them. For more information, see Create and track chatter actions.
Metric | Definition |
---|---|
Total count (top) | The total number of events that occurred. |
Total value | The total monetary value of all of the events that occurred. |
Event name | The name of the event being measured. |
Total count (table) | The number of times the event occurred. |
Total value | The total monetary value of all of the occurrences of the event, based on the value assigned to the event when it was configured. |

View how a specific event performed. For more information, see Create and track chatter actions.
Metric | Definition |
---|---|
Total count | The number of times the event occurred. |
Total value | The total monetary value of all of the occurrences of the event, based on the value assigned to the event when it was configured. |

View how often your bot's goals were met, so you can track and measure valuable business interactions. For more information, see Set goals to measure your bot's impact.
Location | Metric | Definition |
---|---|---|
Top | Goal completion | The number of times any goals in the table were completed. |
Goal conversion rate | The percent of conversations in which any goals in the list were completed. | |
Goal value | The total monetary value of all goals in the list. | |
Table | Goal name | The name of the goal being measured. |
Goal completion | The number of times the goal was completed. | |
Goal conversion rate | The percent of conversations in which the goal was completed. | |
Goal value | The total monetary value associated with the goal. Your bot calculates this by multiplying the number of goal completions by the monetary value assigned to the goal. | |

View how a specific goal performed. For more information, see Set goals to measure your bot's impact.
Metric | Definition |
---|---|
Goal completion | The number of times the goal was completed. |
Goal conversion rate | The percent of conversations where the goal was completed. |
Goal value | The total monetary value associated with the goal. Your bot calculates this by multiplying the number of goal completions by the monetary value assigned to the goal. |
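Taken together, the goal metrics relate like this (a sketch under the assumption that goal value is completions multiplied by the value assigned to the goal; the counts are hypothetical):

```python
def goal_metrics(completions: int, conversations: int,
                 value_per_completion: float) -> dict:
    """The three goal metrics for a single goal."""
    return {
        "goal_completion": completions,
        "goal_conversion_rate": completions / conversations,
        "goal_value": completions * value_per_completion,
    }

print(goal_metrics(completions=80, conversations=2000, value_per_completion=12.50))
# {'goal_completion': 80, 'goal_conversion_rate': 0.04, 'goal_value': 1000.0}
```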

View the click-through rates for links presented via Link Message or Web Window blocks.
Metric | Definition |
---|---|
Total shown | The total number of links your bot showed to chatters. Multiple instances of the same link count as one link. |
Total clicks | The total number of times chatters clicked on a unique message that contained a link. If a chatter clicks multiple times on the same message, it only counts as one click. However, if the same link appears multiple times as part of the same conversation, and a chatter clicks more than one instance of that link, the clicks are counted separately. |
Click rate | The percent of links chatters clicked on, out of all the links shown. |
URL | The link URL. |
Answers | The Answers that contain the Link Message or Web Window block with the URL your bot showed to chatters. |
Shown | The number of times your bot showed this link to chatters. Links that contain variables (such as a user ID) are counted as the same "base" link. |
Clicked | The number of times chatters clicked on this link. Links that contain variables are counted as the same "base" link. |
Click rate | The percent of links chatters clicked on, out of all the links shown. |
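The click-counting rules above amount to deduplicating on the message, not the URL. A minimal sketch, with hypothetical conversation, message, and URL identifiers:

```python
# Each event is (conversation_id, message_id, url).
click_events = [
    ("conv1", "msg1", "https://example.com/help"),
    ("conv1", "msg1", "https://example.com/help"),  # repeat click on the same message: ignored
    ("conv1", "msg2", "https://example.com/help"),  # same URL in a new message: counted
]

unique_clicks = {(conv, msg) for conv, msg, _ in click_events}
print(len(unique_clicks))  # 2
```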

View a list of Answers that appear most often in conversations.
Metric | Definition |
---|---|
Answer name | A list of all Answers that appeared to chatters, sorted in descending order by frequency by default. This list excludes greeting Answers. |
Frequency | The total number of times the Answer appeared to chatters. |
Percent of total Answers | The percent of time your bot showed this Answer to chatters, out of all Answers it showed to chatters. |

View how often your bot was able to recognize and answer chattersโ questions. For more information, see Understand the Needs Clarification and Not Understood Answers.
Metric | Definition |
---|---|
Recognition rate | The percent of Answers your bot sent that were not the Not Understood Answer. This includes text messages, suggestions, quick replies, knowledge base suggestions, and clarifications, and excludes greeting Answers. You don't have to aim for this rate to be 100%: for chatter questions that are incoherent or have no training, the Not Understood Answer is appropriate and expected. |
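A sketch of the calculation implied by this definition, assuming a list of the Answer names the bot sent with greetings already excluded (using "Not Understood" as the Answer name is an assumption for illustration):

```python
def recognition_rate(answers_sent: list[str]) -> float:
    """Share of non-greeting Answers that were not the Not Understood Answer."""
    recognized = sum(name != "Not Understood" for name in answers_sent)
    return recognized / len(answers_sent)

sent = ["Order status", "Not Understood", "Reset password", "Refund policy"]
print(f"{recognition_rate(sent):.0%}")  # 75%
```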

View the results of your customer satisfaction (CSAT) survey. For more information, see Collect and analyze chatter satisfaction data with Satisfaction Surveys.
Note
When you filter this report by date, it uses the date that the user submitted their satisfaction survey, rather than the date the conversation started. As a result, the number of conversations that appear in this report may vary from other reports.
There are three ways you can set up customer satisfaction reviews:
A numeric scale from 1 to 5
An emoji scale, from the most negative emoji to the most positive
A binary 👍🏼 or 👎🏼 rating
Your bot counts the following as positive reviews:
4 or 5
Either of the two most positive emoji
👍🏼
Your bot counts the following as negative reviews:
1, 2, or 3
Any of the three most negative emoji
👎🏼
Metric | Definition |
---|---|
Last submitted | The most recent time a chatter submitted a satisfaction survey. |
Agent | The agent, if any, who participated in the conversation. If multiple agents participated in the conversation, this is the agent who participated closest to the end of the chat. |
Survey type | The type of survey the chatter responded to. If the chatter responded to multiple survey types, this is the one that happened closest to the end of the chat. Possible survey types: the end-of-chat survey, the Anytime Survey, or a survey triggered by an Answer with a Satisfaction Survey block. |
Rated | The satisfaction rating the chatter selected. |
Reason for rating | The reason(s) that the chatter selected in the survey follow-up question, if any. |
Resolution | The chatterโs response, if any, to whether your bot was able to resolve their issue. This can either be yes or no. |
Comments | Additional comments, if any, that the chatter wanted to include in the survey about their experience. |

View answer and conversation volumes by Answer tag. For more information, see Manage content using tags and descriptions.
Metric | Definition |
---|---|
Total Answers | The number of Answers with tags that appeared in conversations. |
Conversation volume | The number of conversations where your bot showed at least one Answer with a particular tag to chatters. If a conversation has two Answers with the same tag, your bot only counts the tag once. |
Tags | The tag that was assigned to at least one Answer in the conversation. |
Answer frequency | The number of Answers your bot showed to chatters with the associated tag. |
% of all Answers | The percent of all Answers your bot showed to chatters that had the associated tag. |
# of conversations | The number of conversations where an Answer with the associated tag appeared. |
% of all conversations | The percent of all conversations where an Answer with the associated tag appeared. |

View the total number of Answers your bot has served over time (excluding greetings).
Metric | Definition |
---|---|
Total Answers | The total number of Answers your bot showed to chatters, excluding greetings. |

Filter the data that appears in a report
Filter data by date
To filter a report by date:
Click the date filter drop-down.
Define your date range by one of the following:
Select a predefined range from the list on the left.
Type the filter start date in the Starting field. Type the filter end date in the Ending field.
Click the starting date on the calendar on the left, and the ending date on the calendar on the right.
Click Apply.
The date filter drop-down lets you choose from a list of preset date ranges, or select Custom… to specify your own range using the calendar selector.
Filter data by additional criteria
The list of available filters differs for each report, depending on the data the report includes. Clicking the Add Filter drop-down menu gives you access to the filters relevant to the report you're viewing.
Previous Timeframe: Display the immediately preceding timeframe of the same length to compare against the current selection. Graphs also display a figure representing the delta (difference) between ranges (i.e., how much your bot's volume rose or dropped between timeframes).
Exclude Locked Answers: Graphs and tables only display volumes for Answers created after bot creation (for more details on locked Answers, see "Answers That Don't Need Training" here). This removes volume for Answers like "Greeting" and "Not Understood."
Language (if the Multilingual feature is enabled): Include or exclude volume for different languages if your bot has content in other languages.
Platform: Isolate the different platforms your bot is visible in or interacts with (e.g., Nuance, Zendesk, SMS).
Browser: Isolate users on specific internet browsers (e.g., Chrome, Firefox).
Device: Isolate users on specific devices and operating systems (e.g., Windows, iPhone, Android).
Answers: Isolate specific answer(s). This can be used to check the performance of an answer or multiple answers over time.
Interaction Type: Isolate answers that result from questions that were clicked (quick reply buttons) or typed.
Include Test User: Include conversations originating from the Ada dashboard test bot. Test bot conversations are excluded by default.
Filter by Variable: View only the conversations which include one or more variables. For each variable, you can define specific content the variable must contain, or simply whether the variable Is Set or Is Not Set with any data.
Additional information
Report data is updated approximately every hour.
Reports are in the time zone set in your profile.
Printing
We recommend viewing your bot's data in the dashboard for the best experience. However, if you need to save the report as a PDF or print it physically, use the following recommendations to limit rendering issues:
Click Print.
In the Print window that appears, beside Destination, select either Save as PDF or a printer.
Click More settings to display additional print settings.
Set Margins to Minimum.
Set Scale to Custom, then change the value to 70.
Alternatively, you can set the Paper size to A3 (11-3/4 x 16-1/2 in) or Legal (8.5 x 14 in).
Under Options, select the Background graphics checkbox.
Right before saving or printing, scroll through your print preview, and beside Pages, change the number of pages you want to include in your PDF or printout. The settings you changed above may affect how these pages render.
If your destination is Save as PDF, click Save. If your destination is a printer, click Print.
Have any questions? Contact your Ada team, or email us at help@ada.support.