
View detailed reports on your bot's performance

Overview

There is a variety of detailed reports you can use to measure your bot's performance. This article outlines the individual reports, what they measure, and how the various filters work. Unless otherwise noted, you can view these reports by going to Measure > Reports in your Ada dashboard.

Generally, these reports don't include data from test users. That means that when you're testing your bot, you don't have to worry about skewing your report results. The exception to this, as noted below, is SMS campaigns, because there's no way to mark SMS message recipients as test users.

Learn about each report

Click a report name below to expand it and learn about the metrics it includes. Note that the reports that are available in your bot will vary based on your Ada subscription. If you have any questions, don't hesitate to contact your Ada team.

For more information on the filters you can use to refine your report data, see the Filter the data that appears in a report section of this page.

A/B Testing Overview

View the results from comparing Answer content variants, so you can choose the option that performs best. For more information on A/B testing, see Run an A/B test.


Shown

The total number of times the test was presented to chatters.

Testing

The event selected to measure the test against. This was configured upon test setup.

Result

The status of the test. Possible results:

  • Draft: The test has never been live; variants can still be added and/or removed.

  • Active: The test is currently running.

  • Complete: The test has been completed.

A/B Testing Breakdown

View a detailed breakdown of how your Answer variants performed. This breakdown appears when you click a specific A/B test. For more information on A/B testing, see Run an A/B test.


Significance test results

A rating of whether the test results are conclusive or inconclusive, based on a two-sided significance test with 95% confidence.

This is a statistical analysis based on the normal distribution of the data, using the z-score to compare a given variant's mean against the overall population's mean. We then use the z-score to determine the probability that a given variant is within 2 standard deviations of the mean. (2 standard deviations represents a 95% confidence level.) Possible results:

  • Conclusive: There is enough data to determine that a specific variant (including the control) leads to a better conversion rate or better accomplishes the event objective.

  • Inconclusive: There is not enough data to establish a proper population mean, so it can't be determined whether a variant resulted in a better conversion rate.

Control

The performance of the control version of the Answer content, including monetary value and percent of conversations where this variant triggered the Event.

The control's success rate is calculated with the formula (control successes) / (control occurrences) * 100. If the number of control occurrences is 0, then the control's success rate is also 0.

Variant(s)

The performance of the Answer content variants, including monetary value and percent of conversations where this variant triggered the Event.

Each variant's success rate is calculated with the formula (variant successes) / (variant occurrences) * 100. If the number of variant occurrences is 0, then the variant's success rate is also 0.
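To make the success rate formula and the significance rating concrete, here's a minimal Python sketch. The pooled two-proportion z-test is one plausible reading of the z-score analysis described above, not necessarily Ada's exact implementation, and the counts are illustrative.

```python
from math import sqrt, erf

def success_rate(successes: int, occurrences: int) -> float:
    """(successes) / (occurrences) * 100, or 0 when there were no occurrences."""
    return successes / occurrences * 100 if occurrences else 0.0

def two_sided_p_value(s_a: int, n_a: int, s_b: int, n_b: int) -> float:
    """Two-proportion z-test comparing a variant (a) against the control (b)."""
    p_pool = (s_a + s_b) / (n_a + n_b)                   # pooled conversion rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (s_a / n_a - s_b / n_b) / se
    phi = 0.5 * (1 + erf(abs(z) / sqrt(2)))              # standard normal CDF
    return 2 * (1 - phi)                                 # two-sided p-value

# Illustrative counts: the variant converted 110/1000, the control 80/1000.
print(success_rate(110, 1000))                           # 11.0
p = two_sided_p_value(110, 1000, 80, 1000)
print("Conclusive" if p < 0.05 else "Inconclusive")      # 95% confidence level
```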

Agent Satisfaction Score

View customer satisfaction (CSAT) surveys where the scores are attributed to human support, available if the “Automatically survey after chat” option is turned on.

note

When you filter this report by date, it uses the date that the user submitted their satisfaction survey, rather than the date the conversation started. As a result, the number of conversations that appear in this report may vary from other reports.

There are four ways you can set up customer satisfaction reviews, each with different scales for recording feedback:

Rating type | Negative review | Positive review
Numeric (5-point scale) | 1, 2, or 3 | 4 or 5
Numeric (10-point scale) | 1, 2, 3, 4, 5, or 6 | 7, 8, 9, or 10
Emoji (5-point scale) | 😠, 🙁, or 😐 | 🙂 or 😍
Thumbs up/down (binary) | 👎 | 👍

Live chat score

The percent of agent reviews that were positive. Your bot calculates this with the formula SUM (positive agent reviews) / SUM (all agent reviews) * 100.

Agent name

The name of the agent who spoke with the chatter immediately before the chatter provided the review. If multiple agents interacted with the chatter in the same conversation, all of those agents are assigned the chatter's CSAT score, even though only one agent's name appears in this list.

Agent names appear in this list if they have at least one review in the time periods selected for either data display or for comparison.

Avg score

The percent of agent reviews that were positive.

# of positive

The number of agent reviews that were positive.

# of negative

The number of agent reviews that were negative.

Total # of surveys

The total number of agent reviews.
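To show how the rating scales above feed the live chat score formula, here's a hedged Python sketch; the rating-type labels and the shape of the review data are hypothetical, not Ada's schema.

```python
def is_positive(rating_type: str, rating) -> bool:
    """Classify a raw survey response using the scales in the table above."""
    if rating_type == "numeric_5":
        return rating >= 4                  # 4 or 5 is positive
    if rating_type == "numeric_10":
        return rating >= 7                  # 7 through 10 is positive
    if rating_type == "emoji_5":
        return rating in ("🙂", "😍")
    if rating_type == "thumbs":
        return rating == "👍"
    raise ValueError(f"unknown rating type: {rating_type}")

def live_chat_score(reviews) -> float:
    """SUM(positive agent reviews) / SUM(all agent reviews) * 100."""
    if not reviews:
        return 0.0
    positive = sum(is_positive(kind, rating) for kind, rating in reviews)
    return positive / len(reviews) * 100

reviews = [("numeric_5", 5), ("thumbs", "👎"), ("emoji_5", "😍")]
print(round(live_chat_score(reviews), 1))   # 66.7
```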

Answer Performance

View feedback your chatters have given your Answers via thumbs up or down responses, or via the Positive Review or Negative Review locked Answers. For more information, see Improve Answer training using chatter feedback.

Metric | Definition
Feedback rate | The percent of reviewable Answers that chatters reviewed. Your bot calculates this with the formula ((total thumbs up) + (total thumbs down)) / (bot reviewable answers shown) * 100.
Total thumbs up | The number of reviewable Answers that chatters gave positive reviews.
Total thumbs down | The number of reviewable Answers that chatters gave negative reviews.
Reviewable Answers | A list of reviewable Answers that received chatter reviews in the selected timeframe.
Frequency | The number of times this Answer appeared to chatters.
Thumbs up | The number of positive reviews chatters gave this Answer.
Positive review rate | The percent of time chatters gave this Answer a positive review, out of all the times it appeared. Your bot calculates this with the formula (number of positive reviews) / (Answer frequency) * 100.
Thumbs down | The number of negative reviews chatters gave this Answer.
Negative review rate | The percent of time chatters gave this Answer a negative review, out of all the times it appeared. Your bot calculates this with the formula (number of negative reviews) / (Answer frequency) * 100.
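All three rate formulas in this table are percentages with a divide-by-zero guard. A minimal sketch with made-up counts:

```python
def pct(part: int, whole: int) -> float:
    """Percentage with a divide-by-zero guard, as used throughout these reports."""
    return part / whole * 100 if whole else 0.0

# Report-level feedback rate: (thumbs up + thumbs down) / reviewable Answers shown.
print(pct(30 + 10, 500))   # 8.0

# Per-Answer review rates: reviews / times the Answer appeared (its frequency).
print(pct(30, 200))        # positive review rate: 15.0
print(pct(10, 200))        # negative review rate: 5.0
```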

Answers Resulting in Handoffs

View a list of Answers that most often preceded a chatter's request for human support.


Answer name

The last Answer to appear before the chatter requested a handoff to human support.

Frequency

The number of times the Answer appeared to chatters.

Total handoffs

The number of times the Answer appeared to a chatter immediately before they requested a handoff to human support.

Your bot counts all handoff attempts as handoffs. Therefore, if your bot attempts to hand a chatter off to human support twice, and only the second attempt is successful, this report still counts it as two handoffs. However, the platform that handles your human support handoffs may count these differently.

Handoff rate

The percent of time chatters requested a handoff to human support after seeing this Answer, out of all the times it appeared. Your bot calculates this with the formula (total handoffs) / frequency * 100.

Percent of total handoffs

The percent of escalated conversations where the handoff occurred directly after this Answer, out of all Answers that were followed by handoffs. Your bot calculates this with the formula SUM (handoffs for Answer) / SUM (handoffs for all Answers) * 100.
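To show how these metrics combine, and how every handoff attempt counts, here's a sketch over a hypothetical impression log; the Answer names and event layout are illustrative, not Ada's schema.

```python
from collections import Counter

# One entry per Answer impression: (answer_name, handoff_requested_right_after).
impressions = [
    ("Reset password", False),
    ("Reset password", True),
    ("Billing question", True),
    ("Billing question", True),   # a second attempt still counts as a handoff
    ("Reset password", False),
]

frequency = Counter(name for name, _ in impressions)
handoffs = Counter(name for name, requested in impressions if requested)
total_handoffs = sum(handoffs.values())

for name in frequency:
    rate = handoffs[name] / frequency[name] * 100         # handoff rate
    share = handoffs[name] / total_handoffs * 100 if total_handoffs else 0.0
    print(f"{name}: handoff rate {rate:.1f}%, {share:.1f}% of total handoffs")
```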

Article Performance

View how your knowledge base article links performed after your bot suggested them. For more information, see Let chatters search your Zendesk or Salesforce knowledge base content.


Article name

The name of the knowledge base article.

Suggestion

The number of times your bot suggested the article to chatters.

Clicks

The number of times chatters clicked on a unique message that contained a knowledge base link.

If a chatter clicked multiple times on the same message, it only counts as one click. However, if your bot suggests the same link multiple times, and a chatter clicks those links in different messages, those count as separate clicks.

Click rate

The percent of time chatters clicked on suggested article links.
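The click-counting rule above (repeat clicks on one message are ignored, but the same link in different messages counts separately) amounts to deduplicating click events per message. A sketch with a hypothetical click log:

```python
from collections import Counter

# One entry per click event: (conversation_id, message_id, article_name).
clicks = [
    ("c1", "m1", "Shipping FAQ"),
    ("c1", "m1", "Shipping FAQ"),   # repeat click on the same message: ignored
    ("c1", "m7", "Shipping FAQ"),   # same article in a different message: counted
    ("c2", "m3", "Returns policy"),
]

# Deduplicate on (conversation, message): at most one click per unique message.
unique_clicks = set(clicks)

per_article = Counter(article for _, _, article in unique_clicks)
print(per_article)   # Counter({'Shipping FAQ': 2, 'Returns policy': 1})
```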

Automated Resolution Rate

The automated resolution rate is an analysis of how many conversations your bot was able to resolve automatically.

To calculate the automated resolution rate, Ada takes a random sample of your bot's conversations, then analyzes each conversation in the sample to understand both the chatter's intent and the bot's response. Based on that analysis, it assigns a classification of either Resolved or Not Resolved to each conversation in the sample.

For a conversation to be considered automatically resolved, the conversation must be:

  • Relevant - Ada effectively understood the chatter's inquiry, and provided directly related information or assistance.

  • Accurate - Ada provided correct, up-to-date information.

  • Safe - Ada interacted with the chatter in a respectful manner and avoided engaging in topics that could cause danger or harm.

  • Contained - Ada addressed the chatter's inquiry without having to hand them off to a human agent.

    While Containment Rate can be useful for a quick look at the proportion of bot conversations that didn't escalate to a human agent, automated resolution rate takes it a step further. By measuring the success of those conversations and the content they contain, you can get a much better idea of how helpful your bot content really is.

In the Conversations portion of the Automated Resolution Rate page, you can view a summary of what each chatter was looking for, how your bot classified the conversation, and its reasoning. If you need more information, you can click a row to view the entire conversation transcript.


Automated Resolution Rate

The percentage of conversations in your sample that your bot determined were automatically resolved. Your bot calculates this with the formula Resolved conversations / (Resolved conversations + Not Resolved conversations).

Error margin

Because we measure automated resolutions by sampling, the error margin tells us how much we can expect sampled results to differ from the actual value if we were to measure automated resolutions on every single conversation.

For example, let's say your bot lists your automated resolution rate as 40%, with an error margin of ±3%. This means that if you were to conduct the same sampling over and over again, the results would fluctuate between 37% (40 - 3) and 43% (40 + 3) in 95% of cases.
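The worked example above matches the standard 95% margin of error for a sampled proportion, 1.96 * sqrt(p * (1 - p) / n). A sketch, assuming a simple random sample; the sample counts are illustrative:

```python
from math import sqrt

def resolution_rate_with_margin(resolved: int, not_resolved: int):
    """Point estimate plus a 95% margin of error, via the normal approximation."""
    n = resolved + not_resolved
    p = resolved / n
    margin = 1.96 * sqrt(p * (1 - p) / n)   # 1.96 ≈ z for 95% confidence
    return p * 100, margin * 100

rate, margin = resolution_rate_with_margin(resolved=400, not_resolved=600)
print(f"{rate:.0f}% ± {margin:.1f}%")       # 40% ± 3.0%
```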

Containment Rate

The percent of conversations that did not result in a handoff to human support.

Average Handle Time

View the average amount of time chatters spent talking with your bot, for conversations that didn’t end in handoffs to human support.

This report uses winsorization on all of its metrics. To handle outliers, your bot calculates the 90th percentile of all handle times. If a handle time is higher than the 90th percentile limit, your bot replaces it with the 90th percentile limit instead.
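A minimal sketch of that capping step, assuming handle times in seconds; Ada's actual pipeline may differ:

```python
import numpy as np

def winsorize_handle_times(seconds: np.ndarray) -> np.ndarray:
    """Replace any handle time above the 90th percentile with that limit."""
    limit = np.percentile(seconds, 90)
    return np.minimum(seconds, limit)

times = np.array([30, 45, 60, 90, 120, 150, 180, 240, 300, 3600])  # one outlier
print(winsorize_handle_times(times).mean())  # the outlier no longer dominates
```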

Metric | Definition
Avg handle time when contained | The average amount of time chatters spent talking with your bot, for conversations that didn't end in handoffs to human support.
Avg handle time before escalation | The average amount of time chatters spent talking to your bot before handoff, for conversations where chatters escalated to human support.
Avg handle time with agents | The average amount of time chatters spent talking to live support agents.

Campaign Performance (SMS)

View the proactive campaign messages you have configured to start via SMS, and how often those messages were attempted, delivered successfully, and replied to. For more information, see Start text conversations using proactive campaigns for SMS.

Unlike web content, there is no way to mark SMS conversations as test content. Be aware that this data may include data from your internal tests as a result.

Metric | Definition
Campaign name | The name of the campaign.
Attempted | The number of campaign messages Ada attempted to send chatters via SMS.
Delivered | The number of campaign messages Ada attempted to send chatters via SMS that didn't result in delivery errors.
Engaged | The number of campaign messages chatters replied to via SMS.

Campaign Breakdown (SMS)

View how a specific SMS campaign has performed. For more information, see Start text conversations using proactive campaigns for SMS.

Unlike web content, there is no way to mark SMS conversations as test content. Be aware that this data may include data from your internal tests as a result.

Metric | Definition
Attempted | The number of campaign messages Ada attempted to send chatters via SMS.
Delivered | The number of campaign messages Ada successfully delivered to chatters via SMS, and the percent of successful message deliveries out of all delivery attempts.
Engaged | The number of successfully delivered SMS campaign messages that chatters replied to, and the percent of those replies out of all successful deliveries.

Campaign Performance (Web)

View the proactive campaign messages you have configured to appear on web, and how often those messages have been shown, opened, and replied to. For more information, see Start conversations using basic proactive campaigns and Start customizable interactions using advanced proactive campaigns.

Metric | Definition
Campaign name | The name of the campaign.
Shown | The number of times your bot showed chatters the campaign message.
Opened | The percent of campaign messages shown that chatters opened. Your bot calculates this with the formula (messages opened) / (messages shown) * 100.
Engaged | The percent of campaign messages shown that chatters responded to. Your bot calculates this with the formula (messages responded to) / (messages shown) * 100.

Campaign Breakdown (Web)

View how a specific web campaign has performed. For more information, see Start conversations using basic proactive campaigns and Start customizable interactions using advanced proactive campaigns.

Metric | Definition
Shown | The number of times your bot showed chatters the campaign message.
Opened | The number and percent of campaign messages shown that chatters opened.
Engaged | The number and percent of campaign messages shown that chatters responded to.

Clarification Rate

View the percent of conversations where your bot required at least one clarification. For more information, see Understand the Needs Clarification and Not Understood Answers.

Metric | Definition
Clarification rate | The percent of conversations in which the Needs Clarification Answer appeared at least once.

Containment Rate

View how often chatters were able to self-serve instead of escalating to human support.

Metric | Definition
Containment rate | The percent of conversations that did not result in a handoff to human support.

Conversational Messages Volume

View the number of bot, chatter, and agent messages per conversation.

Example conversation:

Bot
- Hello! (1)
- Hello! How can I be of assistance today? (2)

Chatter
- Hello [1]
- What is the status of my order? [2]

Bot
- I can check on that for you. (3)
- What is your order number? (4)

Chatter
- abc123 [3]

Bot
- Let me fetch that information for you... (5)
- Your order is currently being packaged for shipping. (6)
- Your estimated delivery date is Dec 25. (7)

Chatter
- that is too long. let me speak to an agent [4]

Bot
- Understood. Connecting you to the next available Agent (8)

Agent
- Hello my name is Ada. How can I further help you? {1}

Chatter
- I need my order sooner. please cancel it [5]

Agent
- Sorry about the delay. I will cancel your order {2}
- Your order has been cancelled {3}

Chatter
- Thank you [6]

Number of conversations

The number of conversations where a chatter sent at least one message to your bot.

Bot messages

The number of conversations (y-axis) that contained a given number of messages your bot sent (x-axis).

In the example above, where bot messages are counted in parentheses (), this conversation would fall under 8 bot messages. Each response bubble counts as a single message, excluding messages that indicate a live agent has joined or left the chat. Emojis, links, pictures, and knowledge base articles all count as bot messages for this report.

Agent messages

The number of conversations (y-axis) that contained a given number of messages agents sent (x-axis).

In the example above, where agent messages are counted in curly brackets {}, this conversation would fall under 3 agent messages. Emojis, links, and pictures all count as agent messages for this report.

Chatter messages

The number of conversations (y-axis) that contained a given number of messages chatters sent (x-axis).

In the example above, where chatter messages are counted in square brackets [], this conversation would fall under 6 chatter messages. Emojis, links, and pictures count as chatter messages for this report.

Number of messages (x-axis)

The number of each type of message per conversation.

Roughly 95% of conversations have fewer than 45 messages of any one type, which is why the upper end of the scale groups all conversations with 45 or more of any one type of message.

Number of conversations (y-axis)

The number of conversations that fall in each quantity of messages.
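Putting the two axes together: each series is a histogram of conversations by message count, with everything at 45 or above folded into the top bucket. A sketch with made-up per-conversation counts:

```python
from collections import Counter

# Hypothetical per-conversation counts of one message type (e.g., bot messages).
bot_message_counts = [8, 3, 12, 47, 5, 61, 8]

# Conversations with 45 or more messages share the top bucket, per the x-axis note.
histogram = Counter(min(count, 45) for count in bot_message_counts)
print(sorted(histogram.items()))   # [(3, 1), (5, 1), (8, 2), (12, 1), (45, 2)]
```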

Conversations Breakdown

View the number of conversations initiated, engaged, and escalated in your bot.


Opens

The number of conversations where a chatter opened your bot and was presented with a greeting. Every conversation contains one greeting; the entire series of greeting messages counts as a single greeting, and only one of those messages needs to be sent for the conversation to count as an open.

Engaged

The number of conversations where a chatter sent at least one message to your bot.

A conversation counts as engaged once a chatter sends a message, regardless of whether your bot understands the message.

Escalated

The number of conversations where a chatter requested an escalation to human support.

Conversation Topics Overview

View a list of topics your chatters talk about. For more information, see Track conversation topics.

This report isn't listed with the other reports; instead, you can see it if you go to Conversations > Topics in your Ada dashboard.

Metric | Definition
Topics | A list of conversation topics that bot builders in your organization have configured.
Volume | The number of conversations that contain the conversation topic keywords.
Handoffs | The number of conversations that contain the conversation topic keywords and that were escalated to human support.
Updated by | The last bot builder who updated the conversation topic.

Conversation Topics Breakdown

View how a particular conversation topic performed. For more information, see Track conversation topics.

This report isn't listed with the other reports; instead, you can see it if you go to Conversations > Topics in your Ada dashboard and click on a topic to see more detail.

There are four ways you can set up customer satisfaction reviews, each with different scales for recording feedback:

Rating type | Negative review | Positive review
Numeric (5-point scale) | 1, 2, or 3 | 4 or 5
Numeric (10-point scale) | 1, 2, 3, 4, 5, or 6 | 7, 8, 9, or 10
Emoji (5-point scale) | 😠, 🙁, or 😐 | 🙂 or 😍
Thumbs up/down (binary) | 👎 | 👍

Metric | Definition
Volume | The number of conversations that contain the conversation topic keywords.
Handoffs | The number of conversations that contain the conversation topic keywords and that were escalated to human support.
Customer satisfaction score | Of all conversations that contained this topic's keywords, the percent that received positive customer satisfaction reviews.

Customer Satisfaction Score

View the percent of your bot's conversations that chatters reviewed positively. For more information, see Collect and analyze chatter satisfaction data with Satisfaction Surveys.

Chatters can rate conversations in surveys at the end of the chat, the Anytime Survey, or a survey triggered by an Answer with a Satisfaction Survey block. If a survey appears more than once in a single conversation, the bot shows the chatter their previous selection so they can update their feedback. Only the most recent rating is recorded per conversation; the latest rating overrides any previous ratings.

There are four ways you can set up customer satisfaction reviews, each with different scales for recording feedback:

Rating type | Negative review | Positive review
Numeric (5-point scale) | 1, 2, or 3 | 4 or 5
Numeric (10-point scale) | 1, 2, 3, 4, 5, or 6 | 7, 8, 9, or 10
Emoji (5-point scale) | 😠, 🙁, or 😐 | 🙂 or 😍
Thumbs up/down (binary) | 👎 | 👍

Metric | Definition
Overall score | The percent of conversations chatters reviewed positively, out of all conversations they reviewed.
Bot only score | The percent of conversations chatters reviewed positively, out of all conversations that did not escalate to a live agent.
Live chat score | The percent of conversations chatters reviewed positively, out of all conversations that escalated to a live agent.
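The rule that the latest rating overrides earlier ones amounts to keeping only the last rating per conversation. A sketch with a hypothetical, chronologically ordered survey log:

```python
# One entry per survey submission, in chronological order.
survey_events = [
    ("c1", "👎"),   # chatter rates mid-conversation...
    ("c1", "👍"),   # ...then updates their rating; only this one is kept
    ("c2", "👍"),
]

latest_rating = {}
for conversation_id, rating in survey_events:
    latest_rating[conversation_id] = rating   # later ratings override earlier ones

positive = sum(1 for rating in latest_rating.values() if rating == "👍")
print(positive / len(latest_rating) * 100)    # overall score: 100.0
```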

Engagement Rate

View how often customers chose to chat with your bot.

Metric | Definition
Engagement rate | The percent of conversations where chatters sent at least one message or quick reply to your bot.

Events Overview

View your bot’s tracked events, how often they occurred, and the monetary values associated with them. For more information, see Create and track chatter actions.

Metric | Definition
Total count (top) | The total number of events that occurred.
Total value (top) | The total monetary value of all of the events that occurred.
Event name | The name of the event being measured.
Total count (table) | The number of times the event occurred.
Total value (table) | The total monetary value of all of the occurrences of the event, based on the value assigned to the event when it was configured.

Events Breakdown

View how a specific event performed. For more information, see Create and track chatter actions.

Metric | Definition
Total count | The number of times the event occurred.
Total value | The total monetary value of all of the occurrences of the event, based on the value assigned to the event when it was configured.

Goals Overview

View how often your bot’s goals were met, so you can track and measure valuable business interactions. For more information, see Set goals to measure your bot's impact.

Location | Metric | Definition
Top | Goal completion | The number of times any goals in the table were completed.
Top | Goal conversion rate | The percent of conversations in which any goals in the list were completed.
Top | Goal value | The total monetary value of all goals in the list.
Table | Goal name | The name of the goal being measured.
Table | Goal completion | The number of times the goal was completed.
Table | Goal conversion rate | The percent of conversations in which the goal was completed.
Table | Goal value | The total monetary value associated with the goal. Your bot calculates this with the formula (number of conversations where the goal was completed) * (value assigned to the goal).

Goals Breakdown

View how a specific goal performed. For more information, see Set goals to measure your bot's impact.

Metric | Definition
Goal completion | The number of times the goal was completed.
Goal conversion rate | The percent of conversations where the goal was completed.
Goal value | The total monetary value associated with the goal. Your bot calculates this with the formula (number of conversations where the goal was completed) * (value assigned to the goal).

Link Click Performance

View the click-through rates for links presented via Link Message or Web Window blocks.


Total shown

The total number of links your bot showed to chatters. Multiple instances of the same link count as one link.

Total clicks

The total number of times chatters clicked on a unique message that contained a link.

If a chatter clicks multiple times on the same message, it only counts as one click. However, if the same link appears multiple times as part of the same conversation, and a chatter clicks more than one instance of that link, the clicks are counted separately.

Click rate

The percent of links chatters clicked on, out of all the links shown.

URL

The link URL.

Answers

The Answers that contain the Link Message or Web Window block with the URL your bot showed to chatters.

Shown

The number of times your bot showed this link to chatters. Links that contain variables (such as a user ID) are counted as the same “base” link.

Clicked

The number of times chatters clicked on this link. Links that contain variables are counted as the same “base” link.

Click rate

The percent of links chatters clicked on, out of all the links shown.
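The report doesn't spell out how links containing variables are reduced to one "base" link; one plausible approach is to strip the parts that vary, such as the query string. A purely illustrative sketch (the URLs are made up):

```python
from collections import Counter
from urllib.parse import urlsplit, urlunsplit

def base_link(url: str) -> str:
    """Group URLs that differ only in their query string (e.g., injected user IDs)."""
    parts = urlsplit(url)
    return urlunsplit((parts.scheme, parts.netloc, parts.path, "", ""))

shown = [
    "https://example.com/orders?user=123",
    "https://example.com/orders?user=456",   # same base link as the one above
    "https://example.com/returns",
]
print(Counter(base_link(url) for url in shown))
```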

Popular Answers

View a list of Answers that appear most often in conversations.

Metric | Definition
Answer name | A list of all Answers that appeared to chatters, sorted in descending order by frequency by default. This list excludes greeting Answers.
Frequency | The total number of times the Answer appeared to chatters.
Percent of total Answers | The percent of time your bot showed this Answer to chatters, out of all Answers it showed to chatters.

Recognition Rate

View how often your bot was able to recognize and answer chatters’ questions. For more information, see Understand the Needs Clarification and Not Understood Answers.


Recognition rate

The percent of Answers your bot sent that were not the Not Understood Answer, including text messages, suggestions, quick replies, knowledge base suggestions, and clarifications, and excluding greeting Answers.

You don't have to aim for this rate to be 100%. In the case of chatter questions that were either incoherent or didn't have any training, the Not Understood Answer would be appropriate and expected.

Satisfaction Survey Results

View the results of your customer satisfaction (CSAT) survey. For more information, see Collect and analyze chatter satisfaction data with Satisfaction Surveys.

note

When you filter this report by date, it uses the date that the user submitted their satisfaction survey, rather than the date the conversation started. As a result, the number of conversations that appear in this report may vary from other reports.

There are four ways you can set up customer satisfaction reviews, each with different scales for recording feedback:

Rating type | Negative review | Positive review
Numeric (5-point scale) | 1, 2, or 3 | 4 or 5
Numeric (10-point scale) | 1, 2, 3, 4, 5, or 6 | 7, 8, 9, or 10
Emoji (5-point scale) | 😠, 🙁, or 😐 | 🙂 or 😍
Thumbs up/down (binary) | 👎 | 👍

Last submitted

The most recent time a chatter submitted a satisfaction survey.

Agent

The agent, if any, who participated in the conversation. If multiple agents participated in the conversation, this is the agent who participated closest to the end of the chat.

Survey type

The type of survey the chatter responded to. If the chatter responded to multiple survey types, this is the one that happened closest to the end of the chat. Possible survey types:

  • End chat: The survey presented to the chatter when they click “End chat” outside of a handoff.

  • Anytime survey: If enabled, the survey the chatter can access at any point after sending your bot four messages.

  • Live agent: The survey chatters receive when they close the chat after speaking with an agent, or when an agent leaves the conversation.

  • CSAT block: The survey presented to chatters when the Satisfaction Survey block is included in an Answer.

Rated

The satisfaction rating the chatter selected.

Reason for rating

The reason(s) that the chatter selected in the survey follow-up question, if any.

Possible positive reasons:

  • Efficient chat

  • Helpful resolution

  • Knowledgeable support

  • Friendly tone

  • Easy to use

  • Bot was intelligent

  • Other

Possible negative reasons:

  • Took too long

  • Unhelpful resolution

  • Lack of expertise

  • Unfriendly tone

  • Technical issues

  • Bot didn't understand

  • Other

Resolution

The chatter’s response, if any, to whether your bot was able to resolve their issue. This can either be yes or no.

Comments

Additional comments, if any, that the chatter wanted to include in the survey about their experience.

Tags Overview

View answer and conversation volumes by Answer tag. For more information, see Manage content using tags and descriptions.

Metric | Definition
Total Answers | The number of Answers with tags that appeared in conversations.
Conversation volume | The number of conversations where your bot showed at least one Answer with a particular tag to chatters. If a conversation has two Answers with the same tag, your bot only counts the tag once.
Tags | The tag that was assigned to at least one Answer in the conversation.
Answer frequency | The number of Answers with the associated tag that your bot showed to chatters.
% of all Answers | The percent of all Answers your bot showed to chatters that had the associated tag.
# of conversations | The number of conversations where an Answer with the associated tag appeared.
% of all conversations | The percent of all conversations where an Answer with the associated tag appeared.
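The once-per-conversation rule for conversation volume amounts to collecting each conversation's tags into a set before counting. A sketch over a hypothetical log of Answers shown:

```python
from collections import Counter

# One entry per Answer shown: (conversation_id, tags on that Answer).
shown = [
    ("c1", {"billing"}),
    ("c1", {"billing", "refunds"}),   # "billing" repeats within c1: counted once
    ("c2", {"refunds"}),
]

tags_per_conversation = {}
for conversation_id, tags in shown:
    tags_per_conversation.setdefault(conversation_id, set()).update(tags)

conversation_volume = Counter(
    tag for tags in tags_per_conversation.values() for tag in tags
)
print(conversation_volume)   # Counter({'refunds': 2, 'billing': 1})
```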

Total Answers

View the total number of Answers your bot has served over time (excluding greetings).

Metric | Definition
Total Answers | The total number of Answers your bot showed to chatters, excluding greetings.

Filter the data that appears in a report

Filter data by date

To filter a report by date:

  1. Click the date filter drop-down.

  2. Define your date range by one of the following:

    • Select a predefined range from the list on the left.

    • Type the filter start date in the Starting field. Type the filter end date in the Ending field.

    • Click the starting date on the calendar on the left, and the ending date on the calendar on the right.

  3. Click Apply.

The date filter drop-down lets you specify the date range to filter the report's data by. You can select from a list of preset date ranges, or select Custom… to define your own range with the calendar selector.

Filter data by additional criteria

The list of available filters differs for each report, depending on the data the report includes. Clicking the Add Filter drop-down menu gives you access to the filters relevant to the report you're viewing.

  • Previous Timeframe: Display the immediately preceding timeframe of the same length to compare against the current selection. Graphs also display a figure representing the delta (difference) between ranges (i.e., how much your bot's volume rose or dropped between timeframes).

  • Exclude Locked Answers: Graphs and tables will only display volumes for Answers created after bot creation (for more details on locked Answers, see Answers That Don't Need Training). This removes volume for Answers like Greeting and Not Understood.

  • Language (if Multilingual feature enabled): Include/exclude volume of different languages if your bot has content in other languages.

  • Platform: Isolate the platforms your bot is visible in or interacts with (e.g., Nuance, Zendesk, SMS).

  • Browser: Isolate users on specific internet browsers (e.g., Chrome, Firefox).

  • Device: Isolate users on specific devices and operating systems (e.g., Windows, iPhone, Android).

  • Answers: Isolate specific answer(s). This can be used to check the performance of an answer or multiple answers over time.

  • Interaction Type: Isolate answers that result from questions that were clicked (quick reply buttons) or typed.

  • Include Test User: Include conversations originating from the Ada dashboard test bot. Test bot conversations are excluded by default.

  • Filter by Variable: View only the conversations that include one or more variables. For each variable, you can define specific content the variable must contain, or simply whether the variable Is Set or Is Not Set with any data.

Additional information

  • Report data is updated approximately every hour.

  • Reports are in the time zone set in your profile.

Printing

We recommend viewing your bot's data in the dashboard for the best experience. However, if you need to save the report as a PDF or print it physically, use the following recommendations to limit rendering issues:

  1. Click Print.

  2. In the Print window that appears, beside Destination, select either Save as PDF or a printer.

  3. Click More settings to display additional print settings.

  4. Set Margins to Minimum.

  5. Set Scale to Custom, then change the value to 70.

    • Alternatively, you can set the Paper size to A3 (11-3/4 x 16-1/2 in) or Legal (8.5 x 14 in).
  6. Under Options, select the Background graphics checkbox.

  7. Right before saving or printing, scroll through your print preview, and beside Pages, change the number of pages you want to include in your PDF or printout. The settings you changed above may affect how these pages render.

  8. If your destination is Save as PDF, click Save. If your destination is a printer, click Print.


Have any questions? Contact your Ada team—or email us at .