
View detailed reports on your AI Agent's performance

Measure your AI Agent's performance with a variety of detailed reports. This article outlines the individual reports, what they measure, and how the various filters work. Unless otherwise noted, you can view these reports by going to Performance > Reports in your Ada dashboard.

By default, these reports don't include data from test users. That means that when you're testing your AI Agent, you don't have to worry about skewing your report results.

Learn about each report

Click a report name below to expand it and learn about the metrics it includes. Note that the reports that are available in your AI Agent may vary based on your Ada subscription. If you have any questions, don't hesitate to contact your Ada team.

For more information on the filters you can use to refine your report data, see the Filter the data that appears in a report section of this page.

Automated Resolution Rate

The automated resolution rate is an analysis of how many conversations your AI Agent was able to resolve automatically.

To calculate the automated resolution rate, your AI Agent takes a random sample of conversations, then analyzes each conversation in the sample to understand both the customer's intent and the AI Agent's response. Based on that analysis, it then assigns a classification of either Resolved or Not Resolved to each conversation in the sample.

For a conversation to be considered automatically resolved, the conversation must be:

  • Relevant - Ada effectively understood the customer's inquiry, and provided directly related information or assistance.

  • Accurate - Ada provided correct, up-to-date information.

  • Safe - Ada interacted with the customer in a respectful manner and avoided engaging in topics that caused danger or harm.

  • Contained - Ada addressed the customer's inquiry without having to hand them off to a human agent.

    While the containment rate is a useful metric for a quick glance at the proportion of AI Agent conversations that didn't escalate to a human agent, the automated resolution rate takes it a step further. By measuring whether those conversations actually succeeded, you can get a much better idea of how helpful your AI Agent really is.

In the Conversations portion of the Automated Resolution Rate page, you can view a summary of what each customer was looking for, how your AI Agent classified the conversation, and its reasoning. If you need more information, you can click a row to view the entire conversation transcript.

Metric definitions

Automated Resolution Rate

The percentage of conversations in your sample that your AI Agent determined were automatically resolved. Your AI Agent calculates this with the formula Resolved conversations / (Resolved conversations + Not Resolved conversations).

Error margin

Because we measure automated resolutions by sampling, the error margin tells us how much we can expect sampled results to differ from the actual rate if we were to measure automated resolutions on every single conversation.

For example, let's say your AI Agent lists your automated resolution rate as 40%, with an error margin of ±3%. This means that if you were to conduct the same sampling over and over again, the results would fall between 37% (40 - 3) and 43% (40 + 3) in 95% of cases. For a concrete sketch of how the rate and error margin relate, see the code example after these metric definitions.

Containment Rate

The percent of conversations that did not result in a handoff to a human agent.
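
To make the relationship between the sample size, the rate, and the error margin concrete, here's a minimal sketch in Python. It uses the textbook normal-approximation confidence interval for a proportion; Ada doesn't publish its exact sampling methodology, so the 1.96 z-score (roughly 95% confidence) and the sample size are illustrative assumptions.

```python
import math

def resolution_rate_with_margin(resolved: int, not_resolved: int) -> tuple[float, float]:
    """Return (rate, ~95% margin of error) for a sampled resolution rate.

    Illustrative only: uses the standard normal-approximation interval
    for a proportion, not Ada's internal methodology.
    """
    n = resolved + not_resolved
    rate = resolved / n
    margin = 1.96 * math.sqrt(rate * (1 - rate) / n)  # z = 1.96 for ~95%
    return rate, margin

# A hypothetical sample of 1,000 conversations, 400 classified as Resolved,
# reproduces the 40% ± 3% figures from the example above.
rate, margin = resolution_rate_with_margin(400, 600)
print(f"{rate:.0%} ± {margin:.0%}")  # 40% ± 3%
```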

Average Handle Time

View the average amount of time customers spent talking with your AI Agent, for conversations that didn't end in handoffs to human support.

This report uses winsorization on all of its metrics. To handle outliers, your AI Agent calculates the 90th percentile of all handle times. If a handle time is higher than the 90th percentile limit, your AI Agent replaces it with the 90th percentile limit instead.
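
As a rough illustration of that capping step, here's a minimal sketch that winsorizes a list of handle times at the 90th percentile before averaging. The nearest-rank percentile used here is an assumption; Ada doesn't document its exact percentile calculation.

```python
import math

def winsorize_upper(times: list[float], percentile: float = 90.0) -> list[float]:
    """Cap every handle time at the given upper percentile limit."""
    ordered = sorted(times)
    # Nearest-rank percentile (an assumption; Ada's method isn't documented)
    rank = max(1, math.ceil(percentile / 100 * len(ordered)))
    limit = ordered[rank - 1]
    return [min(t, limit) for t in times]

# Handle times in seconds; the 3,600-second outlier gets capped at the
# 90th-percentile limit (300) before the average is taken.
times = [120, 95, 300, 180, 240, 150, 90, 210, 130, 3600]
capped = winsorize_upper(times)
print(sum(capped) / len(capped))  # 181.5 instead of 511.5
```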

Metric | Definition
Avg handle time when contained | The average amount of time customers spent talking with your AI Agent, for conversations that didn't end in handoffs to human support.
Avg handle time before escalation | The average amount of time customers spent talking to your AI Agent before handoff, for conversations where customers escalated to human support.
Avg handle time with agents | The average amount of time customers spent talking to live support agents.

Containment Rate

View how often customers were able to self-serve instead of escalating to human support.

Metric | Definition
Containment rate | The percent of conversations that did not result in a handoff to human support.

Conversational Messages Volume

View the number of AI Agent, customer, and human agent messages per conversation.

Example conversation:

AI Agent
- Hello! (1)
- Hello! How can I be of assistance today? (2)

Customer
- Hello [1]
- What is the status of my order? [2]

AI Agent
- I can check on that for you. (3)
- What is your order number? (4)

Customer
- abc123 [3]

AI Agent
- Let me fetch that information for you... (5)
- Your order is currently being packaged for shipping. (6)
- Your estimated delivery date is Dec 25. (7)

Customer
- that is too long. let me speak to an agent [4]

AI Agent
- Understood. Connecting you to the next available agent (8)

Human agent
- Hello my name is Sonia. How can I further help you? {1}

Customer
- I need my order sooner. please cancel it [5]

Human agent
- Sorry about the delay. I will cancel your order {2}
- Your order has been cancelled {3}

Customer
- Thank you [6]
Metric definitions

Number of conversations

The number of conversations where a customer sent at least one message to your AI Agent.

Messages sent

The number of conversations (y-axis) that contained a given number of messages your AI Agent sent (x-axis).

In the example above, where AI Agent messages are counted in parentheses (), this conversation would fall under 8 AI Agent messages. Each response bubble counts as a single message, excluding messages that indicate a live agent has joined or left the chat.

Customer messages received

The number of conversations (y-axis) that contained a given number of messages customers sent (x-axis).

In the example above, where customer messages are counted in square brackets [], this conversation would fall under 6 customer messages.

Agent messages

The number of conversations (y-axis) that contained a given number of messages agents sent (x-axis).

In the example above, where agent messages are counted in curly brackets {}, this conversation would fall under 3 agent messages. Emojis, links, and pictures all count as agent messages for this report.

Number of messages (x-axis)

The number of each type of message per conversation.

Roughly 95% of conversations have fewer than 45 messages of any one type, which is why the upper end of the scale groups all conversations with 45 or more of any one type of message.

Number of conversations (y-axis)

The number of conversations that fall into each message-count bucket.
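
To tie the example conversation above to the chart's axes, here's a minimal Python sketch that tallies message bubbles by sender and buckets the counts the way the x-axis does. The transcript structure and sender labels are illustrative, not Ada's actual data model.

```python
from collections import Counter

# Each conversation is a list of (sender, text) message bubbles.
example_conversation = [
    ("ai", "Hello!"),
    ("ai", "Hello! How can I be of assistance today?"),
    ("customer", "Hello"),
    ("customer", "What is the status of my order?"),
    ("ai", "I can check on that for you."),
    ("ai", "What is your order number?"),
    ("customer", "abc123"),
    ("ai", "Let me fetch that information for you..."),
    ("ai", "Your order is currently being packaged for shipping."),
    ("ai", "Your estimated delivery date is Dec 25."),
    ("customer", "that is too long. let me speak to an agent"),
    ("ai", "Understood. Connecting you to the next available agent"),
    ("human_agent", "Hello my name is Sonia. How can I further help you?"),
    ("customer", "I need my order sooner. please cancel it"),
    ("human_agent", "Sorry about the delay. I will cancel your order"),
    ("human_agent", "Your order has been cancelled"),
    ("customer", "Thank you"),
]

def bucket(count: int) -> str:
    """Conversations with 45 or more messages of one type share a bucket."""
    return "45+" if count >= 45 else str(count)

def histogram(conversations, sender):
    """Map each x-axis bucket to the number of conversations (the y-axis)."""
    counts = (sum(1 for s, _ in conv if s == sender) for conv in conversations)
    return Counter(bucket(n) for n in counts)

print(histogram([example_conversation], "ai"))           # Counter({'8': 1})
print(histogram([example_conversation], "customer"))     # Counter({'6': 1})
print(histogram([example_conversation], "human_agent"))  # Counter({'3': 1})
```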

Conversations Breakdown

View the number of conversations initiated, engaged, and escalated in your AI Agent.

Metric definitions

Opens

The number of conversations where a customer opened your AI Agent and was presented with a greeting. Every conversation contains one greeting. A greeting may consist of a series of messages; the entire series counts as a single greeting, and only one of those messages needs to be sent for the conversation to count as an open.

Engaged

The number of conversations where a customer sent at least one message to your AI Agent.

A conversation counts as engaged once a customer sends a message, regardless of whether your AI Agent understands the message.

Escalated

The number of conversations where a customer requested an escalation to human support.
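
As a sketch of how these three counts relate, the snippet below classifies hypothetical conversation records into the report's buckets. The field names (greeting_sent, customer_messages, requested_escalation) are invented for illustration; they aren't Ada's schema.

```python
def conversations_breakdown(conversations: list[dict]) -> dict[str, int]:
    """Tally Opens, Engaged, and Escalated for the breakdown report.

    Hypothetical fields, purely to illustrate the definitions above:
    every engaged conversation is also an open, and escalations are a
    subset of engaged conversations.
    """
    return {
        "Opens": sum(1 for c in conversations if c["greeting_sent"]),
        "Engaged": sum(1 for c in conversations if c["customer_messages"] > 0),
        "Escalated": sum(1 for c in conversations if c["requested_escalation"]),
    }

sample = [
    {"greeting_sent": True, "customer_messages": 0, "requested_escalation": False},
    {"greeting_sent": True, "customer_messages": 4, "requested_escalation": False},
    {"greeting_sent": True, "customer_messages": 6, "requested_escalation": True},
]
print(conversations_breakdown(sample))  # {'Opens': 3, 'Engaged': 2, 'Escalated': 1}
```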

Customer Satisfaction Score

View the percent of your AI Agent's conversations that customers reviewed positively. For more information, see Collect and analyze customer satisfaction data with Satisfaction Surveys.

There are four ways you can set up customer satisfaction reviews, each with different scales for recording feedback:

Rating type | Negative review | Positive review
Numeric (5-point scale) | 1, 2, or 3 | 4 or 5
Numeric (10-point scale) | 1, 2, 3, 4, 5, or 6 | 7, 8, 9, or 10
Emoji (5-point scale) | 😠, 🙁, or 😐 | 🙂 or 😍
Thumbs up/down (binary) | 👎 | 👍

Metric | Definition
Overall score | The percent of conversations customers reviewed positively, out of all conversations they reviewed.
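
Here's a minimal sketch of how ratings on those scales roll up into the overall score. The rating-type keys are illustrative rather than Ada's API, and the emoji scale is assumed to map to the values 1 through 5.

```python
# Thresholds for a positive review, per the table above.
IS_POSITIVE = {
    "numeric_5": lambda r: r >= 4,   # 4 or 5
    "numeric_10": lambda r: r >= 7,  # 7 through 10
    "emoji_5": lambda r: r >= 4,     # assumes emojis map to 1-5
    "thumbs": lambda r: r == "up",   # thumbs up
}

def overall_score(reviews: list[tuple[str, object]]) -> float:
    """Percent of reviewed conversations rated positively.

    Only reviewed conversations appear in `reviews`, matching the
    metric's definition above.
    """
    positive = sum(1 for kind, rating in reviews if IS_POSITIVE[kind](rating))
    return positive / len(reviews)

reviews = [("numeric_5", 5), ("numeric_5", 2), ("thumbs", "up"), ("numeric_10", 8)]
print(f"{overall_score(reviews):.0%}")  # 75%
```
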
Satisfaction Survey Results

View the results of your customer satisfaction (CSAT) survey. For more information, see Collect and analyze customer satisfaction data with Satisfaction Surveys.

note

When you filter this report by date, it uses the date that the user submitted their satisfaction survey, rather than the date the conversation started. As a result, the number of conversations that appear in this report may vary from other reports.

There are four ways you can set up customer satisfaction reviews, each with different scales for recording feedback:

Rating type | Negative review | Positive review
Numeric (5-point scale) | 1, 2, or 3 | 4 or 5
Numeric (10-point scale) | 1, 2, 3, 4, 5, or 6 | 7, 8, 9, or 10
Emoji (5-point scale) | 😠, 🙁, or 😐 | 🙂 or 😍
Thumbs up/down (binary) | 👎 | 👍

Metric definitions

Last submitted

The most recent time a customer submitted a satisfaction survey.

Agent

The agent, if any, who participated in the conversation. If multiple agents participated in the conversation, this is the agent who participated closest to the end of the chat.

Survey type

The type of survey the customer responded to.

  • End chat: The survey presented to the customer when they click "End chat" outside of a handoff.

  • Live agent: The survey customers receive when they close the chat after speaking with an agent, or when an agent leaves the conversation.

Rated

The satisfaction rating the customer selected.

Reason for rating

The reason(s) that the customer selected in the survey follow-up question, if any.

Possible positive reasons:

  • Efficient chat

  • Helpful resolution

  • Knowledgeable support

  • Friendly tone

  • Easy to use

  • Bot was intelligent

  • Other

Possible negative reasons:

  • Took too long

  • Unhelpful resolution

  • Lack of expertise

  • Unfriendly tone

  • Technical issues

  • Bot didn't understand

  • Other

Resolution

The customer's response, if any, to whether your AI Agent was able to resolve their issue. This can either be yes or no.

Comments

Additional comments, if any, that the customer wanted to include in the survey about their experience.

Filter the data that appears in a report

Filter data by date

To filter a report by date:

  1. Click the date filter drop-down.

  2. Define your date range by one of the following:

    • Select a predefined range from the list on the left.

    • Type the filter start date in the Starting field. Type the filter end date in the Ending field.

    • Click the starting date on the calendar on the left, and the ending date on the calendar on the right.

  3. Click Apply.

The date filter drop-down lets you specify the date range to filter the report's data by. You can select from a list of preset date ranges, or select Custom… to define your own with a calendar selector.

Filter data by additional criteria

The list of available filters differs for each report, depending on the data the report includes. Clicking the Add Filter drop-down menu gives you access to the filters relevant to the report you're viewing.

  • Compare to past: Display the timeframe immediately preceding your current selection (of the same length) to compare against it. Graphs also display a figure representing the delta (difference) between ranges (for example, how much your AI Agent's volume rose or dropped between timeframes).

  • Browser: Isolate users from specific internet browsers (for example, Chrome, Firefox, Safari, etc.)

  • Channel: Isolate different platforms that your AI Agent is visible in or interacts with (for example, Ada Web Chat, SMS, WhatsApp, etc.)

  • Device: Isolate users from specific devices and operating systems (for example, Windows, iPhone, Android, etc.)

  • Language (if Multilingual feature enabled): Include/exclude volume of different languages if your AI Agent has content in other languages.

  • Include Test User: Include conversations originating from the Ada dashboard test AI Agent. Test conversations are excluded by default.

  • Filter by Variable: View only the conversations that include one or more variables. For each variable, you can define specific content the variable must contain, or simply whether the variable Is Set or Is Not Set with any data (see the sketch below).
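
As a sketch of the Is Set / Is Not Set semantics, the helper below evaluates a variable filter against a conversation's variables. The function and its mode names are hypothetical, written only to illustrate the behavior described above.

```python
from typing import Optional

def matches_variable_filter(
    variables: dict[str, str],
    name: str,
    mode: str,
    expected: Optional[str] = None,
) -> bool:
    """Return True if a conversation's variables satisfy the filter.

    Hypothetical helper: 'is_set' / 'is_not_set' mirror the dashboard's
    Is Set / Is Not Set options; 'contains' checks for specific content.
    """
    value = variables.get(name)
    if mode == "is_set":
        return value is not None
    if mode == "is_not_set":
        return value is None
    if mode == "contains":
        return value is not None and expected is not None and expected in value
    raise ValueError(f"Unknown filter mode: {mode}")

# Keep only conversations where the order_id variable is set
conversations = [{"order_id": "abc123"}, {}]
filtered = [c for c in conversations if matches_variable_filter(c, "order_id", "is_set")]
```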

Additional information

  • Report data is updated approximately every hour.

  • Reports are in the time zone set in your profile.

Printing

We recommend viewing your AI Agent's data in the dashboard for the best experience. However, if you need to save the report as a PDF or print it physically, use the following recommendations to limit rendering issues:

  1. Click Print.

  2. In the Print window that appears, beside Destination, select either Save as PDF or a printer.

  3. Click More settings to display additional print settings.

  4. Set Margins to Minimum.

  5. Set Scale to Custom, then change the value to 70.

    • Alternatively, you can set the Paper size to A3 (11-3/4 x 16-1/2 in) or Legal (8.5 x 14 in).
  6. Under Options, select the Background graphics checkbox.

  7. Right before saving or printing, scroll through your print preview, and beside Pages, change the number of pages you want to include in your PDF or printout. The settings you changed above may affect how these pages render.

  8. If your destination is Save as PDF, click Save. If your destination is a printer, click Print.


Have any questions? Contact your Ada team.