
Understand and improve your AI Agent's automated resolution rate

Measuring your AI Agent's success is a challenging task. You can't read through every conversation that chatters have with your AI Agent, and with all of the different reports and metrics available, it can be hard to know which ones to focus on.

Ada has created the automated resolution metric, which measures not only whether your AI Agent resolved a chatter's inquiry without handing them off to human support, but also whether it successfully addressed the reason the chatter reached out in the first place. Your AI Agent uses AI language understanding to assess both chatters' inquiries and its own responses, and determines whether a successful automated resolution happened.

Why automated resolution?

Historically, metrics like the containment rate have come close to measuring success, but they couldn't provide a complete picture. The containment rate tells you what proportion of conversations with your AI Agent ended without being handed off to a human agent. Without more context, though, this metric can only tell you so much. Can you differentiate between a chatter who ended a conversation with your AI Agent because they were satisfied, and one who got frustrated and gave up?

While containment can be a useful metric for a quick glance at the proportion of conversations that didn't escalate to a human agent, automated resolution takes it a step further. By measuring the success of those conversations and the content they contain, you can get a much better idea of how helpful your AI Agent's content really is.
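
To make the distinction concrete, here's a minimal sketch in Python. The records and field names are hypothetical, for illustration only; they aren't Ada's data model or API.

```python
# Hypothetical conversation records, for illustration only.
conversations = [
    {"handed_off": False, "resolved": True},   # helpful answer, no escalation
    {"handed_off": False, "resolved": False},  # chatter gave up: contained, but not resolved
    {"handed_off": True,  "resolved": False},  # escalated to a human agent
    {"handed_off": False, "resolved": True},
]

# Containment only asks whether a handoff happened.
containment_rate = sum(not c["handed_off"] for c in conversations) / len(conversations)

# Automated resolution also asks whether the inquiry was actually addressed.
automated_resolution_rate = sum(c["resolved"] for c in conversations) / len(conversations)

print(containment_rate)           # 0.75
print(automated_resolution_rate)  # 0.5
```

Note the second conversation: it never escalated, so containment counts it as a success, but automated resolution doesn't.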

Understand how your AI Agent classifies automated resolutions

Your AI Agent takes a random sample of its conversations, then analyzes each conversation in the sample to understand both the chatter's intent and the AI Agent's response. Based on that analysis, it then assigns a classification of either Resolved or Not Resolved to each conversation in the sample.

info

Ada considers a conversation over 24 hours after the chatter's last message. Because of that, your AI Agent won't assess a conversation for automated resolution until 24 hours after it ends.

For a conversation to be considered automatically resolved, it must be:

  • Relevant - Ada effectively understood the chatter's inquiry, and provided directly related information or assistance.

  • Accurate - Ada provided correct, up-to-date information.

  • Safe - Ada interacted with the chatter in a respectful manner and avoided engaging in topics that could cause danger or harm.

  • Contained - Ada addressed the chatter's inquiry without having to hand them off to a human agent.
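
In other words, all four criteria must hold for a conversation to count as Resolved; failing any one of them makes it Not Resolved. Here's a minimal illustrative sketch (not Ada's actual implementation):

```python
from dataclasses import dataclass

@dataclass
class Assessment:
    relevant: bool   # Ada understood the inquiry and responded on topic
    accurate: bool   # the information provided was correct and up to date
    safe: bool       # the interaction was respectful and avoided harmful topics
    contained: bool  # no handoff to a human agent was needed

def classify(a: Assessment) -> str:
    # A single failed criterion makes the whole conversation Not Resolved.
    if a.relevant and a.accurate and a.safe and a.contained:
        return "Resolved"
    return "Not Resolved"

# Example: contained but inaccurate, so still Not Resolved.
print(classify(Assessment(relevant=True, accurate=False, safe=True, contained=True)))
```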

To see what percentage of conversations in your sample were automatically resolved, and which conversations your AI Agent included in the sample, use the Automated Resolution Rate report.

Provide feedback on how your AI Agent classified a conversation

You can read through individual conversations in your sample to see how chatters interacted with your AI Agent. If you disagree with how your AI Agent automatically classified a conversation, you can provide feedback to Ada.

note

Ada only uses this feedback to improve the classification model. It does not override the automatic classification assigned to the conversation.

  1. Open a conversation. You can do this in one of two ways: from the Automated Resolution Rate report, or from the Conversations view.

    • From the Automated Resolution Rate report

      If you're looking at the Automated Resolution Rate report, scroll down to the Conversations section. Click a conversation to read the entire transcript.

    • From the Conversations view

      On the Ada dashboard, go to Conversations. Optionally, use the AR Classification filter to narrow down the list of conversations by selecting any combination of Resolved, Not Resolved, or Not in Sample.

      Select a conversation in the conversation library on the left side of the screen to read through the entire transcript.

  2. If the sidebar on the right side of your page is collapsed, click Details to expand it. There, you can read through your AI Agent's automatic assessment of the conversation:

    • Inquiry Summary - An automatically generated summary of what the chatter wanted to accomplish.

    • Classification - The classification of either Resolved or Not Resolved that your AI Agent assigned to the conversation.

    • Reason for Classification - Your AI Agent's understanding of what happened in the conversation, which led it to the classification that it assigned.

    Take a second to understand the reasoning that your AI Agent used to make its classification, so you can give more targeted feedback.

  3. Beside Agree?, click Yes or No.

    If you clicked No, enter some additional details about what your AI Agent got wrong in its assessment.

  4. Click Submit.

Automated resolution data can be very powerful in helping you find and fix gaps in your AI Agent's content. While going through your automated resolution data, ask yourself:

  • What kinds of patterns am I seeing?

    For example, let's say a lot of unresolved conversations stem from your AI Agent being unable to understand what chatters were asking for. Are there improvements you can make to your knowledge base content so your AI Agent recognizes those inquiries more reliably? (See the sketch after this list for one way to surface patterns like this.)

  • Is there additional information I can add?

    If you see that multiple unresolved conversations relate to the same topic, consider adding that information to your knowledge base to address it.
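
For instance, if you collect your conversation classifications in a spreadsheet or export, a short script can surface the topics behind your unresolved conversations. The file and column names below are assumptions for illustration, not a documented Ada export format:

```python
from collections import Counter
import csv

# Hypothetical export of conversation assessments; adjust the file and
# column names to match however you record your classification data.
with open("ar_classifications.csv", newline="") as f:
    rows = list(csv.DictReader(f))

unresolved = [r for r in rows if r["classification"] == "Not Resolved"]

# Tally the inquiry topics behind unresolved conversations to surface
# recurring knowledge gaps worth addressing first.
for topic, count in Counter(r["inquiry_topic"] for r in unresolved).most_common(5):
    print(f"{topic}: {count} unresolved conversations")
```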

Keep in mind that reaching 100% automated resolutions isn't a realistic goal for your AI Agent. Some chatters ask for handoffs immediately, and others might bring up inquiries that are off-topic or even abusive; you can't expect your AI Agent to resolve those queries.


Have any questions? Contact your Ada team, or email us at .