
Understand and improve your bot's automated resolution rate

Measuring your bot's success is challenging. You can't read through every conversation chatters have with your bot, and with so many different reports and metrics available, it can be hard to know which ones to focus on.

Ada has created the automated resolution metric to solve this problem. It measures not only whether your bot resolved a chatter's inquiry without handing them off to human support, but also whether it successfully addressed the reason the chatter came to your bot in the first place. Your bot uses AI language understanding to assess both chatters' inquiries and its own responses, and determines whether a successful automated resolution took place.

Why automated resolution?

Historically, metrics like containment rate have come close to measuring a bot's success, but they couldn't provide a complete picture. The containment rate tells you what proportion of chatters' conversations with your bot ended without a handoff to a human agent. Without more context, though, that number only tells you so much: can you tell the difference between a chatter who ended a conversation because they were satisfied and one who got frustrated and gave up?

While containment is a useful metric for a quick look at the proportion of bot conversations that didn't escalate to a human agent, automated resolution takes it a step further. By measuring the success of those conversations and the content they contain, you get a much better idea of how helpful your bot's content really is.
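For instance, here's a minimal Python sketch of the difference, assuming each conversation record carries two boolean flags. The field names are hypothetical, not Ada's actual data model:

    # Hypothetical conversation records. "contained" means the conversation
    # ended without a handoff; "resolved" means it actually addressed the
    # chatter's inquiry.
    conversations = [
        {"contained": True, "resolved": True},    # satisfied chatter
        {"contained": True, "resolved": False},   # gave up without escalating
        {"contained": False, "resolved": False},  # handed off to an agent
        {"contained": True, "resolved": True},
    ]

    total = len(conversations)
    containment_rate = sum(c["contained"] for c in conversations) / total
    resolution_rate = sum(c["resolved"] for c in conversations) / total

    print(f"Containment rate: {containment_rate:.0%}")          # 75%
    print(f"Automated resolution rate: {resolution_rate:.0%}")  # 50%

The second record is the frustrated chatter from the example above: containment counts that conversation as a success, while automated resolution doesn't.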

Understand how your bot classifies automated resolutions

Your bot takes a random sample of its conversations, then analyzes each conversation in the sample to understand both the chatter's intent and the bot's response. Based on that analysis, it assigns each conversation in the sample a classification of either Resolved or Not Resolved.
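Conceptually, the process resembles this Python sketch. Here, classify is a stand-in for Ada's AI model, whose internals aren't public, and the conversation records are invented for illustration:

    import random

    def classify(conversation):
        # Stand-in for Ada's AI classifier, which assesses both the chatter's
        # intent and the bot's responses. This toy version only checks for a
        # handoff, which is just one of the real criteria.
        return "Not Resolved" if conversation["handed_off"] else "Resolved"

    # Invented conversation log; real conversations are full transcripts.
    all_conversations = [{"id": i, "handed_off": i % 3 == 0} for i in range(1000)]

    sample = random.sample(all_conversations, 100)  # random sample
    classifications = {c["id"]: classify(c) for c in sample}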

For a conversation to be considered automatically resolved, it must meet all four of the following criteria (combined in the sketch after this list):

  • Relevant - Ada effectively understood the chatter's inquiry, and provided directly related information or assistance.

  • Accurate - Ada provided correct, up-to-date information.

  • Safe - Ada interacted with the chatter in a respectful manner and avoided engaging in topics that could cause danger or harm.

  • Contained - Ada addressed the chatter's inquiry without having to hand them off to a human agent.
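
All four criteria must hold at once. One way to picture the logic, using illustrative field names rather than anything from Ada's actual model:

    from dataclasses import dataclass

    @dataclass
    class Assessment:
        relevant: bool   # Ada understood the inquiry and responded on topic
        accurate: bool   # the information was correct and up to date
        safe: bool       # respectful, and avoided dangerous or harmful topics
        contained: bool  # no handoff to a human agent

        def classification(self) -> str:
            resolved = (self.relevant and self.accurate
                        and self.safe and self.contained)
            return "Resolved" if resolved else "Not Resolved"

    # A conversation that met every criterion except containment:
    print(Assessment(relevant=True, accurate=True, safe=True,
                     contained=False).classification())  # Not Resolved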

To see what percentage of conversations in your sample were automatically resolved, and to review which conversations your bot included in the sample, use the Automated Resolution Rate report.

Provide feedback on how your bot classified a conversation

You can read through individual conversations in your sample to see how chatters interacted with your bot. If you disagree with how your bot automatically classified a conversation, you can provide feedback to Ada.

note

This feedback only helps Ada improve its classification model. It doesn't override the classification your bot assigned to the conversation.

  1. Open a conversation. You can do this in one of two ways: from the Automated Resolution Rate report, or from the Conversations view.

    • From the Automated Resolution Rate report

      If you're looking at the Automated Resolution Rate report, scroll down to the Conversations section. Click a conversation to read the entire transcript.

    • From the Conversations view

      On the Ada dashboard, go to Conversations > All Conversations. Optionally, you can use filters to narrow down conversations; for example, you can click All, then select any combination of Resolved, Not Resolved, or Not in Sample to look through those conversations.

      Select a conversation in the conversation library on the left side of the screen to read through the entire transcript.

  2. If the sidebar on the right side of the page is collapsed, click Details to expand it. There, you can read through your bot's automatic assessment of the conversation (these fields are sketched in code after these steps):

    • Inquiry Summary - An automatically generated summary of what the chatter wanted to accomplish.

    • Classification - The classification of either Resolved or Not Resolved that your bot assigned to the conversation.

    • Reason for Classification - Your bot's understanding of what happened in the conversation, which led it to the classification that it assigned.

    Take a moment to understand the reasoning behind your bot's classification, so you can give more targeted feedback.

  3. Beside Agree?, click Yes or No.

    If you clicked No, enter some additional details about what your bot got wrong in its assessment.

  4. Click Submit.
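
In code terms, the assessment you review in step 2 and the feedback you submit in steps 3 and 4 cover roughly this information. This is a hypothetical Python sketch; Ada doesn't expose these fields as a public data structure:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class ConversationAssessment:
        inquiry_summary: str  # what the chatter wanted to accomplish
        classification: str   # "Resolved" or "Not Resolved"
        reason: str           # the bot's reasoning for its classification

    @dataclass
    class Feedback:
        agree: bool                    # the Yes/No answer beside "Agree?"
        details: Optional[str] = None  # extra context when you click No

    assessment = ConversationAssessment(
        inquiry_summary="Chatter wanted to update their shipping address.",
        classification="Not Resolved",
        reason="The bot didn't recognize the request and offered a handoff.",
    )
    feedback = Feedback(agree=True)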

Find trends so you can improve your bot's content

Automated resolution data can be very powerful in helping you find gaps in your bot's content. While reviewing your automated resolution data, ask yourself (see the sketch after this list):

  • What kinds of patterns am I seeing?

    For example, let's say there are a lot of unresolved conversations caused by your bot being unable to understand what chatters were asking for. Are there any Answers whose training you can improve, so your bot has a better chance of directing chatters to the right content?

  • Is there additional information I can add?

    If you see that multiple unresolved conversations relate to the same topic, consider addressing it by either creating a new Answer or adding that information to your knowledge base, depending on the type of bot you have.
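
One way to surface these patterns, assuming you've exported your sampled conversations along with their classifications and topics (a Python sketch, not an Ada API):

    from collections import Counter

    # Hypothetical export of the sampled conversations.
    sample = [
        {"classification": "Not Resolved", "topic": "refunds"},
        {"classification": "Not Resolved", "topic": "refunds"},
        {"classification": "Resolved", "topic": "shipping"},
        {"classification": "Not Resolved", "topic": "password reset"},
    ]

    # Count which topics recur among unresolved conversations.
    unresolved_topics = Counter(
        c["topic"] for c in sample if c["classification"] == "Not Resolved"
    )

    for topic, count in unresolved_topics.most_common():
        print(f"{topic}: {count} unresolved conversation(s)")
    # Recurring topics are candidates for a new Answer or a knowledge
    # base update.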

Just as with improving training for conversations that contained unanswered questions, Answers that chatters marked as not helpful, or questions that needed clarification, reaching 100% automated resolution isn't a realistic goal. Some chatters ask for a handoff immediately, and others send messages that are off-topic or even abusive; you can't expect your bot to resolve those queries.


Have any questions? Contact your Ada team, or email us.