Skip to main content

Understand how your AI Agent works

When you've used a chatbot in the past, you've probably thought of it as slow or difficult to use. The majority of bots out there aren't great at understanding what your customers want, or knowing how to respond or take actions like a human agent would.

By combining the information in your knowledge base with cutting-edge AI, you don't just have a chatbot with Ada - you have a generative AI Agent, designed to perform tasks that human agents have previously only been able to do.

This topic will take you through Ada's technology that we use to make the customer experience with an AI Agent different from any chatbot you've used before.

Understand Large Language Models (LLMs) and generative AI

The secret behind how your AI Agent both understands and writes messages is in the AI, or artificial intelligence, that Ada uses behind the scenes. Broadly, AI is a range of complex computer programs designed to solve problems like humans. It can address a variety of situations and incorporate a variety of types of data; in your AI Agent's case, it focuses on analyzing language to connect customers with answers.

When a customer interacts with your AI Agent, your AI Agent uses Large Language Models, or LLMs, which are computer programs trained on large amounts of text, to identify what the customer is asking for. Based on the patterns the LLM identified in the text data, an LLM can analyze a question from a customer and determine the intent behind it. Then, it can analyze information from your knowledge base and determine whether the meaning behind it matches what the customer is looking for.

Generative AI is a type of LLM that uses its analysis of existing content to create new content: it builds sentences word by word, based on which words are most likely to follow the ones it has already chosen. Using generative AI, your AI Agent constructs responses based on pieces of your knowledge base that contain the information the customer is looking for, and phrases them in a natural-sounding and conversational way.

Understand your AI Agent's content filters

LLM training data can contain harmful or undesirable content, and generative AI can sometimes generate details that aren't true, which are called hallucinations. To combat these issues, your AI Agent uses an additional set of models to ensure the quality of its responses.

Before sending any generated response to your customer, your AI Agent checks to make sure the response is:

  • Safe: The response doesn't contain any harmful content.
  • Relevant: The response actually answers the customer's question. Even if the information in the response is correct, it has to be the information the customer was looking for in order to give the customer a positive experience.
  • Accurate: The response matches the content in your knowledge base, so your AI Agent can double-check that its response is true.

With these checks in place, you can feel confident that your AI Agent has not only made sound decisions in how to help your customer, but has also sent them high-quality responses.

Understand Ada's Reasoning Engine

Your AI Agent runs on a sophisticated Reasoning Engine Ada created to provide customers with the knowledge and solutions they need. For more information, watch this short video, or keep reading.

When customers ask your AI Agent a question, it takes into account the following when deciding what to do next:

  • Conversation context: Does the conversation before the current question contain context that would help your AI Agent better answer the question?
  • Knowledge base: Does the knowledge base contain the information the customer is looking for?
  • Business systems: Are there any Actions configured with your AI Agent designed to let it fetch the information the customer is looking for?

From there, it decides how to respond to the customer:

  • Follow-up question: If your AI Agent needs more information to help the customer, it can ask for more information.
  • Knowledge base: If the answer to the customer's inquiry is in the knowledge base, it can obtain that information and use it to write a response. For more information, see Understand how your AI Agent generates content from your knowledge base.
  • Business systems: If the answer to the customer's inquiry is available using one of the Actions configured in your AI Agent, your AI Agent can fetch that information by making an API call.
  • Handoff: If your AI Agent is otherwise unable to respond to the customer's request, it can hand the customer off to a human agent for further assistance.

Together, the mechanism that makes these complex decisions on how to help the customer is called Ada's Reasoning Engine. Just like when a human agent makes decisions about how to help a customer based on what they know about what the customer wants, the Reasoning Engine takes into account a variety of information to figure out how to resolve the customer's inquiry as effectively as possible.

Understand how your AI Agent prevents prompt injections

Many AI chatbots are vulnerable to prompt injections or jailbreaking, which are prompts that get the chatbot to provide information that it shouldn't - for example, information that is confidential or unsafe.

The reasoning engine behind Ada's AI Agents is structured in such a way as to make adversarial LLM attacks very difficult to succeed. Specifically, it has:

  • A series of AI subsystems interacting together, each of which modifies the context surrounding a customer's message
  • Several prompt instructions that make the task to be performed very clear, directing the AI Agent to not share inner workings and instructions, and to redirect conversations away from casual chitchat
  • Models that aim to detect and filter out harmful content in inputs or outputs
  • State of the art generative AI testing prior to new deployments

Have any questions? Contact your Ada team—or email us at .