Best practices

Improving your AI Agent starts with understanding how users interact with it. The Topics view is especially valuable because it highlights where conversations cluster, how automation is performing, and where satisfaction may be slipping. Combined with other reporting tools, it gives AI Managers a clear path to identify gaps and prioritize improvements.

The examples that follow show how to turn those interaction patterns into concrete improvements. AI Managers can:

  1. Start with the Topics view to review key metrics like Conversation volume, AR Opportunity, and CSAT rate.
  2. Use overall CSAT scores as another reference point — if CSAT is dropping, check the CSAT rate in the Topics view.
  3. Drill into related Conversations to find the issues behind poor CSAT and uncover automation gaps.

Example: Refund requests

Use this scenario when a Topic shows high volume or a high AR Opportunity—especially for underserved user segments. In this case, a Topic analysis reveals that student users often ask for refunds but rarely receive helpful responses from the AI Agent.

  1. On the Ada dashboard, navigate to Performance > Topics.
  2. In the Topics view, identify a Refund requests Topic with high Conversation volume and a high AR Opportunity rate.
  3. Drill into the Topic and review related Conversations. You notice student users frequently ask how to get a refund for purchases made through third-party vendors.
  4. The Agent responds with a generic message or hands off the conversation because no relevant Knowledge article exists.

The Agent uses the same language for all users, even though student users may be less familiar with refund policies or with what to expect from support.

These Conversations often end in unnecessary Handoffs that could be avoided with better guidance.

Next steps:

  1. Add a Knowledge article tailored to student users that explains how to request a refund from an external vendor. For example, the article might clarify that purchases made through an app store must be refunded by that vendor, and link to the vendor's refund form.
  2. Apply Personalization rules to adjust tone and messaging for student accounts—creating a more relevant, accessible experience that supports self-serve resolution. For example, a student-facing response might define unfamiliar billing terms and spell out each step of the refund process.
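The segment-based messaging in step 2 can be pictured as a simple lookup from user segment to response copy. A minimal sketch, assuming a "student" user segment (the segment name and message text are illustrative, not part of Ada's Personalization feature):

```python
# Illustrative segment-to-message mapping; segment names and copy are
# assumptions for this sketch, not Ada configuration.
RESPONSES = {
    "student": (
        "No problem! Purchases made through a third-party vendor (like an "
        "app store) are refunded by that vendor, not by us. Here's how to "
        "request one, step by step."
    ),
    "default": "Refunds for third-party purchases are handled by the original vendor.",
}

def refund_response(user_segment: str) -> str:
    """Pick the refund message variant for a user's segment."""
    return RESPONSES.get(user_segment, RESPONSES["default"])
```

The point of the sketch is the fallback: every segment gets a correct answer, while the student segment gets extra guidance that supports self-serve resolution.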

Example: Card fulfillment

Use this scenario when a Topic shows moderate automation but a high AR Opportunity—suggesting the AI Agent is initiating the right flow but consistently handing off where automation could continue.

  1. On the Ada dashboard, navigate to Performance > Topics.
  2. In the Topics view, identify a Card fulfillment Topic with a high AR Opportunity rate.
  3. Drill into the Topic and review Conversations. You find a recurring pattern: the Agent checks shipment details, confirms the delivery address, and provides messaging based on how many days have passed.
  4. However, if the user needs to update their address or request a new card, the Agent always hands off—even though this could be automated.

Next steps:

  1. Add Coaching to refine the Agent’s messaging and ensure the user receives clear, actionable next steps.

  2. Build an Action to automate common fulfillment requests like reshipments or address changes—reducing premature handoffs and improving resolution rates.
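The Action in step 2 could front a small piece of decision logic that routes the conversation to reshipment, address update, or a wait message. A minimal sketch, assuming a 10-day delivery SLA (the threshold, function name, and step labels are all illustrative assumptions, not Ada's API):

```python
from datetime import date, timedelta
from typing import Optional

# Assumed SLA: reship if the card hasn't arrived within this many days.
RESHIP_THRESHOLD_DAYS = 10

def fulfillment_next_step(
    shipped_on: date, address_confirmed: bool, today: Optional[date] = None
) -> str:
    """Return the automated next step for a card fulfillment request."""
    today = today or date.today()
    if not address_confirmed:
        return "update_address"   # collect and verify the new address first
    if today - shipped_on > timedelta(days=RESHIP_THRESHOLD_DAYS):
        return "reship_card"      # past the delivery window: trigger a reshipment
    return "ask_to_wait"          # still within the delivery window
```

Because each branch returns a concrete next step instead of escalating, the handoff only happens when none of the automated paths apply.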

Example: Account cancellation

In the Customer Satisfaction Score report, you notice a drop in the Overall Score.

To investigate, you navigate to the Topics view and sort by CSAT rate—where a Topic related to subscription or account cancellation stands out with consistently low satisfaction. This signals a high-friction experience in a sensitive scenario.

Drilling into the related Conversations reveals a recurring pattern: the AI Agent detects the intent correctly but fails to guide users through the next steps—resulting in confusion, frustration, and low CSAT.

  1. An end user asks to cancel their account or subscription.
  2. The AI Agent detects the intent but responds with a generic message like "Let me connect you with support," without acknowledging the request or providing guidance.
  3. There’s no attempt to explain the cancellation process, gather details, or offer relevant resources.
  4. As a result, the Conversation is handed off immediately—leaving the user dissatisfied.
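The fix for this pattern is pre-escalation guidance: acknowledge the request, set expectations, and only then offer a handoff. A minimal sketch of what that reply logic could look like, assuming "monthly" and "annual" plan types (the plan names and message copy are illustrative assumptions):

```python
# Hypothetical pre-escalation reply for a cancellation intent; plan types
# and wording are assumptions for this sketch, not Ada behavior.
def cancellation_reply(plan_type: str) -> str:
    """Acknowledge the request and set expectations before any handoff."""
    terms = {
        "monthly": "Your plan ends at the close of the current billing cycle.",
        "annual": "Annual plans are prorated from the cancellation date.",
    }
    detail = terms.get(plan_type, "An agent can confirm your plan's exact terms.")
    return (
        "I can help you cancel your subscription. "
        f"{detail} Would you like me to start the cancellation now, "
        "or connect you with an agent?"
    )
```

Even when the conversation still ends in a handoff, the user arrives having been acknowledged and informed, which is what the low-CSAT Conversations above were missing.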

Next steps:

  1. Create or refine a Knowledge article that outlines the cancellation process in simple, direct language. Include details based on plan types or support policies to improve clarity.
  2. Use Coaching to train the AI Agent to acknowledge cancellation requests clearly and provide pre-escalation guidance that sets expectations.

These updates will help reduce premature Handoffs, improve CSAT, and ensure customers feel understood and supported at a critical moment.

Example: Warranty replacement

Start by reviewing the Topics view and sorting by the CSAT rate. You notice that Topics like Replacements or Damaged products are performing poorly, with consistently low satisfaction scores.

Drilling into one of these Topics reveals a common pattern: the AI Agent recognizes the intent but lacks the ability to collect the required details—resulting in early handoffs and a missed opportunity to automate the experience.

  1. An end user requests a replacement for a damaged product under warranty.
  2. The Agent acknowledges the request but doesn’t collect critical information like purchase date, product model, or description of the issue.
  3. Because these inputs are necessary to check eligibility, the Agent hands off the conversation prematurely.
  4. The Agent also uses the same communication style for all users, missing the chance to personalize the response based on the user’s plan (e.g., Premium support) or region.

Next steps:

  1. Build a Playbook that gathers all required inputs, checks warranty eligibility through an Action, and either submits the replacement request or provides clear next steps if ineligible.

  2. Apply Personalization rules to tailor messaging by user segment—such as using elevated tone for premium members or adding shipping timelines based on region.
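The Playbook in step 1 boils down to two checks: which required inputs are still missing, and whether the claim falls inside the warranty window. A minimal sketch, assuming "standard" and "premium" plans with 12- and 24-month terms (the field names, plan terms, and day-based window are all illustrative assumptions, not Ada features):

```python
from datetime import date

# Assumed warranty terms per plan, expressed in days.
WARRANTY_DAYS = {"standard": 365, "premium": 730}

# Inputs the Playbook must collect before checking eligibility.
REQUIRED_FIELDS = ("purchase_date", "product_model", "issue_description")

def missing_inputs(claim: dict) -> list:
    """Fields the Playbook still needs to gather from the user."""
    return [f for f in REQUIRED_FIELDS if not claim.get(f)]

def warranty_eligible(claim: dict, today: date) -> bool:
    """True if the claim date falls inside the plan's warranty window."""
    days = WARRANTY_DAYS.get(claim.get("plan", "standard"), 365)
    return (today - claim["purchase_date"]).days <= days
```

The Playbook would loop on `missing_inputs` until it returns an empty list, then run the eligibility check through an Action and either submit the replacement or explain why the claim is ineligible.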

These improvements reduce Handoffs, streamline the replacement workflow, and create a more supportive, tailored experience for end users.