CSAT survey configuration

Overview

Satisfaction Surveys help you understand what end users think about their automated support experience. Collect feedback at the end of conversations to measure and improve your AI Agent’s performance.

When enabled, the survey appears when end users close the chat (as long as they have sent at least one message) or when a human agent leaves the conversation.

Use cases

CSAT surveys help you measure end user satisfaction and identify areas for improvement.

  • Measure AI Agent effectiveness: Track satisfaction scores to understand how well your AI Agent resolves end user inquiries.
  • Identify improvement opportunities: Use follow-up questions and comments to uncover specific issues or areas where the AI Agent can improve.
  • Compare AI Agent vs. human agent performance: Configure separate surveys for AI Agent and human agent interactions to benchmark performance.
  • Track customer effort: Use Customer Effort Score (CES) to measure how easy it is for end users to get help.

Capabilities & configuration

Satisfaction Surveys offer flexible options for collecting end user feedback.

  • Dual survey support: Configure separate surveys for AI Agent conversations and human agent handoffs.
  • Multiple rating scales: Choose from 5-point numeric, 10-point numeric, emoji, or thumbs up/down scales.
  • Customizable questions: Edit wording for all survey questions in each supported language.
  • Optional questions: Mark questions as optional so responses auto-save as end users answer them, maximizing the data you collect.
  • Follow-up reasons: Collect structured feedback on why end users rated their experience positively or negatively.
  • Customer Effort Score (CES): Measure how easy it was for end users to get help on a 5-point or 7-point scale.
  • Net Promoter Score (NPS): Measure end user loyalty on a standard 0-10 scale.
  • Additional comments: Allow end users to provide free-form feedback (up to 320 characters).

Quick start

Enable and configure a CSAT survey in a few steps.

1. On the Ada dashboard, go to Config > AI AGENT > CSAT.

2. Click the AI Agent chat or Human Agent tab.

3. Enable the Enable AI Agent Survey or Enable Human Agent Survey toggle.

4. Configure your survey questions and click Save.

For detailed configuration options, see Implementation & usage.

Implementation & usage

Configure satisfaction surveys to collect end user feedback after conversations.

Configure a satisfaction survey

Customize the questions, rating scales, and languages for your CSAT surveys.

To configure a satisfaction survey:

  1. On the Ada dashboard, go to Config > AI AGENT > CSAT.

  2. Click either the AI Agent chat or the Human Agent tab. The settings for both surveys are the same, but you can configure different questions for each scenario.

  3. Click the Enable AI Agent Survey or Enable Human Agent Survey toggle to turn the survey on or off.

  4. Beside Survey Questions, select one of the languages your AI Agent supports to edit the survey questions for that language.

  5. Configure the questions you want to ask in the survey. You can choose to hide them, disable the Required toggle to allow end users to skip them, or click and drag them into a different order.

    Survey auto-save behavior: If no questions in the survey are marked as required, responses automatically save as end users fill them out. However, if any questions are marked as required, responses only save when end users click the Submit button.

    Recommendation: Keep all survey questions optional (not required). This ensures responses auto-save as end users fill them out, which helps maximize the data collected and prevents data loss if end users abandon the survey midway through.

    • Satisfaction Rating Question - Edit the wording your AI Agent uses to ask end users about their experience, and select the rating scale they can respond with.

      There are four ways you can set up satisfaction reviews, each with different scales for recording feedback:

      Rating type                  Negative review          Positive review
      Numeric (5-point scale)      1, 2, or 3               4 or 5
      Numeric (10-point scale)     1, 2, 3, 4, 5, or 6      7, 8, 9, or 10
      Emoji (5-point scale)        😠, 🙁, or 😐            🙂 or 😍
      Thumbs up/down (binary)      👎                       👍
    • Follow-Up Question - Edit the wording your AI Agent uses to ask end users why they gave their rating, and allow them to select from a list of reasons.

      The options they can select vary depending on whether they provided a positive or negative rating:

Possible positive reasons

  • Efficient chat
  • Helpful resolution
  • Knowledgeable support
  • Friendly tone
  • Easy to use
  • Other

Possible negative reasons

  • Took too long
  • Unhelpful resolution
  • Lack of expertise
  • Unfriendly tone
  • Technical issues
  • Other

Prior to July 8, 2025, the Follow-Up Question also included “AI Agent was intelligent” as a positive reason and “AI Agent did not understand” as a negative reason. These options were removed to simplify the survey and reduce confusion.

  • Resolution Question - Edit the wording your AI Agent uses to ask whether it resolved end users’ issues, to which they can respond either “Yes” or “No.”

  • Additional Comments - Edit the wording your AI Agent uses to ask for additional feedback, and the placeholder text in the field where end users type it. End users can enter up to 320 characters.

  • Customer Effort Score (CES) - Edit the wording your AI Agent uses to ask how easy it was for end users to get help. The default question is “How easy was it to get the help you needed today?” You can configure CES on a 5-point or 7-point scale.

  • Net Promoter Score (NPS) - Edit the wording your AI Agent uses to ask about end user loyalty. The default question is “How likely are you to recommend us to a friend or colleague?” NPS uses a standard 0-10 scale where 0-6 are Detractors, 7-8 are Passives, and 9-10 are Promoters.

  6. Click Save. Your AI Agent saves your survey settings.
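The rating scales above follow standard CSAT and NPS conventions. As an illustration only (not Ada's actual analytics code), here is a minimal Python sketch of how ratings map to positive or negative buckets per the thresholds in the table, and how an NPS value is derived from 0-10 responses. The scale names and function names are hypothetical:

```python
def classify_rating(score, scale):
    """Classify a satisfaction rating as positive or negative,
    using the minimum positive score for each scale type."""
    min_positive = {
        "numeric_5": 4,   # 4 or 5 counts as positive
        "numeric_10": 7,  # 7-10 counts as positive
        "emoji_5": 4,     # the two happiest emoji (positions 4 and 5)
        "thumbs": 1,      # thumbs up (1) positive, thumbs down (0) negative
    }
    return "positive" if score >= min_positive[scale] else "negative"

def nps(scores):
    """Net Promoter Score: % Promoters (9-10) minus % Detractors (0-6).
    Scores of 7-8 are Passives and affect only the denominator."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))
```

For example, `classify_rating(6, "numeric_10")` returns `"negative"` because 6 falls in the 1-6 negative band, and `nps([10, 9, 7, 2])` returns `25` (two Promoters, one Detractor, four responses).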

Related features

Reports in the Ada dashboard help you interpret the data from your Satisfaction Survey. To view them, go to Analytics > Reports.