Simulations

Simulations enable AI Managers to understand how configuration changes impact AI Agent behavior. By creating and running test cases, you can verify that updates improve behavior without causing regressions.

Simulations differ from Interactive Testing in that they run test cases as multi-turn conversations in bulk, rather than one at a time.

Use Simulations for systematic validation at scale—including regression testing, compliance audits, and deployment readiness.

Use Interactive Testing for quick checks and qualitative exploration.

Overview

When you modify an AI Agent—whether by updating Knowledge, adjusting Actions, or refining Playbooks—predicting the full impact across all support scenarios is difficult. Simulations address this challenge by allowing you to define a set of test cases, simulate end-user inquiries, and evaluate the AI Agent’s simulated responses against the expected outcomes you define.

With Simulations, you can:

  • Create a library of test cases that you can run anytime to understand how your AI Agent is behaving
  • Quickly test across end-user segments to ensure coverage of real-world situations

You can run up to 300 test runs per day and maintain up to 100 test cases per instance. If you need additional capacity, contact your Ada representative.
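For example, at the 100-test-case maximum, the 300-run daily quota allows three complete passes over the full suite per day. The planning arithmetic is simple integer division, sketched below for illustration; the limits are the documented defaults, not values read from any API:

```python
# Documented default limits; confirm yours with your Ada representative.
DAILY_RUN_QUOTA = 300
MAX_TEST_CASES_PER_INSTANCE = 100

def full_suite_passes_per_day(suite_size: int) -> int:
    """How many complete runs of a test suite fit in the daily quota."""
    if not 1 <= suite_size <= MAX_TEST_CASES_PER_INSTANCE:
        raise ValueError(f"suite size must be 1..{MAX_TEST_CASES_PER_INSTANCE}")
    return DAILY_RUN_QUOTA // suite_size

print(full_suite_passes_per_day(100))  # 3
print(full_suite_passes_per_day(40))   # 7
```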

Limitations

Simulations have the following constraints:

  • Supported channels: Web Chat, Email, and Voice only.
  • Up to 40 turns per simulation: Each test case runs as a multi-turn conversation capped at 40 turns between the simulated end user and the AI Agent.
  • Greetings not included: Greetings are not played at the start of a simulated conversation.
  • Test case creation: Manual test case creation is available in the dashboard. Bulk test case creation can be facilitated through the MCP Server.
  • Voice simulation is reasoner-only: On the Voice channel, simulations test how the AI Agent’s reasoning surfaces as voice output—that is, you can hear how the Agent would respond. Simulations do not yet test other production voice behaviors such as end-to-end latency, background noise handling, interruptions, or other real-time voice interaction attributes.
  • Pass/fail evaluations only: Expected outcomes produce binary pass/fail results. Complex scoring or weighted evaluations are not available.
  • Production configuration only: Tests run against the current published AI Agent configuration. Draft or staged changes cannot be tested in isolation. However, you can use availability rules to publish changes that are not yet live with end users, then run tests against that configuration.
  • No direct dashboard export: Test results cannot be exported directly from the dashboard. Test cases and results can be exported as CSV through the MCP Server.
  • Default test case settings: Language and channel default to English and Web Chat unless otherwise specified at test case creation.
  • Volume: Up to 300 test runs per day and 100 test cases per instance.

Both simulated responses and evaluations are powered by generative AI. Some minor variability in responses and evaluation results is expected between test runs. To improve consistency, use clear and specific expected outcomes, ensure relevant Coaching and Custom Instructions are in place, and re-run tests periodically to observe trends over time.
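Because of this variability, a single result is one sample rather than a verdict. Tracking the pass rate over repeated runs gives a steadier signal; a minimal sketch of that aggregation (illustrative only, not an Ada feature):

```python
def pass_rate(run_results: list[bool]) -> float:
    """Fraction of repeated runs in which a test case passed."""
    if not run_results:
        raise ValueError("no runs recorded")
    return sum(run_results) / len(run_results)

# Passing 4 of 5 repeated runs suggests flakiness rather than a hard failure,
# which often points at a vague expected outcome worth tightening.
print(pass_rate([True, True, False, True, True]))  # 0.8
```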

Tests run separately from live traffic and do not affect production performance. You can run tests at any time without impacting end-user conversations.

Multi-turn behavior by capability

Simulated conversations run as multi-turn exchanges, capped at 40 turns. The simulated end user responds based on the Scenario you define, and the AI Agent uses its full production capabilities. The conversation ends when the Agent resolves the inquiry, reaches a handoff, or hits the 40-turn cap.
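The loop described above can be sketched as follows. Everything here is illustrative: `agent_respond` and `user_respond` are hypothetical callables standing in for the AI Agent and the Scenario-driven simulated end user, not Ada APIs; only the three stop conditions come from this documentation:

```python
from dataclasses import dataclass

MAX_TURNS = 40  # documented cap on turns per simulated conversation

@dataclass
class AgentReply:
    text: str
    resolved: bool = False    # Agent resolved the inquiry
    handed_off: bool = False  # Agent reached a handoff

def run_simulation(agent_respond, user_respond, opening_inquiry: str):
    """Sketch of the documented turn loop; not Ada's implementation."""
    transcript = [("user", opening_inquiry)]
    speaker, message = "agent", opening_inquiry
    while len(transcript) < MAX_TURNS:                  # 40-turn cap
        if speaker == "agent":
            reply = agent_respond(message)              # full production capabilities
            transcript.append(("agent", reply.text))
            if reply.resolved or reply.handed_off:      # documented stop conditions
                break
            message, speaker = reply.text, "user"
        else:
            message = user_respond(message)             # behavior defined by the Scenario
            transcript.append(("user", message))
            speaker = "agent"
    return transcript
```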

| Capability | Supported | Behavior |
| --- | --- | --- |
| Knowledge | ✅ Yes | Searches Knowledge as in production. |
| Actions | ✅ Yes | Executes API calls against configured production endpoints. Actions are not mocked—they execute against live systems. |
| Coaching | ✅ Yes | Considered when generating the AI Agent’s responses. |
| Custom Instructions | ✅ Yes | Considered when generating the AI Agent’s responses. |
| Playbooks | ✅ Yes | Runs Playbooks across multiple turns, including follow-up prompts to the end user. |
| Processes | ✅ Yes | Executes Process Blocks, including Capture Blocks that request end-user input. Note: simulated end-user responses to Capture Block selections may vary across simulation runs. |
| Handoffs | ✅ Yes | Runs the Handoff flow. Handoffs are mocked—no tickets are created. |
| Greetings | ❌ No | Not played at the start of a simulated conversation. |

Use cases

Simulations support several common workflows:

  • Pre-deployment validation: During initial AI Agent configuration, validate behavior across key scenarios before launching to end users.
  • Regression Testing: After updating Knowledge articles or modifying Actions, re-run existing test cases to confirm the changes produce expected results without causing regressions.
  • Continuous improvement: Post-deployment, run tests regularly to monitor AI Agent performance and identify areas for improvement.
  • Major update validation: Before and after significant changes, run comprehensive test suites to catch unintended downstream impacts.
  • Coverage validation: Create test cases representing different end-user segments, languages, and channels to ensure the AI Agent handles a broad range of real-world situations.
  • Deployment readiness: Run a full test suite before deploying changes, generating clear pass/fail metrics to share with stakeholders and support go/no-go decisions.
  • Compliance and safety audits: Validate AI Agent behavior for compliance-sensitive scenarios and edge cases.

Capabilities & configuration

Simulations provide automated conversation testing capabilities through structured test cases.

Test case structure

Each test case includes the following elements:

| Field | Description | Required |
| --- | --- | --- |
| Test case name | A descriptive name for the test case | Yes |
| Customer inquiry | The opening message the simulated end user sends to the AI Agent | Yes |
| Scenario | A description of the simulated end user’s goal, context, and how they should respond across turns | Yes |
| Variables | Optional variables to set for the test session | No |
| Expected outcomes | Conditions the AI Agent’s responses must meet to pass (1–10 criteria per test) | Yes |
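As a mental model, a test case is a small record with the fields above. The sketch below mirrors the table for illustration only; it is not an Ada data model or API object:

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    """Illustrative record mirroring the fields in the table above."""
    name: str                     # Test case name (required)
    customer_inquiry: str         # opening message from the simulated end user
    scenario: str                 # goal, context, and multi-turn behavior
    expected_outcomes: list[str]  # 1-10 pass/fail criteria (required)
    variables: dict[str, str] = field(default_factory=dict)  # optional

    def __post_init__(self):
        if not 1 <= len(self.expected_outcomes) <= 10:
            raise ValueError("a test case needs 1-10 expected outcomes")
```

For instance, `TestCase("Refund policy accuracy", "Can I get a refund after 45 days?", "End user purchased an item 45 days ago...", ["States the correct refund window"])` would satisfy the required fields, while an empty `expected_outcomes` list is rejected.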

Test case examples

The following examples illustrate how to structure test cases for common scenarios:

| Test case name | Scenario | End-user inquiry | Expected outcomes |
| --- | --- | --- | --- |
| Refund policy accuracy | End user purchased an item 45 days ago and wants to know if it’s still refundable. They push back once if told no, then accept the Agent’s explanation. | Can I get a refund after 45 days? | States the correct refund window of “30 days”; Does not state an incorrect or invented window; Offers alternatives if the refund is declined |
| Order status uses lookup Action | End user has order “12345” and wants its status. They provide the order number when asked. | What’s the status of my order? | Asks for the order number if not provided; Uses the order lookup Action; Reports the returned status accurately |
| Cancellation with retention | End user wants to cancel their subscription but is open to a retention offer. They accept a discount if presented. | Cancel my subscription. | Recognizes cancellation intent; Initiates the Subscription Cancellation Playbook; Offers a retention discount before confirming cancellation |
| Regulated advice avoidance | End user is asking for personalized tax advice. They press for a specific recommendation when initially deflected. | Which plan should I choose to lower my taxes? | Does not provide personalized financial advice; Provides general information or escalation guidance; Maintains position when pressed |

Test results

Each test run generates pass/fail results for individual expected outcomes and an overall status for the test case. Results also include a rationale explaining each judgment and a list of generative entities (Knowledge, Actions, Playbooks, etc.) used to produce the response. For more details, see Review results.
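The overall status is derived directly from the per-criterion results: a test case passes only when every expected outcome passes. Illustratively (hypothetical result shape, not Ada's schema):

```python
def overall_status(criterion_results: dict[str, bool]) -> str:
    """Overall pass requires every expected outcome to pass."""
    return "pass" if all(criterion_results.values()) else "fail"

results = {
    "States the correct refund window": True,
    "Does not invent a different window": True,
    "Offers alternatives if declined": False,
}
print(overall_status(results))  # fail
```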

Quick start

Get started with Simulations in just a few steps. For detailed instructions, see Implementation & usage.

  1. Create a test case: In your Ada dashboard, navigate to Simulations, and click Add. Enter a Test case name, a Customer inquiry, and a Scenario describing the end user’s goal and behavior across turns; select the Language and Channel (Web Chat, Email, or Voice); and define at least one Expected outcome. Then, click Save.

  2. Run the test: Select your test case and click Test. The AI Agent runs a multi-turn simulation and evaluates the transcript against your expected outcomes.

  3. Review results: View the pass/fail status for each criterion, read the rationale, and review the details to understand how the response was generated.

  4. Iterate and improve: If the test fails, update the relevant Knowledge, Actions, or Playbooks, then re-run the test to verify the improvement.

Implementation & usage

Create test cases, run tests, and review results to validate your AI Agent’s responses.

Create a test case

Test cases define the end-user inquiry and expected outcome that the AI Agent’s response is evaluated against. Each test case captures a specific scenario you want to validate, making it reusable for regression testing and ongoing verification.

Test case name

The Test case name should be descriptive and clearly reflect the scenario being tested—for example, Refund policy accuracy or Password reset initiation. A clear name makes it easier to identify test cases when running batches or reviewing results.

Test inquiry setup

Test inquiry setup defines what the AI Agent receives and the context in which it responds.

  • Customer inquiry: The message the AI Agent receives, simulating what an end user would send. This should reflect realistic phrasing and context.

  • Test Variables: Variables allow test cases to simulate specific end-user contexts, such as language preferences or channel type. Adjusting values like language and channel helps ensure the AI Agent’s response reflects real-world conditions.

Scenario

The Scenario describes the simulated end user’s goal, context, and how they should respond across turns. It drives the simulated end user’s behavior throughout the multi-turn conversation, so realistic scenarios produce more representative results.

A well-written Scenario:

  • States the end user’s goal (for example, get a refund for a damaged item)
  • Provides relevant context the end user knows and would share when asked (account identifiers, dates, product names, prior interactions)
  • Defines how the end user responds—whether they answer clarifying questions directly, push back, provide partial information, or accept the Agent’s suggestions
  • Focuses on a single goal per test case

For detailed guidance, see Scenarios in the best practices guide.

Migrating existing test cases

Test cases created before multi-turn Simulations launched remain runnable. Their existing Customer inquiry is reused as the Scenario for simulated runs.

The next time you edit a pre-existing test case, you are required to add a Scenario before saving.

To populate Scenarios for existing test cases in bulk, use the MCP Server rather than editing each test case individually.

Test answer evaluation

Test answer evaluation defines what the AI Agent’s response must achieve to pass. The AI Agent’s response is evaluated against each criterion independently, producing:

  • A pass/fail result per criterion

  • An overall pass/fail for the test case

  • A rationale explaining each judgment

Expected outcomes: Each test case requires at least one expected outcome and supports up to ten. Write outcomes that are specific and measurable—for example, instead of responds helpfully, use provides the return policy timeframe or includes a link to a help article. Clear, well-defined outcomes produce more reliable pass/fail evaluations and meaningful rationale.

Add a new test case

You can add test cases from the Simulations page in your Ada dashboard.

To create a test case:

  1. In your Ada dashboard, navigate to Simulations, then click Add.
  2. On the Simulations page, enter a Test case name.
  3. Under Test inquiry setup, enter a Customer inquiry and optionally add Test Variables to simulate specific end-user contexts.
  4. Enter a Scenario describing the end user’s goal, context, and how they should respond across turns.
  5. Under Test answer evaluation, add at least one Expected outcome.
  6. Click Save to create the test case.

Edit or delete a test case

You can modify or remove existing test cases from the Simulations page.

To edit a test case:

  1. In your Ada dashboard, navigate to Simulations.
  2. Select the test case you want to edit from the list on the left.
  3. In the test case section on the right, click the three dots in the top-right corner and select Edit.
  4. Update the Test case name, Test inquiry setup, or Test answer evaluation as needed.
  5. Click Save to apply your changes.

To delete a test case:

  1. In your Ada dashboard, navigate to Simulations.
  2. Select the test case you want to delete from the list on the left.
  3. In the test case section on the right, click the three dots in the top-right corner and select Delete.

Run tests

Test runs execute one or more test cases against the current published AI Agent configuration. Each test simulates a multi-turn conversation—up to 40 turns—between the simulated end user and the AI Agent, then evaluates the transcript against the defined expected outcomes.

Select and run test cases

Tests can be run individually to validate specific scenarios, or in batches to evaluate broader coverage. Batch testing is useful for regression testing after configuration changes or for validating deployment readiness across multiple scenarios at once.

  • Selecting multiple test cases and running them together produces a consolidated test run with results for each case.

  • Test runs execute separately from live traffic and do not affect production performance.

To run tests:

  1. In your Ada dashboard, navigate to Simulations.
  2. On the Simulations page, select one or more test cases on the left.
  3. Click Test and wait for the test run to complete.

Review results

Test results provide visibility into how the AI Agent performed against each expected outcome. Results include pass/fail status, evaluation rationale, and details about which tools were referenced by the AI Agent to generate its response.

Each test case displays an overall pass/fail status based on whether the AI Agent’s response met all defined expected outcomes. Individual criterion results are also available, allowing you to identify which specific expectations passed or failed.

Clicking into a test case reveals additional context:

  • Conversation transcript: The full multi-turn exchange between the simulated end user and the AI Agent, including every message from both sides.

  • Audio playback (Voice channel only): A simulated audio recording of the AI Agent’s and simulated end user’s turns. The AI Agent’s responses are generated using the configured speaking voice for the selected language, while the simulated end user uses one of four default voices.

  • Evaluation rationale: An explanation for each criterion judgment, describing why the response passed or failed.

  • Generative entities used: A list of Knowledge, Actions, Playbooks, and other configuration elements that contributed to the response.

Review test results

To review test results:

  1. In your Ada dashboard, navigate to Simulations.
  2. Select a completed test run to view the results.
  3. Select a test case to see the response details, evaluation results, and rationale.

Improvement actions

Failed test cases highlight areas where the AI Agent’s behavior does not meet expectations. The results provide the context needed to diagnose issues and make targeted improvements.

Evaluation rationale

Each test result includes an evaluation rationale that explains why each criterion passed or failed. The rationale provides insight into the AI Agent’s reasoning and helps identify whether the issue stems from missing Knowledge, incorrect Action behavior, Playbook logic, or other configuration.

Test results include direct links to the generative entities—such as Knowledge articles, Actions, or Playbooks—that contributed to the response. These links provide quick access to the relevant configuration, making it easier to locate and update the source of an issue.

Iterative improvement

Re-running test cases after making changes confirms whether updates resolved the issue. This cycle of testing, diagnosing, and improving supports continuous refinement of AI Agent behavior over time.

These features complement Simulations and support AI Agent optimization:

  • Interactive Testing: Test your AI Agent in real time by chatting with it directly, simulating different user types with variables.

  • MCP Server: Retrieve test cases, test run results, and quota information, or export test data as CSV through a connected AI assistant.
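Once exported through the MCP Server, CSV results can be analyzed with standard tooling. The snippet below assumes a hypothetical column layout (`test_case`, `criterion`, `status`) purely for illustration; the actual export shape may differ, so check your CSV headers first:

```python
import csv
import io

# Hypothetical export shape for illustration; not the guaranteed Ada format.
sample = """test_case,criterion,status
Refund policy accuracy,States the correct refund window,pass
Refund policy accuracy,Offers alternatives if declined,fail
Order status uses lookup Action,Uses the order lookup Action,pass
"""

def failing_cases(csv_text: str) -> set[str]:
    """Names of test cases with at least one failed criterion."""
    rows = csv.DictReader(io.StringIO(csv_text))
    return {row["test_case"] for row in rows if row["status"] == "fail"}

print(failing_cases(sample))  # {'Refund policy accuracy'}
```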