Welcome to Ada’s release notes. Scroll down to see a list of recent releases.

You can subscribe to updates with our RSS feed, or sign up to get news about updates directly in your inbox.

At the end of every week that includes at least one feature release, we’ll send you an email on Friday at 11 a.m. Eastern to let you know about that week’s releases.

Real-time conversation data in the Data Export API

The Conversations endpoint in the Data Export API (v2) now returns data in real time. Previously, conversation data took 24–48 hours to become available — now you can query for conversations as soon as they are created or updated.

If you’re already on v2, no changes are required on your end. You can simply start fetching data earlier if you’d like to take advantage of the reduced latency.
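
As an illustration, here’s a minimal sketch of polling the Conversations endpoint shortly after conversations are created or updated. The base URL, endpoint path, query parameter, and authentication header below are placeholders, not the documented API surface — check the Data Export API documentation for the exact values.

```python
import os
from datetime import datetime, timedelta, timezone

import requests

# Placeholder values: substitute your own instance URL, API key, and the
# endpoint path and parameters documented for the Data Export API (v2).
BASE_URL = "https://example.ada.support/api/v2"  # hypothetical base URL
API_KEY = os.environ["ADA_API_KEY"]


def fetch_recent_conversations(minutes: int = 5) -> list[dict]:
    """Fetch conversations created or updated in the last few minutes.

    With real-time availability there's no need to wait 24-48 hours before
    querying; recently created conversations can be fetched right away.
    """
    since = datetime.now(timezone.utc) - timedelta(minutes=minutes)
    response = requests.get(
        f"{BASE_URL}/conversations",                    # hypothetical path
        headers={"Authorization": f"Bearer {API_KEY}"},
        params={"updated_since": since.isoformat()},    # hypothetical parameter
        timeout=30,
    )
    response.raise_for_status()
    return response.json().get("data", [])


if __name__ == "__main__":
    for conversation in fetch_recent_conversations():
        print(conversation.get("id"))
```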

Real-time availability for the Messages endpoint is coming soon. In the meantime, message data continues to take 24–48 hours to be fully ingested.

For more details, see the Data Export API documentation.


Conversations API now supports native Voice

The Conversations API now provides webhook events and channel details for Ada’s native Voice channel, bringing Voice to parity with Chat and Email.

You can now:

  • Retrieve Voice channel details using GET /v2/channels with type=native
  • Subscribe to conversation webhooks for Voice calls:
    • v1.conversation.created — triggered when a Voice call begins
    • v1.conversation.message — triggered for each message from end users, the AI Agent, or human agents
    • v1.conversation.ended — triggered when a call disconnects (caller hangup, bot-initiated end, or handoff)

This enables real-time integrations such as triggering third-party CSAT surveys when calls end, syncing Voice conversations to your helpdesk, or building unified analytics across all channels.
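
For example, a small webhook receiver could listen for v1.conversation.ended events on the Voice channel and kick off a post-call CSAT survey. The sketch below uses Flask, and the payload field names (type, data.conversation_id, data.channel) and the survey call are illustrative assumptions; refer to the webhook documentation for the actual event schema.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)


def trigger_csat_survey(conversation_id: str) -> None:
    # Placeholder for your own integration, e.g. a call to a third-party
    # survey provider keyed by the Ada conversation ID.
    print(f"Triggering CSAT survey for conversation {conversation_id}")


@app.post("/ada/webhooks")
def handle_ada_webhook():
    event = request.get_json(force=True)

    # The event names come from the Conversations API; the payload field
    # names used here are assumptions, so check the webhook docs for the
    # real schema before relying on them.
    event_type = event.get("type")
    data = event.get("data", {})

    if event_type == "v1.conversation.ended" and data.get("channel") == "voice":
        trigger_csat_survey(data.get("conversation_id"))

    return jsonify({"ok": True}), 200


if __name__ == "__main__":
    app.run(port=8080)
```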

Voice webhook support requires the Unified Reasoning Engine, which is rolling out to all bots throughout February 2026.

To learn more, see the Conversations API overview and webhook documentation.


Testing at Scale Launch

Introducing Testing at Scale, a new dashboard tool that helps AI Managers confidently improve their AI Agent by simulating and evaluating key customer scenarios in bulk.

As you continue to grow your AI Agent’s capabilities, even small configuration changes can have unintended impacts on performance. Testing at Scale gives you a reliable way to understand how your AI Agent is behaving, both before deployment and as it continues to change in production, without relying on manual audits of live conversations or one-off testing to surface issues.

What’s new

With Testing at Scale, you can:

  • Create benchmark test cases that represent key customer scenarios, including the end user inquiry, language, channel, relevant context (variables), and criteria for what a “good” response includes
  • Run test cases at scale to simulate how your AI Agent would respond under real-world conditions
  • Automatically evaluate results, with clear pass/fail outcomes and explanations for each result
  • Catch regressions early by re-running test cases after configuration changes to ensure intended improvements don’t negatively impact other scenarios
  • Understand where to focus your improvement efforts

Why it matters

Testing at Scale replaces manual, one-off testing and reactive investigation with a scalable, repeatable evaluation workflow. It helps teams:

  • Ship changes with confidence
  • Detect failures introduced by changes earlier
  • Maintain consistent performance checks as use case coverage expands
  • Run routine audits and safety checks on sensitive-topic scenarios

Getting started

You can access Testing at Scale from the new Testing page in the Ada dashboard. Start by creating test cases for your most important customer scenarios, then run them in bulk to establish a performance baseline and guide future improvements.

Learn more about how this feature works and its limitations here.

Testing at Scale is currently available for the Web Chat and Email channels and supports single-turn conversation testing.


Introducing a Refreshed Navigation Experience

We’ve redesigned the Ada dashboard navigation to make it easier to find what you need and to better support how you work with your AI Agent.

What’s new

  • Streamlined primary navigation that emphasizes monitoring and insights — Home, Analytics, Conversations, and the new Testing feature now have clear top-level visibility
  • Consolidated secondary navigation for build-focused areas like Knowledge, Actions, Playbooks, and Settings — everything is now easier to scan without hunting through nested menus

Why we made this change

As our product has grown, we’ve updated the navigation to better reflect how AI Managers work day to day. The new structure makes it easier to monitor performance and take targeted action, while keeping the experience clear and flexible as the product continues to evolve.


Unified Reasoning Engine

A major upgrade to the AI architecture is rolling out over the course of February 2026. The Unified Reasoning Engine replaces multiple channel-specific systems with a single, shared intelligence layer across Voice, Messaging, and Email.

What’s new

  • Faster FAQ responses: Simple questions and chitchat in Messaging now receive faster responses.
  • Improved language detection: More accurate language detection in Messaging and Email channels.
  • Multi-message support: Messaging users can send multiple messages in succession and receive a single, coherent reply.
  • Coaching on Voice: Coaching is now applied to Voice conversations.
  • Playbooks on Voice: Playbooks can now run in Voice conversations. Performance optimizations for Voice are ongoing; talk to your Customer Solutions Consultant (CSC) about the right timing for your use cases.

Rollout timeline

The Unified Reasoning Engine will roll out gradually throughout February 2026, starting with 1% of customer conversations the week of February 2. No action is required; the upgrade will be applied automatically.

Preparing for the rollout

Voice customers: Review your existing Playbooks and Coaching items. If you don’t want certain Playbooks or Coaching available in Voice, add an availability rule (channel ≠ Voice). If you’re planning to deploy Playbooks on Voice, a phased approach is recommended—your CSC can help you evaluate which Playbooks are ready now and which may benefit from upcoming performance improvements.

Why this matters

This foundational upgrade delivers more consistent AI behavior across all channels, reduces response latency for straightforward inquiries, and enables Voice customers to benefit from Coaching and Playbooks for the first time.


Same-thread email replies for Zendesk Ticketing handoffs

With the SMTP Connector configured, human agent replies to email conversations escalated via the Zendesk Ticketing block can now continue in the same email thread as the original end-user conversation. This provides a more consistent experience for end users and reduces confusion caused by broken or split threads.

Requirements:

  • The SMTP Connector must be configured for this behavior.
  • Without the SMTP Connector, human agents must use a different email address than the AI Agent’s BYOD address, and replies are sent in a separate thread.
  • If using the Ticket Recipient field with a different email address than the AI Agent’s BYOD address, SMTP must be configured for both addresses. Both addresses must be on the same domain.

Learn more about configuring Zendesk Ticketing and the SMTP Connector.


Option to disable multiple-participant conversations in Email

Added a toggle to disable multiple-participant conversations in the Email channel settings.

By default, the AI Agent replies to all participants in an email thread and accepts messages from anyone included in the To or CC fields. With this update, AI Managers can now opt out of this behavior:

  • Toggle ON (default): The Agent replies to all participants in the same thread, and everyone has visibility into the conversation.
  • Toggle OFF: The Agent replies only to the original sender. If another participant replies, the Agent starts a separate conversation with them.

To configure this setting, go to Channels > Email > Customization and locate the Include CC’d participants in responses toggle.


Introducing Linked Playbooks

Playbooks can now call other Playbooks to create modular, reusable workflows. Call any child Playbook from the Playbook editor with the @ option. Linked Playbooks always return to the parent Playbook when complete, enabling shared steps like authentication or eligibility checks without duplication.

Learn more about Linked Playbooks in Playbook management.


Connect Ada to AI assistants with MCP Server

You can now connect Ada to AI assistants like Claude Desktop and ChatGPT using the Model Context Protocol (MCP). This lets you query and analyze your AI agent data conversationally—ask questions like “Can you review my low CSAT convos from last week and let me know where I should focus my improvement efforts?” and get actionable insights without navigating dashboards.

What you can do

  • Analyze hundreds of conversations at once to identify root causes and themes
  • Get improvement recommendations based on conversation patterns
  • Create visualizations like trend charts and Sankey diagrams

See our prompt library for more examples.

Getting started

MCP Server supports OAuth (recommended) or API key authentication, and works with any MCP-compatible assistant.
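
If you’re connecting from your own MCP-compatible tooling rather than a desktop assistant, the sketch below shows the general shape of a connection using the official MCP Python SDK with API key authentication. The server URL, auth header, and transport shown here are assumptions, not documented values; see the MCP Server documentation for the actual connection details.

```python
import asyncio
import os

from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

# Placeholder values: the server URL and auth header are assumptions;
# use the connection details from the MCP Server documentation.
SERVER_URL = "https://example.ada.support/mcp"
API_KEY = os.environ["ADA_API_KEY"]


async def main() -> None:
    async with streamablehttp_client(
        SERVER_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
    ) as (read_stream, write_stream, _):
        async with ClientSession(read_stream, write_stream) as session:
            await session.initialize()
            # List the tools the MCP Server exposes to a connected assistant.
            tools = await session.list_tools()
            for tool in tools.tools:
                print(tool.name)


asyncio.run(main())
```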

Learn more in our MCP Server documentation.


January 29, 2026

Email CCs for Zendesk Handoffs

An optional Email CCs input field is now available on the Zendesk Ticketing block. This allows CC’d recipients from an email conversation to be passed directly into a Zendesk ticket at handoff, without requiring a custom field in Zendesk.

When configured with the email_latest_CC_list variable, all CC’d participants are automatically included on the ticket, ensuring they receive replies from human agents after handoff.

Learn more about configuring email handoffs in Zendesk in the Zendesk Ticketing block documentation and Email handoffs.