Welcome to Ada’s release notes. Scroll down to see a list of recent releases, or subscribe to get notified about updates.
Subscribe via email
Receive a weekly email summary of releases every Friday at 11 a.m. Eastern, provided there has been at least one release that week.
Subscribe via RSS
Copy the following URL into your RSS reader to get notified about new releases:
Conversations API now supports native Voice
The Conversations API now provides webhook events and channel details for Ada’s native Voice channel, bringing Voice to parity with Chat and Email.
You can now:
- Retrieve Voice channel details using `GET /v2/channels` with `type=native`
- Subscribe to conversation webhooks for Voice calls:
  - `v1.conversation.created` — triggered when a Voice call begins
  - `v1.conversation.message` — triggered for each message from end users, the AI Agent, or human agents
  - `v1.conversation.ended` — triggered when a call disconnects (caller hangup, bot-initiated end, or handoff)
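The channel lookup above can be sketched in Python. Note the bot-handle base URL and Bearer-token header are assumptions for illustration, not confirmed details — check Ada's API reference for the exact host and auth scheme:

```python
from urllib.request import Request

# Build (but don't send) the channel-details request. The
# "{bot_handle}.ada.support" base URL and Bearer auth are assumptions.
def build_channels_request(bot_handle: str, api_key: str) -> Request:
    url = f"https://{bot_handle}.ada.support/api/v2/channels?type=native"
    return Request(url, headers={"Authorization": f"Bearer {api_key}"})

req = build_channels_request("example-bot", "YOUR_API_KEY")
print(req.full_url)
# https://example-bot.ada.support/api/v2/channels?type=native
```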
This enables real-time integrations such as triggering third-party CSAT surveys when calls end, syncing Voice conversations to your helpdesk, or building unified analytics across all channels.
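A receiver for these events might route them like the minimal sketch below. The payload field names (`type`, `data.channel`) are illustrative assumptions, not Ada's documented schema — consult the webhook documentation for the authoritative shape:

```python
import json

# Event names from the list above; payload field names are assumptions.
VOICE_EVENTS = {
    "v1.conversation.created",
    "v1.conversation.message",
    "v1.conversation.ended",
}

def handle_event(raw_body: str) -> str:
    """Route a conversation webhook event; returns the action taken."""
    event = json.loads(raw_body)
    event_type = event.get("type")
    if event_type not in VOICE_EVENTS:
        return "ignored: unrecognized event type"
    # "data.channel" is a hypothetical field used here for illustration.
    if event.get("data", {}).get("channel") != "voice":
        return "ignored: not a Voice conversation"
    if event_type == "v1.conversation.ended":
        # e.g. trigger a third-party CSAT survey for the ended call
        return "call ended: trigger CSAT survey"
    return f"received {event_type}"

sample = json.dumps({
    "type": "v1.conversation.ended",
    "data": {"channel": "voice"},
})
print(handle_event(sample))  # call ended: trigger CSAT survey
```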
Voice webhook support requires the Unified Reasoning Engine, which is rolling out to all bots throughout February 2026.
To learn more, see the Conversations API overview and webhook documentation.
Testing at Scale Launch
Introducing Testing at Scale, a new dashboard tool that helps AI Managers confidently improve their AI Agent by simulating and evaluating key customer scenarios in bulk.
As you continue to grow your AI Agent’s capabilities, even small configuration changes can have unintended impacts on performance. Testing at Scale gives you a reliable way to understand how your AI Agent is behaving — before deployment and as it continues to change in production — without relying on manual audits of live conversations or one-off spot checks to surface issues.
What’s new
With Testing at Scale, you can:
- Create benchmark test cases that represent key customer scenarios, including the end user inquiry, language, channel, relevant context (variables), and criteria for what a “good” response includes
- Run test cases at scale to simulate how your AI Agent would respond under real-world conditions
- Automatically evaluate results, with clear pass/fail outcomes and explanations for each result
- Catch regressions early by re-running test cases after configuration changes to ensure intended improvements don’t negatively impact other scenarios
- Understand where to focus your improvement efforts
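As an illustration only (test cases are created in the Ada dashboard, not in code), the fields a benchmark test case captures could be mirrored in a structure like this; every field name and value here is hypothetical:

```python
# Hypothetical mirror of a benchmark test case's fields: the end user
# inquiry, language, channel, context variables, and pass criteria.
test_case = {
    "inquiry": "Where is my order? I placed it a week ago.",
    "language": "en",
    "channel": "web_chat",
    "variables": {"customer_tier": "premium"},  # relevant context
    "criteria": [
        "Asks for or confirms the order number",
        "Offers to check the shipping status",
    ],
}
print(sorted(test_case))
```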
Why it matters
Testing at Scale replaces manual, one-off testing and reactive investigation with a scalable, repeatable evaluation workflow. It helps teams:
- Ship changes with confidence
- Detect failures introduced by changes earlier
- Maintain consistent performance checks as use case coverage expands
- Run routine audits and safety checks on sensitive-topic scenarios
Getting started
You can access Testing at Scale from the new Testing page in the Ada dashboard. Start by creating test cases for your most important customer scenarios, then run them in bulk to establish a performance baseline and guide future improvements.
Learn more about how this feature works and its limitations here.
Currently available for Web Chat and Email channels with single-turn conversation testing.