Welcome to Ada’s release notes. Scroll down to see a list of recent releases.
You can subscribe to updates with our RSS feed, or sign up to get news about updates directly in your inbox.
Sign up for email notifications
At the end of every week that has had at least one feature release, we’ll send you an email on Friday at 11 a.m. Eastern to let you know about our last few releases.
Edit sync frequency and name for existing web sources
You can now edit the sync frequency and name of existing Web Import sources without needing to delete and recreate them.
Previously, these settings could only be configured during initial setup. Now you can:
- Change sync frequency between Daily, Weekly, or Never for any existing web import
- Rename knowledge sources for better organization
To update an existing web import, navigate to the Knowledge section in your dashboard, locate the web import you want to modify, and click the settings icon.
Learn more in our Web Import documentation.
View step-by-step reasoning for Playbook decisions
AI Managers can now see the reasoning behind each Playbook step selection directly in the conversation view. Click the information (ℹ) icon beside any Playbook event to see why the AI Agent chose each step.
Learn more in Playbook management.
Increased per-day rate limits for End Users and Conversations APIs
The per-day rate limit for the End Users API and Conversations API has been doubled from 30,000 to 60,000 requests per day.
This applies to:
- All /v2/end-users/<end_user_id> endpoints
- POST /v2/conversations/, GET /v2/conversations/<conversation_id>/, and POST /v2/conversations/<conversation_id>/end/
Per-minute (300) and per-second (30) limits are unchanged. No changes are required on your end.
For full rate limit details, see the End Users API and Conversations API documentation.
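No client changes are needed to benefit from the higher ceiling, but if you pace a high-volume integration client-side, note which tier actually binds. A back-of-the-envelope sketch (this helper is illustrative, not part of any Ada SDK; the tier numbers are the ones quoted above):

```python
# Minimal pacing sketch: find the smallest delay between requests that
# stays under every rate-limit tier at once. Numbers are the published
# tiers: 30/second, 300/minute, 60,000/day.
def min_interval_seconds(per_second: int, per_minute: int, per_day: int) -> float:
    """Smallest per-request delay that respects all three tiers."""
    return max(1 / per_second, 60 / per_minute, 86_400 / per_day)

# Even after the increase, the daily tier is the binding constraint for
# a client running flat-out around the clock: 86,400 s / 60,000 = 1.44 s.
ADA_INTERVAL = min_interval_seconds(30, 300, 60_000)
```

In other words, a single client saturating the API 24/7 should still average no more than one request every 1.44 seconds, even though short bursts of up to 30 requests per second remain allowed.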
Configure sync frequency for web sources
Web Import sources now support a configurable sync frequency. When adding a website source, choose how often Ada automatically re-syncs its content: Daily, Weekly, or Never. Weekly is the default for all new sources.
Learn more in our Web Import documentation.
Real-time message data in the Data Export API
The Messages endpoint in the Data Export API (v2) now returns data in near real time. Previously, message data took 24–48 hours to become available — now messages are typically available within seconds of being created.
This completes the near-real-time migration of the Data Export API. Both the Conversations endpoint and the Messages endpoint now deliver data within seconds, rather than hours.
If you’re already on v2, no changes are required on your end.
For more details, see the Data Export API documentation.
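With the 24–48 hour delay gone, a consumer can poll the Messages endpoint on a tight interval and ask only for records newer than its cursor. A minimal sketch, assuming a `created_since` query parameter and an example URL (both hypothetical; check the Data Export API reference for the exact paths and parameter names):

```python
import datetime

# Hypothetical endpoint URL for illustration only.
EXPORT_MESSAGES_URL = "https://example.ada.support/api/v2/export/messages"

def build_poll_params(last_seen: datetime.datetime) -> dict:
    """Query params requesting only messages newer than our cursor.

    The "created_since" parameter name is an assumption for this sketch.
    """
    return {"created_since": last_seen.isoformat(timespec="seconds") + "Z"}

params = build_poll_params(datetime.datetime(2026, 2, 1, 12, 0, 0))
# params == {"created_since": "2026-02-01T12:00:00Z"}
```

Each poll cycle would issue a GET with these params and then advance the cursor to the newest timestamp returned.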
Basic UI conversation view deprecated
The legacy Basic UI in the conversation view has been deprecated. The Coaching UI is now the only conversation view experience going forward. No action is required — all existing functionality remains available in the Coaching UI.
New filters for targeted analysis in MCP Server
We’ve added new filters to the MCP Server, unlocking more ways to slice conversation data for targeted analysis.
For example, you can now ask your AI assistant:
“Can you review unresolved Voice conversations from last week that used the ‘Update booking’ playbook and let me know where I should focus my improvement efforts?”
New filters include Topics, Playbooks, Coaching, Language, Channel, Browser, Device, Status Code, and Test User.
Learn more in our MCP Server documentation.
Real-time conversation data in the Data Export API
The Conversations endpoint in the Data Export API (v2) now returns data in real time. Previously, conversation data took 24–48 hours to become available — now you can query for conversations as soon as they are created or updated.
If you’re already on v2, no changes are required on your end. You can simply start fetching data earlier if you’d like to take advantage of the reduced latency.
For more details, see the Data Export API documentation.
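If you do want to take advantage of the lower latency, one common pattern is incremental polling: keep a cursor and advance it past every conversation you have already seen. A sketch under the assumption that each record carries an ISO-8601 `updated_at` field (the real response shape is defined in the Data Export API reference):

```python
# Incremental-polling cursor sketch. The "updated_at" field name is an
# assumption for illustration.
def next_cursor(page: list, current: str) -> str:
    """Advance the cursor to the newest timestamp seen so far.

    ISO-8601 strings in a uniform format sort chronologically,
    so plain max() is enough; an empty page leaves the cursor alone.
    """
    return max([c["updated_at"] for c in page] + [current])

cursor = "2026-02-01T09:00:00Z"
page = [
    {"updated_at": "2026-02-01T10:00:00Z"},
    {"updated_at": "2026-02-01T10:05:00Z"},
]
cursor = next_cursor(page, cursor)  # -> "2026-02-01T10:05:00Z"
```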
Conversations API now supports native Voice
The Conversations API now provides webhook events and channel details for Ada’s native Voice channel, bringing Voice to parity with Chat and Email.
You can now:
- Retrieve Voice channel details using GET /v2/channels with type=native
- Subscribe to conversation webhooks for Voice calls:
  - v1.conversation.created: triggered when a Voice call begins
  - v1.conversation.message: triggered for each message from end users, the AI Agent, or human agents
  - v1.conversation.ended: triggered when a call disconnects (caller hangup, bot-initiated end, or handoff)
This enables real-time integrations such as triggering third-party CSAT surveys when calls end, syncing Voice conversations to your helpdesk, or building unified analytics across all channels.
Voice webhook support requires the Unified Reasoning Engine, which is rolling out to all bots throughout February 2026.
To learn more, see the Conversations API overview and webhook documentation.
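As a concrete starting point, a webhook receiver only needs to accept POSTs and branch on the event name. The event names below come from this release note, but the payload shape (a top-level "name" key) and the CSAT hook are illustrative assumptions; consult the webhook documentation for the real payload contract:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def handle_event(event: dict) -> str:
    """Route an Ada conversation webhook by its event name.

    The top-level "name" key is an assumed payload shape for this sketch.
    """
    name = event.get("name")
    if name == "v1.conversation.created":
        return "call-started"
    if name == "v1.conversation.message":
        return "message-logged"
    if name == "v1.conversation.ended":
        # e.g. kick off a third-party CSAT survey for the ended call here
        return "csat-triggered"
    return "ignored"

class WebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        handle_event(json.loads(self.rfile.read(length) or b"{}"))
        self.send_response(204)  # acknowledge quickly; do real work async
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("", 8080), WebhookHandler).serve_forever()
```

Returning a success status promptly and deferring slow work (helpdesk sync, analytics writes) to a queue is the usual design choice, so call-end events are acknowledged even when downstream systems are slow.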
Testing at Scale Launch
Introducing Testing at Scale, a new dashboard tool that helps AI Managers confidently improve their AI Agent by simulating and evaluating key customer scenarios in bulk.
As you continue to grow your AI Agent’s capabilities, even small configuration changes can have unintended impacts on performance. Testing at Scale gives you a reliable way to understand how your AI Agent is behaving — before deployment and as it continues to change in production — without relying on manual audits of live conversations or manual one-off testing to surface issues.
What’s new
With Testing at Scale, you can:
- Create benchmark test cases that represent key customer scenarios, including the end user inquiry, language, channel, relevant context (variables), and criteria for what a “good” response includes
- Run test cases at scale to simulate how your AI Agent would respond under real-world conditions
- Automatically evaluate results, with clear pass/fail outcomes and explanations for each result
- Catch regressions early by re-running test cases after configuration changes to ensure intended improvements don’t negatively impact other scenarios
- Understand where to focus your improvement efforts
Why it matters
Testing at Scale replaces manual, one-off testing and reactive investigation with a scalable, repeatable evaluation workflow. It helps teams:
- Ship changes with confidence
- Detect failures introduced by changes earlier
- Maintain consistent performance checks as use case coverage expands
- Run routine audits and safety checks on sensitive-topic scenarios
Getting started
You can access Testing at Scale from the new Testing page in the Ada dashboard. Start by creating test cases for your most important customer scenarios, then run them in bulk to establish a performance baseline and guide future improvements.
Learn more about how this feature works and its limitations here.
Currently available for Web Chat and Email channels with single-turn conversation testing.