Developer guide
This guide will take you on a hands-on journey through Ada’s Conversations API, showing you exactly how to build, run, and understand a custom channel integration from start to finish.
If you’ve already reviewed the Getting started guide, you’ll notice this goes much deeper. Here, we’ll not only show you what to do, but explain why each step matters, what’s happening under the hood, and how to extend it into production.
Rate limits and retries
Ada’s APIs include rate limits to ensure consistent performance and reliability. You’re unlikely to hit these during normal development, but it’s good practice to handle HTTP 429 Too Many Requests responses and implement retry logic. For more details, see Making your integration production-ready at the end of this guide.
Setting up
Before we dive into the code, let’s set up everything you need to run our demo locally and connect your environment to Ada’s Conversations API. This ensures your local app can authenticate with Ada, create conversations, and receive webhook events in real time, just like a production integration would.
Obtain an Ada API key
If you don’t already have one, generate a new API key in the Ada Dashboard.
This key lets your integration securely communicate with Ada’s Conversations API.
Clone the demo repo
Our demo repository contains a working example of how to connect to the Conversations API, send and receive messages, and handle webhook events. You’ll use it both as a reference and a sandbox for testing your integration.
Run these commands in your Terminal. This will:

- Download the full demo project to your local machine.
- Change into the project directory so you can start working inside it.
What's inside the repo?
Once cloned, take a moment to look around the folder structure. The key files and directories you’ll work with are:
How it works
This demo is built around a simple event flow:
- A user sends a message through the local UI or via API call.
- The backend (in ada_api.py) forwards that message to Ada’s Conversations API.
- Ada responds asynchronously via a webhook (handled in webhooks.py).
- The message gets rendered back into the local chat interface or forwarded to another platform like Slack.
Behind the scenes
Create a tunnel for Ada's webhooks
Important: This step applies to local testing only. In production, your webhook endpoint should be hosted on a publicly accessible, HTTPS-secured domain that Ada can reach directly.
Ada sends all AI Agent responses and conversation updates as webhooks, HTTP callbacks that your integration needs to receive in real time.
Because your computer isn’t publicly accessible during local development, Ada can’t reach localhost directly. To bridge that gap, you can use a tunneling tool. One option is ngrok, which creates a secure, temporary public URL that forwards requests to your local server.
Run ngrok
Run this in a new terminal window:
This creates a secure public URL that forwards requests to your local FastAPI server running on port 8080. If everything is working as expected, you will see a newly created forwarding URL, for example: https://1234-56-78-90.ngrok-free.app.
That’s the public address Ada will use to send webhook requests to your local app.
Keep ngrok running
Keep this terminal window open while you’re testing. If you close it, your tunnel (and webhook connectivity) will stop.
If you restart ngrok, it will generate a new URL. You’ll need to update the endpoint in the Ada Dashboard whenever that happens.
Your full webhook endpoint URL is the forwarding URL plus your route, for example: https://1234-56-78-90.ngrok-free.app/webhooks/message.

Configure Conversations API webhooks
Ada delivers AI Agent responses and conversation events to your integration through webhooks. These are secure HTTP callbacks that notify your app when something happens in Ada, such as a new message or a conversation ending.
To protect your integration, you need a way to ensure those webhook requests really come from Ada and not from another source trying to mimic them. Ada provides a Signing Secret for each webhook endpoint, which your service can use to verify the authenticity of every incoming request.
Add a webhook endpoint
- In the Ada Dashboard, go to Platform > Webhooks > Endpoints.
- Create a new endpoint, or open an existing one if you already have one configured.
Set your endpoint URL
- Make sure the endpoint points to either the temporary public URL that forwards requests to your local server (via ngrok if you’re testing locally) or to your production webhook URL, and that it matches the route your server listens on. For example: https://1234-56-78-90.ngrok-free.app/webhooks/message. This URL corresponds to the /webhooks/message route defined in your demo app (app/webhooks.py).
Subscribe to Conversations API events
On the Endpoints tab, under Subscribe to events, make sure to include the Conversations API events. You’ll find them under the v1 > conversation category:

- v1.conversation.message: Triggers when a message is sent or received.
- v1.conversation.created: Triggers when a new conversation starts.
- v1.conversation.ended: Triggers when a conversation closes.

These events ensure your integration receives real-time updates for every conversation and message handled by Ada.
Obtain the Signing Secret
- In the right-side navigation panel, locate the Signing Secret.
- Reveal and copy the value. You will need it in the next step when updating your .env file: WEBHOOK_SECRET=<your-signing-secret>. Your integration will use this secret to verify that all incoming webhook requests originate from Ada.
Set up your .env file
The .env file is where you’ll store the core configuration values that connect your local demo to your Ada instance. Think of it as the bridge between your code and the Ada platform: it tells your local environment which AI Agent to talk to, how to authenticate, what channel to use, and how to verify incoming webhook requests.
Locate and duplicate the template
In the demo repository, you’ll find a template called .env.example. This file includes all the configuration keys you’ll need. Start by duplicating it so you can edit your own version:
Open and review the file
Now open your newly created .env file in your editor. It will look something like this:
Start updating your .env file
Here’s what each of these values means and how to update them:
- ADA_BASE_URL: The base URL for your Ada instance, consisting of your agent’s handle and your organization’s domain. For example: ADA_BASE_URL=https://example.ada.support.
- ADA_API_KEY: The Ada API key you generated in your Ada Dashboard under Platform > APIs. This authenticates every API request to Ada. Treat it like a password: never commit it to Git.
- ADA_CHANNEL_ID: The unique ID of the custom channel your app will use to create and manage conversations. You don’t have this yet. We’ll create the channel in Step 1, then come back to this file and fill it in. For now, leave it blank: ADA_CHANNEL_ID=.
- WEBHOOK_SECRET: The signing secret Ada uses to verify webhook requests. Use the value obtained from the webhook endpoint you created in the previous step in the Ada Dashboard (Platform > Webhooks > Endpoints > Signing Secret).
Sample .env file
After you’ve filled in the available values, your .env file should look something like this:
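Based on the keys described above, a populated file might look like the following sketch. The values are placeholders, not real credentials, and ADA_CHANNEL_ID stays blank until Step 1:

```
ADA_BASE_URL=https://example.ada.support
ADA_API_KEY=<your-api-key>
ADA_CHANNEL_ID=
WEBHOOK_SECRET=<your-signing-secret>
```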
Step 1: Create a custom channel
A custom channel is the connection point between Ada and your integration. It tells Ada where conversations will take place and how messages should flow between your AI Agent and your external system. In other words, you’re creating a bridge that lets Ada talk to your app.
Create your custom channel using the Create a new channel endpoint.
When to create a custom channel
You only need to create a custom channel once per integration, not every time your service runs. Typically, this happens during initial setup or deployment, when you’re wiring up Ada to a new platform or system.
If you’re connecting Ada to multiple external systems (for example, Slack, SMS, or a web chat widget), you’ll create a separate channel for each integration.
Once a channel is created, you’ll reuse its unique channel ID in all future API calls that reference this integration. For example, when starting conversations or sending messages.
What to include in the request
To create a channel, call the Create a new channel endpoint and include the applicable fields in the request. At a minimum, you’ll need to provide a name, description, and modality for your new channel.
Sample request
This example shows the minimal HTTP request needed to create a new custom channel in Ada using the Conversations API.
Replace <handle> with your Ada handle (the same subdomain defined in your .env file). In the Authorization header, include the <your-api-key> value you generated earlier to authenticate your request.
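As a sketch, the same request can be made with the Python requests library. The /api/v2/channels path, the "messaging" modality value, and the flat response shape are assumptions based on this guide; check the Create a new channel endpoint reference for the exact details.

```python
import requests

def create_channel(base_url: str, api_key: str, name: str, description: str,
                   modality: str = "messaging", session=None) -> str:
    """Create a custom channel and return its ID.

    The endpoint path and payload fields below are assumptions; confirm
    them against the Create a new channel endpoint reference.
    """
    http = session or requests.Session()
    resp = http.post(
        f"{base_url}/api/v2/channels",  # hypothetical path
        headers={"Authorization": f"Bearer {api_key}"},
        json={"name": name, "description": description, "modality": modality},
        timeout=10,
    )
    resp.raise_for_status()
    # The channel ID location may differ; check the full response shape.
    return resp.json()["id"]
```

You would call this once during setup and store the returned ID as ADA_CHANNEL_ID.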
Keep the name and description clear and unique. They’ll help you identify this integration later, especially if you manage multiple channels.

See what Ada returns
If the call is successful, Ada returns a JSON response with the details of your new channel. Here’s the field that matters most for this step:
Sample response
Example abbreviated for clarity. See the full response here.
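Only the field discussed in this step is sketched below; the real response contains more (the ID shown is a placeholder):

```json
{
  "id": "68f25a35072ad87710c1d96b"
}
```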
This id is your channel ID. You’ll need it for all future Conversations API calls. Once added to your configuration, your app will automatically use this channel for every conversation and message it creates.
Finalize your .env file
Now that you’ve created your custom channel and received its ID, you can complete your .env configuration.
Open the file and update the ADA_CHANNEL_ID value using the id returned in Ada’s response. For example: ADA_CHANNEL_ID=68f25a35072ad87710c1d96b.
Once saved, your app will automatically use this channel for all conversations and messages it creates.
Sample .env file
Your final .env file should now include all the required values:
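With the channel ID from Step 1 filled in, the sketch from earlier becomes (other values remain placeholders):

```
ADA_BASE_URL=https://example.ada.support
ADA_API_KEY=<your-api-key>
ADA_CHANNEL_ID=68f25a35072ad87710c1d96b
WEBHOOK_SECRET=<your-signing-secret>
```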
Step 2: Start a conversation
Before you can send or receive messages, you need a conversation in Ada. A conversation acts as the container that holds context — who the user is, what channel they’re on, their current state, and any relevant metadata.
Start a conversation using the Create a new conversation endpoint. When you do that, Ada automatically sends a Greeting message to the user. This initial message is triggered by Ada’s platform APIs each time a conversation is created, even if no user message has been received yet.
When to start a conversation
Start a new conversation when:
- You receive the first message from a user on your custom channel and no active conversation exists yet.
- You want to proactively start a thread or outreach message on behalf of a user.
Each new conversation in Ada establishes the context for all messages that follow. After a conversation starts, all messages from the user, Ada, or a human agent are linked to that conversation ID.
What to include in the request
For this step, you’ll only need to include the following fields when creating a conversation:
- channel_id (required): The custom channel where this conversation will live (from Step 1).
- end_user_id (optional): The ID of a user that already exists in Ada.
  - If you don’t pass it, Ada will create a new end user and return the newly created end_user_id in the response.
  - If you do pass it, Ada will associate the conversation with that user.
For details on all supported parameters, see the endpoint request fields.
Sample request
This example shows the minimal HTTP request needed to start a new conversation. It includes only the required fields and uses standard authentication headers.
Replace <handle>, <your-api-key>, <end_user_id>, and <your-channel-id> with your actual values. The end_user_id field is optional. If omitted, Ada automatically creates a new end user and returns the ID in the response.
Code example
This example uses the standard Python requests library to create a conversation synchronously. It’s the simplest approach: ideal for quick tests, scripts, or integrations where you don’t need asynchronous I/O.
The call includes the required channel_id and, optionally, an end_user_id if you want to associate the conversation with a specific user.
See what Ada returns
If your request succeeds, Ada responds with a JSON payload that includes details about the new conversation.
Here are the key fields you’ll need to keep track of:
Sample response
Example abbreviated for clarity. See the full response here.
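Only the fields discussed below are sketched here, with placeholder values:

```json
{
  "id": "<conversation-id>",
  "channel_id": "<your-channel-id>",
  "end_user_id": "<end-user-id>"
}
```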
- id: The conversation ID. You’ll use this to send messages and end the conversation.
- end_user_id: Identifies the end user participating in the conversation. You’ll need this when sending messages as that user.
Keep both values stored in your system. They’re required for all subsequent API calls, such as sending user messages or ending the conversation.
Step 3: Send messages
Once a conversation is active, your app needs to send user messages to Ada’s AI Agent. This is how you transmit what the user types in your channel so Ada can process it and respond. Each message is tied to a specific conversation (identified by conversation_id) and a specific user (end_user_id).
You can send end users’ messages using the Create a new message endpoint.
When to send a message
Send a message whenever your integration receives input from the user. For example, a chat message, a Slack thread reply, or a mobile text.
Each message you send must belong to an active conversation.
You’ll reference the conversation_id from Step 2 and the end_user_id associated with that conversation.
If the conversation has already ended, start a new one before sending more messages.
Store the conversation_id and end_user_id together so you can easily route messages to the right Ada conversation later.

What to include in the request
Each message must tell Ada who sent it and what was said. At minimum, include the following fields in your request body:
- conversation_id: The ID of the conversation.
- author: The person sending the message, with their role set to end_user.
- content: The message body.
Optional fields such as display_name, avatar, or additional metadata are described in the endpoint request fields.
Sample request
This example shows the minimal HTTP request required to send a user message to Ada within an active conversation and trigger a response from the AI Agent. Replace <handle>, <conversation_id>, <your-api-key>, and <end_user_id> with your actual values.
Code example
This example uses the standard Python requests library and is ideal for quick tests or simple applications where you don’t need concurrency.
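A minimal sketch of that call follows. The URL path (carrying the conversation ID) and the author/content body shape are assumptions pieced together from this guide; check the Create a new message endpoint reference for the exact request format.

```python
import requests

def send_user_message(base_url: str, api_key: str, conversation_id: str,
                      end_user_id: str, body: str, session=None) -> dict:
    """Send an end user's message into an active conversation.

    Path and body shape are assumptions; confirm them against the
    Create a new message endpoint reference.
    """
    http = session or requests.Session()
    resp = http.post(
        f"{base_url}/api/v2/conversations/{conversation_id}/messages",
        headers={"Authorization": f"Bearer {api_key}"},
        json={
            # The author "id" field name is an assumption.
            "author": {"role": "end_user", "id": end_user_id},
            "content": {"type": "text", "body": body},
        },
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()
```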
See what Ada returns
When your request succeeds, Ada returns a JSON response with details about the new message. Here are the fields that matter most for this step:
Sample response
Example abbreviated for clarity. See the full response here.
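Only the fields discussed below are sketched here, with placeholder values:

```json
{
  "id": "<message-id>",
  "conversation_id": "<conversation-id>"
}
```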
- conversation_id: Identifies the conversation that this message belongs to. You’ll use it for all follow-up actions, such as sending additional messages or ending the conversation.
- id: Identifies this specific message. You can use it for logging or debugging if you need to trace individual messages.
Once the message is processed, Ada sends the AI Agent’s response back to your integration as a v1.conversation.message webhook event.
Your integration should listen for that event (using your webhook endpoint) and display the AI Agent’s reply in your channel or UI.
Step 4: Listen for responses via webhooks
Once your app starts sending messages to Ada, you need a way to listen for Ada’s responses. Ada delivers AI Agent messages, conversation updates, and events as webhooks. A webhook is a secure HTTP callback that notifies your integration in real time when something happens.
When Ada sends a webhook, it’s Ada’s way of saying: Here’s something new that happened — a message, an event, or an update!
Your integration listens for these events, verifies that they came from Ada, and responds accordingly, for example, by displaying the AI Agent’s reply in your channel or forwarding it to another system like Slack.
To receive these webhook events, your app needs to expose a publicly accessible endpoint that Ada can post these webhook events to.
- In local development, this endpoint is usually made accessible through a tunneling tool such as ngrok (for example, https://1234-56-78-90.ngrok-free.app/webhooks/message).
- In production, it should live on a public, HTTPS-secured domain (for example, https://api.yourapp.com/webhooks/message).
When Ada sends webhooks
Ada triggers webhooks whenever events occur in a conversation, such as:
- When the AI Agent sends a message (v1.conversation.message)
- When a conversation starts (v1.conversation.created)
- When a conversation ends (v1.conversation.ended)
Your app can subscribe to these events in the Webhooks > Endpoints section of the Ada Dashboard. Once subscribed, Ada will POST event payloads to your configured webhook URL.
What to expect in the webhook payload
Each webhook event includes a JSON payload that represents the event type and associated data.
Sample webhook (AI Agent message)
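A sketch of such a payload follows. The field names (content.body, author.role, the event type) come from this guide, but the exact envelope, including the data wrapper, is an assumption; inspect a real event from your endpoint to confirm the shape:

```json
{
  "type": "v1.conversation.message",
  "data": {
    "id": "<message-id>",
    "conversation_id": "<conversation-id>",
    "author": { "role": "ai_agent" },
    "content": { "type": "text", "body": "Hi! How can I help you today?" }
  }
}
```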
When your app receives this webhook, it can take the message text (content.body) and display it to the user, send it to a connected channel (like Slack), or log it for analytics.
How to handle and verify webhooks
To receive and process webhooks, your app must define an endpoint that matches the URL configured in the Ada Dashboard. This route handles incoming POST requests, verifies that they’re from Ada, and parses the message data.
- If you want to use a language-specific package, you can use the package provided by Svix, as documented here.
- Alternatively, you can verify webhooks without their library by following this manual verification guide.
For more information about how Ada uses webhooks, see this topic.
Code example
The following example shows a sample webhook handler from our demo repository, located in app/webhooks.py.
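The demo’s actual handler isn’t reproduced here, but a minimal sketch of the same idea might look like this. It assumes Ada’s signing follows the standard Svix scheme referenced above (svix-id, svix-timestamp, and svix-signature headers; an HMAC-SHA256 over "<id>.<timestamp>.<payload>" using the base64 secret after the whsec_ prefix). In the demo, a function like handle_webhook would be wired to a FastAPI POST route at /webhooks/message.

```python
import base64
import hashlib
import hmac
import json

def verify_signature(secret: str, headers: dict, payload: bytes) -> bool:
    """Svix-style verification (an assumption; see the manual guide above)."""
    key = base64.b64decode(secret.removeprefix("whsec_"))
    signed = f"{headers['svix-id']}.{headers['svix-timestamp']}.".encode() + payload
    expected = base64.b64encode(hmac.new(key, signed, hashlib.sha256).digest()).decode()
    # The signature header may contain several space-separated "v1,<sig>" entries.
    return any(
        version == "v1" and hmac.compare_digest(sig, expected)
        for version, _, sig in
        (candidate.partition(",") for candidate in headers["svix-signature"].split())
    )

def handle_webhook(secret: str, headers: dict, payload: bytes) -> dict:
    """Verify an incoming webhook, then return the parsed event."""
    if not verify_signature(secret, headers, payload):
        raise PermissionError("Webhook signature verification failed")
    event = json.loads(payload)
    # From here the demo routes the event: render the message in the UI,
    # forward it to Slack, and so on, based on the event type and author role.
    return event
```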
Always verify each request’s signature against your WEBHOOK_SECRET (from the Ada Dashboard) before trusting the data.

Step 5: Buffer and order messages
Ada delivers each message as its own webhook. Network jitter and parallel processing can prevent those webhooks from arriving in chronological order. The fix is simple: buffer briefly, sort by timestamp, then render.
How it works in the demo
In our demo repository, incoming webhook messages are received, sorted, and rendered in order to keep the chat experience real-time and conversational.
- Webhooks are received one by one.
- Each message is stored in a short-lived, in-memory queue per conversation.
- A brief delay (e.g., 1–2 seconds) allows messages to accumulate into a micro-batch.
- The batch is then sorted by timestamp.
- Messages are displayed in the UI in the correct order.
This pattern preserves the conversational flow while still feeling real-time.
Code example
The demo implements a simple in-memory batcher using asyncio. You can find this logic in app/webhooks.py.
What's happening here
- push_message_to_queue() collects incoming webhook messages in a global queue.
- When new messages arrive, any pending batch task is canceled and rescheduled to include the latest messages.
- After a short delay, batch_process_messages() runs, sorting messages by timestamp and sending them to the chat UI for rendering.
- This lightweight batching logic ensures that even if webhook events arrive out of order, messages are displayed in sequence for a smooth, real-time conversation experience.
If you're setting up Slack
When flushing a batch to Slack:
- Use the channel and thread identifier (thread_ts) associated with the Ada conversation id.
- Post in order after sorting.
- For user/bot rendering, map Ada’s author.role to Slack’s display (e.g., a different user name/icon for ai_agent vs. end_user).
Step 6: Render messages in your UI
Now that your integration can send and receive messages, the next step is to display the conversation in your UI or channel so that users see both their own messages and Ada’s responses in real time.
Your integration’s job here is to take the incoming messages (from webhooks) and render them in order, with clear distinction between:
- End-user messages (what the user says)
- AI Agent messages (Ada’s responses)
- Human agent messages (if your system supports Handoffs)
How it works in the demo
In our demo repository, incoming webhook messages are processed and displayed in the local chat UI:
- The webhook payload is received and verified.
- Message details (author, content, role, and so on) are passed to the UI handler.
- The handler identifies the correct chat instance, converts the message to a UI element, and renders it in real time.
This flow keeps the chat interface responsive and aligned with incoming webhook events.
Code example
In the demo, the following function adds verified messages to the chat UI. You can find this logic in app/webpage/index.py.
What's happening here
- push_message_to_chat takes the message payload from the webhook.
- role (e.g., end_user, ai_agent, human_agent) determines who the message is from.
- chat_ui.add_message() renders the text, avatar, and display name in the browser window.
- The UI auto-scrolls to the newest message so the conversation feels natural.
Design considerations
- Differentiate roles visually:
  - Show the user’s messages aligned right (e.g., blue bubble).
  - Show Ada’s messages aligned left (e.g., gray bubble, Ada avatar).
  - If you support handoffs, render human agent messages in a distinct color or include the agent’s name.
- Show metadata when useful:
  - Timestamp each message.
  - Optionally show Delivered or Seen indicators.
  - Use display_name and avatar fields from the payload for personalization.
- Handle link messages:
  - Some messages (for example, CSAT surveys or links) use content.type: "link".
  - Render these as clickable links or buttons rather than plain text.
- Graceful endings:
  - When you receive a v1.conversation.ended event, disable inputs and show a Conversation closed notice.
Rendering messages in other channels
If you’re not using a browser UI (for example, you’re integrating with Slack, Teams, or SMS), the same webhook data can be sent to those platforms instead of your UI:
- Slack: Use the chat.postMessage API to post Ada’s responses in the correct thread (matching the stored conversation_id to thread_ts mapping).
- SMS: Send Ada’s content.body to your messaging provider’s API.
If you're setting up Slack
When Ada sends a webhook containing the AI Agent’s response, your app needs to post that message back to Slack in the correct channel or thread.
Sample code
Use your stored mapping between Ada’s conversation id and Slack’s channel/thread_ts to ensure replies appear in the right thread. This keeps the full conversation—both user messages and Ada’s responses—neatly organized within the same Slack thread.
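A minimal sketch of that flow using Slack’s chat.postMessage API and the Python requests library follows. The thread_map store and function name are illustrative, not the demo’s code, and the bot token is a placeholder:

```python
import requests

# conversation_id -> (Slack channel, thread_ts); in practice this mapping is
# stored when a Slack thread is first linked to an Ada conversation.
thread_map: dict[str, tuple[str, str]] = {}

def post_ada_reply_to_slack(bot_token: str, conversation_id: str, text: str,
                            session=None) -> None:
    """Post an AI Agent reply into the Slack thread mapped to the conversation."""
    channel, thread_ts = thread_map[conversation_id]
    http = session or requests.Session()
    resp = http.post(
        "https://slack.com/api/chat.postMessage",
        headers={"Authorization": f"Bearer {bot_token}"},
        json={"channel": channel, "text": text, "thread_ts": thread_ts},
        timeout=10,
    )
    resp.raise_for_status()
```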
Step 7: End a conversation
Once a conversation has run its course, you can close it using the End a conversation endpoint. Ending a conversation signals to Ada that no further messages will be exchanged. This ensures that sessions are tracked, reported, and summarized correctly.
When to end a conversation
Not every channel will have an explicit End chat control, but many include one (for example, a Close or End conversation button). In your custom channel, you can call the End a conversation endpoint when:
- The end user clicks an End Chat or equivalent UI action.
- The system detects inactivity or a timeout.
- Your integration’s workflow determines the chat should close automatically (for example, after a successful resolution).
After a conversation ends:
- No further messages can be sent to that conversation ID.
- Ada may send a follow-up webhook, such as a CSAT (customer satisfaction) survey link, depending on the AI Agent’s configuration.
About CSAT surveys
A Customer Satisfaction (CSAT) survey lets end users rate their experience or leave comments after interacting with your AI Agent. The feedback helps you measure satisfaction, spot improvement opportunities, and track your AI Agent’s performance.
When to trigger them
A CSAT survey is typically sent right after the conversation is closed, either:
- When your integration calls the End a conversation endpoint,
- When the conversation ends automatically based on your Agent’s settings (for example, after a period of inactivity or when a workflow rule closes it),
- After a handoff completes with a human agent.
Both AI Agent CSAT and Human Agent CSAT surveys can be enabled and managed in your AI Agent settings.
The CSAT survey is sent as a webhook event (v1.conversation.message) with a link message type. This allows your integration to display the survey link in your custom channel, such as Slack or a web chat.
How to handle surveys
Listen for the CSAT webhook event just like any other message event. When you receive a link message (for example, content.type = "link"), render it appropriately in your channel UI. For example, as a clickable link or a button, depending on the channel’s capabilities.
Best practices
- Treat CSAT surveys as a special message type. Just display them, don’t respond to them.
- Render the survey link in a way that fits your channel (like a clickable button or message).
- If you don’t want Ada to send CSAT surveys, you can turn them off or customize them in your AI Agent settings.
What to include in the request
To end a conversation, all you need is the conversation ID of the active session. No request body is required: simply make a POST call to the endpoint that includes the conversation_id in the URL. This tells Ada that the conversation is complete and prevents any further messages from being added to it.
Sample request
This example shows the minimal HTTP request required to end an active conversation in Ada. Replace <handle>, <conversation_id>, and <your-api-key> with your actual values.
Code example
This example uses the standard Python requests library to end a conversation synchronously. It’s ideal for simple scripts or applications that don’t rely on asynchronous I/O.
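A minimal sketch of that call follows. The /end path segment is an assumption based on this guide; check the End a conversation endpoint reference for the exact URL.

```python
import requests

def end_conversation(base_url: str, api_key: str, conversation_id: str,
                     session=None) -> None:
    """End an active conversation. No request body is needed; the
    conversation ID is carried in the URL (path is an assumption)."""
    http = session or requests.Session()
    resp = http.post(
        f"{base_url}/api/v2/conversations/{conversation_id}/end",
        headers={"Authorization": f"Bearer {api_key}"},
        timeout=10,
    )
    resp.raise_for_status()
```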
What happens next
Once a conversation is ended:
- Ada stops processing new messages for that session.
- If configured, a CSAT survey link or closing message is sent as a v1.conversation.message webhook event.
Important notes
- A conversation cannot be reactivated after it’s ended.
- The conversation ID remains valid for querying past messages or logs.
- Always ensure your front end reflects the closed state: disable input fields or prompt users to start a new conversation.
Making your integration production-ready
You’ve already seen the note about rate limits and retries earlier in this guide. In production, make sure your retry logic is fully tested, especially for HTTP 429-type responses.
Even with well-formed requests, things can still go wrong. Network issues or invalid payloads can cause occasional hiccups. Here’s how to make your integration resilient when those things happen.
Error handling
The Conversations API uses standard HTTP conventions for reporting errors. Here are a few best practices for production:
- Add retries with backoff: Retry failed requests after a short delay, increasing the delay each time.
- Handle rate limits: When you receive 429 Too Many Requests, check the Retry-After header and wait before retrying.
- Validate before sending: Double-check request fields and types before making an API call.
- Log and monitor errors: Capture response codes and request details to help diagnose issues later.
- Be user-friendly: If something goes wrong, surface a helpful message instead of letting the app fail silently.
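The retry and rate-limit practices above can be sketched as a small helper. This is generic, not Ada-specific code, and it assumes Retry-After arrives as a number of seconds (it can also be an HTTP date):

```python
import random
import time

import requests

def request_with_retries(method: str, url: str, max_attempts: int = 5,
                         session=None, **kwargs) -> requests.Response:
    """Send a request, retrying 429s and 5xx errors with exponential backoff."""
    http = session or requests.Session()
    for attempt in range(max_attempts):
        resp = http.request(method, url, **kwargs)
        if resp.status_code != 429 and resp.status_code < 500:
            return resp  # success or a non-retryable client error
        # Prefer the server's Retry-After hint; otherwise back off
        # exponentially with jitter to avoid synchronized retries.
        retry_after = resp.headers.get("Retry-After")
        delay = float(retry_after) if retry_after else (2 ** attempt) + random.random()
        time.sleep(delay)
    return resp
```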
What happens next
At this point, your integration should be ready for production use — it can create conversations, send and receive messages, and handle webhooks reliably. From here, you can:
- Experiment with additional automation, logging, or analytics for your custom channel.
- Explore Channels, Conversations, and Webhooks in the sidebar for complete endpoint details.