Developer guide

This guide will take you on a hands-on journey through Ada’s Conversations API, showing you exactly how to build, run, and understand a custom channel integration from start to finish.

If you’ve already reviewed the Getting started guide, you’ll notice this goes much deeper. Here, we’ll not only show you what to do, but explain why each step matters, what’s happening under the hood, and how to extend it into production.

Some examples in this guide are taken from our official Conversations API demo repository, while others were created to illustrate alternative approaches.

Ada’s APIs include rate limits to ensure consistent performance and reliability. You’re unlikely to hit these during normal development, but it’s good practice to handle HTTP 429 Too Many Requests responses and implement retry logic.

Setting up

Before we dive into the code, let’s set up everything you need to run our demo locally and connect your environment to Ada’s Conversations API. This ensures your local app can authenticate with Ada, create conversations, and receive webhook events in real time, just like a production integration would.

1

Obtain an Ada API key

If you don’t already have one, generate a new API key in the Ada Dashboard.
This key lets your integration securely communicate with Ada’s Conversations API.

2

Clone the demo repo

Our demo repository contains a working example of how to connect to the Conversations API, send and receive messages, and handle webhook events. You’ll use it both as a reference and a sandbox for testing your integration.

  • Run these commands in your Terminal:

    bash
    $ git clone https://github.com/AdaSupport/ada-conversations-api-demo.git
    $ cd ada-conversations-api-demo

    This will:

    • Download the full demo project to your local machine.
    • Change into the project directory so you can start working inside it.

Once cloned, take a moment to look around the folder structure. The key files and directories you’ll work with are:

├── app/
│ ├── ada_api.py # Handles API calls to Ada (create conversation, send messages, end chat)
│ ├── webhooks.py # Defines webhook endpoints and verification logic
│ ├── webpage/index.py # Simple local chat UI for testing messages
│ └── server/ # Entry point for running the FastAPI server
├── .env.example # Template for your environment variables
├── requirements.txt # Python dependencies (aiohttp, FastAPI, svix, etc.)
└── README.md # Short overview of the demo

This demo is built around a simple event flow:

  1. A user sends a message through the local UI or via API call.
  2. The backend (in ada_api.py) forwards that message to Ada’s Conversations API.
  3. Ada responds asynchronously via a webhook (handled in webhooks.py).
  4. The message gets rendered back into the local chat interface or forwarded to another platform like Slack.

This repo uses FastAPI for its local web server, aiohttp for async API calls, and Svix for webhook verification. In production, you’ll likely split these pieces across your own services, but the demo keeps it all in one place for simplicity, making it perfect for learning the flow end to end.
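3

Install dependencies and run the server

Before exposing anything publicly, install the demo’s Python dependencies (listed in requirements.txt) and start the local FastAPI server so it’s listening on port 8080, the port you’ll tunnel to in the next step. The exact run command may differ, so check the repo’s README; a typical setup, assuming a virtual environment and an `app.server:app` entry point, looks like this:

```bash
# Create an isolated environment and install the demo's dependencies
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt

# Start the FastAPI server on port 8080. The entry point lives under
# app/server/; the exact module path is an assumption -- see the README.
uvicorn app.server:app --port 8080
```

Leave this server running in its own terminal window while you work through the rest of this guide.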

4

Create a tunnel for Ada's webhooks

Important: This step applies to local testing only. In production, your webhook endpoint should be hosted on a publicly accessible, HTTPS-secured domain that Ada can reach directly.

Ada sends all AI Agent responses and conversation updates as webhooks: HTTP callbacks that your integration needs to receive in real time.

Because your computer isn’t publicly accessible during local development, Ada can’t reach localhost directly. To bridge that gap, you can use a tunneling tool. One option is ngrok, which creates a secure, temporary public URL that forwards requests to your local server.

There are other ways to achieve the same result, such as Cloudflare Tunnels, Localtunnel, or even network port forwarding. For simplicity, let’s assume you’re using ngrok.

Run this in a new terminal window:

bash
$ ngrok http 8080

This creates a secure public URL that forwards requests to your local FastAPI server running on port 8080. If everything is working as expected, you will see a newly created forwarding URL, for example: https://1234-56-78-90.ngrok-free.app.

That’s the public address Ada will use to send webhook requests to your local app.

Keep this terminal window open while you’re testing. If you close it, your tunnel (and webhook connectivity) will stop.

If you restart ngrok, it will generate a new URL. You’ll need to update the endpoint in the Ada Dashboard whenever that happens.

Later in this guide, you’ll use this forwarding address when you create a webhook endpoint in the Ada Dashboard (under Platform > Webhooks > Endpoints). For example, your full webhook endpoint URL might look like this: https://1234-56-78-90.ngrok-free.app/webhooks/message.

5

Configure Conversations API webhooks

Ada delivers AI Agent responses and conversation events to your integration through webhooks. These are secure HTTP callbacks that notify your app when something happens in Ada, such as a new message or a conversation ending.

To protect your integration, you need a way to ensure those webhook requests really come from Ada and not from another source trying to mimic them. Ada provides a Signing Secret for each webhook endpoint, which your service can use to verify the authenticity of every incoming request.

  1. In the Ada Dashboard, go to Platform > Webhooks > Endpoints.
  2. Create a new endpoint, or open an existing one if you already have one configured.
  • Make sure the endpoint points to either the temporary public URL that forwards requests to your local server (via ngrok if you’re testing locally) or to your production webhook URL that matches the route your server listens on.

    For example: https://1234-56-78-90.ngrok-free.app/webhooks/message.

    This URL corresponds to the /webhooks/message route defined in your demo app (app/webhooks.py).

  • On the Endpoints tab, under Subscribe to events, make sure to include the Conversations API events. You’ll find them under the v1 > conversation category:

    • v1.conversation.message: Triggers when a message is sent or received.
    • v1.conversation.created: Triggers when a new conversation starts.
    • v1.conversation.ended: Triggers when a conversation closes.

    These events ensure your integration receives real-time updates for every conversation and message handled by Ada.

  3. In the right-side navigation panel, locate the Signing Secret.
  4. Reveal and copy the value. You will need it in the next step when updating your .env file: WEBHOOK_SECRET=<your-signing-secret>. Your integration will use this secret to verify that all incoming webhook requests originate from Ada.

6

Set up your .env file

The .env file is where you’ll store the core configuration values that connect your local demo to your Ada instance. Think of it as the bridge between your code and the Ada platform: it tells your local environment which AI Agent to talk to, how to authenticate, what channel to use, and how to verify incoming webhook requests.

In the demo repository, you’ll find a template called .env.example. This file includes all the configuration keys you’ll need. Start by duplicating it so you can edit your own version:

bash
$ cp .env.example .env

Now open your newly created .env file in your editor. It will look something like this:

bash
ADA_BASE_URL=
ADA_API_KEY=
ADA_CHANNEL_ID=
WEBHOOK_SECRET=

Here’s what each of these values means and how to update them:

  • ADA_BASE_URL: The base URL for your Ada instance consisting of your agent’s handle and your organization’s domain. For example: ADA_BASE_URL=https://example.ada.support.
  • ADA_API_KEY: The Ada API key you generated in your Ada Dashboard under Platform > APIs. This authenticates every API request to Ada. Treat it like a password: never commit it to Git.
  • ADA_CHANNEL_ID: The unique ID of the custom channel your app will use to create and manage conversations. You don’t have this yet. We’ll create the channel in Step 1, then come back to this file and fill it in. For now, leave it blank: ADA_CHANNEL_ID=.
  • WEBHOOK_SECRET: The signing secret Ada uses to verify webhook requests. Use the value obtained from the webhook endpoint you created in the previous step in the Ada Dashboard (Platform > Webhooks > Endpoints > Signing Secret).

After you’ve filled in the available values, your .env file should look something like this:

bash
ADA_BASE_URL=https://example.ada.support
ADA_API_KEY=abcd1234efgh5678ijkl
ADA_CHANNEL_ID=
WEBHOOK_SECRET=whsec_xxxxxxxxxxxxxxxxxxxxx

Step 1: Create a custom channel

A custom channel is the connection point between Ada and your integration. It tells Ada where conversations will take place and how messages should flow between your AI Agent and your external system. In other words, you’re creating a bridge that lets Ada talk to your app.

Create your custom channel using the Create a new channel endpoint.

You only need to create a custom channel once per integration, not every time your service runs. Typically, this happens during initial setup or deployment, when you’re wiring up Ada to a new platform or system.

If you’re connecting Ada to multiple external systems (for example, Slack, SMS, or a web chat widget), you’ll create a separate channel for each integration.

Once a channel is created, you’ll reuse its unique channel ID in all future API calls that reference this integration. For example, when starting conversations or sending messages.

To create a channel, call the Create a new channel endpoint and include the applicable fields in the request. At a minimum, you’ll need to provide a name, description, and modality for your new channel.

This example shows the minimal HTTP request needed to create a new custom channel in Ada using the Conversations API.

Replace <handle> with your Ada handle (the same subdomain defined in your .env file). In the Authorization header, include the API key you created earlier to authenticate your request.

Keep the channel’s name and description clear and unique. They’ll help you identify this integration later, especially if you manage multiple channels.
http
POST https://<handle>.ada.support/api/v2/channels
Authorization: Bearer <your-api-key>
Content-Type: application/json

{
  "name": "My Custom Channel",
  "description": "A custom messaging channel for my AI Agent",
  "modality": "messaging",
  "metadata": {
    "webpage_host": "https://lovelace-chat.com"
  }
}

If the call is successful, Ada returns a JSON response with the details of your new channel. Here’s the field that matters most for this step:

json
{
  ...
  "id": "68f25a35072ad87710c1d96b",
  ...
}

Example abbreviated for clarity. See the full response here.

This id is your channel ID. You’ll need it for all future Conversations API calls. Once added to your configuration, your app will automatically use this channel for every conversation and message it creates.

Now that you’ve created your custom channel and received its ID, you can complete your .env configuration.

Open the file and update the ADA_CHANNEL_ID value using the id returned in Ada’s response. For example: ADA_CHANNEL_ID=68f25a35072ad87710c1d96b.

Once saved, your app will automatically use this channel for all conversations and messages it creates.

Your final .env file should now include all the required values:

bash
ADA_BASE_URL=https://example.ada.support
ADA_API_KEY=abcd1234efgh5678ijkl
ADA_CHANNEL_ID=68f25a35072ad87710c1d96b
WEBHOOK_SECRET=whsec_xxxxxxxxxxxxxxxxxxxxx

Step 2: Start a conversation

Before you can send or receive messages, you need a conversation in Ada. A conversation acts as the container that holds context — who the user is, what channel they’re on, their current state, and any relevant metadata.

Start a conversation using the Create a new conversation endpoint. When you do that, Ada automatically sends a Greeting message to the user. This initial message is triggered by Ada’s platform APIs each time a conversation is created, even if no user message has been received yet.

Start a new conversation when:

  • You receive the first message from a user on your custom channel and no active conversation exists yet.
  • You want to proactively start a thread or outreach message on behalf of a user.

Each new conversation in Ada establishes the context for all messages that follow. After a conversation starts, all messages from the user, Ada, or a human agent are linked to that conversation ID.

For this step, you’ll only need to include the following fields when creating a conversation:

  • channel_id (required): The custom channel where this conversation will live (from Step 1).
  • end_user_id (optional): The ID of a user that already exists in Ada.
    • If you don’t pass it, Ada will create a new end user and return the newly created end_user_id in the response.
    • If you do pass it, Ada will associate the conversation with that user.

For details on all supported parameters, see the endpoint request fields.

This example shows the minimal HTTP request needed to start a new conversation. It includes only the required fields and uses standard authentication headers.

Replace <handle>, <your-api-key>, <your-channel-id>, and <optional-user-id> with your actual values. The end_user_id field is optional. If omitted, Ada automatically creates a new end user and returns the ID in the response.

http
POST https://<handle>.ada.support/api/v2/conversations
Authorization: Bearer <your-api-key>
Content-Type: application/json

{
  "channel_id": "<your-channel-id>",
  "end_user_id": "<optional-user-id>"
}

This example uses the standard Python requests library to create a conversation synchronously. It’s the simplest approach: ideal for quick tests, scripts, or integrations where you don’t need asynchronous I/O.

The call includes the required channel_id and, optionally, an end_user_id if you want to associate the conversation with a specific user.

python
import requests

request_body = {"channel_id": ADA_CHANNEL_ID}
if user_id:
    request_body["end_user_id"] = user_id

response = requests.post(
    f"{ADA_BASE_URL}/api/v2/conversations",
    headers={"Authorization": f"Bearer {ADA_API_KEY}"},
    json=request_body,
)

If your request succeeds, Ada responds with a JSON payload that includes details about the new conversation.

Here are the key fields you’ll need to keep track of:

json
{
  "id": "5df263b7db5a7e6ea03fae9b",
  "end_user_id": "5df263b7db5a7e6ea03fae9c",
  ...
}

Example abbreviated for clarity. See the full response here.

  • id: The conversation ID. You’ll use this to send messages and end the conversation.
  • end_user_id: Identifies the end user participating in the conversation. You’ll need this when sending messages as that user.

Keep both values stored in your system. They’re required for all subsequent API calls, such as sending user messages or ending the conversation.

Step 3: Send messages

Once a conversation is active, your app needs to send user messages to Ada’s AI Agent. This is how you transmit what the user types in your channel so Ada can process it and respond. Each message is tied to a specific conversation (identified by conversation_id) and a specific user (end_user_id).

You can send end users’ messages using the Create a new message endpoint.

Send a message whenever your integration receives input from the user. For example, a chat message, a Slack thread reply, or a mobile text.

Each message you send must belong to an active conversation.

You’ll reference the conversation_id from Step 2 and the end_user_id associated with that conversation.

If the conversation has already ended, start a new one before sending more messages.

If your integration supports multiple users or threads, keep track of the conversation_id and end_user_id together so you can easily route messages to the right Ada conversation later.

Each message must tell Ada who sent it and what was said. At minimum, include the following fields in your request body:

  • conversation_id: The ID of the conversation.
  • author: The person sending the message, with their role set to end_user.
  • content: The message body.

Optional fields such as display_name, avatar, or additional metadata are described in the endpoint request fields.

This example shows the minimal HTTP request required to send a user message to Ada within an active conversation and trigger a response from the AI Agent. Replace <handle>, <conversation_id>, <your-api-key>, and <end_user_id> with your actual values.

http
POST https://<handle>.ada.support/api/v2/conversations/<conversation_id>/messages
Authorization: Bearer <your-api-key>
Content-Type: application/json

{
  "author": {
    "role": "end_user",
    "id": "<end_user_id>"
  },
  "content": {
    "type": "text",
    "body": "I need help with my order."
  }
}

This example uses the standard Python requests library and is ideal for quick tests or simple applications where you don’t need concurrency.

python
import requests

response = requests.post(
    f"{ADA_BASE_URL}/api/v2/conversations/{conversation_id}/messages",
    headers={"Authorization": f"Bearer {ADA_API_KEY}"},
    json={
        "author": {
            "role": "end_user",
            "display_name": display_name,
            "id": user_id,
            "avatar": avatar,
        },
        "content": {"type": "text", "body": text},
    },
)

response.raise_for_status()

When your request succeeds, Ada returns a JSON response with details about the new message. Here are the fields that matter most for this step:

json
{
  "id": "6789abcd1234ef567890",
  "conversation_id": "5df263b7db5a7e6ea03fae9b",
  ...
}

Example abbreviated for clarity. See the full response here.

  • conversation_id: Identifies the conversation that this message belongs to. You’ll use it for all follow-up actions, such as sending additional messages or ending the conversation.
  • id: Identifies this specific message. You can use it for logging or debugging if you need to trace individual messages.

Once the message is processed, Ada sends the AI Agent’s response back to your integration as a v1.conversation.message webhook event.

Your integration should listen for that event (using your webhook endpoint) and display the AI Agent’s reply in your channel or UI.

Step 4: Listen for responses via webhooks

Once your app starts sending messages to Ada, you need a way to listen for Ada’s responses. Ada delivers AI Agent messages, conversation updates, and events as webhooks. A webhook is a secure HTTP callback that notifies your integration in real time when something happens.

When Ada sends a webhook, it’s Ada’s way of saying: Here’s something new that happened — a message, an event, or an update!

Your integration listens for these events, verifies that they came from Ada, and responds accordingly, for example, by displaying the AI Agent’s reply in your channel or forwarding it to another system like Slack.

To receive these events, your app needs to expose a publicly accessible endpoint that Ada can POST webhook payloads to.

  • In local development, this endpoint is usually made accessible through a tunneling tool such as ngrok (for example, https://1234-56-78-90.ngrok-free.app/webhooks/message).
  • In production, it should live on a public, HTTPS-secured domain (for example, https://api.yourapp.com/webhooks/message).

Ada triggers webhooks whenever events occur in a conversation, such as:

  • When the AI Agent sends a message (v1.conversation.message)
  • When a conversation starts (v1.conversation.created)
  • When a conversation ends (v1.conversation.ended)

Your app can subscribe to these events in the Webhooks > Endpoints section of the Ada Dashboard. Once subscribed, Ada will POST event payloads to your configured webhook URL.

Each webhook event includes a JSON payload that represents the event type and associated data.

json
{
  "type": "v1.conversation.message",
  "timestamp": "2025-10-21T12:15:00+00:00",
  "data": {
    "conversation_id": "5df263b7db5a7e6ea03fae9b",
    "author": {
      "role": "ai_agent",
      "display_name": "Ada"
    },
    "content": {
      "type": "text",
      "body": "Sure! You can check your order by clicking the link below."
    }
  }
}

When your app receives this webhook, it can take the message text (content.body) and display it to the user, send it to a connected channel (like Slack), or log it for analytics.

To receive and process webhooks, your app must define an endpoint that matches the URL configured in the Ada Dashboard. This route handles incoming POST requests, verifies that they’re from Ada, and parses the message data.

  • If you want to use a language-specific package, you can use the package provided by Svix as documented here.

  • Alternatively, you can verify webhooks without the Svix library by following this manual verification guide.

For more information about how Ada uses webhooks, see this topic.
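For reference, here is what manual verification looks like in Python, assuming Ada follows Svix’s published signing scheme: the string `<svix-id>.<svix-timestamp>.<payload>` is signed with HMAC-SHA256, keyed by the base64-decoded part of the secret after the `whsec_` prefix. This is a sketch based on that scheme, not code from the demo repo.

```python
import base64
import hashlib
import hmac

def verify_svix_signature(secret: str, payload: bytes, headers: dict) -> bool:
    """Verify a Svix-signed webhook by hand (no svix package).

    The svix-signature header may contain several space-separated
    "v1,<base64 signature>" entries; the request is valid if any matches.
    """
    msg_id = headers["svix-id"]
    timestamp = headers["svix-timestamp"]
    key = base64.b64decode(secret.removeprefix("whsec_"))
    signed_content = f"{msg_id}.{timestamp}.".encode() + payload
    expected = base64.b64encode(
        hmac.new(key, signed_content, hashlib.sha256).digest()
    ).decode()
    for candidate in headers.get("svix-signature", "").split():
        version, _, signature = candidate.partition(",")
        if version == "v1" and hmac.compare_digest(signature, expected):
            return True
    return False
```

A production implementation should also reject requests whose svix-timestamp is too old, to guard against replay attacks.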

The following example shows a sample webhook handler from our demo repository, located in app/webhooks.py.

python
import json

from fastapi import Request, HTTPException
from svix import Webhook, WebhookVerificationError

@app.post("/webhooks/message")
async def handle_message(request: Request):
    headers = request.headers
    payload = await request.body()

    try:
        # Verify that the request really came from Ada using your webhook signing secret
        webhook = Webhook(WEBHOOK_SECRET)
        webhook.verify(payload, dict(headers))
    except WebhookVerificationError:
        raise HTTPException(status_code=400, detail="Invalid signature")

    msg = json.loads(payload)
    process_message(msg)
    return {"status": "ok"}

Ada uses Svix to sign all webhook requests. You must verify the signature using your WEBHOOK_SECRET (from the Ada Dashboard) before trusting the data.

Step 5: Buffer and order messages

Ada delivers each message as its own webhook. Network jitter and parallel processing can prevent those webhooks from arriving in chronological order. The fix is simple: buffer briefly, sort by timestamp, then render.

In a production environment, you’ll likely have multiple web servers. Use a shared store/queue so ordering works across instances.

In our demo repository, incoming webhook messages are received, sorted, and rendered in order to keep the chat experience real-time and conversational.

  1. Webhooks are received one by one.
  2. Each message is stored in a short-lived, in-memory queue per conversation.
  3. A brief delay (e.g., 1–2 seconds) allows messages to accumulate into a micro-batch.
  4. The batch is then sorted by timestamp.
  5. Messages are displayed in the UI in the correct order.

This pattern preserves the conversational flow while still feeling real-time.

The demo implements a simple in-memory batcher using asyncio. You can find this logic in app/webhooks.py.

python
# app/webhooks.py (excerpt)

_global_msg_queue = []
_global_batch_task = None
_global_batch_lock = asyncio.Lock()

async def push_message_to_queue(msg):
    """Batch messages in a queue to be processed after a delay to account for unordered messages"""
    global _global_batch_lock

    async with _global_batch_lock:
        global _global_msg_queue, _global_batch_task
        _global_msg_queue.append(msg)

        if _global_batch_task is not None:
            _global_batch_task.cancel()

        _global_batch_task = asyncio.create_task(batch_process_messages())

async def batch_process_messages():
    """Process all messages in the queue after a delay"""
    global _global_batch_lock

    await asyncio.sleep(2)

    async with _global_batch_lock:
        global _global_msg_queue, _global_batch_task
        messages = _global_msg_queue
        _global_msg_queue = []
        _global_batch_task = None

    messages.sort(key=lambda m: m.timestamp)
    for msg in messages:
        push_message_to_chat(
            msg.data.conversation_id,
            msg.data.author.id,
            msg.data.author.role,
            msg.data.content,
            msg.data.author.display_name,
            msg.data.author.avatar,
        )

  • push_message_to_queue() collects incoming webhook messages in a global queue.
  • When new messages arrive, any pending batch task is canceled and rescheduled to include the latest messages.
  • After a short delay, batch_process_messages() runs, sorting messages by timestamp and sending them to the chat UI for rendering.
  • This lightweight batching logic ensures that even if webhook events arrive out of order, messages are displayed in sequence for a smooth, real-time conversation experience.

When flushing a batch to Slack:

  • Use the channel and thread identifier (thread_ts) associated with the Ada conversation id.
  • Post in order after sorting.
  • For user/bot rendering, map Ada’s author.role to Slack’s display (e.g., different user name/icon for ai_agent vs. end_user).

Step 6: Render messages in your UI

Now that your integration can send and receive messages, the next step is to display the conversation in your UI or channel so that users see both their own messages and Ada’s responses in real time.

Your integration’s job here is to take the incoming messages (from webhooks) and render them in order, with clear distinction between:

  • End-user messages (what the user says)
  • AI Agent messages (Ada’s responses)
  • Human agent messages (if your system supports Handoffs)

In our demo repository, incoming webhook messages are processed and displayed in the local chat UI:

  1. The webhook payload is received and verified.
  2. Message details (author, content, role, and so on) are passed to the UI handler.
  3. The handler identifies the correct chat instance, converts the message to a UI element, and renders it in real time.

This flow keeps the chat interface responsive and aligned with incoming webhook events.

In the demo, the following function adds verified messages to the chat UI. You can find this logic in app/webpage/index.py.

python
def push_message_to_chat(
    conversation_id: str,
    user_id: str | None,
    role: str,
    content: MessageContent,
    display_name: str | None = None,
    avatar: str | None = None,
):
    """Convert a message from Ada's webhook into one displayed in the chat UI."""

    chat_ui = get_chat_ui(conversation_id)
    if not chat_ui or user_id == chat_ui.active_end_user_id:
        return

    chat_ui.add_message(user_id, role, content, display_name, avatar)

  • push_message_to_chat takes the message payload from the webhook.
  • role (e.g., end_user, ai_agent, human_agent) determines who the message is from.
  • chat_ui.add_message() renders the text, avatar, and display name in the browser window.
  • The UI auto-scrolls to the newest message so the conversation feels natural.
  • Differentiate roles visually:
    • Show the user’s messages aligned right (e.g., blue bubble).
    • Show Ada’s messages aligned left (e.g., gray bubble, Ada avatar).
    • If you support handoffs, render human agent messages in a distinct color or include the agent’s name.
  • Show metadata when useful:
    • Timestamp each message.
    • Optionally show Delivered or Seen indicators.
    • Use display_name and avatar fields from the payload for personalization.
  • Handle link messages:
    • Some messages (for example, CSAT surveys or links) use content.type: "link".
    • Render these as clickable links or buttons rather than plain text.
  • Graceful endings:
    • When you receive a v1.conversation.ended event, disable inputs and show a Conversation closed notice.

If you’re not using a browser UI (for example, you’re integrating with Slack, Teams, or SMS), the same webhook data can be sent to those platforms instead of your UI:

  • Slack: Use the chat.postMessage API to post Ada’s responses in the correct thread (matching the stored conversation_id to thread_ts mapping).
  • SMS: Send Ada’s content.body to your messaging provider’s API.

When Ada sends a webhook containing the AI Agent’s response, your app needs to post that message back to Slack in the correct channel or thread.

python
response = slack_client.chat_postMessage(
    channel=slack_channel_id,
    text=ada_message_text,
    thread_ts=slack_thread_ts,
)

Use your stored mapping between Ada’s conversation id and Slack’s channel/thread_ts to ensure replies appear in the right thread. This keeps the full conversation—both user messages and Ada’s responses—neatly organized within the same Slack thread.

Step 7: End a conversation

Once a conversation has run its course, you can close it using the End a conversation endpoint. Ending a conversation signals to Ada that no further messages will be exchanged. This ensures that sessions are tracked, reported, and summarized correctly.

Not every channel will have an explicit End chat control, but many include one (for example, a Close or End conversation button). In your custom channel, you can call the End a conversation endpoint when:

  • The end user clicks an End Chat or equivalent UI action.
  • The system detects inactivity or a timeout.
  • Your integration’s workflow determines the chat should close automatically (for example, after a successful resolution).

After a conversation ends:

  • No further messages can be sent to that conversation ID.
  • Ada may send a follow-up webhook, such as a CSAT (customer satisfaction) survey link, depending on the AI Agent’s configuration.

A Customer Satisfaction (CSAT) survey lets end users rate their experience or leave comments after interacting with your AI Agent. The feedback helps you measure satisfaction, spot improvement opportunities, and track your AI Agent’s performance.

A CSAT survey is typically sent right after the conversation is closed, either:

  • When your integration calls the End a conversation endpoint,
  • When the conversation ends automatically based on your Agent’s settings (for example, after a period of inactivity or when a workflow rule closes it),
  • After a handoff completes with a human agent.

Both AI Agent CSAT and Human Agent CSAT surveys can be enabled and managed in your AI Agent settings.

The CSAT survey is sent as a webhook event (v1.conversation.message) with a link message type. This allows your integration to display the survey link in your custom channel, such as Slack or a web chat.

Listen for the CSAT webhook event just like any other message event. When you receive a link message (for example, content.type = "link"), render it appropriately in your channel UI. For example, as a clickable link or a button, depending on the channel’s capabilities.

  • Treat CSAT surveys as a special message type: display them, but don’t respond to them.
  • Render the survey link in a way that fits your channel (like a clickable button or message).
  • If you don’t want Ada to send CSAT surveys, you can turn them off or customize them in your AI Agent settings.
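As a sketch, a renderer for a browser-based channel might branch on content.type like this. The `url` field name inside link content is an assumption; inspect the webhook payloads you actually receive.

```python
import html

def render_content(content: dict) -> str:
    """Turn Ada message content into an HTML snippet for a web chat UI.

    Handles the two content types discussed in this guide: "text" and
    "link" (used for CSAT surveys, among other things).
    """
    if content.get("type") == "link":
        # Render links (like CSAT surveys) as clickable anchors, not plain text.
        url = html.escape(content.get("url", ""), quote=True)
        label = html.escape(content.get("body") or "Open link")
        return f'<a href="{url}" target="_blank" rel="noopener">{label}</a>'
    # Escape plain text so message bodies can't inject markup into your UI.
    return html.escape(content.get("body", ""))
```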

To end a conversation, all you need is the conversation ID of the active session. No request body is required: simply make a POST call to the endpoint that includes the conversation_id in the URL. This tells Ada that the conversation is complete and prevents any further messages from being added to it.

This example shows the minimal HTTP request required to end an active conversation in Ada. Replace <handle>, <conversation_id>, and <your-api-key> with your actual values.

http
POST https://<handle>.ada.support/api/v2/conversations/<conversation_id>/end
Authorization: Bearer <your-api-key>

This example uses the standard Python requests library to end a conversation synchronously. It’s ideal for simple scripts or applications that don’t rely on asynchronous I/O.

python
import requests

response = requests.post(
    f"{ADA_BASE_URL}/api/v2/conversations/{conversation_id}/end",
    headers={"Authorization": f"Bearer {ADA_API_KEY}"},
)
response.raise_for_status()

Once a conversation is ended:

  • Ada stops processing new messages for that session.
  • If configured, a CSAT survey link or closing message is sent as a v1.conversation.message webhook event.
  • A conversation cannot be reactivated after it’s ended.
  • The conversation ID remains valid for querying past messages or logs.
  • Always ensure your front end reflects the closed state: disable input fields or prompt users to start a new conversation.

Making your integration production-ready

You’ve already seen the note about rate limits and retries earlier in this guide. In production, make sure your retry logic is fully tested, especially for HTTP 429-type responses.

Even with well-formed requests, things can still go wrong. Network issues or invalid payloads can cause occasional hiccups. Here’s how to make your integration resilient when those things happen.

The Conversations API uses standard HTTP conventions for reporting errors. Here are a few best practices for production:

  • Add retries with backoff: Retry failed requests after a short delay, increasing the delay each time.
  • Handle rate limits: When you receive 429 Too Many Requests, check the Retry-After header and wait before retrying.
  • Validate before sending: Double-check request fields and types before making an API call.
  • Log and monitor errors: Capture response codes and request details to help diagnose issues later.
  • Be user-friendly: If something goes wrong, surface a helpful message instead of letting the app fail silently.
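The retry and rate-limit practices above can be sketched in a small wrapper. This is illustrative, not Ada-specific: `post_with_retries` is a hypothetical helper, and the attempt counts and delays should be tuned for your workload.

```python
import time

import requests

def post_with_retries(url: str, *, headers: dict, json: dict,
                      max_attempts: int = 5) -> requests.Response:
    """POST with exponential backoff, honoring Retry-After on HTTP 429."""
    delay = 1.0
    response = None
    for attempt in range(1, max_attempts + 1):
        response = requests.post(url, headers=headers, json=json, timeout=10)
        if response.status_code == 429:
            # Respect the server's hint when present; otherwise back off.
            retry_after = response.headers.get("Retry-After")
            time.sleep(float(retry_after) if retry_after else delay)
        elif response.status_code >= 500 and attempt < max_attempts:
            # Transient server error: wait and try again.
            time.sleep(delay)
        else:
            return response
        delay *= 2  # exponential backoff between attempts
    return response
```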

At this point, your integration should be ready for production use — it can create conversations, send and receive messages, and handle webhooks reliably. From here, you can:

  • Experiment with additional automation, logging, or analytics for your custom channel.
  • Explore Channels, Conversations, and Webhooks in the sidebar for complete endpoint details.