Creating a Custom Handoff

This guide walks through building a custom handoff integration end to end.

A custom handoff integration allows you to connect Ada to your own support platform, so that when the AI Agent determines a conversation needs a human agent, your system receives the handoff and can manage the live agent experience end to end.

Before diving in, we recommend reviewing the Getting started guide. In this guide, you’ll learn how to detect handoffs, relay messages between the end user and a human agent, handle file attachments, and transfer the end user back to the AI Agent when the handoff is complete.

Some examples in this guide are taken from Ada’s official Handoffs API demo repository, while others were created to illustrate alternative approaches.

Ada’s APIs include rate limits to ensure consistent performance and reliability. You’re unlikely to hit these during normal development, but it’s good practice to handle HTTP 429 Too Many Requests responses and implement retry logic.

Setting up

Before diving into the code, set up everything you need to run our demo locally and connect your environment to Ada’s Conversations API. This ensures your local app can authenticate with Ada, detect handoffs, relay messages, and receive webhook events in real time.

1. Obtain an Ada API key

If you don’t already have one, generate a new API key in the Ada Dashboard. This key lets your integration securely communicate with Ada’s Conversations API.

2. Clone the demo repo

Our demo repository contains a working example of how to detect handoffs, relay messages between end users and human agents, and handle webhook events. You’ll use it both as a reference and a sandbox for testing your integration.

  • Run these commands in your terminal:

    ```bash
    $ git clone https://github.com/AdaSupport/ada-handoffs-api-demo.git
    $ cd ada-handoffs-api-demo
    ```

    This will:

    • Download the full demo project to your local machine.
    • Change into the project directory so you can start working inside it.
4. Create a tunnel for Ada's webhooks

Important: This step applies to local testing only. In production, your webhook endpoint should be hosted on a publicly accessible, HTTPS-secured domain that Ada can reach directly.

Ada sends all conversation updates, including handoff events and messages, as webhooks that your integration needs to receive in real time.

Because your computer isn’t publicly accessible during local development, Ada can’t reach localhost directly. To bridge that gap, you can use a tunneling tool. One option is ngrok, which creates a secure, temporary public URL that forwards requests to your local server.

There are other ways to achieve the same result, such as Cloudflare Tunnels, Localtunnel, or even network port forwarding. For simplicity, let’s assume you’re using ngrok.

Run this in a new terminal window:

```bash
$ ngrok http 8090
```

This creates a secure public URL that forwards requests to your local server running on port 8090. If everything is working as expected, you will see a newly created forwarding URL, for example: https://1234-56-78-90.ngrok-free.app.

That’s the public address Ada will use to send webhook requests to your local app.

Keep this terminal window open while you’re testing. If you close it, your tunnel (and webhook connectivity) will stop.

If you restart ngrok, it will generate a new URL. You’ll need to update the endpoint in the Ada Dashboard whenever that happens.

Later in this guide, you’ll use this forwarding address when you create a webhook endpoint in the Ada Dashboard (under Config > PLATFORM > Webhooks). For example, your full webhook endpoint URL might look like this: https://1234-56-78-90.ngrok-free.app/webhooks/message.
5. Configure Conversations API webhooks

Ada delivers conversation events, including handoff notifications and messages, to your integration through webhooks. These are secure HTTP callbacks that notify your app when something happens in Ada.

To protect your integration, you need a way to ensure those webhook requests really come from Ada. Ada provides a Signing Secret for each webhook endpoint, which your service can use to verify the authenticity of every incoming request.

  1. In the Ada Dashboard, go to Config > PLATFORM > Webhooks > Endpoints.
  2. Create a new endpoint, or open an existing one if you already have one configured.
  • Make sure the endpoint points either to the temporary public URL that forwards requests to your local server (via ngrok if you’re testing locally) or to your production webhook URL, and that the path matches the route your server listens on.

    For example: https://1234-56-78-90.ngrok-free.app/webhooks/message.

  • On the Endpoints tab, under Subscribe to events, make sure to include the Conversations API events. You’ll find them under the v1 > conversation category:

    • v1.conversation.message: Triggers when a message is sent or received.
    • v1.conversation.handoff.ended: Triggers when a handoff ends and control returns to the AI Agent.

    These events ensure your integration receives real-time updates for every handoff and message.

  • Typically, a handoff integration only processes the Conversations API events that occur after the AI Agent has handed a conversation off to it. To do this, verify that the handoff_integration value in each event matches the Handoff Integration Identifier configured in your start handoff trigger.
  • If you would like your webhook to receive events only for your specific handoff, on the Endpoint configuration tab, under Channels, add a new value that precisely matches the Handoff Integration Identifier you will later configure in your start handoff trigger.
  3. In the right-side navigation panel, locate the Signing Secret.
  4. Reveal and copy the value. You will need it in the next step when updating your .env file: WEBHOOK_SECRET=<your-signing-secret>. Your integration will use this secret to verify that all incoming webhook requests originate from Ada.
6. Set up your .env file

The .env file is where you’ll store the core configuration values that connect your local demo to your Ada instance. It tells your local environment which AI Agent to talk to, how to authenticate, and how to verify incoming webhook requests.

In the demo repository, you’ll find a template called .env.example. Start by duplicating it so you can edit your own version:

```bash
$ cp .env.example .env
```

Now open your newly created .env file in your editor. It will look something like this:

```bash
ADA_BASE_URL=
ADA_API_KEY=
WEBHOOK_SECRET=
```

Here’s what each of these values means and how to update them:

  • ADA_BASE_URL: The base URL for the API of your Ada instance consisting of your agent’s handle and your organization’s domain. For example: ADA_BASE_URL=https://example.ada.support/api.
  • ADA_API_KEY: The Ada API key you generated in your Ada Dashboard under Config > PLATFORM > API keys. This authenticates every API request to Ada. Treat it like a password: never commit it to Git.
  • WEBHOOK_SECRET: The signing secret Ada uses to verify webhook requests. Use the value obtained from the webhook endpoint you created in the previous step in the Ada Dashboard (Config > PLATFORM > Webhooks > Endpoints > Signing Secret).

After you’ve filled in the available values, your .env file should look something like this:

```bash
ADA_BASE_URL=https://example.ada.support/api
ADA_API_KEY=abcd1234efgh5678ijkl
WEBHOOK_SECRET=whsec_xxxxxxxxxxxxxxxxxxxxx
```
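Once the values are filled in, your application needs to read them at startup. As an illustrative sketch (the `load_config` helper below is hypothetical, not part of the demo repo), you can fail fast when a required variable is missing rather than discover it mid-request:

```python
import os

REQUIRED_VARS = ["ADA_BASE_URL", "ADA_API_KEY", "WEBHOOK_SECRET"]

def load_config():
    """Read required settings from the environment, failing fast if any are missing."""
    missing = [name for name in REQUIRED_VARS if not os.environ.get(name)]
    if missing:
        raise RuntimeError(f"Missing required environment variables: {', '.join(missing)}")
    return {name: os.environ[name] for name in REQUIRED_VARS}
```

A tool such as python-dotenv can populate os.environ from the .env file before load_config() runs.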

Step 1: Configure the handoff trigger

Before your integration can receive handoffs, you need to configure a handoff in the Ada Dashboard that triggers an HTTP request to your system. This tells Ada to notify your integration whenever the AI Agent determines a conversation should be handed off to a human agent.

Configure the HTTP Request Block inside a Handoff flow. Do not place it directly in a Playbook or Action. Starting a handoff outside of a Handoff flow bypasses conversation state management. This can cause the AI Agent to continue responding during the handoff, prevent CSAT from triggering on completion, and interfere with the handoff lifecycle.

  1. On the Ada Dashboard, go to Config > AI AGENT > Handoffs, then open the Handoffs tab.
  2. Click New Handoff.
  3. Give the handoff a descriptive name and description. The AI Agent uses the name and description to reason about when this handoff should be triggered (for example, “Live Agent Support” with a description like “Transfer the end user to a live agent for complex account issues”).

In the handoff content editor, add an HTTP Request block. This block makes an API call to your integration’s endpoint when the handoff is triggered, notifying your system that a conversation needs a human agent.

Configure the block with the following settings:

  • URL: The endpoint on your integration that receives handoff notifications. For example: https://your-server.com/webhooks/start-handoff (or your ngrok URL during local testing).
  • Method: POST
  • Headers: Include any authentication headers your integration requires (not required for the demo repo, but recommended for production; see Authenticating the handoff trigger for details).
  • Body Content: Include the conversation context your human agents need. The demo repo requires a key ada_conversation_id with the conversation_id variable as its value. We recommend passing at least the conversation_id so your integration can identify which conversation was handed off.
  • Pausing the conversation: Ensure the checkbox for pausing the conversation is enabled. This will pause the execution of the AI Agent until the handoff is complete.
  • Handoff Integration Identifier: The identifier for your handoff integration. This identifier helps you distinguish the messages intended for your handoff in the event payload. For the demo repo, set this value to “custom-handoff”.
HTTP Request block configured for a custom handoff integration

When the AI Agent determines a conversation should be handed off:

  1. Ada executes the handoff flow, including any blocks you’ve configured (such as text messages or capture blocks that run before the HTTP Request block).
  2. The HTTP Request block sends a POST request to your integration’s endpoint with the conversation details.
  3. Your integration receives the request and can begin managing the handoff, such as routing the conversation to an available human agent.
  4. The conversation enters a handoff state in Ada. While in this state, the AI Agent stops responding to end user messages, and your integration is responsible for relaying messages between the end user and the human agent.

Step 2: Detect when a conversation has been handed off

When your integration’s endpoint receives the HTTP request from the handoff trigger, it needs to identify the conversation and prepare to manage the handoff.

Your endpoint receives the POST request from Ada’s HTTP Request block. The request body contains the data you configured in Step 1, including the conversation_id.

```python
from fastapi import FastAPI, Request, HTTPException

app = FastAPI()

# In-memory registry of conversations currently handed off to this integration
active_handoffs = {}

@app.post("/handoffs/start")
async def handle_handoff_start(request: Request):
    payload = await request.json()

    conversation_id = payload.get("conversation_id")
    if not conversation_id:
        raise HTTPException(status_code=400, detail="Missing conversation_id")

    # Store the conversation as an active handoff
    active_handoffs[conversation_id] = {
        "conversation_id": conversation_id,
        "status": "active",
    }

    # Notify a human agent that a new handoff is waiting
    notify_agent(conversation_id)

    return {"status": "ok"}
```

Your integration should maintain a mapping of active handoffs so it can route messages correctly. At a minimum, track:

  • conversation_id: The Ada conversation that was handed off.
  • The assigned human agent (once one picks up the handoff).
  • The handoff status (waiting, active, ended).

This mapping allows your integration to route incoming webhook events to the correct human agent and ignore events for conversations not managed by your integration.
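The tracking described above can be sketched as a small in-memory registry. This is only an illustration (the field names and helper functions are not from the demo repo); a production integration would likely persist this state in a database or cache so it survives restarts:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Handoff:
    conversation_id: str
    status: str = "waiting"          # waiting -> active -> ended
    agent_id: Optional[str] = None   # set once a human agent picks up the handoff

active_handoffs: dict[str, Handoff] = {}

def start_handoff(conversation_id: str) -> Handoff:
    """Register a conversation when the handoff trigger fires."""
    handoff = Handoff(conversation_id=conversation_id)
    active_handoffs[conversation_id] = handoff
    return handoff

def assign_agent(conversation_id: str, agent_id: str) -> None:
    """Record which human agent picked up the handoff."""
    handoff = active_handoffs[conversation_id]
    handoff.agent_id = agent_id
    handoff.status = "active"

def is_managed(conversation_id: str) -> bool:
    """Used to ignore webhook events for conversations we aren't handling."""
    return conversation_id in active_handoffs
```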

Step 3: Build a conversation transcript

When a human agent picks up a handoff, they need context about what the end user has already discussed with the AI Agent. You can retrieve the full conversation history using the Get conversation messages endpoint.

Fetch the transcript when a human agent accepts the handoff, so they have the full conversation context before responding to the end user.

Call the Get conversation messages endpoint with the conversation_id from the handoff. The endpoint supports pagination using cursor and limit parameters.

Replace <handle>, <conversation_id>, and <your-api-key> with your actual values.

```http
GET https://<handle>.ada.support/api/v2/conversations/<conversation_id>/messages
Authorization: Bearer <your-api-key>
```

This example fetches all messages in a conversation, handling pagination to build the complete transcript.

```python
import requests

def get_transcript(conversation_id):
    messages = []
    page_url = f"{ADA_BASE_URL}/v2/conversations/{conversation_id}/messages?limit=100"

    while page_url:
        response = requests.get(
            page_url,
            headers={"Authorization": f"Bearer {ADA_API_KEY}"},
        )
        response.raise_for_status()
        data = response.json()

        messages.extend(data["data"])

        page_url = data.get("meta", {}).get("next_page_url")

    return messages
```

The response contains a list of messages of "type": "message_logs", each with an author (including role and display_name) and content (text, file, or link). Use the author.role field to distinguish between messages from the end user (end_user), the AI Agent (ai_agent), and any human agents (human_agent).

```json
{
  "data": [
    {
      "type": "message_logs",
      "message_id": "6900e297d458517e4b15787d",
      "author": {
        "role": "end_user",
        "id": "6900e28ff6f931ca8672f6dd"
      },
      "content": {
        "type": "text",
        "body": "I need help with my subscription."
      },
      "created_at": "2025-10-28T15:34:41+00:00"
    },
    {
      "type": "message_logs",
      "message_id": "6900e299d458517e4b157881",
      "author": {
        "role": "ai_agent"
      },
      "content": {
        "type": "text",
        "body": "I'd be happy to help! Let me connect you with a specialist."
      },
      "created_at": "2025-10-28T15:35:05+00:00"
    }
  ],
  "meta": {
    "next_page_url": "https://example.ada.support/api/v2/conversations/:conversation_id/messages?cursor=6658f91ea88ff7e389eff34d"
  }
}
```

If the response includes a value for meta.next_page_url, there are additional messages to pull for the conversation. Continue calling the API and appending the results until next_page_url is null to build a complete transcript. After pulling the raw message data, format it into a human-readable transcript so the human agent has full context before engaging with the end user.
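For example, a simple formatter over the message objects shown above might map author.role to a readable label (the labels and fallback handling here are illustrative choices, not part of the API):

```python
ROLE_LABELS = {
    "end_user": "End user",
    "ai_agent": "AI Agent",
    "human_agent": "Human agent",
}

def format_transcript(messages):
    """Render message objects into readable lines for the human agent."""
    lines = []
    for msg in messages:
        author = msg.get("author", {})
        label = ROLE_LABELS.get(author.get("role"), "Unknown")
        name = author.get("display_name")
        speaker = f"{label} ({name})" if name else label

        content = msg.get("content", {})
        if content.get("type") == "text":
            body = content.get("body", "")
        else:
            # Summarize non-text content (file or link) instead of inlining it
            body = f"[{content.get('type', 'attachment')}: {content.get('filename', content.get('url', ''))}]"
        lines.append(f"{speaker}: {body}")
    return "\n".join(lines)
```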

Step 4: Send the human agent’s messages

Once a human agent is ready to respond, your integration sends their messages to the Ada conversation using the Create a new message endpoint. This ensures the end user sees the human agent’s replies in the same conversation thread.

Send a message whenever the human agent types a reply in your support platform. Each message is sent to the active conversation identified by conversation_id.

Each message must specify the author as a human_agent and include a display_name so the end user knows who they’re talking to.

Replace <handle>, <conversation_id>, and <your-api-key> with your actual values.

```http
POST https://<handle>.ada.support/api/v2/conversations/<conversation_id>/messages
Authorization: Bearer <your-api-key>
Content-Type: application/json

{
  "author": {
    "role": "human_agent",
    "display_name": "Alex from Support"
  },
  "content": {
    "type": "text",
    "body": "Hi Jane! I can see your subscription details. Let me look into this for you."
  }
}
```
```python
import requests

def send_agent_message(conversation_id, agent_name, message_text):
    response = requests.post(
        f"{ADA_BASE_URL}/v2/conversations/{conversation_id}/messages",
        headers={"Authorization": f"Bearer {ADA_API_KEY}"},
        json={
            "author": {
                "role": "human_agent",
                "display_name": agent_name,
            },
            "content": {"type": "text", "body": message_text},
        },
    )
    response.raise_for_status()
    return response.json()
```

A successful response confirms the message was created and returns the message details.

```json
{
  "id": "6789abcd1234ef567890",
  "conversation_id": "5df263b7db5a7e6ea03fae9b",
  "author": {
    "role": "human_agent",
    "display_name": "Alex from Support"
  },
  "content": {
    "type": "text",
    "body": "Hi Jane! I can see your subscription details. Let me look into this for you."
  },
  "created_at": "2025-10-21T12:10:00+00:00"
}
```

Example abbreviated for clarity. See the full response here.

Human agent messages you send will be echoed back in v1.conversation.message events; filter them out by ignoring events where author.role is human_agent.

Step 5: Listen to end user messages via webhooks

While the conversation is in a handoff state, end user messages are delivered to your integration as v1.conversation.message webhook events. Your integration must listen for these events and relay them to the human agent.

Ada triggers a v1.conversation.message webhook whenever the end user sends a message during the handoff. Your integration should listen for these events and forward the message content to the assigned human agent.

Each webhook event includes a JSON payload with the message details. During a handoff, the handoff_integration field identifies which handoff integration the message is associated with.

```json
{
  "type": "v1.conversation.message",
  "timestamp": "2025-10-21T12:15:00+00:00",
  "data": {
    "message_id": "msg_789",
    "conversation_id": "5df263b7db5a7e6ea03fae9b",
    "end_user_id": "5df263b7db5a7e6ea03fae9c",
    "handoff_integration": "custom-handoff",
    "author": {
      "role": "end_user",
      "display_name": "Jane"
    },
    "content": {
      "type": "text",
      "body": "Yes, I upgraded my plan last week but I'm still being charged the old rate."
    },
    "ai_agent_domain": "example.ada.support"
  }
}
```

To receive and process webhooks, your app must define an endpoint that matches the URL configured in the Ada Dashboard. This route handles incoming POST requests, verifies that they’re from Ada, and routes the message to the correct human agent.

  • If you want to use a language-specific package, you can use the package provided by Svix as documented here.

  • Alternatively, webhooks can be verified without their library using this manual verification guide.

For more information about how Ada uses webhooks, see this topic.
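If you’d rather not add the Svix dependency, signatures can be checked by hand. The sketch below follows the Standard Webhooks scheme that Svix implements (HMAC-SHA256 over "id.timestamp.body", keyed with the base64-decoded portion of the secret after the whsec_ prefix); treat it as an assumption and confirm the details against the manual verification guide before relying on it:

```python
import base64
import hashlib
import hmac

def verify_signature(secret: str, msg_id: str, timestamp: str,
                     payload: bytes, signature_header: str) -> bool:
    """Manually verify a webhook signature (Standard Webhooks / Svix scheme)."""
    # The usable key is the base64-decoded part of the secret after "whsec_"
    key = base64.b64decode(secret.removeprefix("whsec_"))
    signed_content = f"{msg_id}.{timestamp}.".encode() + payload
    expected = base64.b64encode(
        hmac.new(key, signed_content, hashlib.sha256).digest()
    ).decode()

    # The header may contain several space-separated "v1,<sig>" entries
    for entry in signature_header.split():
        version, _, sig = entry.partition(",")
        if version == "v1" and hmac.compare_digest(sig, expected):
            return True
    return False
```

In addition to the signature, you should reject requests whose timestamp is too far from the current time to guard against replay attacks.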

The following example shows a sample webhook handler that routes messages to the correct human agent during a handoff.

```python
import json

from fastapi import Request, HTTPException
from svix import Webhook, WebhookVerificationError

@app.post("/webhooks/message")
async def handle_message(request: Request):
    headers = request.headers
    payload = await request.body()

    try:
        webhook = Webhook(WEBHOOK_SECRET)
        webhook.verify(payload, dict(headers))
    except WebhookVerificationError:
        raise HTTPException(status_code=400, detail="Invalid signature")

    msg = json.loads(payload)
    event_type = msg.get("type")

    if event_type == "v1.conversation.message":
        handle_webhook_message(msg["data"])

    return {"status": "ok"}
```

Step 6: Filter webhook events for your integration

Your webhook endpoint receives events for all conversations, not just the ones your custom handoff integration manages. It’s important to filter events so your integration only processes messages intended for it and ignores messages meant for the AI Agent or a different handoff integration.

The v1.conversation.message webhook payload includes a handoff_integration field that identifies which integration the message is associated with:

  • null: The message is for the AI Agent (no active handoff). Your custom handoff integration should ignore these.
  • "custom-handoff": The message is for your custom handoff integration. Process these messages.
  • Other values (e.g., "zendesk_chat", "salesforce"): The message is for a different handoff integration. Your custom handoff integration should ignore these.
```python
HANDOFF_INTEGRATION_NAME = "custom-handoff"

def should_process_message(webhook_data):
    """Check if this webhook event is for our custom handoff integration."""
    handoff_integration = webhook_data.get("handoff_integration")
    return handoff_integration == HANDOFF_INTEGRATION_NAME
```

To ensure your agent only receives messages they should respond to, combine the handoff_integration check with others, like checking the author.role:

```python
def handle_webhook_message(data):
    conversation_id = data["conversation_id"]
    handoff_integration = data.get("handoff_integration")
    author_role = data["author"]["role"]

    # Only process end user messages designated for our integration
    if (
        author_role == "end_user"
        and handoff_integration == HANDOFF_INTEGRATION_NAME
        and conversation_id in active_handoffs
    ):
        forward_to_human_agent(conversation_id, data)
```

This ensures your integration only responds to end user messages during handoffs that it is actively managing.

Step 7: Buffer and order messages

Ada delivers each message as its own webhook. Network jitter and parallel processing can prevent those webhooks from arriving in chronological order. The fix is simple: buffer briefly, sort by timestamp, then forward to your agent.

In a production environment, you’ll likely have multiple web servers. Use a shared store/queue so ordering works across instances.

In our demo repository, incoming webhook messages are received, sorted, and forwarded to the agent in the correct order, keeping the ticket experience real-time and conversational.

  1. Webhooks are received one by one.
  2. Each message is stored in a short-lived, in-memory queue per conversation.
  3. A brief delay (e.g., 1–2 seconds) allows messages to accumulate into a micro-batch.
  4. The batch is then sorted by timestamp.
  5. Messages are forwarded to the agent in the correct order.

This pattern preserves the conversational flow while still feeling real-time.

The demo implements a simple in-memory batcher using asyncio. You can find this logic in app/server/webhooks.py.

```python
# app/server/webhooks.py (excerpt)
import asyncio

_global_event_queue = []
_global_batch_task = None
_global_batch_lock = asyncio.Lock()

async def push_event_to_queue(event):
    """Batch events in a queue to be processed after a delay to account for unordered messages"""
    global _global_batch_lock

    async with _global_batch_lock:
        global _global_event_queue, _global_batch_task
        _global_event_queue.append(event)

        if _global_batch_task is not None:
            _global_batch_task.cancel()

        _global_batch_task = asyncio.create_task(batch_process_events())

async def batch_process_events():
    """Process all messages in the queue after a delay"""
    global _global_batch_lock

    await asyncio.sleep(2)

    async with _global_batch_lock:
        global _global_event_queue, _global_batch_task
        events = _global_event_queue
        _global_event_queue = []
        _global_batch_task = None

    # Events are dicts, so sort by the "timestamp" key
    events.sort(key=lambda e: e["timestamp"])
    for event in events:
        if event["type"] == "v1.conversation.message":
            await process_message_event(event)
        elif event["type"] == "v1.conversation.handoff.ended":
            await process_end_handoff_event(event)
```

  • push_event_to_queue() collects incoming webhook events in a global queue.
  • When new events arrive, any pending batch task is canceled and rescheduled to include the latest events.
  • After a short delay, batch_process_events() runs, sorting events by timestamp and forwarding to the agent in order.
  • This lightweight batching logic ensures that even if webhook events arrive out of order, messages are displayed in sequence for a smooth, real-time conversation experience.

Step 8: Receive file attachments from end users

During a handoff, end users may send file attachments (such as screenshots or documents) to help the human agent understand their issue. These arrive as v1.conversation.message webhook events with content.type set to "file".

When an end user sends a file, the webhook payload includes the file details in the content field.

```json
{
  "type": "v1.conversation.message",
  "timestamp": "2025-10-21T12:20:00+00:00",
  "data": {
    "message_id": "msg_file_001",
    "conversation_id": "5df263b7db5a7e6ea03fae9b",
    "end_user_id": "5df263b7db5a7e6ea03fae9c",
    "handoff_integration": "custom-handoff",
    "author": {
      "role": "end_user",
      "display_name": "Jane"
    },
    "content": {
      "type": "file",
      "url": "https://s3.amazonaws.com/bucket/conversations/5df263b7db5a7e6ea03fae9b/5df263b7db5a7e6ea03fae9b/a1b2c3d4-e5f6-7890-abcd-ef1234567890/screenshot.png?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=...&X-Amz-Signature=...",
      "mime_type": "image/png",
      "filename": "screenshot.png"
    },
    "ai_agent_domain": "example.ada.support"
  }
}
```

When your integration receives a file message:

  1. Download the file from the content.url (this is a presigned URL valid for 7 days).
  2. Forward the file to the human agent in your support platform.
  3. Store the file reference if your platform needs it for later retrieval.
```python
import requests

def handle_file_message(data):
    content = data["content"]
    file_url = content["url"]
    filename = content["filename"]
    mime_type = content["mime_type"]

    # Download the file
    file_response = requests.get(file_url)
    file_response.raise_for_status()

    # Forward to the human agent's platform
    forward_file_to_agent(
        conversation_id=data["conversation_id"],
        file_data=file_response.content,
        filename=filename,
        mime_type=mime_type,
    )
```

Step 9: Send the human agent’s file attachments

Human agents may also need to send files to end users during a handoff, such as instructions, forms, or reference documents. This is a two-step process: first upload the file, then send it as a message.

Use the Upload an attachment endpoint to upload the file. Attachments can only be uploaded when the conversation is in a handoff state. This endpoint documentation also highlights other restrictions like file size and type.

Replace <handle>, <conversation_id>, and <your-api-key> with your actual values.

```http
POST https://<handle>.ada.support/api/v2/conversations/<conversation_id>/attachments
Authorization: Bearer <your-api-key>
Content-Type: multipart/form-data

file=@/path/to/instructions.pdf
```

```json
{
  "url": "https://s3.amazonaws.com/bucket/conversations/5df263b7db5a7e6ea03fae9b/5df263b7db5a7e6ea03fae9b/a1b2c3d4-e5f6-7890-abcd-ef1234567890/instructions.pdf?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=...&X-Amz-Signature=...",
  "mime_type": "application/pdf",
  "filename": "instructions.pdf"
}
```

The response includes a presigned URL, valid for 7 days, that you’ll use in the next step to send the file as a message.

Use the Create a new message endpoint with the presigned URL from the upload response.

```http
POST https://<handle>.ada.support/api/v2/conversations/<conversation_id>/messages
Authorization: Bearer <your-api-key>
Content-Type: application/json

{
  "author": {
    "role": "human_agent",
    "display_name": "Alex from Support"
  },
  "content": {
    "type": "file",
    "url": "https://s3.amazonaws.com/bucket/conversations/5df263b7db5a7e6ea03fae9b/5df263b7db5a7e6ea03fae9b/a1b2c3d4-e5f6-7890-abcd-ef1234567890/instructions.pdf?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=...&X-Amz-Signature=...",
    "mime_type": "application/pdf",
    "filename": "instructions.pdf"
  }
}
```
```python
import requests

def send_agent_file(conversation_id, agent_name, file_path):
    # Step 1: Upload the attachment
    with open(file_path, "rb") as f:
        upload_response = requests.post(
            f"{ADA_BASE_URL}/v2/conversations/{conversation_id}/attachments",
            headers={"Authorization": f"Bearer {ADA_API_KEY}"},
            files={"file": f},
        )
    upload_response.raise_for_status()
    attachment = upload_response.json()

    # Step 2: Send as a message
    message_response = requests.post(
        f"{ADA_BASE_URL}/v2/conversations/{conversation_id}/messages",
        headers={"Authorization": f"Bearer {ADA_API_KEY}"},
        json={
            "author": {
                "role": "human_agent",
                "display_name": agent_name,
            },
            "content": {
                "type": "file",
                "url": attachment["url"],
                "mime_type": attachment["mime_type"],
                "filename": attachment["filename"],
            },
        },
    )
    message_response.raise_for_status()
    return message_response.json()
```

Step 10: Listen for when the end user ends the handoff

While a handoff is active, the end user may choose to end the handoff from their side. Your integration should listen for the v1.conversation.handoff.ended webhook event to detect this and clean up the handoff session.

This event fires whenever a handoff ends, regardless of who initiated it — the end user, the system, or your integration calling the End a handoff endpoint. The handoff_integration field in the payload identifies which integration the event belongs to, so your integration can filter for only its own handoffs.

The v1.conversation.handoff.ended event includes the conversation_id, end_user_id, handoff_integration, and ai_agent_domain.

```json
{
  "type": "v1.conversation.handoff.ended",
  "timestamp": "2025-10-21T12:30:00+00:00",
  "data": {
    "conversation_id": "5df263b7db5a7e6ea03fae9b",
    "end_user_id": "5df263b7db5a7e6ea03fae9c",
    "handoff_integration": "custom-handoff",
    "ai_agent_domain": "example.ada.support"
  }
}
```

When you receive this event, verify that the handoff_integration matches your integration, then clean up the handoff:

  1. Notify the human agent that the end user has ended the handoff.
  2. Remove the conversation from your active handoffs tracking.
  3. Clean up any resources associated with the handoff (for example, close the ticket in your support platform).

This example mirrors the pattern used in the demo repository (app/server/webhooks.py), which processes v1.conversation.handoff.ended events alongside message events in the same batching queue.

```python
HANDOFF_INTEGRATION_NAME = "custom-handoff"

def handle_handoff_ended(data):
    conversation_id = data["conversation_id"]
    handoff_integration = data.get("handoff_integration")

    # Only process events for our integration
    if handoff_integration != HANDOFF_INTEGRATION_NAME:
        return

    if conversation_id in active_handoffs:
        # Notify the human agent
        notify_agent_handoff_ended(conversation_id)

        # Clean up the handoff
        del active_handoffs[conversation_id]
```

The v1.conversation.handoff.ended event signals that the handoff has ended and control has returned to the AI Agent. The conversation itself may still be active — the end user can continue chatting with the AI Agent after the handoff ends.

If the entire conversation ends (for example, the end user closes the chat), you’ll receive a separate v1.conversation.ended event. Your integration can optionally listen for this event as well, but the v1.conversation.handoff.ended event is the primary signal for managing handoff lifecycle.

Step 11: End the handoff and return to the AI Agent

When the human agent has resolved the end user’s issue, your integration should end the handoff to transfer the end user back to the AI Agent. Use the End a handoff endpoint.

End the handoff when:

  • The human agent resolves the end user’s issue and explicitly closes the handoff.
  • Your integration determines the handoff should end (for example, based on a timeout or routing rule).

After the handoff ends, the AI Agent resumes control of the conversation and responds to any subsequent end user messages.

To end a handoff, make a POST call to the endpoint with the conversation_id in the URL. No request body is required.

Replace <handle>, <conversation_id>, and <your-api-key> with your actual values.

http
POST https://<handle>.ada.support/api/v2/conversations/<conversation_id>/end-handoff
Authorization: Bearer <your-api-key>
python
import requests

def end_handoff(conversation_id):
    response = requests.post(
        f"{ADA_BASE_URL}/v2/conversations/{conversation_id}/end-handoff",
        headers={"Authorization": f"Bearer {ADA_API_KEY}"},
    )
    response.raise_for_status()

    # Clean up the handoff from tracking
    if conversation_id in active_handoffs:
        del active_handoffs[conversation_id]

    return response.json()

Once the handoff ends:

  • Ada sends a v1.conversation.handoff.ended webhook event to confirm the handoff has ended.
  • The AI Agent resumes responding to end user messages.
  • If configured, Ada may send a CSAT survey to the end user.

Making your integration production-ready

You’ve already seen the note about rate limits and retries earlier in this guide. In production, make sure your retry logic is fully tested, especially for HTTP 429 Too Many Requests responses.

Even with well-formed requests, things can still go wrong: network issues, transient server errors, or invalid payloads can cause intermittent failures. Here’s how to make your integration resilient when they happen.

The Conversations API uses standard HTTP conventions for reporting errors. Here are a few best practices for production:

  • Add retries with backoff: Retry failed requests after a short delay, increasing the delay each time.
  • Handle rate limits: When you receive 429 Too Many Requests, check the Retry-After header and wait before retrying.
  • Validate before sending: Double-check request fields and types before making an API call.
  • Log and monitor errors: Capture response codes and request details to help diagnose issues later.
  • Handle handoff state errors: If you attempt to send a message or upload an attachment to a conversation that is not in a handoff state, you’ll receive a 422 Unprocessable Entity error. Make sure your integration handles this gracefully.
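To make the first two practices concrete, here’s a minimal retry wrapper. It’s transport-agnostic (it takes any zero-argument `send` callable returning a response-like object), honors `Retry-After` when present, and falls back to exponential backoff; the attempt count and delays are illustrative, not prescribed by the API.

```python
import time

def send_with_retries(send, max_attempts=4, base_delay=1.0):
    """Call send() and retry 429 responses with backoff.

    `send` is any zero-argument callable returning a response-like
    object with .status_code, .headers, and .raise_for_status().
    """
    for attempt in range(max_attempts):
        response = send()
        if response.status_code == 429 and attempt < max_attempts - 1:
            # Prefer the server-provided Retry-After; otherwise back off exponentially.
            retry_after = response.headers.get("Retry-After")
            delay = float(retry_after) if retry_after else base_delay * (2 ** attempt)
            time.sleep(delay)
            continue
        response.raise_for_status()  # surfaces non-retryable errors (4xx/5xx)
        return response
```

With `requests`, you would call it as `send_with_retries(lambda: requests.post(url, headers=headers))`.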

In a production environment, you should verify that incoming handoff requests actually originate from Ada’s HTTP Request block and not from an unauthorized source.

To do this, configure a shared secret as an authorization header in the HTTP Request block:

  1. In the Ada Dashboard, securely store your pre-shared key by going to Config > AI Agent > Actions > Manage Tokens.
  2. Next, go to Config > AI Agent > Handoffs to open the handoff and edit the HTTP Request block.
  3. Add a header of your choice (Authorization is typical) with the secured variable in the value. For example: Authorization: Bearer @your_secret_key_variable.
  4. In your integration, validate this header on every incoming handoff request.
python
import os

from fastapi import FastAPI, HTTPException, Request

app = FastAPI()

HANDOFF_AUTH_KEY = os.getenv("HANDOFF_AUTH_KEY")

@app.post("/handoffs/start")
async def handle_handoff_start(request: Request):
    # Verify the authorization header
    auth_header = request.headers.get("Authorization")
    if auth_header != f"Bearer {HANDOFF_AUTH_KEY}":
        raise HTTPException(status_code=401, detail="Unauthorized")

    payload = await request.json()
    conversation_id = payload.get("conversation_id")

    # Process the event...
    return {"status": "ok"}

The HTTP Request block in the handoff flow can include Ada variables in the request body. This allows you to pass additional context about the end user or conversation to your integration.

For example, you can capture the end user’s email address, account number, or issue category using blocks earlier in the handoff flow, and then include those values in the HTTP Request body:

json
{
  "conversation_id": "{conversation_id}",
  "end_user_email": "{email}",
  "account_number": "{account_number}",
  "issue_category": "{issue_category}"
}

Your integration can use this data to pre-populate fields in your support platform, route the handoff to the right team, or provide the human agent with additional context.

If your organization uses multiple AI Agents (for example, separate Agents for different brands or regions), a single integration can handle handoffs for all of them by using the ai_agent_domain field in webhook events.

The ai_agent_domain field appears in every conversation webhook event and identifies which AI Agent environment the event originated from (for example, acme.ada.support or acme.eu.ada.support).

python
def handle_webhook_message(data):
    ai_agent_domain = data.get("ai_agent_domain")
    conversation_id = data["conversation_id"]

    # Route to the correct support team based on the AI Agent
    if ai_agent_domain == "acme-us.ada.support":
        route_to_team("us-support", conversation_id, data)
    elif ai_agent_domain == "acme-eu.ada.support":
        route_to_team("eu-support", conversation_id, data)
    else:
        route_to_team("default-support", conversation_id, data)

This pattern allows you to:

  • Use a single webhook endpoint for all your AI Agents.
  • Route handoffs to different teams or queues based on which Agent initiated the handoff.
  • Apply different handling logic per Agent (for example, different SLAs or escalation rules).
  • Dynamically determine the correct ADA_BASE_URL when making API calls back to Ada, by constructing it from the ai_agent_domain value.
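For the last point, the base URL used in earlier examples (https://<handle>.ada.support/api) can be derived from the event rather than hard-coded. A minimal helper, assuming the API is served from the same host as the ai_agent_domain value:

```python
def base_url_for(ai_agent_domain: str) -> str:
    # Assumes the API host matches ai_agent_domain, following the
    # https://<handle>.ada.support/api pattern used earlier in this guide.
    return f"https://{ai_agent_domain}/api"
```

Calls back to Ada would then use `base_url_for(data["ai_agent_domain"])` in place of a fixed `ADA_BASE_URL`.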

Ensure your message ordering strategy is suitable for your production environment, where multiple web servers may be responding to these events. Typically this takes the form of a shared buffer store that all servers read from and write to.
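As an in-process illustration of that idea, the sketch below buffers messages per conversation and releases them in timestamp order. In production, the same structure would live in a shared store (for example, a sorted set in Redis) so that every web server enqueues into the same queue; the class and method names here are illustrative.

```python
import heapq

class OrderingBuffer:
    """Buffers out-of-order messages per conversation, drains them in order.

    In-process sketch only: with multiple web servers, the buffer must
    live in a shared store so all servers see the same queue.
    """

    def __init__(self):
        self._queues = {}

    def add(self, conversation_id, timestamp, message):
        # Keep a min-heap per conversation, keyed by timestamp.
        heapq.heappush(self._queues.setdefault(conversation_id, []), (timestamp, message))

    def drain(self, conversation_id):
        # Pop everything buffered for this conversation, oldest first.
        queue = self._queues.pop(conversation_id, [])
        return [message for _, message in sorted(queue)]
```

A webhook handler would `add()` each incoming message, and a short-delay flush job would `drain()` and relay the messages in order.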

At this point, your integration should be ready for production use — it can detect handoffs, relay messages between end users and human agents, handle file attachments, and transfer end users back to the AI Agent. From here, you can:

  • Experiment with additional routing logic, agent assignment, or analytics for your custom handoff.
  • Explore Conversations and Webhooks in the sidebar for complete endpoint details.