Developer guide
This guide shows how to use the End Users API alongside the Conversations API to build context-aware, authenticated custom channel integrations. It covers setting user context before the first AI Agent turn, passing sensitive metadata securely, and common integration patterns.
The POST /v2/end-users/ → POST /v2/conversations/ flow described in this guide is for custom channel (Conversations API) integrations only. For native chat, use the Chat SDK setMetaFields() and setSensitiveMetaFields() instead.
Channel eligibility
- Native chat: The Chat SDK setSensitiveMetaFields() is the primary path. PATCH /v2/end-users/:id with sensitive_metadata is also available if you have the end_user_id.
- Social and email channels: PATCH /v2/end-users/:id with sensitive_metadata is the only API pathway. Obtain the end_user_id from a v1.end_user.created or v1.conversation.created webhook event.
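The social and email pathway above can be sketched as a small webhook handler that extracts the end_user_id and derives the PATCH target. This is a sketch only: the event types and endpoint come from this guide, but the webhook payload shape (`data.end_user_id`) is an assumption for illustration.

```python
# Hypothetical webhook handler for the social/email pathway.
# The event types and the PATCH endpoint are from this guide;
# the payload shape ("data" -> "end_user_id") is an assumed example.
END_USER_EVENTS = ("v1.end_user.created", "v1.conversation.created")

def patch_path_for_event(event):
    """Return the PATCH /v2/end-users/:id path for a relevant event, else None."""
    if event.get("type") not in END_USER_EVENTS:
        return None
    end_user_id = event["data"]["end_user_id"]  # assumed field location
    return f"/v2/end-users/{end_user_id}"
```

Your integration would then send a PATCH request with a sensitive_metadata body to the returned path.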
Before you begin
- An API token
- A custom channel configured in the Ada dashboard
- A webhook endpoint (optional, for receiving events)
Multi-language support from the first turn
This example creates an end user with a language preference and metadata before starting a conversation. The AI Agent receives the correct language and context from the greeting onward.
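A minimal Python sketch of the two-step flow. The endpoints, the language field, and the profile.metadata keys (region, plan_type) come from this guide; the base URL, token, channel value, and the exact conversation-body field names are assumptions, so check the API reference for your account before using this.

```python
import json
import urllib.request

API_BASE = "https://example.ada.support/api"  # hypothetical base URL
API_TOKEN = "your-api-token"                  # placeholder token

def build_request(method, path, body):
    """Build an authorized JSON request (constructed here, not sent)."""
    return urllib.request.Request(
        url=API_BASE + path,
        method=method,
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {API_TOKEN}",
            "Content-Type": "application/json",
        },
    )

# Step 1: create the end user with language and metadata before any turn.
create_user = build_request("POST", "/v2/end-users/", {
    "profile": {
        "language": "pt-BR",  # greeting and KB lookups use this locale
        "metadata": {"region": "br", "plan_type": "premium"},
    },
})

# Step 2: start the conversation for that end user.
end_user_id = "eu_123"  # placeholder for the id returned by step 1
create_conversation = build_request("POST", "/v2/conversations/", {
    "end_user_id": end_user_id,          # assumed field name
    "channel": "my-custom-channel",      # assumed field name and value
})
```

Sending the requests (for example with urllib.request.urlopen) is omitted so the sketch stays side-effect free.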
The AI Agent now has full user context from the first turn. Language is set to pt-BR, knowledge base lookups use the correct locale, and metadata values like region and plan_type are available as metavariables in Playbooks, Actions, and article rules.
Authenticated Actions without re-login
This example passes an auth token securely at user creation time so the AI Agent can run authenticated Actions (such as account lookups or order changes) without prompting the end user to log in again.
Create the end user with sensitive metadata
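A sketch of the request body, assuming sensitive_metadata sits at the top level alongside profile (the exact placement may differ in your API version, so verify against the API reference). The token value is a placeholder.

```python
import json

# Hypothetical POST /v2/end-users/ body: standard metadata plus an auth token
# passed through sensitive_metadata so authenticated Actions can run without
# re-prompting the user to log in.
create_end_user_body = {
    "profile": {
        "metadata": {"account_type": "business"},  # readable, persistent context
    },
    # Encrypted at rest, redacted from the dashboard, excluded from LLM
    # context, and deleted after 24 hours.
    "sensitive_metadata": {
        "auth_token": "tok_live_abc123",  # placeholder token
    },
}
payload = json.dumps(create_end_user_body)
```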
The sensitive_metadata values are encrypted at rest, redacted from the dashboard, excluded from LLM context, and automatically deleted after 24 hours. They do not appear in the response body.
Campaign-driven personalization
This example sets campaign context on an end user so the AI Agent can personalize the greeting and route the conversation based on the campaign that brought the user in.
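A sketch of the campaign context, set through standard profile.metadata so the values are readable as metavariables for greeting personalization and routing. The key names and values here are hypothetical; use whatever campaign fields your Playbooks and rules expect.

```python
# Hypothetical POST /v2/end-users/ body carrying campaign context.
# These keys become metavariables available in Playbooks, Actions,
# and article rules.
campaign_end_user_body = {
    "profile": {
        "metadata": {
            "campaign_id": "spring-upgrade-2025",  # example campaign identifier
            "campaign_source": "email",            # where the user came from
            "offer_code": "UPGRADE20",             # offer to reference in greeting
        },
    },
}
```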
Security considerations
sensitive_metadata vs. metadata
Use sensitive_metadata for any value that should not persist long-term or be visible to operators and LLMs. Use standard metadata for non-sensitive context that should be readable and persistent.
End-user scoped storage
sensitive_metadata values are stored at the end-user level, not the conversation level. If an end user has multiple active conversations, a value set in one conversation is accessible in all of them.
For the most common integration pattern (one end user per conversation), this is not observable. For integrations where a single end user has concurrent conversations across channels, be aware that sensitive values propagate to all active conversations for that end user.
Write-only behavior
sensitive_metadata values are never returned in API responses. If your integration needs to reference a token it previously set, store it on your side. Ada stores the value only for use by the AI Agent during the conversation.
FAQ
Can I use pre-greeting context with native chat?
No. The POST /v2/end-users/ → POST /v2/conversations/ flow is for custom channel integrations only. For native chat, use the Chat SDK setMetaFields() to set user context before the conversation starts.
Can I use pre-greeting context with social channels like WhatsApp?
Only if your WhatsApp integration uses the Conversations API (making it a custom channel). Social channels managed through Ada’s native integrations (such as SunCo-based WhatsApp) do not support this flow because the end user initiates the conversation and there is no opportunity to call POST /v2/conversations/.
Can I use secure metadata on native chat, social, or email channels?
Yes. PATCH /v2/end-users/:id with sensitive_metadata is channel-agnostic and works for any end user as long as you have the end_user_id. For native chat, the Chat SDK setSensitiveMetaFields() is the primary path, but the API also works. For social and email channels, the API is the only pathway.
What happens if I put sensitive values in standard metadata instead of sensitive_metadata?
Values in standard metadata are stored unencrypted, visible in the dashboard, included in LLM context, and returned in API responses. Always use sensitive_metadata for auth tokens, session IDs, and personally identifiable information.
What happens to end users created with POST that never start a conversation?
End users created through POST /v2/end-users/ that are not associated with a conversation within 24 hours of creation are automatically deleted. If your integration encounters an error between creating the end user and starting a conversation, reuse the same end_user_id for the retry rather than creating a new one. End users are not billed (billing is per conversation, not per end user).
Is POST /v2/end-users/ idempotent?
No. Each call creates a new end user. Your integration is responsible for storing and reusing the end_user_id from the first successful call to avoid duplicates.
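Because each POST creates a new end user, a small lookup layer on your side avoids duplicates on retry. This is an illustrative sketch with an in-memory store; a real integration would persist the mapping in its own database.

```python
# Sketch: reuse the end_user_id from the first successful POST instead of
# creating a duplicate. The storage layer here is an in-memory stand-in.
_end_user_ids = {}  # keyed by your own internal user identifier

def get_or_create_end_user_id(internal_user_id, create_end_user):
    """Return a cached end_user_id, calling create_end_user() only once."""
    if internal_user_id in _end_user_ids:
        return _end_user_ids[internal_user_id]   # reuse: POST is not idempotent
    end_user_id = create_end_user()              # your call to POST /v2/end-users/
    _end_user_ids[internal_user_id] = end_user_id
    return end_user_id
```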
What happens if I omit the language field on POST?
If no language is provided in the profile, the end user’s language defaults to the AI Agent’s configured default language (typically English). If you explicitly provide a language value, that value is used.
Does end-user metadata carry over across conversations?
End-user metadata (set via profile.metadata) persists across conversations for the same end user. Conversation metadata (set on the conversation object) does not carry over. If you need fresh metadata for each conversation, update the end user via PATCH /v2/end-users/:id before starting a new conversation.
How do I remove a sensitive metavariable?
Set the key to null in a PATCH /v2/end-users/:id request:
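In Python, a None value serializes to JSON null, so the PATCH body looks like this (the auth_token key name is an example):

```python
import json

# Hypothetical PATCH /v2/end-users/:id body: a null value removes the
# sensitive key for this end user.
patch_body = {"sensitive_metadata": {"auth_token": None}}
print(json.dumps(patch_body))  # → {"sensitive_metadata": {"auth_token": null}}
```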