List all Voice AI Agents API (deprecated)
List all Voice AI agents under your account, along with their names, statuses, and creation dates, using Bolna APIs.
curl --request GET \
--url https://api.bolna.dev/agent/all \
--header 'Authorization: Bearer <token>'
[
{
"id": "3c90c3cc-0d44-4b50-8888-8dd25736052a",
"agent_name": "Alfred",
"agent_type": "other",
"agent_status": "processed",
"created_at": "2024-01-23T01:14:37Z",
"updated_at": "2024-01-29T18:31:22Z",
"tasks": [
{
"task_type": "conversation",
"tools_config": {
"llm_agent": {
"model": "gpt-3.5-turbo",
"max_tokens": 100,
"agent_flow_type": "streaming",
"family": "openai",
"provider": "openai",
"base_url": "https://api.openai.com/v1",
"temperature": 0.1,
"request_json": false,
"routes": {
"embedding_model": "snowflake/snowflake-arctic-embed-m",
"routes": [
{
"route_name": "politics",
"utterances": [
"Who do you think will win the elections?",
"Whom would you vote for?"
],
"response": "Hey, thanks but I do not have opinions on politics",
"score_threshold": 0.9
}
]
}
},
"synthesizer": {
"provider": "polly",
"provider_config": {
"voice": "Matthew",
"engine": "generative",
"sampling_rate": "8000",
"language": "en-US"
},
"stream": true,
"buffer_size": 150,
"audio_format": "wav"
},
"transcriber": {
"provider": "deepgram",
"model": "nova-2",
"language": "en",
"stream": true,
"sampling_rate": 16000,
"encoding": "linear16",
"endpointing": 100
},
"input": {
"provider": "twilio",
"format": "wav"
},
"output": {
"provider": "twilio",
"format": "wav"
},
"api_tools": null
},
"toolchain": {
"execution": "parallel",
"pipelines": [
[
"transcriber",
"llm",
"synthesizer"
]
]
},
"task_config": {
"hangup_after_silence": 10,
"incremental_delay": 400,
"number_of_words_for_interruption": 2,
"hangup_after_LLMCall": false,
"call_cancellation_prompt": null,
"backchanneling": false,
"backchanneling_message_gap": 5,
"backchanneling_start_delay": 5,
"ambient_noise": false,
"ambient_noise_track": "office-ambience",
"call_terminate": 90
}
}
],
"agent_prompts": {
"task_1": {
"system_prompt": "What is the Ultimate Question of Life, the Universe, and Everything?"
}
}
}
]
These APIs have now been deprecated.
Please use the latest v2 APIs.
Authorizations
Bearer authentication header of the form Bearer <token>, where <token> is your auth token.
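
In addition to the curl request above, the same endpoint can be called from Python. A minimal sketch using the requests library (replace <token> with your auth token):

# Minimal sketch: list all Voice AI agents via the deprecated /agent/all endpoint.
import requests

BOLNA_URL = "https://api.bolna.dev/agent/all"
headers = {"Authorization": "Bearer <token>"}  # your Bolna auth token

response = requests.get(BOLNA_URL, headers=headers)
response.raise_for_status()

# The endpoint returns a JSON array of agents, as shown in the example above.
for agent in response.json():
    print(agent["id"], agent["agent_name"], agent["agent_status"])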
Response
- id: Unique identifier for the agent.
- agent_name: Human-readable agent name. Example: "Alfred"
- agent_type: Type of agent. Example: "other"
- agent_status: Current status of the agent. Options: seeding, processed. Example: "processed"
- created_at: Timestamp of agent creation. Example: "2024-01-23T01:14:37Z"
- updated_at: Timestamp of the last update to the agent. Example: "2024-01-29T18:31:22Z"
- tasks: An array of tasks that the agent can perform.
  - task_type: Type of task. Options: conversation, extraction, summarization, webhook
  - tools_config: Configuration of the tools that make up a task.
    - llm_agent: Configuration of the LLM model for the agent task.
      - model: Example: "gpt-3.5-turbo"
      - agent_flow_type: Options: streaming, preprocessed
      - family: Example: "openai"
      - provider: Example: "openai"
      - base_url: Example: "https://api.openai.com/v1"
      - temperature: Example: 0.1
      - routes: Semantic routing layer.
        - embedding_model: Since we use fastembed, all models supported by fastembed are supported. Example: "snowflake/snowflake-arctic-embed-m"
        - routes: Predefined routes that can be used to answer FAQs, set basic guardrails, or trigger a static function call (see the routing sketch after this field reference).
    - synthesizer: Configuration of the Synthesizer model for the agent task.
      - provider: Options: polly, elevenlabs, deepgram, styletts
      - provider_config:
        - voice: Name of voice. Example: Matthew
        - engine: Engine of voice. Example: generative
        - language: Language of voice. Example: en-US
        - sampling_rate: Sampling rate of voice. Options: 8000, 16000
      - buffer_size: Example: 150
      - audio_format: Example: wav
    - transcriber: Configuration of the Transcriber model for the agent task.
      - provider: Transcription provider. Options: deepgram
      - model: Options: nova-2, nova-2-meeting, nova-2-phonecall, nova-2-finance, nova-2-conversationalai, nova-2-medical, nova-2-drivethru, nova-2-automotive. Example: "nova-2"
      - language: Options: en, hi, es, fr. Example: "en"
      - sampling_rate: Example: 16000
      - encoding: Example: linear16
      - endpointing: Example: 100
    - api_tools: API tools you'd like the agents to have access to. The tool definitions need to be supplied as a JSON string, since they are passed to the LLM. Each function tool has a unique name (for example "transfer_call_support"), a function such as transfer_call, and a description such as "Use this tool to transfer the call" (a serialization sketch follows the field reference below).
  - task_config: Should be used only in the conversation task for now; it consists of all the required configuration for conversational nuances.
    - hangup_after_silence: Time to wait, in seconds, before hanging up if the user does not speak at all. Example: 10
    - incremental_delay: Since we work with interim results, this dictates the linear delay to add before speaking every time we get a partial transcript from the ASR (see the interruption sketch after this field reference). Example: 400
    - number_of_words_for_interruption: To avoid accidental interruptions, how many words to wait for before interrupting. Example: 2
    - hangup_after_LLMCall: Whether or not to use an LLM prompt to hang up. This will soon be replaced by a predefined function. Example: false
    - call_cancellation_prompt: Example: null
    - backchanneling: Enables the agent to acknowledge the user while they are speaking long sentences. Example: false
    - backchanneling_message_gap: Gap between successive acknowledgements; a random jitter is added to this value. Example: 5
    - backchanneling_start_delay: Delay after which backchanneling starts. Example: 5
    - ambient_noise: Toggle to add ambient noise to the call for more naturalism. Example: false
    - ambient_noise_track: Track for the ambient noise. Options: office-ambience, coffee-shop, call-center
    - call_terminate: The call automatically disconnects after reaching this limit. Example: 90
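
The routes entry pairs example utterances with a canned response and a score_threshold. Bolna's internal matching logic is not documented here, but a minimal sketch of how such a route could be evaluated, assuming fastembed's TextEmbedding interface and cosine similarity, looks like this:

# Illustrative sketch only: how a semantic route with utterances, a canned
# response, and a score_threshold could be matched against user input.
# Assumes fastembed's TextEmbedding; Bolna's internal implementation may differ.
import numpy as np
from fastembed import TextEmbedding

route = {
    "route_name": "politics",
    "utterances": [
        "Who do you think will win the elections?",
        "Whom would you vote for?",
    ],
    "response": "Hey, thanks but I do not have opinions on politics",
    "score_threshold": 0.9,
}

model = TextEmbedding(model_name="snowflake/snowflake-arctic-embed-m")

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_route(user_text):
    # Embed the user text together with the route's example utterances.
    vectors = list(model.embed([user_text] + route["utterances"]))
    query, references = vectors[0], vectors[1:]
    best = max(cosine(query, ref) for ref in references)
    # Return the canned response only if similarity clears the threshold.
    return route["response"] if best >= route["score_threshold"] else None

print(match_route("Who are you voting for this year?"))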
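
As noted under api_tools, tool definitions are passed as a JSON string rather than a JSON object. A hedged sketch of that serialization; the keys inside the tool definition are illustrative assumptions, not the documented schema:

import json

# Hypothetical tool definition: the keys below are illustrative assumptions
# drawn from the field reference above, not the documented api_tools schema.
transfer_call_tool = {
    "name": "transfer_call_support",
    "description": "Use this tool to transfer the call",
}

# The definitions are serialized to a JSON string because they are passed to the LLM.
tools_as_json_string = json.dumps([transfer_call_tool])
print(tools_as_json_string)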
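
Finally, to make the interruption-related settings concrete, here is a purely illustrative sketch of the behaviour described by number_of_words_for_interruption and incremental_delay; it is not Bolna's implementation:

# Illustrative sketch: models the behaviour described by the
# number_of_words_for_interruption and incremental_delay fields above.
task_config = {
    "number_of_words_for_interruption": 2,
    "incremental_delay": 400,  # assumed to be in milliseconds
}

def should_interrupt(partial_transcript: str) -> bool:
    # Interrupt the speaking agent only once the user has said enough words.
    return len(partial_transcript.split()) >= task_config["number_of_words_for_interruption"]

def delay_before_speaking(partials_received: int) -> int:
    # One reading of the "linear delay": it grows with each partial transcript from the ASR.
    return partials_received * task_config["incremental_delay"]

print(should_interrupt("uh"))             # False: below the two-word threshold
print(should_interrupt("wait a second"))  # True: crosses the threshold
print(delay_before_speaking(3))           # 1200 ms with the example config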