# Agents Library
Source: https://docs.bolna.ai/agents-library
Browse Bolna's Voice AI agent templates for quick and efficient setup. Customize pre-built agents to create powerful, AI-driven voice agents seamlessly.
## Agents library
Screens **React Native developers for remote roles** by assessing availability, salary, and deep technical expertise.
Languages: `English`
Screens candidates for **exam invigilator roles using Hinglish**, eligibility filters, and role-readiness checks with smart language adaptation.
Languages: `Hindi`
Screens candidates for **telecalling roles using experience-based branching** and strict bilingual response validation.
Languages: `English`, `Hindi`
Screens **travel nurses** by collecting key details on location, specialization, experience, and licensing.
Languages: `English`
Qualifies **HR-tech leads by adapting to English or Hindi**, capturing company details, hiring needs & key pain points.
Languages: `English`, `Hindi`
Screens candidates for Inside Sales Intern roles via eligibility checks and a quick sales pitch task. Ends early if disqualified.
Languages: `English`
## More pre-built agents
| Agent name | Agent link | Description |
| ------------------- | ------------------------------------------------------------------------------------------------------------------ | ------------------------------------------------------------------------------------- |
| Property Tech | [https://bolna.ai/a/d3dbc421-b964-4c12-8afa-e087e440cb3e](https://bolna.ai/a/d3dbc421-b964-4c12-8afa-e087e440cb3e) | Lead Qualification of Owner or Broker and asks further details about property |
| Dentist Appointment | [https://bolna.ai/a/49077539-c821-42d4-84cb-f3522bea3187](https://bolna.ai/a/49077539-c821-42d4-84cb-f3522bea3187) | Front Desk for Dentist; Schedules appointments and collects information |
| Salon Booking | [https://bolna.ai/a/547e8f2d-d231-4fc6-a9f1-b90801d672b8](https://bolna.ai/a/547e8f2d-d231-4fc6-a9f1-b90801d672b8) | Front Desk for Salon; Schedules appointments and collects information |
| Mr Bolna | [https://bolna.ai/a/19235308-2c67-4126-a0e9-9077e07bf4bb](https://bolna.ai/a/19235308-2c67-4126-a0e9-9077e07bf4bb) | Front Desk for Bolna; Schedules meetings and answers FAQs |
| Weekend Planner | [https://bolna.ai/a/00b05a0f-d451-4afe-b55f-7e2a3fa4896d](https://bolna.ai/a/00b05a0f-d451-4afe-b55f-7e2a3fa4896d) | Plan your weekend with Samantha; Helps users make weekend and vacation plans |
| Sales - Credit Card | [https://bolna.ai/a/68762ade-7e39-4b06-96e6-0d98863fbd0b](https://bolna.ai/a/68762ade-7e39-4b06-96e6-0d98863fbd0b) | Sales agent for credit cards (Hindi); Helps fintech companies sell credit cards |
| Sales - Loans | [https://bolna.ai/a/29780b7b-876e-40a6-96bd-069b8409dedb](https://bolna.ai/a/29780b7b-876e-40a6-96bd-069b8409dedb) | Sales agent for loans (Hindi); Helps fintech companies sell loans |
| KFC Orderer | [https://bolna.ai/a/f57c3754-2cf2-4a7b-be74-e2b1e36aeb2f](https://bolna.ai/a/f57c3754-2cf2-4a7b-be74-e2b1e36aeb2f) | Order booth agent for KFC (any QSR restaurant); Takes and confirms restaurant orders |
| Bolna Recruiter | [https://bolna.ai/a/4f00d5b7-b6d6-4651-a2a9-15da20f3656c](https://bolna.ai/a/4f00d5b7-b6d6-4651-a2a9-15da20f3656c) | Screens candidates, answers FAQs and schedules next round of interviews for Bolna |
# Create Voice AI Agent API (deprecated)
Source: https://docs.bolna.ai/api-reference/agent/create
POST /agent
Learn how to create new agents with Bolna APIs, enabling customized tasks, prompts, and configurations for Bolna voice AI agents.
These APIs have now been deprecated.
Please use the latest [**v2 APIs**](/api-reference/agent/v2/overview).
# Retrieve Voice AI Agent Details API (deprecated)
Source: https://docs.bolna.ai/api-reference/agent/get
GET /agent/{agent_id}
Retrieve detailed Voice AI agent information, including configuration, status, and tasks, using Bolna APIs.
These APIs have now been deprecated.
Please use the latest [**v2 APIs**](/api-reference/agent/v2/overview).
# List all Voice AI Agents API (deprecated)
Source: https://docs.bolna.ai/api-reference/agent/get_all
GET /agent/all
List all Voice AI agents under your account, along with their names, statuses, and creation dates, using Bolna APIs.
These APIs have now been deprecated.
Please use the latest [**v2 APIs**](/api-reference/agent/v2/overview).
# Get All Voice AI Agent Executions API
Source: https://docs.bolna.ai/api-reference/agent/get_all_agent_executions
GET /agent/{agent_id}/executions
Access all execution records for a specific agent, providing insights into performance and past interactions with Bolna APIs.
# Bolna Voice AI Agent APIs Overview (deprecated)
Source: https://docs.bolna.ai/api-reference/agent/overview
Explore Bolna Voice AI Agent APIs overview, featuring endpoints for creating, managing, and executing autonomous voice agents.
These APIs have now been deprecated.
Please use the latest [**v2 APIs**](/api-reference/agent/v2/overview).
## Endpoints
```
POST /agent
GET /agent/:agent_id
PUT /agent/:agent_id
PATCH /agent/:agent_id
GET /agent/all
```
## Agent Object Attributes
### `agent_config`
* `agent_name` *string* **(required)**
Name of the agent
* `agent_welcome_message` *string* **(required)**
Initial agent welcome message. You can pass dynamic values here using variables enclosed within `{}`
* `webhook_url` *string* **(required)**
Get real-time details of the call progress and call data on a webhook. All supported events are listed in [Poll call data using webhooks](/polling-call-status-webhooks)
* `tasks` *array* **(required)**
Definitions and configuration for the agentic tasks
### `agent_prompts`
Prompts to be provided to the agent.
# Patch Update to Voice AI Agent API (deprecated)
Source: https://docs.bolna.ai/api-reference/agent/patch_update
PATCH /agent/{agent_id}
Learn how to partially update properties. Update Bolna Voice AI agent name, welcome message, webhook URL, voice settings, and prompts, using this endpoint.
These APIs have now been deprecated.
Please use the latest [**v2 APIs**](/api-reference/agent/v2/overview).
# Update Voice AI Agent API (deprecated)
Source: https://docs.bolna.ai/api-reference/agent/update
PUT /agent/{agent_id}
Update agent configurations, tasks, and prompts to refine behavior and capabilities using Bolna Voice AI agent APIs.
These APIs have now been deprecated.
Please use the latest [**v2 APIs**](/api-reference/agent/v2/overview).
# Create Voice AI Agent API
Source: https://docs.bolna.ai/api-reference/agent/v2/create
POST /v2/agent
Learn how to create new agents with Bolna APIs, enabling customized tasks, prompts, and configurations for Bolna voice AI agents.
# Delete Voice AI Agent API
Source: https://docs.bolna.ai/api-reference/agent/v2/delete
DELETE /v2/agent/{agent_id}
Use Bolna APIs to delete agents and their related data, ensuring proper cleanup of batches, executions, and configurations.
This deletes **ALL** agent data including all batches, all executions, etc.
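Because deletion is irreversible, it can help to wrap the call in a small helper. A minimal sketch using Python's standard library (the `agent_url` helper is a local convenience, not part of the API):

```python
import urllib.request


def agent_url(agent_id: str, host: str = "https://api.bolna.ai") -> str:
    """Build the v2 agent endpoint URL for a given agent id."""
    return f"{host}/v2/agent/{agent_id}"


def delete_agent(api_key: str, agent_id: str) -> int:
    """DELETE /v2/agent/{agent_id} — removes the agent and ALL related data."""
    req = urllib.request.Request(
        agent_url(agent_id),
        headers={"Authorization": f"Bearer {api_key}"},
        method="DELETE",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```

Consider prompting for confirmation before calling `delete_agent`, since batches and executions are deleted along with the agent.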
# Retrieve Voice AI Agent Details API
Source: https://docs.bolna.ai/api-reference/agent/v2/get
GET /v2/agent/{agent_id}
Retrieve detailed Voice AI agent information, including configuration, status, and tasks, using Bolna APIs.
# Retrieve Voice AI Agent Execution API
Source: https://docs.bolna.ai/api-reference/agent/v2/get_agent_execution
GET /agent/{agent_id}/execution/{execution_id}
Fetch specific execution details of a Voice AI agent, including conversation times, statuses, and metrics, via Bolna APIs.
# List all Voice AI Agents API
Source: https://docs.bolna.ai/api-reference/agent/v2/get_all
GET /v2/agent/all
List all Voice AI agents under your account, along with their names, statuses, and creation dates, using Bolna APIs.
# Get All Voice AI Agent Executions API
Source: https://docs.bolna.ai/api-reference/agent/v2/get_all_agent_executions
GET /v2/agent/{agent_id}/executions
Access all execution records for a specific agent, providing insights into performance and past interactions with Bolna APIs.
## Pagination
This API supports pagination using the `page_number` and `page_size` query parameters. You can utilize `has_more` in the API response to determine if you should fetch the next page. You can learn more about it from the [pagination documentation](/api-reference/pagination).
# Bolna Voice AI Agent APIs Overview
Source: https://docs.bolna.ai/api-reference/agent/v2/overview
Explore Bolna Voice AI Agent APIs overview, featuring endpoints for creating, managing, and executing autonomous voice agents.
## Endpoints
```
POST /v2/agent
GET /v2/agent/:agent_id
PUT /v2/agent/:agent_id
GET /v2/agent/all
```
## Agent Object Attributes
### `agent_config`
* `agent_name` *string* **(required)**
Name of the agent
* `agent_welcome_message` *string* **(required)**
Initial agent welcome message. You can pass dynamic values here using variables enclosed within `{}`
* `webhook_url` *string* **(required)**
Get real-time details of the call progress and call data on a webhook. All supported events are listed in [Poll call data using webhooks](/polling-call-status-webhooks)
* `tasks` *array* **(required)**
Definitions and configuration for the agentic tasks
### `agent_prompts`
Prompts to be provided to the agent.
# Patch Update to Voice AI Agent API
Source: https://docs.bolna.ai/api-reference/agent/v2/patch_update
PATCH /v2/agent/{agent_id}
Learn how to partially update properties. Update Bolna Voice AI agent name, welcome message, webhook URL, voice settings, and prompts, using this endpoint.
Currently, only the following agent attributes can be updated via the `PATCH` endpoint.
* `agent_name`
* `agent_welcome_message`
* `webhook_url`
* `synthesizer`
* `agent_prompts`
* `ingest_source_config`
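Since only the attributes above are accepted, a client can validate the payload before sending it. A sketch using Python's standard library; the helper names are illustrative, not part of the API:

```python
import json
import urllib.request

# Attributes the PATCH endpoint currently accepts (per the list above).
PATCHABLE = {"agent_name", "agent_welcome_message", "webhook_url",
             "synthesizer", "agent_prompts", "ingest_source_config"}


def build_patch_payload(**changes):
    """Keep only attributes PATCH accepts, rejecting anything else up front."""
    unknown = set(changes) - PATCHABLE
    if unknown:
        raise ValueError(f"Not patchable: {sorted(unknown)}")
    return changes


def patch_agent(api_key, agent_id, host="https://api.bolna.ai", **changes):
    """PATCH /v2/agent/{agent_id} with a validated partial update."""
    body = json.dumps(build_patch_payload(**changes)).encode()
    req = urllib.request.Request(
        f"{host}/v2/agent/{agent_id}", data=body, method="PATCH",
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Rejecting unknown keys client-side gives a clearer error than a round-trip to the server.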
# Update Voice AI Agent API
Source: https://docs.bolna.ai/api-reference/agent/v2/update
PUT /v2/agent/{agent_id}
Update agent configurations, tasks, and prompts to refine behavior and capabilities using Bolna Voice AI agent APIs.
# Create Batch API
Source: https://docs.bolna.ai/api-reference/batches/create
POST /batches
Discover how to create a batch for Bolna Voice AI agent by uploading a CSV file containing user contact numbers and prompt variable details for users.
# Delete Batch API
Source: https://docs.bolna.ai/api-reference/batches/delete
DELETE /batches/{batch_id}
Understand how to delete a specific batch using its ID, effectively removing it from your scheduled or active batches.
# List Batch Executions API
Source: https://docs.bolna.ai/api-reference/batches/executions
GET /batches/{batch_id}/executions
Learn how to retrieve all executions from a batch, providing detailed information on each call's outcome and metrics.
# Get Batch API
Source: https://docs.bolna.ai/api-reference/batches/get_batch
GET /batches/{batch_id}
Find out how to retrieve details of a specific batch, including its creation time, status, call status and scheduled execution time.
# List All Batches API
Source: https://docs.bolna.ai/api-reference/batches/get_batches
GET /batches/{agent_id}/all
Explore how to list all batches associated with a particular Bolna Voice AI agent, providing an overview of their statuses, schedules and other relevant details
# Batch APIs Overview
Source: https://docs.bolna.ai/api-reference/batches/overview
Understand how to create and schedule multiple Bolna Voice AI calls together using Bolna Batch APIs for efficient call management.
## Endpoints
```
POST /batches
POST /batches/:batch_id/schedule
POST /batches/:batch_id/stop
GET /batches/:batch_id
GET /batches/:batch_id/executions
GET /batches/:agent_id/all
DELETE /batches/:batch_id
```
# Schedule Batch API
Source: https://docs.bolna.ai/api-reference/batches/schedule
POST /batches/{batch_id}/schedule
Learn how to schedule a batch for calling via Bolna Voice AI agent by specifying the batch ID and the desired execution time.
# Stop Batch API
Source: https://docs.bolna.ai/api-reference/batches/stop
POST /batches/{batch_id}/stop
Understand how to stop a running batch using its ID, allowing you to halt ongoing calls in the batch.
# Make Voice AI Call API
Source: https://docs.bolna.ai/api-reference/calls/make
POST /call
Learn how to initiate outbound phone calls using Bolna Voice AI agents. Start making phone calls using the agent ID and recipient's phone number.
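As a sketch, an outbound call can be triggered with a small JSON POST. The body field names used here (`agent_id`, `recipient_phone_number`) are assumptions; check them against the request schema on this page:

```python
import json
import urllib.request


def call_payload(agent_id, recipient_phone_number):
    """Request body for POST /call; field names are assumed, verify in the schema."""
    return {"agent_id": agent_id,
            "recipient_phone_number": recipient_phone_number}


def make_call(api_key, agent_id, recipient_phone_number,
              host="https://api.bolna.ai"):
    """POST /call — start an outbound call from the given agent."""
    req = urllib.request.Request(
        f"{host}/call",
        data=json.dumps(call_payload(agent_id, recipient_phone_number)).encode(),
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

The recipient number should be in E.164 format, matching the convention used for batch calling.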
# Calling APIs overview
Source: https://docs.bolna.ai/api-reference/calls/overview
Explore Bolna Calling APIs to invoke outbound Voice AI phone calls from your agents. This overview provides the available endpoints and their functionalities.
## Endpoints
```
POST /call
```
# Get Batch Executions API
Source: https://docs.bolna.ai/api-reference/executions/get_batch_executions
GET /batches/{batch_id}/executions
Retrieve all executions for specific batches using Bolna APIs. This endpoint provides detailed information on each call's outcome and metrics within the batch.
# Retrieve Voice AI Execution API
Source: https://docs.bolna.ai/api-reference/executions/get_execution
GET /executions/{execution_id}
Fetch details of a specific phone call execution by its ID using Bolna APIs. This includes information such as conversation time, status, and telephony data.
# Retrieve Voice AI Execution Raw Logs API
Source: https://docs.bolna.ai/api-reference/executions/get_execution_raw_logs
GET /executions/{execution_id}/log
Fetch raw logs of a specific phone call execution by its ID using Bolna APIs. This includes information such as prompts, requests & responses by the models
# Get All Voice AI Agent Executions API
Source: https://docs.bolna.ai/api-reference/executions/get_executions
GET /agent/{agent_id}/executions
Retrieve all executions performed by a specific agent using Bolna APIs. This endpoint provides a comprehensive history of the agent's calls and conversations.
# Executions APIs overview
Source: https://docs.bolna.ai/api-reference/executions/overview
Access your Voice AI agents call and conversation history using Bolna Executions APIs. This page details the available endpoints for managing call executions.
## Endpoints
```
GET /executions/:execution_id
GET /batches/:batch_id/executions
GET /agent/:agent_id/executions
```
# Set Inbound Agent API
Source: https://docs.bolna.ai/api-reference/inbound/agent
POST /inbound/setup
Configure Bolna Voice AI agent to handle inbound calls automatically by associating it with a specific phone number using Bolna APIs.
# Inbound Bolna Voice AI Agent APIs Overview
Source: https://docs.bolna.ai/api-reference/inbound/overview
Discover how to set up Bolna Voice AI agents to answer inbound calls, enabling responsive communication channels.
## Endpoints
```
POST /inbound/setup
```
# Bolna API Documentation
Source: https://docs.bolna.ai/api-reference/introduction
Use and leverage Bolna Voice AI using APIs through HTTP requests from any language in your applications and workflows.
Bolna API features consistent, resource-oriented URLs, handles application/json request bodies, returns responses in JSON format, and utilizes standard HTTP response codes, authentication methods, and HTTP verbs.
You must have a valid Bolna account to generate and use APIs.
## Authentication
* Login to the dashboard at [https://platform.bolna.ai](https://platform.bolna.ai)
* Navigate to [Developers](https://platform.bolna.ai/developers) tab from the left menu bar after login
* Click the button `Generate a new API Key` to generate a key
* Save your API Key
The API Key will be shown only once. Hence, please save it somewhere secure.
## Using the API Key
To authenticate your API requests, include your API key as a `Bearer` token in the `Authorization` header of every HTTP request:
```
Authorization: Bearer <your_api_key>
```
## Example of an Authenticated API Request
The following example makes a GET request to the Bolna API using the API key:
```
GET https://api.bolna.ai/agent/all
Headers:
Authorization: Bearer <your_api_key>
```
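The same request in Python, using only the standard library (a sketch; `auth_headers` is just a local helper):

```python
import json
import urllib.request


def auth_headers(api_key: str) -> dict:
    """Build the Bearer Authorization header expected by the Bolna API."""
    return {"Authorization": f"Bearer {api_key}"}


def list_agents(api_key: str, host: str = "https://api.bolna.ai"):
    """GET /agent/all — returns the JSON list of agents on your account."""
    req = urllib.request.Request(f"{host}/agent/all", headers=auth_headers(api_key))
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


# Usage: agents = list_agents("<your_api_key>")
```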
# Create Knowledgebase API
Source: https://docs.bolna.ai/api-reference/knowledgebase/create
POST /knowledgebase
Upload a PDF document to create a knowledgebase, enhancing your Bolna Voice AI agent's information base and response accuracy.
# Delete Knowledgebase API
Source: https://docs.bolna.ai/api-reference/knowledgebase/delete
DELETE /knowledgebase/{rag_id}
Remove and delete an existing knowledgebase from your Bolna account, keeping your Bolna Voice AI agents up to date.
# Get Knowledgebase API
Source: https://docs.bolna.ai/api-reference/knowledgebase/get_knowledgebase
GET /knowledgebase/{rag_id}
Retrieve details of a specific knowledgebase, including its ID, file name, creation time, and status, using Bolna APIs.
# List All Knowledgebases API
Source: https://docs.bolna.ai/api-reference/knowledgebase/get_knowledgebases
GET /knowledgebase/all
Retrieve all knowledgebases associated with your account, including their status and creation dates.
# Knowledgebases Overview
Source: https://docs.bolna.ai/api-reference/knowledgebase/overview
Learn how to ingest and add knowledgebases to Bolna Voice AI agents, enhancing their information base and response accuracy.
## Endpoints
```
POST /knowledgebase
GET /knowledgebase/:rag_id
GET /knowledgebase/all
DELETE /knowledgebase/:rag_id
```
# Pagination in Bolna API
Source: https://docs.bolna.ai/api-reference/pagination
Learn how to use pagination in Bolna Voice AI APIs using `page_number` and `page_size` to fetch results efficiently and build scalable workflows.
List endpoints support pagination using the `page_number` and `page_size` query parameters. This allows you to fetch large sets of results in smaller, manageable chunks.
## Query Parameters
* `page_number` (integer, optional): The page of results to retrieve. Defaults to `1`. The first page starts at `1`.
* `page_size` (integer, optional): The number of results per page. Defaults to `20`. You can request up to `50` results per page.
## How it works
The API uses offset-based pagination under the hood. For example:
| page\_number | page\_size | Returned records |
| ------------ | ---------- | ---------------- |
| 1 | 10 | Records 1–10 |
| 2 | 10 | Records 11–20 |
| 3 | 5 | Records 11–15 |
## Example Request
```curl example-request
GET /v2/agent/1234/executions?page_number=2&page_size=5
```
```json example-response
{
"total": 38,
"page": 2,
"page_size": 5,
"has_more": true,
"data": [
{ "id": "ex_101", "status": "success", "created_at": "..." },
{ "id": "ex_102", "status": "failed", "created_at": "..." },
...
]
}
```
## Tips
* Use `has_more` to determine if you should fetch the next page.
* Combine pagination with filters supported in the API to narrow results efficiently.
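Putting the tips together, a client can walk every page until `has_more` is false. A sketch using Python's standard library; `total_pages` is an illustrative helper showing the underlying offset arithmetic:

```python
import json
import urllib.request


def total_pages(total: int, page_size: int) -> int:
    """Ceiling division: how many pages an offset-paginated listing spans."""
    return -(-total // page_size)


def fetch_all_executions(api_key: str, agent_id: str, page_size: int = 50,
                         host: str = "https://api.bolna.ai") -> list:
    """Walk GET /v2/agent/{agent_id}/executions page by page until has_more is false."""
    results, page_number = [], 1
    while True:
        url = (f"{host}/v2/agent/{agent_id}/executions"
               f"?page_number={page_number}&page_size={page_size}")
        req = urllib.request.Request(url, headers={"Authorization": f"Bearer {api_key}"})
        with urllib.request.urlopen(req) as resp:
            payload = json.load(resp)
        results.extend(payload["data"])
        if not payload.get("has_more"):
            return results
        page_number += 1
```

With `total: 38` and `page_size: 5` as in the example response, this loop makes eight requests.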
# List Phone Numbers API
Source: https://docs.bolna.ai/api-reference/phone-numbers/get_all
GET /phone-numbers/all
Retrieve all phone numbers associated with your account, including details like creation date and telephony provider like Twilio, Plivo, etc.
# Phone Numbers APIs Overview
Source: https://docs.bolna.ai/api-reference/phone-numbers/overview
Manage your phone numbers effectively using Bolna APIs, including listing and associating numbers with Bolna Voice AI agents.
## Endpoints
```
GET /phone-numbers/all
```
# Add a New Provider API
Source: https://docs.bolna.ai/api-reference/providers/add
POST /providers
Learn how to securely add a new provider to your Bolna account by specifying the provider's name and associated credentials.
You can add your own providers securely in Bolna. Please [read this page](/providers) for more information about all current supported providers.
# List Providers API
Source: https://docs.bolna.ai/api-reference/providers/get
GET /providers
Retrieve all providers associated with your Bolna account, including their IDs, names, and creation timestamps.
# Providers APIs overview
Source: https://docs.bolna.ai/api-reference/providers/overview
Add and manage your own providers securely in Bolna, supporting various telephony and voice services.
You can add your own providers securely in Bolna.
Please [read this page](/providers) for more information about all current supported providers.
## Endpoints
```
POST /providers
GET /providers
DELETE /providers/:provider_key_name
```
# Remove a Provider API
Source: https://docs.bolna.ai/api-reference/providers/remove
DELETE /providers/{provider_key_name}
Delete a previously added provider from your Bolna account, ensuring your integrations remain current.
# Create a new Sub-Account API
Source: https://docs.bolna.ai/api-reference/sub-accounts/create
POST /sub-accounts/create
Create a new sub-account to define separate workspaces with dedicated configurations
This is an `enterprise` feature.
You can read more about our enterprise offering here [Bolna enterprise](/pricing/enterprise-pricing).
# List all Sub-Accounts API
Source: https://docs.bolna.ai/api-reference/sub-accounts/get_all
GET /sub-accounts/all
Retrieve all sub-accounts linked to your main account, enabling centralized visibility and management.
This is an `enterprise` feature.
You can read more about our enterprise offering here [Bolna enterprise](/pricing/enterprise-pricing).
# Sub accounts APIs overview
Source: https://docs.bolna.ai/api-reference/sub-accounts/overview
Add and manage sub-accounts in Bolna to organize clients or business units under a single main account. Use sub-accounts to maintain clear separation of Bolna agents, calls, logs, recordings, and usage.
This is an `enterprise` feature.
You can read more about our enterprise offering here [Bolna enterprise](/pricing/enterprise-pricing).
## Endpoints
```
POST /sub-accounts/create
GET /sub-accounts/all
GET /sub-accounts/:sub_account_id/usage
```
# Track Sub-Account Usage API
Source: https://docs.bolna.ai/api-reference/sub-accounts/usage
GET /sub-accounts/{sub_account_id}/usage
Track usage for a specific sub-account giving you fine-grained insights into usage, consumption and billing.
This is an `enterprise` feature.
You can read more about our enterprise offering here [Bolna enterprise](/pricing/enterprise-pricing).
# Add a New Custom LLM Model
Source: https://docs.bolna.ai/api-reference/user/add_model
POST /user/model/custom
Learn how to integrate your custom Large Language Model (LLM) with Bolna Voice AI agents using Bolna APIs.
This request specifies how to add your own custom LLM models and use them with Bolna Voice AI agents. Read more in [using-custom-llm](/customizations/using-custom-llm).
# User information
Source: https://docs.bolna.ai/api-reference/user/info
GET /user/me
Get details like name, email, current wallet balance, concurrency limits using this API
# User APIs Overview
Source: https://docs.bolna.ai/api-reference/user/overview
Explore APIs related to user and account information for Bolna Voice AI agents, including endpoints for adding custom LLM models.
## Endpoints
```
GET /user/me
POST /user/model/custom
```
# Generate TTS Sample API
Source: https://docs.bolna.ai/api-reference/voice/generate_tts
POST /user/tts_sample
Generate audio from text using various voice providers, enhancing your Bolna Voice AI agent's vocal capabilities.
Generating text-to-speech samples using APIs is currently live for Enterprises only.
We are slowly rolling it out to all users.
Please contact us at [founders@bolna.dev](mailto:founders@bolna.dev) for activation.
# List All Voices API
Source: https://docs.bolna.ai/api-reference/voice/get_all
GET /me/voices
Retrieve a list of all available voices for your account, including details like provider, language, and accent.
# Voice APIs Overview
Source: https://docs.bolna.ai/api-reference/voice/overview
APIs for accessing voices and generating test transcripts which can be utilized for Bolna Voice AI agents.
## Endpoints
```
GET /me/voices
POST /user/tts_sample
```
# Automate and schedule calls using Batches
Source: https://docs.bolna.ai/batch-calling
Learn how to schedule and manage batch calls using Bolna's Voice AI agents. Upload CSV files, set call parameters, and monitor execution for efficient outreach.
## 1. Introduction about Batch structure
1. All phone numbers should include the country prefix in [E.164](https://en.wikipedia.org/wiki/E.164) format
2. All phone numbers should have `contact_number` as the header
3. All other variables can be included in the CSV file in separate columns
```csv example_batch_file.csv
contact_number,first_name,last_name
+11231237890,Bruce,Wayne
+91012345678,Bruce,Lee
+00021000000,Satoshi,Nakamoto
+44999999007,James,Bond
```
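If you are generating the file programmatically, Python's `csv` module writes a compliant file directly. A sketch; the contact rows below are made up for illustration:

```python
import csv

# Hypothetical contact rows; numbers must be E.164 with the country prefix.
contacts = [
    {"contact_number": "+11231237890", "first_name": "Bruce", "last_name": "Wayne"},
    {"contact_number": "+44999999007", "first_name": "James", "last_name": "Bond"},
]


def write_batch_csv(path, rows):
    """Write a Bolna batch CSV: contact_number is the required header,
    and every other column becomes a prompt variable."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(rows[0].keys()))
        writer.writeheader()
        writer.writerows(rows)
```

Writing the file in code also sidesteps the Excel `+` formula issue described below.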
## 2. How to Export a CSV file from Excel or Google Sheets for Batch
In Excel, when you type a `+` at the beginning of a cell, Excel interprets it as a formula.
To ensure the plus sign `+` is retained when entering phone numbers with country codes,
**please add an apostrophe (`'`) before the plus sign.**
[Download an example CSV file](https://bolna-public.s3.amazonaws.com/Bolna+batch+calling+example+csv.csv)
***
## 3. Step by step tutorial to use Batch APIs
### i. Create a batch for agent
Once the CSV file is ready, upload it using the [Create Batch API](/api-reference/batches/create)
```bash request
curl --location 'https://api.bolna.ai/batches' \
--header 'Authorization: Bearer <your_api_key>' \
--form 'agent_id="aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"' \
--form 'file=@"/my-first-batch.csv"'
```
```bash response
{
"batch_id": "abcdefghijklmnopqrstuvwxyz012345",
"state": "created"
}
```
### ii. Scheduling the batch
After receiving your `batch_id`, you can schedule a batch using [Schedule Batch API](/api-reference/batches/schedule)
The scheduled date and time should be in **ISO 8601** format with time zone.
```bash request
curl --location 'https://api.bolna.ai/batches/abcdefghijklmnopqrstuvwxyz012345/schedule' \
--header 'Authorization: Bearer <your_api_key>' \
--form 'scheduled_at="2024-03-20T04:05:00+00:00"'
```
```bash response
{
"message": "success",
"state": "scheduled at 2024-03-20T04:10:00+00:00"
}
```
### iii. Retrieving batch status
Check the status of the batch using [Get Batch API](/api-reference/batches/get_batch)
```bash request
curl --location 'https://api.bolna.ai/batches/abcdefghijklmnopqrstuvwxyz012345' \
--header 'Authorization: Bearer <your_api_key>'
```
```bash response
{
"batch_id": "abcdefghijklmnopqrstuvwxyz012345",
"humanized_created_at": "19 minutes ago",
"created_at": "2024-03-13T14:12:50.596315",
"updated_at": "2024-03-13T14:19:13.115411",
"status": "scheduled",
"scheduled_at": "2024-03-20T04:10:00+05:30"
}
```
### iv. Retrieving all batch executions
Once the batch has run, you can check all executions by the agent using [List Batch Executions API](/api-reference/batches/executions)
```bash request
curl --location 'https://api.bolna.ai/batches/abcdefghijklmnopqrstuvwxyz012345/executions' \
--header 'Authorization: Bearer <your_api_key>'
```
```bash response
[
{
"id": 7432382142914,
"conversation_time": 123,
"total_cost": 123,
"transcript": "",
"createdAt": "2024-01-23T01:14:37Z",
"updatedAt": "2024-01-29T18:31:22Z",
"usage_breakdown": {
"synthesizerCharacters": 123,
"synthesizerModel": "polly",
"transcriberDuration": 123,
"transcriberModel": "deepgram",
"llmTokens": 123,
"llmModel": {
"gpt-3.5-turbo-16k": {
"output": 28,
"input": 1826
},
"gpt-3.5-standard-8k": {
"output": 20,
"input": 1234
}
}
}
},
{...},
{...},
{...},
{...}
]
```
## 4. Example Batch Application using the above flow with APIs
```python batch_script.py
import asyncio
import os

import aiohttp
from dotenv import load_dotenv

# Load environment variables from .env file
load_dotenv()

host = "https://api.bolna.ai"
api_key = os.getenv("api_key", None)
agent_id = 'ee153a6c-19f8-3a61-989a-9146a31c7834'  # agent for which we want to create the batch
file_path = '/path/of/csv/file'
schedule_time = '2024-06-01T04:10:00+05:30'


async def schedule_batch(api_key, batch_id, scheduled_at):
    print("now scheduling batch for batch id : {}".format(batch_id))
    url = f"{host}/batches/{batch_id}/schedule"
    headers = {'Authorization': f'Bearer {api_key}'}
    data = {'scheduled_at': scheduled_at}
    try:
        async with aiohttp.ClientSession() as session:
            async with session.post(url, headers=headers, data=data) as response:
                response_data = await response.json()
                if response.status == 200:
                    return response_data
                raise Exception(f"Error scheduling batch: {response_data}")
    except aiohttp.ClientError as e:
        print(f"HTTP Client Error: {str(e)}")
    except Exception as e:
        print(f"Unexpected error: {str(e)}")


async def get_batch_status(api_key, batch_id):
    print("now getting batch status for batch id : {}".format(batch_id))
    url = f"{host}/batches/{batch_id}"
    headers = {'Authorization': f'Bearer {api_key}'}
    try:
        async with aiohttp.ClientSession() as session:
            async with session.get(url, headers=headers) as response:
                response_data = await response.json()
                if response.status == 200:
                    return response_data
                raise Exception(f"Error getting batch status: {response_data}")
    except aiohttp.ClientError as e:
        print(f"HTTP Client Error: {str(e)}")
    except Exception as e:
        print(f"Unexpected error: {str(e)}")


async def get_batch_executions(api_key, batch_id):
    print("now getting batch executions for batch id : {}".format(batch_id))
    url = f"{host}/batches/{batch_id}/executions"
    headers = {'Authorization': f'Bearer {api_key}'}
    try:
        async with aiohttp.ClientSession() as session:
            async with session.get(url, headers=headers) as response:
                response_data = await response.json()
                if response.status == 200:
                    return response_data
                raise Exception(f"Error getting batch executions: {response_data}")
    except aiohttp.ClientError as e:
        print(f"HTTP Client Error: {str(e)}")
    except Exception as e:
        print(f"Unexpected error: {str(e)}")


async def create_batch():
    url = f"{host}/batches"
    headers = {'Authorization': f'Bearer {api_key}'}
    with open(file_path, 'rb') as f:
        form_data = aiohttp.FormData()
        form_data.add_field('agent_id', agent_id)
        form_data.add_field('file', f, filename=os.path.basename(file_path))
        async with aiohttp.ClientSession() as session:
            async with session.post(url, headers=headers, data=form_data) as response:
                response_data = await response.json()
                if response_data.get('state') == 'created':
                    batch_id = response_data.get('batch_id')
                    res = await schedule_batch(api_key, batch_id, scheduled_at=schedule_time)
                    if res.get('state') == 'scheduled':
                        # Poll the current status every minute until the batch completes
                        while True:
                            await asyncio.sleep(60)
                            res = await get_batch_status(api_key, batch_id)
                            if res.get('status') == 'completed':
                                break
                        res = await get_batch_executions(api_key, batch_id)
                        print(res)
                        return res


if __name__ == "__main__":
    asyncio.run(create_batch())
```
# Acquire Dedicated Phone Numbers through Bolna
Source: https://docs.bolna.ai/buying-phone-numbers
Purchase and manage phone numbers directly from Bolna's dashboard. Follow step-by-step instructions to secure numbers for your Voice AI agents.
Buy and view your Phone numbers on [https://platform.bolna.ai/phone-numbers](https://platform.bolna.ai/phone-numbers).
## Detailed steps to purchase phone numbers
All phone numbers are purchased for a monthly recurring cost using Stripe.
# Extract Structured Data from Conversations in Bolna Voice AI
Source: https://docs.bolna.ai/call-details
Access detailed insights into call logs and data with Bolna Voice AI. Learn how to analyze and utilize call details for better decision-making.
## Extract call details in structured JSON format
Use the `Extraction prompt` to define any relevant information you wish to extract from the conversation.
After every call, this data is returned in the [Execution](/api-reference/executions/get_execution) payload under the `extracted_data` key.
### Example
```text extraction prompt
user_name : Yield the name of the user.
payment_mode : If user is paying by cash, yield cash. If they are paying by card yield card. Else yield NA
payment_date: yield payment date by the user in YYYY-MM-DD format
```
```json response
...
...
"extracted_data": {
"user_name": "Bruce",
"payment_mode": "paypal",
"payment_date": "2024-12-30"
},
...
...
```
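When reading the execution payload in code, it helps to guard against calls where no extraction ran. A sketch against a hypothetical in-memory payload (field values invented for illustration):

```python
# Hypothetical execution payload, trimmed to the fields used here.
execution = {
    "id": "ex_101",
    "status": "completed",
    "extracted_data": {
        "user_name": "Bruce",
        "payment_mode": "card",
        "payment_date": "2024-12-30",
    },
}


def get_extracted_field(execution, field, default=None):
    """Read one field from extracted_data, tolerating calls with no extraction."""
    return (execution.get("extracted_data") or {}).get(field, default)
```

In practice the payload would come from the [Retrieve Voice AI Execution API](/api-reference/executions/get_execution) rather than a literal dict.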
# Bolna AI Updates for April, 2025
Source: https://docs.bolna.ai/changelog/april-2025
Explore the latest features, improvements, and API updates introduced in April 2025 for Bolna Voice AI agents.
## Features
* **Call Frequency Limiting**: Set a limit on the maximum number of inbound calls allowed from a unique phone number to a given destination number. This helps prevent spam, abuse, or unintended repeated calls.
## Improvements
* All inbound and outbound calls now have a maximum limit of `25KB` for injecting context.
* Learn more about [injecting context for inbound calls](/customizations/identify-incoming-callers).
* Learn more about [injecting context for outbound calls](/using-context).
## Features
* **Inbound Whitelist Control**: You can now configure inbound rules to allow calls only from specific whitelisted phone numbers. This ensures that only trusted callers can initiate conversations with your agents.
## Improvements
* Latency improvements for agents using [guardrails](/guardrails).
* Tool information will now be available in post-call analyses like [extraction](/call-details) & summarization.
## Features
* Support for [Deepgram's](/providers/voice/deepgram) Aura-2 TTS model `aura-2`.
## Improvements
* Transcripts will now be more [accurate by incorporating interruptions](/customizations/capturing-precise-transcripts).
## Features
* Support for OpenAI's **GPT-4.1** family of models: `gpt-4.1`, `gpt-4.1-mini` & `gpt-4.1-nano`.
## Improvements
* Ability to remove voices from your account
* Audio recordings are now stored in `dual` (stereo) mode for both inbound & outbound calls.
# Bolna AI Updates for December, 2024
Source: https://docs.bolna.ai/changelog/december-2024
Explore the latest features, improvements, and API updates introduced in December 2024 for Bolna Voice AI agents.
## Features
* Downloading batches which have been uploaded
* Displaying batch call status breakdown
## API Updates
* Batches APIs - Added breakdown for batches executions ([API doc](/api-reference/batches/get_batch))
## Features
* Cartesia TTS support for voice
* Voicemail detection for Twilio & Plivo calls
* Call hangup using prompts (ref. [hangup live calls on Bolna](/hangup-calls))
* Building multi-agent prompts (ref. [multi-agent prompt](/multi-agent-prompt))
## Features
* Support for over 40+ languages (ref. [supported languages](/customizations/multilingual-languages-support))
* Knowledgebases are now functional in all supported languages and work together with LLM-driven context (ref. [ingesting and using KBs](/using-your-knowledgebases))
* Batches revamp for simpler processing and management (ref. [using batches](/batch-calling))
* Adding call hangup information for all calls
## Improvements & migrations
* Change of `execution_id` notation from `{agent_id}#{timestamp}` to a unique `{uuid}`.
  This was a major overhaul: the old notation was causing scaling issues and product complications.
## API Updates
#### New APIs
* Get call details using only `execution_id` ([API doc](/api-reference/executions/get_execution))
* Set inbound agent programmatically ([API doc](/api-reference/inbound/agent))
* Get a list of all added voices for your account ([API doc](/api-reference/voice/get_all))
#### API changes
* Call APIs - Outbound calls will now return the unique `execution_id` ([API doc](/api-reference/calls/make))
* Batches APIs - removed redundant need of `agent_id` wherever applicable ([API doc](/api-reference/batches/overview))
* Execution APIs - removed redundant need of `agent_id` wherever applicable ([API doc](/api-reference/executions/overview))
# Bolna AI Updates for February, 2025
Source: https://docs.bolna.ai/changelog/february-2025
Explore the latest features, improvements, and API updates introduced in February 2025 for Bolna Voice AI agents.
## Features
* Adding webcall support to help users build and test their Bolna Voice AI agents.
## Features
* We've integrated support for [importing voices](/import-voices) from multiple providers like ElevenLabs and Cartesia, along with custom voice options — making voice agent personalization on Bolna AI smoother and more flexible than ever.
## Features
* Adding [Deepgram `nova-3` model](/providers/transcriber/deepgram#4-list-of-deepgram-models-supported-on-bolna-ai) for speech to text capabilities.
## Features
* Adding [ElevenLabs `eleven_flash_v2_5` model](/providers/voice/elevenlabs#4-list-of-elevenlabs-models-supported-on-bolna-ai) for text to speech capabilities.
## Improvements
* Hangup live calls automatically on [detecting silence](/hangup-calls#1-using-time-based-call-hangup) and [using LLM prompts](/hangup-calls#2-using-prompts-to-hangup-calls).
* Add a [hangup message](/hangup-calls#adding-a-hangup-message) to be spoken while disconnecting the call
# Bolna AI Updates for January, 2025
Source: https://docs.bolna.ai/changelog/january-2025
Explore the latest features, improvements, and API updates introduced in January 2025 for Bolna Voice AI agents.
## Bug fixes
* Execution `status` wasn't getting updated for a few incoming calls with connected Twilio telephony
## API Updates
* Agent APIs - Exposed functionality to programmatically delete agents via APIs ([API doc](/api-reference/agent/v2/delete))
## Bug fixes
* A few executions were erroneously losing the `batch_id` mapping
# Bolna AI Updates for July, 2025
Source: https://docs.bolna.ai/changelog/july-2025
Explore the latest features, improvements, and API updates introduced in July 2025 for Bolna Voice AI agents.
## Adding [Rime TTS](/providers/voice/rime) voices and models.
* `arcana` models and voices
* `mistv2` models and voices
## Incorporating websockets for Sarvam `bulbul:v2` model.
# Bolna AI Updates for June, 2025
Source: https://docs.bolna.ai/changelog/june-2025
Explore the latest features, improvements, and API updates introduced in June 2025 for Bolna Voice AI agents.
## Rolling out Bolna AI [On-Premise offering](/enterprise/on-premise-deployments) in Private Beta.
## Exposed `ingest_source_config` for agents, enabling inbound calls to ingest user data via APIs
List of APIs Updated to reflect these changes:
* Get Agent API [API reference doc](/api-reference/agent/v2/get)
* Create Agent API [API reference doc](/api-reference/agent/v2/create)
* Update Agent API [API reference doc](/api-reference/agent/v2/update)
* Patch update Agent API [API reference doc](/api-reference/agent/v2/patch_update)
* List Agents API [API reference doc](/api-reference/agent/v2/get_all)
## Enabled TTS model switching from the UI Dashboard
## Added Sub-Account APIs
List of APIs added for sub-accounts:
* Create sub-account API [API reference doc](/api-reference/sub-accounts/create)
* List all sub-accounts API [API reference doc](/api-reference/sub-accounts/get_all)
* Track sub-accounts usage API [API reference doc](/api-reference/sub-accounts/usage)
# Bolna AI Updates for March, 2025
Source: https://docs.bolna.ai/changelog/march-2025
Explore the latest features, improvements, and API updates introduced in March 2025 for Bolna Voice AI agents.
## Improvements
* Enabling `strict` mode for custom tools to ensure function calls reliably adhere to the function schema, instead of being best effort.
* Updating the UI for the custom tools and improving the [documentation with examples](/tool-calling/custom-function-calls).
## Features
* Bolna Voice AI agents can now dynamically identify incoming callers in real time via:
1. using [your internal APIs](/customizations/identify-incoming-callers#1-internal-api-integration-real-time-lookup) which returns records mapped to a phone number,
2. using uploaded [CSV files](/customizations/identify-incoming-callers#2-csv-uploads) or
3. using publicly linked [Google Sheets](/customizations/identify-incoming-callers#3-google-sheets-integration).
## Features
* Adding Azure's transcriber models for speech to text.
## Improvements
* Infrastructure changes & updates to improve initial conversational latencies.
## Features
* Adding [Bolna Status page](https://status.bolna.ai) where one can track real-time system updates, maintenance notices and any ongoing outage updates.
## Bug fixes
* Bolna Voice AI agents now automatically have context about the current `timestamp` & `timezone` by default. This helps the agent compute times accurately based on your local setting.
* This can be [overridden](/using-context#injecting-current-time) by passing the `timezone` as well.
# Bolna AI Updates for May, 2025
Source: https://docs.bolna.ai/changelog/may-2025
Explore the latest features, improvements, and API updates introduced in May 2025 for Bolna Voice AI agents.
## Latency improvements for `azure` TTS.
* We've optimized our Azure TTS integration for significantly lower end-to-end latency. This means faster voice generation and snappier response times for live calls.
## Added support for Smallest's latest `lightning-v2` TTS model.
* You can now use Smallest.ai’s new `lightning-v2` voices with Bolna AI voice agents.
## Extended maximum call duration to `40 minutes`.
* Calls can now last up to 40 minutes, allowing for longer interviews or conversations without interruption.
## Updated Azure clusters
* Support for the following Azure clusters
1. `gpt-4.1` Azure OpenAI
2. `gpt-4.1-mini` Azure OpenAI
3. `gpt-4.1-nano` Azure OpenAI
4. `gpt-4o` Azure OpenAI
5. `gpt-4o-mini` Azure OpenAI
6. `gpt-4` Azure OpenAI
## Updated Elevenlabs to use [`Multi-Context Websocket`](https://elevenlabs.io/docs/cookbooks/multi-context-web-socket) for improved latency and fluency.
* This greatly improves websocket handling, connection closures and session contexts.
## Added languages support
* Bolna now supports over 100 languages, including English (India), English (United States), English (United Kingdom), and more.
## Updates
* Users can now top up for **\$1000 USD** in one go.
* Users can now opt for auto recharge.
## Improvements
* Latency improvements across the AI voice call stack.
* Execution pages now support filters and column selections.
## Features
* Added [viaSocket](/integrations#external-integrations) integration with Bolna Voice AI agents. Added following supporting tutorials:
1. Go [through this tutorial](/tutorials/viasocket/create-bolna-api-connection) to create Bolna API connection with viaSocket.
## Features
* Added [Sarvam TTS](/providers/voice/sarvam) `bulbul-v2` model for Bolna Voice AI agents.
# Import agents
Source: https://docs.bolna.ai/copy-import-agent
Learn how to import pre-built agents on the Bolna Voice AI platform
# Capturing precise transcripts in Bolna Voice AI
Source: https://docs.bolna.ai/customizations/capturing-precise-transcripts
Bolna Voice AI captures the actual transcripts when conversations involve interruptions, improving call accuracy and experience.
## Overview
Bolna AI incorporates an **advanced interruption handling** mechanism that ensures accurate and contextually relevant transcripts during voice agent interactions.
This feature is currently in beta. Please use it with caution.
When a user interrupts the AI agent mid-conversation, rather than logging the full transcript generated by Large Language Models (LLMs), Bolna intelligently computes the actual transcript by filtering out incomplete or overridden responses. This enhances clarity, ensuring that only the final, meaningful exchange is stored, processed and used for the conversations.
## How It Works
Bolna AI’s interruption handling system functions through a three-step process:
* **Detection of Interruptions**: The system continuously monitors speech input to detect when the user starts speaking while the Voice agent is still speaking.
* **Contextual Computation**: Whenever an interruption is detected, Bolna AI determines whether the user’s input should override the Voice agent's response.
* **Final Transcript Adjustment**: Bolna then reconstructs the conversation transcript to exclude everything after the interruption, ensuring that only the final, meaningful parts of the dialogue are retained and used for further processing.
## Example
| Without precise transcript generation | Using precise transcript generation |
| ------------------------------------------------------------------------------------------------------------------ | ---------------------------------------------------------------------------------------------------------------------- |
| **Assistant:** "Hello, Thank you for calling Wayne Enterprises. How can we help you today?" | **Assistant:** "Hello, Thank you for calling Wayne Enterprises. How can we help you today?" |
| **User:** "hello" | **User:** "hello" |
| **Assistant:** "Hello! How can I assist you today?" | **Assistant:** "Hello! How can I ~~assist you today?~~" |
| **User:** "yeah where are you calling from" | **User:** "yeah where are you calling from" |
| **Assistant:** "I'm here to support you regarding your recent order from Wayne Enterprises. How can I assist you?" | **Assistant:** "I'm here to support you regarding your recent order ~~from Wayne Enterprises. How can I assist you?~~" |
| **User:** "yeah i'm facing an issue with the item i purchased" | **User:** "yeah i'm facing an issue with the item i purchased" |
| ... | ... |
In the above example, the strikethrough text is only for representation purposes. In practice, you'll only see the transcript up to each interruption when `precise transcript generation` is `enabled`.
## Conclusion
Bolna AI’s interruption handling feature ensures that conversation transcripts reflect actual user intent rather than an unfiltered log of AI responses. By dynamically computing the actual transcript, this feature enhances the efficiency of voice AI applications, making conversations more human-like and structured.
# Dynamically identify incoming callers
Source: https://docs.bolna.ai/customizations/identify-incoming-callers
Use Bolna Voice AI agents to identify callers in real time via API, CSV, or Google Sheets and personalize calls with automatic user data injection.
You can link your incoming phone numbers to custom data sources. When a call comes in, your Bolna Voice AI agent automatically identifies the caller from the linked data source, matches it and pulls in relevant details—*name, address, preferences, past history, anything you’ve got*.
This data is seamlessly injected into the Voice AI agent's prompt so every interaction feels personalized, targeted and on-point.
## 1. Internal API Integration (Real-Time Lookup)
Perfect for teams with existing databases.
* Provide an API endpoint that accepts the caller’s phone number.
* We automatically send the following information to this API which your application can consume and use:
1. the incoming caller's `contact_number`
2. the `agent_id` to identify the agent
3. the `execution_id` to identify the unique call
For example: `https://api.your-domain.com/api/customers?contact_number=+19876543210&agent_id=06f64cb2-31cd-49eb-8f81-5be803e12163&execution_id=c4be1d0b-c6bd-489e-9d38-9c15b80ff87c`
* We’ll call this API when a call comes in.
* The returned data (JSON) is automatically merged into the AI prompt just before the call.
Please Note:
* The endpoint has to be a **GET** endpoint
* The supported authentication is **Bearer Token**
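As a rough sketch of what your endpoint's handler might do with the incoming request (the customer store, field names, and URL are hypothetical; only the three query parameters come from Bolna):

```python
from urllib.parse import urlparse, parse_qs

# Hypothetical in-memory customer store keyed by E.164 phone number.
CUSTOMERS = {
    "+19876543210": {"name": "Bruce Wayne", "plan": "premium"},
}

def lookup_caller(request_url: str) -> dict:
    """Parse Bolna's query parameters and return the matching record."""
    params = parse_qs(urlparse(request_url).query)
    raw = params.get("contact_number", [""])[0]
    # A literal "+" in a query string decodes to a space, so restore it here.
    contact_number = "+" + raw.strip(" +")
    return {
        "agent_id": params.get("agent_id", [""])[0],
        "execution_id": params.get("execution_id", [""])[0],
        "customer": CUSTOMERS.get(contact_number, {}),
    }
```

Whatever JSON your endpoint returns for the matched record is what gets merged into the agent's prompt, so keep the payload small and prompt-friendly.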
## 2. CSV Uploads
For smaller teams who prefer simple and no-code data management for quick deployments and testing.
* Upload a CSV file with `contact_number` having the phone numbers (with country code) and associated user info.
* Bolna agents will automatically look up the incoming number and inject the matching row details into the prompt.
```
contact_number,first_name,last_name
+11231237890,Bruce,Wayne
+91012345678,Bruce,Lee
+00021000000,Satoshi,Nakamoto
+44999999007,James,Bond
```
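Before uploading, it can help to sanity-check the file locally, since numbers must carry a country code. This is a small local pre-upload check of our own, not a Bolna API:

```python
import csv
import io

def validate_contacts_csv(csv_text: str) -> list:
    """Return a list of problems; an empty list means the CSV looks good."""
    reader = csv.DictReader(io.StringIO(csv_text))
    if "contact_number" not in (reader.fieldnames or []):
        return ["missing required 'contact_number' column"]
    problems = []
    for line_no, row in enumerate(reader, start=2):  # line 1 is the header
        if not (row["contact_number"] or "").startswith("+"):
            problems.append(f"line {line_no}: number must start with a country code, e.g. +1...")
    return problems
```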
## 3. Google Sheets Integration
The best of both worlds: real-time sync with spreadsheet simplicity.
* Link a **publicly accessible** Google Sheet with user data and their details.
* Bolna agents automatically sync and look up the incoming number to pull the latest data associated with that phone number.
* You can keep updating the data in your Google Sheet; no re-uploads needed. Bolna agents will pick up the latest available data automatically.
| contact\_number | first\_name | last\_name |
| --------------- | ----------- | ---------- |
| +11231237890 | Bruce | Wayne |
| +91012345678 | Bruce | Lee |
| +00021000000 | Satoshi | Nakamoto |
| +44999999007 | James | Bond |
# Multilingual Language Support in Bolna Voice AI
Source: https://docs.bolna.ai/customizations/multilingual-languages-support
Discover how Bolna Voice AI supports multiple languages. Enable global interactions with multilingual capabilities tailored to your audience.
## List of languages supported on Bolna
| Language | Language code | BCP Format |
| ------------------------ | ------------- | ---------- |
| Arabic | ar | ar-AE |
| Bengali | bn | bn-IN |
| Bulgarian | bg | bg-BG |
| Catalan | ca | ca-ES |
| Czech | cs | cs-CZ |
| Danish | da | da-DK |
| Dutch | nl | nl-NL |
| English (Australia) | en-AU | en-AU |
| English + French | multi-fr | multi-fr |
| English + German | multi-de | multi-de |
| English + Hindi | multi-hi | multi-hi |
| English (India) | en-IN | en-IN |
| English (New Zealand) | en-NZ | en-NZ |
| English + Spanish | multi-es | multi-es |
| English (United Kingdom) | en-GB | en-GB |
| English (United States) | en | en-US |
| Estonian | et | et-EE |
| Finnish | fi | fi-FI |
| Flemish | nl-BE | nl-BE |
| French | fr | fr-FR |
| French (Canada) | fr | fr-CA |
| German | de | de-DE |
| German (Switzerland) | de-CH | de-CH |
| Greek | el | el-GR |
| Gujarati | gu | gu-IN |
| Hindi | hi | hi-IN |
| Hungarian | hu | hu-HU |
| Indonesian | id | id-ID |
| Italian | it | it-IT |
| Japanese | ja | ja-JP |
| Kannada | kn | kn-IN |
| Khmer (Cambodia) | km | km-KH |
| Korean | ko | ko-KR |
| Latvian | lv | lv-LV |
| Lithuanian | lt | lt-LT |
| Malay | ms | ms-MY |
| Malayalam | ml | ml-IN |
| Marathi | mr | mr-IN |
| Norwegian | no | nb-NO |
| Polish | pl | pl-PL |
| Portuguese (Brazil) | pt-BR | pt-BR |
| Portuguese (Portugal) | pt | pt-PT |
| Punjabi (India) | pa | pa-IN |
| Romanian | ro | ro-RO |
| Russian | ru | ru-RU |
| Slovak | sk | sk-SK |
| Spanish | es | es-ES |
| Swedish | sv | sv-SE |
| Tamil | ta | ta-IN |
| Telugu | te | te-IN |
| Thai | th | th-TH |
| Turkish | tr | tr-TR |
| Ukrainian | uk | uk-UA |
| Vietnamese | vi | vi-VN |
## Prompting Guide
We've put together a document outlining best practices and recommendations with example prompts, along with common mistakes to avoid. You can go through the [guide for writing prompts in non-english languages](/guides/writing-prompts-in-non-english-languages).
# Using Custom LLMs with Bolna Voice AI
Source: https://docs.bolna.ai/customizations/using-custom-llm
Integrate custom large language models (LLMs) into Bolna Voice AI to enhance agent capabilities and tailor responses to your unique requirements
We expect your custom LLM to be an OpenAI-compatible server.
* [https://platform.openai.com/docs/api-reference/chat/create](https://platform.openai.com/docs/api-reference/chat/create)
* [https://platform.openai.com/docs/api-reference/chat/streaming](https://platform.openai.com/docs/api-reference/chat/streaming)
## Adding your Custom LLM using dashboard
1. Click on LLM select dropdown as shown in the image
2. From the dropdown click on `Add your own LLM`.
3. A dialog box will be displayed. Fill in the following details:
* `LLM URL`: the endpoint of your custom LLM
* `LLM Name`: a name for your custom LLM
Click on `Add Custom LLM` to connect this LLM to Bolna.
4. **Refresh the page**
5. In the LLM settings tab, choose `Custom` in the first dropdown to select LLM Providers
6. In the LLM settings tab, you'll now see your custom LLM model name appearing. Select it and save the agent.
**Using the above steps will make sure the agent uses your Custom LLM URL**.
## Demo video
Here's a working video highlighting the flow:
# Terminate Bolna Voice AI calls
Source: https://docs.bolna.ai/disconnect-calls
Optimize call lengths with Bolna Voice AI by setting duration limits. Automatically terminate calls exceeding limits for better resource management.
## Terminate calls
### Terminating live calls
| Call Type | Compatibility and support |
| -------------- | ------------------------ |
| Outbound calls | ✓ |
| Inbound calls | ✓ |
Users can configure a time limit for calls, which allows them to define the maximum duration (in seconds) for a call.
Once the set time limit is reached, the system will automatically terminate the call, ensuring calls are restricted to a predefined duration.
This is useful for protecting your calls and bills in cases where, for whatever reason, calls run on for long durations.
# Bolna AI On-Prem for Enterprise
Source: https://docs.bolna.ai/enterprise/on-premise-deployments
Discover Bolna Enterprise solutions for large-scale businesses, offering scalable Voice AI agents, advanced integrations, and custom seamless solutions.
**Bolna AI On-Prem** empowers your organization to deploy our best-in-class voice AI infrastructure. It is fully containerized and runs entirely within your cloud or data center. Designed for high-security, high-performance workloads, it's ideal for industries with stringent data requirements.
Please reach out to us at [enterprise@bolna.ai](mailto:enterprise@bolna.ai) or schedule a call [https://bolna.ai/meet](https://bolna.ai/meet) for more information about Bolna On-Premise deployments.
## Deployment Anywhere, Full Control
* Deploy **dockerized containers** or use **Kubernetes** across any cloud or on-prem environment.
* Supports deployment on **your own servers** fully leveraging your existing infrastructure.
* Choose your preferred region and provider (AWS, GCP, Azure, bare metal, private cloud).
## Data Privacy & Compliance
* **Complete data sovereignty**: All audio, requests, logs and transcripts remain within your environment. Nothing is sent to Bolna's servers. This ensures compliance with healthcare, financial, and legal regulations.
* Regular performance and usage metrics are securely sent to Bolna cloud solely for billing and system optimizations.
* Full audit logs for monitoring all outbound activity, giving your security team complete visibility.
## Performance, Scalability & Reliability
* Achieve **ultra-low latency** by co-locating inference with your existing application stack.
* Scale horizontally with separate API and websocket containers, and auto-scale based on demand.
## Enterprise-Grade Operations
* Fully compatible with Bolna APIs, without any changes to your integration. You can simply point to your self-hosted endpoint.
# Bolna AI Enterprise Plan
Source: https://docs.bolna.ai/enterprise/plan
Discover Bolna Enterprise solutions for large-scale businesses, offering scalable Voice AI agents, advanced integrations, and custom seamless solutions.
As you build your application on Bolna to solve your use-case, we partner with you throughout the journey - from early concept to enterprise-grade deployment.
Bolna's Enterprise Plan is built for organizations with high-volume, mission-critical voice needs. It includes:
* **Elevated concurrency limits**: Scale beyond the default 10 concurrent calls.
* **Priority in processing your calls and requests**: Enterprise customers skip queues during peak usage.
* **Premium Slack support and regular check-ins**: Access guaranteed support and a dedicated engineer.
* **Customized volume-based discounts**: Competitive pricing that improves as your call volume grows.
Please reach out to us at [enterprise@bolna.ai](mailto:enterprise@bolna.ai) or schedule a call [https://bolna.ai/meet](https://bolna.ai/meet) for more information.
# Frequently Asked Questions
Source: https://docs.bolna.ai/frequently-asked-questions
Frequently asked questions about Bolna AI
Bolna supports a wide range of customizable voice agents, from free-flowing conversational assistants to structured IVR-style bots.
You can build agents for use cases like lead qualification, customer support, interviews, bookings, scheduling, dating, and more.
You can refer to Bolna's [Agent template library](/agents-library) to get started.
Bolna offers transparent usage-based pricing:
* **Call pricing**: \$0.02/min platform fee (plus provider charges).
Please refer to the [cost & pricing documentation](/pricing/call-pricing) for more information.
By default, Bolna allows up to **10 concurrent calls** for paid users. You can request higher limits via the [Enterprise Plan](/pricing/enterprise-pricing) for large-scale deployments.
**Yes**. You can either:
* **Buy phone numbers directly** from the [Bolna Dashboard](/buying-phone-numbers).
* **Use your own telephony account** (e.g., [Twilio](/twilio-connect-provider) or [Plivo](/plivo-connect-provider)) to connect and use your own managed dedicated phone numbers.
Absolutely. Bolna integrates seamlessly with third-party telephony providers like [Twilio](/twilio-connect-provider) and [Plivo](/plivo-connect-provider), allowing you to use your own account and phone numbers.
Yes. Bolna supports multiple languages and voices. You can create agents in various languages (e.g., English, Hindi) using built-in multilingual support across speech-to-text, LLM, and text-to-speech components.
Please find the [list of all supported languages](/customizations/multilingual-languages-support) and a guide to [write prompts for multilingual agents](/guides/writing-prompts-in-non-english-languages).
Yes, definitely. Bolna AI is an API-first platform providing a comprehensive API suite to:
* Create, update, list, and delete voice agents via [Agent APIs](/api-reference/agent/v2/overview).
* Trigger calls via [Call APIs](/api-reference/calls/overview).
* Manage executions and logs via [Executions APIs](/api-reference/executions/overview).
* Do bulk calls using batches via [Batches APIs](/api-reference/batches/overview).
* Manage phone numbers via [Phone numbers APIs](/api-reference/phone-numbers/overview).
* Create, list and manage sub‑accounts via [Sub-Account APIs](/api-reference/sub-accounts/overview).
Yes. The platform supports shared access where you can add your team (developers, operators, analysts, etc.) to collaborate within the Bolna dashboard. APIs also allow scoped access through sub‑accounts.
Yes. Bolna supports multiple sub-accounts, designed for enterprise-level teams to isolate projects, billing, and permissions—fully manageable via the API.
Yes - Bolna AI supports on-premise deployments.
You can run the complete Bolna platform on your own infrastructure (e.g., private cloud or on-premise servers) instead of the hosted Bolna service.
On-premise is available only for enterprise customers. Please reach out to us at [enterprise@bolna.ai](mailto:enterprise@bolna.ai) or schedule a call [https://bolna.ai/meet](https://bolna.ai/meet) for more information.
# Implementing Guardrails for Bolna Voice AI Agents
Source: https://docs.bolna.ai/guardrails
Discover how to set guardrails for Bolna Voice AI agents, ensuring safe, reliable, and compliant interactions tailored to your business needs.
## Adding Guardrails
Give your config an identifiable name
This is the action or message that gets triggered when any unwanted or inappropriate phrases or sentences are detected.
This is the level or limit that determines when the system should react to unwanted words or phrases.
If the threshold is set low, the system will react to even slightly inappropriate language. If it's set higher, only more severe cases will trigger a response.
These are the unwanted or inappropriate utterances which you want to guard against.
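To build intuition for how the threshold interacts with the guarded phrases, here is a toy illustration of our own (not Bolna's actual detection logic): each utterance is scored against the phrase list, and the guardrail action fires once a score crosses the threshold.

```python
from difflib import SequenceMatcher

# Hypothetical guarded phrases for illustration.
BLOCKED_PHRASES = ["refund scam", "cancel everything"]

def guardrail_triggered(utterance: str, threshold: float = 0.8) -> bool:
    """Fire when any guarded phrase matches the utterance closely enough."""
    utterance = utterance.lower()
    return any(
        SequenceMatcher(None, phrase, utterance).ratio() >= threshold
        for phrase in BLOCKED_PHRASES
    )
```

Lowering `threshold` makes the check more aggressive (near-matches trigger it); raising it means only close matches to a guarded phrase will fire.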
# Writing Prompts in Non-English Languages (Using Native Scripts)
Source: https://docs.bolna.ai/guides/writing-prompts-in-non-english-languages
Learn to write prompts using Devanagari for Hindi, accented Latin for French and Spanish, etc. for accurate pronunciation and natural responses with Bolna Voice AI agents.
Bolna Voice AI agents have multilingual support and can hold conversations in several languages ([see the list of all supported languages](/customizations/multilingual-languages-support)). To ensure **natural speech output**, it is important to write your prompts in the **native script** of the target language, rather than phonetically using the English alphabet.
## Important note on multilingual setup
Bolna supports multilingual configurations, but with a key restriction:
> You can use English plus one additional language in a single agent.
>
> Examples of valid combinations:
>
> * "English + Hindi"
> * "English + French"
> * "English + Spanish"
> * "English + Marathi"
> * "English + X"
>
> Examples of invalid combinations:
>
> * "English + Hindi + Marathi"
## Prompting for non-English languages
If you want to switch languages dynamically you can instruct the prompt to follow the customer's language. For example, for Spanish you may write:
> You will keep your sentences short and crisp. You will never reply with more than 2 sentences at a time.
> You will stick to context throughout. You must speak in Spanish but if the customer wishes to communicate in English, you will immediately shift your language to English and then remain in English.
> Generate the text response in the same language as the customer.
***
## Write the prompt in the native script
Using the correct script:
* Enables more accurate pronunciation
* Helps the AI identify the intended language
* Improves contextual understanding and tone
* Prevents misclassification as English
## Tips for Writing in native scripts
* Use Google Input Tools or built-in language keyboards on your phone/laptop.
* For European languages, make sure to include accented characters (like é, ñ, ü, ¿, ç, etc.).
* Double-check spellings and punctuation using tools like [Google Translate](https://translate.google.com/), but avoid relying on it for full sentence correctness.
## Examples of prompts in native scripts
❌ Incorrect
> Bonjour! Comment ca va? Nous allons commencer l'entretien maintenant.
✅ Correct
> Bonjour ! Comment ça va ? Nous allons commencer l’entretien maintenant.
Notice the accents (ç, é, ’). These help the AI pronounce words like a native speaker.
❌ Incorrect
> Hola! Como estas? Vamos a comenzar la entrevista ahora.
✅ Correct
> ¡Hola! ¿Cómo estás? Vamos a comenzar la entrevista ahora.
Accents and inverted punctuation (¿, ¡) matter for tone and pronunciation accuracy.
❌ Incorrect
> Namaste! Aap kaise ho? Ham aapka interview lene wale hain.
✅ Correct
> नमस्ते! आप कैसे हैं? हम आपका इंटरव्यू लेने वाले हैं।
Writing Hindi in Devanagari script, rather than romanized Hinglish, helps the AI pronounce words like a native speaker.
***
## Common Mistake to Avoid
Don’t write in "English-style" phonetic spelling for non-English prompts.
> ❌ Kaise ho?
> ✅ कैसे हो?
> ❌ Como estas?
> ✅ ¿Cómo estás?
## FAQs
No. Currently, each agent supports only English plus one other language. Supporting more than two languages in a single agent may lead to confusion in language detection and inconsistent delivery.
Bolna agents can dynamically switch between English and the configured second language, but only between these two. If a customer speaks an unsupported third language, the agent will not be able to understand or reply appropriately.
It depends on your audience and brand tone. Ensure your prompts reflect the appropriate politeness level (e.g., “vous” vs. “tu” in French, or “आप” vs. “तुम” in Hindi) for a consistent and professional experience.
Use Bolna's Preview Voice feature in [Voice Labs](https://platform.bolna.ai/voices) to test generated responses before finalizing your prompts. Adjust words and punctuation if needed for more natural pronunciation.
# Hangup and Disconnect Bolna Voice AI calls
Source: https://docs.bolna.ai/hangup-calls
Discover methods to disconnect live Bolna Voice AI calls. Implement time-based hangups, custom prompts, and personalized messages for seamless call termination.
## Hangup calls
### 1. Using time based call hangup
You can add a `silence time` threshold which allows you to set a configurable threshold (in seconds) for detecting user inactivity during a call. If no audio is detected from the user for the specified duration, the call will automatically disconnect. This helps streamline conversations and prevent unnecessary call durations.
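Conceptually, silence-based hangup behaves like a resettable timer: each chunk of detected user audio resets it, and the call is cut once the timer runs past the threshold. A minimal sketch of the idea (our own model, not Bolna's implementation):

```python
import time

class SilenceWatchdog:
    """Toy model of silence-based hangup: disconnect after `threshold` quiet seconds."""

    def __init__(self, threshold_seconds: float):
        self.threshold = threshold_seconds
        self.last_audio = time.monotonic()

    def heard_audio(self) -> None:
        # Any detected user speech resets the silence timer.
        self.last_audio = time.monotonic()

    def should_hang_up(self) -> bool:
        return time.monotonic() - self.last_audio >= self.threshold
```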
### 2. Using prompts to hangup calls
You can choose to add a custom prompt which determines whether to disconnect the call or not.
Since this is prompt-based, it might not be 100% accurate and may require tuning the prompt for your use case.
#### Example hangup prompt
```text hangup prompt example
You are an AI assistant determining if a conversation is complete. A conversation is complete if:
1. The user explicitly says they want to stop (e.g., "That's all," "I'm done," "Goodbye," "thank you").
2. The user seems satisfied, and their goal appears to be achieved.
3. The user's goal appears achieved based on the conversation history, even without explicit confirmation.
If none of these apply, the conversation is not complete.
```
***
## Adding a hangup message
You can enhance the overall experience by adding a hangup message, spoken by the voice AI agent as the final message before the call ends.
This also accepts [dynamic context](/using-context#custom-variables) as variables using `{}` to craft a personalized closing statement.
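For example, a closing statement using a custom variable (the `{name}` variable here is hypothetical; you would define it in your agent's context):

```text
Thanks for your time, {name}! We'll send you the details over email. Goodbye!
```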
# Importing your voices to use with Bolna Voice AI
Source: https://docs.bolna.ai/import-voices
Easily import voices from multiple providers like ElevenLabs, Cartesia including custom voices into Bolna for seamless voice agent personalization.
## Importing voices using dashboard
1. Navigate to [Voice lab](https://platform.bolna.ai/voices) in the dashboard and click `Import Voices`.
2. Select your Voice Provider from the list.
3. Provide the `Voice ID` which you want to import onto Bolna:
4. Enable import from your own connected account if you wish to import any custom voice or your own account voice
5. Click on `Import`. Your voice will get imported and enabled for your account within seconds!
# Home
Source: https://docs.bolna.ai/index
Create and deploy Conversational Voice AI Agents
Learn how to create conversational voice agents with Bolna AI to **qualify leads**, **boost sales**, **automate customer support**, and **streamline recruitment and hiring**.
# Bolna Voice AI Integrations
Source: https://docs.bolna.ai/integrations
Integrate Bolna with your providers and services
## Telephony Integrations
* [Connect your Twilio phone numbers with Bolna](/twilio-connect-provider)
* [Connect your Plivo phone numbers with Bolna](/plivo-connect-provider)
## Model Integrations
* [Connect your OpenAI account with Bolna](https://platform.bolna.ai/auth/openai)
* [Connect your Deepseek account with Bolna](https://github.com/deepseek-ai/awesome-deepseek-integration/?tab=readme-ov-file#others)
* [Connect your ElevenLabs account with Bolna](https://platform.bolna.ai/auth/elevenLabs)
* [Connect your Cartesia account with Bolna](https://platform.bolna.ai/auth/cartesia)
* [Connect your Deepgram account with Bolna](https://platform.bolna.ai/auth/deepgram)
* [Connect your Azure account with Bolna](https://platform.bolna.ai/auth/azure)
## External Integrations
* [Connect your Zapier account with Bolna](https://zapier.com/apps/bolna/integrations)
* [Connect your Make.com account with Bolna](https://www.make.com/en/integrations/bolna)
* [Connect your Cal.com account with Bolna](https://platform.bolna.ai/auth/calcom)
* [Connect your viaSocket account with Bolna](https://viasocket.com/integrations/bolna)
# Bolna AI: Create and deploy Voice AI Agents
Source: https://docs.bolna.ai/introduction
Learn how to create conversational voice agents with Bolna AI to qualify leads, boost sales, automate customer support, and streamline recruitment and hiring
## Demo
* Bolna API documentation
* Try Bolna on Playground
* Pre-built agents for your use cases
* Talk to us for customized solutions
* Scale to millions of calls
* Generate API keys
# List of status for Bolna Voice AI calls
Source: https://docs.bolna.ai/list-phone-call-status
Learn about the various types of call statuses associated with Bolna Voice AI conversations
## Introduction
Every conversation is associated with:
* `status`: Conversation realtime status
* `error_message`: An explanatory error message in case of errors or failed calls.
## Anatomy of a Bolna Voice AI call
The following diagram illustrates the basic flow of a call as it progresses from beginning to end.
```mermaid
flowchart LR
ringing --> no-answer
queued --> initiated --> ringing --> in-progress --> call-disconnected --> completed
ringing --> busy
%% Greenish styling for the main flow
style queued fill:#e0f8e0,stroke:#2e8b57,stroke-width:2px
style initiated fill:#e0f8e0,stroke:#2e8b57,stroke-width:2px
style ringing fill:#e0f8e0,stroke:#2e8b57,stroke-width:2px
style in-progress fill:#e0f8e0,stroke:#2e8b57,stroke-width:2px
style call-disconnected fill:#e0f8e0,stroke:#2e8b57,stroke-width:2px
style completed fill:#e0f8e0,stroke:#2e8b57,stroke-width:2px
```
The `completed` status is the final status of a conversation.
## List of successful call status
The following successful events are listed in chronological order.
| Event name | Description |
| ------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `queued` | The call is received by Bolna and is now queued |
| `initiated` | The call has been initiated from Bolna's servers |
| `ringing` | The call is now ringing |
| `in-progress` | The call has been answered and is now in progress |
| `call-disconnected` | The call is now disconnected |
| `completed` | Post-call processing of the call, recordings, etc. has finished after the call disconnected. There might be some lag (\~2-3 minutes) before the `completed` event is received, since processing the call data and recordings can take some time |
## List of unanswered call status
| Event name | Description |
| ------------- | ------------------------------------------------------------ |
| `balance-low` | The call cannot be initiated since your Bolna balance is low |
| `busy` | The callee was busy |
| `no-answer` | The phone was ringing but the callee did not answer the call |
## List of unsuccessful call status
| Event name | Description |
| ---------- | ---------------------------------------------------------------------------------- |
| `canceled` | The call was canceled |
| `failed` | The call failed |
| `stopped` | The call was stopped by the user or due to no response from the telephony provider |
| `error` | An error occurred while placing the call |
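The three tables above can be collapsed into a small helper for webhook handlers. This is a sketch of our own (the grouping names `successful`/`unanswered`/`unsuccessful` mirror the table headings and are not an official API field):

```python
# Status groups taken directly from the three tables above.
SUCCESSFUL = {"queued", "initiated", "ringing", "in-progress",
              "call-disconnected", "completed"}
UNANSWERED = {"balance-low", "busy", "no-answer"}
UNSUCCESSFUL = {"canceled", "failed", "stopped", "error"}

def classify_status(status: str) -> str:
    """Map a Bolna call status string to its outcome group."""
    if status in SUCCESSFUL:
        return "successful"
    if status in UNANSWERED:
        return "unanswered"
    if status in UNSUCCESSFUL:
        return "unsuccessful"
    return "unknown"
```

A handler can then, for instance, retry only `unanswered` calls and alert on `unsuccessful` ones.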
The payloads for all the above events will follow the same structure as that of [Agent Execution](api-reference/executions/get_execution) response.
```json {7, 8}
{
"id": 7432382142914,
"agent_id": "3c90c3cc-0d44-4b50-8888-8dd25736052a",
"batch_id": "d12abbbe-d16d-4c51-b18c-c7d5c3807962",
"conversation_time": 123,
"total_cost": 123,
"status": "completed",
"error_message": null,
"answered_by_voice_mail": true,
"transcript": "",
"created_at": "2024-01-23T01:14:37Z",
"updated_at": "2024-01-29T18:31:22Z",
"usage_breakdown": {
"synthesizer_characters": 123,
"synthesizer_model": "polly",
"transcriber_duration": 123,
"transcriber_model": "deepgram",
"llm_tokens": 123,
"llm_model": {
"gpt-3.5-turbo-16k": {
"output": 28,
"input": 1826
},
"gpt-3.5-standard-8k": {
"output": 20,
"input": 1234
}
}
},
"telephony_data": {
"duration": 42,
"to_number": "+10123456789",
"from_number": "+1987654007",
"recording_url": "https://bolna-call-recordings.s3.us-east-1.amazonaws.com/AC1f3285e7c353c7d4036544f8dac36b98/REb1c182ccde4ddf7969a511a267d3c669",
"hosted_telephony": true,
"provider_call_id": "CA42fb13614bfcfeccd94cf33befe14s2f",
"call_type": "outbound",
"provider": "twilio"
},
"transfer_call_data": {
"provider_call_id": "CA42fb13614bfcfeccd94cf33befe14s2f",
"status": "completed",
"duration": 42,
"cost": 123,
"to_number": "+10123456789",
"from_number": "+1987654007",
"recording_url": "https://bolna-call-recordings.s3.us-east-1.amazonaws.com/AC1f3285e7c353c7d4036544f8dac36b98/REb1c182ccde4ddf7969a511a267d3c669",
"hangup_by": "Caller",
"hangup_reason": "Normal Hangup"
},
"batch_run_details": {
"status": "completed",
"created_at": "2024-01-23T01:14:37Z",
"updated_at": "2024-01-29T18:31:22Z",
"retried": 0
},
"extracted_data": {
"user_interested": true,
"callback_user": false,
"address": "42 world lane",
"salary_expected": "42 bitcoins"
},
"context_details": {},
"extraction_webhook_status": true
}
```
# Make Outbound Calls Using Bolna Voice AI Agents
Source: https://docs.bolna.ai/making-outgoing-calls
Make outbound Voice AI calls with Bolna using default or dedicated phone numbers. Integrate telephony providers and automate calls via dashboard and APIs.
## Using Bolna numbers for making outgoing calls
By default, you can make outbound calls using Bolna's centralized phone numbers.
| Callee country | Phone number prefix |
| ------------------- | ---------------------------------------------------------- |
| 🇺🇸 United States | Callee will receive the phone call from `+1` prefix phone |
| 🇬🇧 United Kingdom | Callee will receive the phone call from `+1` prefix phone |
| 🇦🇺 Australia | Callee will receive the phone call from `+1` prefix phone |
| 🇮🇳 India | Callee will receive the phone call from `+91` prefix phone |
| 🌍 Others | Callee will receive the phone call from `+1` prefix phone |
## Use your own dedicated phone number
### Method 1. Purchase a phone number from the [Bolna Dashboard](https://platform.bolna.ai/phone-numbers).
Please refer to a [step by step tutorial for purchasing phone numbers on Bolna](/buying-phone-numbers).
### Method 2. Connect your Telephony account and use your own phone numbers.
* [Use your own Twilio phone numbers with Bolna](/twilio-connect-provider)
* [Use your own Plivo phone numbers with Bolna](/plivo-connect-provider)
***
## Making outbound calls from dashboard
## Making outbound calls Using APIs
Use [`/call` API](api-reference/calls/make) to place the call to the agent
```curl default-centralized-phone-numbers
# No need to add `from_phone_number`
curl --request POST \
--url https://api.bolna.ai/call \
--header 'Authorization: ' \
--header 'Content-Type: application/json' \
--data '{
"agent_id": "123e4567-e89b-12d3-a456-426655440000",
"recipient_phone_number": "+10123456789"
}'
```
```curl dedicated-phone-numbers
# Add your purchased phone number or your own connected phone number in `from_phone_number` field
curl --request POST \
--url https://api.bolna.ai/call \
--header 'Authorization: ' \
--header 'Content-Type: application/json' \
--data '{
"agent_id": "123e4567-e89b-12d3-a456-426655440000",
"recipient_phone_number": "+10123456789",
"from_phone_number": "+1987654321"
}'
```
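The only difference between the two requests above is the optional `from_phone_number` field. A small sketch that builds the request body either way (the helper name is ours, not part of the Bolna SDK; only the fields come from the examples above):

```python
import json

def build_call_payload(agent_id, recipient_phone_number, from_phone_number=None):
    """Build the JSON body for the /call API.

    Omit from_phone_number to use Bolna's centralized numbers;
    set it to use a purchased or connected number instead.
    """
    payload = {
        "agent_id": agent_id,
        "recipient_phone_number": recipient_phone_number,
    }
    if from_phone_number is not None:
        payload["from_phone_number"] = from_phone_number
    return json.dumps(payload)
```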
## Making outbound calls Using Zapier & Make.com
* [Connect Zapier to start making outbound calls using Bolna Voice AI agents](https://zapier.com/apps/bolna/integrations)
* [Connect Make.com to start making outbound calls using Bolna Voice AI agents](https://www.make.com/en/integrations/bolna)
# Creating Multi-Agent Prompts in Bolna Voice AI
Source: https://docs.bolna.ai/multi-agent-prompt
Explore multi-agent prompt setups in Bolna Voice AI to enable dynamic conversations between multiple agents for advanced use cases.
This feature is still in Beta and is only available via [Create agent API](api-reference/agent/v2/create).
## Example agent payload
```json multi-agent-payload.json
{
"agent_config": {
"agent_name": "Recruitment multi agent",
    "agent_welcome_message": "Hey! This is a recruitment call! Please speak now",
"tasks": [
{
"tools_config": {
"output": {
"provider": "twilio"
},
"input": {
"provider": "twilio"
},
"synthesizer": {
"provider": "polly",
"stream": true,
"caching": true,
"provider_config": {
"voice": "Danielle",
"engine": "neural",
"language": "en-US"
},
"buffer_size": 100.0
},
"llm_agent": {
"agent_flow_type": "streaming",
"agent_type": "graph_agent",
"llm_config": {
"provider": "openai",
"model": "gpt-4o-mini",
"agent_information": "recruitment system",
"nodes": [
{
"id": "root",
"content": "Welcome to our recruitment portal!",
"prompt": "You are an AI assistant helping with recruitment. Greet the user and ask if they're interested in applying for a job, learning about open positions, or seeking general information about the company.",
"edges": [
{
"to_node_id": "open_positions",
"condition": "user wants to learn about open positions"
},
{
"to_node_id": "end",
"condition": "user is not interested"
}
]
},
{
"id": "open_positions",
"content": "Here are the current open positions.",
"prompt": "You're providing a list of current open positions. Ask the user if they are interested in a specific role and offer details about job descriptions, qualifications, and application steps.",
"edges": [
{
"to_node_id": "end",
"condition": "user is not interested in applying"
}
]
},
{
"id": "end",
"content": "Thank you for your time!",
"prompt": "End the conversation when the user has no further questions or has completed their task, such as submitting an application.",
"edges": []
}
],
"current_node_id": "root",
"context_data": {}
}
},
"transcriber": {
"endpointing": 123.0,
"stream": true,
"provider": "deepgram",
"model": "nova2",
"language": "en"
},
"api_tools": null
},
"task_config": {
"hangup_after_LLMCall": false,
"hangup_after_silence": 10.0,
"ambient_noise": false,
"interruption_backoff_period": 0.0,
"backchanneling": false,
"backchanneling_start_delay": 5.0,
"optimize_latency": true,
"incremental_delay": 100.0,
"call_cancellation_prompt": null,
"number_of_words_for_interruption": 3.0,
"backchanneling_message_gap": 5.0,
"use_fillers": false,
"call_terminate": 300
},
"task_type": "conversation",
"toolchain": {
"execution": "parallel",
"pipelines": [
[
"transcriber",
"llm",
"synthesizer"
]
]
}
}
],
"agent_type": "Lead Qualification"
},
"agent_prompts": {
"task_1": {
"system_prompt": "You're a helpful assistant that books appointments for people."
}
}
}
```
# Account based concurrency tiers
Source: https://docs.bolna.ai/outbound-calling-concurrency
Discover how different account tiers on Bolna Voice AI impact concurrency for outbound phone calls
## Concurrent calling for outbound calls
By default, we allow up to 10 concurrent calls for all **paid users**. Calls over this limit are automatically queued and processed one by one as slots become available for your account.
All **trial users** who haven't purchased any credits can make up to 2 concurrent calls, and only to their **verified phone numbers**.
| Account type | Monthly minutes of calls | Concurrent calls |
| ------------ | ------------------------ | ---------------- |
| `trial` | n/a | `2` |
| `paid` | `<10000` | `10` |
| `paid` | `10000 - 25000` | `20` |
| `paid` | `25000 - 50000` | `30` |
| `enterprise` | `>100000` | `unlimited` |
You can read more about our enterprise offering here [Bolna enterprise](/pricing/enterprise-pricing).
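The tier table can be sketched as a lookup function. Boundary handling here is an assumption, since the table lists ranges such as `10000 - 25000` without stating which row owns the endpoints, and does not cover paid accounts between 50000 and 100000 minutes:

```python
def concurrency_limit(account_type, monthly_minutes):
    """Return the concurrent-call limit per the account tier table (a sketch)."""
    if account_type == "trial":
        return 2
    if account_type == "enterprise":
        return "unlimited"
    # Paid tiers; endpoint ownership is assumed, not documented.
    if monthly_minutes < 10000:
        return 10
    if monthly_minutes < 25000:
        return 20
    if monthly_minutes <= 50000:
        return 30
    # The table does not list paid tiers above 50000 minutes.
    return "contact Bolna"
```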
## Concurrent calling for inbound calls
There are no concurrent limits or any restrictions on inbound calls.
## Checking your account's concurrency limits
We'll monitor your monthly volumes and adjust concurrency as needed to ensure you experience no disruptions - while maintaining smooth operations for all our customers.
1. Go to your **Workplace settings**
2. See your **Account limits**
# Platform Concepts
Source: https://docs.bolna.ai/platform-concepts
An overview of the various components that make up Bolna Voice AI agents, along with the key tasks these agents are designed to perform.
### Agent:
Bolna helps you create AI Agents which can be instructed to do tasks. Each agent is built from:
* An Input medium
* For voice based conversations the `agent` input could be a microphone or a phone call
* For text based conversations the `agent` could take inputs via keyboard
* For visual based conversations the `agent` could take inputs in the form of images (Coming soon)
* An ASR
* ASR converts the input to an LLM-compatible format so it can be passed to the chosen LLM
* An LLM
* The LLM takes the input from the ASR, generates an appropriate response, and passes it to the TTS or image generation model depending on the type of conversation the agent is being built for
* A TTS / Image Generation Model
* Takes the LLM response and generates a compatible output to pass on to the output component
* An Output component
* Similar to the input component, this will pass the compatible text/voice/image to the output medium
### Tasks
Bolna provides the functionality to instruct your `agent` to execute tasks once the conversation has ended.
* Summarization task
* Extraction task
* Function tools
# Bolna Playground Overview
Source: https://docs.bolna.ai/playground
Learn to create, modify, and test Bolna Voice AI agents using the various capabilities like Agent Setup, Executions, Batches, Voice Lab, Phone Numbers and more.
Access Bolna playground from [https://platform.bolna.ai/](https://platform.bolna.ai/).
## Bolna Playground Overview
### 1. Agent Setup
This is your home page! [Create, modify or test your agents](https://platform.bolna.ai/)
### 2. Agent Executions
[See all conversations](https://platform.bolna.ai/agent-executions) carried out using your agents
### 3. Batches
Upload, Schedule and Manage outbound [calling campaigns](https://platform.bolna.ai/batches)
### 4. Voice Lab
[Experience and choose](https://platform.bolna.ai/voices) between hundreds of voices from multiple text-to-speech providers
### 5. Developers
[Create and manage API keys](https://platform.bolna.ai/developers) for accessing and building using Bolna hosted APIs
### 6. Providers
[Connect your own providers](https://platform.bolna.ai/providers) like Twilio, Plivo, OpenAI, ElevenLabs, Deepgram, etc.
### 7. Available Credits
Number of credits remaining (1 credit \~ 1c). Credits are consumed per conversation with your agent
### 8. Add more Credits
Replenish your credits (Contact us for enterprise discounts at scale)
### 9. Read API Docs to carry out all actions (plus more) using APIs
Link to the API reference
### 10. Join our Discord Community and give us a star on github!
Talk to us on Discord
Show your love on Github
# Agent executions
Source: https://docs.bolna.ai/playground/agent-executions
Access and analyze conversation logs, including call recordings, transcripts, and summaries, for your deployed Bolna Voice AI agents.
Access Call logs on playground from [https://platform.bolna.ai/agent-executions](https://platform.bolna.ai/agent-executions).
1. Choose your agent and batch (if required) of conversations you want to analyse
2. Learn how to fetch all details using simple APIs. You can use this to link your analytics with your database. Contact us if you need our help connecting these
3. Columns of executions table
* `execution_id` is a unique ID given to each conversation
* Conversation type is either `Websocket chat` or `telephony`
* `Duration` is the duration of the conversation from start to end
* `Cost in credits` is the total spent credits for that conversation
4. Clicking on conversation details opens a tab where you can see the following for each conversation
* `Recording` of the call
* `Transcript` of the call
* `Summary` / `Extraction` of the call (if set up in Tasks)
# Creating your Bolna Voice AI agent
Source: https://docs.bolna.ai/playground/agent-setup
Step-by-step guide to creating, importing, and managing your Bolna Voice AI agents within the Playground.
Access Bolna playground from [https://platform.bolna.ai/](https://platform.bolna.ai/).
1. **Create an agent** - Don’t forget to click on ‘Create Agent’ on the right to complete creation
2. **Import agent** - Put in an agent link and import a pre-built agent. For example, `https://bolna.ai/a/e3602854-ed7b-49da-a329-99f53710a0d7`
3. List of all agents that you have created.
4. **Share agent** - Get a link that you can share with the world (They can import your agent)
5. Get agent link that can be pasted in your [Twilio account to set up an inbound agent](/receiving-incoming-calls)
6. Start outbound calls by entering numbers (including country code) in the `recipient` textbox.
7. Schedule your calls or batches from the [Batches](https://platform.bolna.ai/batches) tab
8. See all Executions in the [Agent Executions](https://platform.bolna.ai/agent-executions) tab
9. `Save Agent` - Your changes will only be reflected in conversations after you click on save agent
10. Test your agent’s intelligence and responses by chatting with it on this screen using our chat option
# Agents Tab
Source: https://docs.bolna.ai/playground/agent-tab
Central hub for creating, modifying, and testing your Bolna Voice AI agents, including prompt customization and variable management.
Access Bolna playground from [https://platform.bolna.ai/](https://platform.bolna.ai/).
1. **Text-to-speech Voice** - Shortcut to select voice (Can also be done from the Voices tab)
2. **LLM** - Shortcut to select LLM (can also be done from the LLM tab)
3. Scroll between all tabs
4. **Agent Welcome Message** - This is the first message that the agent will speak as soon as the call is picked up. This message can also be interrupted by the user. (Hot tip : Unless you have a clear announcement / disclaimer to start with, keep this message short - `Hello!`)
5. **Agent Prompt** - This is the text box in which you will write the entire prompt that your agent will follow.
Make sure your prompt is clear and to the point (Hot tip : if you have a transcript, or a rough prompt in mind, access our Custom GPT, add your transcript / thoughts in there and you will get a refined prompt that you can use - [https://chatgpt.com/g/g-7hDrhJaDl-bolna-bot-builder](https://chatgpt.com/g/g-7hDrhJaDl-bolna-bot-builder))
6. **Variables** - Whenever you write a `{variable}`, it becomes a custom variable that you can assign. Whatever you write in the variable text box is what the agent considers when conversing. For example, in the prompt you can write `You are speaking to {name}` and in the text box write `Rahul` to tell the agent who they are speaking with
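The variable substitution can be pictured as plain string templating. This is only an illustrative sketch, not Bolna's actual implementation:

```python
def render_prompt(prompt_template, variables):
    """Fill each {variable} placeholder with the value supplied for it."""
    return prompt_template.format(**variables)

# The agent would then converse using the rendered prompt.
rendered = render_prompt("You are speaking to {name}", {"name": "Rahul"})
```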
# Call Tab
Source: https://docs.bolna.ai/playground/call-tab
Manage telephony settings, including providers, call hangup prompts, and termination time limits to be used for phone calls with Bolna Voice AI agents.
Access Bolna playground from [https://platform.bolna.ai/](https://platform.bolna.ai/).
1. Our telephony provider partnerships are with [Twilio](/twilio) and [Plivo](/plivo). They support both inbound and outbound calling
2. **Call hangup** - Use a prompt or silence timer to instruct the agent when to end the call. Make sure your prompt is very clear and to the point to avoid chances of the agent ending the call at the wrong time
3. **Call termination** - Choose a max time limit for each call, beyond which the call will automatically get cut
# Functions Tab
Source: https://docs.bolna.ai/playground/functions-tab
Integrate function calling capabilities, such as appointment scheduling and call transfers, into your Bolna Voice AI agents.
Access Bolna playground from [https://platform.bolna.ai/](https://platform.bolna.ai/).
1. Choose [desired functions](/function-calling), customise and add
2. Connect using [cal.com](https://cal.com) API (Calendly / Google Calendar coming soon) to check availability of slots for selected event type
3. Transfer the call to one (or multiple) human phone numbers when decided conditions are met. Make sure your prompts are clear to avoid the agent transferring calls when not necessary
4. Book appointments in free slots using [cal.com](https://cal.com) API (Calendly / Google Calendar coming soon)
# LLM Tab
Source: https://docs.bolna.ai/playground/llm-tab
Configure Large Language Model (LLM) settings for your agents, including provider selection, token limits, and response creativity.
Access Bolna playground from [https://platform.bolna.ai/](https://platform.bolna.ai/).
1. **Choose your LLM Provider** - (`OpenAI`, `DeepInfra`, `Groq`) and respective model (`gpt-4o`, `Meta Llama 3 70B instruct`, `Gemma - 7b`, etc.)
2. **Tokens** - Increasing this number enables longer responses to be queued before sending to the synthesiser but slightly increases latency
3. **Temperature** - Increasing temperature enables heightened creativity, but increases the chance of deviation from the prompt. Keep temperature low if you want more control over how your AI will converse
4. **Filler words** - reduce perceived latency by smartly responding `<300ms` after the user stops speaking, but recipients can feel that the AI agent is not letting them complete their sentence
# Tasks Tab
Source: https://docs.bolna.ai/playground/tasks-tab
Add follow-up tasks like conversation summaries, information extraction, and custom webhooks for post-call actions to be performed by Bolna Voice AI agents.
Access Bolna playground from [https://platform.bolna.ai/](https://platform.bolna.ai/).
1. Generate a generic summary of the conversation
2. Extract structured information from the conversation.
Write your prompt in the following template:
```
variable_name (example: Payment_mode) : Clear actionable on what to yield and when (Yield the type of payment that the user agrees to pay with). The actionable could be open-ended or classified (If the user wants to pay by cash, yield cash. Else yield NA)
```
3. Create your own webhook to ingest or send out information post closure of conversation
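An illustrative extraction prompt following the template in step 2 (the variable names are hypothetical, echoing the `extracted_data` fields shown in the call-status payload example):

```text
user_interested (example: User_interested) : Yield whether the user showed interest (If the user agrees to proceed, yield true. Else yield false)
salary_expected (example: Salary_expected) : Yield the salary the user expects, exactly as they state it. If not discussed, yield NA
```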
# Transcriber Tab
Source: https://docs.bolna.ai/playground/transcriber-tab
Set up Speech-to-Text (STT) configurations, choose transcriber providers, and define language and endpointing settings to be set for Bolna Voice AI agents.
Access Bolna playground from [https://platform.bolna.ai/](https://platform.bolna.ai/).
1. Choose your **Transcriber Provider** and **model**
* Deepgram (Default transcriber, most tried and tested)
* Whisper (open-source, cheapest)
2. **Language** - By default the agent can only transcribe English. By choosing any other language, the agent will be able to transcribe sentences spoken in the chosen language + English
3. **Endpointing** - Number of milliseconds your agent will wait before generating a response. Lower endpointing reduces latency but could lead to the agent interrupting mid-sentence. If you want quick, short responses, keep a low (`100ms`) endpoint. If you are expecting users to speak longer sentences, keep a higher (`500ms`) endpoint.
4. **Linear Delay** - Linear delay accounts for long pauses mid-sentence. If the recipient is expected to speak long sentences, increase the value of linear delay
5. **Interruption settings** - The agent will not consider an interruption until the human speaks this number of words. Ideal to prevent the agent pausing when the human is actively listening by saying `Oh`, `yes`, etc. (If the user says a stopword, such as `stop`, `wait`, etc., the agent will automatically pause regardless of the settings)
6. **Backchanneling** - Switch on only if the user is expected to speak long sentences. The agent will show they are listening by giving soft verbal nudges of acknowledgement. You can change the time to wait before the agent gives the first filler, as well as the time between subsequent fillers
# Voice Tab
Source: https://docs.bolna.ai/playground/voice-tab
Customize Text-to-Speech (TTS) settings, select voice providers, and adjust buffer sizes for optimal Bolna Voice AI agent performance.
Access Bolna playground from [https://platform.bolna.ai/](https://platform.bolna.ai/).
1. Choose your **TTS Provider** and **Voice**
* `ElevenLabs` is the most realistic and costliest voice
* `Deepgram` and `Azure TTS` are the quickest and cheapest providers.
2. Play around with more voices from each provider in Voice Labs before finalising the voice you want. Pressing the **play button** will make your selected voice speak out the `Welcome Message` that you have set
3. **Increasing buffer size** enables agent to speak long responses fluently, but increases latency. Buffer sizes of \~250 are ideal for most conversations
4. **Ambient noise** removes the pin-drop silence between a conversation and makes it more realistic. However, be careful not to let the background noise be a distraction
5. The agent will **check if the user is still active in the call** after a fixed time that you can decide. You can customise the message the agent will use to ask
# Enhance Call Capabilities with Bolna's Plivo Integration
Source: https://docs.bolna.ai/plivo
Integrate Plivo with Bolna to manage outbound and inbound calls. Access setup guides for seamless Voice AI agent communication using your Plivo numbers.
Bolna agents make phone calls using Plivo numbers
Bolna agents receive phone calls on Plivo numbers and answer them
Use your own Plivo account with Bolna
# Link Your Plivo Account to Bolna for Voice AI
Source: https://docs.bolna.ai/plivo-connect-provider
Securely connect your Plivo account with Bolna. Enable your Voice AI agents to utilize Plivo phone numbers for managing inbound and outbound calls.
## Use your own Plivo account to make outbound calls
We connect your `Plivo` account securely using [Infisical](https://infisical.com/).
You can connect your own Plivo account and start using it on Bolna. All calls initiated from Bolna will be from your own Plivo account and use your own Plivo phone numbers.
1. Navigate to `Providers` tab from the left menu bar & Click **Plivo connect button**.
2. Fill in the required details.
3. Save details by clicking on the **connect button**.
4. You'll see that your Plivo account was successfully connected. All your calls will now go via your own Plivo account and phone numbers.
# Initiate Outbound Calls via Plivo with Bolna Voice AI
Source: https://docs.bolna.ai/plivo-outbound-calls
Configure Bolna Voice AI agents to make outbound calls through Plivo. Learn to set up calls using the dashboard and APIs for effective outreach.
## Making outbound calls from dashboard
1. Login to the dashboard at [https://platform.bolna.ai](https://platform.bolna.ai) using your account credentials
2. Choose `Plivo` as the Call provider for your agent and save it
3. Start placing phone calls by providing the recipient phone numbers.
Bolna will place the calls to the provided phone numbers.
You can place calls using your own custom Plivo phone numbers only if you've connected your Plivo account.
You can read more on how to connect your Plivo account [here](/providers).
## Making outbound calls Using APIs
1. Generate and save your [Bolna API Key](/api-reference/introduction#steps-to-generate-your-api-key)
2. Set your agent `input` and `output` tools as `plivo` while using [`/create` Agent API](/api-reference/agent/create)
```create-agent.json
...
...
"tools_config": {
"output": {
"format": "wav",
"provider": "plivo"
},
"input": {
"format": "wav",
"provider": "plivo"
},
"synthesizer": {...},
"llm_agent": {...},
"transcriber": {...},
"api_tools": {...}
}
...
...
```
3. Use [`/call` API](api-reference/calls/make) to place the call to the agent
```call.json
curl --request POST \
--url https://api.bolna.ai/call \
--header 'Authorization: ' \
--header 'Content-Type: application/json' \
--data '{
"agent_id": "123e4567-e89b-12d3-a456-426655440000",
"recipient_phone_number": "+10123456789"
}'
```
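The same call can be placed from code. Below is a minimal, stdlib-only Python sketch of the `/call` request shown above; the agent ID and phone number are the same placeholders, and the `Bearer` authorization scheme is assumed from the API key setup step:

```python
import json
import urllib.request

BOLNA_API_URL = "https://api.bolna.ai/call"

def build_call_request(agent_id: str, recipient: str, api_key: str) -> dict:
    """Assemble the URL, headers, and JSON body for the /call endpoint."""
    return {
        "url": BOLNA_API_URL,
        "headers": {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        "body": {"agent_id": agent_id, "recipient_phone_number": recipient},
    }

def place_call(agent_id: str, recipient: str, api_key: str) -> dict:
    """Send the request and return the parsed JSON response."""
    req = build_call_request(agent_id, recipient, api_key)
    http_req = urllib.request.Request(
        req["url"],
        data=json.dumps(req["body"]).encode("utf-8"),
        headers=req["headers"],
        method="POST",
    )
    with urllib.request.urlopen(http_req, timeout=30) as resp:
        return json.load(resp)
```

Keeping the request assembly separate from the network call makes it easy to inspect or log exactly what is sent before placing real calls.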
# Receive Bolna Voice AI call updates
Source: https://docs.bolna.ai/polling-call-status-webhooks
Learn how to receive real-time call data updates from Bolna Voice AI using webhooks. Monitor and handle call scenarios effectively.
Learn about the various types of call statuses associated with Bolna Voice AI conversations.
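As a sketch of the receiving side, here is a minimal stdlib webhook server that accepts status updates and routes them to an action. The payload fields (`status`, `id`) and status values used here are illustrative assumptions, not the exact schema Bolna sends; consult the webhook reference for the real field names:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def classify_update(payload: dict) -> str:
    """Map an incoming call-status payload to an action for your system.

    The status strings below are hypothetical examples, not Bolna's
    documented values.
    """
    status = payload.get("status", "unknown")
    if status in ("completed", "call-disconnected"):
        return "persist_transcript"
    if status in ("busy", "no-answer"):
        return "schedule_retry"
    return "log_only"

class WebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        action = classify_update(payload)
        print(f"call {payload.get('id')} -> {action}")
        self.send_response(200)  # acknowledge so the sender stops retrying
        self.end_headers()

# To run locally:
# HTTPServer(("0.0.0.0", 8080), WebhookHandler).serve_forever()
```

Returning `200` promptly and doing heavy processing asynchronously is the usual pattern, since webhook senders typically retry on slow or failed deliveries.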
# Bolna Voice AI usage pricing
Source: https://docs.bolna.ai/pricing/call-pricing
Discover detailed insights into Bolna Voice AI's pricing structure. Learn about cost breakdowns and flexible plans tailored to your business needs.
Bolna charges a flat **2 cents** platform fee per minute of calls.
## Call pricing
Bolna AI costs consist of:
1. Voice AI charges (STT + LLM + TTS)
2. Telephony charges (billed per minute)
3. Bolna Platform fees (billed per minute)
### Voice AI charges
Your choice of Speech to Text (STT) model and provider.
Depends on the **duration of calls** (rounded to seconds).
Your choice of Large Language Model (LLM) and provider.
Depends on the **total LLM tokens generated**.
Your choice of Text to Speech (TTS) model and provider.
Depends on the **total characters synthesized**.
### Telephony charges
Your telephony provider and the country/region of the phone numbers.
Depends on the **duration of calls** (rounded to minutes).
### Bolna Platform charges
Flat **\$0.02/minute**
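Putting the three components together, a per-call cost estimate can be sketched as below. Only the $0.02/minute platform fee comes from this page; the STT, LLM, TTS, and telephony rates are parameters you supply from your chosen providers' pricing:

```python
import math

PLATFORM_FEE_PER_MIN = 0.02  # flat Bolna platform fee

def estimate_call_cost(seconds, llm_tokens, tts_characters,
                       stt_per_min, llm_per_1k_tokens, tts_per_1k_chars,
                       telephony_per_min):
    """Estimate total call cost in dollars.

    STT is billed on duration rounded to seconds, telephony and the
    platform fee on duration rounded up to whole minutes, the LLM on
    total tokens generated, and TTS on characters synthesized.
    """
    billed_minutes = math.ceil(seconds / 60)
    stt = stt_per_min * (seconds / 60)
    llm = llm_per_1k_tokens * (llm_tokens / 1000)
    tts = tts_per_1k_chars * (tts_characters / 1000)
    telephony = telephony_per_min * billed_minutes
    platform = PLATFORM_FEE_PER_MIN * billed_minutes
    return round(stt + llm + tts + telephony + platform, 4)
```

For example, a 150-second call bills telephony and the platform fee for 3 full minutes, while STT bills for exactly 150 seconds.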
***
## Connect providers to decrease costs
You can securely connect your own providers to transfer and offset your costs. We don't charge for any usage on providers that you have connected to Bolna.
Head over to [Providers page](https://platform.bolna.ai/providers) to connect your accounts.
[Navigate](/providers) here to learn more about Supported Providers and connecting your accounts.
## Examples
The dashboard displays an approximate price for your agent depending on your chosen providers and models.
Here are a few examples of call costs in different scenarios, showing how connecting your own accounts can lower them.
An example where there are no connected accounts.
You can make the calls go as low as **\$0.02/minute** by connecting your own accounts.
## View call costs from the executions history
After each conversation, go to the [Agent Executions page](https://platform.bolna.ai/agent-executions) to see how many credits the conversation consumed.
You can connect your own [Providers](/providers) like [Telephony](/supported-telephony-providers), [Transcriber](/providers/transcriber/deepgram), [LLMs](/providers/llm-model/openai), and [Text-to-Speech](/providers/voice/aws-polly) to lower the costs. In that case, Bolna will not charge for that component.
Please reach out to us at [founders@bolna.ai](mailto:founders@bolna.ai) for customized volume-based pricing.
# Supported Providers for Bolna Voice AI
Source: https://docs.bolna.ai/providers
Explore the list of providers supported by Bolna Voice AI, including integrations for telephony, transcription, and text-to-speech services to lower your costs
We don't charge for any usage on providers that you have connected to Bolna.
We connect all your `Provider` accounts securely using [Infisical](https://infisical.com/).
### Steps to add your own Provider credentials:
Login to the dashboard at [https://platform.bolna.ai](https://platform.bolna.ai)
Navigate to `Developers` tab from the left menu bar
Head over to the `Provider Keys` tab
Click the `Add Provider Key` button to add your Provider key-value pair
Save your Provider
We currently support the following providers, which you can connect to Bolna.
All of the listed keys **must** be added for the respective provider.
| Property | Description |
| --------------------- | ------------------- |
| `TWILIO_ACCOUNT_SID` | Twilio account SID |
| `TWILIO_AUTH_TOKEN` | Twilio token |
| `TWILIO_PHONE_NUMBER` | Twilio phone number |
To create a free Twilio account, check out their guide [How to Work with your Free Twilio Trial Account](https://www.twilio.com/docs/messaging/guides/how-to-use-your-free-trial-account)
| Property | Description |
| -------------------- | ------------------ |
| `PLIVO_AUTH_ID` | Plivo auth ID |
| `PLIVO_AUTH_TOKEN` | Plivo auth token |
| `PLIVO_PHONE_NUMBER` | Plivo phone number |
| Property | Description |
| -------- | --------------- |
| `OPENAI` | Your OpenAI key |
| Property | Description |
| ------------ | ------------------- |
| `PERPLEXITY` | Your Perplexity key |
For a custom LLM, simply set the `provider` field in the `llm_agent` key to `custom` and add an OpenAI-compatible `base_url`.
#### Example `llm_agent` key for a custom LLM
```json
"llm_agent": {
"max_tokens": 100.0,
"presence_penalty": 0.0,
"base_url": "https://custom.llm.model/v1",
"extraction_details": null,
"top_p": 0.9,
"agent_flow_type": "streaming",
"request_json": false,
"routes": null,
"min_p": 0.1,
"frequency_penalty": 0.0,
"stop": null,
"provider": "custom",
"top_k": 0.0,
"temperature": 0.2,
"model": "custom-llm-model",
"family": "llama"
}
```
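Because the `custom` provider only needs an OpenAI-compatible endpoint, you can verify your server independently of Bolna before wiring it in. The sketch below builds a standard chat-completions request against the placeholder `base_url` and model name from the example above; `/chat/completions` is the standard OpenAI-compatible path, not a Bolna-specific one:

```python
import json

def build_chat_request(base_url, model, messages, api_key=None):
    """Build an OpenAI-compatible chat-completions request.

    Any server that accepts POST {base_url}/chat/completions with this
    schema can act as a `custom` provider in the `llm_agent` config.
    """
    headers = {"Content-Type": "application/json"}
    if api_key:
        headers["Authorization"] = f"Bearer {api_key}"
    body = {
        "model": model,
        "messages": messages,
        "max_tokens": 100,
        "temperature": 0.2,
        "stream": True,  # streaming matches agent_flow_type "streaming"
    }
    return {
        "url": f"{base_url}/chat/completions",
        "headers": headers,
        "body": json.dumps(body),
    }
```

If your server answers this request with the standard `choices[0].message`/`choices[0].delta` response shape, it should be compatible with the `custom` provider.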
| Property | Description |
| ------------ | ----------------------- |
| `ELEVENLABS` | Your Elevenlabs API key |
| Property | Description |
| ---------- | --------------------- |
| `CARTESIA` | Your Cartesia API key |
| Property | Description |
| -------- | ------------------- |
| `SARVAM` | Your Sarvam API key |
| Property | Description |
| ---------- | --------------------- |
| `SMALLEST` | Your Smallest API key |
| Property | Description |
| ---------- | ----------------- |
| `DEEPGRAM` | Your Deepgram key |
# Use Azure OpenAI with Bolna Voice AI
Source: https://docs.bolna.ai/providers/llm-model/azure-openai
Use Azure OpenAI dedicated clusters for GPT-4.1, GPT-4o, GPT-4, and GPT-3.5-turbo models to deploy powerful conversational Voice AI applications.
## Azure OpenAI API Integration for Voice AI Applications
[Azure OpenAI Service](https://azure.microsoft.com/en-us/products/ai-services/openai-service) provides enterprise-grade access to OpenAI's powerful Large Language Models (LLMs) through Microsoft's secure, compliant, and scalable cloud infrastructure. This comprehensive guide covers Azure OpenAI API integration with Bolna, including authentication, model selection, and implementation best practices for enterprise conversational AI applications.
## Why Choose Azure OpenAI Models for Voice AI Agents?
Azure OpenAI offers the same cutting-edge OpenAI models with additional enterprise benefits for voice AI applications:
### 1. Enterprise-Grade Security and Compliance
* **Data residency control**: Keep your voice AI data within specific geographic regions
* **Private networking**: VNet integration and private endpoints for secure connections
* **Compliance certifications**: SOC 2, ISO 27001, HIPAA, and other enterprise standards
* **Customer-managed keys**: Full control over encryption keys for sensitive voice data
### 2. Advanced Natural Language Understanding (NLU)
* **Same OpenAI models**: Access to GPT-4o, GPT-4, and GPT-3.5-turbo with identical capabilities
* **Multi-turn conversation handling**: Maintains context across extended voice interactions
* **Intent recognition**: Accurately identifies user intentions from spoken language
* **Multilingual support**: Processes voice inputs in 50+ languages
* **Semantic understanding**: Comprehends nuanced meaning and context in conversations
### 3. Enterprise Infrastructure and Reliability
* **99.9% uptime SLA**: Ensures consistent availability for production voice AI systems
* **Global scale**: Leverage Microsoft's worldwide data center network
* **Integrated monitoring**: Azure Monitor and Application Insights for comprehensive observability
* **Cost management**: Built-in Azure cost controls and budgeting tools
### 4. Advanced AI Capabilities with Azure Integration
* **Function calling**: Integrates with Azure services and external APIs seamlessly
* **Azure AI services integration**: Combine with Azure Speech, Translator, and other AI services
* **Structured output**: Returns JSON responses for seamless integration
* **Custom fine-tuning**: Train models on your specific voice AI use cases
* **Content filtering**: Built-in responsible AI content filtering and safety measures
### Model Selection Guide
Choose the optimal Azure OpenAI model based on your voice AI requirements:
#### GPT-4o (Recommended for Production)
* **Best for**: High-quality conversational AI with complex reasoning
* **Use cases**: Customer service, sales calls, technical support
* **Performance**: Fastest response times with superior accuracy
* **Azure benefits**: Enhanced security, compliance, and monitoring
#### GPT-4o-mini (Cost-Effective Option)
* **Best for**: High-volume applications requiring cost optimization
* **Use cases**: Lead qualification, appointment scheduling, basic inquiries
* **Performance**: Balanced speed and quality
* **Cost**: 60% lower cost than GPT-4o with Azure pricing tiers
#### GPT-4 (Maximum Reasoning)
* **Best for**: Applications requiring maximum reasoning capability
* **Use cases**: Complex problem-solving, detailed analysis
* **Performance**: Highest quality with comprehensive reasoning
* **Azure benefits**: Enterprise-grade deployment and management
#### GPT-3.5-turbo (Budget Option)
* **Best for**: Simple conversational tasks and prototyping
* **Use cases**: Basic chatbots, simple Q\&A systems
* **Performance**: Fast responses with good quality
* **Cost**: Most economical option with Azure cost controls
## Implementation Best Practices
### Optimizing for Voice AI Performance
1. **Prompt Engineering for Voice**
* Design prompts specifically for spoken interactions
* Include context about voice communication style
* Optimize for concise, natural-sounding responses
2. **Azure-Specific Optimizations**
* Implement Azure AD authentication for enhanced security
* Use Azure Key Vault for secure credential management
* Configure Azure Monitor for performance tracking
3. **Error Handling and Resilience**
* Implement fallback responses for API failures
* Handle rate limiting gracefully with Azure quotas
* Use Azure Service Bus for reliable message queuing
4. **Performance Monitoring**
* Track response times and quality metrics with Azure Monitor
* Monitor API usage and costs through Azure Cost Management
* Implement comprehensive logging with Azure Application Insights
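The error-handling points above can be sketched as a small retry wrapper with a canned fallback reply. This is a generic resilience pattern, not a Bolna or Azure API; `generate_reply` stands in for whatever function calls your LLM:

```python
import time

FALLBACK_REPLY = "I'm sorry, I'm having trouble right now. Could you say that again?"

def with_fallback(generate_reply, user_text, retries=2, backoff_s=0.5):
    """Call an LLM reply function, retrying with exponential backoff on
    failure before falling back to a safe canned voice response."""
    for attempt in range(retries + 1):
        try:
            return generate_reply(user_text)
        except Exception:
            if attempt < retries:
                time.sleep(backoff_s * (2 ** attempt))  # 0.5s, 1s, 2s, ...
    return FALLBACK_REPLY
```

In a voice call, a short spoken fallback keeps the conversation alive during a transient outage, which matters more than in text chat where a user can simply wait.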
## Supported Azure OpenAI Models on Bolna AI
| Model | Context Window | Best Use Case | Azure Benefits |
| ----------------- | -------------- | ------------------------------------------ | ---------------------------------- |
| **gpt-4o** | 128K tokens | Production voice AI, complex conversations | Enterprise security, compliance |
| **gpt-4o-mini** | 128K tokens | Cost-effective voice applications | Azure cost controls, monitoring |
| **gpt-4** | 8K tokens | Maximum reasoning capability | Private deployment, data residency |
| **gpt-3.5-turbo** | 4K tokens | Simple conversations, prototyping | Budget-friendly with Azure pricing |
## Next Steps
Ready to integrate Azure OpenAI with your voice AI agent? [Contact our team](mailto:support@bolna.ai) for personalized setup assistance or explore our [API documentation](/api-reference) for advanced configuration options. Take advantage of Azure's enterprise-grade infrastructure to build secure, scalable, and compliant voice AI solutions.
# Use DeepSeek with Bolna Voice AI
Source: https://docs.bolna.ai/providers/llm-model/deepseek
Integrate DeepSeek with Bolna AI Voice Agents. Build cost-effective voice AI agents for conversational Voice AI applications with DeepSeek-Chat models.
## DeepSeek API Integration for Voice AI Applications
[DeepSeek](https://www.deepseek.com/) provides advanced Large Language Models (LLMs) with competitive pricing and powerful reasoning capabilities for building intelligent voice AI agents. This comprehensive guide covers DeepSeek API integration with Bolna, including authentication, model selection, and implementation best practices for cost-effective conversational AI applications.
## Why Choose DeepSeek Models for Voice AI Agents?
DeepSeek offers compelling advantages for voice AI applications through innovative models and competitive pricing:
### 1. Advanced Reasoning Capabilities
* **DeepSeek-Reasoner (R1)**: State-of-the-art reasoning model with enhanced problem-solving abilities
* **Multi-step reasoning**: Handles complex logical chains in voice conversations
* **Context understanding**: Maintains sophisticated reasoning across extended interactions
* **Analytical thinking**: Provides detailed explanations and step-by-step problem solving
### 2. Cost-Effective Performance
* **Competitive pricing**: Significantly lower costs compared to premium alternatives
* **Cache optimization**: Reduced costs with cache hit/miss pricing tiers
* **Discount pricing**: Special pricing during off-peak hours (UTC 16:30-00:30)
* **Flexible pricing tiers**: Standard and discount pricing options for different use cases
### 3. OpenAI-Compatible API
* **Seamless integration**: Drop-in replacement for OpenAI API calls
* **Familiar interface**: Same API format and parameters as OpenAI
* **Easy migration**: Simple transition from other OpenAI-compatible providers
* **Standard features**: JSON output, function calling, and streaming responses
### 4. Advanced AI Features
* **Function calling**: Integrates with external APIs and databases
* **JSON output**: Structured responses for seamless integration
* **Chat prefix completion**: Enhanced conversation flow capabilities
* **FIM completion**: Fill-in-the-middle completion for code and text
* **Streaming responses**: Real-time response generation for natural conversations
## Implementation Best Practices
### Optimizing for Voice AI Performance
1. **Model Selection Strategy**
* Use DeepSeek-Chat for general conversational tasks
* Deploy DeepSeek-Reasoner for complex problem-solving scenarios
* Consider hybrid approaches for different conversation types
2. **Cost Optimization**
* Leverage cache hit pricing for repeated queries
* Schedule high-volume operations during discount hours
* Implement intelligent caching strategies
* Monitor token usage and optimize prompt length
3. **Performance Tuning**
* Configure appropriate temperature settings for voice interactions
* Implement streaming for real-time conversation flow
* Use function calling for external service integration
* Optimize context window usage for conversation memory
4. **Error Handling**
* Implement fallback responses for API failures
* Handle rate limiting gracefully
* Provide clear error messages for users
* Monitor API status and performance metrics
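One concrete way to act on the discount pricing mentioned above is to check whether the current time falls inside DeepSeek's off-peak window (UTC 16:30-00:30, which wraps past midnight) before scheduling high-volume batch work. The boundary handling (inclusive start, exclusive end) is an assumption; verify against DeepSeek's current pricing page:

```python
from datetime import datetime, time, timezone

DISCOUNT_START = time(16, 30)  # UTC window from DeepSeek's published pricing
DISCOUNT_END = time(0, 30)

def in_discount_window(now_utc=None):
    """True if the given (or current) UTC time falls in DeepSeek's
    off-peak discount window, which wraps past midnight."""
    t = (now_utc or datetime.now(timezone.utc)).time()
    return t >= DISCOUNT_START or t < DISCOUNT_END
```

A batch job could poll this before submitting non-urgent workloads, pushing them to off-peak hours automatically.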
## Supported DeepSeek Models on Bolna AI
| Model | Context Window | Best Use Case | Pricing Advantage |
| ----------------- | -------------- | --------------------------------------- | -------------------------------------- |
| **deepseek-chat** | 64K tokens | General conversations, customer service | Highly cost-effective for volume usage |
## Next Steps
Ready to integrate DeepSeek with your voice AI agent? [Contact our team](mailto:support@bolna.ai) for personalized setup assistance or explore our [API documentation](/api-reference) for advanced configuration options. Take advantage of DeepSeek's cost-effective pricing and advanced reasoning capabilities to build powerful, affordable voice AI solutions.
# Use OpenAI with Bolna Voice AI
Source: https://docs.bolna.ai/providers/llm-model/openai
Build powerful voice AI agents using GPT-4.1, GPT-4o, and GPT-3.5-turbo. Create enterprise-grade conversational AI and LLM-powered voice assistants.
## OpenAI API Integration for Voice AI Applications
[OpenAI's](https://openai.com/) Large Language Models (LLMs) provide state-of-the-art natural language processing capabilities for building intelligent voice AI agents. This comprehensive guide covers OpenAI API integration with Bolna, including authentication, model selection, and implementation best practices for conversational AI applications.
## Why Choose OpenAI Models for Voice AI Agents?
OpenAI's GPT models offer superior performance for voice AI applications through:
### 1. Advanced Natural Language Understanding (NLU)
* **Multi-turn conversation handling**: Maintains context across extended voice interactions
* **Intent recognition**: Accurately identifies user intentions from spoken language
* **Multilingual support**: Processes voice inputs in 50+ languages
* **Semantic understanding**: Comprehends nuanced meaning and context in conversations
### 2. Real-time Response Generation
* **Low latency processing**: Optimized for real-time voice applications
* **Streaming responses**: Enables natural conversation flow
* **Context-aware replies**: Generates relevant responses based on conversation history
* **Adaptive tone matching**: Adjusts communication style to match user preferences
### 3. Enterprise-Grade Reliability
* **99.9% uptime SLA**: Ensures consistent availability for production voice AI systems
* **Scalable infrastructure**: Handles high-volume concurrent voice interactions
* **Security compliance**: SOC 2 Type II certified with enterprise security standards
* **Rate limiting management**: Built-in controls for cost optimization
### 4. Advanced AI Capabilities
* **Function calling**: Integrates with external APIs and databases
* **Code interpretation**: Processes and generates code snippets during conversations
* **Structured output**: Returns JSON responses for seamless integration
* **Custom instructions**: Tailors behavior for specific use cases and industries
### Model Selection Guide
Choose the optimal OpenAI model based on your voice AI requirements:
#### GPT-4.1 (Latest Enhanced Model)
* **Best for**: Applications requiring enhanced reasoning with improved accuracy
* **Use cases**: Complex analysis, advanced problem-solving, detailed conversations
* **Performance**: Superior reasoning capabilities with optimized response times
* **Cost**: Premium pricing for advanced AI capabilities
#### GPT-4o (Recommended for Production)
* **Best for**: High-quality conversational AI with complex reasoning
* **Use cases**: Customer service, sales calls, technical support
* **Performance**: Fastest response times with superior accuracy
* **Cost**: Premium pricing for enterprise applications
#### GPT-4o-mini (Cost-Effective Option)
* **Best for**: High-volume applications requiring cost optimization
* **Use cases**: Lead qualification, appointment scheduling, basic inquiries
* **Performance**: Balanced speed and quality
* **Cost**: 60% lower cost than GPT-4o
#### GPT-4 (Legacy Model)
* **Best for**: Applications requiring maximum reasoning capability
* **Use cases**: Complex problem-solving, detailed analysis
* **Performance**: Highest quality with slower response times
* **Cost**: Higher latency may impact voice experience
#### GPT-3.5-turbo (Budget Option)
* **Best for**: Simple conversational tasks and prototyping
* **Use cases**: Basic chatbots, simple Q\&A systems
* **Performance**: Fast responses with good quality
* **Cost**: Most economical option
## Implementation Best Practices
### Optimizing for Voice AI Performance
1. **Prompt Engineering for Voice**
* Design prompts specifically for spoken interactions
* Include context about voice communication style
* Optimize for concise, natural-sounding responses
2. **Context Management**
* Implement conversation memory for multi-turn interactions
* Maintain user preferences across sessions
* Handle interruptions and conversation flow naturally
3. **Error Handling**
* Implement fallback responses for API failures
* Handle rate limiting gracefully
* Provide clear error messages for users
4. **Performance Monitoring**
* Track response times and quality metrics
* Monitor API usage and costs
* Implement logging for debugging and optimization
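The context-management point above can be sketched as a history trimmer that keeps the system prompt plus the most recent turns. A production version would count tokens against the model's context window; this sketch trims by message count for clarity:

```python
def trim_history(messages, max_turns=10):
    """Keep the system prompt plus the most recent conversation turns.

    `messages` follows the usual chat format: a list of dicts with
    "role" ("system" / "user" / "assistant") and "content" keys.
    """
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    return system + rest[-max_turns:]
```

Pinning the system prompt while sliding the turn window preserves the agent's persona and instructions even in long calls, at the cost of forgetting the oldest exchanges.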
## Supported OpenAI Models on Bolna AI
| Model | Context Window | Best Use Case | Relative Cost |
| ----------------- | -------------- | ------------------------------------------ | ------------- |
| **gpt-4.1** | 32K tokens | Enhanced reasoning with improved accuracy | Medium |
| **gpt-4o** | 128K tokens | Production voice AI, complex conversations | High |
| **gpt-4o-mini** | 128K tokens | Cost-effective voice applications | Medium |
| **gpt-4** | 8K tokens | Maximum reasoning capability | High |
| **gpt-3.5-turbo** | 4K tokens | Simple conversations, prototyping | Low |
## Next Steps
Ready to integrate OpenAI with your voice AI agent? [Contact our team](mailto:support@bolna.ai) for personalized setup assistance or explore our [API documentation](/api-reference) for advanced configuration and integration options.
# Azure Transcriber (Speech to Text)
Source: https://docs.bolna.ai/providers/transcriber/azure
Learn how to integrate Azure Speech-to-Text with Bolna Voice AI agents to enable real-time, accurate, and multilingual transcriptions, improve conversational quality, and support enterprise-grade scalability.
## 1. What is Azure Speech-to-Text?
Azure Speech-to-Text, part of Microsoft Azure Cognitive Services, offers cloud-based automatic speech recognition (ASR). It converts spoken language into text using advanced deep learning models—enabling real-time transcription, batch processing, and support for custom model training. It’s designed to handle enterprise-grade workloads with high accuracy and multi-language capabilities.
## 2. Key Features of Azure STT
Azure offers a variety of features that make it a leading STT solution:
* **Real-Time Streaming & Batch Transcription**: Supports both low-latency streaming for live interactions and batch processing for recorded files.
* **Speaker Diarization & Language Identification**: Detects speaker turns and identifies languages in multi-party, multilingual scenarios.
* **Noise Reduction**: Advanced noise suppression techniques improve transcription accuracy in challenging audio conditions.
* **Secure & Scalable**: Fully managed service with options for resource control, webhook callbacks, and deployment across regions.
## 3. How Bolna Uses Azure for STT
Bolna AI integrates Azure’s STT technology to enable real-time, high-accuracy speech transcription for its AI-powered voice agents. Here’s how Bolna leverages Azure:
* **Live Conversation Transcription**:
Bolna uses Azure's real-time streaming to convert user speech into text with minimal delay, enabling dynamic agent interaction.
* **Multi-Language, Multi-Speaker Context**:
With speaker diarization and language detection, Bolna agents accurately follow multilingual or multi-party calls.
* **Speaker Identification and Context Retention**:
Bolna uses Azure’s speaker diarization capabilities to differentiate between the agent and the caller in conversations. This feature helps in maintaining context and structuring responses effectively.
* **Recording & Post-Call Analysis**:
Bolna supports batch transcription of stored calls via REST, using callbacks/webhooks to asynchronously retrieve results for insights, compliance, and analytics.
## Conclusion
Integrating Azure Speech-to-Text with Bolna empowers voice AI agents to deliver seamless, real-time, and highly accurate transcriptions across diverse languages and speaker scenarios. Its enterprise-grade scalability, security, and support for custom models make it ideal for dynamic, high-volume interactions. By leveraging Azure’s advanced capabilities, Bolna ensures more natural, human-like conversations and richer post-call insights. This combination strengthens customer experiences and unlocks deeper operational intelligence.
# Deepgram Transcriber (Speech to Text)
Source: https://docs.bolna.ai/providers/transcriber/deepgram
Integrate Deepgram with your Bolna Voice AI agents for fast, accurate streaming transcription. Supports both Nova-3 and Nova-2 speech models.
## 1. What is Deepgram STT?
[Deepgram](https://deepgram.com/) Speech-to-Text (STT) is an advanced automatic speech recognition (ASR) platform that leverages deep learning and artificial intelligence to transcribe spoken language into text with high accuracy.
Deepgram is designed for real-time and batch transcription, making it a powerful solution for applications requiring voice-driven automation, such as virtual assistants, customer support systems, and conversational AI agents.
## 2. Key Features of Deepgram STT
Deepgram offers a variety of features that make it a leading STT solution:
* **High Accuracy**: Deepgram uses deep neural networks trained on diverse datasets, achieving state-of-the-art transcription accuracy even in noisy environments.
* **Low Latency**: Designed for real-time processing, Deepgram provides near-instantaneous transcription, making it ideal for live applications like customer support and interactive voice agents.
* **Multi-Language Support**: It supports multiple languages and dialects, catering to a global audience.
* **Speaker Diarization**: Automatically detects and differentiates between multiple speakers in an audio stream.
* **Noise Reduction**: Advanced noise suppression techniques improve transcription accuracy in challenging audio conditions.
* **Keyword Boosting**: Allows prioritization of specific words or phrases to ensure better recognition of important terms.
* **Cost-Effective**: Compared to traditional ASR solutions, Deepgram offers competitive pricing with high performance and scalability.
## 3. How Bolna Uses Deepgram for STT
Bolna AI integrates Deepgram’s STT technology to enable real-time, high-accuracy speech transcription for its AI-powered voice agents. Here’s how Bolna leverages Deepgram:
* **Real-Time Speech Processing**:
Bolna uses Deepgram's streaming STT API to convert spoken language into text in real time. This allows the AI agent to understand and process user input without significant delays, ensuring a smooth and natural conversation flow.
* **Multilingual Voice Agent Support**:
Given Bolna’s multilingual capabilities, Deepgram's support for various languages ensures that voice interactions can be transcribed accurately, regardless of the language or accent used by the caller.
* **Noise-Resistant Transcription for High Accuracy**:
Bolna agents often handle calls in diverse environments where background noise can be an issue. By leveraging Deepgram’s noise reduction features, Bolna ensures that transcriptions remain accurate, even in challenging conditions.
* **Speaker Identification and Context Retention**:
Bolna uses Deepgram’s speaker diarization capabilities to differentiate between the agent and the caller in conversations. This feature helps in maintaining context and structuring responses effectively.
* **Custom Vocabulary and Industry-Specific Terms**:
Since Bolna AI is used in industries such as recruitment, customer support, and e-commerce, it benefits from Deepgram’s keyword boosting and custom model training to improve recognition of specific industry terms, technical jargon, and company names.
* **Call Recording and Post-Processing**:
In addition to real-time transcription, Bolna also uses Deepgram for batch transcription of recorded calls. These transcriptions are later analyzed for insights, compliance checks, and improving the AI model’s response accuracy.
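As an illustration of the streaming and keyword-boosting features described above, the sketch below assembles a Deepgram live-transcription URL. The `model`, `language`, and `keywords` query parameters follow Deepgram's streaming API conventions, but parameter names and boosting syntax vary by model generation, so check Deepgram's documentation for your model:

```python
from urllib.parse import urlencode

DEEPGRAM_LISTEN_URL = "wss://api.deepgram.com/v1/listen"

def streaming_url(model="nova-2-phonecall", language="en", keywords=()):
    """Build a Deepgram live-transcription WebSocket URL.

    Each keyword may carry a boost intensifier, e.g. "Bolna:2" raises
    the recognition weight of the term "Bolna".
    """
    params = [("model", model), ("language", language)]
    params += [("keywords", k) for k in keywords]
    return f"{DEEPGRAM_LISTEN_URL}?{urlencode(params)}"
```

A telephony pipeline would open a WebSocket to this URL and stream raw call audio, receiving interim and final transcripts back in real time.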
## 4. List of Deepgram models supported on Bolna AI
| Model |
| ----------------------- |
| nova-3 |
| nova-3-medical |
| nova-2 |
| nova-2-atc |
| nova-2-meeting |
| nova-2-phonecall |
| nova-2-finance |
| nova-2-conversationalai |
| nova-2-medical |
| nova-2-drivethru |
| nova-2-automotive |
## Conclusion
Deepgram’s STT capabilities empower Bolna AI to deliver highly accurate, real-time speech-to-text transcription, making voice interactions seamless and efficient. By integrating Deepgram’s advanced ASR technology, Bolna enhances its ability to process diverse accents, handle noisy environments, and understand complex conversations, thereby improving the overall performance and reliability of its voice AI solutions.
# AWS Polly (Text to Speech)
Source: https://docs.bolna.ai/providers/voice/aws-polly
Learn how to integrate and use AWS Polly TTS with Bolna Voice AI agents including Amazon's neural, generative and standard models.
## 1. What is AWS Polly TTS?
[AWS Polly](https://aws.amazon.com/polly/) is a cloud-based text-to-speech (TTS) service powered by Amazon Web Services (AWS). It uses deep learning technologies to convert text into natural-sounding speech, making it ideal for applications requiring high-quality voice synthesis.
AWS Polly supports a wide range of languages and voices, offering both **standard TTS** and **neural TTS (NTTS)**, which enhances the realism of speech output. Designed for real-time and batch processing, AWS Polly enables applications to deliver engaging voice experiences across various industries, including customer service, e-learning, and automated assistants.
## 2. Key Features of AWS Polly TTS
AWS Polly offers several advanced features that make it a powerful choice for AI-driven voice applications:
**Natural-Sounding Speech**: Utilizes neural TTS (NTTS) to enhance realism, reducing robotic-sounding speech.
**Multiple Languages and Voices**: Supports a wide range of languages and accents, allowing for global reach.
**Real-Time Speech Synthesis**: Generates speech quickly with low latency, making it suitable for interactive applications.
**Neural and Standard TTS Options**: Offers high-quality neural TTS as well as cost-effective standard TTS for scalable deployment.
## 3. How Bolna Uses AWS Polly for TTS
Bolna AI integrates AWS Polly’s TTS capabilities to deliver high-quality, real-time speech synthesis for its voice AI agents. Here’s how Bolna leverages AWS Polly:
**Generating Lifelike Speech for Voice AI Agents**:
Bolna AI uses AWS Polly to convert AI-generated text responses into human-like speech, ensuring a more natural interaction experience for users.
**Low-Latency Voice Synthesis for Real-Time Conversations**:
With AWS Polly’s low-latency capabilities, Bolna AI ensures real-time speech generation, allowing its voice agents to respond without noticeable delays.
**Multilingual and Accent Customization**:
AWS Polly’s extensive language and voice options allow Bolna AI to cater to a global audience by providing speech output in multiple languages and accents.
**Scalable and Cost-Effective Deployment**:
As a cloud-based service, AWS Polly allows Bolna AI to scale its voice synthesis needs based on demand while maintaining cost efficiency.
## 4. List of AWS Polly models supported on Bolna AI
| Model |
| ---------- |
| neural |
| generative |
| standard |
## Conclusion
AWS Polly’s TTS capabilities enhance Bolna AI’s ability to deliver **realistic, engaging, and highly responsive voice interactions**. By integrating AWS Polly, Bolna ensures high-quality speech output with multilingual support, real-time performance, and customizable pronunciation—making its voice AI agents more effective for industries such as customer service, recruitment, and automated assistants. This integration empowers businesses to provide a seamless and human-like conversational experience through Bolna AI.
# Azure (Text to Speech)
Source: https://docs.bolna.ai/providers/voice/azure
Integrate Microsoft Azure Text-to-Speech with Bolna to create natural, expressive Voice AI agents. Supports neural voices and multilingual output.
## 1. What is Azure TTS?
Azure Text-to-Speech (TTS) is a cloud-based speech synthesis service offered by Microsoft as part of its Azure Cognitive Services. It uses advanced deep learning models to generate realistic and natural-sounding speech from text. Designed for enterprise-grade applications, Azure TTS enables businesses to create interactive voice experiences, enhance accessibility, and automate customer interactions with high-fidelity voice output.
Azure TTS provides **neural voice synthesis**, offering near-human pronunciation, tone, and emotion control. This technology is widely used in virtual assistants, automated call centers, media narration, and real-time conversational AI applications.
## 2. Key Features of Azure TTS
Azure Text-to-Speech stands out with the following capabilities:
**Neural TTS for Human-Like Speech**: Uses deep neural networks to create speech that closely mimics human intonation and expressiveness.
**Extensive Language & Voice Support**: Supports over 140 languages and multiple voice options, making it a powerful tool for global reach.
**Real-Time & Batch Processing**: Enables both live interaction and bulk conversion of text to speech.
**AI-Driven Emotion Infusion**: Adjusts emotional expression in speech (e.g., happy, neutral, sad) to improve engagement.
**Latency-Optimized Speech Processing**: Ensures minimal lag, making it suitable for real-time conversational AI applications.
## 3. How Bolna Uses Azure for TTS
Bolna AI integrates Azure Text-to-Speech to deliver high-quality, human-like speech output for its AI-driven voice agents. Azure TTS enhances Bolna’s ability to conduct seamless, engaging, and contextually aware voice interactions. Here’s how Bolna leverages this technology:
**Lifelike Speech for Interactive AI Conversations**:
Azure’s Neural TTS allows Bolna AI to generate speech that mirrors human conversation patterns, improving user experience and making voice AI interactions more natural.
**Multi-Language and Multimodal Conversational AI**:
Since Bolna serves a global user base, Azure’s extensive language and accent library helps deliver culturally relevant and clear speech output tailored to different regions.
**Adaptive Speech Based on User Interaction**:
Azure TTS enables Bolna AI to modify speech output dynamically based on conversational context. For instance, the AI can adjust intonation when emphasizing key details in recruitment interviews or customer support interactions.
**Emotionally Intelligent Voice AI**:
By leveraging Azure’s emotion-infused speech synthesis, Bolna AI ensures that the voice agent sounds empathetic, enthusiastic, or neutral based on the conversation’s nature. This is especially useful in customer service and human resource automation.
**Enhanced Pronunciation for Industry-Specific Terms**:
Azure’s custom lexicons and SSML-based pronunciation adjustments help Bolna AI deliver precise pronunciation for technical terms, job roles, and company names, ensuring clarity in voice interactions.
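In practice, these pronunciation adjustments are expressed in SSML. The fragment below is an illustrative sketch only (the voice name and IPA transcription are examples, not values Bolna is known to use): a `<phoneme>` element pins down exactly how a niche term should be spoken.

```xml
<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
  <voice name="en-US-JennyNeural">
    We have shortlisted you for the
    <phoneme alphabet="ipa" ph="ˈdɛvɒps">DevOps</phoneme>
    engineer role.
  </voice>
</speak>
```

Without the `<phoneme>` hint, an unfamiliar term or brand name may be read with generic English letter-to-sound rules; the IPA override makes the pronunciation deterministic.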
**Real-Time Speech Output for Seamless Conversations**:
Azure’s low-latency synthesis ensures that Bolna AI voice agents can provide instant responses, making them highly effective in real-time support scenarios such as call handling, interview assistance, and virtual customer service.
## Conclusion
Azure TTS plays a crucial role in enhancing **Bolna AI’s voice-driven experiences**, offering superior **speech quality, multilingual support, real-time processing, and brand customization**. With its advanced neural synthesis, adaptive speech features, and seamless integration, Azure TTS empowers Bolna to create immersive and intelligent voice AI solutions across industries such as **customer support, recruitment, and business automation**. This integration ensures Bolna’s voice agents deliver a **human-like, emotionally aware, and efficient conversational experience** for users worldwide.
# Cartesia Synthesizer (Text to Speech)
Source: https://docs.bolna.ai/providers/voice/cartesia
Enable Cartesia voices in Bolna Voice AI agents for expressive, customizable AI voices using their latest voice models like sonic.
## 1. What is Cartesia TTS?
[Cartesia](https://cartesia.ai/) TTS is an advanced speech synthesis engine designed to generate **high-fidelity, natural-sounding speech** for AI-driven applications. Unlike traditional TTS systems, Cartesia employs **deep neural network models** that replicate human speech patterns, ensuring more expressive and realistic audio output.
Cartesia TTS is optimized for **real-time processing**, offering low-latency voice synthesis for applications like **AI voice assistants, virtual customer support, conversational AI, and automated business interactions**. With a focus on **scalability, multilingual capabilities, and high-quality prosody**, Cartesia TTS provides enterprises with an **efficient and adaptive speech generation solution**.
## 2. Key Features of Cartesia TTS
Cartesia TTS provides several innovative features that enhance voice-based AI applications:
**Neural Voice Synthesis**: Uses deep learning to produce smooth, expressive, and human-like speech output.
**Multilingual and Multi-Accent Support**: Provides a broad range of voices across multiple languages and regional accents.
**Custom Voice Creation**: Enables businesses to develop unique voice identities tailored to their brand’s personality.
**Low Latency and Real-Time Processing**: Optimized for instant voice responses, making it suitable for interactive AI applications.
**Adaptive Speech Intonation**: Dynamically adjusts speech tone and pitch based on contextual relevance.
**Cloud-Based and On-Premise Deployment**: Offers flexible deployment models for various enterprise requirements.
## 3. How Bolna Uses Cartesia for TTS
Bolna AI leverages Cartesia’s cutting-edge TTS technology to create engaging, interactive, and lifelike voice responses for its AI-powered virtual agents. Here’s how Bolna AI integrates Cartesia TTS:
**Lifelike Voice Output for AI Assistants**:
Bolna AI uses Cartesia’s neural voice synthesis to ensure that its AI-driven voice agents produce clear, natural, and emotionally appropriate speech during interactions. This enhances user engagement and fosters more intuitive communication between AI and humans.
**Real-Time Conversational AI with Low Latency**:
Cartesia’s low-latency processing ensures that Bolna AI voice agents deliver instantaneous responses during live interactions, eliminating unnatural delays and improving conversational flow.
**Multilingual and Regional Voice Adaptation**:
To serve a global customer base, Bolna AI utilizes Cartesia’s multilingual voice models to provide speech output in multiple languages and regional accents, ensuring clear communication for diverse audiences.
**Emotionally Expressive Speech for Enhanced Engagement**:
Bolna AI takes advantage of Cartesia’s emotion-infused TTS, enabling its AI agents to adjust their tone based on conversation context. For example:
* **Customer Support Agents**: Can sound empathetic or professional, depending on the nature of the query.
* **Recruitment AI Assistants**: Can use a neutral yet engaging tone to provide job-related information.
* **E-commerce AI Representatives**: Can adopt a persuasive tone to enhance user engagement and sales.
**Custom Voice Models for Brand Identity**:
For businesses looking to create a unique auditory identity, Bolna AI integrates Cartesia’s custom voice training models, ensuring that enterprises have a distinct and recognizable voice persona for their AI interactions.
## 4. List of Cartesia models supported on Bolna AI
| Model |
| ----- |
| sonic |
## Conclusion
By integrating **Cartesia TTS**, Bolna AI significantly enhances its **conversational AI capabilities**, ensuring **realistic, engaging, and context-aware voice output**. With its **real-time synthesis, multilingual adaptability, and emotional intelligence**, Cartesia TTS enables Bolna to deliver **seamless, human-like AI interactions** across industries such as **customer service, recruitment, and e-commerce**. This powerful TTS integration allows Bolna AI to offer **more natural, scalable, and brand-customizable voice AI solutions** to its users worldwide.
# Deepgram Synthesizer (Text to Speech)
Source: https://docs.bolna.ai/providers/voice/deepgram
Integrate and use your Bolna Voice AI agents with high-quality neural voices from Deepgram for natural, human-like conversational experiences.
## 1. What is Deepgram TTS?
[Deepgram](https://deepgram.com/) Text-to-Speech (TTS) is an AI-driven speech synthesis technology designed to generate highly realistic, human-like voices. Built using deep learning models, Deepgram TTS offers natural-sounding speech output with expressive intonations, making it suitable for applications that require high-quality voice interactions.
Deepgram TTS is optimized for real-time processing and supports multiple languages, accents, and emotions, allowing businesses to deliver personalized and engaging voice experiences. Compared to traditional TTS solutions, Deepgram leverages end-to-end neural speech synthesis, reducing latency and improving the naturalness of generated speech.
## 2. Key Features of Deepgram TTS
Deepgram TTS provides several advanced features that enhance voice AI applications:
**Human-Like Speech Output**: Produces clear, natural, and expressive speech that closely mimics human intonation and pacing.
**Real-Time Speech Generation**: Optimized for low-latency responses, ensuring a seamless conversational flow.
**Multilingual and Accent Support**: Provides high-quality speech synthesis in multiple languages, allowing for global reach.
**Noise Reduction & Clarity Enhancement**: Ensures crisp and intelligible speech output even in challenging audio environments.
## 3. How Bolna Uses Deepgram for TTS
Bolna AI integrates Deepgram’s TTS technology to power its voice AI agents, enabling them to deliver lifelike speech responses during conversations. Here’s how Bolna leverages Deepgram TTS:
**Generating High-Quality Speech for AI Conversations**:
Bolna AI utilizes Deepgram TTS to convert AI-generated text responses into natural-sounding speech. This enables voice agents to interact seamlessly with users, improving engagement and usability.
**Real-Time Voice Synthesis for Smooth Interactions**:
With Deepgram’s low-latency processing, Bolna AI ensures real-time speech synthesis, eliminating delays and making voice interactions feel more natural and responsive.
**Multilingual and Accent Adaptation for Global Users**:
Bolna AI serves customers across different regions, requiring multilingual voice capabilities. Deepgram’s support for multiple languages and accents allows Bolna to offer voice AI solutions tailored to diverse user bases.
**Emotionally Expressive Speech for Personalized Interactions**:
Bolna AI leverages Deepgram’s emotion control feature to adjust the tone and expressiveness of speech output. This ensures that AI responses sound more engaging and contextually appropriate, whether for customer support, recruitment, or e-commerce applications.
**Handling Complex Pronunciations and Technical Terms**:
Deepgram TTS helps Bolna AI correctly pronounce names, technical jargon, and industry-specific terminology, ensuring clarity and accuracy in conversations.
## 4. List of Deepgram TTS models supported on Bolna AI
| Model |
| ------ |
| aura |
| aura-2 |
## Conclusion
Deepgram’s advanced TTS technology enhances Bolna AI’s ability to deliver **realistic, engaging, and context-aware speech output** in voice-driven applications. By integrating Deepgram TTS, Bolna ensures smooth, natural, and multilingual conversations across industries such as recruitment, customer service, and e-commerce. This integration significantly improves the quality of voice interactions, making AI agents sound more human-like and responsive.
# ElevenLabs Synthesizer (Text to Speech)
Source: https://docs.bolna.ai/providers/voice/elevenlabs
Enhance your Bolna Voice AI agents using ElevenLabs ultra-realistic voices, featuring multilingual support through the latest Turbo and Flash models.
## 1. What is ElevenLabs TTS?
[ElevenLabs](https://elevenlabs.io/) Text-to-Speech (TTS) is an advanced AI-powered speech synthesis platform designed to generate high-quality, natural-sounding voices. Using deep learning models, ElevenLabs replicates human-like speech with remarkable accuracy, making it an ideal solution for applications requiring realistic voice interactions.
Unlike traditional TTS systems that rely on rule-based or concatenative synthesis, ElevenLabs leverages deep neural networks to analyze and generate speech in a way that mimics human intonation, pacing, and expressiveness. This makes it particularly useful for AI-driven applications such as virtual assistants, audiobooks, dubbing, and interactive voice agents.
## 2. Key Features of ElevenLabs TTS
ElevenLabs offers several cutting-edge features that set it apart from traditional text-to-speech engines:
* **Human-Like Speech Quality**: Produces natural-sounding voices with expressive intonations, eliminating robotic-sounding speech.
* **Multi-Language Support**: Supports multiple languages and accents, enabling seamless localization for global applications.
* **Voice Cloning**: Allows users to create AI-generated voices that closely match specific speakers with minimal data.
* **Real-Time Synthesis**: Generates speech with minimal latency, making it suitable for real-time applications such as AI voice assistants.
* **Custom Voice Models**: Provides options to train and fine-tune voice models for industry-specific or brand-personalized voices.
## 3. How Bolna Uses ElevenLabs for TTS
Bolna AI integrates ElevenLabs' TTS technology to enhance its voice AI agents, providing realistic and natural speech output for seamless user interactions. Here’s how Bolna leverages ElevenLabs TTS:
* **Generating Human-Like Voice Responses**:
Bolna AI uses ElevenLabs to convert AI-generated text responses into high-quality, lifelike speech. This allows users to interact with Bolna’s voice agents in a more natural and engaging manner.
* **Multi-Language and Accent Adaptation**:
Given Bolna’s need to cater to diverse global audiences, ElevenLabs’ multilingual capabilities ensure that voice agents can communicate fluently in multiple languages and accents, enhancing user accessibility and comprehension.
* **Real-Time Voice Processing for Conversations**:
Bolna’s AI-driven voice agents operate in real-time, requiring low-latency speech synthesis. ElevenLabs' real-time TTS API ensures that responses are generated instantly, maintaining a smooth conversational flow.
* **Custom Voice Models for Brand Identity**:
For businesses using Bolna AI, ElevenLabs’ custom voice models allow for the creation of distinct and brand-aligned voice personas. This helps companies establish a unique audio identity that resonates with their audience.
* **Handling Complex Pronunciations and Domain-Specific Vocabulary**:
Bolna AI works in industries such as recruitment, customer support, and e-commerce, where precise pronunciation of names, technical jargon, and domain-specific terms is crucial. ElevenLabs helps Bolna generate accurate speech outputs by recognizing and adjusting for industry-specific vocabulary.
## 4. List of ElevenLabs models supported on Bolna AI
| Model |
| -------------------- |
| eleven\_turbo\_v2\_5 |
| eleven\_flash\_v2\_5 |
## Conclusion
ElevenLabs' advanced TTS technology enables Bolna AI to deliver **realistic, engaging, and context-aware speech output** for voice-driven applications. By integrating ElevenLabs, Bolna enhances its conversational AI capabilities, ensuring natural human-like interactions, real-time responses, and multilingual accessibility. This integration strengthens Bolna’s ability to provide superior voice AI experiences across industries such as recruitment, customer service, and e-commerce.
# Rime Synthesizer (Text to Speech)
Source: https://docs.bolna.ai/providers/voice/rime
Integrate Rime TTS with Bolna Voice AI agents for ultra-fast, expressive speech synthesis with sub-200ms latency and diverse voice options for conversational AI.
## 1. What is Rime TTS?
[Rime](https://www.rime.ai/) TTS is an advanced AI-powered speech synthesis platform designed to deliver **ultra-fast, highly expressive, and natural-sounding voices** for conversational AI applications. Rime provides speech synthesis technologies that perfectly balance quality, customizability, and speed for building conversational applications.
Rime TTS is specifically optimized for **real-time conversational AI**, offering **sub-200 millisecond speech synthesis speeds** with their flagship models. With a focus on **emotional expressiveness, demographic diversity, and lightning-fast processing**, Rime TTS enables enterprises to create **engaging, responsive, and human-like voice interactions** across various industries and use cases.
## 2. Key Features of Rime TTS
Rime TTS provides several cutting-edge features that enhance conversational AI applications:
**Ultra-Fast Speech Synthesis**: Delivers sub-200 millisecond synthesis speeds, with Mist v2 achieving \~70ms latency for real-time applications.
**Highly Expressive Speech Output**: Arcana model pushes the boundary of naturalness and emotional depth in synthesized speech with fine-grained prosody control.
**Multilingual and Demographic Diversity**: Supports multiple languages (English, Spanish, with more coming soon) and offers voices across many different demographic categories including age ranges, accents, and cultural backgrounds.
**Wide Range of Voice Options**: Features flagship voices like luna, celeste, orion, ursa, astra, esther, estelle, and andromeda across different speaking styles and demographics.
**Genre-Specific Optimization**: Provides specialized models for General, Conversational, Narration, and IVR use cases.
**Advanced Pronunciation Control**: Offers sophisticated control over speech performance using linguistically-aware markup and contextual nuances.
**Real-Time Processing Capabilities**: Engineered specifically for interactive applications requiring instant voice responses.
## 3. How Bolna Uses Rime for TTS
Bolna AI leverages Rime's cutting-edge TTS technology to create ultra-responsive, engaging, and lifelike voice responses for its AI-powered conversational agents. Here's how Bolna AI integrates Rime TTS:
**Ultra-Fast Voice Output for Real-Time Conversations**:
Bolna AI utilizes Rime's industry-leading synthesis speeds to ensure that its AI-driven voice agents deliver instantaneous responses during live interactions. With sub-200ms latency, Bolna eliminates unnatural delays and creates seamless conversational flow that feels natural and responsive.
**Highly Expressive Speech for Enhanced User Engagement**:
Bolna AI takes advantage of Rime's Arcana model to produce emotionally nuanced and expressive speech output. This enables AI agents to adjust their tone and emotional delivery based on conversation context, creating more engaging and human-like interactions.
**Diverse Voice Demographics for Global Accessibility**:
To serve diverse customer bases, Bolna AI utilizes Rime's wide range of voice demographics and accents, ensuring clear communication across different user populations. This demographic diversity helps businesses create more inclusive and accessible voice AI experiences.
**Multilingual Support for International Applications**:
Bolna AI leverages Rime's multilingual capabilities (English, Spanish, with expanding language support) to provide voice AI solutions that can serve global markets with native-sounding speech in multiple languages.
**Genre-Optimized Speech for Specific Use Cases**:
Bolna AI integrates Rime's genre-specific optimizations to deliver contextually appropriate speech output. For example:
* **Customer Support Agents**: Use conversational-optimized voices that sound empathetic and professional during support interactions.
* **Recruitment AI Assistants**: Employ general-purpose voices with neutral yet engaging tones for job-related communications.
* **E-commerce AI Representatives**: Utilize expressive voices that can adapt tone to enhance user engagement and sales conversations.
* **IVR Systems**: Deploy IVR-optimized voices for clear, professional automated phone system interactions.
**Advanced Prosody Control for Brand Customization**:
For businesses looking to create distinctive voice experiences, Bolna AI integrates Rime's advanced prosody and pronunciation controls, enabling fine-tuned speech output that aligns with specific brand personalities and communication styles.
## 4. List of Rime models supported on Bolna AI
| Model |
| ------ |
| arcana |
| mistv2 |
## Conclusion
By integrating **Rime TTS**, Bolna AI significantly enhances its **conversational AI capabilities**, delivering **ultra-fast, expressive, and demographically diverse voice output**. With its **sub-200ms synthesis speeds, emotional expressiveness, and multilingual adaptability**, Rime TTS enables Bolna to provide **seamless, human-like AI interactions** across industries such as **customer service, recruitment, and e-commerce**. This powerful TTS integration allows Bolna AI to offer **more responsive, natural, and inclusive voice AI solutions** that meet the demanding requirements of real-time conversational applications worldwide.
# Sarvam Synthesizer (Text to Speech)
Source: https://docs.bolna.ai/providers/voice/sarvam
Integrate and use your Bolna Voice AI agents with high-quality neural voices from Sarvam for natural, human-like conversational experiences.
## 1. What is Sarvam TTS?
[Sarvam](https://www.sarvam.ai/) TTS is a high-performance text-to-speech service developed by Sarvam AI, designed specifically for Indian languages. It delivers natural and expressive voice synthesis optimized for conversational use cases such as virtual assistants, IVRs, and customer support bots. Built using advanced generative AI techniques, Sarvam TTS offers real-time streaming capabilities and supports deployment at scale across multilingual environments.
## 2. Key Features of Sarvam TTS
Sarvam TTS provides several advanced features that enhance Bolna Voice AI applications:
**Multilingual Support**: Specially optimized for Indian languages such as Hindi, Telugu, Tamil, Kannada, and more.
**Natural-Sounding Voices**: Trained on diverse datasets to produce lifelike speech with proper intonation and pronunciation.
**Low Latency Streaming**: Designed for real-time use cases, ensuring smooth conversational flow in interactive systems.
**Custom Voice Options**: Ability to fine-tune or adapt voices for enterprise-specific needs.
## 3. How Bolna Uses Sarvam for TTS
Bolna Voice AI integrates Sarvam TTS to power Indian-language voice agents across recruitment, sales, and support workflows. The TTS system is used to generate real-time voice prompts, questions, and responses in native languages, ensuring better engagement and understanding, especially in Tier 2/3 regions.
**Real-Time Speech for Seamless Conversations**:
Sarvam’s low-latency streaming capabilities enable Bolna agents to synthesize speech in real time. This ensures a smooth, uninterrupted flow of conversation, making interactions feel natural and responsive for users.
**Multilingual & Accent-Aware Voice Support**:
Bolna uses Sarvam to serve candidates and customers in Hindi, Telugu, Tamil, and other Indian languages. The multilingual support allows each voice agent to adapt to the preferred language and accent of the user, improving comprehension and engagement—especially in Tier 2/3 regions.
**Handling Complex Pronunciations and Technical Terms**:
From candidate names to role-specific jargon, Sarvam TTS enables accurate pronunciation of complex or technical terms. This ensures that Bolna’s agents sound professional and easy to understand across varied use cases.
## 4. List of Sarvam TTS models supported on Bolna AI
| Model |
| ----------- |
| `bulbul-v2` |
| `bulbul-v1` |
## Conclusion
Sarvam TTS brings localized voice synthesis to the forefront of conversational AI in India. By integrating Sarvam, Bolna ensures its voice agents are not only intelligent but also relatable and linguistically inclusive. This helps improve candidate experience, increase response rates, and expand accessibility across diverse demographics.
# Smallest Synthesizer (Text to Speech)
Source: https://docs.bolna.ai/providers/voice/smallest
Integrate and use Smallest voices with Bolna Voice AI agents for lightweight and efficient text-to-speech solutions.
## 1. What is Smallest TTS?
[Smallest AI](https://smallest.ai/) TTS is an ultra-lightweight, high-efficiency speech synthesis engine designed for low-resource environments and edge computing applications. Unlike traditional cloud-based TTS solutions, Smallest AI focuses on delivering **fast, memory-efficient, and locally deployable speech synthesis** without sacrificing voice quality. It is ideal for AI-driven systems that require **real-time voice synthesis** on resource-constrained devices, such as mobile applications, IoT devices, and offline virtual assistants.
## 2. Key Features of Smallest TTS
Smallest AI TTS offers several unique features that make it an attractive option for AI-driven voice interactions:
**Lightweight and Efficient**: Optimized for low-power devices, embedded systems, and mobile applications, ensuring smooth performance on minimal hardware.
**Low-Latency, Real-Time Speech Generation**: Unlike cloud-based TTS solutions, Smallest AI offers instant voice synthesis with near-zero delay.
**Offline and On-Device Processing**: Supports fully offline speech generation without requiring an internet connection.
**Neural Compression for Compact Model Size**: Uses advanced compression techniques to reduce the model footprint while maintaining high-quality speech output.
**Multilingual Support with Minimal Resource Consumption**: Provides high-quality voice synthesis across multiple languages without requiring large storage or compute resources.
## 3. How Bolna Uses Smallest for TTS
Bolna AI integrates Smallest AI’s ultra-efficient speech synthesis technology to enhance its real-time conversational AI experience, particularly in low-resource and privacy-sensitive environments. Here’s how Bolna leverages Smallest AI TTS:
**Lightning-Fast Voice Responses for Instant AI Interactions**:
With Smallest AI’s low-latency TTS, Bolna ensures that its voice agents respond instantly, making interactions feel seamless, natural, and fluid.
**Efficient Multilingual Speech Processing with Minimal Compute**:
Bolna AI utilizes Smallest AI’s multilingual synthesis to generate speech without the overhead of large AI models, making it scalable for voice automation across multiple regions and languages.
**Customizable Voices for Enterprise Branding**:
Smallest AI supports lightweight, trainable voice models, allowing Bolna AI to provide custom-branded voices for businesses, ensuring a unique and recognizable AI-driven voice identity.
## 4. List of Smallest models supported on Bolna AI
| Model |
| ------------ |
| lightning-v2 |
## Conclusion
Smallest AI TTS enhances Bolna AI’s ability to deliver **ultra-fast voice interactions**. By integrating **Smallest AI’s lightweight and highly efficient speech synthesis**, Bolna ensures seamless **real-time AI conversations with low-latency responses**. This makes Bolna’s AI voice agents highly scalable for industries requiring **compact, high-performance voice AI solutions in customer service, healthcare and enterprise automation**.
# Handle Inbound Calls with Bolna Voice AI Agents
Source: https://docs.bolna.ai/receiving-incoming-calls
Set up Bolna Voice AI agents to answer incoming calls. Assign phone numbers, configure settings via the dashboard or API, and enhance customer interactions.
You will need to assign a phone number to a Bolna Voice AI agent so that it automatically answers all incoming calls on that phone number.
## Method 1. Purchase a phone number from the [Bolna Dashboard](https://platform.bolna.ai/phone-numbers).
Please refer to a [step by step tutorial for purchasing phone numbers on Bolna](/buying-phone-numbers).
## Method 2. Connect your Telephony account and use your own phone numbers.
* [Use your own Twilio phone numbers with Bolna](/twilio-connect-provider)
* [Use your own Plivo phone numbers with Bolna](/plivo-connect-provider)
***
## Set up Bolna Voice AI agents to receive inbound calls from dashboard
## Set up Bolna Voice AI agents to receive inbound calls using APIs
#### Step 1. Use [List Phone Numbers API](api-reference/phone-numbers/get_all) to list all available phone numbers.
```curl request-phone-numbers
curl --request GET \
--url https://api.bolna.ai/phone-numbers/all \
--header 'Authorization: Bearer '
```
```json response-phone-numbers
[
{
"id": "3c90c3cc0d444b5088888dd25736052a",
"humanized_created_at": "5 minutes ago",
"created_at": "2024-01-23T05:14:37Z",
"humanized_updated_at": "5 minutes ago",
"updated_at": "2024-02-29T04:22:89Z",
"renewal_at": "17th Dec, 2024",
"phone_number": "+19876543210",
"agent_id": "3c90c3cc-0d44-4b50-8888-8dd25736052a",
"price": "$5.0",
"telephony_provider": "twilio",
"rented": true
}
]
```
#### Step 2. Use [Set Inbound Agent API](api-reference/inbound/agent) to assign a phone number for Bolna Voice AI agent.
```curl request-set-inbound-agent
curl --request POST \
--url https://api.bolna.ai/inbound/setup \
--header 'Authorization: Bearer ' \
--header 'Content-Type: application/json' \
--data '{
"agent_id": "3c90c3cc-0d44-4b50-8888-8dd25736052a",
"phone_number_id": "123e4567-e89b-12d3-a456-426614174000"
}'
```
```json response-setup-inbound-agent
{
"url": "https://api.bolna.ai/inbound_call?agent_id=3c90c3cc-0d44-4b50-8888-8dd25736052a&user_id=28f9c34b-8eb0-4af5-8689-c2f6c4daec22",
"phone_number": "+19876543210",
"id": "3c90c3cc0d444b5088888dd25736052a"
}
```
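The two curl calls above can also be scripted. The following stdlib-only sketch builds the Set Inbound Agent request from Step 2 (the endpoint path and payload shape are taken from the curl example; the API key is a placeholder you must supply):

```python
import json
import urllib.request

API_BASE = "https://api.bolna.ai"

def build_setup_request(api_key: str, agent_id: str,
                        phone_number_id: str) -> urllib.request.Request:
    """Build the POST request that assigns a phone number to an agent,
    mirroring the Set Inbound Agent curl example above."""
    payload = json.dumps({
        "agent_id": agent_id,
        "phone_number_id": phone_number_id,
    }).encode("utf-8")
    return urllib.request.Request(
        f"{API_BASE}/inbound/setup",
        data=payload,
        method="POST",
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

# To actually send it (requires a valid API key):
# req = build_setup_request("YOUR_API_KEY", agent_id, phone_number_id)
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp))
```

On success, the response mirrors the JSON shown above, including the inbound call `url` for the agent.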
# Explore Telephony Integrations with Bolna Voice AI
Source: https://docs.bolna.ai/supported-telephony-providers
Review the telephony providers compatible with Bolna agents, including Twilio and Plivo. Integrate seamlessly to initiate inbound and outbound Voice AI calls.
You can create your agents on Bolna and use them to initiate calls for a variety of use-cases like:
* 24x7 AI Front desk
* Automated scheduling
* Lead qualification
* Recruitment
* Customer support
* and more
We have the following telephony integrations available for initiating both **outbound** and **inbound** calls:
* [Twilio](/twilio)
* [Plivo](/plivo)
* [Talk to us to include more](https://calendly.com/bolna/30min)
# Prompting & best practices for Bolna Voice AI agents
Source: https://docs.bolna.ai/tips-and-tricks
Explore practical tips and tricks to enhance the performance of Bolna Voice AI agents. Master best practices for effective implementation.
### Agent Overview
* Choosing between a Free flowing and an IVR agent
* **Benefits of Free Flowing agents**: A free flowing agent requires only a plain English prompt to create. It enables truly natural conversations and allows your agent to be creative when responding
* **Harms of Free Flowing agents**: The prompt and settings often require fine-tuning before you get the desired responses. Free flowing agents are also costlier
* **Benefits of IVR agents**: You have complete control over the exact sentences your IVR agent will say. IVR agents are also cheaper and carry almost no risk of deviation / hallucination
* **Harms of IVR agents**: Conversations are limited to what you have defined in the IVR tree. The agent will try to map any user response to the options that you have given it, which increases the chances of the call seeming artificial. Building an IVR tree also takes time
* Choosing a Task
* The quickest way of creating an agent is choosing a task and making small changes to the pre-defined template that we have set for you (Note: these agents are built to run on default settings. Changes in settings will require changing prompts)
* If you want to start from scratch, choose Others as your task
* Choosing invocation
* Choose telephone only if you want your agent to make telephone calls (Note: Telephone calling is expensive and you will burn through your credits rapidly. We **strongly** suggest using our playground to thoroughly test your agent before initiating a call)
### Assigning Rules
* Stay concise with your prompts. Use the 'Tips' to quickly build a prompt. Ideally, start with a clear, short prompt and keep adding details.
* Prompt engineering takes time! Be patient if your agent does not follow your prompt the way you want it to
* **Expert Tips** : For smart low-latency conversations, only use the Overview page (leave all pages blank). Clearly state your required intent, and start the prompt with the line "You will not speak more than 2 sentences"
### Assigning Follow-up tasks
* Summary will give a short summary of all important points discussed in the conversation
* Extraction allows you to specify what classifiers you want to pull from the conversation. Be clear in defining what you want to extract
* For webhooks, you will have to provide a webhook URL (e.g., Zapier). Your extraction prompt should trigger the task set through the webhook.
### Assigning Settings
* Refer to this page for a detailed pricing and latency guide when assigning settings
* Make sure the voice you choose speaks the language that you have chosen
* Only modify advanced settings if you have experience working with LLMs
* **Expert Tips** : For smart low-latency conversations use these settings
* Model: Dolphin-2.6-mixtral-8x7b
* Language: en
* Voice: Danielle (United States - English)
* Max Tokens: 60
* Buffer Size : 100
# Booking Calendar Slots via Bolna Voice AI and Cal.com Integration
Source: https://docs.bolna.ai/tool-calling/book-calendar-slots
A comprehensive guide on how to book calendar slots during live calls using Bolna Voice AI integrated with Cal.com.
| Property | Description |
| ----------- | ----------------------------------------------------------------------------------------------------------------------------------------- |
| Description | Clearly describe the tool's purpose and try to **make it as descriptive as possible for the LLM model to execute it successfully**. |
| API key | Your Cal.com API key ([generate here](https://app.cal.com/settings/developer/api-keys)) |
| Event | Your Cal.com event |
| Timezone | **Select the timezone** used in your Cal.com event. This **helps the agent compute times accurately** based on your local setting. |
# Implementing Custom Function Calls in Bolna Voice AI Agents
Source: https://docs.bolna.ai/tool-calling/custom-function-calls
Learn how to design and integrate custom function calls within Bolna Voice AI agents to enhance their capabilities.
You can design your own functions and use them in Bolna. Custom functions follow the [OpenAI specifications](https://platform.openai.com/docs/guides/function-calling).
You can paste your valid JSON schema and define your own custom function.
## Steps to write your own custom function
* Make sure the `key` is set as `custom_task`.
* Write a good description for the function. This helps the model intelligently decide when to call it.
* All parameter properties must be included in the `value` param as JSON and must follow Python format specifiers like below
| Param | Type | Variable |
| ----------- | ------- | --------------- |
| `user_name` | `str` | `%(user_name)s` |
| `user_age` | `int` | `%(user_age)i` |
| `cost` | `float` | `%(cost)f` |
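As an illustration (this is a sketch of the substitution mechanics, not Bolna's actual implementation), the %-style placeholders from the table above behave like Python's dict-based string formatting: once the model has collected the values, each specifier is filled from the collected arguments.

```python
# Hypothetical illustration: filling %-style placeholders with collected values
param_template = {
    "user_name": "%(user_name)s",
    "user_age": "%(user_age)i",
    "cost": "%(cost)f",
}
# Values the model gathered during the conversation (example data)
collected = {"user_name": "Bruce", "user_age": 42, "cost": 19.99}

# Each specifier is substituted from the dict of collected arguments
filled = {key: spec % collected for key, spec in param_template.items()}
print(filled)  # → {'user_name': 'Bruce', 'user_age': '42', 'cost': '19.990000'}
```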
## More examples of writing custom function calls
```curl
curl --location 'https://my-api-dev.xyz?customer_phone=+19876543210' \
--header 'Authorization: Bearer ***'
```
The above request corresponds to the following `GET` custom function:
```json {7, 17}
{
  "name": "get_product_details",
  "description": "Use this tool to fetch product details",
  "parameters": {
    "type": "object",
    "properties": {
      "customer_phone": {
        "type": "string",
        "description": "This is the customer's phone number"
      }
    }
  },
  "key": "custom_task",
  "value": {
    "method": "GET",
    "param": {
      "customer_phone": "%(customer_phone)s"
    },
    "url": "https://my-api-dev.xyz",
    "api_token": "Bearer ***"
  }
}
```
```curl
curl --location 'https://my-api-dev.xyz/v1/store_rating' \
--header 'Content-Type: application/json' \
--header 'Authorization: Bearer ***' \
--data '{
  "customer_id": "134532",
  "rating": "2.3"
}'
```
The above request corresponds to the following `POST` custom function:
```json {7, 11, 21, 22}
{
  "name": "save_feedback",
  "description": "Use this tool to save the customer's feedback",
  "parameters": {
    "type": "object",
    "properties": {
      "customer_id": {
        "type": "string",
        "description": "This is the customer's ID"
      },
      "rating": {
        "type": "string",
        "description": "This is the rating provided by the customer"
      }
    }
  },
  "key": "custom_task",
  "value": {
    "method": "POST",
    "param": {
      "customer_id": "%(customer_id)s",
      "rating": "%(rating)s"
    },
    "url": "https://my-api-dev.xyz/v1/store_rating",
    "api_token": "Bearer ***"
  }
}
```
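To make the mechanics concrete, here is a rough sketch (an assumption about the execution flow, not Bolna's actual implementation) of how the `value` block of a `POST` custom function could be turned into an HTTP request once the model has collected the arguments. The request is built but deliberately not sent.

```python
import json
import urllib.request

# The "value" block of a POST custom function (example data)
value = {
    "method": "POST",
    "param": {"customer_id": "%(customer_id)s", "rating": "%(rating)s"},
    "url": "https://my-api-dev.xyz/v1/store_rating",
    "api_token": "Bearer ***",
}
# Arguments the model collected during the conversation
llm_args = {"customer_id": "134532", "rating": "2.3"}

# Fill the %-style placeholders, then construct (but don't send) the request
body = {key: spec % llm_args for key, spec in value["param"].items()}
request = urllib.request.Request(
    value["url"],
    data=json.dumps(body).encode(),
    method=value["method"],
    headers={
        "Content-Type": "application/json",
        "Authorization": value["api_token"],
    },
)
print(request.get_method(), request.full_url)
```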
## Using variables and dynamic context
[All variables](/using-context) that are part of the agent prompt, if included in the custom function, will be substituted automatically with their appropriate values. The model won't ask for these values since they're already available.
You can check the following demonstration.
```php agent_prompt {2,6,7,8}
This is your agent Sam and you're speaking to {customer_name}.
Please have a friendly conversation with the customer.
Please note:
The agent has a unique id which is "{agent_id}".
The call's unique id is "{call_sid}".
The customer's phone number is "{to_number}".
```
```json custom_function {7,11,15,19,29,30,31,32}
{
  "name": "get_product_details",
  "description": "Use this tool to fetch product details",
  "parameters": {
    "type": "object",
    "properties": {
      "to_number": {
        "type": "string",
        "description": "This is the customer's phone number"
      },
      "agent_id": {
        "type": "string",
        "description": "This is the agent_id"
      },
      "customer_name": {
        "type": "string",
        "description": "This is the name of the customer"
      },
      "customer_address": {
        "type": "string",
        "description": "This is the address of the customer"
      }
    }
  },
  "key": "custom_task",
  "value": {
    "method": "GET",
    "param": {
      "to_number": "%(to_number)s",
      "agent_id": "%(agent_id)s",
      "customer_name": "%(customer_name)s",
      "customer_address": "%(customer_address)s"
    },
    "url": "https://my-api-dev.xyz",
    "api_token": "Bearer ***"
  }
}
```
```json injected_call_params
{
  "to_number": "+19876543210", // automatically injected
  "customer_name": "Bruce", // passed via triggering the call
  "agent_id": "7d86b904-da64-4b8e-8f51-6fef2c630380" // automatically injected
}
```
```json final_custom_function {7,11,15,19,29,30,31,32}
{
  "name": "get_product_details",
  "description": "Use this tool to fetch product details",
  "parameters": {
    "type": "object",
    "properties": {
      "to_number": {
        "type": "string",
        "description": "This is the customer's phone number"
      },
      "agent_id": {
        "type": "string",
        "description": "This is the agent_id"
      },
      "customer_name": {
        "type": "string",
        "description": "This is the name of the customer"
      },
      "customer_address": {
        "type": "string",
        "description": "This is the address of the customer"
      }
    }
  },
  "key": "custom_task",
  "value": {
    "method": "GET",
    "param": {
      "to_number": "+19876543210", // substituted via passing variables
      "agent_id": "7d86b904-da64-4b8e-8f51-6fef2c630380", // substituted via passing variables
      "customer_name": "Bruce", // substituted via passing variables
      "customer_address": "%(customer_address)s" // unchanged as this wasn't injected; it will be computed in realtime by the LLM model
    },
    "url": "https://my-api-dev.xyz",
    "api_token": "Bearer ***"
  }
}
```
# Fetching Available Calendar Slots with Bolna Voice AI and Cal.com Integration
Source: https://docs.bolna.ai/tool-calling/fetch-calendar-slots
Learn how to integrate Bolna Voice AI with Cal.com to fetch real-time available calendar slots during live conversations.
| Property | Description |
| ----------- | ---------------------------------------------------------------------------------------------------------------------------------------------- |
| Description | Add a description for fetching the slots. **Try to make it as descriptive as possible for the LLM to execute this function successfully**. |
| API key | Your Cal.com API key ([generate here](https://app.cal.com/settings/developer/api-keys)) |
| Event | Your Cal.com event |
| Timezone | **Select the timezone** used in your Cal.com event. This **helps the agent compute times accurately** based on your local setting. |
# Function Calling in Bolna Voice AI: Automate Workflows with Custom Functions
Source: https://docs.bolna.ai/tool-calling/introduction
Learn how to use function calling in Bolna Voice AI to automate complex workflows by integrating custom functions with your voice agents.
## Type of tools supported in Bolna AI
Explore how to transfer a live phone call to a human using Bolna Voice AI.
Fetch real-time available calendar slots from Cal.com during live conversations.
Book calendar slots during live calls using the Cal.com integration.
Design your own functions and use them with Bolna Voice AI agents
# Transferring Live Calls Using Bolna Voice AI Agents: A Step-by-Step Guide
Source: https://docs.bolna.ai/tool-calling/transfer-calls
Discover how to transfer live phone calls to human agents using Bolna Voice AI, enabling seamless integration and workflow automation.
Using this, you can transfer on-going calls to another phone number depending on the description (prompt) provided.
| Property | Description |
| ------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------- |
| Description | Description for your transfer call functionality. **Make it as descriptive as possible for the LLM to execute this function successfully**. |
| Transfer to Phone number | The phone number where the agent will transfer the call to |
You may add multiple transfer call tools for multiple phone numbers.
# Using Bolna AI with No-code Tools
Source: https://docs.bolna.ai/tutorials/introduction
Build your custom workflows and applications using Bolna Voice AI integrations with popular automation tools like Make.com, Zapier and viaSocket
## Tutorials for Bolna AI modules using APIs on Make.com
1. Create API connection with Bolna AI + Make.com: [Read tutorial](/tutorials/make-com/create-bolna-api-connection).
2. Create Webhook connection with Bolna AI + Make.com: [Read tutorial](/tutorials/make-com/create-bolna-webhook-connection).
3. Email workflows using Bolna AI with Make.com: [Read tutorial](/tutorials/make-com/send-email-after-bolna-call).
4. SMS workflows using Bolna AI with Make.com: [Read tutorial](/tutorials/make-com/send-sms-after-bolna-call).
5. WhatsApp workflows using Bolna AI with Make.com: [Read tutorial](/tutorials/make-com/send-whatsapp-after-bolna-call).
***
## Tutorials for Bolna AI modules using APIs on Zapier
1. Create API connection with Bolna AI + Zapier: [Read tutorial](/tutorials/zapier/create-bolna-api-connection).
***
## Tutorials for Bolna AI modules using APIs on viaSocket
1. Create API connection with Bolna AI + viaSocket: [Read tutorial](/tutorials/viasocket/create-bolna-api-connection).
# Create Bolna API connection with Make.com
Source: https://docs.bolna.ai/tutorials/make-com/create-bolna-api-connection
Learn how to establish an API connection between Bolna Voice AI agents and Make.com, facilitating seamless integration and automation.
## Following are the steps to create an API connection
In this tutorial, we will:
1. Use Bolna's Make.com `Make an Outgoing Phone Call` module
2. Create a new API connection
Please refer to [this documentation](/api-reference/introduction#authentication) for creating a new API Key.
After completing the above steps, you can use Action modules to make Bolna API calls on Make.com.
Please refer to Bolna APIs from [API reference docs](/api-reference/introduction).
# Create Bolna Webhook connection with Make.com
Source: https://docs.bolna.ai/tutorials/make-com/create-bolna-webhook-connection
Step-by-step tutorial on integrating Bolna Voice AI agents with Make.com using webhook connections for seamless workflow automations.
## Following are the steps to create a Webhook connection
In this tutorial, we will:
1. Use Bolna's Make.com Trigger `Watch end of Phone call` module
2. Create a new Webhook connection
3. Generate a Webhook URL on Make.com
4. Use the Webhook URL created by Make.com in Bolna's voice AI agent
After completing the above steps, your Bolna Voice AI agent is successfully configured to automatically send phone call information to Make.com.
# Using Bolna AI with Make.com
Source: https://docs.bolna.ai/tutorials/make-com/overview
Create custom workflows by choosing Bolna AI integrations with [Make.com](https://www.make.com/en/integrations/bolna)
## Bolna AI modules using APIs on Make.com
These are modules that require [Bolna's API connection with Make.com](/tutorials/make-com/create-bolna-api-connection).
Use this to make Outgoing Phone Calls to phone numbers
Use this to perform any authorized API call
## Bolna AI Trigger modules using webhooks on Make.com
These get triggered when Make.com receives events from Bolna servers and require [Bolna's webhook connection with Make.com](/tutorials/make-com/create-bolna-webhook-connection).
Use this to get the entire payload of call data after every call
***
## Bolna + Make.com Tutorials
# Integrating make.com with Bolna AI to send emails
Source: https://docs.bolna.ai/tutorials/make-com/send-email-after-bolna-call
Discover how to automate email notifications after a Bolna Voice AI phone call using Make.com for your automations and workflows.
Connect Bolna with any of your favorite apps in just a few clicks using [Bolna's official Make.com integrations](https://www.make.com/en/integrations/bolna).
## Overview and requirements
1. Bolna account. You may create a free Bolna account by [signing up on Bolna](https://platform.bolna.ai/sign-up)
2. A Bolna Voice AI agent which will be making the calls
3. Official Bolna's Make.com `Bolna Watch end of Phone call` trigger integration
## Steps to send emails after a Bolna AI phone call is completed
Follow the [webhook connection guide](/tutorials/make-com/create-bolna-webhook-connection) to create a webhook connection and generate a webhook URL.
Please note:
1. If you want to use any extraction details, they will be provided as JSON under `"extracted_data"` as shown below. Please refer to [extracting conversation data](/call-details) for more details.
```json using extracted content
...
...
"extracted_data": {
  "address": "Market street, San francisco",
  "salary_expected": "100k USD"
},
...
```
2. Any dynamic variables you pass while making the call can be retrieved from `"context_details" > "recipient_data"` as shown below:
```json using user details
...
"context_details": {
  "recipient_data": {
    "name": "Harry",
    "email": "harry@hogwarts.com"
  },
  "recipient_phone_number": "+19876543210"
},
...
```
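In a Make.com scenario these fields are mapped visually, but the same lookups can be sketched in plain Python against the payload fields shown above (the payload here is abbreviated example data, not a full webhook body):

```python
import json

# Abbreviated end-of-call payload using the fields shown above
payload = json.loads("""
{
  "extracted_data": {
    "address": "Market street, San francisco",
    "salary_expected": "100k USD"
  },
  "context_details": {
    "recipient_data": {"name": "Harry", "email": "harry@hogwarts.com"},
    "recipient_phone_number": "+19876543210"
  }
}
""")

# Values you would feed into the email step of the scenario
email_to = payload["context_details"]["recipient_data"]["email"]
salary = payload["extracted_data"]["salary_expected"]
print(email_to, salary)  # → harry@hogwarts.com 100k USD
```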
# Integrating make.com with Bolna AI to send SMS
Source: https://docs.bolna.ai/tutorials/make-com/send-sms-after-bolna-call
Discover how to automate SMS notifications after a Bolna Voice AI phone call by integrating with Make.com, improving follow-up communication.
Connect Bolna with any of your favorite apps in just a few clicks using [Bolna's official Make.com integrations](https://www.make.com/en/integrations/bolna).
## Overview and requirements
1. Bolna account. You may create a free Bolna account by [signing up on Bolna](https://platform.bolna.ai/sign-up)
2. A Bolna Voice AI agent which will be making the calls
3. Official Bolna's Make.com `Bolna Watch end of Phone call` trigger integration
## Steps to send SMS after a Bolna AI phone call is completed
Follow the [webhook connection guide](/tutorials/make-com/create-bolna-webhook-connection) to create a webhook connection and generate a webhook URL.
Please note:
1. If you want to use the call summary, it will be provided under `"summary"` as shown below.
```json using call summary
...
...
"summary": "The conversation was between a user named Harry and an agent named Charles from Alexa. Charles initiated the call to assist Harry with inquiries about Apple Service Center. Harry asked for the location of the closest service center, and Charles informed about Apple Union Square, 300 Post Street San Francisco. After receiving the information, Harry indicated he had no further questions, and Charles offered to help in the future before concluding the call.",
...
```
2. Any dynamic variables you pass while making the call can be retrieved from `"context_details" > "recipient_data"` as shown below:
```json using user details
...
"context_details": {
  "recipient_data": {
    "name": "Harry",
    "email": "harry@hogwarts.com"
  },
  "recipient_phone_number": "+19876543210"
},
...
```
# Integrating make.com with Bolna AI to send WhatsApp
Source: https://docs.bolna.ai/tutorials/make-com/send-whatsapp-after-bolna-call
Learn how to send automated WhatsApp messages following a Bolna Voice AI phone call by integrating with Make.com.
Connect Bolna with any of your favorite apps in just a few clicks using [Bolna's official Make.com integrations](https://www.make.com/en/integrations/bolna).
## Overview and requirements
1. Bolna account. You may create a free Bolna account by [signing up on Bolna](https://platform.bolna.ai/sign-up)
2. A Bolna AI agent which will be making the calls
3. Official Bolna's Make.com `Bolna Watch end of Phone call` trigger integration
## Steps to send WhatsApp after a Bolna AI phone call is completed
Follow the [webhook connection guide](/tutorials/make-com/create-bolna-webhook-connection) to create a webhook connection and generate a webhook URL.
Please note:
1. If you want to use any extraction details, they will be provided as JSON under `"extracted_data"` as shown below. Please refer to [extracting conversation data](/call-details) for more details.
```json using extracted content
...
...
"extracted_data": {
  "salary_expected": "120K USD"
},
...
```
2. Any dynamic variables you pass while making the call can be retrieved from `"context_details" > "recipient_data"` as shown below:
```json using user details
...
"context_details": {
  "recipient_data": {
    "name": "Harry",
    "email": "harry@hogwarts.com"
  },
  "recipient_phone_number": "+19876543210"
},
...
```
# Create Bolna API connection with viaSocket
Source: https://docs.bolna.ai/tutorials/viasocket/create-bolna-api-connection
Step-by-step guide to integrating Bolna Voice AI with viaSocket by creating an API connection, enabling efficient workflow automation.
A simple step-by-step guide to integrating Bolna Voice AI with [viaSocket](https://viasocket.com/integrations/bolna) by creating an API connection. This integration will allow you to automate workflows efficiently by connecting voice commands from Bolna to various apps in viaSocket.
## Following are the steps to create an API connection
In this tutorial, we will:
1. Create a new Bolna API connection
Please refer to [this documentation](/api-reference/introduction#authentication) for creating a new API Key.
After completing the above steps, you can use [viaSocket](https://viasocket.com/integrations/bolna) modules with Bolna AI.
# Using Bolna AI with viaSocket
Source: https://docs.bolna.ai/tutorials/viasocket/overview
Learn how to integrate Bolna Voice AI with viaSocket to create custom workflows, enabling seamless automation between Bolna and other applications.
Integrating Bolna Voice AI with [viaSocket](http://viasocket.com/integrations?utm_source=Bolna.dev\&utm_medium=marketplace\&utm_campaign=Bolna.dev_listing) allows you to automate tasks using voice commands. With this integration, Bolna can trigger actions in thousands of apps connected through viaSocket, making your processes more efficient and reducing manual work. It helps streamline workflows and improve productivity by automating repetitive tasks.
## Bolna AI modules using APIs on viaSocket
These are modules that require [Bolna's API connection with viaSocket](/tutorials/viasocket/create-bolna-api-connection).
To create Outbound calls
Get all agent executions and their details
***
You can reach out to viaSocket support from [https://viasocket.com/support](https://viasocket.com/support).
# Create Bolna API connection with Zapier
Source: https://docs.bolna.ai/tutorials/zapier/create-bolna-api-connection
Step-by-step guide to integrating Bolna Voice AI with Zapier by creating an API connection, enabling efficient workflow automation.
## Following are the steps to create an API connection
In this tutorial, we will:
1. Create a new Bolna API connection
Go to [https://zapier.com/app/connections](https://zapier.com/app/connections) to manage apps
Please refer to [this documentation](/api-reference/introduction#authentication) for creating a new API Key.
After completing the above steps, you can use Action modules to make Bolna API calls on Zapier.
Please refer to Bolna APIs from [API reference docs](/api-reference/introduction).
# Using Bolna AI with Zapier
Source: https://docs.bolna.ai/tutorials/zapier/overview
Learn how to integrate Bolna Voice AI with Zapier to create custom workflows, enabling seamless automation between Bolna and other applications.
## Bolna AI modules using APIs on Zapier
These are modules that require [Bolna's API connection with Zapier](/tutorials/zapier/create-bolna-api-connection).
Use this to make Outgoing Phone Calls to phone numbers
***
## Bolna + Zapier Tutorials
Coming soon
# Integrate Twilio with Bolna for Enhanced Calling
Source: https://docs.bolna.ai/twilio
Leverage Twilio with Bolna to handle inbound and outbound calls seamlessly. Follow setup guides and connect your Twilio account for tailored experiences.
Bolna agents make phone calls using Twilio numbers
Bolna agents receive phone calls on Twilio numbers and answer them
Use your own Twilio account with Bolna
# Securely Link Your Twilio Account to Bolna
Source: https://docs.bolna.ai/twilio-connect-provider
Follow detailed steps to connect your Twilio account with Bolna. Enable the use of your Twilio phone numbers for both inbound and outbound Voice AI calls.
## Use your own Twilio account to make outbound calls
We connect your `Twilio` account securely using [Infisical](https://infisical.com/).
You can connect your own Twilio account and start using it on Bolna. All calls initiated from Bolna will be from your own Twilio account and use your own Twilio phone numbers.
1. Navigate to the `Providers` tab from the left menu bar and click the **Twilio connect** button.
2. Fill in the required details.
3. Save the details by clicking the **connect** button.
4. You'll see that your Twilio account was successfully connected. All your calls will now go via your own Twilio account and phone numbers.
# Make Outbound Calls via Twilio with Bolna Voice AI
Source: https://docs.bolna.ai/twilio-outbound-calls
Set up Bolna Voice AI agents to place outbound calls through Twilio. Learn dashboard configurations and API methods for efficient call management.
## Making outbound calls from dashboard
1. Login to the dashboard at [https://platform.bolna.ai](https://platform.bolna.ai) using your account credentials
2. Choose `Twilio` as the Call provider for your agent and save it
3. Start placing phone calls by providing the recipient phone numbers.
Bolna will place the calls to the provided phone numbers.
You can place calls using your own custom Twilio phone numbers only if you've connected your Twilio account.
You can read more on how to connect your Twilio account [here](/providers).
## Making outbound calls Using APIs
1. Generate and save your [Bolna API Key](/api-reference/introduction#steps-to-generate-your-api-key)
2. Set your agent `input` and `output` tools as `twilio` while using [`/create` Agent API](/api-reference/agent/create)
```create-agent.json
...
...
"tools_config": {
  "output": {
    "format": "wav",
    "provider": "twilio"
  },
  "input": {
    "format": "wav",
    "provider": "twilio"
  },
  "synthesizer": {...},
  "llm_agent": {...},
  "transcriber": {...},
  "api_tools": {...}
}
...
...
```
3. Use the [`/call` API](/api-reference/calls/make) to place the call to the agent
```call.json
curl --request POST \
  --url https://api.bolna.ai/call \
  --header 'Authorization: ' \
  --header 'Content-Type: application/json' \
  --data '{
    "agent_id": "123e4567-e89b-12d3-a456-426655440000",
    "recipient_phone_number": "+10123456789"
  }'
```
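The same `/call` request can also be sketched with Python's standard library. This sketch only constructs the request object; supply your real API key and uncomment the `urlopen` line to actually place the call.

```python
import json
import urllib.request

# Build the /call request (example agent_id and phone number from the docs)
request = urllib.request.Request(
    "https://api.bolna.ai/call",
    data=json.dumps({
        "agent_id": "123e4567-e89b-12d3-a456-426655440000",
        "recipient_phone_number": "+10123456789",
    }).encode(),
    method="POST",
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer <your-api-key>",  # placeholder, not a real key
    },
)
# urllib.request.urlopen(request)  # uncomment to actually place the call
print(request.get_method(), request.full_url)
```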
# Using Context with Bolna Voice AI Agents
Source: https://docs.bolna.ai/using-context
Learn how to effectively use context in Bolna Voice AI agents to create dynamic, personalized, and meaningful interactions
## Injecting current time
By default, the conversation has information about the current date and time in the user's timezone.
Bolna agents automatically attempt to inject the appropriate timezone during calls. However, for improved accuracy, it is recommended to explicitly pass the timezone, as the automatic detection may not always be precise.
| name | description |
| ---------- | ----------------------------------------------------------------------------------------------------------- |
| `timezone` | Name of the timezone as per the [tz database](https://en.wikipedia.org/wiki/List_of_tz_database_time_zones) |
## Default variables and context
By default, the following variables and information are always available in the conversation context.
| name | description |
| -------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `agent_id` | This is the `id` of the agent. |
| `execution_id` | This is the unique `id` of the Bolna conversation or the call. |
| `call_sid` | This is the unique `id` of the phone call belonging to telephony providers like `Twilio`, `Plivo`, `Vonage`, etc. |
| `from_number` | The phone number that **initiated** the call. In **inbound calls**, this is the caller (e.g. customer). In **outbound calls**, this is your agent's number. |
| `to_number` | The phone number that **received** the call. In **inbound calls**, this is your agent's number. In **outbound calls**, this is the recipient's number (e.g. customer). |
You may use the above information to pass useful details into your systems, or use them in function calls or prompts.
### Example using default variables
For example, adding the below content in the prompt using the above default variables will automatically fill in their values.
```php
This is your agent Sam. Please have a friendly conversation with the customer.
Please note:
The agent has a unique id which is "{agent_id}".
The call's unique id is "{call_sid}".
The customer's phone number is "{to_number}".
```
The above prompt content computes to and is fed as:
```php
This is your agent Sam. Please have a friendly conversation with the customer.
Please note:
The agent has a unique id which is "4a8135ce-94fc-4a80-9a33-140fe1ed8ff5".
The call's unique id is "PXNEJUFEWUEWHVEWHQFEWJ".
The customer's phone number is "+19876543210".
```
***
## Custom variables and context
Apart from the default variables, you can define your own variables and pass them into the prompt.
Any content written between `{}` in the prompt becomes a variable.
For example, adding the below content in the prompt will dynamically fill in the values.
### Example using custom variables
```
This is your agent Sam speaking.
May I confirm if your name is {customer_name} and you called us on
{last_contacted_on} to enquire about your order item {product_name}.
Use the call's id which is {call_sid} to automatically transfer the call to a human when the user asks.
```
You can now pass these values while placing the call:
```bash
curl --request POST \
  --url https://api.bolna.ai/call \
  --header 'Authorization: Bearer ' \
  --header 'Content-Type: application/json' \
  --data '{
    "agent_id": "123e4567-e89b-12d3-a456-426655440000",
    "recipient_phone_number": "+10123456789",
    "from_phone_number": "+1987654007",
    "user_data": {
      "customer_name": "Caroline",
      "last_contacted_on": "4th August",
      "product_name": "Pearl shampoo bar"
    }
  }'
```
The above prompt content computes to and is fed as (`call_sid` being the default variable gets injected automatically by default):
```
This is your agent Sam speaking.
May I confirm if your name is Caroline and you called us on
4th August to enquire about your order item Pearl shampoo bar.
Use the call's id which is PDFHNEWFHVUWEHC to automatically transfer the call to a human when the user asks.
```
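Conceptually, the `{variable}` substitution shown above behaves like Python's `str.format` over the `user_data` dictionary. The following is an illustration of that behavior only, not Bolna's implementation:

```python
# Prompt template with {variable} placeholders, as written in the agent prompt
prompt = (
    "This is your agent Sam speaking.\n"
    "May I confirm if your name is {customer_name} and you called us on\n"
    "{last_contacted_on} to enquire about your order item {product_name}."
)
# The user_data passed in the /call request
user_data = {
    "customer_name": "Caroline",
    "last_contacted_on": "4th August",
    "product_name": "Pearl shampoo bar",
}
# Each {placeholder} is replaced with the matching user_data value
rendered = prompt.format(**user_data)
print(rendered)
```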
# Integrating Knowledgebases with Bolna Voice AI
Source: https://docs.bolna.ai/using-your-knowledgebases
Learn how to use your knowledgebases with Bolna Voice AI agents to provide accurate, context-aware, and data-driven responses for seamless interactions.
We have integrated with [LanceDB](https://lancedb.com/), an enterprise-grade open-source database, for managing your knowledgebases.
## Adding your Knowledgebase
* Navigate to [Knowledge base](https://platform.bolna.ai/knowledgebase) tab from the left menu bar after login
* Click the `Upload` button to upload a new PDF
* Ingesting your knowledgebase document
* Wait a few minutes while we work our magic and process your uploaded document
## Using your uploaded knowledgebases in Agents
* In the agent creation page, navigate to `LLM tab` and select the knowledgebase from the dropdown.