Retrieve the LLM conversation messages for a specific call. This endpoint returns the messages exchanged between the system, user, and AI assistant during the call, including system prompts, user inputs, and AI responses.
Messages vs Transcripts: Messages are the LLM conversation log (what the AI “thought” and said). Transcripts are the raw speech-to-text output of what was actually spoken. Use messages for debugging AI behavior; use transcripts for the actual conversation content.
Path Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| call_id | integer | Yes | The internal call ID (not the call SID) |
Query Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| role | string | No | Filter by message role: system, user, or assistant |
Request
```bash
curl "https://api.burki.dev/api/v1/calls/101/messages" \
  -H "Authorization: Bearer YOUR_API_KEY"
```
With Role Filter
```bash
curl "https://api.burki.dev/api/v1/calls/101/messages?role=assistant" \
  -H "Authorization: Bearer YOUR_API_KEY"
```
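The same request can be built from Python with the requests library; a params dict handles the query-string encoding. A minimal sketch (YOUR_API_KEY is a placeholder; preparing the request does not send it):

```python
import requests

# Build (but do not send) the filtered request; params is URL-encoded for us.
req = requests.Request(
    "GET",
    "https://api.burki.dev/api/v1/calls/101/messages",
    params={"role": "assistant"},
    headers={"Authorization": "Bearer YOUR_API_KEY"},
).prepare()

print(req.url)
# To actually send it: requests.Session().send(req)
```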
Response
Returns an array of chat message objects ordered by message index.
```json
[
  {
    "id": 1,
    "call_id": 101,
    "role": "system",
    "content": "You are a friendly customer service assistant for Acme Corp...",
    "message_index": 0,
    "timestamp": "2024-01-15T10:00:00Z",
    "llm_provider": "openai",
    "llm_model": "gpt-4o-mini",
    "prompt_tokens": null,
    "completion_tokens": null,
    "total_tokens": null
  },
  {
    "id": 2,
    "call_id": 101,
    "role": "user",
    "content": "Hi, I'd like to check on my order status.",
    "message_index": 1,
    "timestamp": "2024-01-15T10:00:05Z",
    "llm_provider": null,
    "llm_model": null,
    "prompt_tokens": null,
    "completion_tokens": null,
    "total_tokens": null
  },
  {
    "id": 3,
    "call_id": 101,
    "role": "assistant",
    "content": "Of course! I'd be happy to help you check your order status. Could you please provide your order number or the email address associated with your account?",
    "message_index": 2,
    "timestamp": "2024-01-15T10:00:07Z",
    "llm_provider": "openai",
    "llm_model": "gpt-4o-mini",
    "prompt_tokens": 150,
    "completion_tokens": 35,
    "total_tokens": 185
  }
]
```
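Because the token fields are null on non-assistant messages, any aggregation over this array should coalesce nulls to zero. A minimal sketch over a trimmed copy of the sample payload above (values are the illustrative ones from the example, not real data):

```python
# Trimmed copy of the sample response above.
messages = [
    {"role": "system", "message_index": 0, "total_tokens": None},
    {"role": "user", "message_index": 1, "total_tokens": None},
    {"role": "assistant", "message_index": 2, "total_tokens": 185},
]

# Token fields are null except on assistant messages, so coalesce None to 0.
total_tokens = sum(m["total_tokens"] or 0 for m in messages)

# message_index gives the conversation order.
ordered = [m["role"] for m in sorted(messages, key=lambda m: m["message_index"])]

print(total_tokens)  # 185
print(ordered)
```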
Response Fields
| Field | Type | Description |
|---|---|---|
| id | integer | Unique message ID |
| call_id | integer | ID of the parent call |
| role | string | Message role: system, user, or assistant |
| content | string | The message content |
| message_index | integer | Position in the conversation (0-indexed) |
| timestamp | string | When the message was recorded (ISO 8601) |
| llm_provider | string | LLM provider used (e.g., openai, anthropic); null for user messages |
| llm_model | string | Specific model used (e.g., gpt-4o-mini); null for user messages |
| prompt_tokens | integer | Number of prompt tokens used; null except on assistant messages |
| completion_tokens | integer | Number of completion tokens generated; null except on assistant messages |
| total_tokens | integer | Total tokens for this message; null except on assistant messages |
Message Roles
| Role | Description |
|---|---|
| system | System prompt and instructions for the AI |
| user | User's spoken input (transcribed) |
| assistant | AI assistant's response |
Error Responses
404 Not Found
```json
{
  "detail": "Call with ID 101 not found in your organization"
}
```
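Callers can branch on the status code before parsing the body. A minimal sketch (parse_messages_response is a hypothetical helper, not part of any SDK; body is the already-decoded JSON: a list on success, a dict with "detail" on error):

```python
def parse_messages_response(status_code, body):
    """Hypothetical helper: interpret an HTTP response from this endpoint."""
    if status_code == 200:
        return body  # list of message objects
    if status_code == 404:
        raise LookupError(body.get("detail", "Call not found"))
    raise RuntimeError(f"Unexpected status {status_code}: {body}")

# Success path
msgs = parse_messages_response(200, [{"id": 1, "role": "system"}])

# 404 path
try:
    parse_messages_response(
        404, {"detail": "Call with ID 101 not found in your organization"}
    )
except LookupError as exc:
    print(exc)
```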
Use Cases
Debug AI Behavior
```python
import requests

def analyze_call_conversation(call_id):
    response = requests.get(
        f"https://api.burki.dev/api/v1/calls/{call_id}/messages",
        headers={"Authorization": "Bearer YOUR_API_KEY"},
    )
    messages = response.json()

    # Get the system prompt (may be absent if no system message was logged)
    system_prompt = next(
        (m["content"] for m in messages if m["role"] == "system"),
        None,
    )
    if system_prompt:
        print(f"System Prompt: {system_prompt[:100]}...")

    # Count exchanges
    user_messages = [m for m in messages if m["role"] == "user"]
    assistant_messages = [m for m in messages if m["role"] == "assistant"]
    print(f"User messages: {len(user_messages)}")
    print(f"Assistant messages: {len(assistant_messages)}")

    # Calculate token usage (token fields are null on non-assistant messages)
    total_tokens = sum(m.get("total_tokens") or 0 for m in messages)
    print(f"Total tokens used: {total_tokens}")
```
Export Conversation for Review
```python
def export_conversation(call_id):
    response = requests.get(
        f"https://api.burki.dev/api/v1/calls/{call_id}/messages",
        headers={"Authorization": "Bearer YOUR_API_KEY"},
    )
    messages = response.json()

    # Format as a readable conversation
    conversation = []
    for msg in messages:
        if msg["role"] == "system":
            continue  # Skip system prompt for readability
        speaker = "Customer" if msg["role"] == "user" else "AI"
        conversation.append(f"{speaker}: {msg['content']}")
    return "\n\n".join(conversation)
```
Analyze Token Costs
```python
def calculate_call_llm_costs(call_id):
    response = requests.get(
        f"https://api.burki.dev/api/v1/calls/{call_id}/messages",
        headers={"Authorization": "Bearer YOUR_API_KEY"},
    )
    messages = response.json()

    total_prompt_tokens = 0
    total_completion_tokens = 0
    for msg in messages:
        if msg.get("prompt_tokens"):
            total_prompt_tokens += msg["prompt_tokens"]
        if msg.get("completion_tokens"):
            total_completion_tokens += msg["completion_tokens"]

    # GPT-4o-mini pricing (example rates; check current provider pricing)
    prompt_cost = total_prompt_tokens * 0.00015 / 1000
    completion_cost = total_completion_tokens * 0.0006 / 1000

    return {
        "prompt_tokens": total_prompt_tokens,
        "completion_tokens": total_completion_tokens,
        "estimated_cost": prompt_cost + completion_cost,
    }
```
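The pricing arithmetic above generalizes to any per-million-token rates. A small pure helper, using illustrative rates rather than official pricing:

```python
def estimate_llm_cost(prompt_tokens, completion_tokens,
                      prompt_rate_per_m, completion_rate_per_m):
    """Estimate cost in dollars given per-million-token rates (illustrative)."""
    return (prompt_tokens * prompt_rate_per_m
            + completion_tokens * completion_rate_per_m) / 1_000_000

# Same numbers as the sample response (150 prompt + 35 completion tokens)
# at the example rates used above ($0.15 / $0.60 per 1M tokens).
cost = estimate_llm_cost(150, 35, 0.15, 0.60)
print(f"${cost:.6f}")
```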
Notes
- Messages are stored in the order they occurred during the conversation
- The first message is typically the system prompt (role: system)
- Token counts are only available for assistant messages (LLM responses)
- For the actual spoken words, use the Get Transcripts endpoint
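Since message_index is 0-indexed and reflects conversation order, a quick integrity check over a fetched array is straightforward; a sketch (has_contiguous_indices is a hypothetical helper operating on sample data):

```python
def has_contiguous_indices(messages):
    """Check that message_index values form 0..n-1 with no gaps."""
    indices = sorted(m["message_index"] for m in messages)
    return indices == list(range(len(messages)))

sample = [
    {"role": "system", "message_index": 0},
    {"role": "user", "message_index": 1},
    {"role": "assistant", "message_index": 2},
]

print(has_contiguous_indices(sample))                  # True
print(has_contiguous_indices(sample[:1] + sample[2:])) # False: index 1 missing
```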