Retrieve the LLM conversation messages for a specific call. This returns the actual messages exchanged between the system, user, and AI assistant during the call, including system prompts, user inputs, and AI responses.
Messages vs Transcripts: Messages are the LLM conversation log (what the AI “thought” and said). Transcripts are the raw speech-to-text output of what was actually spoken. Use messages for debugging AI behavior; use transcripts for the actual conversation content.

Path Parameters

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| `call_id` | integer | Yes | The internal call ID (not the call SID) |

Query Parameters

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| `role` | string | No | Filter by message role: `system`, `user`, or `assistant` |

Request

curl "https://api.burki.dev/api/v1/calls/101/messages" \
  -H "Authorization: Bearer YOUR_API_KEY"

With Role Filter

curl "https://api.burki.dev/api/v1/calls/101/messages?role=assistant" \
  -H "Authorization: Bearer YOUR_API_KEY"
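
The same requests can be made from Python. A minimal sketch using the `requests` library, with the base URL and `role` filter taken from the curl examples above (`messages_request` and `get_messages` are illustrative helper names, not part of any SDK):

```python
import requests

API_BASE = "https://api.burki.dev/api/v1"

def messages_request(call_id, role=None):
    # Build the URL and query parameters for the messages endpoint.
    params = {"role": role} if role else {}
    return f"{API_BASE}/calls/{call_id}/messages", params

def get_messages(call_id, api_key, role=None):
    url, params = messages_request(call_id, role)
    resp = requests.get(
        url,
        params=params,
        headers={"Authorization": f"Bearer {api_key}"},
    )
    resp.raise_for_status()  # surface 4xx/5xx errors early
    return resp.json()
```

Separating URL construction from the HTTP call keeps the request logic easy to test without hitting the API.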

Response

Returns an array of chat message objects ordered by message index.
[
  {
    "id": 1,
    "call_id": 101,
    "role": "system",
    "content": "You are a friendly customer service assistant for Acme Corp...",
    "message_index": 0,
    "timestamp": "2024-01-15T10:00:00Z",
    "llm_provider": "openai",
    "llm_model": "gpt-4o-mini",
    "prompt_tokens": null,
    "completion_tokens": null,
    "total_tokens": null
  },
  {
    "id": 2,
    "call_id": 101,
    "role": "user",
    "content": "Hi, I'd like to check on my order status.",
    "message_index": 1,
    "timestamp": "2024-01-15T10:00:05Z",
    "llm_provider": null,
    "llm_model": null,
    "prompt_tokens": null,
    "completion_tokens": null,
    "total_tokens": null
  },
  {
    "id": 3,
    "call_id": 101,
    "role": "assistant",
    "content": "Of course! I'd be happy to help you check your order status. Could you please provide your order number or the email address associated with your account?",
    "message_index": 2,
    "timestamp": "2024-01-15T10:00:07Z",
    "llm_provider": "openai",
    "llm_model": "gpt-4o-mini",
    "prompt_tokens": 150,
    "completion_tokens": 35,
    "total_tokens": 185
  }
]
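
The shape of each message object can be captured as a `TypedDict` for type-checked client code. A sketch with field names taken from the response above; the endpoint already returns messages ordered by `message_index`, so the defensive re-sort is cheap insurance rather than a requirement:

```python
from typing import Optional, TypedDict

class ChatMessage(TypedDict):
    id: int
    call_id: int
    role: str           # "system", "user", or "assistant"
    content: str
    message_index: int  # 0-indexed position in the conversation
    timestamp: str      # ISO 8601
    llm_provider: Optional[str]
    llm_model: Optional[str]
    prompt_tokens: Optional[int]
    completion_tokens: Optional[int]
    total_tokens: Optional[int]

def in_order(messages):
    # Restore conversation order by message_index.
    return sorted(messages, key=lambda m: m["message_index"])
```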

Response Fields

| Field | Type | Description |
| --- | --- | --- |
| `id` | integer | Unique message ID |
| `call_id` | integer | ID of the parent call |
| `role` | string | Message role: `system`, `user`, or `assistant` |
| `content` | string | The message content |
| `message_index` | integer | Position in the conversation (0-indexed) |
| `timestamp` | string | When the message was recorded (ISO 8601) |
| `llm_provider` | string | LLM provider used (e.g., `openai`, `anthropic`) |
| `llm_model` | string | Specific model used (e.g., `gpt-4o-mini`) |
| `prompt_tokens` | integer | Number of prompt tokens used |
| `completion_tokens` | integer | Number of completion tokens generated |
| `total_tokens` | integer | Total tokens for this message |

Message Roles

| Role | Description |
| --- | --- |
| `system` | System prompt and instructions for the AI |
| `user` | User's spoken input (transcribed) |
| `assistant` | AI assistant's response |
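
When you need all three roles at once, a client-side split avoids three separate calls with different `role` filters. A small sketch:

```python
from collections import defaultdict

def by_role(messages):
    # Group messages into {"system": [...], "user": [...], "assistant": [...]}.
    groups = defaultdict(list)
    for m in messages:
        groups[m["role"]].append(m)
    return dict(groups)
```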

Error Responses

404 Not Found

{
  "detail": "Call with ID 101 not found in your organization"
}
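
One way to handle the 404 in client code is to separate the status check from the HTTP call so the logic is easy to test. A sketch; the `detail` field matches the error body above, and `parse_messages_response` is an illustrative helper, not part of any SDK:

```python
def parse_messages_response(status_code, body):
    """Validate a decoded response from the messages endpoint.

    `body` is the already-decoded JSON. Raises LookupError on the
    documented 404, returns the message list on success.
    """
    if status_code == 404:
        raise LookupError(body.get("detail", "Call not found"))
    if status_code != 200:
        raise RuntimeError(f"Unexpected status: {status_code}")
    return body
```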

Use Cases

Debug AI Behavior

import requests

def analyze_call_conversation(call_id):
    response = requests.get(
        f"https://api.burki.dev/api/v1/calls/{call_id}/messages",
        headers={"Authorization": "Bearer YOUR_API_KEY"}
    )
    
    messages = response.json()
    
    # Get the system prompt (there may be none)
    system_prompt = next(
        (m["content"] for m in messages if m["role"] == "system"),
        None
    )
    if system_prompt:
        print(f"System Prompt: {system_prompt[:100]}...")
    
    # Count exchanges
    user_messages = [m for m in messages if m["role"] == "user"]
    assistant_messages = [m for m in messages if m["role"] == "assistant"]
    
    print(f"User messages: {len(user_messages)}")
    print(f"Assistant messages: {len(assistant_messages)}")
    
    # Calculate token usage
    total_tokens = sum(
        m.get("total_tokens", 0) or 0 
        for m in messages
    )
    print(f"Total tokens used: {total_tokens}")

Export Conversation for Review

def export_conversation(call_id):
    response = requests.get(
        f"https://api.burki.dev/api/v1/calls/{call_id}/messages",
        headers={"Authorization": "Bearer YOUR_API_KEY"}
    )
    
    messages = response.json()
    
    # Format as readable conversation
    conversation = []
    for msg in messages:
        if msg["role"] == "system":
            continue  # Skip system prompt for readability
        
        speaker = "Customer" if msg["role"] == "user" else "AI"
        conversation.append(f"{speaker}: {msg['content']}")
    
    return "\n\n".join(conversation)

Analyze Token Costs

def calculate_call_llm_costs(call_id):
    response = requests.get(
        f"https://api.burki.dev/api/v1/calls/{call_id}/messages",
        headers={"Authorization": "Bearer YOUR_API_KEY"}
    )
    
    messages = response.json()
    
    total_prompt_tokens = 0
    total_completion_tokens = 0
    
    for msg in messages:
        if msg.get("prompt_tokens"):
            total_prompt_tokens += msg["prompt_tokens"]
        if msg.get("completion_tokens"):
            total_completion_tokens += msg["completion_tokens"]
    
    # GPT-4o-mini pricing (example)
    prompt_cost = total_prompt_tokens * 0.00015 / 1000
    completion_cost = total_completion_tokens * 0.0006 / 1000
    
    return {
        "prompt_tokens": total_prompt_tokens,
        "completion_tokens": total_completion_tokens,
        "estimated_cost": prompt_cost + completion_cost
    }
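
Sanity-checking the arithmetic above on the sample response: the lone assistant message used 150 prompt and 35 completion tokens, which at the illustrative rates works out to about $0.0000435. A pure helper makes the per-1K-token pricing explicit (the default rates mirror the example above and are not authoritative; check your provider's current pricing):

```python
def estimate_cost(prompt_tokens, completion_tokens,
                  prompt_rate_per_1k=0.00015,
                  completion_rate_per_1k=0.0006):
    # Rates are dollars per 1,000 tokens; defaults are the illustrative
    # gpt-4o-mini figures used in the example above.
    return (prompt_tokens * prompt_rate_per_1k
            + completion_tokens * completion_rate_per_1k) / 1000
```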

Notes

  • Messages are stored in the order they occurred during the conversation
  • The first message is typically the system prompt (role: system)
  • Token counts are only available for assistant messages (LLM responses)
  • For the actual spoken words, use the Get Transcripts endpoint