POST /v1/messages
claude-3-5-haiku-20241022 (file analysis)
This endpoint follows Claude’s official Messages API format for file analysis.
curl -X POST "https://gptproto.com/v1/messages" \
  -H "Authorization: YOUR_API_KEY" \
  -H "anthropic-version: 2023-06-01" \
  -H "Content-Type: application/json" \
  -d '{
  "model": "claude-3-5-haiku-20241022",
  "max_tokens": 1024,
  "messages": [
    {
      "role": "user",
      "content": [
        {
          "type": "text",
          "text": "Please analyze this PDF document and provide a summary of its content"
        },
        {
          "type": "document",
          "source": {
            "type": "url",
            "url": "https://www.bt.cn/data/api-doc.pdf"
          }
        }
      ]
    }
  ]
}'
Example error response (returned when the API key is missing or invalid):

{
  "error": {
    "type": "authentication_error",
    "message": "Invalid API key"
  }
}
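The same request can be assembled from a client language. Below is a minimal Python sketch; the endpoint URL and the raw `Authorization` header follow the curl example above (your gateway may expect a `Bearer ` prefix instead), and `build_payload`/`build_headers` are illustrative helper names, not part of the API:

```python
# Sketch of a client for the file-analysis endpoint shown above.
# Assumptions: endpoint URL and raw Authorization header come from the
# curl example; helper names are illustrative.

API_URL = "https://gptproto.com/v1/messages"

def build_payload(prompt: str, file_url: str,
                  model: str = "claude-3-5-haiku-20241022",
                  max_tokens: int = 1024) -> dict:
    """Return the JSON body for a URL-based document analysis request."""
    return {
        "model": model,
        "max_tokens": max_tokens,
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": prompt},
                    {"type": "document",
                     "source": {"type": "url", "url": file_url}},
                ],
            }
        ],
    }

def build_headers(api_key: str) -> dict:
    """Headers matching the curl example (no Bearer prefix assumed)."""
    return {
        "Authorization": api_key,
        "anthropic-version": "2023-06-01",
        "Content-Type": "application/json",
    }

# To actually send it (requires the third-party `requests` package):
# resp = requests.post(API_URL,
#                      headers=build_headers("YOUR_API_KEY"),
#                      json=build_payload("Please summarize this PDF",
#                                         "https://example.com/doc.pdf"))
```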

Parameters

| Parameter | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| model | string | ✅ Yes | claude-3-5-haiku-20241022 | The model to use for file analysis |
| messages | array | ✅ Yes | - | Array of message objects for the conversation (structure described below) |
| max_tokens | integer | ✅ Yes | 1024 | The maximum number of tokens to generate before stopping |
| temperature | number | ❌ No | 1.0 | Amount of randomness injected into the response. Ranges from 0.0 to 1.0 |
| top_p | number | ❌ No | 1.0 | Use nucleus sampling. Ranges from 0.0 to 1.0 |
| top_k | integer | ❌ No | null | Only sample from the top K options for each subsequent token |
| stream | boolean | ❌ No | false | Whether to incrementally stream the response using server-sent events |
| stop_sequences | array | ❌ No | - | Custom text sequences that will cause the model to stop generating |

Each message must have a role (user or assistant) and content. Content can include:

- Text blocks with type "text"
- Document blocks with type "document" containing a source object whose type is "base64" or "url":
  - For base64: a media_type (e.g., "application/pdf") and data (a base64-encoded string)
  - For url: a url field with the file URL

Supported file formats: PDF, DOCX, XLSX, TXT, CSV, JSON, XML, HTML. Maximum file size: 20 MB.

Example with URL:

```json
[
  {
    "role": "user",
    "content": [
      { "type": "text", "text": "Please analyze this document" },
      { "type": "document", "source": { "type": "url", "url": "https://example.com/document.pdf" } }
    ]
  }
]
```

Example with base64:

```json
[
  {
    "role": "user",
    "content": [
      { "type": "text", "text": "Please analyze this document" },
      { "type": "document", "source": { "type": "base64", "media_type": "application/pdf", "data": "JVBERi0xLjQKJeLjz9MK..." } }
    ]
  }
]
```
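For the base64 path you encode the file bytes yourself before embedding them in the message. A minimal Python sketch (the helper name and the 20 MB pre-check are illustrative, not part of the API):

```python
import base64

MAX_FILE_BYTES = 20 * 1024 * 1024  # 20 MB limit stated in the parameters table

def make_base64_document_block(file_bytes: bytes,
                               media_type: str = "application/pdf") -> dict:
    """Wrap raw file bytes in a base64 document content block."""
    if len(file_bytes) > MAX_FILE_BYTES:
        raise ValueError("file exceeds the 20 MB limit")
    return {
        "type": "document",
        "source": {
            "type": "base64",
            "media_type": media_type,
            "data": base64.b64encode(file_bytes).decode("ascii"),
        },
    }

# Usage: read a local file and place the block in a user message's content array.
# with open("report.pdf", "rb") as f:
#     block = make_base64_document_block(f.read())
```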

Messages Array Structure

Each message object in the messages array should have the following structure:
| Field | Type | Required | Description |
| --- | --- | --- | --- |
| role | string | ✅ Yes | The role of the message. Can be: user, assistant, or system |
| content | array/string | ✅ Yes | The content of the message. Either a plain string or an array of content blocks |

Content Array Structure (when content is an array)

| Field | Type | Required | Example | Description |
| --- | --- | --- | --- | --- |
| type | string | ✅ Yes | text | The type of content block: text or document |
| text | string | ✅ Yes (for text blocks) | "Please analyze this document" | The text content when type is text |
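As an illustration of the rules in the two tables above, a small client-side check might look like this (validate_message is a hypothetical helper for pre-flight validation, not part of the API):

```python
def validate_message(message: dict) -> list[str]:
    """Return a list of problems with a message object, per the structure tables."""
    errors = []
    if message.get("role") not in ("user", "assistant", "system"):
        errors.append("role must be user, assistant, or system")
    content = message.get("content")
    if isinstance(content, str):
        return errors  # plain-string content is allowed
    if not isinstance(content, list):
        errors.append("content must be a string or an array of content blocks")
        return errors
    for i, block in enumerate(content):
        btype = block.get("type")
        if btype == "text":
            if not isinstance(block.get("text"), str):
                errors.append(f"block {i}: text blocks need a string 'text' field")
        elif btype == "document":
            source = block.get("source", {})
            if source.get("type") == "url":
                if "url" not in source:
                    errors.append(f"block {i}: url sources need a 'url' field")
            elif source.get("type") == "base64":
                if "media_type" not in source or "data" not in source:
                    errors.append(f"block {i}: base64 sources need 'media_type' and 'data'")
            else:
                errors.append(f"block {i}: source type must be 'url' or 'base64'")
        else:
            errors.append(f"block {i}: type must be 'text' or 'document'")
    return errors
```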