Text to Text

| Parameter | Type | Required | Default | Range | Description |
|---|---|---|---|---|---|
| model | string | ✅ Yes | claude-haiku-4-5-20251001 | - | The model identifier. Use claude-haiku-4-5-20251001 for this model |
| messages | array | ✅ Yes | - | - | Input messages for the conversation. Each message must have a role (user/assistant) and content (string or array of content blocks) |
| max_tokens | integer | ✅ Yes | - | 1-64000 | Maximum number of tokens to generate. The model may stop before reaching this limit |
| temperature | number | ❌ No | 1.0 | 0.0-1.0 | Controls randomness in output. Use lower values (closer to 0.0) for analytical tasks, higher values (closer to 1.0) for creative tasks. Note: even at 0.0, results are not fully deterministic |
| top_p | number | ❌ No | - | 0.0-1.0 | Nucleus sampling threshold. Controls diversity by considering only tokens with cumulative probability up to top_p. Recommended for advanced use only. Do not use with temperature |
| top_k | integer | ❌ No | - | >0 | Only sample from the top K options for each token. Removes low-probability responses. Recommended for advanced use only |
| stream | boolean | ❌ No | false | - | Whether to stream the response incrementally using server-sent events |
| stop_sequences | array | ❌ No | - | Max 8191 sequences | Custom text sequences that will cause the model to stop generating. Each sequence must contain non-whitespace characters |
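
A minimal request sketch using the parameters above, assuming the official anthropic Python SDK and an API key exported as ANTHROPIC_API_KEY (the prompt and stop sequence are illustrative):

```python
import anthropic

# The client reads the API key from the ANTHROPIC_API_KEY environment variable.
client = anthropic.Anthropic()

response = client.messages.create(
    model="claude-haiku-4-5-20251001",  # required: model identifier
    max_tokens=1024,                    # required: cap on generated tokens (1-64000)
    temperature=0.2,                    # optional: lower values suit analytical tasks
    stop_sequences=["END_OF_ANSWER"],   # optional: illustrative custom stop sequence
    messages=[
        {"role": "user", "content": "Summarize the difference between top_p and top_k."}
    ],
)

# The response content is a list of content blocks; text blocks expose .text.
print(response.content[0].text)
```

Setting stream=True instead delivers the response incrementally over server-sent events. Per the table above, temperature and top_p should not be combined in the same request.
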
Each message in the messages array should have the following structure:

| Field | Type | Required | Range | Description |
|---|---|---|---|---|
| role | string | ✅ Yes | - | The role of the message. Can be: user or assistant |
| content | string/array | ✅ Yes | - | The content of the message. Can be a simple string for text-only messages, or an array of content blocks for multimodal content |
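
An illustrative multi-turn messages array using plain-string content (the wording of each turn is hypothetical); the array-of-content-blocks form is described in the next table:

```python
# Each message pairs a role (user or assistant) with its content.
# Content is given here as a plain string; the content-block form is
# described in the table that follows.
messages = [
    {"role": "user", "content": "What does the top_p parameter do?"},
    {"role": "assistant", "content": "It restricts sampling to the smallest set of tokens whose cumulative probability reaches top_p."},
    {"role": "user", "content": "When should I prefer it over temperature?"},
]
```
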
When content is given as an array, each content block has the following structure:

| Field | Type | Required | Range | Description |
|---|---|---|---|---|
| type | string | ✅ Yes | - | Must be text |
| text | string | ✅ Yes | - | The text content |
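
A sketch of a single message written with explicit text content blocks, matching the fields above (the strings are illustrative):

```python
# content as an array of content blocks; each block carries a type and,
# for text blocks, the text field.
message = {
    "role": "user",
    "content": [
        {"type": "text", "text": "Here is a short passage to analyze."},
        {"type": "text", "text": "Which sampling parameters would you adjust for a more creative rewrite?"},
    ],
}
```

In practice, a plain string value for content behaves as shorthand for a single text block.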