Text to Text
| Parameter | Type | Required | Default / Example | Description |
|---|---|---|---|---|
| model | string | ✅ Yes | claude-3-5-haiku-20241022 | The model to use for the request |
| messages | array | ✅ Yes | [{"role": "user", "content": "Who are you?"}] | Array of message objects for the conversation. Each message must have a role (user or assistant) and content. |
| max_tokens | integer | ✅ Yes | 1024 | The maximum number of tokens to generate before stopping |
| temperature | number | ❌ No | 1.0 | Amount of randomness injected into the response. Ranges from 0.0 to 1.0 |
| top_p | number | ❌ No | 1.0 | Use nucleus sampling. Ranges from 0.0 to 1.0 |
| top_k | integer | ❌ No | null | Only sample from the top K options for each subsequent token |
| stream | boolean | ❌ No | false | Whether to incrementally stream the response using server-sent events |
| stop_sequences | array | ❌ No | - | Custom text sequences that will cause the model to stop generating |
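Taken together, these parameters form the JSON body of a single request. The sketch below assembles such a body as a plain Python dict; the specific stop sequence shown is a hypothetical example value, not part of the table above.

```python
import json

# Build a request body from the parameters in the table above.
# Required fields: model, messages, max_tokens.
payload = {
    "model": "claude-3-5-haiku-20241022",
    "max_tokens": 1024,
    "messages": [
        {"role": "user", "content": "Who are you?"}
    ],
    # Optional sampling controls (defaults shown in the table):
    "temperature": 1.0,   # 0.0-1.0, amount of randomness
    "top_p": 1.0,         # nucleus sampling threshold
    "stream": False,      # set True to stream via server-sent events
    "stop_sequences": ["\n\nHuman:"],  # hypothetical example value
}

# Serialize to the JSON string that would be sent over the wire.
body = json.dumps(payload)
```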
Each object in the messages array has the following structure:
| Field | Type | Required | Description |
|---|---|---|---|
| role | string | ✅ Yes | The role of the message. Can be user or assistant (system prompts are supplied separately, not as a message role). |
| content | array/string | ✅ Yes | The content of the message, either a plain string or an array of content blocks |
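A minimal sketch of a valid messages array, using only the two fields above (the assistant reply text is an illustrative example):

```python
# A short multi-turn conversation: alternating user/assistant turns.
messages = [
    {"role": "user", "content": "Who are you?"},
    {"role": "assistant", "content": "I'm an AI assistant."},  # example text
    {"role": "user", "content": "What can you do?"},
]

# Every message carries exactly the two required fields.
for m in messages:
    assert set(m) == {"role", "content"}
    assert m["role"] in ("user", "assistant")
```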
When content is given as an array rather than a plain string, each content block has the following structure:
| Field | Type | Required | Example | Description |
|---|---|---|---|---|
| type | string | ✅ Yes | text | The type of content |
| text | string | ✅ Yes | "The positive prompt for the generation." | The text content when type is text |
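The string form and the array-of-blocks form are interchangeable for plain text; a sketch of the equivalence, using the example value from the table:

```python
# The same message content expressed two ways.
content_as_string = "The positive prompt for the generation."
content_as_blocks = [
    {"type": "text", "text": "The positive prompt for the generation."}
]

# A message using the array-of-blocks form.
message = {"role": "user", "content": content_as_blocks}
```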