Generate chat completions using a Nova LLM with image and text inputs.
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| model | string | Yes | - | The model to use (e.g., nova:amazon.nova-lite-v1:0) |
| messages | array | Yes | - | Array of message objects. Each message has a role (system, user, or assistant) and content. Content can be a string or an array of parts; each part has a type (text or image_url) plus text (the text content, when type is text) or image_url (an image URL or base64-encoded image, when type is image_url). |
| temperature | number | No | 1.0 | Controls randomness: 0.0-2.0, higher = more random |
| max_tokens | integer | No | 1000 | Maximum number of tokens to generate |
| top_p | number | No | 1.0 | Nucleus sampling: 0.0-1.0, consider tokens with top_p probability mass |
| frequency_penalty | number | No | 0.0 | Reduces repetition of frequent tokens: -2.0 to 2.0 |
| presence_penalty | number | No | 0.0 | Increases likelihood of new topics: -2.0 to 2.0 |
| n | integer | No | 1 | Number of completions to generate |
| stream | boolean | No | false | Whether to stream the response |
| stop | string \| array \| null | No | null | Stop sequences. Can be a string, an array of strings, or null |
| extra_body | object | No | - | Additional body parameters: metadata (metadata object), toolConfig (tool configuration), and tools (array of tool specifications) |
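These parameters follow the OpenAI chat-completions format, so a request can be sketched with the OpenAI Python SDK pointed at the gateway. This is a minimal sketch rather than a definitive client: the base_url, API key, and image URL below are placeholders, and the exact shape of the image_url part should match what your gateway accepts.

```python
from openai import OpenAI

# Placeholder endpoint and key; substitute the gateway URL and credentials you use.
client = OpenAI(base_url="https://your-gateway.example.com/v1", api_key="YOUR_API_KEY")

response = client.chat.completions.create(
    model="nova:amazon.nova-lite-v1:0",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What is shown in this image?"},
                {
                    "type": "image_url",
                    # Documented above as an image URL or a base64-encoded image.
                    # Some OpenAI-compatible gateways expect a nested {"url": ...}
                    # object instead; adjust to what your gateway accepts.
                    "image_url": "https://example.com/cat.png",
                },
            ],
        },
    ],
    temperature=0.7,
    max_tokens=1000,
    top_p=1.0,
    n=1,
    stream=False,
)

print(response.choices[0].message.content)
```

Fields documented under extra_body (metadata, toolConfig, tools) can be passed through the SDK's extra_body argument if needed.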
The response is a chat completion object with the following fields:

| Field | Type | Description |
|---|---|---|
| id | string | Unique identifier for the chat completion |
| object | string | Object type, always "chat.completion" |
| created | integer | Unix timestamp of when the completion was created |
| model | string | The model used for the completion |
| choices | array | Array of completion choices |
| choices[].index | integer | Index of the choice in the choices array |
| choices[].message | object | Message object containing the assistant's response |
| choices[].message.role | string | Role of the message, always "assistant" |
| choices[].message.content | string | Content of the message |
| choices[].finish_reason | string | Reason why the completion finished |
| usage | object | Token usage information |
| usage.prompt_tokens | integer | Number of tokens in the prompt |
| usage.completion_tokens | integer | Number of tokens in the completion output |
| usage.total_tokens | integer | Total number of tokens used |
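Put together, a returned completion looks roughly like the sketch below. The id, created, and model values are the examples shown on this page; the message content, finish_reason, and token counts are placeholders.

```python
# Representative response shape; content, finish_reason, and counts are placeholders.
example_response = {
    "id": "chatcmpl_703da855e7e0415296d5365265a1b323",
    "object": "chat.completion",
    "created": 1767097779,
    "model": "nova:amazon.nova-lite-v1:0",
    "choices": [
        {
            "index": 0,
            "message": {"role": "assistant", "content": "The image shows ..."},
            "finish_reason": "stop",
        }
    ],
    "usage": {"prompt_tokens": 0, "completion_tokens": 0, "total_tokens": 0},
}

# Typical field access: the assistant text and total token usage.
text = example_response["choices"][0]["message"]["content"]
total = example_response["usage"]["total_tokens"]
```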
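When stream is true, the completion is returned incrementally rather than as a single body. Assuming the gateway follows the OpenAI streaming format (an assumption, not confirmed by this page), a client loop can be sketched as:

```python
from openai import OpenAI

# Placeholder endpoint and key, as in the request example above.
client = OpenAI(base_url="https://your-gateway.example.com/v1", api_key="YOUR_API_KEY")

stream = client.chat.completions.create(
    model="nova:amazon.nova-lite-v1:0",
    messages=[{"role": "user", "content": "Describe this model in one sentence."}],
    max_tokens=200,
    stream=True,
)

# Print text deltas as they arrive; chunks without content are skipped.
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
print()
```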