Generate chat completions with image and text inputs using a Nova ILM model.
POST https://mavi-backend.memories.ai/serve/api/v2/iu/chat/completions

Image Understanding (ILM) endpoints use the /iu path prefix; Video Understanding (VLM) endpoints use /vu instead. Model IDs require the nova: prefix when used in the model parameter (e.g., nova:us.amazon.nova-lite-v1:0).
| Model | Input Price | Output Price |
|---|---|---|
| us.amazon.nova-premier-v1:0 | $2.50/1M tokens | $12.50/1M tokens |
| us.amazon.nova-pro-v1:0 | $0.80/1M tokens | $3.20/1M tokens |
| us.amazon.nova-2-lite-v1:0 | $0.33/1M tokens | $2.75/1M tokens |
| us.amazon.nova-lite-v1:0 | $0.06/1M tokens | $0.24/1M tokens |
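To see how these per-token prices combine in practice, here is a small sketch. The prices are copied from the table above; the `estimate_cost` helper is hypothetical, not part of the API:

```python
# Hypothetical helper: estimate request cost from the pricing table above.
# Prices are USD per 1M tokens, as (input_price, output_price).
PRICES = {
    "us.amazon.nova-premier-v1:0": (2.50, 12.50),
    "us.amazon.nova-pro-v1:0": (0.80, 3.20),
    "us.amazon.nova-2-lite-v1:0": (0.33, 2.75),
    "us.amazon.nova-lite-v1:0": (0.06, 0.24),
}

def estimate_cost(model: str, prompt_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of one completion."""
    input_price, output_price = PRICES[model]
    return (prompt_tokens * input_price + output_tokens * output_price) / 1_000_000

# e.g. 10,000 prompt tokens + 2,000 output tokens on nova-lite costs $0.00108
cost = estimate_cost("us.amazon.nova-lite-v1:0", 10_000, 2_000)
```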
Remember to include the nova: prefix: "model": "nova:us.amazon.nova-lite-v1:0"

| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| model | string | Yes | - | The model to use (e.g., nova:us.amazon.nova-lite-v1:0) |
| messages | array | Yes | - | Array of message objects. Each message contains `role` (one of `system`, `user`, `assistant`) and `content` (a string, or an array of items each with `type` set to `text` or `image_url`, plus `text` with the text content when type is `text`, or `image_url` with an image URL or base64-encoded image when type is `image_url`) |
| temperature | number | No | 1.0 | Controls randomness: 0.0-2.0, higher = more random |
| max_tokens | integer | No | 1000 | Maximum number of tokens to generate |
| top_p | number | No | 1.0 | Nucleus sampling: 0.0-1.0, consider tokens with top_p probability mass |
| frequency_penalty | number | No | 0.0 | Reduces repetition of frequent tokens: -2.0 to 2.0 |
| presence_penalty | number | No | 0.0 | Increases likelihood of new topics: -2.0 to 2.0 |
| n | integer | No | 1 | Number of completions to generate |
| stream | boolean | No | false | Whether to stream the response |
| stop | string \| array \| null | No | null | Stop sequences. Can be a string, array of strings, or null |
| extra_body | object | No | - | Additional body parameters: `metadata` (metadata object), `toolConfig` (tool configuration), `tools` (array of tool specifications) |
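Putting the request parameters together, a body with mixed text and image content might look like the following sketch. The `Authorization: Bearer` header scheme, the placeholder API key, and the example image URL are assumptions, not confirmed by this page:

```python
import json
import urllib.request

API_KEY = "YOUR_API_KEY"  # placeholder; substitute your real credentials
URL = "https://mavi-backend.memories.ai/serve/api/v2/iu/chat/completions"

payload = {
    "model": "nova:us.amazon.nova-lite-v1:0",  # note the required nova: prefix
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What is in this image?"},
                # Per the table above, image_url takes a URL or base64 image
                {"type": "image_url", "image_url": "https://example.com/cat.jpg"},
            ],
        },
    ],
    "temperature": 0.7,
    "max_tokens": 500,
}

request = urllib.request.Request(
    URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {API_KEY}",  # auth scheme is an assumption
    },
    method="POST",
)
# response = urllib.request.urlopen(request)  # uncomment to actually send
```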
The response contains the following fields:

| Field | Type | Description |
|---|---|---|
| id | string | Unique identifier for the chat completion |
| object | string | Object type, always "chat.completion" |
| created | integer | Unix timestamp of when the completion was created |
| model | string | The model used for the completion |
| choices | array | Array of completion choices |
| choices[].index | integer | Index of the choice in the choices array |
| choices[].message | object | Message object containing the assistant's response |
| choices[].message.role | string | Role of the message, always "assistant" |
| choices[].message.content | string | Content of the message |
| choices[].finish_reason | string | Reason why the completion finished |
| usage | object | Token usage information |
| usage.prompt_tokens | integer | Number of tokens in the prompt |
| usage.output_token | integer | Number of tokens in the completion output |
| usage.total_tokens | integer | Total number of tokens used |
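Assembling the documented fields into one object, a parsed response can be handled as below. The `id`, `created`, and `model` values come from this page's examples; the message content and token counts are illustrative placeholders, not real API output:

```python
# Representative response built from the documented response fields.
# The assistant content and usage numbers are placeholders.
response = {
    "id": "chatcmpl_703da855e7e0415296d5365265a1b323",
    "object": "chat.completion",
    "created": 1767097779,
    "model": "nova:amazon.nova-lite-v1:0",
    "choices": [
        {
            "index": 0,
            "message": {"role": "assistant", "content": "A cat sitting on a sofa."},
            "finish_reason": "stop",
        }
    ],
    # "output_token" is the field name as documented above
    "usage": {"prompt_tokens": 1842, "output_token": 12, "total_tokens": 1854},
}

# Pull out the assistant's reply and the token accounting.
answer = response["choices"][0]["message"]["content"]
total_tokens = response["usage"]["total_tokens"]
```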