Chat Completions

Fully OpenAI-compatible. Use any OpenAI SDK by changing `base_url` to Nova's endpoint.

POST https://api.nova.ai/v1/chat/completions

Parameters

`model` (string, required)

Model ID in the format "provider/model-slug", e.g. "deepseek/deepseek-r1".

`messages` (array, required)

Array of message objects with role (system/user/assistant) and content.

`temperature` (number, optional)

Sampling temperature between 0 and 2. Defaults to 1.

`max_tokens` (integer, optional)

Maximum tokens to generate in the completion.

`stream` (boolean, optional)

Stream the response using server-sent events. Defaults to false.
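The parameters above combine into a single JSON request body. A minimal sketch of what the endpoint expects (the `max_tokens` value here is an illustrative choice, not a documented default):

```python
import json

# Request body for POST /v1/chat/completions.
# model and messages are required; the rest are optional.
body = {
    "model": "deepseek/deepseek-r1",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is the capital of France?"},
    ],
    "temperature": 0.7,
    "max_tokens": 256,   # illustrative value, no documented default
    "stream": False,
}

print(json.dumps(body, indent=2))
```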

Example

```python
from openai import OpenAI

client = OpenAI(
    base_url="https://api.nova.ai/v1",
    api_key="nova-YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="deepseek/deepseek-r1",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user",   "content": "What is the capital of France?"},
    ],
    temperature=0.7,
)

print(response.choices[0].message.content)
```

```bash
curl https://api.nova.ai/v1/chat/completions \
  -H "Authorization: Bearer nova-YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "deepseek/deepseek-r1",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```
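With `stream: true`, an OpenAI-compatible endpoint sends server-sent events: one `data: <json>` line per chunk, each carrying an incremental delta, terminated by `data: [DONE]`. A sketch of assembling the text from that wire format (the sample lines below are illustrative, not a captured response):

```python
import json

# Illustrative SSE lines, as an OpenAI-compatible streaming
# response would deliver them (not a real captured response).
sample_sse = [
    'data: {"choices": [{"delta": {"content": "Hello"}}]}',
    'data: {"choices": [{"delta": {"content": " world"}}]}',
    "data: [DONE]",
]

text = ""
for line in sample_sse:
    payload = line.removeprefix("data: ").strip()
    if payload == "[DONE]":   # sentinel marking end of stream
        break
    delta = json.loads(payload)["choices"][0]["delta"].get("content")
    if delta:
        text += delta

print(text)  # Hello world
```

When using the OpenAI SDK, passing `stream=True` to `client.chat.completions.create` handles this parsing for you and yields chunk objects directly.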