Janus API Reference
OpenAI-compatible chat completions API with Janus extensions for artifacts, generative UI, memory, and intelligent agent workflows.
Quick start
The Janus API is compatible with OpenAI's chat completions API. If you're already using OpenAI, change the base URL to Janus and reuse your client.
Base URL
https://janus-gateway-bqou.onrender.com/v1
Endpoint
POST /v1/chat/completions
```bash
curl -X POST "https://janus-gateway-bqou.onrender.com/v1/chat/completions" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "baseline-cli-agent",
    "messages": [
      {"role": "user", "content": "Hello Janus"}
    ],
    "stream": true
  }'
```

Authentication
The Janus gateway is currently open access. API keys and quotas will be added later; you can pass any placeholder string in the OpenAI client for now.
Endpoints
Create chat completions with streaming, tools, and artifacts.
List available models and competitor baselines.
Fetch generated artifacts by ID.
Request parameters
Standard OpenAI parameters
All standard OpenAI chat completion fields are supported.
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| model | string | Yes | - | Model/competitor to use. Currently: "baseline-cli-agent", "baseline-langchain", or "baseline". |
| messages | array | Yes | - | Array of message objects with role and content. |
| messages[].role | "user" \| "assistant" \| "system" | Yes | - | Message author role. |
| messages[].content | string \| ContentPart[] | Yes | - | Text or multimodal content. |
| stream | boolean | No | false | Enable Server-Sent Events streaming for real-time responses. |
| temperature | number | No | 0.7 | Sampling temperature (0-2). Lower = more focused, higher = more creative. |
| max_tokens | integer | No | 4096 | Maximum tokens to generate in the response. |
| user | string | No | - | Unique user identifier for memory features and usage tracking. |
| tools | array | No | - | Function definitions the model can call. |
| tool_choice | string \| object | No | - | Control tool usage: "auto", "none", or a specific tool. |
| metadata | object | No | - | Custom metadata passed through to the response (include routing_decision to pin routing). |
Janus extensions
Janus-specific fields enable memory, routing, and generation controls.
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| competitor_id | string | No | - | Explicitly route to a specific competitor implementation. |
| enable_memory | boolean | No | false | Enable memory extraction and retrieval for personalized responses. |
| routing_decision | "fast_qwen" \| "fast_nemotron" \| "fast_kimi" \| "agent_nemotron" \| "agent_kimi" | No | - | Optional override for the routing decision (set via metadata.routing_decision). |
| generation_flags | object | No | - | Request specific generation types. |
| generation_flags.generate_image | boolean | No | - | Request image generation |
| generation_flags.generate_video | boolean | No | - | Request video generation |
| generation_flags.generate_audio | boolean | No | - | Request audio generation |
| generation_flags.deep_research | boolean | No | - | Enable deep web research |
| generation_flags.web_search | boolean | No | - | Enable web search |
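These extension fields travel in the request body alongside the standard parameters; with the OpenAI Python client they go through `extra_body`. The helper below is a hypothetical convenience, not part of any SDK — only the field names come from the table above:

```python
# Sketch: packaging the Janus extension fields into the extra_body dict
# that the OpenAI Python client forwards verbatim in the request body.
# The helper name and signature are ours (illustrative only).

def janus_extra_body(enable_memory=False, competitor_id=None, **generation_flags):
    """Build the extra_body payload for Janus-specific parameters."""
    body = {}
    if enable_memory:
        body["enable_memory"] = True
    if competitor_id:
        body["competitor_id"] = competitor_id
    if generation_flags:
        # e.g. generate_image=True, web_search=True
        body["generation_flags"] = dict(generation_flags)
    return body

extra = janus_extra_body(enable_memory=True, generate_image=True, web_search=True)
# extra == {"enable_memory": True,
#           "generation_flags": {"generate_image": True, "web_search": True}}
```

Pass the result as `extra_body=extra` to `client.chat.completions.create(...)`.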
Response format
Responses mirror OpenAI's schema with optional Janus fields for artifacts, reasoning, and cost metadata.
Non-streaming response
```json
{
  "id": "chatcmpl-abc123def456",
  "object": "chat.completion",
  "created": 1706123456,
  "model": "baseline-cli-agent",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Quantum computing uses quantum mechanics...",
        "reasoning_content": null,
        "artifacts": [],
        "tool_calls": null
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 15,
    "completion_tokens": 142,
    "total_tokens": 157,
    "cost_usd": 0.00023,
    "sandbox_seconds": 45.2
  }
}
```

Streaming response (SSE)
```
data: {"id":"chatcmpl-abc123","object":"chat.completion.chunk","choices":[{"delta":{"role":"assistant"},"index":0}]}

data: {"id":"chatcmpl-abc123","object":"chat.completion.chunk","choices":[{"delta":{"content":"Quantum"},"index":0}]}

data: {"id":"chatcmpl-abc123","object":"chat.completion.chunk","choices":[{"delta":{"content":" computing"},"index":0}]}

data: [DONE]
```

Janus extensions
Artifacts
When the model generates files, images, or other non-text outputs, they appear in the artifacts array.
```json
{
  "message": {
    "content": "I've created the chart you requested.",
    "artifacts": [
      {
        "id": "artf_x7k9m2p4",
        "type": "image",
        "mime_type": "image/png",
        "display_name": "sales_chart.png",
        "size_bytes": 45678,
        "url": "https://janus-gateway.../v1/artifacts/artf_x7k9m2p4",
        "ttl_seconds": 3600
      }
    ]
  }
}
```

Artifacts under 1 MB are returned as data URLs; larger files use artifact URLs.
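A client therefore has to handle both delivery modes: decode inline data URLs directly and fetch hosted URLs over HTTP. A minimal sketch (the helper name is ours, not part of the API):

```python
import base64

# Sketch: handling both artifact delivery modes. Small artifacts arrive
# as data: URLs with a base64 payload; larger ones as HTTPS URLs to the
# artifacts endpoint, which the caller fetches separately.

def artifact_bytes(artifact):
    """Return raw bytes for an inline artifact, or None if it must be fetched."""
    url = artifact["url"]
    if url.startswith("data:"):
        # data:<mime_type>;base64,<payload>
        payload = url.split(",", 1)[1]
        return base64.b64decode(payload)
    return None  # hosted artifact: GET the URL instead

inline = {"url": "data:image/png;base64," + base64.b64encode(b"\x89PNG").decode()}
hosted = {"url": "https://janus-gateway-bqou.onrender.com/v1/artifacts/artf_x7k9m2p4"}
```

Note the `ttl_seconds` field: hosted artifact URLs expire, so fetch them promptly.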
Generative UI blocks
Responses may contain interactive UI widgets using the html-gen-ui code fence. These blocks render as sandboxed iframes in the Janus chat UI.
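Clients outside the Janus chat UI can locate these blocks in assistant content with a fence-matching pattern and render them however they like. A sketch (the helper is illustrative, not part of any SDK):

```python
import re

# Sketch: extract the HTML source of every html-gen-ui fence in a
# chunk of assistant content. DOTALL lets "." span newlines so the
# whole block body is captured; non-greedy matching stops each match
# at the nearest closing fence.
FENCE = re.compile(r"```html-gen-ui\n(.*?)\n```", re.DOTALL)

def extract_ui_blocks(content: str) -> list[str]:
    """Return the HTML source of each html-gen-ui block in `content`."""
    return FENCE.findall(content)

sample = "Here you go:\n```html-gen-ui\n<p>hi</p>\n```\nDone."
```

Rendering extracted blocks in a sandboxed iframe (as the Janus UI does) is the safe default, since the HTML may include arbitrary scripts.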
Here's an interactive calculator:

```html-gen-ui
<!DOCTYPE html>
<html>
<head>
  <style>
    body { background: #1a1a2e; color: #e0e0e0; padding: 1rem; }
  </style>
</head>
<body>
  <input type="number" id="a"> + <input type="number" id="b">
  <button onclick="calculate()">Calculate</button>
  <p id="result"></p>
  <script>
    function calculate() {
      const a = parseFloat(document.getElementById('a').value) || 0;
      const b = parseFloat(document.getElementById('b').value) || 0;
      document.getElementById('result').textContent = 'Result: ' + (a + b);
    }
  </script>
</body>
</html>
```

Reasoning content
Complex tasks may include intermediate reasoning steps in the reasoning_content field.
```json
{
  "message": {
    "content": "The answer is 42.",
    "reasoning_content": "Let me break this down step by step..."
  }
}
```

Janus stream events
During streaming, special events may appear in the janus field.
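The tool_start event below is an example of the shape; a streaming consumer might dispatch on the delta like this (the handler is a hypothetical sketch, not part of any SDK):

```python
import json

# Sketch: dispatch on a streaming delta. Janus events carry a "janus"
# object instead of text content; plain chunks carry "content".

def handle_delta(delta: dict):
    janus = delta.get("janus")
    if janus:
        return ("event", janus["event"], janus.get("tool_name"))
    if delta.get("content"):
        return ("text", delta["content"])
    return None  # empty delta (e.g. role-only first chunk)

chunk = json.loads(
    '{"delta": {"content": null, "janus": '
    '{"event": "tool_start", "tool_name": "web_search", '
    '"metadata": {"query": "latest AI news"}}}}'
)
```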
```json
{
  "delta": {
    "content": null,
    "janus": {
      "event": "tool_start",
      "tool_name": "web_search",
      "metadata": {"query": "latest AI news"}
    }
  }
}
```

Code examples
```python
import openai

client = openai.OpenAI(
    base_url="https://janus-gateway-bqou.onrender.com/v1",
    api_key="not-required"  # Currently open access
)

response = client.chat.completions.create(
    model="baseline-cli-agent",
    messages=[
        {"role": "user", "content": "Explain quantum computing in simple terms"}
    ],
    stream=True
)

for chunk in response:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")
```

Multimodal input
Send image URLs or base64 data URLs inside the message content array.
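Building the base64 data URL is the only non-obvious step; a small helper (name and signature ours, illustrative only) keeps the content parts consistent:

```python
import base64

# Sketch: build an image_url content part from raw bytes as a base64
# data URL, matching the part shape used in the examples below.

def image_part(data: bytes, mime: str = "image/png", detail: str = "auto"):
    b64 = base64.b64encode(data).decode()
    return {
        "type": "image_url",
        "image_url": {"url": f"data:{mime};base64,{b64}", "detail": detail},
    }

part = image_part(b"\x89PNG\r\n")
```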
Python
```python
# Image analysis with URL
response = client.chat.completions.create(
    model="baseline-cli-agent",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What's in this image?"},
                {
                    "type": "image_url",
                    "image_url": {
                        "url": "https://example.com/image.jpg",
                        "detail": "high"  # "auto", "low", or "high"
                    }
                }
            ]
        }
    ]
)

# Image analysis with base64
import base64

with open("image.png", "rb") as f:
    image_data = base64.b64encode(f.read()).decode()

response = client.chat.completions.create(
    model="baseline-cli-agent",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image"},
                {
                    "type": "image_url",
                    "image_url": {
                        "url": f"data:image/png;base64,{image_data}"
                    }
                }
            ]
        }
    ]
)
```

TypeScript
```typescript
// Image analysis with TypeScript
const response = await client.chat.completions.create({
  model: 'baseline-cli-agent',
  messages: [
    {
      role: 'user',
      content: [
        { type: 'text', text: "What's in this image?" },
        {
          type: 'image_url',
          image_url: {
            url: 'https://example.com/image.jpg',
            detail: 'high',
          },
        },
      ],
    },
  ],
});
```

Memory features
Provide a stable user ID and set enable_memory to persist and retrieve user memories.
```python
# Enable memory for personalized responses
response = client.chat.completions.create(
    model="baseline-cli-agent",
    messages=[
        {"role": "user", "content": "Remember that my favorite color is blue"}
    ],
    user="user_abc123",  # Required for memory features
    extra_body={
        "enable_memory": True
    }
)

# Later conversation - memories are automatically retrieved
response = client.chat.completions.create(
    model="baseline-cli-agent",
    messages=[
        {"role": "user", "content": "What's my favorite color?"}
    ],
    user="user_abc123",
    extra_body={
        "enable_memory": True
    }
)
# Response: "Your favorite color is blue!"
```

Error handling
Errors follow the OpenAI schema with an error object, code, and parameter hints.
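A client can unpack the payload shown below and decide whether to retry; the retryable-code set here is our own guess, not defined by the API:

```python
import json

# Sketch: unpack an OpenAI-style error payload. RETRYABLE is an
# assumption about which codes merit a retry, not API-defined.
RETRYABLE = {"rate_limit_exceeded", "server_error"}

def parse_error(body: str):
    """Return (code, offending param, should_retry) for an error body."""
    err = json.loads(body)["error"]
    return err["code"], err.get("param"), err["code"] in RETRYABLE

body = json.dumps({"error": {"message": 'The model "x" does not exist.',
                             "type": "invalid_request_error",
                             "param": "model",
                             "code": "model_not_found"}})
```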
```json
{
  "error": {
    "message": "The model \"baseline-cli-agent\" does not exist.",
    "type": "invalid_request_error",
    "param": "model",
    "code": "model_not_found"
  }
}
```