API Documentation

Integrate with Arlo's powerful, unified API. Access the world's best AI models through a single, consistent interface.

OpenAI-Compatible Format

Arlo is compatible with the OpenAI request schema. You can use the official OpenAI Python library by pointing the `base_url` to our API endpoint.

Request Example (Python)

import openai

client = openai.OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://api.arlo.mom/v1"
)

response = client.chat.completions.create(
    model="gpt-5",
    messages=[
        {"role": "user", "content": "Write a short poem on AI"}
    ]
)

print(response.choices[0].message.content)

Calling Arlo Directly

Alternatively, you can make direct HTTP requests with any HTTP client, such as cURL.

Request Example (cURL)

curl -X POST https://api.arlo.mom/v1/chat/completions \
     -H "Authorization: Bearer YOUR_API_KEY" \
     -H "Content-Type: application/json" \
     -d '{
       "model": "gpt-5",
       "messages": [
         {"role": "user", "content": "Hello!"}
       ]
     }'

Sending Images to AI

You can send images to vision-capable models using base64 encoding or image URLs. Images should be included in the message content array.

Using Base64 Images

curl -X POST https://api.arlo.mom/v1/chat/completions \
     -H "Authorization: Bearer YOUR_API_KEY" \
     -H "Content-Type: application/json" \
     -d '{
       "model": "gpt-5",
       "messages": [
         {
           "role": "user",
           "content": [
             {"type": "text", "text": "What is in this image?"},
             {
               "type": "image_url",
               "image_url": {
                 "url": "data:image/jpeg;base64,/9j/4AAQSkZJRg..."
               }
             }
           ]
         }
       ]
     }'

Using Image URLs

curl -X POST https://api.arlo.mom/v1/chat/completions \
     -H "Authorization: Bearer YOUR_API_KEY" \
     -H "Content-Type: application/json" \
     -d '{
       "model": "gpt-5",
       "messages": [
         {
           "role": "user",
           "content": [
             {"type": "text", "text": "Describe this image"},
             {
               "type": "image_url",
               "image_url": {
                 "url": "https://example.com/image.jpg"
               }
             }
           ]
         }
       ]
     }'

Python Example with Images

import openai
import base64

# Read and encode image
with open("image.jpg", "rb") as image_file:
    base64_image = base64.b64encode(image_file.read()).decode('utf-8')

client = openai.OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://api.arlo.mom/v1"
)

response = client.chat.completions.create(
    model="gpt-5",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What's in this image?"},
                {
                    "type": "image_url",
                    "image_url": {
                        "url": f"data:image/jpeg;base64,{base64_image}"
                    }
                }
            ]
        }
    ]
)

print(response.choices[0].message.content)

Available Models

To use a model, specify its identifier in the `model` parameter of your request. You can also fetch the complete list programmatically via `GET /v1/models`.

  • claude-sonnet-4
  • gpt-5
  • gemini-2.5-pro
  • gemini-2.5-flash
  • deepseek-r1
  • deepai-standard
  • glm-4.5-air
  • gpt-oss-120b
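If you prefer to discover models at runtime, the sketch below calls `GET /v1/models` with only the standard library. It assumes the endpoint returns an OpenAI-style `{"data": [{"id": ...}, ...]}` payload; the helper names are illustrative, not part of the API.

```python
import json
import urllib.request

ARLO_BASE = "https://api.arlo.mom/v1"


def extract_model_ids(payload: dict) -> list:
    """Pull the sorted model identifiers out of a /v1/models response."""
    return sorted(item["id"] for item in payload.get("data", []))


def list_models(api_key: str) -> list:
    """Call GET /v1/models and return the available model identifiers."""
    req = urllib.request.Request(
        f"{ARLO_BASE}/models",
        headers={"Authorization": f"Bearer {api_key}"},
    )
    with urllib.request.urlopen(req) as resp:
        return extract_model_ids(json.load(resp))
```

Calling `list_models("YOUR_API_KEY")` should return the identifiers shown in the list above.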

Decent Provider Models

Set "provider": "decent" in your request to access these high-quality models:

  • gpt-5
  • gpt-5-mini
  • gpt-5-nano
  • gpt-4o-mini
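The `provider` key is an Arlo-specific top-level field in the JSON request body (with the official OpenAI Python client, the `extra_body` parameter can inject it). A minimal stdlib sketch — the helper names are illustrative:

```python
import json
import urllib.request

ARLO_BASE = "https://api.arlo.mom/v1"


def chat_body(model: str, prompt: str, provider=None) -> dict:
    """Build a chat-completions request body, optionally pinning a provider."""
    body = {"model": model, "messages": [{"role": "user", "content": prompt}]}
    if provider is not None:
        body["provider"] = provider  # Arlo-specific top-level field
    return body


def chat(api_key: str, body: dict) -> dict:
    """POST the body to /v1/chat/completions and return the parsed response."""
    req = urllib.request.Request(
        f"{ARLO_BASE}/chat/completions",
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


# reply = chat("YOUR_API_KEY", chat_body("gpt-5-mini", "Hello!", provider="decent"))
```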

Best Provider Models

Use "provider": "best" in your request to access 60+ free models including:

  • deepseek-v3.1, deepseek-chat, deepseek-reasoner
  • claude-sonnet-4, claude-sonnet-4.5, claude-haiku-4.5 (Vision supported)
  • gpt-5-nano, gpt-5-chat, gpt-5-mini (Vision supported)
  • gemini-2.5-flash, gemini-2.5-pro (Vision supported)
  • grok-4, grok-4-think, grok-3-mini
  • o1-pro, o3-mini, o4-mini
  • And 50+ more models

Note: Models marked with "Vision supported" can analyze images sent via the image upload format shown above.

Unreliable Provider Models

Use "provider": "unreliable" for experimental models:

  • deepseek-v3.2-exp
  • gpt-5-mini-2025-08-07
  • sonar-reasoning
  • gemini-2.5-flash
  • claude-3.5-haiku

Image Generation Models

Endpoint: POST /v1/image/completions

Use "provider": "best" with these models:

  • flux-schnell, seed-oss, lucid-origin
  • sdxl, nano-banana, gpt-image-1
  • sd-3.5, sd-3.5-large, dall-e-3
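The request schema for this endpoint isn't shown above; the sketch below assumes an OpenAI-style body with `model` and `prompt` fields alongside the Arlo `provider` key — verify the field names against the live API before relying on them.

```python
import json
import urllib.request

ARLO_BASE = "https://api.arlo.mom/v1"


def image_body(model: str, prompt: str) -> dict:
    """Build an image-generation body ("prompt" field name is an assumption)."""
    return {"model": model, "prompt": prompt, "provider": "best"}


def generate_image(api_key: str, model: str, prompt: str) -> dict:
    """POST to /v1/image/completions and return the parsed JSON response."""
    req = urllib.request.Request(
        f"{ARLO_BASE}/image/completions",
        data=json.dumps(image_body(model, prompt)).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```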

Text-to-Speech Models

Endpoint: POST /v1/audio/completions

Use "provider": "unreliable" for audio generation:

  • gpt-4o-mini-tts - Voices: alloy, echo, fable, onyx, nova, shimmer, coral, verse, ballad, ash, sage, marin, cedar
  • eleven-multilingual-v2 - Voices: Clyde, Roger, Sarah, Laura, Charlie, George, Callum, River, Harry, Liam, Alice, Matilda, Will, Jessica, Eric, Chris, Brian, Daniel, Lily, Bill, Burt Reynolds™, Robert Riggs
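As with image generation, the body shape for this endpoint isn't documented above. The sketch below assumes OpenAI-style `input` and `voice` fields plus the Arlo `provider` key, and treats the response as raw audio bytes — check these names against the live API.

```python
import json
import urllib.request

ARLO_BASE = "https://api.arlo.mom/v1"


def tts_body(model: str, text: str, voice: str) -> dict:
    """Build a TTS body ("input"/"voice" field names are assumptions)."""
    return {"model": model, "input": text, "voice": voice, "provider": "unreliable"}


def synthesize(api_key: str, model: str, text: str, voice: str) -> bytes:
    """POST to /v1/audio/completions and return the raw response bytes."""
    req = urllib.request.Request(
        f"{ARLO_BASE}/audio/completions",
        data=json.dumps(tts_body(model, text, voice)).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()


# audio = synthesize("YOUR_API_KEY", "gpt-4o-mini-tts", "Hello!", "alloy")
```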

Special Model: deepseek-r1

deepseek-r1 is a specialized reasoning model. To expose its thought process, its responses may include `<think>` blocks containing intermediate reasoning. Parse and remove these blocks before displaying the final output to users.
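One way to strip those blocks before display — a sketch assuming the tags are well-formed and not nested:

```python
import re

# Matches a <think>...</think> block plus any trailing whitespace.
THINK_BLOCK = re.compile(r"<think>.*?</think>\s*", re.DOTALL)


def strip_think(text: str) -> str:
    """Remove <think>...</think> reasoning blocks from a model response."""
    return THINK_BLOCK.sub("", text).strip()
```

For example, `strip_think("<think>reasoning...</think>Final answer.")` returns `"Final answer."`.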

Model Reference File

For offline use, you can download a Markdown file containing the complete list of models and usage instructions.

Download LLM.md