
API Overview

stdapi.ai provides the OpenAI API interface backed by AWS Bedrock foundation models and AWS AI services. Use your existing OpenAI SDKs by simply changing the base URL.

Interactive Documentation

stdapi.ai provides multiple interfaces for exploring and testing the API. Choose the one that fits your workflow:

📚 Documentation Resources

🎮 Live API Playground

When running the server, access these interactive interfaces:

| Interface | URL | Best For |
| --- | --- | --- |
| Swagger UI | http://localhost/docs | Testing endpoints directly in your browser with live request/response examples |
| ReDoc | http://localhost/redoc | Reading and searching through clean, organized documentation |
| OpenAPI Schema | http://localhost/openapi.json | Generating client code or importing into API tools like Postman |

OpenAI SDK-Compatible API

Access AWS-powered AI services through the OpenAI API interface. Use official OpenAI SDKs by simply changing the base URL; no custom clients needed.

Supported Endpoints:

| Category | Endpoint | Capability | Documentation |
| --- | --- | --- | --- |
| 💬 Chat | POST /v1/chat/completions | Multi-modal conversations with text, images, video, documents | Chat Completions → |
| 🎨 Images | POST /v1/images/generations | Text-to-image generation | Images → |
| 🔊 Audio | POST /v1/audio/speech | Text-to-speech synthesis | Text to Speech → |
| | POST /v1/audio/transcriptions | Speech-to-text transcription | Speech to Text → |
| | POST /v1/audio/translations | Speech-to-English translation | Speech to English → |
| 🧠 Embeddings | POST /v1/embeddings | Vector embeddings for semantic search | Embeddings → |
| 📋 Models | GET /v1/models | List available models | Models → |
  • Backend: AWS Bedrock, AWS Polly, AWS Transcribe, AWS Translate
  • SDKs: Official OpenAI SDKs (Python, Node.js, Go, Ruby, etc.)
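The GET /v1/models endpoint returns the standard OpenAI list shape, so existing model-discovery code keeps working. As a minimal sketch, a helper can filter the returned IDs by provider prefix (the `filter_models` function and the `sample` payload below are illustrative, not part of stdapi.ai):

```python
# Sketch: filter Bedrock model IDs from a GET /v1/models response.
# The endpoint returns the standard OpenAI list shape:
# {"object": "list", "data": [{"id": "...", "object": "model", ...}, ...]}

def filter_models(models_response: dict, prefix: str) -> list:
    """Return sorted model IDs matching a provider prefix, e.g. 'amazon.'."""
    return sorted(
        m["id"] for m in models_response["data"] if m["id"].startswith(prefix)
    )

# Abbreviated example of the response shape (not live data):
sample = {
    "object": "list",
    "data": [
        {"id": "amazon.nova-micro-v1:0", "object": "model"},
        {"id": "anthropic.claude-sonnet-4-5-20250929-v1:0", "object": "model"},
    ],
}

print(filter_models(sample, "amazon."))
```

In a live deployment you would populate `sample` from `client.models.list()` instead of a literal dict.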

Quick Start Guide

This guide shows how to use stdapi.ai with OpenAI SDKs.

stdapi.ai works with official OpenAI client libraries; no custom clients are needed:

Python

pip install openai
from openai import OpenAI

# Initialize client with your stdapi.ai endpoint
client = OpenAI(
    base_url="https://your-deployment-url/v1",
    api_key="your-api-key"
)

# Use AWS Bedrock models through the familiar OpenAI interface
response = client.chat.completions.create(
    model="amazon.nova-micro-v1:0",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain quantum computing in simple terms."}
    ]
)

print(response.choices[0].message.content)
# Pass provider-specific parameters through `extra_body`:
response = client.chat.completions.create(
    model="anthropic.claude-sonnet-4-5-20250929-v1:0",
    messages=[{"role": "user", "content": "Solve this complex problem..."}],
    extra_body={
        "top_k": 50  # integer: sample only from the 50 most likely tokens
    }
)

Node.js

npm install openai
import OpenAI from 'openai';

// Initialize client with your stdapi.ai endpoint
const client = new OpenAI({
  baseURL: 'https://your-deployment-url/v1',
  apiKey: 'your-api-key'
});

// Use AWS Bedrock models through the familiar OpenAI interface
const response = await client.chat.completions.create({
  model: 'amazon.nova-micro-v1:0',
  messages: [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'Explain quantum computing in simple terms.' }
  ]
});

console.log(response.choices[0].message.content);

cURL

curl -X POST "https://your-deployment-url/v1/chat/completions" \
  -H "Authorization: Bearer your-api-key" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "amazon.nova-micro-v1:0",
    "messages": [
      {"role": "system", "content": "You are a helpful assistant."},
      {"role": "user", "content": "Explain quantum computing in simple terms."}
    ]
  }'

Understanding Compatibility

What "Compatible" Meansยถ

stdapi.ai implements the OpenAI API specification with AWS services as the backend. This means:

  • ✅ Same Request Format: Use identical request bodies and parameters
  • ✅ Same Response Structure: Receive responses in the expected format
  • ✅ Same SDKs: Use official client libraries
  • ✅ Drop-in Replacement: Change only the base URL and model ID

Handling Parameter Differences

Different providers support different features. stdapi.ai handles this gracefully:

Silent Ignoring (Safe Parameters)

Parameters that don't affect output are accepted but ignored:

# This works: unsupported parameters are silently ignored
response = client.chat.completions.create(
    model="amazon.nova-micro-v1:0",
    messages=[{"role": "user", "content": "Hello"}],
    user="user-123",  # Logged but doesn't affect AWS processing
)

Clear Errors (Behavior-Changing Parameters)

Parameters that would change output return helpful errors:

# This returns HTTP 400 with a clear explanation
response = client.chat.completions.create(
    model="amazon.nova-micro-v1:0",
    messages=[{"role": "user", "content": "Hello"}],
    logit_bias={123: 100},  # Not supported: a clear error is returned
)
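The two behaviors above can also be anticipated client-side before a request is sent. Below is a minimal pre-flight sketch; the `IGNORED_PARAMS` and `REJECTED_PARAMS` sets are illustrative assumptions, not stdapi.ai's authoritative lists, so treat the server's own error messages as the source of truth:

```python
# Sketch: client-side pre-flight check mirroring the documented behavior.
IGNORED_PARAMS = {"user"}          # accepted but has no effect on AWS processing
REJECTED_PARAMS = {"logit_bias"}   # would change output: server returns HTTP 400

def preflight(params: dict) -> list:
    """Return warnings for ignored params; raise ValueError for rejected ones."""
    rejected = REJECTED_PARAMS & params.keys()
    if rejected:
        raise ValueError(f"unsupported parameters: {sorted(rejected)}")
    return [f"'{p}' is accepted but ignored" for p in IGNORED_PARAMS & params.keys()]

warnings = preflight({"model": "amazon.nova-micro-v1:0", "user": "user-123"})
print(warnings)
```

This is purely optional sugar: the server enforces the same split regardless, so the check only saves a round trip.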

Understanding the Architecture

┌─────────────────┐
│  Your App       │
│  (OpenAI SDK)   │
└────────┬────────┘
         │ HTTPS /v1/*
         ▼
┌─────────────────┐
│  stdapi.ai      │
│  API Gateway    │
└────────┬────────┘
         │ Translates to AWS APIs
    ┌────┴─────┬──────────┬──────────┐
    ▼          ▼          ▼          ▼
┌────────┐ ┌────────┐ ┌────────┐ ┌────────┐
│AWS     │ │AWS     │ │AWS     │ │AWS     │
│Bedrock │ │Polly   │ │Transcr.│ │S3      │
└────────┘ └────────┘ └────────┘ └────────┘

Request Flow:

1. Your application uses the standard OpenAI SDK
2. Requests go to stdapi.ai's /v1/* endpoints instead of OpenAI
3. stdapi.ai translates OpenAI-format requests to AWS service APIs
4. AWS services process the requests
5. Responses are formatted as OpenAI-compatible responses
6. Your application receives familiar OpenAI response structures
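The translation step is where the format conversion happens. As a simplified illustration (not stdapi.ai's actual code), AWS Bedrock's Converse API expects system prompts in a separate field and message text wrapped in content blocks, so an OpenAI-style message list has to be reshaped roughly like this:

```python
# Simplified sketch of OpenAI-to-Bedrock-Converse message translation.
def to_converse(openai_messages: list) -> dict:
    """Split out system prompts and wrap text in Converse content blocks."""
    system = [
        {"text": m["content"]} for m in openai_messages if m["role"] == "system"
    ]
    messages = [
        {"role": m["role"], "content": [{"text": m["content"]}]}
        for m in openai_messages
        if m["role"] != "system"
    ]
    return {"system": system, "messages": messages}

payload = to_converse([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello"},
])
print(payload["messages"][0])
```

The real gateway also maps sampling parameters, streaming, and multi-modal content; this sketch covers only plain text messages.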


Ready to build intelligent applications? Start with the Chat Completions API or explore available models!