AI Coding Assistants Integration¶
Connect your favorite AI coding assistants to AWS Bedrock models through stdapi.ai's OpenAI-compatible interface. Get intelligent code completions, chat assistance, and codebase understanding with powerful AWS models like Claude, Kimi K2 thinking, and Qwen Coder Next—no vendor lock-in required.
About AI Coding Assistants¶
Popular Tools: Cline | JetBrains AI Assistant | Continue.dev | Cursor | Windsurf
AI coding assistants are IDE extensions that leverage large language models to enhance developer productivity. These tools provide real-time code completions, intelligent suggestions, natural language code generation, and interactive chat capabilities directly within your coding environment—acting as AI pair programmers that understand your codebase context.
What AI coding assistants can do:
- Real-time completions - Autocomplete code as you type with context awareness
- Interactive chat - Ask questions about your codebase, get explanations
- Code generation - Natural language to code conversion
- Refactoring - Intelligent code improvements and optimization suggestions
- Documentation - Auto-generate comments, docstrings, and READMEs
- Testing - Create unit tests, debug issues, suggest fixes
- Git integration - Generate commit messages, review diffs
- Multi-language - Support for Python, JavaScript, TypeScript, Go, Rust, Java, and more
Why AI Coding Assistants + stdapi.ai?¶
- Works with Your IDE - Almost any coding assistant that supports OpenAI-compatible APIs works with stdapi.ai. Continue.dev, Cursor, Cline, Windsurf, Aider—all compatible with AWS Bedrock models.
- Best-in-Class Coding Models - Claude 4.6+ for reasoning and architecture, Kimi K2 thinking for complex problem-solving, Qwen Coder Next for specialized coding tasks. Choose the right model for each task.
- Code Privacy Guaranteed - Your code never leaves your AWS account. Perfect for proprietary codebases, enterprise security requirements, or compliance-sensitive projects.
- Flexible Deployment Options - Run stdapi.ai in AWS for production or locally with Docker for development. Test locally, deploy to cloud—same API, same experience.
- Pay-Per-Use, No Subscriptions - No per-developer licenses or monthly subscriptions. Pay only AWS Bedrock rates for actual usage. Use powerful models without per-seat costs.
%%{init: {'flowchart': {'htmlLabels': true}} }%%
flowchart LR
ide["<img src='../styles/logo_vscode.svg' style='height:64px;width:auto;vertical-align:middle;' /> IDE + AI Assistant"] --> stdapi["<img src='../styles/logo.svg' style='height:64px;width:auto;vertical-align:middle;' /> stdapi.ai"]
stdapi --> bedrock["<img src='../styles/logo_amazon_bedrock.svg' style='height:64px;width:auto;vertical-align:middle;' /> AWS Bedrock"]
✅ Prerequisites¶
What You'll Need
- ✓ stdapi.ai deployed - See deployment guide or run locally with Docker
- ✓ Your stdapi.ai URL - e.g., `https://api.example.com` or `http://localhost:8000` for local
- ✓ Your API key - From Terraform output or configuration (optional for local development)
- ✓ IDE with AI assistant - VS Code, JetBrains, Cursor, or your preferred editor with an AI coding extension
⚙️ Configuration¶
🔑 Universal Setup Guide¶
Most AI coding assistants follow a similar configuration pattern. The exact menu location and field names may vary, but the core settings remain consistent.
Generic Configuration Steps
In your coding assistant settings:
- Navigate to Settings or Preferences
- Find the AI Provider or Model Provider section
- Select "OpenAI Compatible" or "Custom OpenAI" as the provider type
- Configure the connection:
API Base URL: `https://YOUR_STDAPI_URL/v1` (or sometimes just `https://YOUR_STDAPI_URL`)
API Key: `YOUR_STDAPI_KEY`
Model: `anthropic.claude-sonnet-4-5-20250929-v1:0` (or select from detected models if available)
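As a concrete illustration of the settings above, here is what they might look like in a Continue.dev `config.json`. The title is arbitrary and the model ID is only an example; field names vary between assistants and Continue versions, so treat this as a sketch rather than a canonical config:

```json
{
  "models": [
    {
      "title": "Claude Sonnet via stdapi.ai",
      "provider": "openai",
      "model": "anthropic.claude-sonnet-4-5-20250929-v1:0",
      "apiBase": "https://YOUR_STDAPI_URL/v1",
      "apiKey": "YOUR_STDAPI_KEY"
    }
  ]
}
```

The key point is the `provider: "openai"` setting combined with a custom `apiBase`: that is what routes the assistant's OpenAI-format requests to stdapi.ai instead of api.openai.com.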
Model Selection for Coding
Recommended models for different tasks:
- Advanced reasoning & architecture: `anthropic.claude-sonnet-4-5-20250929-v1:0` or Claude Opus
- Complex problem-solving: Kimi K2 thinking models
- Specialized coding tasks: `qwen2-coder-next-1-5-instruct-v1:0` (Qwen Coder Next)
- Fast completions: Amazon Nova Micro or Nova Lite
Configuration tips:
- Auto-detect: Some assistants query `/v1/models` and show a dropdown
- Manual entry: Use the full Bedrock model ID (e.g., `anthropic.claude-sonnet-4-5-20250929-v1:0`)
- Multi-model setup: Use fast, cheap models for secondary tasks (autocomplete, summaries) and powerful models for complex generation
💬 Chat Completions¶
All coding assistants use chat completions for interactive conversations, code generation, and explanations.
How It Works
Your coding assistant calls POST /v1/chat/completions (see Chat Completions API) to:
- Answer questions about your code
- Generate new code from natural language
- Explain complex functions or algorithms
- Suggest refactoring and improvements
- Debug issues and propose fixes
The model must be a text/chat-capable model that is available in your configured Bedrock region.
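Under the hood, each of these interactions is a request body in the standard OpenAI chat schema. A sketch of what an assistant sends to `POST /v1/chat/completions`; the model ID, prompt, and sampling parameters are illustrative only:

```python
import json

# Example request body for POST /v1/chat/completions.
# Substitute a model that is enabled in your Bedrock region.
payload = {
    "model": "anthropic.claude-sonnet-4-5-20250929-v1:0",
    "messages": [
        {"role": "system", "content": "You are a helpful coding assistant."},
        {"role": "user", "content": "Explain what this function does:\n\ndef f(xs): return sorted(set(xs))"},
    ],
    "temperature": 0.2,
    "max_tokens": 1024,
}

body = json.dumps(payload)  # serialized JSON sent over HTTP
print(len(json.loads(body)["messages"]))  # 2
```

Because the schema is unchanged, any assistant that can talk to OpenAI can send this same body to stdapi.ai with only the base URL swapped.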
🛠️ Tool Calling Support¶
stdapi.ai fully supports tool calling (function calling) through the chat completions API, which is essential for autonomous and efficient coding agents.
Advanced Agent Capabilities
Tool calling enables your coding assistant to:
- Execute terminal commands and see results
- Read and write files in your codebase
- Search through code and documentation
- Run tests and analyze output
- Interact with external APIs and services
Most modern autonomous agents like Cline or Junie rely heavily on tool calling to perform complex, multi-step coding tasks. stdapi.ai's tool calling support (see Chat Completions API - Tool Calling) ensures these agents can work at their full potential with Amazon Bedrock models.
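Tools are declared in the OpenAI function-calling format and passed alongside the messages. A sketch of what an agent's registration might look like; the `read_file` tool here is hypothetical, as real assistants ship their own tool sets:

```python
# A tool definition in the OpenAI function-calling format.
# `read_file` is a hypothetical example tool, not part of stdapi.ai.
read_file_tool = {
    "type": "function",
    "function": {
        "name": "read_file",
        "description": "Read a file from the workspace and return its contents.",
        "parameters": {
            "type": "object",
            "properties": {
                "path": {
                    "type": "string",
                    "description": "Workspace-relative file path.",
                },
            },
            "required": ["path"],
        },
    },
}

# Merged into the chat completions request body:
request_fragment = {"tools": [read_file_tool], "tool_choice": "auto"}
```

When the model decides to use a tool, the response contains a `tool_calls` entry with the chosen function name and JSON arguments; the agent executes it locally and feeds the result back as a `tool` role message.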
⚡ Code Completions¶
Some coding assistants support dedicated code completion endpoints for real-time suggestions as you type.
Completion Support
Advanced assistants may call POST /v1/completions for:
- Inline code suggestions
- Auto-completion while typing
- Context-aware code snippets
Not all models or assistants support this mode. Chat-based assistants handle completions through the chat API instead.
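A completions request differs from chat in that it sends a raw text prompt rather than a message list. A sketch of a `POST /v1/completions` body for an inline suggestion; the prompt, model, and stop sequence are illustrative:

```python
import json

# Example request body for POST /v1/completions (inline completion).
# The stop sequence cuts generation off before the next function definition.
payload = {
    "model": "anthropic.claude-sonnet-4-5-20250929-v1:0",
    "prompt": "def fibonacci(n):\n    ",
    "max_tokens": 64,
    "temperature": 0.2,
    "stop": ["\ndef "],
}

print(json.dumps(payload, indent=2))
```

Assistants that only support the chat mode achieve the same effect by wrapping the code context in a user message instead.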
🐳 Running stdapi.ai Locally¶
stdapi.ai works well when running locally with Docker, making it ideal for your development environment.
Running Locally
For complete local deployment instructions, see the Getting Started Guide.
Configure your coding assistant:
API Base URL: http://localhost:8000/v1
API Key: your_stdapi_key
The default URL is `http://localhost:8000/v1`; adjust the port to match your Docker port mapping if it differs.