n8n Integration¶
Connect n8n automation workflows to AWS Bedrock models through stdapi.ai's OpenAI-compatible or Anthropic-compatible interfaces. Existing OpenAI and Anthropic templates from the n8n marketplace work out of the box—simply point them to your stdapi.ai instance and access AWS Bedrock models.
About n8n¶
🔗 Links: Website | GitHub | Documentation
n8n is a powerful workflow automation platform that enables you to connect any app with an API to build intelligent automations. With its intuitive visual interface, you can create complex AI-powered workflows without writing code, connecting AWS Bedrock models to 400+ services including Slack, Google Sheets, Salesforce, and more.
Key Features:
- ⭐ 50,000+ GitHub stars - Leading open-source workflow automation platform
- 400+ integrations - Pre-built nodes for popular services and APIs
- Visual no-code builder - Drag-and-drop interface with JavaScript customization
- Self-hosted or cloud - Deploy on your infrastructure or use n8n Cloud
- AI-native platform - Built-in OpenAI nodes work instantly with AWS Bedrock via stdapi.ai
- Template marketplace - Thousands of pre-built workflows ready to use
Why n8n + stdapi.ai?¶
- **Use Existing OpenAI Templates** - stdapi.ai works with n8n's OpenAI nodes. Thousands of marketplace templates and workflows designed for OpenAI run on AWS Bedrock with zero modifications needed.
- **Use Existing Anthropic Templates** - stdapi.ai works with n8n's Anthropic nodes. Templates and workflows designed for Anthropic Claude run on AWS Bedrock with zero modifications needed.
- **Access AWS Bedrock Models** - Claude 4.6+, Nova 2, Llama 4, DeepSeek v3.2, Stable Diffusion, and 80+ models available in n8n workflows. Switch models without changing automation logic.
- **Automate Business Processes** - Connect AWS Bedrock AI to Slack, Salesforce, Google Workspace, databases, and 400+ services. Build intelligent automations with no-code drag-and-drop.
- **Enterprise Data Control** - All AI processing stays in your AWS account. Self-host n8n and stdapi.ai for complete data sovereignty and compliance.
- **Pay-Per-Use Pricing** - No OpenAI subscriptions or per-automation fees. Pay only AWS Bedrock rates for actual AI usage in your workflows.
%%{init: {'flowchart': {'htmlLabels': true}} }%%
flowchart LR
n8n["<img src='../styles/logo_n8n.svg' style='height:64px;width:auto;vertical-align:middle;' /> n8n"] --> stdapi["<img src='../styles/logo.svg' style='height:64px;width:auto;vertical-align:middle;' /> stdapi.ai"]
stdapi --> bedrock["<img src='../styles/logo_amazon_bedrock.svg' style='height:64px;width:auto;vertical-align:middle;' /> AWS Bedrock"]
stdapi --> transcribe["<img src='../styles/logo_amazon_transcribe.svg' style='height:64px;width:auto;vertical-align:middle;' /> AWS Transcribe"]
stdapi --> polly["<img src='../styles/logo_amazon_polly.svg' style='height:64px;width:auto;vertical-align:middle;' /> AWS Polly"]
Prerequisites¶
What You'll Need
- ✓ stdapi.ai deployed - See deployment guide
- ✓ Your stdapi.ai URL - e.g., https://api.example.com
- ✓ Your API key - From Terraform output or configuration
- ✓ n8n instance - Self-hosted or n8n Cloud
Configuration¶
OpenAI Nodes¶
Set Up Your Credentials¶
The foundation of any n8n integration is configuring your API credentials. This one-time setup unlocks all AI capabilities.
Creating Your stdapi.ai Credential
In your n8n interface:
- Navigate to Credentials menu
- Click Create Credential
- Search and select "OpenAI" in the credential list
- Configure the following fields:
API Key: YOUR_STDAPI_KEY
Base URL: https://YOUR_STDAPI_URL/v1
What This Does
By setting a custom Base URL, you redirect all OpenAI API calls to your stdapi.ai instance. n8n will use this credential to authenticate and route requests to Amazon Bedrock models instead of OpenAI's servers.
Configure Nodes¶
For each node, first select the credentials you previously created in the node parameters. Then, select the model you want to use. If you want to use a model that is not listed, you can enter its ID as an expression in the Model parameter.
Chat Completions¶
Enables: Text generation and conversational AI in workflows.
Supported Node
OpenAI Chat Model
- Model can be selected directly in the `Model` parameter
- ⚠️ Important: the `Use Responses API` parameter must be unchecked (the Responses API is not yet supported by stdapi.ai)
n8n calls POST /v1/chat/completions (see Chat Completions API), so the model must be a text/chat-capable model from the correct family.
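For reference, a minimal sketch of the JSON body this node sends; the model ID below is a placeholder, not a value from this guide:

```python
import json

def build_chat_request(model: str, user_message: str) -> dict:
    """Build the JSON body that the OpenAI Chat Model node POSTs to
    {Base URL}/chat/completions on your stdapi.ai instance."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }

# "example.bedrock-model-id" is a placeholder; use any chat-capable model ID.
payload = build_chat_request("example.bedrock-model-id", "Summarize this ticket.")
print(json.dumps(payload, indent=2))
```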
Embeddings¶
Enables: Vector embeddings for semantic search and RAG workflows.
Supported Node
Embeddings OpenAI
- Model can be selected directly in the `Model` parameter
n8n calls POST /v1/embeddings (see Embeddings API), so the model must be an embeddings-capable model from the correct family.
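As a sketch, the request body the embeddings node produces looks like the following; the model ID is a placeholder:

```python
import json

def build_embeddings_request(model: str, texts: list[str]) -> dict:
    """JSON body for POST /v1/embeddings; `input` accepts a string or a list."""
    return {"model": model, "input": texts}

print(json.dumps(
    build_embeddings_request("example.embedding-model-id", ["order #123 delayed"]),
    indent=2,
))
```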
Image Generation¶
Enables: Text-to-image creation in workflows.
Supported Node
OpenAI/Generate an image
- Model ID can be entered as an expression in the `Model` parameter
n8n calls POST /v1/images/generations (see Images Generations API), so the model must be an image-generation model from the correct family.
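A sketch of the generation request body, assuming the standard OpenAI-style parameters (`prompt`, `size`, `n`); the model ID is a placeholder:

```python
import json

def build_image_request(model: str, prompt: str, size: str = "1024x1024") -> dict:
    """JSON body for POST /v1/images/generations."""
    return {"model": model, "prompt": prompt, "size": size, "n": 1}

print(json.dumps(
    build_image_request("example.image-model-id", "a lighthouse at dusk"),
    indent=2,
))
```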
Image Editing¶
Enables: Image transformation and editing in workflows.
Supported Node
OpenAI/Edit image
- Model ID can be entered as an expression in the `Model` parameter
n8n calls POST /v1/images/edits (see Images Edits API), so the model must be an image-editing model from the correct family.
Audio Generation (TTS)¶
Enables: Text-to-speech audio generation in workflows.
Supported Node
OpenAI/Generate audio
- Model ID can be entered as an expression in the `Model` parameter
- Or use OpenAI model names directly: `tts-1` and `tts-1-hd` work by default thanks to built-in model aliases
n8n calls POST /v1/audio/speech (see Audio Speech API), so the model must match the text-to-speech modality and family.
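A sketch of the speech request body. The `voice` value follows the OpenAI API convention; which voices stdapi.ai maps to AWS voices is an assumption here, so check your deployment's configuration:

```python
import json

def build_speech_request(text: str, voice: str = "alloy", model: str = "tts-1") -> dict:
    """JSON body for POST /v1/audio/speech; the `tts-1` alias is resolved
    by stdapi.ai to a backing AWS voice model."""
    return {"model": model, "input": text, "voice": voice}

print(json.dumps(build_speech_request("Your order has shipped."), indent=2))
```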
Audio Transcription (STT)¶
Enables: Speech-to-text transcription in workflows.
Supported Node
OpenAI/Transcribe a recording
- Works out of the box with OpenAI's `whisper-1` model name
- The model alias automatically maps to `amazon.transcribe`
n8n calls POST /v1/audio/transcriptions (see Audio Transcriptions API), so the model must match the speech-to-text modality.
Files¶
Enables: Upload files once and reference them across multiple chat completion requests without resending the raw bytes each time.
n8n calls the /v1/files endpoints (see Files API). Set Resource to "Files" in the OpenAI node for all operations below.
Upload a file — OpenAI/Upload a file
Uploads a file to S3 and returns a file_id for use in subsequent requests.
Node parameters:
- Resource: Files
- Operation: Upload a File
- Input Data Field Name: name of the binary field containing the file
- Purpose: intended purpose (e.g. `assistants`, `user_data`)
Typical workflow pattern:
- Receive or fetch a file (PDF, image, etc.) in an earlier node
- Pass the binary data to this node
- Store the returned `file_id` in a variable or database
- Pass the `file_id` in OpenAI Chat Model messages via the `type: "file"` content part for repeated analysis without re-uploading
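The last step of the pattern above can be sketched as a message that references the uploaded file by ID, assuming the OpenAI-style `file` content part:

```python
def build_file_message(file_id: str, question: str) -> dict:
    """A chat message that reuses an uploaded file by ID instead of
    resending the raw bytes with every request."""
    return {
        "role": "user",
        "content": [
            {"type": "file", "file": {"file_id": file_id}},
            {"type": "text", "text": question},
        ],
    }
```

Place this message inside the `messages` array of a Chat Completions request; the file bytes are fetched server-side from S3.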
Delete a file — OpenAI/Delete a file
Permanently deletes a file from S3 by its file_id.
Node parameters:
- Resource: Files
- Operation: Delete a File
- File ID: the `file_id` of the file to delete
List files — OpenAI/List files
Returns a paginated list of uploaded files, optionally filtered by purpose.
Node parameters:
- Resource: Files
- Operation: List Files
- Purpose: (optional) filter results to a specific purpose
- Return All / Limit: control pagination; enable Return All or set Limit for the first page
Files are returned in descending order (newest first) by default.
Unsupported Nodes¶
The following nodes are not yet supported:
Known Limitations
- `OpenAI/Message a model`: requires the Responses API (not yet supported by stdapi.ai). Use the OpenAI Chat Model node instead.
- `OpenAI/Analyze image`: requires the Responses API (not yet supported by stdapi.ai). Use the OpenAI Chat Model node instead.
Anthropic Nodes¶
Set Up Your Credentials¶
Creating Your stdapi.ai Anthropic Credential
In your n8n interface:
- Navigate to Credentials menu
- Click Create Credential
- Search and select "Anthropic" in the credential list
- Configure the following fields:
API Key: YOUR_STDAPI_KEY
Base URL: https://YOUR_STDAPI_URL/anthropic
Anthropic Base URL
By default, all Anthropic-compatible routes are prefixed with /anthropic, so the Base URL must end with /anthropic. You can customize this prefix using the ANTHROPIC_ROUTES_PREFIX configuration variable documented in Operations Configuration.
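The composition is simple string concatenation: host, then prefix, then the Anthropic route. A sketch (`api.example.com` is the placeholder host from the prerequisites):

```python
ROUTES_PREFIX = "/anthropic"  # default; override with ANTHROPIC_ROUTES_PREFIX

def messages_url(host: str, prefix: str = ROUTES_PREFIX) -> str:
    """Compose the Messages endpoint that n8n's Anthropic nodes will call."""
    return host.rstrip("/") + prefix + "/v1/messages"

print(messages_url("https://api.example.com"))
# https://api.example.com/anthropic/v1/messages
```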
Configure Nodes¶
For each node, first select the credentials you previously created in the node parameters. Then, select the model you want to use. The model can be selected directly in the Model parameter for all supported nodes.
Chat Completions¶
Enables: Text generation and conversational AI in workflows.
Supported Nodes
- Anthropic Chat Model
- Anthropic/Message a model
- Message a model in Anthropic
n8n calls POST /anthropic/v1/messages (see Anthropic Messages API), so the model must be a text/chat-capable model from the correct family.
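A sketch of the request body; note that, unlike the OpenAI API, the Anthropic Messages API requires `max_tokens`. The model ID is a placeholder:

```python
import json

def build_messages_request(model: str, prompt: str, max_tokens: int = 1024) -> dict:
    """JSON body for POST /anthropic/v1/messages; max_tokens is mandatory
    in the Anthropic Messages API."""
    return {
        "model": model,
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    }

print(json.dumps(
    build_messages_request("example.bedrock-model-id", "Draft a reply to this email."),
    indent=2,
))
```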
Image Analysis¶
Enables: Image understanding and analysis in workflows.
Supported Node
Anthropic/Analyze image
- Model can be selected directly in the `Model` parameter
n8n calls POST /anthropic/v1/messages with image content (see Anthropic Messages API), so the model must support vision capabilities.
Document Analysis¶
Enables: Document understanding and extraction in workflows.
Supported Node
Anthropic/Analyze document
- Model can be selected directly in the `Model` parameter
n8n calls POST /anthropic/v1/messages with document content (see Anthropic Messages API), so the model must support document processing capabilities.
Files¶
Enables: Upload files once and reference them across multiple Messages requests as document or image sources.
n8n calls the /anthropic/v1/files endpoints (see Anthropic Files API). Set Resource to "Files" in the Anthropic node for all operations below.
Upload a file — Anthropic/Upload a file
Uploads a file to S3 and returns a file_id for use in subsequent Messages requests.
Node parameters:
- Resource: Files
- Operation: Upload a File
- Input Data Field Name: name of the binary field containing the file
Typical workflow pattern:
- Receive or fetch a file (PDF, image, etc.) in an earlier node
- Pass the binary data to this node
- Store the returned `file_id` in a variable or database
- Pass the `file_id` as a `source: {type: "file"}` reference in Anthropic/Message a model document or image blocks
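The last step of the pattern above can be sketched as a document content block whose source points at the uploaded file rather than inline base64 data:

```python
def build_document_block(file_id: str) -> dict:
    """An Anthropic Messages document content block that references a
    previously uploaded file by its file_id."""
    return {
        "type": "document",
        "source": {"type": "file", "file_id": file_id},
    }
```

Place the block in a user message's content list, typically alongside a `text` block carrying the question about the document.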
Get file metadata — Anthropic/Get file metadata
Retrieves metadata (filename, MIME type, size, creation date) for a file by its file_id.
Node parameters:
- Resource: Files
- Operation: Get File Metadata
- File ID: the `file_id` of the file to retrieve
List files — Anthropic/List files
Returns a paginated list of uploaded files.
Node parameters:
- Resource: Files
- Operation: List Files
- Return All / Limit: control pagination; enable Return All or set Limit for the first page
Files are returned in ascending order (oldest first). Use after_id / before_id cursors for bidirectional pagination.
Delete a file — Anthropic/Delete a file
Permanently deletes a file from S3 by its file_id.
Node parameters:
- Resource: Files
- Operation: Delete a File
- File ID: the `file_id` of the file to delete
Next Steps¶
- Getting Started — Deploy stdapi.ai to AWS with Terraform
- Local Development — Run stdapi.ai locally with Docker
- More Use Cases — Explore other integrations and tools
- API Overview — Explore supported endpoints