# LLM orchestration
Conductor provides native system tasks for LLM orchestration and integration. No external frameworks or custom workers required — configure a provider and use it in any workflow. Each provider supports function calling via MCP tool integration.
## Supported LLM providers
| Provider | Chat Completion | Text Completion | Embeddings |
|---|---|---|---|
| Anthropic (Claude) | ✓ | ✓ | — |
| OpenAI (GPT) | ✓ | ✓ | ✓ |
| Azure OpenAI | ✓ | ✓ | ✓ |
| Google Gemini | ✓ | ✓ | ✓ |
| AWS Bedrock | ✓ | ✓ | ✓ |
| Mistral | ✓ | ✓ | ✓ |
| Cohere | ✓ | ✓ | ✓ |
| HuggingFace | ✓ | ✓ | ✓ |
| Ollama | ✓ | ✓ | ✓ |
| Perplexity | ✓ | — | — |
| Grok (xAI) | ✓ | ✓ | — |
| StabilityAI | — | — | — |
No other open-source workflow engine provides native LLM orchestration at this breadth. Each provider is a configuration — switch models by changing a parameter, not your code.
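As a sketch of that parameter-only switch: the chat completion task from the RAG example further down can target Anthropic instead of OpenAI by changing only `llmProvider` and `model` (the specific model name here is illustrative, not an exhaustive or guaranteed identifier):

```json
{
  "name": "generate_answer",
  "taskReferenceName": "answer_ref",
  "type": "LLM_CHAT_COMPLETE",
  "inputParameters": {
    "llmProvider": "anthropic",
    "model": "claude-3-5-haiku-latest",
    "messages": [
      { "role": "user", "message": "${workflow.input.question}" }
    ],
    "temperature": 0.2
  }
}
```

The task type, reference name, and message structure are unchanged; only the provider configuration differs.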
## Vector database workflows
Built-in vector database integration lets you build RAG (retrieval-augmented generation) pipelines as standard workflows.
| Vector Database | Store Embeddings | Index Text | Semantic Search |
|---|---|---|---|
| Pinecone | ✓ | ✓ | ✓ |
| pgvector (PostgreSQL) | ✓ | ✓ | ✓ |
| MongoDB Atlas Vector Search | ✓ | ✓ | ✓ |
### Example: RAG pipeline
A complete RAG workflow using native system tasks — index documents, search, and generate an answer. No custom workers required.
```json
{
  "name": "rag_pipeline",
  "description": "Index documents, search, and generate RAG answer",
  "version": 1,
  "schemaVersion": 2,
  "tasks": [
    {
      "name": "index_document",
      "taskReferenceName": "index_ref",
      "type": "LLM_INDEX_TEXT",
      "inputParameters": {
        "vectorDB": "postgres-prod",
        "index": "knowledge_base",
        "namespace": "docs",
        "docId": "${workflow.input.docId}",
        "text": "${workflow.input.text}",
        "embeddingModelProvider": "openai",
        "embeddingModel": "text-embedding-3-small",
        "dimensions": 1536,
        "metadata": "${workflow.input.metadata}"
      }
    },
    {
      "name": "search_index",
      "taskReferenceName": "search_ref",
      "type": "LLM_SEARCH_INDEX",
      "inputParameters": {
        "vectorDB": "postgres-prod",
        "index": "knowledge_base",
        "namespace": "docs",
        "query": "${workflow.input.question}",
        "embeddingModelProvider": "openai",
        "embeddingModel": "text-embedding-3-small",
        "dimensions": 1536,
        "maxResults": 3
      }
    },
    {
      "name": "generate_answer",
      "taskReferenceName": "answer_ref",
      "type": "LLM_CHAT_COMPLETE",
      "inputParameters": {
        "llmProvider": "openai",
        "model": "gpt-4o-mini",
        "messages": [
          {
            "role": "system",
            "message": "Answer the question using only the provided context."
          },
          {
            "role": "user",
            "message": "Context:\n${search_ref.output.result}\n\nQuestion: ${workflow.input.question}"
          }
        ],
        "temperature": 0.2
      }
    }
  ],
  "outputParameters": {
    "searchResults": "${search_ref.output.result}",
    "answer": "${answer_ref.output.result}"
  }
}
```
Every task type — LLM_INDEX_TEXT, LLM_SEARCH_INDEX, LLM_CHAT_COMPLETE — is a native Conductor system task. The vector database, embedding model, and LLM provider are all configuration parameters. Switch from pgvector to Pinecone or from OpenAI to Anthropic by changing a parameter value.
## Content generation
Native system tasks for multimodal content generation:
| Task | Type | Description |
|---|---|---|
| Generate Image | GENERATE_IMAGE | Text-to-image generation via AI models |
| Generate Audio | GENERATE_AUDIO | Text-to-speech synthesis |
| Generate Video | GENERATE_VIDEO | Text/image-to-video generation (async) |
| Generate PDF | GENERATE_PDF | Markdown-to-PDF document conversion |
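A minimal sketch of an image generation task, following the same shape as the LLM tasks above. The input parameter names (`llmProvider`, `model`, `prompt`) and the model identifier are assumptions modeled on the LLM task parameters, not the documented schema for GENERATE_IMAGE — check the task reference for the exact contract:

```json
{
  "name": "generate_image",
  "taskReferenceName": "image_ref",
  "type": "GENERATE_IMAGE",
  "inputParameters": {
    "llmProvider": "openai",
    "model": "dall-e-3",
    "prompt": "${workflow.input.prompt}"
  }
}
```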
## Examples
Ready-to-use workflow definitions for every AI task type. Each example is a complete JSON workflow you can register and run directly.
| Example | Task types used |
|---|---|
| Chat Completion | LLM_CHAT_COMPLETE |
| Generate Embeddings | LLM_GENERATE_EMBEDDINGS |
| Image Generation | GENERATE_IMAGE |
| Audio Generation | GENERATE_AUDIO |
| Semantic Search | LLM_SEARCH_INDEX |
| RAG Basic | LLM_SEARCH_INDEX, LLM_CHAT_COMPLETE |
| RAG Complete | LLM_INDEX_TEXT, LLM_SEARCH_INDEX, LLM_CHAT_COMPLETE |
| MCP List Tools | LIST_MCP_TOOLS |
| MCP Call Tool | CALL_MCP_TOOL |
| MCP AI Agent | LIST_MCP_TOOLS, LLM_CHAT_COMPLETE, CALL_MCP_TOOL |
| Video — OpenAI Sora | GENERATE_VIDEO |
| Video — Gemini Veo | GENERATE_VIDEO |
| Image-to-Video Pipeline | GENERATE_IMAGE, GENERATE_VIDEO |
| StabilityAI Image | GENERATE_IMAGE |
| PDF Generation | GENERATE_PDF |
| LLM-to-PDF Pipeline | LLM_CHAT_COMPLETE, GENERATE_PDF |
Browse all examples: ai/examples/
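For the MCP task types listed above, a hypothetical sketch of a tool call task. The parameter names (`mcpServer`, `toolName`, `arguments`) and the server/tool values are illustrative assumptions, not the documented CALL_MCP_TOOL schema — see the MCP examples in ai/examples/ for the real definitions:

```json
{
  "name": "call_tool",
  "taskReferenceName": "tool_ref",
  "type": "CALL_MCP_TOOL",
  "inputParameters": {
    "mcpServer": "github-mcp",
    "toolName": "search_issues",
    "arguments": { "query": "${workflow.input.query}" }
  }
}
```

In the MCP AI Agent pattern, LIST_MCP_TOOLS supplies the available tools to an LLM_CHAT_COMPLETE task, whose chosen tool and arguments then feed a CALL_MCP_TOOL task like this one.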
## Next steps
- Durable Agents — What persists, what gets retried, and why JSON is AI-native.
- Dynamic Workflows — Agents that build their own execution plans at runtime.
- AI & LLM Recipes — Practical recipes for common LLM workflow patterns.