# 🌋 Volcano SDK

**The TypeScript SDK for Multi-Provider AI Agents**
Build agents that chain LLM reasoning with MCP tools. Mix OpenAI, Claude, and Mistral in a single workflow. Parallel execution, branching, and loops, with native retries, streaming, and typed errors.
📚 Read the full documentation at volcano.dev →
## ✨ Features
- **🤖 Automatic Tool Selection**: LLM automatically picks which MCP tools to call based on your prompt. No manual routing needed.
- **🧩 Multi-Agent Crews**: Define specialized agents and let the coordinator autonomously delegate tasks. Like automatic tool selection, but for agents.
- **💬 Conversational Results**: Ask questions about what your agent did. Use `.summary()` and `.ask()` on the results.
- **🔧 100s of Models**: OpenAI, Anthropic, Mistral, Bedrock, Vertex, Azure. Switch providers per-step or globally.
- **🔄 Advanced Patterns**: Parallel execution, branching, loops, sub-agent composition. Enterprise-grade workflow control.
- **📡 Streaming**: Stream tokens in real time as LLMs generate them. Perfect for chat UIs and SSE endpoints.
- **🛡️ TypeScript-First**: Full type safety with IntelliSense. Catch errors before runtime.
- **📊 Observability**: OpenTelemetry traces and metrics. Export to Jaeger, Prometheus, DataDog, or any OTLP backend.
- **⚡ Production-Ready**: Built-in retries, timeouts, error handling, and connection pooling. Battle-tested at scale.
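The SDK handles retries for you, but as a rough illustration of the general pattern (this is not volcano-sdk's actual internals), a minimal retry loop looks like this:

```typescript
// Generic retry loop: re-invoke a flaky operation up to maxAttempts times,
// rethrowing the last error if every attempt fails. Purely illustrative;
// volcano-sdk's built-in retries are configured on the agent, not hand-rolled.
function retry<T>(fn: () => T, maxAttempts = 3): T {
  let lastError: unknown;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return fn();
    } catch (err) {
      lastError = err; // remember the failure and try again
    }
  }
  throw lastError;
}
```

In production you would add backoff delays and a timeout per attempt, which the SDK manages for you.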
## Quick Start
### Installation
```bash
npm install volcano-sdk
```
That's it! Includes MCP support and all common LLM providers (OpenAI, Anthropic, Mistral, Llama, Vertex).
### Hello World with Automatic Tool Selection
```typescript
import { agent, llmOpenAI, mcp } from "volcano-sdk";

const llm = llmOpenAI({
  apiKey: process.env.OPENAI_API_KEY!,
  model: "gpt-4o-mini"
});

const weather = mcp("http://localhost:8001/mcp");
const tasks = mcp("http://localhost:8002/mcp");

// Agent automatically picks the right tools
const results = await agent({ llm })
  .then({
    prompt: "What's the weather in Seattle? If it will rain, create a task to bring an umbrella",
    mcps: [weather, tasks] // LLM chooses which tools to call
  })
  .run();

// Ask questions about what happened
const summary = await results.summary(llm);
console.log(summary);
```
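Under the hood, automatic tool selection is a dispatch loop: the model emits tool calls and the agent executes them. The following self-contained mock illustrates the shape of that loop; the names (`toolbox`, `decide`, `dispatch`) are hypothetical and no real LLM or MCP server is involved:

```typescript
// Mock of the tool-selection loop: a stand-in "decide" function plays the
// role of the LLM, returning tool calls that the agent then dispatches.
type ToolFn = (args: Record<string, string>) => string;

const toolbox: Record<string, ToolFn> = {
  getWeather: ({ city }) => `Rain expected in ${city}`,
  createTask: ({ title }) => `Task created: ${title}`,
};

// In a real run the LLM chooses these calls from the prompt; here the
// decision is hard-coded for illustration.
function decide(_prompt: string): Array<{ tool: string; args: Record<string, string> }> {
  return [
    { tool: "getWeather", args: { city: "Seattle" } },
    { tool: "createTask", args: { title: "Bring an umbrella" } },
  ];
}

// The agent's job is then just dispatch: run each chosen tool with its args.
function dispatch(prompt: string): string[] {
  return decide(prompt).map(({ tool, args }) => toolbox[tool](args));
}
```

Because the model picks the tools, adding a new capability is just adding another MCP server to the `mcps` array; no routing code changes.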
### Multi-Agent Coordinator
```typescript
import { agent, llmOpenAI } from "volcano-sdk";

const llm = llmOpenAI({ apiKey: process.env.OPENAI_API_KEY! });

// Define specialized agents
const researcher = agent({ llm, name: "researcher", description: "Finds facts and data" })
  .then({ prompt: "Research the topic." })
  .then({ prompt: "Summarize the research." });

const writer = agent({ llm, name: "writer", description: "Creates content" })
  .then({ prompt: "Write content." });

// Coordinator autonomously delegates to specialists
const results = await agent({ llm })
  .then({
    prompt: "Write a blog post about quantum computing",
    agents: [researcher, writer] // Coordinator decides when done
  })
  .run();

// Ask what happened
const post = await results.ask(llm, "Show me the final blog post");
console.log(post);
```
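To make the delegation mechanics concrete, here is a self-contained sketch of the coordinator pattern: pick the specialist whose description fits the task, then run it. All names here (`Specialist`, `pickAgent`) are illustrative and the LLM's judgment is faked with a keyword match; they are not part of the volcano-sdk API:

```typescript
// Illustrative sketch of coordinator-style delegation (not volcano-sdk internals).
interface Specialist {
  name: string;
  description: string;
  run: (task: string) => string;
}

const specialists: Specialist[] = [
  { name: "researcher", description: "finds facts and data", run: (t) => `facts about ${t}` },
  { name: "writer", description: "creates content", run: (t) => `draft: ${t}` },
];

// A real coordinator asks the LLM which specialist fits the task; here the
// decision is faked with a keyword test against the agent descriptions.
function pickAgent(task: string): Specialist {
  const wantsWriting = /write|draft|post/i.test(task);
  return specialists.find((s) =>
    wantsWriting ? s.description.includes("content") : s.description.includes("facts")
  )!;
}
```

The coordinator keeps delegating (and re-delegating) until it judges the overall task complete, which is why you hand it a list of agents rather than a fixed pipeline.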
## Documentation
### 📖 Comprehensive Guides
- Getting Started - Installation, quick start, core concepts
- LLM Providers - OpenAI, Anthropic, Mistral, Llama, Bedrock, Vertex, Azure
- MCP Tools - Automatic selection, OAuth authentication, connection pooling
- Advanced Patterns - Parallel, branching, loops, multi-LLM workflows
- Features - Streaming, retries, timeouts, hooks, error handling
- Observability - OpenTelemetry traces and metrics
- API Reference - Complete API documentation
- Examples - Ready-to-run code examples
## Contributing
We welcome contributions! Please see our Contributing Guide for details.
## Questions or Feature Requests?
- 📝 Report bugs or issues
- 💡 Request features or ask questions
- ⭐ Star the project if you find it useful
## License
Apache 2.0 - see LICENSE file for details.