
neurolink

Universal AI development platform with MCP server integration, multi-provider support, and a professional CLI. Build, test, and deploy AI applications across multiple AI providers.

Stars: 103 · Forks: 88 · Tools: 12 · Updated: Jan 8, 2026 · Validated: Jan 9, 2026

🧠 NeuroLink

The Enterprise AI SDK for Production Applications

12 Providers | 58+ MCP Tools | HITL Security | Redis Persistence


Enterprise AI development platform with unified provider access, production-ready tooling, and an opinionated factory architecture. NeuroLink ships as both a TypeScript SDK and a professional CLI so teams can build, operate, and iterate on AI features quickly.

🧠 What is NeuroLink?

NeuroLink is the universal AI integration platform that unifies 12 major AI providers and 100+ models under one consistent API.

Extracted from production systems at Juspay and battle-tested at enterprise scale, NeuroLink provides a production-ready solution for integrating AI into any application. Whether you're building with OpenAI, Anthropic, Google, AWS Bedrock, Azure, or any of our 12 supported providers, NeuroLink gives you a single, consistent interface that works everywhere.

Why NeuroLink? Switch providers with a single parameter change, leverage 64+ built-in tools and MCP servers, deploy with confidence using enterprise features like Redis memory and multi-provider failover, and optimize costs automatically with intelligent routing. Use it via our professional CLI or TypeScript SDK—whichever fits your workflow.
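
For example, the same generate call can target a different provider by changing one parameter (a minimal sketch using the generate API shown later in this README; the exact provider identifiers here are assumptions, see the provider table below):

import { NeuroLink } from "@juspay/neurolink";

const neurolink = new NeuroLink();

// Same prompt routed to two different providers - only `provider` changes
const fromOpenAI = await neurolink.generate({
  input: { text: "Summarize this quarter's roadmap" },
  provider: "openai",
});

const fromAnthropic = await neurolink.generate({
  input: { text: "Summarize this quarter's roadmap" },
  provider: "anthropic",
});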

Where we're headed: We're building for the future of AI—edge-first execution and continuous streaming architectures that make AI practically free and universally available. Read our vision →

Get Started in <5 Minutes →


What's New (Q1 2026)

| Feature | Version | Description | Guide |
|---|---|---|---|
| Image Generation with Gemini | v8.31.0 | Native image generation using Gemini 2.0 Flash Experimental (imagen-3.0-generate-002). High-quality image synthesis directly from Google AI. | Image Generation Guide |
| HTTP/Streamable HTTP Transport | v8.29.0 | Connect to remote MCP servers via HTTP with authentication headers, automatic retry with exponential backoff, and configurable rate limiting. | HTTP Transport Guide |

// Image Generation with Gemini (v8.31.0)
const image = await neurolink.generateImage({
  prompt: "A futuristic cityscape",
  provider: "google-ai",
  model: "imagen-3.0-generate-002",
});

// HTTP Transport for Remote MCP (v8.29.0)
await neurolink.addExternalMCPServer("remote-tools", {
  transport: "http",
  url: "https://mcp.example.com/v1",
  headers: { Authorization: "Bearer token" },
  retries: 3,
  timeout: 15000,
});

Previous Updates (Q4 2025)
  • Image Generation – Generate images from text prompts using Gemini models via Vertex AI or Google AI Studio. → Guide
  • Gemini 3 Preview Support – Full support for gemini-3-flash-preview and gemini-3-pro-preview with extended thinking
  • Structured Output with Zod Schemas – Type-safe JSON generation with automatic validation (see the sketch after this list). → Guide
  • CSV & PDF File Support – Attach CSV/PDF files to prompts with auto-detection. → CSV | PDF
  • LiteLLM & SageMaker – Access 100+ models via LiteLLM, deploy custom models on SageMaker. → LiteLLM | SageMaker
  • OpenRouter Integration – Access 300+ models through a single unified API. → Guide
  • HITL & Guardrails – Human-in-the-loop approval workflows and content filtering middleware. → HITL | Guardrails
  • Redis & Context Management – Session export, conversation history, and automatic summarization. → History
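
As a sketch of the structured-output flow above, assuming the generate call accepts a Zod schema via an option along these lines (the option name "schema" is hypothetical; the linked Guide documents the actual API):

import { z } from "zod";
import { NeuroLink } from "@juspay/neurolink";

const neurolink = new NeuroLink();

// Hypothetical option name: pass a Zod schema so the output is validated JSON
const ReleaseNotes = z.object({
  title: z.string(),
  highlights: z.array(z.string()),
});

const result = await neurolink.generate({
  input: { text: "Draft release notes for the latest version as JSON" },
  schema: ReleaseNotes, // assumed option - see the Structured Output guide
});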

Enterprise Security: Human-in-the-Loop (HITL)

NeuroLink includes a production-ready HITL system for regulated industries and high-stakes AI operations:

| Capability | Description | Use Case |
|---|---|---|
| Tool Approval Workflows | Require human approval before AI executes sensitive tools | Financial transactions, data modifications |
| Output Validation | Route AI outputs through human review pipelines | Medical diagnosis, legal documents |
| Confidence Thresholds | Automatically trigger human review below confidence level | Critical business decisions |
| Complete Audit Trail | Full audit logging for compliance (HIPAA, SOC2, GDPR) | Regulated industries |

import { NeuroLink } from "@juspay/neurolink";

const neurolink = new NeuroLink({
  hitl: {
    enabled: true,
    requireApproval: ["writeFile", "executeCode", "sendEmail"],
    confidenceThreshold: 0.85,
    reviewCallback: async (action, context) => {
      // Custom review logic - integrate with your approval system
      return await yourApprovalSystem.requestReview(action);
    },
  },
});

// AI pauses for human approval before executing sensitive tools
const result = await neurolink.generate({
  input: { text: "Send quarterly report to stakeholders" },
});

Enterprise HITL Guide | Quick Start

Get Started in Two Steps

# 1. Run the interactive setup wizard (select providers, validate keys)
pnpm dlx @juspay/neurolink setup

# 2. Start generating with automatic provider selection
npx @juspay/neurolink generate "Write a launch plan for multimodal chat"

Need a persistent workspace? Launch loop mode with npx @juspay/neurolink loop - Learn more →

🌟 Complete Feature Set

NeuroLink is a comprehensive AI development platform. Every feature below is production-ready and fully documented.

🤖 AI Provider Integration

12 providers unified under one API - Switch providers with a single parameter change.

| Provider | Models | Free Tier | Tool Support | Status | Documentation |
|---|---|---|---|---|---|
| OpenAI | GPT-4o, GPT-4o-mini, o1 | — | ✅ Full | ✅ Production | Setup Guide |
| Anthropic | Claude 3.5/3.7 Sonnet, Opus | — | ✅ Full | ✅ Production | Setup Guide |
| Google AI Studio | Gemini 2.5 Flash/Pro | ✅ Free Tier | ✅ Full | ✅ Production | Setup Guide |
| AWS Bedrock | Claude, Titan, Llama, Nova | — | ✅ Full | ✅ Production | Setup Guide |
| Google Vertex | Gemini 3/2.5 (gemini-3-*-preview) | — | ✅ Full | ✅ Production | Setup Guide |
| Azure OpenAI | GPT-4, GPT-4o, o1 | — | ✅ Full | ✅ Production | Setup Guide |
| LiteLLM | 100+ models unified | Varies | ✅ Full | ✅ Production | Setup Guide |
| AWS SageMaker | Custom deployed models | — | ✅ Full | ✅ Production | Setup Guide |
| Mistral AI | Mistral Large, Small | ✅ Free Tier | ✅ Full | ✅ Production | Setup Guide |
| Hugging Face | 100,000+ models | ✅ Free | ⚠️ Partial | ✅ Production | Setup Guide |
| Ollama | Local models (Llama, Mistral) | ✅ Free (Local) | ⚠️ Partial | ✅ Production | Setup Guide |
| OpenAI Compatible | Any OpenAI-compatible endpoint | Varies | ✅ Full | ✅ Production | Setup Guide |

📖 Provider Comparison Guide - Detailed feature matrix and selection criteria
🔬 Provider Feature Compatibility - Test-based compatibility reference for all 19 features across 12 providers


🔧 Built-in Tools & MCP Integration

6 Core Tools (work across all providers, zero configuration):

| Tool | Purpose | Auto-Available | Documentation |
|---|---|---|---|
| getCurrentTime | Real-time clock access | ✅ | Tool Reference |
| readFile | File system reading | ✅ | Tool Reference |
| writeFile | File system writing | ✅ | Tool Reference |
| listDirectory | Directory listing | ✅ | Tool Reference |
| calculateMath | Mathematical operations | ✅ | Tool Reference |
| websearchGrounding | Google Vertex web search | ⚠️ Requires credentials | Tool Reference |
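
Because core tools require zero configuration, the model can call them inside an ordinary generate request. A minimal sketch:

import { NeuroLink } from "@juspay/neurolink";

const neurolink = new NeuroLink();

// The model may invoke built-in tools (e.g. getCurrentTime, readFile)
// to answer the prompt - no registration or setup needed
const result = await neurolink.generate({
  input: { text: "What time is it now, and list the TODOs in ./notes.md" },
});

console.log(result.content);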

58+ External MCP Servers supported (GitHub, PostgreSQL, Google Drive, Slack, and more):

// stdio transport - local MCP servers via command execution
await neurolink.addExternalMCPServer("github", {
  command: "npx",
  args: ["-y", "@modelcontextprotocol/server-github"],
  transport: "stdio",
  env: { GITHUB_TOKEN: process.env.GITHUB_TOKEN },
});

// HTTP transport - remote MCP servers via URL
await neurolink.addExternalMCPServer("github-copilot", {
  transport: "http",
  url: "https://api.githubcopilot.com/mcp",
  headers: { Authorization: "Bearer YOUR_COPILOT_TOKEN" },
  timeout: 15000,
  retries: 5,
});

// Tools automatically available to AI
const result = await neurolink.generate({
  input: { text: 'Create a GitHub issue titled "Bug in auth flow"' },
});

MCP Transport Options:

| Transport | Use Case | Key Features |
|---|---|---|
| stdio | Local servers | Command execution, environment variables |
| http | Remote servers | URL-based, auth headers, retries, rate limiting |
| sse | Event streams | Server-Sent Events, real-time updates |
| websocket | Bi-directional | Full-duplex communication |
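
The sse and websocket transports register the same way; this sketch assumes they accept the same url/headers options shown in the HTTP example above:

// SSE transport - server-pushed events (options assumed to mirror "http")
await neurolink.addExternalMCPServer("event-tools", {
  transport: "sse",
  url: "https://mcp.example.com/sse",
  headers: { Authorization: "Bearer token" },
});

// WebSocket transport - full-duplex connection (options assumed to mirror "http")
await neurolink.addExternalMCPServer("realtime-tools", {
  transport: "websocket",
  url: "wss://mcp.example.com/ws",
});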

📖 MCP Integration Guide - Setup external servers
📖 HTTP Transport Guide - Remote MCP server configuration


💻 Developer Experience Features

SDK-First Design with TypeScript, IntelliSense, and type safety:

| Feature | Description | Documentation |
|---|---|---|
| Auto Provider Selection | Intelligent provider fallback | SDK Guide |
| Streaming Responses | Real-time token streaming | Streaming Guide |
| Conversation Memory | Automatic context management | Memory Guide |
| Full Type Safety | Complete TypeScript types | Type Reference |
| Error Handling | Graceful provider fallback | Error Guide |
| Analytics & Evaluation | Usage tracking, quality scores | Analytics Guide |
| Middleware System | Request/response hooks | Middleware Guide |
| Framework Integration | Next.js, SvelteKit, Express | Framework Guides |
| Extended Thinking | Native thinking/reasoning mode for Gemini 3 and Claude models | Thinking Guide |
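
Streaming on the SDK side pairs with the CLI's stream command; here is a sketch assuming a stream() method that yields text chunks (the method name and chunk shape are assumptions, the Streaming Guide documents the actual API):

import { NeuroLink } from "@juspay/neurolink";

const neurolink = new NeuroLink();

// Assumed API: stream() returns an async iterable of token chunks
const stream = await neurolink.stream({
  input: { text: "Tell a short story about a lighthouse keeper" },
});

for await (const chunk of stream) {
  process.stdout.write(chunk.content ?? "");
}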

🏢 Enterprise & Production Features

Production-ready capabilities for regulated industries:

| Feature | Description | Use Case | Documentation |
|---|---|---|---|
| Enterprise Proxy | Corporate proxy support | Behind firewalls | Proxy Setup |
| Redis Memory | Distributed conversation state | Multi-instance deployment | Redis Guide |
| Cost Optimization | Automatic cheapest model selection | Budget control | Cost Guide |
| Multi-Provider Failover | Automatic provider switching | High availability | Failover Guide |
| Telemetry & Monitoring | OpenTelemetry integration | Observability | Telemetry Guide |
| Security Hardening | Credential management, auditing | Compliance | Security Guide |
| Custom Model Hosting | SageMaker integration | Private models | SageMaker Guide |
| Load Balancing | LiteLLM proxy integration | Scale & routing | Load Balancing |

Security & Compliance:

  • ✅ SOC2 Type II compliant deployments
  • ✅ ISO 27001 certified infrastructure compatible
  • ✅ GDPR-compliant data handling (EU providers available)
  • ✅ HIPAA compatible (with proper configuration)
  • ✅ Hardened OS verified (SELinux, AppArmor)
  • ✅ Zero credential logging
  • ✅ Encrypted configuration storage

📖 Enterprise Deployment Guide - Complete production checklist


Enterprise Persistence: Redis Memory

Production-ready distributed conversation state for multi-instance deployments:

Capabilities

| Feature | Description | Benefit |
|---|---|---|
| Distributed Memory | Share conversation context across instances | Horizontal scaling |
| Session Export | Export full history as JSON | Analytics, debugging, audit |
| Auto-Detection | Automatic Redis discovery from environment | Zero-config in containers |
| Graceful Failover | Falls back to in-memory if Redis unavailable | High availability |
| TTL Management | Configurable session expiration | Memory management |

Quick Setup

import { NeuroLink } from "@juspay/neurolink";

// Auto-detect Redis from REDIS_URL environment variable
const neurolink = new NeuroLink({
  conversationMemory: {
    enabled: true,
    store: "redis", // Automatically uses REDIS_URL
    ttl: 86400, // 24-hour session expiration
  },
});

// Or explicit configuration
const neurolinkExplicit = new NeuroLink({
  conversationMemory: {
    enabled: true,
    store: "redis",
    redis: {
      host: "redis.example.com",
      port: 6379,
      password: process.env.REDIS_PASSWORD,
      tls: true, // Enable for production
    },
  },
});

// Export conversation for analytics
const history = await neurolink.exportConversation({ format: "json" });
await saveToDataWarehouse(history);

Docker Quick Start

# Start Redis
docker run -d --name neurolink-redis -p 6379:6379 redis:7-alpine

# Configure NeuroLink
export REDIS_URL=redis://localhost:6379

# Start your application
node your-app.js

Redis Setup Guide | Production Configuration | Migration Patterns


🎨 Professional CLI

15+ commands for every workflow:

| Command | Purpose | Example | Documentation |
|---|---|---|---|
| setup | Interactive provider configuration | neurolink setup | Setup Guide |
| generate | Text generation | neurolink gen "Hello" | Generate |
| stream | Streaming generation | neurolink stream "Story" | Stream |
| status | Provider health check | neurolink status | Status |
| loop | Interactive session | neurolink loop | Loop |
| mcp | MCP server management | neurolink mcp discover | MCP CLI |
| models | Model listing | neurolink models | Models |
| eval | Model evaluation | neurolink eval | Eval |
📖 Complete CLI Reference - All commands and options

💰 Smart Model Selection

NeuroLink features intelligent model selection and cost optimization:

Cost Optimization Features

  • 💰 Automatic Cost Optimization: Selects cheapest models for simple tasks
  • 🔄 LiteLLM Model Routing: Access 100+ models with automatic load balancing
  • 🔍 Capability-Based Selection: Find models with specific features (vision, function calling)
  • ⚡ Intelligent Fallback: Seamless switching when providers fail

# Cost optimization - automatically use cheapest model
npx @juspay/neurolink generate "Hello" --optimize-cost

# LiteLLM specific model selection
npx @juspay/neurolink generate "Complex analysis" --provider litellm --model "anthropic/claude-3-5-sonnet"

# Auto-select best available provider
npx @juspay/neurolink generate "Write code" # Automatically chooses optimal provider

Revolutionary Interactive CLI

NeuroLink's CLI goes beyond simple commands - it's a full AI development environment:

Why Interactive Mode Changes Everything

| Feature | Traditional CLI | NeuroLink Interactive |
|---|---|---|
| Session State | None | Full persistence |
| Memory | Per-command | Conversation-aware |
| Configuration | Flags per command | /set persists across session |
| Tool Testing | Manual per tool | Live discovery & testing |
| Streaming | Optional | Real-time default |

Live Demo: Development Session

$ npx @juspay/neurolink loop --enable-conversation-memory

neurolink > /set provider vertex
✓ provider set to vertex (Gemini 3 support enabled)

neurolink > /set model gemini-3-flash-preview
✓ model set to gemini-3-flash-preview

neurolink > Analyze my project architecture and suggest improvements

✓ Analyzing your project structure...
[AI provides detailed analysis, remembering context]

neurolink > Now implement the first suggestion
[AI remembers previous context and implements suggestion]

neurolink > /mcp discover
✓ Discovered 58 MCP tools:
   GitHub: create_issue, list_repos, create_pr...
   PostgreSQL: query, insert, update...
   [full list]

neurolink > Use the GitHub tool to create an issue for this improvement
✓ Creating issue... (requires HITL approval if configured)

neurolink > /export json > session-2026-01-01.json
✓ Exported 15 messages to session-2026-01-01.json

neurolink > exit
Session saved. Resume with: neurolink loop --session session-2026-01-01.json

Session Commands Reference

| Command | Purpose |
|---|---|
| /set <key> <value> | Persist configuration (provider, model, temperature) |
| /mcp discover | List all available MCP tools |
| /export json | Export conversation to JSON |
| /history | View conversation history |
| /clear | Clear context while keeping settings |

Interactive CLI Guide | CLI Reference

Skip the wizard and configure manually? See docs/getting-started/provider-setup.md.

CLI & SDK Essentials

The neurolink CLI mirrors the SDK, so teams can script experiments and codify them later.

# Discover available providers and models
npx @juspay/neurolink status
npx @juspay/neurolink models list --provider google-ai

# Route to a specific provider/model
npx @juspay/neurolink generate "Summarize customer feedback" \
  --provider azure --model gpt-4o-mini

# Turn on analytics + evaluation for observability
npx @juspay/neurolink generate "Draft release notes" \
  --enable-analytics --enable-evaluation --format json

import { NeuroLink } from "@juspay/neurolink";

const neurolink = new NeuroLink({
  conversationMemory: {
    enabled: true,
    store: "redis",
  },
  enableOrchestration: true,
});

const result = await neurolink.generate({
  input: {
    text: "Create a comprehensive analysis",
    files: [
      "./sales_data.csv", // Auto-detected as CSV
      "examples/data/invoice.pdf", // Auto-detected as PDF
      "./diagrams/architecture.png", // Auto-detected as image
    ],
  },
  provider: "vertex", // PDF-capable provider (see docs/features/pdf-support.md)
  enableEvaluation: true,
  region: "us-east-1",
});

console.log(result.content);
console.log(result.evaluation?.overallScore);

Gemini 3 with Extended Thinking

import { NeuroLink } from "@juspay/neurolink";

const neurolink = new NeuroLink();

// Use Gemini 3 with extended thinking for complex reasoning
const result = await neurolink.generate({
  input: {
    text: "Solve this step by step: What is the optimal strategy for...",
  },
  provider: "vertex",
  model: "gemini-3-flash-preview",
  thinkingLevel: "medium", // Options: "minimal", "low", "medium", "high"
});

console.log(result.content);

Full command and API breakdown lives in docs/cli/commands.md and docs/sdk/api-reference.md.

Platform Capabilities at a Glance

| Capability | Highlights |
|---|---|
| Provider unification | 12+ providers with automatic fallback, cost-aware routing, provider orchestration (Q3). |
| Multimodal pipeline | Stream images + CSV data + PDF documents across providers with local/remote assets. Auto-detection for mixed file types. |
| Quality & governance | Auto-evaluation engine (Q3), guardrails middleware (Q4), HITL workflows (Q4), audit logging. |
| Memory & context | Conversation memory, Mem0 integration, Redis history export (Q4), context summarization (Q4). |
| CLI tooling | Loop sessions (Q3), setup wizard, config validation, Redis auto-detect, JSON output. |
| Enterprise ops | Proxy support, regional routing (Q3), telemetry hooks, configuration management. |
| Tool ecosystem | MCP auto discovery, HTTP/stdio/SSE/WebSocket transports, LiteLLM hub access, SageMaker custom deployment, web search. |

Documentation Map

| Area | When to Use | Link |
|---|---|---|
| Getting started | Install, configure, run first prompt | docs/getting-started/index.md |
| Feature guides | Understand new functionality front-to-back | docs/features/index.md |
| CLI reference | Command syntax, flags, loop sessions | docs/cli/index.md |
| SDK reference | Classes, methods, options | docs/sdk/index.md |
| Integrations | LiteLLM, SageMaker, MCP, Mem0 | docs/litellm-integration.md |
| Advanced | Middleware, architecture, streaming patterns | docs/advanced/index.md |
| Cookbook | Practical recipes for common patterns | docs/cookbook/index.md |
| Guides | Migration, Redis, troubleshooting, provider selection | docs/guides/index.md |
| Operations | Configuration, troubleshooting, provider matrix | docs/reference/index.md |

New in 2026: Enhanced Documentation

Expanded guides now cover enterprise features, provider intelligence, the middleware system, Redis & persistence, migration, developer experience, and integrations; see the Documentation Map above for entry points.

Contributing & Support


NeuroLink is built with ❤️ by Juspay. Contributions, questions, and production feedback are always welcome.
