
Launch Engine


Agentic Business Pipeline OS as an MCP server. Full pipeline from idea to revenue — for solo founders and bootstrappers.

npx -y launch-engine-mcp

Launch Engine Demo


Why Launch Engine?

Most MCP servers give you one tool. A GitHub integration. A database query. A Slack bot.

Launch Engine gives you 35 tools that work as a pipeline — the entire playbook from raw idea to validated revenue, running inside the AI client you already use.

  • No more blank-page paralysis. Start with scout and the system tells you exactly what to do next, every step of the way.
  • Every stage feeds the next. Buyer research flows into offer design. Offer design flows into campaign copy. Campaign copy flows into validation. Nothing is wasted.
  • Math before assets. Unit economics are validated before you build anything. You'll never spend weeks building an offer that can't work at your budget.
  • Test ideas for $50, not $5,000. rapid_test gives you signal in 3-5 days with a landing page and paid traffic — before you commit to the full pipeline.
  • Your AI becomes a co-founder, not a chatbot. It doesn't just answer questions. It executes a structured business system with you.

Install

npm install -g launch-engine-mcp

Or run directly without installing:

npx -y launch-engine-mcp

Quick Start

Claude Desktop

Add to your claude_desktop_config.json:

{
  "mcpServers": {
    "launch-engine": {
      "command": "npx",
      "args": ["-y", "launch-engine-mcp"],
      "env": {
        "LAUNCH_ENGINE_PROJECT_DIR": "/path/to/your/project"
      }
    }
  }
}

Cursor

Add to your MCP settings (.cursor/mcp.json):

{
  "mcpServers": {
    "launch-engine": {
      "command": "npx",
      "args": ["-y", "launch-engine-mcp"],
      "env": {
        "LAUNCH_ENGINE_PROJECT_DIR": "/path/to/your/project"
      }
    }
  }
}

From Source

git clone https://github.com/ZionHopkins/launch-engine-mcp.git
cd launch-engine-mcp
npm install
npm run build
node dist/index.js

How It Works

Launch Engine is a two-layer tool system:

Layer A — 35 SOP Tools (read-only): Each tool validates prerequisites against pipeline-state.json, loads upstream context from previous stages, checks learnings.json for patterns, and returns full SOP instructions enriched with that context. Your AI executes the instructions.

Layer B — 3 Utility Tools (mutations): update_pipeline_state, save_asset, capture_learning. These handle all state writes and file creation. Your AI calls them after executing each SOP.
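Concretely, one pipeline step is usually a single Layer A call followed by Layer B writes. A hypothetical call sequence is sketched below; only the tool names come from this README, while the argument names and values are assumptions for illustration:

```json
[
  { "tool": "scout",
    "arguments": { "idea": "AI-generated meal plans for shift workers" } },
  { "tool": "save_asset",
    "arguments": { "path": "research/scout-report.md", "content": "..." } },
  { "tool": "update_pipeline_state",
    "arguments": { "path": "stages.scout.status", "value": "complete" } }
]
```

The AI executes the SOP returned by the first call, then persists results with the two mutation tools.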

The Pipeline

Three entry points:

1. scout        → Full pipeline (research → offer → build → deploy → validate)
2. rapid_test   → Quick $50-100 test (signal in 3-5 days)
3. passive_deploy → Marketplace assets (after research)

Full Pipeline Flow

LAYER 1 (Strategist):
  scout → autonomy → market_intel → research → build_blocks → stress_test → unit_economics

LAYER 2 (Builder):
  name_lock → platform + product → deploy → qa → validate_prep

LAYER 3 (Validator):
  validate_check (daily) → validate_decide → feedback → iterate

TRAFFIC:
  traffic_strategy → channels → creative_test → funnel_optimize → scale

CROSS-CUTTING:
  status | daily_check | lessons | voice_extract | dream_100

Each tool checks prerequisites automatically. If you try to run research before completing market_intel, you'll get a clear STAGE_BLOCKED message telling you exactly what to run first.
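That gate can be pictured with a minimal sketch. The shape of pipeline-state.json, the prerequisite map, and the exact STAGE_BLOCKED wording are assumptions here, not the server's actual implementation; the prerequisite pairs themselves come from the tools table below.

```typescript
// Sketch of the prerequisite gate. State shape and message
// format are assumptions, not the server's real internals.
type PipelineState = { completed: string[] };

const PREREQS: Record<string, string[]> = {
  market_intel: ["scout", "autonomy"],
  research: ["market_intel"],
  build_blocks: ["research"],
};

// Returns null when the tool may run, else a STAGE_BLOCKED message.
function checkPrerequisites(tool: string, state: PipelineState): string | null {
  const missing = (PREREQS[tool] ?? []).filter(
    (stage) => !state.completed.includes(stage)
  );
  if (missing.length === 0) return null;
  return `STAGE_BLOCKED: run ${missing.join(", ")} before ${tool}`;
}
```

So calling research with only scout and autonomy complete would block on market_intel, while entry-point tools with no prerequisites always pass.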

Tools Reference

SOP Tools (35)

| Tool | Description | Prerequisites |
| --- | --- | --- |
| scout | Market scanning — takes a raw idea, determines viability | None (entry point) |
| autonomy | Agent Autonomy Score — AI-buildable product viability | scout |
| market_intel | Deep market research with competitive scoring | scout, autonomy |
| research | Therapeutic Buyer Engine — deep persona research | market_intel |
| build_blocks | 7 Building Blocks from buyer research | research |
| stress_test | Offer scoring across 10 dimensions | build_blocks |
| unit_economics | CPA, LTV, break-even modeling | stress_test |
| name_lock | Lock business/product name | stress_test, unit_economics |
| platform | Tech stack selection and scoring | stress_test |
| product | Product architecture design | stress_test, name_lock |
| deploy | Sales pages, emails, ad copy generation | name_lock, platform, product |
| qa | 7-check persona alignment gate | deploy |
| validate_prep | Validation deployment package | deploy, qa |
| validate_check | Daily 60-second health check | validate_prep |
| validate_decide | End-of-window verdict | validate_prep |
| feedback | Performance diagnosis and fix routing | deploy |
| traffic_strategy | Traffic channel research and scoring | deploy |
| channels | Channel setup and configuration | traffic_strategy |
| creative_test | Ad creative variation testing | channels |
| funnel_optimize | CRO testing across conversion funnel | channels |
| scale | Systematic scaling of validated channels | creative_test |
| traffic_analytics | Performance reporting and attribution | channels |
| dream_100 | Relationship strategy and outreach | research |
| passive_deploy | Marketplace asset scoring and specs | research |
| passive_check | Scheduled performance checks | passive_deploy |
| passive_compound | Deploy related assets around anchors | passive_deploy |
| passive_portfolio | Quarterly portfolio review | passive_deploy |
| rapid_test | Quick idea test — landing page + ads | None (entry point) |
| rapid_check | Daily metrics vs. thresholds | rapid_test |
| rapid_graduate | Graduate test to full pipeline | rapid_check |
| rapid_status | Dashboard of all rapid tests | None |
| status | Pipeline status report | None |
| daily_check | 5-minute daily operations pulse | Live campaigns |
| lessons | Pattern library — capture and retrieve | None |
| voice_extract | Brand voice extraction from content | qa |

Utility Tools (3)

| Tool | Description |
| --- | --- |
| update_pipeline_state | Update pipeline-state.json with dot-notation paths |
| save_asset | Save files to assets/[market-name]/ directory |
| capture_learning | Capture reusable patterns to learnings.json |

Project Directory Structure

Launch Engine creates and manages files in your project directory:

your-project/
├── pipeline-state.json      # Pipeline progress tracking
├── learnings.json            # Pattern library across pipelines
└── assets/
    └── [market-name]/
        ├── research/         # Scout reports, buyer research, market intel
        ├── building-blocks/  # The 7 Building Blocks
        ├── product/          # Product Architecture Blueprint
        ├── copy/             # Sales letters, email sequences
        ├── campaigns/        # Landing pages, ad copy
        ├── traffic/          # Traffic strategy, creative tests, analytics
        ├── validation/       # Deployment packages, daily checks, verdicts
        ├── voice/            # Brand voice calibration
        ├── passive-portfolio/ # PADA outputs
        └── rapid-test/       # Rapid test assets
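The README does not publish the schema of pipeline-state.json. As a rough sketch only, stage tracking could look something like the following; every key name here is an assumption:

```json
{
  "market": "example-market",
  "entryPoint": "scout",
  "stages": {
    "scout": { "status": "complete" },
    "autonomy": { "status": "in_progress" }
  }
}
```

Whatever the real shape, it is the single source of truth the SOP tools read for prerequisite checks and the utility tools write after each stage.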

Configuration

The project directory is resolved in order:

  1. LAUNCH_ENGINE_PROJECT_DIR environment variable
  2. --project-dir= CLI argument
  3. Current working directory
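The three-step resolution above can be sketched as a small function. The env var name and CLI flag come from this README; the function name and internals are assumptions:

```typescript
import process from "node:process";

// Sketch of the documented resolution order:
// env var, then --project-dir= flag, then the current directory.
function resolveProjectDir(
  argv: string[],
  env: Record<string, string | undefined>
): string {
  if (env.LAUNCH_ENGINE_PROJECT_DIR) return env.LAUNCH_ENGINE_PROJECT_DIR;
  const flag = argv.find((a) => a.startsWith("--project-dir="));
  if (flag) return flag.slice("--project-dir=".length);
  return process.cwd();
}
```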

First Use

When you run status with no existing pipeline, you'll see:

Three paths available:

  1. rapid_test — $50-100 paid traffic test in 3-5 days
  2. scout — Full active pipeline with deep research and validation
  3. passive_deploy — Marketplace assets (requires research first)

Best Practices

Getting Started

  • Start with status — always run this first. It reads your pipeline state and tells you exactly where you are and what to do next.
  • New idea? Use rapid_test first — don't run the full pipeline on an unvalidated idea. Spend $50-100 to get signal in 3-5 days. If it graduates, then run scout.
  • One pipeline at a time — you can run multiple rapid tests in parallel, but focus on one full pipeline at a time. Context switching kills momentum.

During the Pipeline

  • Follow the order — the prerequisite system exists for a reason. Each stage feeds the next. Skipping market_intel means research has no competitive context. Skipping stress_test means you might build assets for a broken offer.
  • Don't skip qa — it catches promise-product misalignment, unattributed statistics, and persona drift. Every asset that touches a buyer must clear the QA gate.
  • Run daily_check every day during validation — it takes 60 seconds and catches problems before they burn budget.
  • Use lessons after every major decision — verdicts (ADVANCE/KILL), graduated rapid tests, creative test winners. The pattern library makes every future pipeline smarter.

Working with the AI

  • Let the AI execute the full SOP — each tool returns complete instructions. Don't interrupt midway. Let it finish the research, generate the deliverables, and save the files.
  • Review Tier 3/4 decisions carefully — the system will pause and ask for your input on market selection, pricing, kill decisions, and anything involving real money. These pauses are intentional.
  • Trust the math — unit_economics will tell you if the numbers work at your budget. If the verdict is NON-VIABLE, don't try to force it. Move on or adjust the offer.

Scaling

  • Validate before you scale — scale requires proven creative winners with 30+ conversions. Scaling unvalidated campaigns is the fastest way to burn money.
  • Compound your learnings — passive assets that reach ANCHOR status should trigger passive_compound. One proven asset can spawn 5-10 related assets.
  • Run traffic_analytics weekly — attribution drift happens. What worked last week may not work next week. Stay on top of the data.

Common Mistakes to Avoid

  • Don't build assets before stress_test passes — a GO verdict means the offer is structurally sound. REVISE or REBUILD means fix the foundation first.
  • Don't skip name_lock — changing the business name after assets are built means rebuilding everything. Lock it early.
  • Don't ignore KILL signals — if rapid test metrics hit kill thresholds, kill it. If validation says KILL, capture the lessons and move on. Sunk cost is not a strategy.
  • Don't publish without qa clearance — unvetted copy with unattributed claims or persona misalignment damages trust and conversion rates.
  • Don't run the full pipeline for every idea — that's what rapid_test is for. Test 5-10 ideas cheaply, then invest the full pipeline in the winner.

Listings

Listed on MCP Server Hub | MCP Registry

License

MIT
