sir-thaddeus

A privacy-first, local AI copilot and permissioned agent runtime for Windows. Powered by the Model Context Protocol (MCP).

GitHub · Stars: 6 · Forks: 1 · Updated: Mar 4, 2026 · Validated: Mar 5, 2026
Sir Thaddeus local AI copilot logo

Sir Thaddeus

Privacy-first, permission-based local AI copilot for Windows.

Latest release · Apache 2.0 license · Platform: Windows 10 and 11 · Supports local models and LM Studio


Local-First AI Copilot for Windows

Sir Thaddeus runs on your machine.

Sir Thaddeus is a local AI assistant for Windows built for people who want useful AI without giving up control. It connects to local language models (for example, LM Studio), uses permission-based tool access, and keeps execution visible to the operator.

No telemetry by default. No silent background autonomy. No hidden actions.

If it acts, you see it. If you press STOP, it stops.


Documentation


Why Sir Thaddeus?

Most everyday AI tasks do not need a massive cloud model.

They need something that is:

  • Private
  • Reliable
  • Fast enough on modest hardware
  • Simple to run
  • Respectful of user boundaries

Sir Thaddeus lowers the barrier to entry for local AI on Windows while keeping the user in charge.


What It Feels Like to Use

Hold the push-to-talk hotkey and say:

"When is the local grocery store open?"

Before doing anything, Sir Thaddeus proposes the next step. You can see:

  • What access is requested
  • Why it is needed
  • How long the permission lasts

You approve. It runs. You get the result. Permission expires.
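A grant like that can be modeled as a small, time-boxed record. A minimal sketch of the idea; the `PermissionGrant` shape and field names here are illustrative, not Sir Thaddeus's actual types:

```typescript
// Hypothetical shape of a time-boxed permission grant (illustrative only).
interface PermissionGrant {
  tool: string;      // what access is requested, e.g. "web.search"
  reason: string;    // why it is needed, shown to the user in the prompt
  grantedAt: number; // epoch ms at the moment the user approved
  ttlMs: number;     // how long the permission lasts
}

// A grant is only honored while its time window is open.
function isActive(grant: PermissionGrant, now: number = Date.now()): boolean {
  return now - grant.grantedAt < grant.ttlMs;
}

const grant: PermissionGrant = {
  tool: "web.search",
  reason: "Look up the grocery store's opening hours",
  grantedAt: Date.now(),
  ttlMs: 60_000, // expires one minute after approval
};
```

Once `isActive` returns false, the runtime would have to prompt again rather than reuse the old approval.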

That same interaction model applies throughout the runtime:

  • Nothing runs silently
  • Nothing lingers in the background without approval
  • Every important action is recorded locally

Sir Thaddeus desktop UI showing permission-based local AI workflow

Features

Voice and Interface

  • Push-to-talk voice input with release-to-send behavior
  • Command palette for keyboard-first workflows
  • Global STOP kill switch to halt active execution
  • Tray-first Windows experience with local desktop controls

Local AI Runtime

  • Local LLM integration through LM Studio and OpenAI-compatible endpoints
  • Reasoning pipeline for breaking down logic questions step by step
  • Small-model support with routing assistance for better tool use
  • Lightweight document reading for text-based context
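LM Studio exposes an OpenAI-compatible HTTP API, served locally (by default at http://localhost:1234/v1). A minimal sketch of what a chat-completion call from the runtime might look like; the endpoint URL and model name are assumptions about a typical local setup, not Sir Thaddeus's actual client code:

```typescript
// Build an OpenAI-style chat completion request for a local endpoint.
// The URL and default model name are assumptions for a typical LM Studio setup.
const LMSTUDIO_URL = "http://localhost:1234/v1/chat/completions";

function buildChatRequest(userText: string, model = "local-model") {
  return {
    model,
    messages: [
      { role: "system", content: "You are a helpful local assistant." },
      { role: "user", content: userText },
    ],
    temperature: 0.2,
  };
}

// Sending it (requires LM Studio's local server to be running):
// const res = await fetch(LMSTUDIO_URL, {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body: JSON.stringify(buildChatRequest("When is the local grocery store open?")),
// });
// const data = await res.json();
```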

Permissioned Tooling via MCP

  • Web search and browser actions
  • Screen reading and active-window context
  • Read-only file listing and reading with limits
  • Allowlisted system actions
  • Built-in utilities for math, conversions, and structured lookups
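Allowlisting system actions can be as simple as checking each requested action against a fixed set before it ever reaches execution, refusing anything not explicitly listed. A minimal sketch; the action names are invented for illustration:

```typescript
// Hypothetical allowlist gate: only explicitly listed system actions may run.
const ALLOWED_ACTIONS = new Set(["open_app", "set_volume", "lock_screen"]);

function gateSystemAction(action: string): { allowed: boolean; reason: string } {
  if (ALLOWED_ACTIONS.has(action)) {
    return { allowed: true, reason: "action is on the allowlist" };
  }
  // Fail closed: anything not explicitly listed is refused.
  return { allowed: false, reason: `"${action}" is not allowlisted` };
}
```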

Trust and Safety

  • Explicit permission prompts before tool execution
  • Time-boxed permission tokens
  • Local audit logging
  • Fail-closed behavior when something goes sideways
  • Tool budgets to prevent runaway loops and token burn
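A tool budget puts a hard ceiling on how many tool calls a single run may make; once the budget is spent, the run fails closed instead of looping. A minimal sketch of the idea (names and the limit are illustrative):

```typescript
// Hypothetical per-run tool budget: fails closed when the ceiling is reached.
class ToolBudget {
  private used = 0;
  constructor(private readonly maxCalls: number) {}

  // Returns true and consumes one call if budget remains; false otherwise.
  trySpend(): boolean {
    if (this.used >= this.maxCalls) return false; // budget exhausted: refuse
    this.used += 1;
    return true;
  }

  get remaining(): number {
    return this.maxCalls - this.used;
  }
}

const budget = new ToolBudget(3);
```

An agent loop would call `trySpend()` before every tool invocation and abort the run when it returns false.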

Optional Connected Services

  • Background watchers for website changes
  • Local notifications for monitored events

Quick Start

No cloud account required.

  1. Go to the Releases page
  2. Download the latest release ZIP
  3. Unzip the archive
  4. Run SirThaddeus.exe
    Windows SmartScreen may appear. If so, choose More info -> Run anyway

  5. Start your local model runner
    Tested primarily with LM Studio
  6. Complete first-run setup inside the app

That is it.


Core Principles

1. You are in control

Sir Thaddeus proposes actions. You approve them.

2. Nothing runs silently

If it acts, you can see it.

3. STOP always works

The kill switch revokes permissions and halts execution immediately.

Sir Thaddeus is not designed to replace your judgment. It is designed to extend your capability without taking away your agency.
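A global STOP of this kind can be wired through a cancellation signal that every in-flight action checks; Node's built-in `AbortController` is one natural fit. A minimal sketch of the pattern, not Sir Thaddeus's actual implementation:

```typescript
// Sketch of a kill switch: one controller, observed by every running action.
const killSwitch = new AbortController();

// Each step checks the shared signal before (and, in practice, during) work.
function runStep(name: string, signal: AbortSignal): string {
  if (signal.aborted) {
    throw new Error(`step "${name}" halted: STOP was pressed`);
  }
  return `step "${name}" completed`;
}
```

Pressing STOP maps to `killSwitch.abort()`: every step that consults the signal afterwards refuses to proceed.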


Architecture

Sir Thaddeus uses a five-layer architecture that separates loop control, interface, model access, tools, and voice runtime.

Execution loop: propose -> validate -> execute -> observe -> verify -> repair -> repeat

flowchart LR
  subgraph loop [Layer 1: Loop - packages/agent]
    Loop[Bounded Agent Loop]
    Context[Run Context and History]
    Router[Intent Router]
    Gate[Policy Gate]
    Validate[Action and Completion Validation]
    Repair[Targeted Repair]
  end

  subgraph frontend [Layer 2: Interface - apps/desktop-runtime]
    Tray[System Tray]
    Overlay[WPF Overlay]
    PTT[Audio Input]
    Playback[Audio Playback]
    Palette[Command Palette]
  end

  subgraph model [Layer 3: Model - packages/llm-client]
    LmStudio[LM Studio / OpenAI-compatible]
  end

  subgraph tools [Layer 4: Tools - apps/mcp-server + packages/memory + memory-sqlite]
    Server[MCP Server - stdio]
    Toolset[Browser / File / System / Screen / WebSearch / Weather / Utilities]
    Memory[SQLite Memory and Retrieval]
  end

  subgraph voice [Layer 5: Voice - apps/voice-host + voice-backend]
    VoiceHost[VoiceHost Proxy]
    VoiceBackend[Voice Backend - Python]
    VoiceBackend --> VoiceHost
  end

  PTT -->|audio buffer| VoiceHost
  VoiceHost -->|transcribed text| Loop
  Palette -->|typed request| Loop

  Loop --> Router --> Gate
  Gate -->|allowed tools + budgets| Loop

  Loop -->|model prompt| LmStudio
  LmStudio -->|tool_calls / next action| Loop

  Loop --> Validate
  Validate -->|blocked/ok| Loop
  Validate -->|complete/partial/missing| Repair
  Repair -->|targeted follow-up| Loop

  Loop -->|tools/call| Server
  Server --> Toolset
  Server --> Memory
  Server -->|tool result| Loop

  Loop -->|final text| VoiceHost
  VoiceHost -->|audio stream| Playback

  Loop -->|events| Overlay
  Tray --> Overlay
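The bounded loop at the center of that diagram can be sketched as a capped iteration in which each pass proposes an action, gates it, executes it, and checks for completion, with blocked or incomplete passes feeding back into the next one. The step functions and the iteration cap below are stand-ins, not the real packages/agent code:

```typescript
// Illustrative bounded agent loop mirroring propose -> validate -> execute
// -> observe -> verify -> repair -> repeat. Step functions are stand-ins.
type StepResult = { done: boolean; output: string };

function runLoop(
  propose: (history: string[]) => string,
  validate: (action: string) => boolean,
  execute: (action: string) => StepResult,
  maxIterations = 5, // bounded: the loop can never run away
): string {
  const history: string[] = [];
  for (let i = 0; i < maxIterations; i++) {
    const action = propose(history);        // propose next action
    if (!validate(action)) {                // policy gate / validation
      history.push(`blocked: ${action}`);   // repair: feed the block back
      continue;
    }
    const result = execute(action);         // execute and observe
    history.push(result.output);
    if (result.done) return result.output;  // verify: completion reached
  }
  return "budget exhausted";                // fail closed at the bound
}
```

The key property is the hard `maxIterations` bound: even a confused model cannot keep the loop spinning past it.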

Layer Responsibilities

Layer | Project(s) | Responsibility | Talks to
Layer 1: Loop | packages/agent | Route, gate, validate, repair, complete | Interface, Model, Tools, Voice
Layer 2: Interface | apps/desktop-runtime | Tray, overlay, hotkeys, command palette, push-to-talk UX | Loop, Voice
Layer 3: Model | packages/llm-client | OpenAI-style model calls and embeddings | LM Studio, Loop
Layer 4: Tools | apps/mcp-server, packages/memory, packages/memory-sqlite | MCP tools plus local memory retrieval/storage | Loop
Layer 5: Voice | apps/voice-host, apps/voice-backend | Local ASR and TTS transport/runtime | Interface, Loop

Project Structure

sir-thaddeus/
|-- apps/
|   |-- desktop-runtime/
|   |-- voice-host/
|   |-- voice-backend/
|   `-- mcp-server/
|-- assets/
|-- packages/
|-- tests/
|-- tools/
`-- project-notes/

Technical Notes

  • Tested primarily with LM Studio and smaller local models
  • Other local runtimes may work, but support may vary
  • Smaller reasoning models can take longer to respond, especially in deeper thinking modes
  • The runtime is designed around permissioned execution, local visibility, and practical reliability

Who This Is For

Sir Thaddeus is for:

  • Developers exploring local AI tooling
  • Privacy-conscious users who want AI on Windows without telemetry
  • Builders interested in MCP architecture, tool routing, and permissioned agents
  • Anyone who wants an AI copilot they can actually control

It is not intended to be an unbounded autonomous agent that runs freely on your machine.


License

Licensed under Apache 2.0. See LICENSE for details.
