# vivesca
An information metabolism engine. Every text fragment loaded into an LLM's context window is energy — tokens consumed, output influenced, budget spent. Vivesca applies continuous selection pressure so that energy is never wasted.
The MCP server is one interface. The metabolism is the identity.
## Install

```bash
uv add vivesca
```
## What it provides
- 25 tools across 8 domains — messaging, search, calendar, memory, browser, health, metabolism
- Structured output — every tool returns Pydantic models with `outputSchema`
- Metabolism engine — signal collection, fitness computation, variant evolution, promotion gates
- 4 resources — budget, calendar, search log, memory stats
- 3 prompts — research, draft message, morning brief
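The structured-output bullet above can be sketched as follows. `SearchResult` and its fields are hypothetical, not vivesca's actual models; the point is that a Pydantic v2 model yields a JSON Schema via `model_json_schema()`, which is the kind of schema an MCP server can advertise as a tool's `outputSchema`.

```python
# Hypothetical structured tool result; names are illustrative,
# not vivesca's real API.
from pydantic import BaseModel

class SearchResult(BaseModel):
    query: str
    url: str
    snippet: str
    tokens_spent: int  # every fragment loaded is budgeted energy

result = SearchResult(
    query="information metabolism",
    url="https://example.org",
    snippet="Tokens are energy; text is mass.",
    tokens_spent=128,
)

# A Pydantic v2 model emits JSON Schema, usable as a tool's outputSchema.
schema = SearchResult.model_json_schema()
```

Returning typed models instead of loose dicts is what makes every tool's output machine-checkable before it spends context budget.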
## Hypothesis
Should LLM systems manage their own context? Vivesca tests whether they should, and how.
Claim: Every text fragment in an LLM's context is energy under selection pressure. Continuous metabolism — governed by taste, not just metrics — outperforms manual optimization.
Predictions:
- Frequency-based pruning creates fragile systems (constitutive knowledge looks unused but is load-bearing)
- Taste-based fitness outperforms pure token efficiency metrics
- Systems that metabolize all loaded text outperform systems that optimize only some
Status: Hypothesis with one implementation, zero external validation. The signals accumulating now are the first real data.
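The first two predictions can be illustrated with a toy model: frequency-only pruning culls a load-bearing fragment that a taste-weighted fitness retains. All names, weights, and thresholds here are illustrative assumptions, not vivesca's actual fitness function.

```python
# Toy contrast of frequency-based pruning vs. taste-weighted fitness.
from dataclasses import dataclass

@dataclass
class Fragment:
    name: str
    tokens: int
    uses: int      # observed references in recent outputs
    taste: float   # curator judgment in [0, 1], not a usage metric

charter = Fragment("system charter", tokens=50, uses=0, taste=0.95)
chatter = Fragment("stale chit-chat", tokens=50, uses=3, taste=0.05)
pool = [charter, chatter]

def prune_by_frequency(pool, min_uses=1):
    # Constitutive knowledge looks unused, so frequency drops it.
    return [f for f in pool if f.uses >= min_uses]

def prune_by_taste_fitness(pool, gate=0.005):
    # Taste-weighted value per token rescues the unused charter.
    return [f for f in pool if (f.uses + 1) * f.taste / f.tokens >= gate]

freq_survivors = prune_by_frequency(pool)        # keeps only the chatter
taste_survivors = prune_by_taste_fitness(pool)   # keeps only the charter
```

The frequency pruner builds exactly the fragile system prediction 1 describes: the charter scores zero on usage and is discarded, while the taste-weighted gate keeps it alive.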
## Philosophy
Tokens are energy. Text is mass. Taste decides how to spend it. The rest is plumbing.
Read the trilogy:
- The Missing Metabolism — tools need evolution
- Taste Is the Metabolism — taste is the organizing principle
- Everything Is Energy — one equation