SLayer

Agent-native semantic layer that lets AI agents query databases by specifying intent instead of writing SQL; structured queries are compiled into correct, dialect-aware SQL. Dynamic and expressive, with support for multi-stage queries, time shifts, and complex join schemas.


SLayer — AI agent operating a semantic layer

SLayer is a semantic layer that lets AI agents query your database correctly.

If you find SLayer useful, a ⭐ helps others discover it!


How it works

SLayer sits between your database and whatever consumes the data – AI agents, internal tools, dashboards, or scripts. You define your data models (or let SLayer auto-generate them from the schema), and query using a structured API of measures, dimensions, and filters instead of writing SQL directly.

SLayer compiles these queries into the correct SQL for your database, handling joins, aggregations, time-based calculations, and dialect differences so that consumers don't have to.
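
For example, a minimal query (a sketch, using the orders model defined later in this README):

{
  "model": "orders",
  "dimensions": ["status"],
  "fields": [{"formula": "revenue:sum"}]
}

For a Postgres datasource this would compile to roughly SELECT status, SUM(amount) FROM public.orders GROUP BY status; the exact SQL depends on the model definition and the target dialect.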

SLayer is

  1. dynamic: models can be updated at any time and used immediately; aggregations are defined in queries, not models
  2. simple: query structure is intuitive and easily understood by LLMs and humans
  3. expressive: supports queries like "month-on-month % increase in total revenue, compared to the previous year" (see the sketch after this list)
  4. embeddable: can be used as a standalone service or imported as a Python module with no extra server
  5. flexible: exposes MCP, REST API, CLI and Python interfaces; supports most popular databases
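
For instance, the expressive query from point 3 might be written roughly like this (a sketch: the model and column names follow the orders example later in this README, and composing time_shift with change_pct is an assumption; see the Fields section and the docs for the exact formula syntax):

{
  "model": "orders",
  "time_dimensions": [{"dimension": "created_at", "granularity": "month"}],
  "fields": [
    {"formula": "change_pct(revenue:sum)", "name": "mom_revenue_growth_pct"},
    {"formula": "time_shift(change_pct(revenue:sum), -1, 'year')", "name": "mom_growth_prev_year"}
  ]
}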

See also: automatic model ingestion, queries-as-models, auto-applied filters, and more.

Why not just let agents write SQL? Because they get it wrong often enough to matter – see our blog post and dbt's benchmark analysis.

Quickstart

We recommend using uv, especially if you don't work in a Python project.

To run the server:

uvx --from 'motley-slayer[all]' slayer serve

Or to add the MCP server to Claude Code:

claude mcp add slayer -- uvx --from 'motley-slayer[all]' slayer mcp

Then configure a datasource or ask your agent to help you do it.

Read more on getting started with MCP, the CLI, the REST API, or Python in the docs.

Interfaces

REST API

# Query
curl -X POST http://localhost:5143/query \
  -H "Content-Type: application/json" \
  -d '{"model": "orders", "fields": [{"formula": "*:count"}], "dimensions": [{"name": "status"}]}'

# List models (returns name + description)
curl http://localhost:5143/models

# Get a single datasource (credentials masked)
curl http://localhost:5143/datasources/my_postgres

See more in the docs.

MCP Server

SLayer supports two MCP transports: HTTP (served alongside the API) and stdio (serverless, spawned by the agent).

# 1. stdio-based, does not require a running server
claude mcp add slayer -- slayer mcp

# 2. HTTP-based (SSE), provided SLayer server is already running
claude mcp add slayer-remote --transport sse --url http://localhost:5143/mcp/sse

Once a datasource is created, SLayer does not expose its credentials to consumers.

Both transports expose the same tools, allowing agents to inspect, create, and update datasources and models, and to run queries (see Auto-Ingestion below for a conversational example). More info in the docs.

Python Client

Useful for agents working in code-execution environments (e.g. AI data analytics), as well as for any Python app.

from slayer.client.slayer_client import SlayerClient
from slayer.core.query import SlayerQuery, ColumnRef

# Remote mode (connects to running server)
client = SlayerClient(url="http://localhost:5143")

# Or local mode (no server needed)
from slayer.storage.yaml_storage import YAMLStorage
client = SlayerClient(storage=YAMLStorage(base_dir="./my_models"))

# Query data
query = SlayerQuery(
    model="orders",
    fields=[{"formula": "*:count"}, {"formula": "revenue:sum"}],
    dimensions=[ColumnRef(name="status")],
    limit=10,
)
df = client.query_df(query)
print(df)

CLI

# Run a query directly from the terminal
slayer query '{"model": "orders", "fields": [{"formula": "*:count"}], "dimensions": [{"name": "status"}]}'

# Or from a file
slayer query @query.json --format json

These commands do not depend on a running server.

Models

By default, models are defined as YAML files. Add an optional description to help users and agents understand complex models:

name: orders
sql_table: public.orders
data_source: my_postgres
description: "Core orders table with revenue metrics"

dimensions:
  - name: id
    sql: id
    type: number
    primary_key: true
  - name: status
    sql: status
    type: string
  - name: created_at
    sql: created_at
    type: time

measures:
  - name: revenue
    sql: amount
  - name: quantity
    sql: qty

Fields

The fields parameter specifies what data columns to return.

{
  "model": "orders",
  "dimensions": ["status"],
  "time_dimensions": [{"dimension": "created_at", "granularity": "month"}],
  "fields": [
    {"formula": "*:count"},
    {"formula": "revenue:sum"},
    {"formula": "revenue:sum / *:count", "name": "aov", "label": "Average Order Value"},
    {"formula": "cumsum(revenue:sum)"},
    {"formula": "change_pct(revenue:sum)"},
    {"formula": "last(revenue:sum)", "name": "latest_rev"},
    {"formula": "time_shift(revenue:sum, -1, 'year')", "name": "rev_last_year"},
    {"formula": "time_shift(revenue:sum, -2)", "name": "rev_2_periods_ago"},
    {"formula": "lag(revenue:sum, 1)", "name": "rev_prev_row"},
    {"formula": "rank(revenue:sum)"},
    {"formula": "change(cumsum(revenue:sum))", "name": "cumsum_delta"}
  ]
}

Available functions: cumsum, time_shift, change, lag, and more – see docs. Formulas support arbitrary nesting — e.g., change(cumsum(revenue:sum)) or cumsum(revenue:sum) / *:count.

Filters

Filters use simple formula strings — no verbose JSON objects:

{
  "model": "orders",
  "fields": [{"formula": "*:count"}, {"formula": "revenue:sum"}],
  "filters": [
    "status == 'completed'",
    "amount > 100"
  ]
}

Filters support a variety of operators, composition, and pattern matching. Transforms and computed columns can also be used for filtering. See the docs for more.

Auto-Ingestion

Connect to a database and generate models automatically. SLayer introspects the schema, detects foreign key relationships, and creates models with explicit join metadata.

For example, given tables orders → customers → regions (via FKs), the orders model will automatically include:

  • Joined dimensions: customers.name, regions.name, etc. (dotted syntax)
  • Count-distinct measures: customers.*:count_distinct, regions.*:count_distinct
  • Explicit joins — LEFT JOINs are constructed dynamically at query time

# Via CLI
slayer ingest --datasource my_postgres --schema public

# Via API
curl -X POST http://localhost:5143/ingest \
  -d '{"datasource": "my_postgres", "schema_name": "public"}'

Via MCP, agents can do this conversationally:

  1. create_datasource(name="mydb", type="postgres", host="localhost", database="app", username="user", password="pass")
  2. ingest_datasource_models(datasource_name="mydb", schema_name="public")
  3. models_summary(datasource_name="mydb")
  4. inspect_model(model_name="orders")
  5. query(...)
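
Once models are ingested, the joined fields described above can be queried directly. A sketch, assuming the orders → customers → regions chain from the example (the spelling of the joined count-distinct measure inside a query follows the naming above and is an assumption; see the Joins tutorial for details):

{
  "model": "orders",
  "dimensions": [{"name": "regions.name"}],
  "fields": [
    {"formula": "revenue:sum"},
    {"formula": "customers.*:count_distinct"}
  ]
}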

Datasource Setup

By default, datasources are configured as individual YAML files in the datasources/ directory:

# datasources/my_postgres.yaml
name: my_postgres
type: postgres
host: ${DB_HOST}
port: 5432
database: ${DB_NAME}
username: ${DB_USER}
password: ${DB_PASSWORD}

Environment variable references (${VAR}) are resolved at read time.

See more in the docs.

Storage Backends

SLayer ships with two storage backends:

  • YAMLStorage (default) — models and datasources as YAML files on disk. Great for version control.
  • SQLiteStorage — everything in a single SQLite file. Good for embedded use or when you don't want to manage files.

SLayer makes it easy to implement your own storage backend, which is useful for features such as tenant isolation.
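
A minimal sketch of choosing a backend in embedded (Python) mode; the YAMLStorage usage matches the Python Client example above, while the SQLiteStorage import path and constructor arguments are assumptions, so check the storage backends documentation for the exact names:

from slayer.client.slayer_client import SlayerClient
from slayer.storage.yaml_storage import YAMLStorage

# YAML files on disk (default backend, easy to keep under version control)
client = SlayerClient(storage=YAMLStorage(base_dir="./my_models"))

# Everything in one SQLite file; module path and arguments below are assumptions
# from slayer.storage.sqlite_storage import SQLiteStorage
# client = SlayerClient(storage=SQLiteStorage(path="./slayer.db"))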

See the documentation page for storage backends for more.

Roadmap

  1. Dynamic joins
  2. Multi-stage queries
  3. Cross-model measures
  4. Aggregation at query time
  5. Smart output formatting (currency, percentages)
  6. Unpivoting
  7. Auto-propagating filters
  8. Asof joins
  9. Chart generation (eCharts)

Examples

The examples/ directory contains runnable examples that also serve as integration tests:

  • embedded: SQLite, no server needed
  • postgres: Docker Compose with Postgres + REST API
  • mysql: Docker Compose with MySQL + REST API
  • clickhouse: Docker Compose with ClickHouse + REST API

Tutorials

The docs/examples/ directory contains Jupyter notebooks that walk through SLayer's features step by step.

  • SQL vs DSL: How model SQL and query DSL stay cleanly separated
  • Auto-Ingestion: Schema introspection, FK graph discovery, automatic model generation
  • Time Operations: change, change_pct, time_shift, lag, lead, last — composable time transforms
  • Joins: Dot syntax, multi-hop dimensions, diamond join disambiguation
  • Joined Measures: Cross-model measures with sub-query isolation
  • Multistage Queries: Query chaining, queries-as-models, ModelExtension

Claude Code Skills

SLayer includes Claude Code skills in .claude/skills/ to help Claude understand the codebase:

  • slayer-overview — architecture, package structure, MCP tools list
  • slayer-query — how to construct queries with fields, dimensions, filters, time dimensions
  • slayer-models — model definitions, datasource configs, auto-ingestion, incremental editing

Known limitations

SLayer currently has no caching or pre-aggregation engine. If you need to process lots of requests to large databases at sub-second latency, consider adding a caching layer or pre-aggregation engine.
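
As a stopgap, results can be cached on the consumer side. A minimal sketch with the Python client, assuming results may be reused unchanged for the life of the process (the cache helper below is hypothetical, not part of SLayer):

import json
from functools import lru_cache

from slayer.client.slayer_client import SlayerClient
from slayer.core.query import SlayerQuery

client = SlayerClient(url="http://localhost:5143")

@lru_cache(maxsize=128)
def cached_query_df(query_json: str):
    # Cache keyed on the serialized query; no TTL or invalidation, so only
    # suitable for data that does not change while the process runs.
    return client.query_df(SlayerQuery(**json.loads(query_json)))

df = cached_query_df(json.dumps(
    {"model": "orders", "fields": [{"formula": "*:count"}]},
    sort_keys=True,
))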

License

MIT — see LICENSE.
