
Force Fabric MCP Server

Provides live optimization analysis and health checks for Microsoft Fabric items including Lakehouses, Warehouses, Eventhouses, and Semantic Models. It enables users to detect performance bottlenecks, data quality issues, and security vulnerabilities using over 100 automated rules.

Listed on Glama · 9 stars · 1 fork · Updated Mar 12, 2026 · Validated Mar 14, 2026

Force Fabric MCP Server

A Model Context Protocol (MCP) server that provides live optimization analysis for Microsoft Fabric items. It connects to your Fabric tenant via Azure authentication and runs 100+ rules across Lakehouses, Warehouses, Eventhouses, and Semantic Models — detecting real issues with specific table and column names.

Features

🏠 Lakehouse Analysis (29 rules)

  • REST API: SQL Endpoint status, Delta format check, medallion architecture naming
  • SQL Endpoint: Data type analysis, nullable keys, empty tables, wide columns, naming conventions, audit columns, sensitive data
  • OneLake Delta Log: VACUUM/OPTIMIZE history, auto-optimize settings, retention policies, file size analysis, write amplification, Z-Order, data skipping, partitioning
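The file-size analysis above boils down to simple arithmetic over the Delta transaction log. A minimal sketch of the idea behind the small-file check (rule LH-019, which flags tables where more than half of the files are under 25 MB) — `DeltaAddAction`, `smallFileRatio`, and `hasSmallFileProblem` are illustrative names, not the server's actual API:

```typescript
// Illustrative small-file check in the spirit of LH-019. The real server
// parses `add` actions out of the OneLake Delta log; here we assume the
// file sizes have already been extracted.
interface DeltaAddAction {
  path: string;
  size: number; // bytes, as recorded in the Delta transaction log
}

const SMALL_FILE_BYTES = 25 * 1024 * 1024; // 25 MB threshold from LH-019

function smallFileRatio(adds: DeltaAddAction[]): number {
  if (adds.length === 0) return 0;
  const small = adds.filter((a) => a.size < SMALL_FILE_BYTES).length;
  return small / adds.length;
}

function hasSmallFileProblem(adds: DeltaAddAction[]): boolean {
  // LH-019 fails when more than half of a table's files are under 25 MB
  return smallFileRatio(adds) > 0.5;
}
```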

🏗️ Warehouse Analysis (39 rules)

  • Schema: Primary keys, deprecated types, float precision, column/table naming, wide tables, foreign keys, circular FKs
  • Data Quality: Nullable keys, empty tables, mixed date types, missing defaults, sensitive/PII columns
  • Query Performance: Top slow queries, frequent queries, failed queries, volume trends, average duration
  • Security: Data masking, RLS, privilege audit
  • Database Config: AUTO_UPDATE_STATISTICS, result set caching, ANSI settings, snapshot isolation, page verify

📊 Eventhouse Analysis (13 rules per KQL database)

  • Storage: Extent fragmentation, compression ratios, storage by table
  • Policies: Caching, retention, merge, encoding, row order, partitioning, ingestion batching, streaming
  • Health: Materialized views, data freshness, continuous exports, ingestion failures
  • Queries: Performance summary (P95/avg/max), slow queries, failed commands

📐 Semantic Model Analysis (32 rules)

  • DAX Expression Checks (via MDSCHEMA_MEASURES DMV): IFERROR, DIVIDE vs /, EVALUATEANDLOG, INTERSECT, duplicates, FILTER patterns, nested CALCULATE, SUMX, 1-(x/y), format strings
  • Model Structure (via MDSCHEMA DMVs): Table count, date table, measure documentation, naming
  • COLUMNSTATISTICS BPA (Import models): High-cardinality text, GUIDs, constants, boolean/date/number as text, string keys, wide tables, timestamps
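The DAX checks are essentially pattern matching over measure expressions. A standalone sketch in the spirit of SM-001 (avoid IFERROR) and SM-002 (prefer DIVIDE over `/`) — the real server reads expressions from the MDSCHEMA_MEASURES DMV, and the naive `/` heuristic here is purely illustrative:

```typescript
// Illustrative DAX expression linting, loosely modeled on SM-001/SM-002.
interface DaxFinding {
  rule: string;
  message: string;
}

function checkDaxExpression(expr: string): DaxFinding[] {
  const findings: DaxFinding[] = [];
  if (/\bIFERROR\s*\(/i.test(expr)) {
    findings.push({ rule: "SM-001", message: "Avoid IFERROR; handle errors explicitly" });
  }
  // Naive division check: a "/" with no DIVIDE(...) anywhere in the
  // expression (a real rule would parse rather than pattern-match).
  if (expr.includes("/") && !/\bDIVIDE\s*\(/i.test(expr)) {
    findings.push({ rule: "SM-002", message: "Use DIVIDE() instead of the / operator" });
  }
  return findings;
}
```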

Setup

Prerequisites

  • Node.js 18+
  • Azure CLI (az login) or another Azure authentication method
  • Fabric capacity with items (Lakehouse, Warehouse, Eventhouse, or Semantic Model)

Install

```shell
git clone https://github.com/tmdaidevs/Force-Fabric-MCP-Server.git
cd Force-Fabric-MCP-Server
npm install
npm run build
```

Configure in VS Code

Add to your .vscode/mcp.json:

```json
{
  "servers": {
    "fabric-optimization": {
      "type": "stdio",
      "command": "node",
      "args": ["dist/index.js"],
      "cwd": "${workspaceFolder}"
    }
  }
}
```

Or add to your global VS Code settings (settings.json):

```json
{
  "mcp": {
    "servers": {
      "fabric-optimization": {
        "type": "stdio",
        "command": "node",
        "args": ["/absolute/path/to/Force-Fabric-MCP-Server/dist/index.js"]
      }
    }
  }
}
```

Usage

1. Authenticate

Use auth_login with method "azure_cli"

Available methods: azure_cli, interactive_browser, device_code, vscode, default, service_principal

2. List items

List all lakehouses in workspace <workspace-id>
List all warehouses in workspace <workspace-id>
List all eventhouses in workspace <workspace-id>
List all semantic models in workspace <workspace-id>

3. Run optimization scan

Run lakehouse optimization recommendations for <lakehouse-id> in workspace <workspace-id>
Run warehouse optimization recommendations for <warehouse-id> in workspace <workspace-id>
Run eventhouse optimization recommendations for <eventhouse-id> in workspace <workspace-id>
Run semantic model optimization recommendations for <model-id> in workspace <workspace-id>
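Under the hood, each of these prompts resolves to an MCP `tools/call` request. A sketch of the JSON-RPC payload for a lakehouse scan — the argument names (`workspaceId`, `lakehouseId`) are assumptions; inspect the tool's input schema via `tools/list` for the authoritative shape:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "lakehouse_optimization_recommendations",
    "arguments": {
      "workspaceId": "<workspace-id>",
      "lakehouseId": "<lakehouse-id>"
    }
  }
}
```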

Output Format

Every scan returns a unified rule results table:

15 rules — ✅ 9 passed | 🔴 1 failed | 🟡 5 warning

| Rule | Status | Finding | Recommendation |
|------|--------|---------|----------------|
| LH-007 Key Columns Are NOT NULL | 🔴 | 16 key column(s) allow NULL: ... | Add NOT NULL constraints... |
| LH-004 Table Maintenance | 🟡 | 4 Delta tables need OPTIMIZE... | Run lakehouse_run_table_maintenance... |

Only issues (FAIL/WARN) are shown in the table. Passed rules are counted in the summary.
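The summary line above is a straightforward fold over per-rule results. A minimal sketch of how it could be assembled — `RuleResult` echoes the shared type mentioned under Architecture (`src/tools/ruleEngine.ts`), but the field names here are guesses, not the server's actual definitions:

```typescript
// Illustrative rule-result type and summary renderer (field names assumed).
type RuleStatus = "PASS" | "FAIL" | "WARN";

interface RuleResult {
  rule: string;
  status: RuleStatus;
  finding?: string;
  recommendation?: string;
}

function summarize(results: RuleResult[]): string {
  const passed = results.filter((r) => r.status === "PASS").length;
  const failed = results.filter((r) => r.status === "FAIL").length;
  const warned = results.filter((r) => r.status === "WARN").length;
  return `${results.length} rules — ✅ ${passed} passed | 🔴 ${failed} failed | 🟡 ${warned} warning`;
}
```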

Available Tools

| Tool | Description |
|------|-------------|
| `auth_login` | Authenticate to Fabric |
| `auth_status` | Check authentication status |
| `auth_logout` | Disconnect |
| `workspace_list` | List all workspaces |
| `lakehouse_list` | List lakehouses in a workspace |
| `lakehouse_list_tables` | List tables in a lakehouse |
| `lakehouse_run_table_maintenance` | Run OPTIMIZE/VACUUM on tables |
| `lakehouse_get_job_status` | Check maintenance job status |
| `lakehouse_optimization_recommendations` | Full scan with 29 rules |
| `warehouse_list` | List warehouses in a workspace |
| `warehouse_optimization_recommendations` | Full scan with 39 rules |
| `warehouse_analyze_query_patterns` | Focused query performance analysis |
| `eventhouse_list` | List eventhouses in a workspace |
| `eventhouse_list_kql_databases` | List KQL databases |
| `eventhouse_optimization_recommendations` | Full scan with 13+ rules per KQL DB |
| `semantic_model_list` | List semantic models |
| `semantic_model_optimization_recommendations` | Full scan with 32 rules |

Complete Rule Reference

🏠 Lakehouse Rules (29)

| Rule | Category | Severity | Description |
|------|----------|----------|-------------|
| LH-001 | Availability | HIGH | SQL Endpoint is active and provisioned |
| LH-002 | Maintainability | LOW | Lakehouse follows medallion naming (bronze/silver/gold) |
| LH-003 | Performance | HIGH | All tables use Delta format |
| LH-004 | Performance | MEDIUM | Delta tables have regular OPTIMIZE + VACUUM |
| LH-005 | Data Quality | MEDIUM | No empty tables |
| LH-006 | Performance | MEDIUM | No over-provisioned string columns (>500 chars) |
| LH-007 | Data Quality | HIGH | Key/ID columns are NOT NULL |
| LH-008 | Data Quality | MEDIUM | No float/real precision issues |
| LH-009 | Maintainability | LOW | Column naming convention (no spaces/special chars) |
| LH-010 | Data Quality | MEDIUM | Date columns use proper DATE/DATETIME2 types |
| LH-011 | Data Quality | MEDIUM | Numeric columns use proper numeric types |
| LH-012 | Maintainability | LOW | No excessively wide tables (>30 columns) |
| LH-013 | Data Quality | MEDIUM | Schema has NOT NULL constraints (not >90% nullable) |
| LH-014 | Maintainability | LOW | Tables have audit columns (created_at/updated_at) |
| LH-015 | Data Quality | LOW | Consistent date types per table |
| LH-S01 | Security | HIGH | No unprotected sensitive/PII columns |
| LH-S02 | Performance | INFO | Large tables (>1M rows) identified |
| LH-S03 | Maintainability | HIGH | No deprecated data types (TEXT/NTEXT/IMAGE) |
| LH-S04 | Data Quality | MEDIUM | All tables have key columns |
| LH-016 | Performance | MEDIUM | Large tables (>10GB) are partitioned |
| LH-017 | Maintenance | MEDIUM | Regular VACUUM executed (within 7 days) |
| LH-018 | Performance | MEDIUM | Regular OPTIMIZE executed |
| LH-019 | Performance | HIGH | No small file problem (>50% files <25MB) |
| LH-020 | Performance | MEDIUM | Auto-optimize enabled |
| LH-021 | Maintenance | LOW | Retention policy configured |
| LH-022 | Performance | LOW | Delta log version count reasonable (<100) |
| LH-023 | Performance | MEDIUM | Low write amplification (MERGE/UPDATE/DELETE ratio) |
| LH-024 | Performance | LOW | Data skipping configured |
| LH-025 | Performance | MEDIUM | Z-Order applied on large tables (>10GB) |

🏗️ Warehouse Rules (39)

| Rule | Category | Severity | Description |
|------|----------|----------|-------------|
| WH-001 | Data Quality | HIGH | Primary keys defined (NOT ENFORCED) |
| WH-002 | Maintainability | HIGH | No deprecated data types (TEXT/NTEXT/IMAGE) |
| WH-003 | Data Quality | MEDIUM | No float/real precision issues |
| WH-004 | Performance | MEDIUM | No over-provisioned columns (>500 chars) |
| WH-005 | Maintainability | LOW | Column naming convention |
| WH-006 | Maintainability | LOW | Table naming convention |
| WH-007 | Maintainability | LOW | No SELECT * in views |
| WH-008 | Performance | MEDIUM | Statistics are fresh (<30 days) |
| WH-009 | Data Quality | MEDIUM | No disabled/untrusted constraints |
| WH-010 | Data Quality | HIGH | Key columns are NOT NULL |
| WH-011 | Maintainability | MEDIUM | No empty tables |
| WH-012 | Maintainability | MEDIUM | No excessively wide tables (>50 columns) |
| WH-013 | Data Quality | LOW | Consistent date types per table |
| WH-014 | Maintainability | MEDIUM | Foreign keys defined |
| WH-015 | Performance | MEDIUM | No large BLOB/MAX columns |
| WH-016 | Maintainability | LOW | Tables have audit columns |
| WH-017 | Data Quality | HIGH | No circular foreign keys |
| WH-018 | Security | HIGH | Sensitive data protected (PII masking) |
| WH-019 | Security | MEDIUM | Row-Level Security defined |
| WH-020 | Security | MEDIUM | Minimal db_owner privileges |
| WH-021 | Maintainability | LOW | No over-complex views (>10 dependencies) |
| WH-022 | Maintainability | LOW | Minimal cross-schema dependencies |
| WH-023 | Performance | HIGH | No very slow queries (>60s) |
| WH-024 | Performance | HIGH | No frequently slow queries (>10x and >10s avg) |
| WH-025 | Reliability | MEDIUM | No recent query failures |
| WH-026 | Performance | HIGH | AUTO_UPDATE_STATISTICS enabled |
| WH-027 | Performance | MEDIUM | Result set caching enabled |
| WH-028 | Concurrency | MEDIUM | Snapshot isolation enabled |
| WH-029 | Reliability | MEDIUM | Page verify CHECKSUM |
| WH-030 | Standards | LOW | ANSI settings correct |
| WH-031 | Availability | HIGH | Database ONLINE |
| WH-032 | Performance | MEDIUM | All tables have statistics |
| WH-033 | Performance | MEDIUM | Optimal data types |
| WH-034 | Maintainability | LOW | No near-empty tables (<10 rows) |
| WH-035 | Maintainability | LOW | Stored procedures documented |
| WH-036 | Data Quality | MEDIUM | NOT NULL columns have defaults |
| WH-037 | Maintainability | LOW | Consistent string types (varchar/nvarchar) |
| WH-038 | Maintainability | LOW | Schemas are documented |
| WH-039 | Performance | MEDIUM | Query performance healthy (avg <5s) |

📊 Eventhouse Rules (13 per KQL Database)

| Rule | Category | Severity | Description |
|------|----------|----------|-------------|
| EH-001 | Availability | HIGH | Query endpoint available |
| EH-002 | Performance | HIGH | No extent fragmentation |
| EH-003 | Performance | MEDIUM | Good compression ratio (>40%) |
| EH-004 | Performance | MEDIUM | Caching policy configured |
| EH-005 | Data Management | MEDIUM | Retention policy configured |
| EH-006 | Reliability | HIGH | Materialized views healthy |
| EH-007 | Data Quality | MEDIUM | Data is fresh (<7 days) |
| EH-008 | Performance | HIGH | No slow query patterns (>30s avg) |
| EH-009 | Reliability | MEDIUM | No recent failed commands |
| EH-010 | Reliability | HIGH | No ingestion failures |
| EH-011 | Performance | INFO | Streaming ingestion config |
| EH-012 | Reliability | MEDIUM | Continuous exports healthy |
| EH-013 | Performance | MEDIUM | Hot cache coverage (>50% hot) |

📐 Semantic Model Rules (32)

| Rule | Category | Severity | Description |
|------|----------|----------|-------------|
| SM-001 | DAX | MEDIUM | Avoid IFERROR function |
| SM-002 | DAX | MEDIUM | Use DIVIDE function instead of / |
| SM-003 | DAX | HIGH | No EVALUATEANDLOG in production |
| SM-004 | DAX | MEDIUM | Use TREATAS not INTERSECT |
| SM-005 | DAX | LOW | No duplicate measure definitions |
| SM-006 | DAX | MEDIUM | Filter by columns not tables |
| SM-007 | DAX | LOW | Avoid adding 0 to measures |
| SM-008 | Maintenance | LOW | Measures have documentation |
| SM-009 | Maintenance | HIGH | Model has tables |
| SM-010 | Performance | MEDIUM | Model has date table |
| SM-011 | DAX | MEDIUM | Avoid 1-(x/y) syntax |
| SM-012 | DAX | LOW | No direct measure references |
| SM-013 | DAX | MEDIUM | Avoid nested CALCULATE |
| SM-014 | DAX | LOW | Use SUM instead of SUMX for simple aggregation |
| SM-015 | Formatting | LOW | Measures have format string |
| SM-016 | DAX | MEDIUM | Avoid FILTER(ALL(...)) |
| SM-017 | Formatting | LOW | Measure naming convention |
| SM-018 | Performance | LOW | Reasonable table count (<20) |
| SM-B01 | Data Types | HIGH | No high cardinality text columns |
| SM-B02 | Data Types | HIGH | No description/comment columns |
| SM-B03 | Data Types | HIGH | No GUID/UUID columns in model |
| SM-B04 | Data Types | MEDIUM | No constant columns (cardinality=1) |
| SM-B05 | Data Types | MEDIUM | No booleans stored as text |
| SM-B06 | Data Types | MEDIUM | No dates stored as text |
| SM-B07 | Data Types | MEDIUM | No numbers stored as text |
| SM-B08 | Data Types | MEDIUM | Integer keys instead of string keys |
| SM-B09 | Data Types | MEDIUM | No excessively wide tables |
| SM-B10 | Data Types | HIGH | No extremely wide tables (>100 cols) |
| SM-B11 | Data Types | HIGH | No multiple high-cardinality columns |
| SM-B12 | Data Types | LOW | No single column tables |
| SM-B13 | Data Types | MEDIUM | No high-precision timestamps |
| SM-B14 | Data Types | LOW | No low cardinality columns in fact tables |

Architecture

```
src/
├── index.ts                 # MCP server entry point
├── auth/
│   └── fabricAuth.ts        # Azure authentication (CLI, browser, device code, SP)
├── clients/
│   ├── fabricClient.ts      # Fabric REST API + DAX executeQueries
│   ├── sqlClient.ts         # SQL endpoint via tedious (Lakehouse + Warehouse)
│   ├── kqlClient.ts         # KQL/Kusto REST API (Eventhouse)
│   ├── onelakeClient.ts     # OneLake ADLS Gen2 + Delta Log parser
│   └── xmlaClient.ts        # XMLA SOAP client (experimental)
└── tools/
    ├── ruleEngine.ts        # Shared RuleResult type + unified renderer
    ├── auth.ts              # Auth tools
    ├── workspace.ts         # Workspace tools
    ├── lakehouse.ts         # 29 rules (REST + SQL + Delta Log)
    ├── warehouse.ts         # 39 rules (SQL)
    ├── eventhouse.ts        # 13 rules per KQL DB (KQL)
    └── semanticModel.ts     # 32 rules (DAX + DMV + COLUMNSTATISTICS)
```

Authentication Methods

| Method | Use Case |
|--------|----------|
| `azure_cli` | Development - uses your az login session |
| `interactive_browser` | Opens browser for interactive login |
| `device_code` | Headless/remote environments |
| `vscode` | Uses VS Code Azure account |
| `service_principal` | CI/CD and automation (requires tenantId, clientId, clientSecret) |
| `default` | Auto-detect (tries CLI, managed identity, env vars, VS Code) |
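For CI/CD, a `service_principal` login carries the three credentials named above. A sketch of the `auth_login` tool-call arguments — the exact argument shape is an assumption, so check the tool's input schema before relying on it:

```json
{
  "name": "auth_login",
  "arguments": {
    "method": "service_principal",
    "tenantId": "<tenant-id>",
    "clientId": "<client-id>",
    "clientSecret": "<client-secret>"
  }
}
```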

License

MIT
