
omni-nli

An MCP server for natural language inference

Forks: 1 · Updated: Jan 29, 2026

Quick Install

uvx omni-nli

Omni-NLI


A multi-interface (REST and MCP) server for natural language inference


Omni-NLI is a self-hostable server that provides natural language inference (NLI) capabilities through both RESTful and Model Context Protocol (MCP) interfaces. It can run as a stateless, scalable standalone microservice, or as an MCP server that gives AI agents a verification layer for AI-based applications such as chatbots and virtual assistants.

Architecture Diagram

What is NLI?

Given two pieces of text, a premise and a hypothesis, NLI is the task of determining the logical relationship between them as a human reader would judge it. The relationship is typically expressed as one of three labels:

  • "entailment": the hypothesis is supported by the premise
  • "contradiction": the hypothesis is contradicted by the premise
  • "neutral": the hypothesis is neither supported nor contradicted by the premise

NLI is useful in many applications, such as fact-checking the output of large language models (LLMs) or verifying the answers produced by a question-answering system.
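To make the three labels concrete, here is a small illustrative sketch. The premise/hypothesis pairs and labels below are hypothetical examples for exposition, not output from Omni-NLI:

```python
# Hypothetical premise/hypothesis pairs and the label an NLI model
# would be expected to assign to each.
PREMISE = "A football player kicks a ball into the goal."

examples = [
    ("Someone is playing football.", "entailment"),           # supported by the premise
    ("The player is asleep on the field.", "contradiction"),  # contradicted by the premise
    ("The match is being televised.", "neutral"),             # neither supported nor contradicted
]

for hypothesis, label in examples:
    print(f"{label:13s} | {hypothesis}")
```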

[!IMPORTANT] The quality of the results depends heavily on the model (the LLM) being used. A good strategy is to first fine-tune the model on a dataset of premise-hypothesis-label triples relevant to your application domain.

Main Features of Omni-NLI

  • Supports models provided by different backends, including Ollama, HuggingFace (public and private/gated models), and OpenRouter
  • Supports REST API (for traditional applications) and MCP (for AI agents) interfaces
  • Fully configurable and highly scalable, with built-in caching
  • Provides confidence scores and (optional) reasoning traces for explainability

See ROADMAP.md for the list of implemented and planned features.

[!IMPORTANT] Omni-NLI is in early development, so bugs and breaking changes are expected. Please use the issues page to report bugs or request features.


Quickstart

1. Installation

pip install omni-nli

2. Start the Server

omni-nli

3. Evaluate NLI (with REST API)

curl -X POST \
  -H "Content-Type: application/json" \
  -d '{
    "premise": "A football player kicks a ball into the goal.",
    "hypothesis": "The football player is asleep on the field."
  }' \
  http://127.0.0.1:8000/api/v1/nli/evaluate

Example response:

{
    "label": "contradiction",
    "confidence": 0.99,
    "model": "microsoft/Phi-3.5-mini-instruct",
    "backend": "huggingface"
}
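The same request can be made from Python's standard library. This is a minimal sketch assuming the server is running locally on the default port shown above; `evaluate_nli` is a helper name introduced here for illustration, not part of the package:

```python
import json
from urllib import request

# Default local endpoint from the Quickstart above.
API_URL = "http://127.0.0.1:8000/api/v1/nli/evaluate"

def evaluate_nli(premise: str, hypothesis: str, url: str = API_URL) -> dict:
    """POST a premise/hypothesis pair to the REST endpoint and return the parsed JSON response."""
    payload = json.dumps({"premise": premise, "hypothesis": hypothesis}).encode("utf-8")
    req = request.Request(url, data=payload, headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.load(resp)

# Example (requires a running server):
# result = evaluate_nli(
#     "A football player kicks a ball into the goal.",
#     "The football player is asleep on the field.",
# )
# result["label"] would then be "entailment", "contradiction", or "neutral".
```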

4. Evaluate NLI (with MCP Interface)

(Screenshot: evaluating NLI through the MCP interface in LM Studio)
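For MCP clients that launch servers themselves, a configuration along these lines is typical. This is a hypothetical sketch based on the `uvx omni-nli` install command above; the exact command, transport, and config file location depend on your client and on how Omni-NLI exposes its MCP interface, so check the documentation:

```json
{
  "mcpServers": {
    "omni-nli": {
      "command": "uvx",
      "args": ["omni-nli"]
    }
  }
}
```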


Documentation

Check out the Omni-NLI Documentation for more information, including configuration options, API reference, and examples.


Contributing

Contributions are always welcome! Please see CONTRIBUTING.md for details on how to get started.

License

Omni-NLI is licensed under the MIT License (see LICENSE).

Acknowledgements

  • The logo is from SVG Repo with some modifications.
