The security co-pilot for AI agent development.
Build secure AI agents from the start. Scan for logic bugs, prompt injection, missing guardrails, and compliance gaps — before they reach production.
English · 简体中文 · 日本語 · 한국어 · Español · Português · Deutsch · Français
AI agents can loop forever, drain your API budget in minutes, execute arbitrary code from user input, or make high-stakes decisions with zero human oversight. Most of these flaws pass code review because they look like normal code — the danger is in the runtime behavior.
Inkog scans your agent code statically and catches these problems before deployment. One command, works across 20+ frameworks, maps findings to EU AI Act and OWASP LLM Top 10.
When to Use Inkog
- Building an AI agent — Scan during development to catch infinite loops, prompt injection, and missing guardrails before they ship
- Adding security to CI/CD — Add `inkog-io/inkog@v1` to GitHub Actions for automated security gates on every PR
- Preparing for EU AI Act — Generate compliance reports mapping your agent to Article 14, NIST AI RMF, and the OWASP LLM Top 10
- Reviewing agent code — Use from Claude Code, Cursor, or any MCP client to get security analysis while you code
- Auditing MCP servers — Check any MCP server for tool poisoning, privilege escalation, or data exfiltration before installing
- Verifying AGENTS.md — Validate that governance declarations match actual code behavior
- Building multi-agent systems — Detect delegation loops, privilege escalation, and unauthorized handoffs between agents
Quick Start
No install needed:
```shell
npx -y @inkog-io/cli scan .
```
Or install permanently:
| Method | Command |
|---|---|
| Install script | `curl -fsSL https://inkog.io/install.sh \| sh` |
| Homebrew | `brew tap inkog-io/inkog && brew install inkog` |
| Go | `go install github.com/inkog-io/inkog/cmd/inkog@latest` |
| Binary | Download from Releases |
```shell
# Get your free API key at https://app.inkog.io
export INKOG_API_KEY=sk_live_...
inkog .
```
What It Catches
| Category | Examples | Why it matters |
|---|---|---|
| Infinite loops | Agent re-calls itself with no exit condition, LLM output fed back as input without a cap | Your agent runs forever and racks up API costs |
| Prompt injection | User input flows into system prompt unsanitized, tainted data reaches tool calls | Attackers can hijack your agent's behavior |
| Missing guardrails | No human-in-the-loop for destructive actions, no rate limits on LLM calls, unconstrained tool access | One bad decision and your agent goes rogue |
| Hardcoded secrets | API keys, tokens, and passwords in source code (detected locally, never uploaded) | Credentials leak when you push to GitHub |
| Compliance gaps | Missing human oversight (EU AI Act Article 14), no audit logging, missing authorization checks | You're legally required to have these controls by August 2026 |
Supported Frameworks
Code-first: LangChain · LangGraph · CrewAI · AutoGen · OpenAI Agents · Semantic Kernel · Azure AI Foundry · LlamaIndex · Haystack · DSPy · Phidata · Smolagents · PydanticAI · Google ADK
No-code: n8n · Flowise · Langflow · Dify · Microsoft Copilot Studio · Salesforce Agentforce
GitHub Actions
```yaml
- uses: inkog-io/inkog@v1
  with:
    api-key: ${{ secrets.INKOG_API_KEY }}
    sarif-upload: true # Shows findings in GitHub Security tab
```
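The step above slots into an ordinary workflow file. A minimal sketch, assuming a PR trigger and the standard `security-events: write` permission that GitHub requires for SARIF upload to the Security tab (the file name, job name, and trigger are illustrative, not Inkog requirements):

```yaml
# .github/workflows/inkog.yml — hypothetical file name
name: Inkog security scan
on: [pull_request]

jobs:
  scan:
    runs-on: ubuntu-latest
    permissions:
      security-events: write  # required for SARIF upload to the Security tab
    steps:
      - uses: actions/checkout@v4
      - uses: inkog-io/inkog@v1
        with:
          api-key: ${{ secrets.INKOG_API_KEY }}
          sarif-upload: true
```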
Scan policies
```shell
inkog . --policy low-noise      # Only proven vulnerabilities
inkog . --policy balanced       # Vulnerabilities + risk patterns (default)
inkog . --policy comprehensive  # Everything including hardening tips
inkog . --policy governance     # Article 14 controls, authorization, audit trails
inkog . --policy eu-ai-act      # EU AI Act compliance report
```
MCP Server
Scan agent code directly from Claude, ChatGPT, or Cursor:
```shell
npx -y @inkog-io/mcp
```
7 tools including MCP server auditing and multi-agent topology analysis. MCP docs →
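For clients that register MCP servers through a JSON config file (Claude Desktop's `mcpServers` format shown here; the `"inkog"` key name is arbitrary, and whether the server reads `INKOG_API_KEY` from its environment is an assumption based on the CLI setup above):

```json
{
  "mcpServers": {
    "inkog": {
      "command": "npx",
      "args": ["-y", "@inkog-io/mcp"],
      "env": { "INKOG_API_KEY": "sk_live_..." }
    }
  }
}
```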
Community
- Documentation — CLI reference, detection patterns, integrations
- Slack — Questions, feedback, feature requests
- Issues — Bug reports and feature requests
- Contributing — We welcome PRs
Star History
License
Apache 2.0 — See LICENSE