# Nettune

A network diagnostics and TCP optimization tool with MCP (Model Context Protocol) integration for AI-assisted configuration.

## Features

- **End-to-end Network Testing**: RTT measurement, throughput testing, and latency-under-load detection
- **Configuration Profiles**: Pre-built profiles for BBR, FQ, and buffer tuning
- **Safe Apply/Rollback**: Automatic snapshots before configuration changes, with rollback capability
- **MCP Integration**: Works with Claude and other LLM chat interfaces via the MCP stdio protocol
## Quick Start

### Server Mode

Run on the Linux server you want to optimize:

```bash
# Start the server with an API key
sudo nettune server --api-key YOUR_SECRET_KEY

# With custom options
sudo nettune server --api-key YOUR_SECRET_KEY --listen 0.0.0.0:9876 --state-dir /var/lib/nettune
```
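The API key is an arbitrary shared secret that you choose. One convenient way to generate a strong value (illustrative; any sufficiently random string works):

```bash
# Generate a random 64-character hex string to use as the API key
openssl rand -hex 32
```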
### Client Mode (MCP)

Add the following to your MCP client configuration (e.g., Claude Desktop):

```json
{
  "mcpServers": {
    "nettune": {
      "command": "nettune",
      "args": ["client", "--api-key", "YOUR_SECRET_KEY", "--server", "http://YOUR_SERVER:9876"]
    }
  }
}
```
Or use the NPM wrapper:

```json
{
  "mcpServers": {
    "nettune": {
      "command": "npx",
      "args": ["@jtsang/nettune-mcp", "--api-key", "YOUR_SECRET_KEY", "--server", "http://YOUR_SERVER:9876"]
    }
  }
}
```
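To sanity-check the MCP wiring outside a chat GUI, you can point the MCP Inspector at the client. The invocation below is a sketch: it assumes the `nettune` binary is on your PATH and the server is already running and reachable.

```bash
# Launch the MCP Inspector against the nettune MCP client (opens a local web UI)
npx @modelcontextprotocol/inspector nettune client --api-key YOUR_SECRET_KEY --server http://YOUR_SERVER:9876
```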
## Available MCP Tools

| Tool | Description |
|---|---|
| `nettune.test_rtt` | Measure RTT (round-trip time) to the server |
| `nettune.test_throughput` | Measure upload/download throughput |
| `nettune.test_latency_under_load` | Detect bufferbloat by measuring latency during load |
| `nettune.snapshot_server` | Create a configuration snapshot for rollback |
| `nettune.list_profiles` | List available optimization profiles |
| `nettune.show_profile` | Show details of a specific profile |
| `nettune.create_profile` | Create a custom optimization profile |
| `nettune.apply_profile` | Apply a profile (dry_run or commit mode) |
| `nettune.rollback` | Roll back to a previous snapshot |
| `nettune.status` | Get current server status and configuration |
## Built-in Profiles

### bbr-fq-default
Conservative BBR + FQ setup. Safe for most servers.
- Enables BBR congestion control
- Sets FQ as default qdisc
- Enables MTU probing
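For reference, the settings listed above correspond roughly to the sysctls below. This is an illustrative sketch rather than the profile's exact definition; Nettune snapshots and applies these settings for you.

```bash
# Illustrative equivalents of the bbr-fq-default settings (exact profile values may differ)
sudo sysctl -w net.ipv4.tcp_congestion_control=bbr   # enable BBR congestion control
sudo sysctl -w net.core.default_qdisc=fq             # set FQ as the default qdisc
sudo sysctl -w net.ipv4.tcp_mtu_probing=1            # enable MTU probing
```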
### bbr-fq-tuned-32mb
BBR with increased buffers for high-BDP (bandwidth-delay product) links.
- All settings from bbr-fq-default
- 32MB socket buffers (rmem_max, wmem_max)
- Optimized tcp_rmem/tcp_wmem
- Disables slow start after idle
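The buffer-related additions map roughly to the sysctls below. The `tcp_rmem`/`tcp_wmem` triples shown are assumptions for illustration; the profile documents only that they are "optimized".

```bash
# Illustrative 32MB buffer settings (assumed values; the profile's exact triples may differ)
sudo sysctl -w net.core.rmem_max=33554432              # 32MB max receive buffer
sudo sysctl -w net.core.wmem_max=33554432              # 32MB max send buffer
sudo sysctl -w net.ipv4.tcp_rmem="4096 131072 33554432"
sudo sysctl -w net.ipv4.tcp_wmem="4096 65536 33554432"
sudo sysctl -w net.ipv4.tcp_slow_start_after_idle=0    # keep the congestion window across idle periods
```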
### low-latency
Optimized for minimal latency scenarios (gaming, trading, interactive apps).
- Uses FQ_CoDel qdisc for active queue management
- Smaller buffers (4MB) to reduce queuing delay
- Enables TCP Fast Open
- Disables slow start after idle
### high-throughput-64mb
Aggressive tuning for maximum throughput on high-bandwidth links.
- 64MB socket buffers for long-distance transfers
- Higher default buffer sizes
- Increased backlog and connection limits
- Optimized for intercontinental data transfers
### satellite-link
Optimized for satellite or high-latency links (300ms+ RTT).
- 128MB socket buffers for extreme BDP scenarios
- Aggressive buffer scaling
- Disables metrics saving for consistent performance
- Suitable for very high bandwidth × latency products
### anti-bufferbloat
Uses CAKE qdisc for advanced Active Queue Management.
- Eliminates bufferbloat while maintaining throughput
- Enables ECN (Explicit Congestion Notification)
- Ideal when latency-under-load tests show bufferbloat
### web-server
Balanced configuration for web servers.
- Optimized for many concurrent connections
- TCP Fast Open enabled
- Tuned keepalive settings
- High connection backlog (65535 somaxconn)
### database-server
Optimized for database servers with persistent connections.
- Balanced latency/throughput for mixed workloads
- Tuned keepalive for connection pooling
- SYN cookies enabled for security
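Whichever profile you choose, the kernel settings these profiles touch can be inspected before and after applying with a read-only query:

```bash
# Read current values of the settings the profiles above adjust (makes no changes)
sysctl net.ipv4.tcp_congestion_control net.core.default_qdisc \
       net.core.rmem_max net.core.wmem_max \
       net.ipv4.tcp_rmem net.ipv4.tcp_wmem
```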
## Building from Source

```bash
# Clone the repository
git clone https://github.com/jtsang4/nettune.git
cd nettune

# Install dependencies
go mod tidy

# Build
make build

# Build for all platforms
make build-all
```
## CLI Reference

### Server Command

```
nettune server [flags]

Flags:
  --api-key string        API key for authentication (required)
  --listen string         Address to listen on (default "0.0.0.0:9876")
  --state-dir string      Directory for state storage
  --read-timeout int      HTTP read timeout in seconds (default 30)
  --write-timeout int     HTTP write timeout in seconds (default 60)
```
### Client Command

```
nettune client [flags]

Flags:
  --api-key string    API key for authentication (required)
  --server string     Server URL (default "http://127.0.0.1:9876")
  --timeout int       Request timeout in seconds (default 60)
```
## HTTP API Endpoints

### Probe Endpoints

- `GET /probe/echo` - Latency test endpoint
- `GET /probe/download?bytes=N` - Download test
- `POST /probe/upload` - Upload test
- `GET /probe/info` - Server information
### Profile Endpoints

- `GET /profiles` - List profiles
- `POST /profiles` - Create a new profile
- `GET /profiles/:id` - Get profile details
### System Endpoints

- `POST /sys/snapshot` - Create snapshot
- `GET /sys/snapshot/:id` - Get snapshot
- `POST /sys/apply` - Apply profile
- `POST /sys/rollback` - Rollback to snapshot
- `GET /sys/status` - Get system status
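For quick manual checks, the endpoints can be exercised with curl. The authentication header below is an assumption (the exact header the server expects for the API key is not documented here), so adjust it as needed:

```bash
# Assumed auth header; replace with whatever header the server actually expects
AUTH='Authorization: Bearer YOUR_SECRET_KEY'

curl -H "$AUTH" http://YOUR_SERVER:9876/probe/info                                     # server information
curl -H "$AUTH" -o /dev/null "http://YOUR_SERVER:9876/probe/download?bytes=10485760"   # 10MB download test
curl -H "$AUTH" http://YOUR_SERVER:9876/sys/status                                     # current system status
```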
## System Prompt for LLM-Assisted Optimization
When using Nettune with an LLM chat interface, you can add the following system prompt to enable automated network optimization. This prompt guides the LLM through a structured workflow to diagnose network issues and apply appropriate optimizations.
### Recommended System Prompt

```
You are a network optimization assistant with access to Nettune MCP tools. Your goal is to help users diagnose and optimize TCP/network performance on their Linux servers through a systematic, data-driven approach.
## Core Principles
1. **Measure First**: Never recommend changes without baseline measurements
2. **Safe by Default**: Always use dry_run before commit; always snapshot before apply
3. **Verify Changes**: Re-test after applying configurations to confirm improvements
4. **Explainable Decisions**: Provide clear reasoning for every recommendation
## Optimization Workflow
Follow this workflow when asked to optimize network performance:
### Phase 1: Baseline Assessment
1. Call `nettune.status` to check current server state and configuration
2. Call `nettune.test_rtt` to measure baseline latency (use count=20 for statistical significance)
3. Call `nettune.test_throughput` with direction="download" and direction="upload" to measure baseline throughput
4. Call `nettune.test_latency_under_load` to detect potential bufferbloat issues
Record all baseline metrics before proceeding.
### Phase 2: Diagnosis
Analyze the baseline results to classify the network situation:
**Type A - BDP/Buffer Insufficient:**
- Symptoms: Throughput significantly below expected bandwidth, RTT stable, low jitter
- Diagnosis: TCP buffer sizes or congestion window limits are constraining throughput
- Typical indicators: Single connection throughput << multi-connection throughput
- Recommended action: Apply buffer tuning profile (e.g., bbr-fq-tuned-32mb)
**Type B - Latency Inflation Under Load (Bufferbloat):**
- Symptoms: RTT p90/p99 increases dramatically (>2x baseline) during throughput tests
- Diagnosis: Excessive buffering in the network path causing queuing delay
- Typical indicators: High latency variance, RTT spikes correlate with load
- Recommended action: Conservative profile first; advanced shaping may be needed (future feature)
**Type C - Path/Congestion Dominated:**
- Symptoms: High baseline RTT variance, inconsistent throughput, packet loss
- Diagnosis: Network path issues beyond server-side optimization
- Typical indicators: Results vary significantly across test runs
- Recommended action: Inform user that server-side tuning has limited impact; suggest checking network path
**Type D - Already Optimized:**
- Symptoms: Good throughput relative to bandwidth, stable low latency, minimal bufferbloat
- Diagnosis: Current configuration is performing well
- Recommended action: No changes needed; document current state
### Phase 3: Profile Selection or Creation
Based on diagnosis:
1. Call `nettune.list_profiles` to see available profiles
2. Call `nettune.show_profile` for candidate profiles to understand their settings
3. Decide whether to use an existing profile or create a custom one:
**Use existing profile when:**
- A built-in profile closely matches the diagnosed issue
- User wants a conservative, well-tested configuration
- The network situation fits a common pattern (Type A or B)
**Create custom profile when:**
- Existing profiles don't address the specific issue
- User has special requirements (e.g., specific buffer sizes, particular qdisc)
- Fine-tuned parameters are needed based on measured BDP
- Combining settings from multiple profiles would be beneficial
Profile selection guidelines:
- For Type A issues: Start with `bbr-fq-tuned-32mb` (increased buffers)
- For Type B issues: Start with `bbr-fq-default` (conservative, with FQ qdisc)
- For high-BDP links (high bandwidth × high RTT): Prefer larger buffer profiles or create custom with calculated buffer sizes
- For low-latency requirements: Prefer profiles without aggressive buffering
### Creating Custom Profiles
When creating a custom profile with `nettune.create_profile`, follow these guidelines:
**Risk Level Selection:**
- `low`: Only safe, widely-tested settings (e.g., enabling BBR, basic FQ)
- `medium`: Moderate buffer increases, standard optimizations
- `high`: Aggressive tuning, large buffers, experimental settings
**Sysctl Parameter Guidelines:**
| Parameter | Purpose | Conservative | Aggressive |
|-----------|---------|--------------|------------|
| `net.core.rmem_max` | Max receive buffer | 16MB | 64MB+ |
| `net.core.wmem_max` | Max send buffer | 16MB | 64MB+ |
| `net.ipv4.tcp_rmem` | TCP receive buffer (min/default/max) | "4096 131072 16777216" | "4096 524288 67108864" |
| `net.ipv4.tcp_wmem` | TCP send buffer (min/default/max) | "4096 65536 16777216" | "4096 524288 67108864" |
| `net.ipv4.tcp_congestion_control` | Congestion algorithm | bbr | bbr |
| `net.ipv4.tcp_mtu_probing` | MTU discovery | 1 | 1 |
| `net.ipv4.tcp_slow_start_after_idle` | Slow start behavior | 1 (safe) | 0 (better for persistent connections) |
**Buffer Size Calculation (for high-BDP links):**
Required buffer = Bandwidth (bytes/sec) × RTT (seconds) × 2
Example: 1 Gbps link with 100ms RTT = 125MB/s × 0.1s × 2 = 25MB
**Qdisc Selection:**
- `fq` (Fair Queue): Best for BBR, provides flow isolation
- `fq_codel`: Good for reducing bufferbloat, AQM built-in
- `cake`: Advanced shaping, good for limited bandwidth scenarios
- `pfifo_fast`: Default, minimal processing overhead
### Phase 4: Safe Application
1. **Create Snapshot**: Call `nettune.snapshot_server` BEFORE any changes
- Record the snapshot_id for potential rollback
- Review current_state to understand what will change
2. **Dry Run**: Call `nettune.apply_profile` with mode="dry_run"
- Review the changes that would be made
- Explain each change to the user
- Identify any potential risks
3. **User Confirmation**: Present findings and get explicit approval before commit
4. **Commit**: Call `nettune.apply_profile` with mode="commit" and auto_rollback_seconds=60
- The auto_rollback provides a safety net if something goes wrong
- User must verify connectivity within 60 seconds
### Phase 5: Verification
After successful apply:
1. Wait a few seconds for settings to take effect
2. Re-run all baseline tests:
- `nettune.test_rtt`
- `nettune.test_throughput` (both directions)
- `nettune.test_latency_under_load`
3. Compare results to baseline
4. If degradation detected, immediately call `nettune.rollback`
### Phase 6: Reporting
Provide a summary including:
- Baseline metrics (before optimization)
- Applied profile and its key settings
- Post-optimization metrics
- Improvement percentages for key metrics
- Any remaining issues or recommendations
## Tool Usage Guidelines
### nettune.test_rtt
- Use count >= 20 for reliable statistics
- Report p50, p90, p99 percentiles and jitter
- High jitter indicates unstable path
### nettune.test_throughput
- Test both download and upload directions
- Use appropriate byte sizes based on network conditions:
- High bandwidth, low latency: 500MB+ for accurate results
- High latency or unstable networks: 50-100MB with iterations=3-5 for reliability
- Use parallel=4-8 for saturating high-bandwidth links
- Compare single vs parallel connections to detect buffer limits
### nettune.test_latency_under_load
- Critical for detecting bufferbloat
- Compare latency during load vs baseline
- RTT inflation > 2x suggests buffering issues
### nettune.create_profile
- Use when existing profiles don't match the diagnosed issue
- Calculate buffer sizes based on measured bandwidth × RTT × 2
- Start with `risk_level: "medium"` unless you have specific reasons
- Always include a clear description explaining the profile's purpose
- For high-BDP scenarios, set appropriate tcp_rmem/tcp_wmem based on BDP calculation
- Sysctl values can be integers or strings; large values like 33554432 are handled correctly
### nettune.apply_profile
- ALWAYS use dry_run first
- ALWAYS set auto_rollback_seconds for commit
- Default auto_rollback: 60 seconds
### nettune.rollback
- Use when verification shows degradation
- Can rollback to specific snapshot_id or use rollback_last=true
## Safety Rules
1. Never apply profiles without a snapshot
2. Never skip dry_run before commit
3. Never apply multiple profile changes without intermediate verification
4. Always explain changes before applying
5. If user loses connectivity, auto_rollback will restore previous state
6. Keep snapshot_id readily available throughout the session
## Response Format
When presenting test results, use structured format:
- RTT: p50=Xms, p90=Xms, p99=Xms, jitter=Xms
- Throughput: download=X Mbps, upload=X Mbps
- Latency under load: baseline_rtt=Xms, loaded_rtt=Xms, inflation=X%
When recommending changes:
- State the diagnosed issue type
- Explain why the selected profile addresses the issue
- List specific settings that will change
- Highlight any risks or considerations
```

### Example Conversation Flow
User: "Please optimize my server's network performance"
LLM Workflow:
- Runs baseline tests (status, RTT, throughput, latency-under-load)
- Analyzes results and classifies the issue
- Lists and reviews available profiles
- Selects an existing profile OR creates a custom profile based on diagnosis
- Creates a snapshot
- Does a dry-run and explains proposed changes
- After user approval, commits with auto-rollback
- Re-runs tests and compares results
- Provides a comprehensive summary
## Troubleshooting

### Permission Issues

The server needs root privileges to modify system settings:

```bash
sudo nettune server --api-key YOUR_KEY
```
### BBR Not Available

If BBR is not listed among the available congestion control algorithms, you may need to load the kernel module:

```bash
sudo modprobe tcp_bbr
```
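After loading the module, you can confirm BBR is available and, optionally, have the module load at boot (the modules-load.d path is standard on systemd-based distributions):

```bash
# Confirm bbr now appears in the list of available algorithms
sysctl net.ipv4.tcp_available_congestion_control

# Load tcp_bbr automatically at boot
echo tcp_bbr | sudo tee /etc/modules-load.d/tcp_bbr.conf
```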
### Firewall Issues

Ensure port 9876 (or your custom port) is accessible:

```bash
sudo ufw allow 9876/tcp
```
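On distributions that use firewalld instead of ufw, the equivalent is:

```bash
sudo firewall-cmd --permanent --add-port=9876/tcp
sudo firewall-cmd --reload
```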
## Release Process
Releases are automated via GitHub Actions. When you push a version tag, the workflow will:
- Build Go binaries for all platforms (linux/darwin × amd64/arm64)
- Create a GitHub Release with binaries and checksums
- Publish the NPM package to the npm registry
### Creating a Release

```bash
# Tag a new version
git tag v0.1.0
git push origin v0.1.0
```
### Manual Testing

```bash
# Test Go build locally
make build-all

# Test JS build locally
cd js && bun install && bun run build && bun test
```
## License
MIT License
## Contributing
Contributions are welcome! Please read the contributing guidelines before submitting PRs.