🤖 MCP Enterprise AI Assistant System
A comprehensive Model Context Protocol (MCP) based AI application featuring intelligent server selection, specialized AI tools, and a beautiful web interface for enterprise-grade text analysis, code review, sentiment analysis, and knowledge management.
🌟 Workshop-Ready Features
🏗️ True MCP (Model Context Protocol) Implementation
- 📋 JSON-RPC 2.0 Protocol: Full compliance with JSON-RPC 2.0 specification
- 🤝 MCP Initialize Handshake: Proper server capability negotiation
- 🔄 Standard MCP Methods: initialize, tools/list, tools/call, resources/list
- ⚡ WebSocket Transport: Persistent connections as per the MCP specification
- 🎯 Tool Schema Compliance: Proper inputSchema format for tool definitions
🚀 AI-Powered Enterprise Features
- 🧠 Intelligent Server Selection: AI routing to appropriate servers based on context
- 📝 Text Analysis: AI summarization, entity extraction, and classification
- 🔍 Code Review: Automated quality analysis, bug detection, improvements
- 😊 Sentiment Analysis: Advanced emotion detection and sentiment scoring
- 📚 Knowledge Management: Document Q&A and information retrieval
- 🎨 Beautiful Web UI: Professional interface with structured result display
- 🔧 Configurable AI Models: Support for Ollama (local) and Azure OpenAI
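The intelligent server selection above routes each query to one of the four specialized servers. The host uses AI-based routing; as a rough sketch of the fallback idea only (the keyword table and `route_query` helper here are illustrative, not the project's actual code):

```python
# Illustrative keyword-based routing to the four MCP servers.
# The real host uses AI-driven selection; this shows only the concept.
SERVER_KEYWORDS = {
    "code_review": ("code", "review", "bug", "function", "refactor"),
    "sentiment_analysis": ("sentiment", "emotion", "feeling", "mood"),
    "knowledge": ("search", "document", "know", "find"),
    "text_analysis": ("summarize", "entity", "classify", "extract"),
}

def route_query(query: str) -> str:
    """Pick the server whose keywords best match the query."""
    words = query.lower()
    best, best_hits = "text_analysis", 0  # default server
    for server, keywords in SERVER_KEYWORDS.items():
        hits = sum(1 for kw in keywords if kw in words)
        if hits > best_hits:
            best, best_hits = server, hits
    return best

print(route_query("Review this Python code for bugs"))  # code_review
```

Queries with no matching keywords fall through to the text analysis server as a sensible default.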
🏗️ System Architecture
┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐
│ Web Frontend │────│ Flask App │────│ MCP Host │
│ (HTML/JS/CSS) │ │ (Web Server) │ │ (AI Coordinator)│
└─────────────────┘ └─────────────────┘ └─────────────────┘
│
┌─────────────────┐
│ MCP Client │
│ (Communication) │
└─────────────────┘
│
┌───────────────────┬───────────────────┬───────────────────┬───────────────────┐
│ │ │ │ │
┌───────▼────────┐ ┌────────▼────────┐ ┌───────▼────────┐ ┌────────▼────────┐
│ Text Analysis │ │ Code Review │ │ Sentiment │ │ Knowledge │
│ Server │ │ Server │ │ Analysis │ │ Server │
│ Port: 8001 │ │ Port: 8002 │ │ Port: 8003 │ │ Port: 8004 │
└────────────────┘ └─────────────────┘ └────────────────┘ └─────────────────┘
📋 Prerequisites
1. Python Installation
- Python 3.8 or higher is required
- Download from: https://python.org/downloads/
- Verify installation:
python --version or py --version
2. Ollama Installation & Setup
Install Ollama
- Download Ollama: Visit https://ollama.ai/download
- Install for your OS:
- Windows: Download and run the installer
- macOS: brew install ollama
- Linux: curl -fsSL https://ollama.ai/install.sh | sh
Download Required AI Model
# Download the Llama 3.2 3B model (recommended for this project)
ollama pull llama3.2:3b
# Verify the model is installed
ollama list
Start Ollama Service
# Start Ollama service (keep this running)
ollama serve
3. Git Installation (for cloning the repository)
- Download from: https://git-scm.com/downloads
- Verify:
git --version
🚀 Installation & Setup
Step 1: Clone the Repository
git clone https://github.com/harunraseed07/ADC_MCP_Project.git
cd ADC_MCP_Project
Step 2: Install Python Dependencies
# Install required packages
pip install -r requirements.txt
Step 3: Verify Ollama Model
# Make sure llama3.2:3b is available
ollama list
# If not installed, download it
ollama pull llama3.2:3b
Step 4: Configure the System
The system is pre-configured to use:
- AI Provider: Ollama (local)
- Model: llama3.2:3b
- Ports: 8001-8004 for MCP servers, 5000 for web interface
Configuration files are in the config/ directory.
🎯 Quick Start
Method 1: Automated Start (Recommended)
# Start all servers and web application (Windows)
start_demo_system.bat
# For PowerShell
./start_demo_system.bat
Method 2: Manual Start
# Terminal 1: Start Text Analysis Server
python -m mcp_servers.text_analysis_server
# Terminal 2: Start Code Review Server
python -m mcp_servers.code_review_server
# Terminal 3: Start Sentiment Analysis Server
python -m mcp_servers.sentiment_analysis_server
# Terminal 4: Start Knowledge Server
python -m mcp_servers.knowledge_server
# Terminal 5: Start Web Application
python web_app/app.py
Access the Application
- Open your browser
- Navigate to: http://localhost:5000
- Start interacting with the AI assistant!
💡 Usage Examples
Text Analysis
"Summarize this text: [your text here]"
"Extract entities from: [your text]"
"Classify this content: [your content]"
Code Review
"Review this Python code: def function_name():"
"Check this JavaScript for bugs: [your code]"
"Analyze code quality: [your code]"
Sentiment Analysis
"Analyze sentiment: I love this product!"
"What's the emotion in: [your text]"
"Sentiment of customer feedback: [feedback]"
Knowledge Management
"Search for information about: [topic]"
"What do you know about: [subject]"
"Find documents related to: [query]"
🔌 MCP Protocol Implementation
JSON-RPC 2.0 Compliance
All communication follows JSON-RPC 2.0 specification:
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "summarize_text",
    "arguments": {"text": "Your text here"}
  }
}
MCP Standard Methods
- initialize: Server capability handshake
- tools/list: Get available tools with schemas
- tools/call: Execute a tool with arguments
- resources/list: List available resources
- resources/read: Read resource content
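For reference, an initialize exchange typically looks like the pair below. The field values (server name, version strings, capability contents) are illustrative examples, not this project's exact payloads; consult the MCP specification for the authoritative field list.

```python
import json

# Illustrative MCP initialize request/response pair (values are examples only).
initialize_request = {
    "jsonrpc": "2.0",
    "id": 0,
    "method": "initialize",
    "params": {
        "protocolVersion": "2024-11-05",
        "clientInfo": {"name": "mcp-host", "version": "1.0.0"},
        "capabilities": {},
    },
}

initialize_response = {
    "jsonrpc": "2.0",
    "id": 0,  # JSON-RPC 2.0: the response must echo the request id
    "result": {
        "protocolVersion": "2024-11-05",
        "serverInfo": {"name": "text-analysis-server", "version": "1.0.0"},
        "capabilities": {"tools": {}, "resources": {}},
    },
}

# Responses are correlated to requests by matching ids.
assert initialize_response["id"] == initialize_request["id"]
print(json.dumps(initialize_request)[:30])
```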
Tool Schema Format
{
  "name": "summarize_text",
  "description": "AI-powered text summarization",
  "inputSchema": {
    "type": "object",
    "properties": {
      "text": {
        "type": "string",
        "description": "Text to summarize"
      }
    },
    "required": ["text"]
  }
}
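Before dispatching a tools/call, the host can check arguments against this inputSchema. Below is a minimal validator sketch covering only the required/type checks shown above; it is not the project's code, and a real host would typically use a full JSON Schema validator such as the jsonschema package.

```python
# Minimal validation of tool-call arguments against an MCP inputSchema.
# Handles only "required" and basic "type" checks.
_TYPES = {"string": str, "object": dict, "number": (int, float), "boolean": bool}

def validate_arguments(input_schema: dict, arguments: dict) -> list:
    """Return a list of validation error messages (empty list = valid)."""
    errors = []
    for field in input_schema.get("required", []):
        if field not in arguments:
            errors.append(f"missing required field: {field}")
    for field, spec in input_schema.get("properties", {}).items():
        expected = _TYPES.get(spec.get("type"))
        if field in arguments and expected and not isinstance(arguments[field], expected):
            errors.append(f"{field}: expected {spec['type']}")
    return errors

schema = {
    "type": "object",
    "properties": {"text": {"type": "string", "description": "Text to summarize"}},
    "required": ["text"],
}
print(validate_arguments(schema, {"text": "hello"}))  # []
print(validate_arguments(schema, {}))  # ['missing required field: text']
```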
🔧 Configuration
AI Model Configuration
Edit config/mcp_config.json:
{
  "ai_provider": "ollama",
  "ai_model": "llama3.2:3b",
  "ai_base_url": "http://localhost:11434"
}
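A loader for this file might look like the sketch below. The defaults mirror the config shown above; the `load_config` helper name is illustrative, and the project's actual loading code may differ.

```python
import json
from pathlib import Path

# Defaults mirror config/mcp_config.json; values from the file override them.
DEFAULTS = {
    "ai_provider": "ollama",
    "ai_model": "llama3.2:3b",
    "ai_base_url": "http://localhost:11434",
}

def load_config(path: str = "config/mcp_config.json") -> dict:
    """Merge the JSON config file over the built-in defaults."""
    config = dict(DEFAULTS)
    p = Path(path)
    if p.exists():
        config.update(json.loads(p.read_text()))
    return config

config = load_config()
print(config["ai_provider"])
```

Merging over defaults this way keeps the system bootable even if the config file is missing or only partially filled in.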
Server Ports
- Text Analysis: 8001
- Code Review: 8002
- Sentiment Analysis: 8003
- Knowledge: 8004
- Web Interface: 5000
🛠️ Troubleshooting
Common Issues
1. "Ollama not found" Error
# Make sure Ollama is running
ollama serve
2. "Model not found" Error
# Download the required model
ollama pull llama3.2:3b
3. Port Already in Use
# Check what's using the ports
netstat -ano | findstr :8001
# Kill processes if needed
kill_all_servers.bat
4. Python Module Not Found
# Reinstall dependencies
pip install -r requirements.txt
Utility Scripts
- check_ports.bat - Check which ports are in use
- kill_all_servers.bat - Stop all running servers
- start_demo_system.bat - Start the entire system
🎓 Workshop Activities
Activity 1: Basic Setup (15 minutes)
- Install prerequisites (Python, Ollama)
- Download the llama3.2:3b model
- Clone and setup the project
- Start the system and verify it's working
Activity 2: Understanding MCP (20 minutes)
- Explore the system architecture
- Examine how AI routing works
- Test different types of queries
- Observe server selection logic
Activity 3: Customization (25 minutes)
- Modify server responses
- Add new AI tools
- Customize the web interface
- Experiment with different AI models
Activity 4: Advanced Features (20 minutes)
- Implement custom server logic
- Add new MCP servers
- Integrate external APIs
- Deploy to production environment
📁 Project Structure
ADC_MCP_Project/
├── mcp_servers/ # MCP server implementations
│ ├── text_analysis_server.py
│ ├── code_review_server.py
│ ├── sentiment_analysis_server.py
│ └── knowledge_server.py
├── mcp_client/ # MCP client for communication
│ └── client.py
├── mcp_host/ # AI-powered MCP host
│ ├── host.py
│ └── ai_models.py
├── web_app/ # Flask web application
│ ├── app.py
│ ├── templates/
│ └── static/
├── config/ # Configuration files
│ └── mcp_config.json
├── scripts/ # Utility scripts
└── requirements.txt # Python dependencies
🎉 Acknowledgments
- Built with the Model Context Protocol (MCP) framework
- Powered by Ollama and Llama 3.2 AI models
- Inspired by enterprise AI automation needs
🛠️ Additional Troubleshooting
Web UI Not Loading
- Check if Flask is running on the correct port (5000 by default)
- Verify port configurations
- Verify template files exist
- Check browser console for errors
Logging
Enable debug logging by setting:
FLASK_DEBUG=True
LOG_LEVEL=DEBUG
View logs in the terminal where you started the application.
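In a Python entry point, these environment variables can be applied roughly as follows (a sketch; the project's app.py may wire logging differently):

```python
import logging
import os

# Read LOG_LEVEL from the environment (DEBUG, INFO, WARNING, ...),
# falling back to INFO when unset or unrecognized.
level_name = os.environ.get("LOG_LEVEL", "INFO").upper()
level = getattr(logging, level_name, logging.INFO)
logging.basicConfig(
    level=level,
    format="%(asctime)s %(name)s %(levelname)s: %(message)s",
)
logging.getLogger("mcp_host").debug("debug logging enabled")
print(logging.getLevelName(level))
```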
📊 Performance
Resource Usage
- Memory: ~200-500MB (depending on AI model)
- CPU: Low (spikes during AI inference)
- Network: Minimal (local WebSocket communication)
Scalability
- Each server can handle multiple concurrent connections
- AI model responses are cached for common queries
- WebSocket connections are persistent and efficient
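The response caching mentioned above can be as simple as memoizing on the normalized query string (an illustrative sketch; the project's actual cache, if any, may be implemented differently):

```python
from functools import lru_cache

@lru_cache(maxsize=256)
def cached_ai_response(query: str) -> str:
    """Cache AI responses per normalized query to skip repeat inference."""
    # Placeholder for the real model call (e.g., a request to Ollama).
    return f"response for: {query}"

def ask(query: str) -> str:
    # Normalize so trivially different queries share one cache entry.
    return cached_ai_response(query.strip().lower())

ask("Summarize this text")
ask("  summarize this TEXT ")  # second call is served from the cache
print(cached_ai_response.cache_info().hits)  # 1
```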
🔒 Security
Current Implementation
- Local communication only (localhost)
- No authentication required
- Mock data for demonstration
Production Considerations
- Add authentication and authorization
- Use HTTPS/WSS for encrypted communication
- Implement rate limiting
- Add input validation and sanitization
- Use real databases with proper security
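Of these, rate limiting is straightforward to sketch as a token bucket per client (illustrative only; production systems usually rely on a gateway or a library such as Flask-Limiter rather than hand-rolled code):

```python
import time

class TokenBucket:
    """Allow up to `rate` requests per `per` seconds for each client key."""

    def __init__(self, rate: int = 10, per: float = 60.0):
        self.rate, self.per = rate, per
        self.tokens = {}   # client -> remaining tokens
        self.updated = {}  # client -> last refill timestamp

    def allow(self, client: str) -> bool:
        now = time.monotonic()
        elapsed = now - self.updated.get(client, now)
        self.updated[client] = now
        tokens = self.tokens.get(client, self.rate)
        # Refill proportionally to elapsed time, capped at the burst size.
        tokens = min(self.rate, tokens + elapsed * self.rate / self.per)
        if tokens >= 1:
            self.tokens[client] = tokens - 1
            return True
        self.tokens[client] = tokens
        return False

bucket = TokenBucket(rate=2, per=60.0)
print([bucket.allow("10.0.0.1") for _ in range(3)])  # [True, True, False]
```

Keying the bucket on the client address keeps one noisy client from exhausting the limit for everyone.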
🤝 Contributing
- Fork the repository
- Create a feature branch
- Make your changes
- Add tests if applicable
- Submit a pull request
Code Style
- Follow PEP 8 for Python code
- Use meaningful variable and function names
- Add docstrings for classes and functions
- Comment complex logic
📝 License
This project is for demonstration purposes. See LICENSE file for details.
🆘 Support
For issues and questions:
- Check the troubleshooting section
- Review server logs
- Open an issue with detailed information
🎯 Future Enhancements
- Add more server types (weather, news, etc.)
- Implement user authentication
- Add persistent data storage
- Support for multiple AI models simultaneously
- Real-time notifications
- Mobile-responsive improvements
- API documentation with Swagger
- Docker containerization
- Kubernetes deployment support
- Monitoring and metrics dashboard
Built with ❤️ using the Model Context Protocol (MCP)