Expert system hardware probe and performance diagnostic engine for AI, Gaming, and High-Performance workflows. This is a Model Context Protocol (MCP) server that provides deep system insights beyond simple specifications.
- 🔍 Deep Hardware Inventory: Comprehensive analysis of CPU, RAM, GPU (VRAM/Bandwidth), Storage, and OS topology.
- ⚡ Real-time Performance Monitoring: Live tracking of system load and identification of resource-hogging processes.
- 🧊 Thermal & Power Diagnostics: Detects thermal throttling and frequency clipping to resolve unexpected slowness.
- 🤖 AI/LLM Optimization: Specialized tools for predicting LLM performance, calculating quantization fit, and optimizing runtimes (Ollama, CUDA, Metal).
- 🛡️ Privacy-First: Automatic anonymization of unique hardware identifiers before any remote transmission.
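The privacy-first anonymization amounts to replacing unique hardware identifiers with one-way hashes before anything leaves the machine. The helper below is a hypothetical sketch of that idea; the function name, salt scheme, and token length are assumptions for illustration, not the extension's actual code:

```typescript
import { createHash } from "node:crypto";

// Hypothetical sketch: replace a unique hardware identifier (e.g. a disk
// serial number) with a salted one-way hash before any remote transmission.
// A per-installation salt keeps tokens from being correlated across
// machines, and the raw identifier never leaves the host.
export function anonymizeId(rawId: string, installSalt: string): string {
  return createHash("sha256")
    .update(installSalt)
    .update(rawId)
    .digest("hex")
    .slice(0, 16); // a short stable token is enough for de-duplication
}

// Same identifier + same salt → same token, so the server can still
// recognize repeat reports from one machine without seeing the serial.
console.log(anonymizeId("WD-WXA1234567", "per-install-salt"));
```

The key design point is that the hash is salted and truncated: the remote side can deduplicate reports, but cannot reverse the token back to the serial number.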
Install as a Gemini CLI extension:

```shell
gemini extension install @yamaru-eu/hardware-probe
```

Or add this to your MCP settings file (e.g., `npx-config.json` or `claude_desktop_config.json`):
```json
{
  "mcpServers": {
    "yamaru-probe": {
      "command": "npx",
      "args": ["-y", "@yamaru-eu/hardware-probe"]
    }
  }
}
```

The server exposes the following tools:

- `analyze_local_system`: Full hardware inventory.
- `analyze_performance`: Real-time performance metrics and top processes.
- `analyze_ram_pressure`: Detailed memory pressure and RSS analysis for deep RAM troubleshooting.
- `check_storage_health`: Disk SMART health, firmware, and I/O bottleneck analysis.
- `thermal_profile`: Real-time CPU/GPU thermal states, fan speeds, and frequency-throttling detection.
- `diagnose_antivirus_impact`: Detects EDR/antivirus conflicts and exclusion coverage on dev paths.
- `monitor_system_health`: Statistical health report (min/max/avg) over a specified duration.
- `check_llm_compatibility` (BETA): Predicts performance for a specific LLM model via a remote API.
- `get_llm_recommendations` (BETA): Recommends the best local models via a remote API.
- `analyze_inference_config`: Deep dive into AI runtimes and environment variables.
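The quantization-fit calculation behind the AI/LLM tools reduces to simple arithmetic: weight memory ≈ parameter count × bits per weight / 8, plus overhead for the KV cache and runtime buffers. The sketch below illustrates that estimate; the function name and the flat 20% overhead factor are assumptions for illustration, not the server's actual heuristic:

```typescript
// Hypothetical sketch of a quantization-fit estimate. The 20% overhead
// factor for KV cache and activation buffers is an assumed round number,
// not the hardware-probe server's real model.
interface FitResult {
  estimatedGiB: number;
  fits: boolean;
}

export function quantizationFit(
  paramsBillions: number, // model size, e.g. 7 for a 7B model
  bitsPerWeight: number,  // e.g. 4 for Q4 quantization
  vramGiB: number         // available GPU memory in GiB
): FitResult {
  const weightsGiB = (paramsBillions * 1e9 * bitsPerWeight) / 8 / 1024 ** 3;
  const estimatedGiB = weightsGiB * 1.2; // +20% for KV cache and buffers
  return { estimatedGiB, fits: estimatedGiB <= vramGiB };
}

// A 7B model at 4-bit needs roughly 3.26 GiB of weights, ~3.9 GiB total,
// so it fits comfortably in an 8 GiB GPU.
console.log(quantizationFit(7, 4, 8));
```

The same arithmetic explains why a 70B model at 8-bit (~65 GiB of weights alone) cannot fit in a 24 GiB card without offloading.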
When used with Gemini CLI, this extension provides the following expert skills:
- `hardware-performance-expert`: Global protocol for system health and troubleshooting.
- `local-inference-optimizer`: Specialized logic for fine-tuning local LLM runs.
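The statistical report produced by `monitor_system_health` (min/max/avg over a specified duration) is, at heart, a reduction over periodically collected samples. A minimal sketch, with the function name and the sample source assumed for illustration:

```typescript
// Hypothetical sketch of the min/max/avg reduction behind a statistical
// health report. Real samples would come from periodic probes (CPU load,
// temperatures, fan speeds, ...); here they are plain numbers.
interface Stats {
  min: number;
  max: number;
  avg: number;
}

export function summarize(samples: number[]): Stats {
  if (samples.length === 0) throw new Error("no samples collected");
  const min = Math.min(...samples);
  const max = Math.max(...samples);
  const avg = samples.reduce((a, b) => a + b, 0) / samples.length;
  return { min, max, avg };
}

// e.g. normalized CPU load sampled five times over a monitoring window:
console.log(summarize([0.42, 0.57, 0.48, 0.91, 0.44]));
```

Reporting min/max alongside the average matters for diagnostics: a healthy-looking mean can hide a brief spike (the 0.91 above) that corresponds to a thermal or throttling event.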
```shell
npm install          # Install dependencies
npm run build        # Compile TypeScript → dist/
npm run test         # Run test suite
npm run inspector    # Test tools in the MCP Inspector
```

Apache 2.0 - Part of the Yamaru Project.