Self-hosted AI agent engine for developers.
35 providers · 72 tools · 22 channels · runs on anything.
Docs · Report Bug · Discord
Chat with Gemma 4 locally. Zero cost. Runs on your Mac.
- Why profClaw
- Features
- Quick Start
- Interactive TUI
- Slash Commands
- Headless Mode
- Deployment Modes
- Where It Runs
- Configuration
- AI Providers
- Chat Channels
- Integrations
- Architecture
- Development
- License
Most AI coding tools are cloud-only (Cursor, Devin) or single-purpose CLIs (Aider, SWE-Agent). profClaw runs on your machine, talks to 35 providers (or none at all: Ollama works fully offline), and does more than chat.
- Local-first. Your data stays on your hardware.
- 35 AI providers, including Ollama for fully offline usage.
- Interactive TUI with streaming markdown, syntax highlighting, and a model switcher.
- Routes tasks to the right agent based on capability scoring.
- Real task queue with retry, backoff, and dead letter handling (BullMQ + Redis).
- 22 chat channels: Slack, Discord, Telegram, WhatsApp, Teams, and more.
- Per-token cost tracking with budget alerts.
- 72 built-in tools: file ops, git, browser automation, cron, web search, voice.
- Pico mode runs on a Raspberry Pi Zero 2W in ~140MB RAM.
| Feature | Details |
|---|---|
| 35 AI providers | Anthropic, OpenAI, Google, Groq, Ollama, DeepSeek, Bedrock, and 28 more |
| 22 chat channels | Slack, Discord, Telegram, WhatsApp, iMessage, Matrix, Teams, and 15 more |
| 72 built-in tools | File ops, git, browser automation, cron, web search, canvas, voice |
| 50 skills | Coding agent, GitHub issues, Notion, Obsidian, image gen, and more |
| Interactive TUI | Streaming markdown, syntax highlighting, slash picker, model selector |
| MCP server | Model Context Protocol. Works with Claude Desktop, Cursor, any MCP client |
| Voice I/O | Whisper STT + TTS (ElevenLabs/OpenAI/system) + talk mode |
| Plugin SDK | Build and share plugins via npm or ClawHub |
| 3 deployment modes | Pico (~140MB), Mini (~145MB), Pro (full features) |
```shell
npm install -g profclaw
profclaw init
profclaw chat --tui

# Verify
curl http://localhost:3000/health
```

`profclaw init` scans your project, detects your stack, and writes a PROFCLAW.md context file. It picks up any AI provider keys already in your environment.
`profclaw chat --tui` opens the terminal UI with streaming responses, code highlighting, slash commands, and a model selector.
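For a TypeScript service, the generated PROFCLAW.md might look something like this (the fields and wording are illustrative, not the actual output of `profclaw init`):

```markdown
# PROFCLAW.md — project context

## Stack
- Node.js 22, TypeScript, pnpm
- Express API in src/, Vitest for tests

## Conventions
- Conventional commits
- Run `pnpm test` before proposing changes
```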
```shell
npx profclaw onboard
```

Walks you through provider setup, config, and server start. Takes about 5 minutes.
```shell
docker run -d \
  -p 3000:3000 \
  -e ANTHROPIC_API_KEY=sk-ant-xxx \
  -e PROFCLAW_MODE=mini \
  ghcr.io/profclaw/profclaw:latest
```

```shell
git clone https://github.com/profclaw/profclaw.git
cd profclaw
cp .env.example .env
docker compose up -d
```

Want free local AI? Add Ollama:

```shell
docker compose --profile ai up -d
```

```shell
curl -fsSL https://raw.githubusercontent.com/profclaw/profclaw/main/install.sh | bash
```

profClaw has a full terminal UI. No browser needed.
```shell
profclaw chat --tui
# or
profclaw tui
```

What you get:
- Streaming markdown rendered as the model responds
- Syntax-highlighted code blocks (100+ languages)
- Slash command picker: type `/` to browse skills
- Model selector: switch providers mid-conversation with `Ctrl+M`
- Tool execution panel showing every call, argument, and result
- Session history and full keyboard navigation
```
┌─ profClaw v2.x.x ─────────────────────────────────────────────┐
│ [Chat] [Tools]                                                │
│                                                               │
│ You: Review src/auth.ts for security issues                   │
│                                                               │
│ Agent: I'll analyze auth.ts for security issues.    read_file │
│                                                     ───────── │
│ ## Security Analysis                                  auth.ts │
│                                                               │
│ **Critical**: JWT secret read from env without        analyze │
│ fallback guard on line 42. If `JWT_SECRET` is        ──────── │
│ undefined the app will sign tokens with `undefined`.  auth.ts │
│ ...                                                           │
└─────────────────────────────── [claude-sonnet-4-6] [pro] ──────┘
```
Type `/` in any chat to run a skill. Skills are plain Markdown files, so creating your own is easy.
| Command | What it does |
|---|---|
| `/commit` | Write a conventional commit message for staged changes |
| `/review-pr 42` | Full code review of a pull request |
| `/deploy` | Run your deploy script with rollback awareness |
| `/summarize src/` | Summarize all files in a directory |
| `/web-research "topic"` | Research a topic across the web |
| `/analyze-code file.ts` | Security, performance, and style analysis |
| `/ticket PROJ-123` | Pull ticket context into the conversation |
| `/image "prompt"` | Generate an image via the configured image provider |
| `/help` | List all available slash commands |
| `/exit` | Exit the current session |
See Skills docs for all 50 built-in skills and how to create your own with a plain Markdown file.
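As a sketch of what a custom skill file could look like (the frontmatter fields here are illustrative; check the Skills docs for the real schema):

```markdown
---
name: changelog
description: Draft a changelog entry from recent commits
---

Read the last 10 git commits and draft a "Keep a Changelog" style
entry grouped into Added / Changed / Fixed. Ask before writing to
CHANGELOG.md.
```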
Run agents from scripts, CI pipelines, or other services:
```shell
# One-shot message
profclaw chat "Summarize the last 10 git commits"

# JSON output for scripting
profclaw chat "List open TODOs in src/" --output json

# Run a skill headlessly
profclaw skill run commit --message "feat: add auth"

# Agent task with file context
profclaw agent run "Review src/ for security issues" --files src/

# Verify your setup
profclaw doctor
```

REST API:
```shell
curl -X POST http://localhost:3000/api/chat/message \
  -H "Content-Type: application/json" \
  -d '{"message": "List open GitHub issues", "provider": "anthropic"}'
```

Same binary, different scale. Set `PROFCLAW_MODE` and go:
| Mode | What you get | RAM | Best for |
|---|---|---|---|
| pico | Agent + tools + 1 chat channel + cron. No UI. | ~140MB | Raspberry Pi, $5 VPS, home server |
| mini | + Dashboard, integrations, 3 channels | ~145MB | Personal dev server, small VPS |
| pro | + All channels, Redis queues, plugins, browser tools | ~200MB | Teams, production |
Set via the `PROFCLAW_MODE=pico|mini|pro` environment variable.
| Hardware | RAM | Recommended mode |
|---|---|---|
| Raspberry Pi Zero 2W | 512MB | pico |
| Raspberry Pi 3/4/5 | 1-8GB | mini or pro |
| Orange Pi / Rock Pi | 1-4GB | mini or pro |
| $5/mo VPS (Hetzner, OVH) | 512MB-1GB | pico or mini |
| Old laptop / home PC | 4-16GB | pro |
| Docker (alongside other services) | 512MB+ | any mode |
| Old Android phone (Termux) | 1-2GB | pico |
profClaw requires Node.js 22+. For bare-metal embedded devices (ESP32, Arduino), see MimiClaw (C) or PicoClaw (Go).
The setup wizard (`profclaw onboard`) handles everything interactively. Or set environment variables:
```shell
# AI Provider (pick one)
ANTHROPIC_API_KEY=sk-ant-xxx
OPENAI_API_KEY=sk-xxx
OLLAMA_BASE_URL=http://localhost:11434

# Deployment
PROFCLAW_MODE=mini
PORT=3000

# Optional
REDIS_URL=redis://localhost:6379   # Required for pro mode
```

Run `profclaw doctor` to verify your configuration at any time.
See .env.example for all 130+ options.
35 providers with 90+ model aliases:
| Provider | Models | Local? |
|---|---|---|
| Anthropic | Claude 4.x, 3.5 | No |
| OpenAI | GPT-4o, o1, o3 | No |
| Google | Gemini 2.x | No |
| Ollama | Llama, Mistral, Qwen, ... | Yes |
| AWS Bedrock | Claude, Titan, Llama | No |
| Groq | Fast inference | No |
| DeepSeek | V3, R1 | No |
| Azure OpenAI | GPT-4o | No |
| xAI | Grok | No |
| OpenRouter | Any model | No |
| Together | Open models | No |
| Fireworks | Open models | No |
| Mistral | Mistral Large, Codestral | No |
| LM Studio | Local models | Yes |
| ... and 21 more | HuggingFace, NVIDIA NIM, Cerebras, Replicate, Zhipu, Moonshot, Qwen, etc. | |
| Channel | Protocol |
|---|---|
| Slack | Bolt SDK |
| Discord | HTTP Interactions |
| Telegram | Bot API |
| WhatsApp | Cloud API |
| WebChat | SSE (browser-based) |
| Matrix | Client-Server API |
| Google Chat | Webhook + API |
| Microsoft Teams | Bot Framework |
| iMessage | BlueBubbles |
| Signal | signald bridge |
| IRC | TLS, RFC 1459 |
| LINE | Messaging API |
| Mattermost | REST API v4 |
| DingTalk | OpenAPI + webhook |
| WeCom | WeChat Work API |
| Feishu/Lark | Open Platform |
| Bot API | |
| Nostr | Relay protocol |
| Twitch | Helix API + IRC |
| Zalo | OA API v3 |
| Nextcloud Talk | OCS API |
| Synology Chat | Webhook |
| Platform | Features |
|---|---|
| GitHub | Webhooks, OAuth, issue sync, PR automation |
| Jira | Webhooks, OAuth, issue sync, transitions |
| Linear | Webhooks, OAuth, issue sync |
| Cloudflare | Workers deployment, KV, D1 |
| Tailscale | Private network access |
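Integration webhooks like these are typically authenticated with an HMAC signature; GitHub, for instance, sends an `X-Hub-Signature-256` header. A minimal, standalone verification sketch (not profClaw's actual code):

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Check a GitHub-style X-Hub-Signature-256 header against the raw
// request body using the shared webhook secret.
function verifySignature(secret: string, payload: string, header: string): boolean {
  const expected =
    "sha256=" + createHmac("sha256", secret).update(payload).digest("hex");
  const a = Buffer.from(expected);
  const b = Buffer.from(header);
  // Constant-time comparison avoids leaking the signature via timing.
  return a.length === b.length && timingSafeEqual(a, b);
}
```

The comparison must run over the raw body bytes, not a re-serialized JSON object, or signatures will intermittently fail.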
```
src/
  adapters/       AI agent adapters (tool calling, streaming)
  chat/           Chat engine + execution pipeline
    providers/    Slack, Discord, Telegram, WhatsApp, WebChat, ...
    execution/    Tool executor, sandbox, agentic loop
  core/           Deployment modes, feature flags
  integrations/   GitHub, Jira, Linear webhooks
  queue/          BullMQ (pro) + in-memory (pico/mini) task queue
  providers/      35 AI SDK providers
  skills/         Skill loader and registry
  mcp/            MCP server (stdio + SSE)
  tui/            Interactive terminal UI (ink + blessed)
  types/          Shared TypeScript types
ui/src/           React 19 + Vite dashboard (mini/pro only)
skills/           Built-in skill definitions (Markdown)
```
Design decisions:
- Same binary scales from pico to pro. Features are gated at runtime.
- Adding an AI provider is one file implementing `ModelAdapter`.
- Skills are plain `.md` files. No code needed.
- BullMQ for production queues, in-memory fallback for pico/mini.
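To illustrate the one-file-per-provider shape, here is a toy adapter. The interface below is an assumption for the sketch; the real `ModelAdapter` in `src/adapters/` carries streaming and tool-calling hooks:

```typescript
// Assumed minimal shape of a provider adapter (illustrative only).
interface ModelAdapter {
  name: string;
  complete(prompt: string): Promise<string>;
}

// Toy adapter: a real one would wrap a provider SDK call here.
class EchoAdapter implements ModelAdapter {
  name = "echo";
  async complete(prompt: string): Promise<string> {
    return `echo: ${prompt}`;
  }
}
```

Registering the new file with the provider registry is then the only remaining step.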
See Architecture docs for a deep dive.
```shell
git clone https://github.com/profclaw/profclaw.git
cd profclaw
pnpm install
cp .env.example .env
pnpm dev
```

See CONTRIBUTING.md for full guidelines.
See SECURITY.md for our security policy and how to report vulnerabilities.