-
tool-parser
Tool/function call parser for LLM model outputs
-
reasoning-parser
Parser for AI model reasoning/thinking outputs (chain-of-thought, etc.)
-
smg-mcp
Model Context Protocol (MCP) client implementation
-
llama-cpp-2
llama.cpp bindings for Rust
-
toon-format
Token-Oriented Object Notation (TOON) - a token-efficient JSON alternative for LLM prompts
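The token savings that formats like TOON claim come from stating keys once per uniform array instead of once per row. The sketch below illustrates that idea with an invented tabular syntax; it is not the actual TOON grammar, and `to_tabular` is a hypothetical helper, not part of the toon-format crate.

```rust
// Why tabular encodings beat JSON on uniform arrays: field names appear once
// in a header, not repeated in every object. Invented syntax, not real TOON.

fn to_json(rows: &[(&str, u32)]) -> String {
    let items: Vec<String> = rows
        .iter()
        .map(|(name, age)| format!("{{\"name\":\"{name}\",\"age\":{age}}}"))
        .collect();
    format!("[{}]", items.join(","))
}

fn to_tabular(rows: &[(&str, u32)]) -> String {
    // Header declares the fields once; each row is a bare comma-separated line.
    let mut out = String::from("rows{name,age}:\n");
    for (name, age) in rows {
        out.push_str(&format!("{name},{age}\n"));
    }
    out
}

fn main() {
    let rows = [("Alice", 30), ("Bob", 41), ("Carol", 29)];
    let (json, tab) = (to_json(&rows), to_tabular(&rows));
    println!("json: {} chars, tabular: {} chars", json.len(), tab.len());
}
```

The savings grow with row count, since the per-row key overhead is eliminated entirely.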
-
llm
unifying multiple LLM backends
-
pmcp
High-quality Rust SDK for Model Context Protocol (MCP) with full TypeScript SDK compatibility
-
llmfit
Right-size LLM models to your system hardware. Interactive TUI and CLI to match models against available RAM, CPU, and GPU.
-
claude-agent-sdk-rs
Rust SDK for Claude Code CLI with bidirectional streaming, hooks, custom tools, and plugin support - 100% feature parity with Python SDK
-
liter-llm
Universal LLM API client — 142+ providers, streaming, tool calling. Rust-powered, type-safe, compiled.
-
sofos
An interactive AI coding agent for your terminal
-
zeph
Lightweight AI agent with hybrid inference, skills-first architecture, and multi-channel I/O
-
sqz-cli
Universal LLM context compressor — squeeze tokens from prompts, code, JSON, logs, and conversations
-
rmcp-openapi
converting OpenAPI specifications to MCP tools
-
arai
AI coding rules that actually work. Enforce instruction files via hooks — CLAUDE.md, .cursorrules, copilot-instructions, and more.
-
tower-mcp
Tower-native Model Context Protocol (MCP) implementation
-
pensyve-core
Universal memory runtime for AI agents — episodic, semantic, and procedural memory with 8-signal fusion retrieval
-
adk-model
LLM model integrations for Rust Agent Development Kit (ADK-Rust) (Gemini, OpenAI, Claude, DeepSeek, etc.)
-
gobby-squeeze
YAML-configurable output compressor for LLM token optimization
-
llm-tokenizer
LLM tokenizer library with caching and chat template support
-
wikidesk-server
MCP server that wraps LLM-wiki into a shared research service for AI coding agents
-
llm_adapter
adapting various language model APIs to a unified interface
-
turbomcp
Rust SDK for Model Context Protocol (MCP) with zero-boilerplate macros and WASM support
-
llama-cpp-4
llama.cpp bindings for Rust
-
cgip
Terminal client for interacting with ChatGPT that allows you to build and manipulate contexts
-
rstructor
Rust equivalent of Python's Instructor + Pydantic: Extract structured, validated data from LLMs (OpenAI, Anthropic, Grok, Gemini) using type-safe Rust structs and enums
-
byokey
Bring Your Own Keys — AI subscription-to-API proxy gateway
-
modelexpress-client
Client library for Model Express gRPC server
-
forgellm-cli
CLI tool for the ForgeLLM compiler
-
squeez
Hook-based token compressor for 5 AI CLI hosts (Claude Code, Copilot CLI, OpenCode, Gemini CLI, Codex CLI). Up to 95% bash compression, signature-mode for code reads, cross-call dedup…
-
wikidesk
CLI client for wikidesk: sync wiki and submit research questions
-
swiftide
Fast, streaming indexing, query, and agentic LLM applications in Rust
-
rmcp-actix-web
actix-web transport implementations for RMCP (Rust Model Context Protocol)
-
opendev-cli
Binary entry point for the OpenDev CLI
-
mpatch
A smart, context-aware patch tool that applies diffs using fuzzy matching, ideal for AI-generated code
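Fuzzy patch application of the kind mpatch describes boils down to scoring candidate positions by how many context lines match and applying the hunk at the best-scoring one. The sketch below is a generic illustration of that technique under whitespace drift; it is not mpatch's actual algorithm or API.

```rust
// Sketch of fuzzy hunk placement: score each candidate offset by how many
// context lines match (ignoring leading/trailing whitespace), pick the best.

/// Count how many lines of `context` match `lines` starting at `at`.
fn match_score(lines: &[&str], context: &[&str], at: usize) -> usize {
    context
        .iter()
        .enumerate()
        .filter(|(i, c)| lines.get(at + i).map_or(false, |l| l.trim() == c.trim()))
        .count()
}

/// Offset where `context` matches best, if at least one line matches.
fn best_offset(lines: &[&str], context: &[&str]) -> Option<usize> {
    (0..=lines.len().saturating_sub(context.len()))
        .max_by_key(|&at| match_score(lines, context, at))
        .filter(|&at| match_score(lines, context, at) > 0)
}

fn main() {
    let file = ["fn main() {", "    let x = 1;", "    println!(\"{x}\");", "}"];
    // Context copied from a drifted diff: indentation no longer matches.
    let context = ["let x = 1;", "println!(\"{x}\");"];
    println!("best offset: {:?}", best_offset(&file, &context));
}
```

Real tools add refinements (line-number hints, similarity thresholds, multi-hunk ordering), but the core idea is this best-match search.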
-
dynamo-llm
Dynamo LLM Library
-
lazy-mcp
MCP proxy that lazy-loads servers and exposes them as four meta-tools
-
kv-index
Radix tree implementations for prefix matching and cache-aware routing
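Prefix matching for cache-aware routing, as kv-index's description names it, means finding how much of an incoming prompt is covered by an already-cached prefix so the request can be routed to the worker holding that KV cache. The character-level trie below is a minimal, hypothetical sketch of that lookup, not kv-index's actual data structure or API (production routers match token sequences, not characters).

```rust
use std::collections::HashMap;

// Minimal trie for longest-cached-prefix lookup, illustrating the matching
// step of cache-aware routing. Illustrative only.

#[derive(Default)]
struct Trie {
    children: HashMap<char, Trie>,
    is_end: bool, // a cached prefix terminates here
}

impl Trie {
    fn insert(&mut self, s: &str) {
        let mut node = self;
        for ch in s.chars() {
            node = node.children.entry(ch).or_default();
        }
        node.is_end = true;
    }

    /// Length (in chars) of the longest cached prefix of `s`.
    fn longest_prefix(&self, s: &str) -> usize {
        let (mut node, mut best, mut depth) = (self, 0, 0);
        for ch in s.chars() {
            match node.children.get(&ch) {
                Some(next) => {
                    node = next;
                    depth += 1;
                    if node.is_end {
                        best = depth;
                    }
                }
                None => break,
            }
        }
        best
    }
}

fn main() {
    let mut cache = Trie::default();
    cache.insert("You are a helpful assistant.");
    let hit = cache.longest_prefix("You are a helpful assistant. Q: hi");
    println!("reusable prefix chars: {hit}");
}
```

A radix (compressed) trie collapses single-child chains into edge labels, which matters at scale but does not change the lookup logic shown here.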
-
supp
here is your context
-
smg-grpc-client
gRPC clients for SGLang and vLLM backends
-
llm_models_spider
Auto-updated registry of LLM model capabilities (vision, audio, etc.)
-
ollama-lmstudio-proxy
High-performance proxy server that bridges Ollama API and LM Studio
-
ba
task tracking for LLM sessions
-
musicgpt
Generate music based on natural language prompts using LLMs running locally
-
codineer-cli
Codineer - a local AI coding-agent CLI powered by Rust
-
langfuse-client-base
Auto-generated Langfuse API client from OpenAPI specification
-
elicitation
Conversational elicitation of strongly-typed Rust values via MCP
-
deepwiki-rs
deepwiki-rs (also known as Litho) is a high-performance engine for automatically generating C4 architecture documentation, developed in Rust. It can intelligently analyze project structures…
-
commandok
Spotlight-like command generator for your terminal, powered by LLMs
-
ascent-research
An incremental research workflow CLI for AI agents. Every session resumes; knowledge accretes across runs. Mixes HTTP, browser, and local file ingest into a durable per-session wiki + figure-rich HTML report.
-
noether-cli
Noether CLI: ACLI-compliant command-line interface for stage management, composition graph execution, and LLM-powered compose
-
mistralrs
Fast, flexible LLM inference
-
cerememory-cli
Command-line interface for Cerememory
-
zeph-memory
Semantic memory with SQLite and Qdrant for Zeph agent
-
filesystem-mcp-rs
Rust port of the official MCP filesystem server - fast, safe, protocol-compatible file operations
-
yoyo-agent
A coding agent that evolves itself. Born as 200 lines of Rust, growing up in public.
-
zeph-llm
LLM provider abstraction with Ollama, Claude, OpenAI, and Candle backends
-
adk-core
Core traits and types for Rust Agent Development Kit (ADK-Rust) agents, tools, sessions, and events
-
notarai
CLI validator for NotarAI spec files
-
ai-gateway
AI gateway for managing and routing LLM requests - Govern, Secure, and Optimize your AI Traffic
-
bob-cli
CLI app for Bob - a general-purpose AI agent framework
-
dynamo-config
Dynamo Inference Framework
-
llmposter
Drop-in mock server for OpenAI, Anthropic & Gemini APIs — library or standalone CLI. SSE streaming, tool calling, OAuth2, failure injection, streaming chaos, stateful scenarios, request capture…
-
claude-code-proxy
OpenAI-compatible API proxy for Claude Code CLI
-
ast-outline
Fast, AST-based structural outline for source files. Built for LLM coding agents and humans.
-
udiffx
Parse and apply LLM-optimized unified diff + XML file changes
-
cruise
YAML-driven coding agent workflow orchestrator
-
dynamo-parsers
Dynamo Parser Library for Tool Calling and Reasoning
-
scouter-sql
Sql library to use with scouter-server
-
ruvllm
LLM serving runtime with Ruvector integration - Paged attention, KV cache, and SONA learning
-
noether-sandbox
Thin binary: read an IsolationPolicy (JSON on stdin), run the argv after -- inside the bubblewrap sandbox. For non-Rust consumers (Python, Node, Go, shell) that want to delegate to…
-
parry-guard
Prompt injection scanner CLI - substring, unicode, secrets, and ML detection
-
forgellm-frontend
Model parsing (GGUF, SafeTensors) and IR construction for ForgeLLM
-
zerobox-sandboxing
Sandbox any command with file, network, and credential controls
-
oxo-call
Model-intelligent orchestration for CLI bioinformatics — call any tool with LLM intelligence
-
openai-protocol
OpenAI-compatible API protocol definitions and types
-
mcp-council
MCP server for multi-LLM peer review and council deliberation workflow
-
llm-connector
Next-generation Rust library for LLM protocol abstraction with native multi-modal support. Supports 12+ providers (OpenAI, Anthropic, Google, Aliyun, Zhipu, Ollama, Tencent, Volcengine…
-
semver-analyzer
Deterministic semantic breaking change analyzer for TypeScript/JavaScript
-
prism-mcp-rs
Production-grade Rust SDK for Model Context Protocol (MCP) - Build AI agents, LLM integrations, and assistant tools with enterprise features
-
redshank-cli
CLI entry point for the Redshank investigation agent
-
outlines-core
Structured Generation
-
sage-runtime
Runtime library for compiled Sage programs
-
linguafranca
LLM API format converter — convert between OpenAI, Anthropic, and Open Responses formats
-
turbovault
Production-grade MCP server for Obsidian vault management - Transform your vault into an intelligent knowledge system for AI agents
-
yomo
A QUIC-based runtime for AI-LLM tool routing and serverless execution
-
wgpu-llm-cli
Terminal-based chat interface for the wgpu LLM inference engine
-
fetchkit
AI-friendly web content fetching and HTML-to-Markdown conversion library
-
infiniloom
High-performance repository context generator for LLMs - Claude, GPT-4, Gemini optimized
-
elifrs
elif.rs CLI - Convention over configuration web framework tooling with zero-boilerplate project generation
-
seite
AI-native static site generator — every page ships as HTML, markdown, and structured data
-
aichat
All-in-one LLM CLI Tool
-
npcsh
The composable multi-agent shell
-
meerkat
Modular, high-performance agent harness for LLM-powered applications
-
pmetal-metal
Metal GPU compute kernels for PMetal - FlashAttention and optimized ML primitives
-
cc-sdk
Rust SDK for Claude Code CLI with full interactive capabilities
-
shimmy
Lightweight sub-5MB Ollama alternative with native SafeTensors support. No Python dependencies, 2x faster loading. Now with GitHub Spec-Kit integration for systematic development.
-
ct2rs
Rust bindings for OpenNMT/CTranslate2
-
sqz-engine
Adaptive multi-pass LLM context compression engine — content-aware pipeline with AST parsing, token counting, session persistence, and budget tracking
-
edgequake-llm
Multi-provider LLM abstraction library with caching, rate limiting, and cost tracking
-
cloudllm
A batteries-included Rust toolkit for building intelligent agents with LLM integration, multi-protocol tool support, multi-agent orchestration, and MentisDB-backed durable memory
-
error-toon
Compress verbose browser errors for LLM consumption. Save 70-90% tokens.
-
codineer-plugins
Plugin system and hooks for Codineer
-
mdstream
Streaming-first Markdown middleware for LLM output (committed + pending blocks, render-agnostic)
-
tuillem
A 3-pane terminal AI chat client with easy connectivity to local and remote LLM endpoints
-
autoagents
Agent Framework for Building Autonomous Agents
-
gsqz
YAML-configurable output compressor for LLM token optimization
-
aigent
CLI, and Claude plugin for managing agent skill definitions
-
dsct
LLM-friendly packet dissector CLI
-
writestead
LLM Wiki
-
anki-llm
A command-line interface for bulk-processing Anki flashcards with LLMs
-
r2t
A fast CLI tool to convert a repository's structure and contents into a single text file, useful for providing context to LLMs
-
forgellm-codegen-metal
Metal GPU code generation for Apple Silicon inference in ForgeLLM
-
repo-flatten
Flatten all files in the repository into a single file for consumption by LLMs. Skips hidden files and files ignored via .gitignore.
-
claude-code-agent-sdk
Rust SDK for Claude Code CLI with bidirectional streaming, hooks, custom tools, and plugin support
-
localgpt
CLI — a local-only AI assistant
-
lmm
A language-agnostic framework for emulating reality
-
kwaak
Run a team of autonomous agents on your code, right from your terminal
-
ai-memory
AI-agnostic persistent memory system — MCP server, HTTP API, and CLI for any AI platform
-
capsule-run
Secure WASM runtime to isolate and manage AI agent tasks
-
txt
cargo doc for coding agents
-
adk-agent
Agent implementations for Rust Agent Development Kit (ADK-Rust, LLM, Custom, Workflow agents)
-
algocline
LLM amplification engine — MCP server with Lua scripting
-
ratatoskr-cli
Trace-first, deterministic execution for language model workflows
-
vllora
AI gateway for managing and routing LLM requests - Govern, Secure, and Optimize your AI Traffic
-
llm-agent-runtime
Unified Tokio agent runtime -- orchestration, memory, knowledge graph, and ReAct loop in one crate
-
peas
A CLI-based, flat-file issue tracker for humans and robots
-
ramparts
A CLI tool for scanning Model Context Protocol (MCP) servers
-
cargo-read
Download crate source and show README + metadata, designed for LLM tool use
-
langdb_core
AI gateway Core for LangDB AI Gateway
-
kalosm-sample
A common interface for token sampling and helpers for structured LLM sampling
-
noether-engine
Noether composition engine: Lagrange graph AST, type checker, planner, executor, semantic index, LLM-backed composition agent
-
awaken-stores
Storage backends (memory, file, PostgreSQL, SQLite mailbox) for Awaken agent state
-
vectorless
Reasoning-based Document Engine
-
motosan-ai
Rust SDK for multi-provider AI chat
-
rho-agent
AI coding agent with file tools
-
octolib
Self-sufficient AI provider library with multi-provider support, embedding models, model validation, and cost tracking
-
noether-core
Noether core: type system, effects, content-addressed stage schema, Ed25519 signing, stdlib
-
ai-rsk
Security gate for AI-generated code - blocks the build until vulnerabilities are fixed
-
tauq
Token-efficient data notation - 49% fewer tokens than JSON (verified with tiktoken)
-
crw-mcp
MCP (Model Context Protocol) server for the CRW web scraper
-
yoagent
effective agent loop with tool execution and event streaming
-
forgellm-codegen-cpu
CPU code generation (x86 AVX2/512, ARM NEON) for ForgeLLM
-
git-prism
Agent-optimized git data MCP server — structured change manifests and full file snapshots for LLM agents
-
llm-git
AI-powered git commit message generator using Claude and other LLMs via OpenAI-compatible APIs
-
ai-agents-llm
LLM providers for AI Agents framework
-
do_it
Autonomous coding agent powered by local LLMs via Ollama. Cross-platform, no shell dependency, no cloud APIs required.
-
mcp-protocol-sdk
Production-ready Rust SDK for the Model Context Protocol (MCP) with multiple transport support
-
tibet-dgx
Zero-Trust DGX — Run LLMs across machines without NVLink. QUIC multi-stream + encrypted RAID-0 + DIME aperture.
-
tru
TOON reference implementation in Rust (JSON <-> TOON)
-
files-to-prompt
Concatenates a directory full of files into a single prompt for use with LLMs
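Several crates in this list (files-to-prompt, r2t, repo-flatten) implement the same flatten-a-repo-into-one-prompt pattern. The in-memory sketch below shows its core: concatenate file contents under per-file headers so the model can attribute code to its source. The delimiter format is invented; the real tools read from disk and honor .gitignore.

```rust
// In-memory sketch of the "flatten files into one prompt" pattern.
// Header and path handling are illustrative, not any specific tool's format.

fn flatten(files: &[(&str, &str)]) -> String {
    let mut prompt = String::new();
    for (path, content) in files {
        // A per-file header lets the model attribute code to its source file.
        prompt.push_str(&format!("--- {path} ---\n{content}\n"));
    }
    prompt
}

fn main() {
    let files = [
        ("src/main.rs", "fn main() { println!(\"hi\"); }"),
        ("README.md", "# demo"),
    ];
    print!("{}", flatten(&files));
}
```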
-
awaken-runtime
Phase-based execution engine, plugin system, and agent loop for Awaken
-
runok
Command execution permission framework for LLM agents
-
toon
Token-Oriented Object Notation – a token-efficient JSON alternative for LLM prompts
-
ruvector-sona
Self-Optimizing Neural Architecture - Runtime-adaptive learning for LLM routers with two-tier LoRA, EWC++, and ReasoningBank
-
splintr
Fast Rust tokenizer (BPE + SentencePiece + WordPiece) with Python bindings
-
llama-mcp-server
Local LLM inference MCP server powered by llama.cpp
-
sloppify
your codebase to reduce LLM training
-
mirage-proxy
Invisible sensitive data filter for LLM APIs — secrets, credentials, and PII replaced with plausible fakes
-
siumai-protocol-openai
OpenAI(-like) protocol standard mapping for siumai
-
crw-cli
crw — CLI tool for scraping URLs to markdown/JSON without a server
-
openai-oxide
Idiomatic Rust client for the OpenAI API — 1:1 parity with the official Python SDK
-
elif-core
Core architecture foundation for the elif.rs LLM-friendly web framework
-
cerememory-engine
Orchestrator that assembles all Cerememory stores and engines
-
neva
MCP SDK for Rust
-
runtara-ai
AI/LLM integration for runtara workflows — synchronous, ureq-based
-
soul-core
Async agentic runtime for Rust — steerable agent loops, context management, multi-provider LLM abstraction, virtual filesystem, WASM-ready
-
tirea-extension-permission
Tool-level permission policies and user-approval gating for tirea agents
-
apcore
Schema-driven module standard for AI-perceivable interfaces
-
g3-glitter-bomb
✨💖 GB (G3-Glitter-Bomb) - Dialectical multi-agent autocoding with theatrical personas 💖✨
-
motosan-agent-loop
Standalone ReAct agent loop — LlmClient + AgentLoop with no platform dependencies
-
zag-cli
A unified CLI for AI coding agents — Claude, Codex, Gemini, Copilot, and Ollama
-
praxis-echo
Pipeline enforcement engine for AI self-evolution
-
llm-kit-provider
Provider interface and traits for the LLM Kit - defines the contract for implementing AI model providers
-
rai-cli
Run AI instructions directly from your terminal, scripts, and CI/CD pipelines
-
mcp-host
Production-grade MCP host crate for building Model Context Protocol servers
-
tryparse
Multi-strategy parser for messy real-world data. Handles broken JSON, markdown wrappers, and type mismatches.
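One common recovery strategy for messy LLM output, of the kind tryparse's description names, is to ignore surrounding prose or markdown wrappers and scan for the first balanced JSON object, tracking string literals so braces inside them don't confuse the depth count. The function below is a generic sketch of that single strategy, not tryparse's actual implementation or API.

```rust
// Sketch of a "skip the wrapper, find balanced JSON" recovery strategy.
// Tracks string literals and escapes so braces inside strings are ignored.

/// Extract the first balanced `{...}` object from text that may wrap it
/// in prose or other noise.
fn extract_json_object(text: &str) -> Option<&str> {
    let start = text.find('{')?;
    let (mut depth, mut in_str, mut escaped) = (0i32, false, false);
    for (i, &b) in text.as_bytes().iter().enumerate().skip(start) {
        if in_str {
            match b {
                _ if escaped => escaped = false,
                b'\\' => escaped = true,
                b'"' => in_str = false,
                _ => {}
            }
        } else {
            match b {
                b'"' => in_str = true,
                b'{' => depth += 1,
                b'}' => {
                    depth -= 1;
                    if depth == 0 {
                        return Some(&text[start..=i]);
                    }
                }
                _ => {}
            }
        }
    }
    None
}

fn main() {
    let messy = "Sure! Here is the result: {\"ok\": true, \"note\": \"has } inside\"} hope that helps";
    println!("{:?}", extract_json_object(messy));
}
```

A multi-strategy parser chains several such recoveries (fence stripping, trailing-comma repair, type coercion) and returns the first one that yields valid data.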
-
awaken-ext-skills
Skill package discovery and activation plugin for Awaken
-
ai-agents-disambiguation
Intent disambiguation support for AI Agents framework
-
awaken-contract
Core types, traits, and state model for the Awaken AI agent runtime
-
cargo-ai
Build lightweight AI agents with Cargo. Powered by Rust. Declared in JSON.
-
spn-cli
The Agentic AI Toolkit - unified CLI for models, secrets, MCP servers, workflows, and AI agents
-
magi-core
LLM-agnostic multi-perspective analysis system inspired by MAGI
-
ifran
Local LLM inference, training, and fleet management platform
-
adaptive-card-mcp
MCP server exposing Adaptive Cards v1.6 tools (validate, optimize, transform, analyze) over stdio for any LLM client
-
hoosh
AI inference gateway — multi-provider LLM routing, local model serving, speech-to-text, and token budget management
-
bob-chat
Chat channel types and streaming abstractions for Bob Agent Framework
-
instructors
Type-safe structured output extraction from LLMs. The Rust instructor.
-
sqz-mcp
MCP server for sqz — expose LLM context compression over Model Context Protocol (stdio/SSE)
-
tokf
Config-driven CLI tool that compresses command output before it reaches an LLM context
-
bob-adapters
Adapter implementations for Bob Agent Framework ports
-
ai-agents-state
State machine for AI Agents framework
-
lutum-protocol
Core traits and request/response types for lutum
-
branchforge
Graph-first Rust runtime for durable LLM agents
-
awaken-tool-pattern
Glob and regex pattern matching for tool IDs in Awaken
-
cli_engineer
An autonomous CLI coding agent
-
famulus
LSP server integrating LLMs
-
parecode
A terminal coding agent built for token efficiency and local model reliability
-
edgee-compressor
HTTP response compression library for Edgee
-
ai-agents-process
Input/Output processing pipeline for AI Agents framework
-
siumai-protocol-anthropic
Anthropic Messages protocol standard mapping for siumai
-
codexus
Ergonomic Rust wrapper for codex app-server with runtime safety and release gates
-
awaken-ext-permission
Permission plugin with allow/deny/ask policies for Awaken tool execution
-
ctxforge
Deterministic prompt engineer for AI coding agents. Detects your project stack, attaches GitHub resources, flags missing context — never calls an LLM.
-
tools-rs
Core functionality for the tools-rs tool collection system
-
motosan-agent-tool
Shared AI agent tool kit — traits, registry, and built-in tools for LLM agents
-
zerobox-utils-string
Sandbox any command with file, network, and credential controls
-
ai-agents-reasoning
Reasoning and reflection capabilities for AI Agents framework
-
awaken-ext-deferred-tools
Deferred tool loading with ToolSearch and probability-based deferral for Awaken
-
aof-core
Core types, traits, and abstractions for AOF framework
-
valta-cli
CLI for valta — JSON repair and validation for LLM outputs
-
scope-cli
Code intelligence CLI for LLM coding agents — structural navigation, dependency graphs, and semantic search without reading full source files
-
noos
Reliability layer for Rust LLM agents: scope drift, cost circuit breaks, and procedural correction memory as event-driven Decisions
-
noether-scheduler
Cron-based composition scheduler — runs Noether Lagrange graphs on a schedule, fires webhooks on result
-
awful_rustdocs
Generate Rustdoc comments automatically using Awful Jade and a Nushell-based AST extractor
-
littrs-ruff-python-ast
Vendored ruff_python_ast for littrs (from github.com/astral-sh/ruff)
-
utokenizer
CLI tool for building a local model-tokenizer registry and counting input tokens across model families
-
opensession
CLI for opensession.io - discover, upload, and manage AI coding sessions
-
edgequake-pdf2md
Convert PDF documents to Markdown using Vision Language Models — CLI and library
-
seasoning
Embedding and reranking infrastructure with rate limiting and retry logic
-
rustia-rs
Rust version of typia.io for type-safe JSON validation and LLM JSON parsing
-
ai-agents-memory
Memory implementations for AI Agents framework
-
nab
Token-optimized HTTP client for LLMs — fetches any URL as clean markdown
-
delite-core
The sqlite of durable agent execution — crash-recoverable AI agents with exactly-once semantics. Zero dependencies.
-
awaken-ext-generative-ui
Server-driven UI component plugin (A2UI) for Awaken
-
enki-next
Enki's Rust agent runtime, workflow engine, and shared core abstractions
-
pmetal
High-performance LLM fine-tuning framework for Apple Silicon
-
mii-text
A small, unix-friendly CLI for talking to OpenAI-compatible LLM APIs
-
tirea-extension-observability
LLM inference and tool-call telemetry aligned with OpenTelemetry GenAI conventions
-
llm-transpile
High-performance LLM context bridge — token-optimized document transpiler
-
kalosm-language-model
A common interface for language models/transformers
-
sigit
Code — ACP-compatible AI coding agent for smbCloud platform
-
tirea-extension-skills
Skill discovery, activation, and resource loading for tirea agent tool extensibility
-
onwards
A flexible LLM proxy library
-
forgellm-codegen-gpu
GPU code generation via wgpu/WGSL for ForgeLLM
-
typia
Rust version of typia.io for type-safe JSON validation and LLM JSON parsing