1. toon-format

    Token-Oriented Object Notation (TOON) - a token-efficient JSON alternative for LLM prompts

    v0.4.5 46K #toon #serialization #llm #llm-token #format
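For context on this first entry: TOON's core idea is to declare the field names of a uniform object array once and then emit value rows, rather than repeating every key per object as JSON does. A small illustrative sketch (field names are hypothetical; syntax follows the TOON format's published examples):

```
users[2]{id,name}:
  1,Alice
  2,Bob
```

The equivalent JSON, `{"users":[{"id":1,"name":"Alice"},{"id":2,"name":"Bob"}]}`, repeats each key for every element, which is where the claimed token savings come from.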
  2. llm

    unifying multiple LLM backends

    v1.3.7 7.9K #artificial-intelligence #chat-completion #json-schema #claude #text-to-speech #chat-request #openai #eleven-labs #ollama #multi-step
  3. tool-parser

    Tool/function call parser for LLM model outputs

    v1.2.0 149K #llm-function-calling #function-calling #llm #tool-calling #api-bindings
  4. llmfit

    Right-size LLM models to your system hardware. Interactive TUI and CLI to match models against available RAM, CPU, and GPU.

    v0.9.15 1.0K #model #inference #tui #llm #hardware #llm-inference
  5. liter-llm

    Universal LLM API client — 142+ providers, streaming, tool calling. Rust-powered, type-safe, compiled.

    v1.3.0 4.1K #openai #api-client #llm #kreuzberg
  6. sqz-cli

    Universal LLM context compressor — squeeze tokens from prompts, code, JSON, logs, and conversations

    v1.0.9 #artificial-intelligence #llm-context #compression #llm #cli
  7. gobby-squeeze

    YAML-configurable output compressor for LLM token optimization

    v0.4.2 #llm #token-optimization #compression
  8. llm-tokenizer

    LLM tokenizer library with caching and chat template support

    v1.3.2 157K #tokenize #hugging-face #tiktoken #llm #chat-template #tokenizer
  9. wikidesk-server

    MCP server that wraps LLM-wiki into a shared research service for AI coding agents

    v0.1.2 #wiki #mcp #research #llm-agent #llm
  10. llm_models_spider

    Auto-updated registry of LLM model capabilities (vision, audio, etc.)

    v0.1.85 10K #llm #model #multimodal #vision
  11. adk-model

    LLM model integrations for Rust Agent Development Kit (ADK-Rust) (Gemini, OpenAI, Claude, DeepSeek, etc.)

    v0.7.0 1.5K #artificial-intelligence #llm-agent #adk #llm #gemini
  12. dynamo-llm

    Dynamo LLM Library

    v1.0.2 180 #inference #kv-cache #dynamo #deployment #block #model-express #nixl #remote-storage #gpu #llm
  13. ba

    task tracking for LLM sessions

    v0.2.1 #multi-agent #llm #task
  14. mistralrs

    Fast, flexible LLM inference

    v0.8.1 9.0K #inference #llm-inference #llm #transformer #machine-learning
  15. noether-cli

    Noether CLI: ACLI-compliant command-line interface for stage management, composition graph execution, and LLM-powered compose

    v0.8.2 #pipeline #composition #workflow #llm-agent #llm
  16. zeph-llm

    LLM provider abstraction with Ollama, Claude, OpenAI, and Candle backends

    v0.20.0 #inference #ai-agent #skill #llm #llm-inference
  17. ast-outline

    Fast, AST-based structural outline for source files. Built for LLM coding agents and humans.

    v0.1.0 #tree-sitter #ast #llm #outline #cli
  18. ruvllm

    LLM serving runtime with Ruvector integration - Paged attention, KV cache, and SONA learning

    v2.1.0 1.9K #kv-cache #inference #llm-inference #ruvector #paged-attention #llm
  19. udiffx

    Parse and apply LLM-optimized unified diff + XML file changes

    v0.1.40 #unified-diff #diff-patch #llm #code-editing
  20. llm-connector

    Next-generation Rust library for LLM protocol abstraction with native multi-modal support. Supports 12+ providers (OpenAI, Anthropic, Google, Aliyun, Zhipu, Ollama, Tencent, Volcengine…

    v1.1.20 260 #anthropic #llm #openai #protocols #api-bindings
  21. yomo

    A QUIC-based runtime for AI-LLM tool routing and serverless execution

    v0.6.1 #serverless #ai-agent #llm-agent #llm #tool
  22. aichat

    All-in-one LLM CLI Tool

    v0.30.0 900 #artificial-intelligence #repl #llm
  23. wgpu-llm-cli

    Terminal-based chat interface for the wgpu LLM inference engine

    v0.1.1 #wgpu #inference-engine #compute-shader #wgsl-shader #llm #llm-inference #gpu #llama #from-scratch #chat-interface
  24. meerkat

    Modular, high-performance agent harness for LLM-powered applications

    v0.5.1 420 #artificial-intelligence #llm #ai-agent
  25. sqz-engine

    Adaptive multi-pass LLM context compression engine — content-aware pipeline with AST parsing, token counting, session persistence, and budget tracking

    v1.0.9 #artificial-intelligence #llm-context #compression #token #llm
  26. edgequake-llm

    Multi-provider LLM abstraction library with caching, rate limiting, and cost tracking

    v0.6.14 2.2K #anthropic #openai #gemini #llm #api-bindings
  27. error-toon

    Compress verbose browser errors for LLM consumption. Save 70-90% tokens.

    v1.2.0 #token #llm #compression #cli #error
  28. gsqz

    YAML-configurable output compressor for LLM token optimization

    v0.1.0 #llm #token-optimization #compression #cli
  29. dsct

    LLM-friendly packet dissector CLI

    v0.2.10 #packet-dissector #pcap #pcapng #packet #dissector #llm
  30. adk-agent

    Agent implementations for Rust Agent Development Kit (ADK-Rust, LLM, Custom, Workflow agents)

    v0.7.0 500 #llm-agent #adk #workflow #llm #api-bindings
  31. writestead

    LLM Wiki

    v0.1.17 #wiki #mcp #llm #front-matter #authentication #pdf #logging #lint #broken-links #stale
  32. kalosm-sample

A common interface for token sampling and helpers for structured LLM sampling

    v0.4.1 1.7K #artificial-intelligence #llama #nlp #mistral #llm
  33. noether-engine

    Noether composition engine: Lagrange graph AST, type checker, planner, executor, semantic index, LLM-backed composition agent

    v0.8.2 #composition #workflow #pipeline #llm-agent #llm
  34. ai-agents-llm

    LLM providers for AI Agents framework

    v1.0.0-rc.11 260 #framework #llm #agent-framework #yaml
  35. git-prism

    Agent-optimized git data MCP server — structured change manifests and full file snapshots for LLM agents

    v0.2.0 #llm-agent #mcp #llm #git
  36. runok

    Command execution permission framework for LLM agents

    v0.2.3 #sandbox #llm #security #cli #permissions
  37. llama-mcp-server

    Local LLM inference MCP server powered by llama.cpp

    v0.1.1 #inference #gguf #mcp #llama #mcp-server #web-server #json-rpc #llama-cpp #mcp-model #llm
  38. elif-core

    Core architecture foundation for the elif.rs LLM-friendly web framework

    v0.7.1 100 #dependency-injection #llm #framework #web
  39. soul-core

    Async agentic runtime for Rust — steerable agent loops, context management, multi-provider LLM abstraction, virtual filesystem, WASM-ready

    v0.12.4 #artificial-intelligence #agentic #wasm #llm #ai-agent
  40. llm-kit-provider

    Provider interface and traits for the LLM Kit - defines the contract for implementing AI model providers

    v0.1.2 100 #llm #provider #traits
  41. magi-core

    LLM-agnostic multi-perspective analysis system inspired by MAGI

    v0.3.1 #multi-agent #analysis #llm #consensus
  42. ifran

    Local LLM inference, training, and fleet management platform

    v1.3.0 #inference #llm #model-serving #training #llm-inference
  43. hoosh

    AI inference gateway — multi-provider LLM routing, local model serving, speech-to-text, and token budget management

    v1.3.0 700 #inference #whisper #llm #gateway #llm-inference
  44. adaptive-card-mcp

    MCP server exposing Adaptive Cards v1.6 tools (validate, optimize, transform, analyze) over stdio for any LLM client

    v0.1.0 #mcp #model-context #llm #adaptive-cards
  45. sqz-mcp

    MCP server for sqz — expose LLM context compression over Model Context Protocol (stdio/SSE)

    v1.0.9 #artificial-intelligence #llm-context #mcp #compression #llm
  46. tokf

    Config-driven CLI tool that compresses command output before it reaches an LLM context

    v0.2.41 #context-window #llm #llm-context #token
  47. branchforge

    Graph-first Rust runtime for durable LLM agents

    v0.9.5 #ai-agent #run-time #llm #graph #agent
  48. valta-cli

    CLI for valta — JSON repair and validation for LLM outputs

    v0.2.0 #repair #llm #json #cli-validation
  49. noos

    Reliability layer for Rust LLM agents: scope drift, cost circuit breaks, and procedural correction memory as event-driven Decisions

    v0.4.1 #llm-agent #regulator #circuit-breaker #llm #reliability
  50. rustia-rs

    Rust version of typia.io for type-safe JSON validation and LLM JSON parsing

    v0.1.2 #serde-json #rust-version #json-validation #llm #io #type-safe #typia #deserialize #lenient
  51. tirea-extension-observability

    LLM inference and tool-call telemetry aligned with OpenTelemetry GenAI conventions

    v0.5.0 240 #open-telemetry #observability #llm #agent
  52. llm-transpile

    High-performance LLM context bridge — token-optimized document transpiler

    v0.1.4 #markdown #llm #rag #compression #tokenizer
  53. typia

    Rust version of typia.io for type-safe JSON validation and LLM JSON parsing

    v0.1.1 #serde-json #rust-version #json-validation #llm #io #type-safe #deserialize #validation-error #lenient #json-error
  54. multi-llm

    Unified multi-provider LLM client with support for OpenAI, Anthropic, Ollama, and LMStudio

    v1.0.0 #unified #llm #anthropic #openai
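The many "unified client" and "provider abstraction" crates in this list (llm, multi-llm, siumai, flyllm, and others) share one underlying pattern: a provider trait that hides vendor differences behind a single call surface, so backends can be swapped at runtime. A minimal stdlib-only sketch of that pattern; the trait and types here are illustrative and are not the API of any listed crate:

```rust
// Illustrative provider-abstraction pattern; not the API of any crate above.
trait LlmProvider {
    fn name(&self) -> &str;
    fn complete(&self, prompt: &str) -> Result<String, String>;
}

// Two mock backends standing in for real vendor integrations.
struct MockOpenAi;
struct MockClaude;

impl LlmProvider for MockOpenAi {
    fn name(&self) -> &str { "openai" }
    fn complete(&self, prompt: &str) -> Result<String, String> {
        Ok(format!("[openai] echo: {prompt}"))
    }
}

impl LlmProvider for MockClaude {
    fn name(&self) -> &str { "claude" }
    fn complete(&self, prompt: &str) -> Result<String, String> {
        Ok(format!("[claude] echo: {prompt}"))
    }
}

fn main() {
    // Callers hold `dyn LlmProvider`, so the backend is a runtime choice.
    let providers: Vec<Box<dyn LlmProvider>> =
        vec![Box::new(MockOpenAi), Box::new(MockClaude)];
    for p in &providers {
        println!("{}: {:?}", p.name(), p.complete("hi"));
    }
}
```

Real crates layer streaming, tool calling, and retry policies on top of this shape, but the trait-object core is the common denominator.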
  55. toon-rust

    Token-Oriented Object Notation (TOON) - JSON for LLM prompts at half the tokens. Rust implementation.

    v0.1.3 1.9K #serialization #llm-token #llm
  56. liter-llm-bindings-core

    Shared utilities for liter-llm language bindings — case conversion, config parsing, error formatting, runtime management

    v1.2.2 #config-parser #api-client #native-bindings #openai #formatting #case-conversion #llm #typescript #tokio-runtime #anthropic
  57. cognate-cli

    CLI tool for interacting with LLM providers via Cognate

    v0.1.0 #provider #llm #cli #rate-limiting #axum #chatgpt #multi-provider #tool-calling #production-grade #vector-store
  58. miyabi-llm

    LLM abstraction layer for Miyabi - GPT-OSS-20B integration

    v1.1.0 170 #artificial-intelligence #inference #openai #gpt #llm #llm-inference
  59. langchain-rust

    LangChain for Rust, the easiest way to write LLM-based programs in Rust

    v4.6.0 2.5K #chatgpt #llm #llm-chain
  60. llm_hunter

Forensic analysis of LLM GGUF files and more

    v0.3.5 #gguf #binary-analysis #llm #forensic-analysis #llm-forensics
  61. qubit-metadata

    Type-safe extensible metadata model for the Qubit LLM SDK

    v0.3.0 #qubit #llm #metadata #serde #filter
  62. noether-grid-worker

    RESEARCH — noether-grid worker: advertises LLM capacity, runs graphs on request

    v0.8.2 #pipeline #workflow #composition #llm-agent #llm
  63. agent-sdk

    Rust Agent SDK for building LLM agents

    v0.8.0 110 #model-context-protocol #artificial-intelligence #ai-agent #llm
  64. dumbo-rs

Turn any codebase into LLM-ready context: supports monorepos, multi-project runs, and git diffs

    v0.6.0 #git-diff #codebase #context #monorepo #text-file #llm #file-context #multi-project
  65. context-builder

    CLI tool to aggregate directory contents into a single markdown file optimized for LLM consumption

    v0.8.2 #markdown-documentation #llm-context #llm #markdown
  66. cupel

    Context window management pipeline for LLM applications

    v1.2.0 #token-budget #context-window #llm #pipeline
  67. cli-pdf-extract

    Fast Rust CLI wrapper around pdf_oxide for LLM-friendly PDF extraction

    v0.1.4 #pdf #markdown #annotations #llm
  68. erinra

    Memory MCP server for LLM coding assistants

    v0.1.0-alpha.2 #embedding-model #mcp #coding-assistants #memories #mcp-server #web-server #llm #daemon #reranker #database
  69. peon-runtime

    A runtime-agnostic agent execution engine with zero-trust context injection, multimodal messaging, and pluggable LLM providers

    v0.1.4 #artificial-intelligence #claude #execution-engine #llm #zero-trust #multimodal #peon #runtime-agnostic #request-scoped #engine-context
  70. god-gragh

    A graph-based LLM white-box optimization toolbox: topology validation, Lie group orthogonalization, tensor ring compression

    v0.5.0 #optimization #graph #graph-tensor #topology #llm
  71. swink-agent

    Core scaffolding for running LLM-powered agentic loops

    v0.8.1 #artificial-intelligence #llm-agent #llm #streaming #ai-agent
  72. sochdb

    LLM-optimized database with native vector search

    v2.0.2 #embedded-database #database #vector-search #llm #vector-database
  73. toondb

    LLM-optimized database with native vector search

    v0.3.4 #embedded-database #vector-search #database #llm
  74. mistralrs-quant

    Fast, flexible LLM inference

    v0.8.1 10K #inference #llm-inference #llm #transformer #machine-learning
  75. codecat

    「 Merge Code Repository into a Single File | Respects .gitignore | Ideal for LLM Code Analysis 」

    v0.1.2 #single-file #llm #cli #string
  76. smg

    High-performance model-routing gateway for large-scale LLM deployments

    v1.4.1 #inference #openai #load-balancing #llm #inference-gateway #llm-inference
  77. llmnop

    A command-line tool for benchmarking the performance of LLM inference endpoints

    v0.9.0 #inference #benchmark #openai #llm-inference #llm
  78. siumai

    A unified LLM interface library for Rust

    v0.11.0-beta.6 #openai #anthropic #llm #async
  79. aether-llm

    Multi-provider LLM abstraction layer for the Aether AI agent framework

    v0.2.3 #artificial-intelligence #anthropic #aether #llm #openai
  80. rtk-lite-cc

    Lightweight CLI proxy for Claude Code — minimizes LLM token consumption by filtering command outputs

    v0.2.2 #llm-token #claude #llm #proxy
  81. llama-rs

    A high-performance Rust implementation of llama.cpp - LLM inference engine with full GGUF support

    v0.15.1 #gguf #artificial-intelligence #llm-inference #llm
  82. tower-llm

    A Tower-based framework for building LLM & agent workflows in Rust

    v0.0.13 700 #artificial-intelligence #multi-agent #llm #llm-agent #openai #ai-agent #ai-agents
  83. reson-agentic

    Agents are just functions - production-grade LLM agent framework

    v0.5.1 #artificial-intelligence #llm #ai-agents #mcp #framework
  84. wonk

    Structure-aware code search CLI for LLM coding agents

    v4.14.1 #coding-agent #code-search #tree-sitter #indexer #llm
  85. ruvllm-wasm

    WASM bindings for RuvLLM - browser-compatible LLM inference runtime with WebGPU acceleration

    v2.0.0 #inference #web-gpu #llm-inference #browser #llm #wasm
  86. synoema-types

    Synoema — programming language optimized for LLM code generation

    v0.1.0 #jit-compiler #cranelift #llm #language-compiler #jit
  87. loki-ai

    An all-in-one, batteries included LLM CLI Tool

    v0.3.0 #chatgpt #repl #llm
  88. lkr-cli

    CLI for LLM Key Ring — manage LLM API keys via macOS Keychain

    v0.3.4 #keychain #llm #api-key #secret
  89. lnmp-llb

    LNMP-LLM Bridge Layer - Optimization layer for LLM prompt visibility and token efficiency

    v0.5.16 #lnmp #serialization #protocols #llm #serialization-protocols
  90. cli-denoiser

    CLI proxy that strips terminal noise for LLM agents. Zero false positives.

    v0.1.1 #llm-agent #llm #denoiser
  91. fzp

    Fuzzy Processor - parallel LLM inference pipe filter

    v0.3.5 #inference #filter #llm #processor #parallel #llm-inference #classify
  92. hermes-llm

    LLM training from scratch using Candle

    v1.8.34 #llm #training #transformer #deep-learning #gpt
  93. llm-stack

    Core traits, types, and tools for the llm-stack SDK

    v0.7.0 #anthropic #llm #ollama #openai
  94. ought-agent

    Provider-agnostic agent loop driving an Llm against a ToolSet

    v0.2.1 #llm #specs #testing #deontic #behavioral
  95. iron_runtime

    Agent runtime with LLM request routing and translation

    v0.4.0 #llm #agent-runtime #routing
  96. swiftide-query

    Fast, streaming indexing, query, and agentic LLM applications in Rust

    v0.32.1 #artificial-intelligence #llm #rag #openai
  97. tiycore

    Unified LLM API and stateful Agent runtime in Rust

    v0.1.21-rc.26042620 #anthropic #ai-agent #llm #openai #api-bindings
  98. llmkit

    Production-grade LLM client - 100+ providers, 11,000+ models. Pure Rust.

    v0.1.3 #artificial-intelligence #claude #openai #llm
  99. llm-utl

    Convert code repositories into LLM-friendly prompts with smart chunking and filtering

    v0.1.5 #code-analysis #llm #prompt #tokenizer
  100. swarm-engine-llm

    LLM integration backends for SwarmEngine

    v0.1.6 #llm #multi-agent #swarm #orchestration
  101. ferrum-interfaces

    Core trait contracts for the Ferrum LLM inference engine

    v0.6.0 #llama #inference-engine #ferrum #open-ai-compatible #llm #hugging-face #metal #llm-inference #model-text #text-image
  102. debugger-cli

    LLM-friendly debugger CLI using the Debug Adapter Protocol

    v0.1.3 #debugging #dap #llm
  103. cllient

    A comprehensive Rust client for LLM APIs with unified interface and model management

    v0.2.1 #openai #llm-client #llm #openai-api #ai-api
  104. tiy-core

    Unified LLM API and stateful Agent runtime in Rust

    v0.1.1-rc.26031910 #anthropic #ai-agent #llm #openai #api-bindings
  105. oris-mutation-evaluator

    Mutation quality evaluator with static analysis and LLM critic

    v0.3.0 #artificial-intelligence #evaluator #oris #llm #self-evolution #critic #closed-loop
  106. llm_runtime

    Abstractions and primitives for building agents and runtimes on top of llm_adapter

    v0.2.0 #large-language-model #llm #api-client #adapter
  107. chace

    CHamal's AutoComplete Engine - An LLM based code completion engine

    v0.2.0 #llm #engine #code-completion #autocomplete #cursor-position #artificial-intelligence #llm-token #breaking-change
  108. engram-agent

    Reusable LLM agent loop with tool-calling and lifecycle hooks

    v0.2.1 #llm-agent #qdrant #rag #llm #memory #agent-memory
  109. liter-llm-cli

    CLI for liter-llm — start an OpenAI-compatible proxy server or MCP tool server

    v1.3.0 #openai #kreuzberg #llm #proxy
  110. seqpacker

    High-performance sequence packing for LLM training

    v0.1.3 #bin-packing #llm #optimization #machine-learning #deep-learning
  111. ferrum-engine

    Model orchestration engine for Ferrum LLM inference

    v0.6.0 #llama #inference-engine #cuda #ferrum #metal #hugging-face #embedding-model #open-ai-compatible #llm #llm-inference
  112. smooai-smooth-operator

    Smooth Operator — Rust-native AI agent framework with built-in checkpointing, tool system, and LLM client

    v0.9.4 #ai-agent #orchestration #tool-use #llm #agent-orchestration
  113. oasis-sim

    Round-based social simulation with LLM agents (feeds, votes, run_state.json I/O)

    v2.1.0 #simulation #agent #llm #social
  114. flyllm

    unifying LLM backends as an abstraction layer with load balancing

    v0.4.1 #load-balancing #openai #llm #anthropic
  115. pctx

    Generate LLM-ready context from your codebase

    v0.1.3 #artificial-intelligence #ai-agent #llm #ai-context #context #cli-agent
  116. rustia-llm

    Rustia-powered LLM tool-calling adapter for aisdk

    v0.1.1 #adapter #aisdk #llm #rustia #input #tool-calling #coercion #parse-time
  117. laminae-cortex

    Self-improving learning loop for LLM applications — tracks user edits, extracts preferences, builds reusable instructions

    v0.4.2 #artificial-intelligence #edit #sdk #track #llm #preferences #laminae #user-preferences
  118. llm-kit-openai-compatible

    OpenAI-compatible provider implementation for the LLM Kit - supports OpenAI, Azure OpenAI, and compatible APIs

    v0.1.1 #openai #llm #azure
  119. llm-tui-rs

    Terminal UI for LLM chat with multi-provider support (Ollama, Claude, Bedrock)

    v20260324.0.1 #artificial-intelligence #vim #multi-provider #claude #ollama #conversation #llm #bedrock #tool-execution #authentication
  120. llm-voice-bridge

    Lightweight pipeline: text → LLM → VOICEVOX → WAV audio

    v0.2.0 #llm #voicevox #anthropic #openai #api-bindings
  121. ucp-llm

    LLM-focused utilities for the Unified Content Protocol

    v0.1.18 #token #ucp #llm #context
  122. yggdra

    Airgapped agentic TUI for local LLM inference with tool execution

    v0.2.1 #tui #ollama #llm #agent
  123. inference-lab

    High-performance LLM inference simulator for analyzing serving systems

    v0.6.2 #inference #llm-inference #simulation #llm #performance
  124. tirea-agent-loop

    LLM inference engine, tool dispatch, and streaming execution loop for tirea

    v0.3.0 230 #tool-calling #llm-inference #llm
  125. struct-llm

    Lightweight, WASM-compatible library for structured LLM outputs using tool-based approach

    v0.2.1 #openai #llm #wasm #anthropic #api-bindings
  126. mistralrs-paged-attn

    Fast, flexible LLM inference

    v0.8.1 3.3K #inference #llm-inference #llm #transformer #machine-learning
  127. aof-llm

    Multi-provider LLM abstraction layer

    v0.4.0-beta #devops #ai-agents #llm #kubernetes
  128. brainos-cortex

    LLM provider abstraction, context assembly, and action dispatch for Brain OS

    v0.1.0 #mcp #llm #local-first #memory
  129. chatpack-cli

    CLI tool for parsing and converting chat exports into LLM-friendly formats

    v0.1.0 #whatsapp #instagram #telegram #llm
  130. rig-cat

    LLM agent framework built on comp-cat-rs: typed effects, no async, categorical foundations

    v0.1.2 #llm-agent #llm #effect #category-theory #ai-agent
  131. roboticus-llm

    LLM client pipeline with circuit breaker, ML model router, semantic cache, and multi-format translation

    v0.11.4 #ai-agent #llm #run-time #agent
  132. limit-llm

    Multi-provider LLM client for Rust with streaming support. Supports Anthropic Claude, OpenAI, and z.ai.

    v0.0.46 #claude #llm #openai #api-bindings
  133. sgr-agent

    SGR LLM client + agent framework — structured output, function calling, agent loop, 3 agent variants

    v0.7.7 #llm-function-calling #structured-output #sgr #llm #gemini #function-calling
  134. llmtrace

    Transparent proxy server for LLM API calls

    v0.2.0 #openai #transparent-proxy #observability #prompt-injection #open-ai-compatible #api-security #real-time #pii #llm #server-api
  135. prompty

    asset class and format for LLM prompts

    v2.0.0-alpha.10 #ai-agent #llm #prompt #agent #llm-prompt
  136. sqz-wasm

    Browser WASM build of sqz — LLM context compression for browser extensions

    v0.2.0 #artificial-intelligence #browser #llm #compression #wasm
  137. schoolmarm

    GBNF grammar-constrained decoding for LLM inference, ported from llama.cpp

    v0.1.1 #inference #llm #grammar #gbnf #sampling
  138. tersify

    Universal LLM context compressor — pipe anything, get token-optimized output

    v0.5.0 #llm-context #token #compression #llm
  139. ask_llm

Request whatever LLM is best these days, without hardcoding model/provider

    v2.2.2 #model-provider #llm #request #conversation #best #oneshot #medium
  140. llmux

    Hook-driven LLM model multiplexer with pluggable switch policy

    v2.4.0 #model #active-model #multiplexer #llm #switching #llama #proxy #alive #vllm #gpu
  141. rsmap

    Generate multi-layered, LLM-friendly index files for Rust codebases

    v0.1.1 #codebase #llm #parser #rust #index
  142. a3s-power

    A3S Power — Privacy-preserving LLM inference for TEE environments

    v0.4.2 #gguf #llm-inference #inference #tee #llm #privacy
  143. mistralrs-cli

    Command-line interface for mistral.rs LLM inference

    v0.8.1 170 #inference #llm-inference #transformer #machine-learning #llm
  144. rsrvr

    Store all your LLM Interactions

    v0.2.4 250 #artificial-intelligence #chat-completion #reservoir #openai #graph-database #llm #proxy #import-export #rag
  145. ferrum-scheduler

    Request scheduling for Ferrum LLM inference engine

    v0.5.0 #inference-engine #llama #ferrum #llm #llm-inference #open-ai-compatible #api-compatible #metal #rust-native #text-image
  146. ferrum-types

    Shared type definitions for the Ferrum LLM inference engine

    v0.6.0 #llama #inference-engine #ferrum #open-ai-compatible #llm #llm-inference #metal #embedding #rust-native #api-compatible
  147. backdisco

    Discover backend origins from CDN frontends using LLM-assisted pattern analysis and brute force enumeration

    v0.4.0 #llm #cdn #enumeration #reconnaissance #security
  148. truthlens

    AI hallucination detector — formally verified trust scoring for LLM outputs

    v0.6.0 #llm #trust #hallucination #fact-checking
  149. typia-llm

    Typia-powered LLM tool-calling adapter for aisdk

    v0.1.0 #adapter #typia #aisdk #llm #input #tool-calling #coercion
  150. mojentic

    An LLM integration framework for Rust

    v1.2.0 #openai #ai-agents #ollama #llm #api-bindings
  151. noether-grid-protocol

    RESEARCH — shared serde types for noether-grid (intra-company LLM pooling)

    v0.8.2 #pipeline #workflow #composition #llm-agent #llm
  152. cosmoflow

    type-safe workflow engine for Rust, inspired by PocketFlow and optimized for LLM applications

    v0.5.1 290 #llm #cosmoai #workflow
  153. memvid-ask-model

    LLM inference module for Memvid Q&A with local and cloud model support

    v2.0.139 #artificial-intelligence #inference #rag #llm-inference #memvid #llm
  154. bare-metal-kernels

    Metal GPU kernels for LLM inference on Apple Silicon — 85+ optimized compute shaders

    v0.7.1 #inference #apple-silicon #metal #gpu #llm #llm-inference
  155. neith

    Graph-based context orchestrator for LLM agent conversations

    v0.1.0 #graph #tui #llm-context #agent #llm
  156. talu

    Safe, idiomatic Rust SDK for talu LLM inference

    v0.0.1-post.202602141835 #inference #llm #llm-inference #api-bindings
  157. onetool

    Sandboxed Lua REPL for LLM tool use

    v0.0.1-alpha.10 #repl #lua #sandboxed #llm #run-time #mlua #round-trip
  158. openinference-semantic-conventions

    OpenInference semantic conventions for LLM observability in Rust

    v0.1.1 1.9K #open-telemetry #observability #llm #tracing
  159. web2llm

    Fetch web pages and convert to clean Markdown for LLM pipelines

    v0.4.0 #web-scraping #llm #rag #web #markdown
  160. golem-ai-llm

    working with LLM APIs on Golem Cloud

    v0.5.0 #golem #llm #api #provider #cloud #web-search #video #search-api
  161. ferrum-testkit

    Testing utilities for Ferrum LLM inference engine

    v0.5.0 #inference-engine #llama #ferrum #llm #testing #llm-inference #open-ai-compatible #api-compatible #embedding #metal
  162. llm-pipeline

    Reusable node payloads for LLM workflows: prompt templating, Ollama calls, defensive parsing, streaming, and sequential chaining

    v0.1.0 #ollama #payload #llm #langgraph
  163. swiftide-agents

    Fast, streaming indexing, query, and agentic LLM applications in Rust

    v0.32.1 160 #artificial-intelligence #rag #llm #openai
  164. swiftide-langfuse

    Fast, streaming indexing, query, and agentic LLM applications in Rust

    v0.32.1 #artificial-intelligence #llm #openai #rag
  165. moesniper

    Escape-proof precision file editor for LLM agents. Hex-encoded content, line-range splicing, atomic writes.

    v0.5.0 #editor #llm-agent #llm #hex #code
  166. llm-quota

    CLI tool to inspect and report LLM usage quota information

    v0.1.0 #claude #oauth #codex #json #quota #llm #authentication #json-output #summary
  167. llmy

    All-in-one LLM utilities

    v0.5.7 #tokenize #clap #llm #model #utilities #billing #all-in-one #in-memory #agent-tool #debugging
  168. cognate-llm

    A modular, extensible LLM framework for Rust with multi-provider support, type-safe tools, and RAG capabilities

    v0.1.1 #anthropic #openai #llm
  169. nous-judge

    Async LLM-as-judge evaluators for Nous — plan quality, adherence, task completion

    v0.3.0 #testing #async #nous #agent-os #networking #evaluators #monorepo #payment #llm #adherence
  170. alchemy-llm

    Unified LLM API abstraction layer supporting 10+ providers through a consistent streaming interface

    v0.2.0 #llm #anthropic #openai
  171. llm-cost-dashboard

    Real-time terminal dashboard for LLM token spend - cost/request, projected monthly bills, per-model breakdown

    v1.0.2 #ratatui #dashboard #llm #cost #tui
  172. vloom

    Fast, privacy-focused CLI for recording windows and generating LLM-optimized videos

    v0.1.0 #screen-recording #video #cli #llm #macos
  173. shopify-approver-rig-agent

    RIG-based agentic workflow for LLM orchestration with GLM/Claude

    v0.1.0 #shopify #rig #ai-agent #llm-agent #llm
  174. llm-orchestrator-audit

    Tamper-proof audit logging system for LLM workflows with hash chain integrity

    v0.1.1 #workflow #audit-logging #event-logging #ip-address #user-agent #retention #llm #audit-logs #hash #user-id
  175. astrid-llm

    LLM provider abstraction with streaming support for Astrid

    v0.1.1 #artificial-intelligence #claude #llm #provider #astrid #openai #open-ai-compatible #api-compatible #lm
  176. kotoba-llm

    Unified multi-vendor LLM client abstraction, supporting providers such as OpenAI, Anthropic, Google Gemini, etc

    v0.2.0 #anthropic #openai #llm #gemini #api-bindings
  177. agent-orchestrator-sdk

    Rust SDK for orchestrating LLM-powered agents, shared task execution, and teammate coordination

    v0.1.1 #multi-agent #orchestration #llm #ai-agent
  178. llm-extract

    Extract structured data from LLM responses — fence strip, JSON repair, fuzzy repair, typed deserialization

    v0.1.0 #repair #serde-json #llm #extract #extract-json
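The "fence strip" step that extraction crates like this one perform is simple enough to sketch. A hedged, stdlib-only approximation (not llm-extract's actual implementation, which also repairs malformed JSON):

```rust
// Strip a surrounding markdown code fence from an LLM response, if present.
// Illustrative approximation only; real crates also handle JSON repair.
fn strip_fence(s: &str) -> &str {
    let t = s.trim();
    if let Some(rest) = t.strip_prefix("```") {
        // Drop an optional language tag on the opening fence line (e.g. ```json).
        let body = match rest.find('\n') {
            Some(i) => &rest[i + 1..],
            None => rest,
        };
        // Remove the closing fence; fall back to the original if unbalanced.
        body.strip_suffix("```").map(str::trim).unwrap_or(t)
    } else {
        t
    }
}

fn main() {
    let raw = "```json\n{\"ok\": true}\n```";
    println!("{}", strip_fence(raw));
}
```

Models frequently wrap JSON answers in fences even when asked not to, so this normalization step precedes any parse attempt.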
  179. llm-kit-anthropic

    Anthropic provider for LLM Kit - Complete Claude integration with streaming, tools, thinking, and citations

    v0.1.0 #claude #sdk #llm
  180. mistralrs-server-core

    Fast, flexible LLM inference

    v0.8.1 190 #inference #llm-inference #llm #transformer #machine-learning
  181. cargo-prompt

Recursively minify and concatenate source code into a markdown document for LLM prompting

    v0.1.7 800 #prompting #markdown #concatenation #llm #cargo #c-sharp #javascript #java #development-tools #lua
  182. token-count

    Count tokens for LLM models using exact tokenization

    v0.4.0 #tokenize #llm #gpt #cli #tokenizer
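Exact counts, as token-count produces, require the model's actual tokenizer (tiktoken, Hugging Face tokenizers). When none is available, a widely used fallback heuristic is roughly four characters per token for English prose. An illustrative stdlib sketch of that estimate, explicitly not token-count's method:

```rust
// Rough token estimate: ~4 characters per token for English text.
// Heuristic only; exact counts need the model's tokenizer.
fn estimate_tokens(text: &str) -> usize {
    // Ceiling division so short non-empty strings count as at least one token.
    (text.chars().count() + 3) / 4
}

fn main() {
    let prompt = "Summarize the following document in three bullet points.";
    println!("~{} tokens", estimate_tokens(prompt));
}
```

The heuristic drifts badly on code, non-English text, and long identifiers, which is exactly why exact-tokenization tools exist.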
  183. meerkat-client

    LLM provider abstraction for Meerkat

    v0.5.2 600 #artificial-intelligence #meerkat #llm #ai-agent
  184. tibet-oomllama

    OomLlama — Sovereign LLM runtime with .oom format, Q2/Q4/Q8 quantization, and lazy-loading inference

    v0.1.0 #oom #quantization #llm #inference #oomllama #llm-inference
  185. attuned-core

    Core types and traits for Attuned - human state representation for LLM systems

    v1.0.1 #llm #ai-agent #state #context #api-bindings
  186. infernum-server

    HTTP API server for local LLM inference

    v0.2.0-rc.2 #web-api #inference #api-model #chat-completion #prometheus #rag #health-check #llm #llm-inference #infernum
  187. menta

    Minimal Rust library for non-UI LLM and AI primitives

    v0.0.5 #llm #embedding #openai #tool
  188. blazen-llm

    LLM provider abstraction layer for the Blazen workflow engine

    v0.1.150 #json-schema #blazen #model-provider #workflow-engine #openai #llm #anthropic #fal-ai #agentic #azure
  189. llm_providers

    A unified source of truth for LLM providers, models, pricing, and capabilities

    v0.8.2 240 #llm #pricing #provider #model #api-bindings
  190. charter

    Fast structural context generator for Rust codebases, optimized for LLM consumption

    v0.1.3 #llm-context #parser #llm #ast #rust
  191. mistralrs-audio

    Fast, flexible LLM inference

    v0.8.1 10K #inference #llm-inference #llm #transformer #machine-learning
  192. liter-llm-proxy

    OpenAI-compatible LLM proxy server — model routing, virtual keys, rate limiting, cost tracking

    v1.2.2 #openai #gateway #llm #kreuzberg #proxy
  193. llm-orchestrator-state

    State persistence and recovery for LLM workflow orchestrator

    v0.1.1 #workflow #postgresql #checkpoint #cleanup #llm
  194. valta-core

    Core JSON repair and validation library for LLM outputs

    v0.2.0 #json-schema #repair #llm #schema-validation
  195. fuzzy-parser

    Fuzzy JSON repair for LLM-generated DSL

    v0.1.0 210 #repair #llm #json #parser #fuzzy
  196. sema-llm

    LLM provider integrations (Anthropic, OpenAI) for the Sema programming language

    v1.13.0 #artificial-intelligence #openai #llm #anthropic #sema #pricing #ollama #open-ai-compatible #gemini #lisp
  197. self-llm

    Unified chat API for multiple LLM providers

    v0.1.7 #unified #chat-request #openai #llm #anthropic
  198. llmg-core

    Core types and traits for LLMG - LLM Gateway

    v0.4.0 #gateway #rig #llm #openai #api-bindings
  199. gamecode-mcp2

    Minimal, auditable Model Context Protocol server for safe LLM-to-system interaction

    v0.7.0 420 #llm #auditable #mcp #minimal #security
  200. tokemon

    Unified LLM token usage tracking across all providers

    v0.2.4 #monitoring #llm #usage #token
  201. llm-cost-ops

    Core library for cost operations on LLM deployments

    v0.1.1 #llm #monitoring #operation #cost
  202. mesh-llm-client

    Low-level Rust client implementation for Mesh LLM embedded integrations

    v0.65.0-rc2 #inference #llm #mesh-client #llm-client #mesh
  203. nb-mcp-server

    MCP server wrapping the nb CLI for LLM-friendly note-taking

    v0.9.0 #nb #mcp #llm #notes
  204. llm-kit-azure

    Azure OpenAI provider for LLM Kit

    v0.1.0 #openai #llm #azure
  205. turbine-llm

    Unified Rust interface for multiple LLM providers with growing model support

    v0.2.2 #anthropic #groq #llm #gemini #openai
  206. onde-mistralrs

    Fast, flexible LLM inference

    v0.8.2 #inference #llm-inference #llm #transformer #machine-learning
  207. mirror

    unifying multiple LLM backends

    v0.4.1 120 #artificial-intelligence #chat-completion #json-schema #claude #text-to-speech #openai #llm #eleven-labs #multi-step #ollama
  208. sugars_llm

    LLM integration and AI agent builder utilities

    v0.5.4 #llm #cyrup #rust #builder
  209. llm-text

    processing text for LLM consumption

    v0.1.0 #nlp #llm #text-extraction #html #text-html
  210. deemuk

    Compress any text before it enters your LLM. Fewer tokens, same meaning.

    v1.0.0 #compression #llm #nlp
  211. llm-latency-lens-providers

    Provider adapters for LLM Latency Lens

    v0.1.2 #openai #llm #anthropic #provider
  212. ferrum-cli

    CLI for Ferrum — a Rust-native LLM inference engine

    v0.5.0 #inference-engine #llama #llm #llm-inference #rust-native #open-ai-compatible #hugging-face #text-image #metal #embedding
  213. npcrs

    Rust core for the NPC system — agent kernel, jinx executor, LLM client

    v0.1.2 #npc #llm #shell #ai-agent
  214. agent-io

    SDK for building AI agents with multi-provider LLM support

    v0.3.2 #artificial-intelligence #multi-provider #sdk #llm
  215. armyknife-llm-redteam

    LLM red-teaming security scanner — nmap for LLMs

    v1.4.0 #mcp #llm #redteam #ai-security
  216. mecha10-nodes-llm-command

    Natural language command parsing via LLM APIs (OpenAI, Claude, Ollama)

    v0.1.39 #artificial-intelligence #command-parser #openai #node #llm #claude #motor #mecha10 #ollama #nlp
  217. onde-mistralrs-quant

    Fast, flexible LLM inference

    v0.8.2 #inference #llm-inference #llm #transformer #machine-learning
  218. pmetal-models

    LLM model architectures for PMetal

    v0.4.0 #apple-silicon #fine-tuning #llm #machine-learning #mlx
  219. llmtrace-proxy

    Transparent proxy server for LLM API calls

    v0.1.1 #openai #observability #api-security #prompt-injection #proxy-server #transparent-proxy #open-ai-compatible #real-time #llm #pii
  220. cortexai-providers

    LLM provider integrations for Cortex: OpenRouter, OpenAI, Anthropic and more

    v0.1.0 #openrouter #openai #llm #provider #api-bindings
  221. nexus-orchestrator

    Distributed LLM model serving orchestrator - unified API gateway for heterogeneous inference backends

    v0.4.0 #artificial-intelligence #ollama #openai #llm
  222. llm_client

    easiest Rust interface for local LLMs

    v0.0.7 330 #gguf #llama-cpp #openai #anthropic #llm
  223. llm-kit-xai

    xAI (Grok) provider implementation for the LLM Kit - supports chat, image generation, and agentic tools

    v0.1.0 #llm #xai #grok #provider #api-bindings
  224. ralphloop

    A CLI tool for creating and running Ralphloops with LLM integration

    v0.1.0 #artificial-intelligence #llm #automation
  225. rosetta-aisp-llm

    LLM fallback for AISP conversion using Claude SDK - extends rosetta-aisp with AI-powered conversion

    v0.3.0 #convert #claude #llm #aisp
  226. rake-sandbox

    Secure LLM agent sandbox — mount files, analyse with Claude/OpenAI/Ollama/llama.cpp, WASM-isolated

    v0.1.0 #llm-agent #wasm-sandbox #llm #analysis #wasm
  227. llm-incident-manager

    Enterprise-grade incident management system for LLM operations

    v1.0.1 #devops #observability #incident #llm
  228. neuromance

    controlling and orchestrating LLM interactions

    v0.0.5 #orchestration #llm #openai #api-bindings
  229. jamjet-models

    JamJet model adapter layer — unified interface for LLM providers

    v0.3.1 #mcp #a2a #workflow #llm #agent-workflow
  230. llm-toolkit

    A low-level, unopinionated Rust toolkit for the LLM last mile problem

    v0.63.1 #llm #prompt #json-parser #json #parser
  231. ironclad-llm

    LLM client pipeline with circuit breaker, ML model router, semantic cache, and multi-format translation

    v0.9.7 #ai-agent #llm #run-time #agent
  232. babel

    Provide Rust enums for Groq, SambaNova, Openrouter's llm model names

    v0.0.11 500 #llm #model #enums #groq #openrouter
  233. compression-prompt

    Fast statistical compression for LLM prompts - 50% token reduction with 91% quality retention

    v0.1.2 #llm #prompt #token-reduction #compression #optimization #prompt-optimization
  234. mentedb-extraction

    LLM-powered memory extraction engine for MenteDB

    v0.8.1 330 #artificial-intelligence #database #extract #llm
  235. gatekpr-rig-agent

    RIG-based agentic workflow for LLM orchestration with GLM/Claude

    v0.2.3 #ai-agent #shopify #rig #llm-agent #llm
  236. llm-here-core

    Detection + dispatch logic for LLM CLIs and API providers — the library half of llm-here

    v0.4.0 #llm-agent #llm #detect
  237. infernum-paimon

    LLM Studio - Teaches arts, sciences, and gives good familiars

    v0.2.0-rc.2 #llm #training #experiment #dataset #model #metrics #sciences #arts #teaches #infernum
  238. lsp-llm

    Opt-in LLM advisor for axon-lsp, gated behind the llm Cargo feature. Never on the critical path: deterministic capabilities (diagnostics, hover, completion) work without this crate.

    v0.1.1 #llm #axon #advisor #lsp
  239. llm-relay

    Shared types, format conversion, and HTTP client for Anthropic and OpenAI LLM APIs

    v0.2.1 #openai #anthropic #llm
  240. bare-metal-gguf

    GGUF binary format parser for bare-metal LLM inference — zero-copy mmap, all quantization types

    v0.7.1 #gguf #inference #llm-inference #llm #quantization
  241. fig2json

    CLI tool to convert Figma .fig files to LLM-friendly JSON format

    v0.3.0 #json #figma #llm #converter
  242. llm-kit-core

    Core functionality for the LLM Kit - unified interface for building AI-powered applications

    v0.1.0 #large-language-model #sdk #llm #openai
  243. orchard-rs

    Rust client for Orchard - high-performance LLM inference on Apple Silicon

    v2026.4.2 #apple-silicon #inference #nng #ipc #llm-inference #llm
  244. trimcp

    MCP proxy that reduces LLM token costs by 60–90% through compression and caching

    v0.1.0 #llm-token #mcp #llm #compression #proxy
  246. llm-kit-groq

    Groq provider implementation for the LLM Kit - supports chat and transcription models

    v0.1.0 #groq #llm #provider #api-bindings
  247. saorsa-ai

    Unified multi-provider LLM API

    v0.4.0 #openai #llm #streaming #anthropic
  248. meritocrab-llm

    LLM evaluator trait and implementations for the Meritocrab reputation system

    v0.1.4 #config #meritocrab #evaluator #pr #reputation #llm #contributors #github-actions #credits #github-webhook
  249. nuro-llm

    LLM provider abstractions and implementations for Nuro

    v0.1.0 #agent-sdk #nuro #llm #artificial-intelligence #provider #openai
  250. llm-security

    Comprehensive LLM security layer to prevent prompt injection and manipulation attacks

    v0.1.0 #prompt-injection #llm #gpt #llm-prompt #security
  251. mistralrs-vision

    Fast, flexible LLM inference

    v0.8.1 10K #inference #llm-inference #llm #transformer #machine-learning
  252. onde-mistralrs-paged-attn

    Fast, flexible LLM inference

    v0.8.2 #inference #llm-inference #llm #transformer #machine-learning
  253. llm-kit-openai

    OpenAI provider implementation for the LLM Kit

    v0.1.0 #llm #openai #chat-completion #kit #builder-pattern #function-calling #client-builder #gpt-4 #tier #tool-calling
  254. lc-cli

    LLM Client - A fast Rust-based LLM CLI tool with provider management and chat sessions

    v0.1.3 #openai #anthropic #llm
  255. llm-kit-huggingface

    Hugging Face provider for LLM Kit

    v0.1.0 #hugging-face #llm #api-bindings
  256. llm-cascade

    Resilient cascading LLM inference with automatic failover across multiple providers

    v0.1.0 #openai #cascade #llm #failover #anthropic
  257. llm-orchestrator-secrets

    Secret management for LLM Orchestrator with Vault, AWS Secrets Manager, and environment variable support

    v0.1.1 #secrets-manager #aws-secret-manager #secret-management #secret-store #vault #llm #cache #version-manager #secret-version #hashi-corp-vault
  258. rig-openapi-tools

    Turn any OpenAPI spec into LLM-callable tools for rig

    v0.1.5 #rig #llm #agent #tool #openapi #agent-tool
  259. dkdc-lm-cli

    CLI for dkdc-lm: local LLM inference management

    v0.2.1 #inference #cli #local #dkdc #llm #llm-inference
  260. astmap

    Code structure index with transitive impact analysis for LLM coding tools

    v0.0.2 #impact-analysis #transitive #index #coding-tool #llm
  261. legalis-llm

    LLM integration layer for Legalis-RS

    v0.1.5 #artificial-intelligence #model-name #llm #law #document #model-provider #legalis-rs #generate-text #mocking #nlp
  262. bare-metal-reference

    Numerical validation harness for bare-metal LLM inference kernels

    v0.7.1 #inference #llm-inference #validation #llm #testing
  263. ferrum-sampler

    Sampling strategies for Ferrum LLM inference engine

    v0.5.0 #inference-engine #llama #ferrum #sampling-strategies #llm #llm-inference #open-ai-compatible #api-compatible #metal #rust-native
  264. vex-llm

    LLM provider integrations for VEX

    v1.7.0 #artificial-intelligence #ai-agents #llm #openai #ollama
  265. llm-daemon

    LLM as a daemon

    v0.7.0 1.9K #llm #daemon #server
  266. serde_mask

    Mask sensitive data during serde serialization for LLM ingestion

    v0.1.2 #serde #llm #secret #mask #anonymize
  267. llm-registry-core

    Core domain types and models for the LLM Registry - A secure, production-ready registry for Large Language Models

    v0.1.0 #llm #ml #model #registry
  268. prompt-sentinel

    A high-performance CLI tool for LLM prompt regression testing

    v0.1.2 #llm #prompt-engineering #testing #cli-prompt #cli