lmkit
One config. Every major AI provider.
中文 | English
A unified Rust client for OpenAI, Anthropic, Google Gemini, Aliyun, Ollama, and Zhipu — built around a single trait and factory pattern. Switch providers by changing one config. Your business logic stays untouched.
Why use lmkit
- 🔌 Unified interface — ChatProvider, EmbedProvider, and friends abstract away provider differences; your code never talks to raw HTTP
- 🔀 One-line switching — swap the ProviderConfig to move from OpenAI to Aliyun or a local Ollama, with zero other changes
- 📦 Compile only what you need — providers and modalities are Cargo features; unused ones add zero dependencies
- 🌊 Streaming + tool calls — native SSE streaming; the ChatEvent enum carries text deltas, tool-call deltas, and the finish reason as distinct variants
- 🔍 Precise errors — ProviderDisabled / Unsupported / Api tell you exactly what went wrong and where (see the sketch after this list)
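As a rough sketch of what handling those errors could look like when constructing a provider — only the variant names come from the list above; the error type path (lmkit::Error here) and the variant payloads are assumptions, so check the API reference for the real shape:

```rust
use lmkit::{create_chat_provider, ProviderConfig};

// Hypothetical sketch: `lmkit::Error` and the variant payloads are assumed,
// not taken from the crate docs; only the variant names appear above.
fn build(cfg: &ProviderConfig) {
    match create_chat_provider(cfg) {
        Ok(_chat) => println!("provider ready"),
        Err(lmkit::Error::ProviderDisabled(p)) => {
            eprintln!("provider {p:?} was not enabled as a Cargo feature")
        }
        Err(lmkit::Error::Unsupported(what)) => {
            eprintln!("this provider does not support {what}")
        }
        Err(lmkit::Error::Api(e)) => eprintln!("upstream API error: {e}"),
        Err(other) => eprintln!("other error: {other}"),
    }
}
```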
Quick Start
Add the dependency
```toml
[dependencies]
lmkit = { version = "0.1", features = ["openai", "chat", "embed"] }
tokio = { version = "1", features = ["rt-multi-thread", "macros"] }
```
The defaults already include openai + chat + embed. Mix and match features as needed:
```toml
# Aliyun + multi-turn chat + embeddings + reranking
lmkit = { version = "0.1", features = ["aliyun", "chat", "embed", "rerank"] }
```
Send a message
```rust
use lmkit::{create_chat_provider, ChatRequest, Provider, ProviderConfig};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let cfg = ProviderConfig::new(
        Provider::OpenAI,
        std::env::var("OPENAI_API_KEY")?,
        "gpt-4o-mini",
    );
    let chat = create_chat_provider(&cfg)?;
    let out = chat
        .complete(&ChatRequest::single_user("Explain Rust in one sentence."))
        .await?;
    println!("{}", out.content.unwrap_or_default());
    Ok(())
}
```
Stream the response
```rust
use futures::StreamExt;
use lmkit::{create_chat_provider, ChatEvent, ChatRequest, Provider, ProviderConfig};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let cfg = ProviderConfig::new(
        Provider::OpenAI,
        std::env::var("OPENAI_API_KEY")?,
        "gpt-4o-mini",
    );
    let chat = create_chat_provider(&cfg)?;
    let mut stream = chat
        .complete_stream(&ChatRequest::single_user("Tell me a joke."))
        .await?;
    while let Some(event) = stream.next().await {
        match event? {
            ChatEvent::Delta(text) => print!("{text}"),
            ChatEvent::ToolCallDelta(deltas) => eprintln!("\n[tool calls: {deltas:?}]"),
            ChatEvent::Finish(reason) => eprintln!("\n[finish: {reason:?}]"),
        }
    }
    println!();
    Ok(())
}
```
Switch providers
Change Provider::OpenAI to your target and update the API key — built-in providers have default base_url values:
```rust
// Aliyun Qwen
let cfg = ProviderConfig::new(
    Provider::Aliyun,
    std::env::var("DASHSCOPE_API_KEY")?,
    "qwen-turbo",
);

// Local Ollama (no key required)
let cfg = ProviderConfig::new(
    Provider::Ollama,
    String::new(),
    "llama3",
);
```
Use ProviderConfig::with_base_url when you need a proxy, private gateway, regional endpoint, or modality-specific path such as Aliyun native image generation.
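For example, pointing a config at a private gateway might look like the following minimal sketch — with_base_url is assumed here to take the endpoint URL as a string in builder style, and the gateway URL is a placeholder:

```rust
// Hypothetical sketch: assumes `with_base_url` consumes the config and returns it
// builder-style with the endpoint overridden; the URL below is made up.
let cfg = ProviderConfig::new(
    Provider::OpenAI,
    std::env::var("OPENAI_API_KEY")?,
    "gpt-4o-mini",
)
.with_base_url("https://llm-gateway.internal.example/v1");
```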
Provider & Capability Matrix
| Provider | Chat | Embed | Rerank | Image |
|---|---|---|---|---|
| OpenAI | ✅ | ✅ | — | ✅ |
| Anthropic | ✅ | — | — | — |
| Google Gemini | ✅ | ✅ | — | — |
| Aliyun DashScope | ✅ | ✅ | ✅ | ✅ |
| Ollama | ✅ | ✅ | — | — |
| Zhipu | ✅ | ✅ | ✅ | — |
Primary chat API: complete (returns the full response at once) and complete_stream (SSE streaming). chat / chat_stream are single-turn convenience wrappers.
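As a rough illustration of the single-turn wrappers — assuming chat takes the prompt as a string and resolves to the assistant's text; the actual signature may differ:

```rust
// Hypothetical sketch: assumes `chat` accepts a prompt string and returns the
// reply text directly, without building a ChatRequest by hand.
let reply = chat.chat("Summarize Rust's ownership model in one line.").await?;
println!("{reply}");
```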
Documentation
- 📖 Usage Guide — getting started, features, provider config, error handling
- 🔧 API Reference — Rust traits, factory functions, type definitions
- 🌐 HTTP Endpoints — per-provider request / response shapes
- 🏗️ Design Guidelines — architecture and extension principles
- 🤝 Contributing — how to add providers or modalities
License