# Gemini Crate
A robust Rust client library for Google's Gemini AI API with built-in error handling, retry logic, and comprehensive model support.
## Features
- 🚀 Simple API - Easy-to-use client for Gemini AI models
- 🔄 Automatic Retries - Built-in exponential backoff for network reliability
- 🌐 Starlink Optimized - Designed for satellite internet connections with dropout handling
- 📦 Model Discovery - List and discover available Gemini models
- 🛡️ Comprehensive Error Handling - Detailed error types for robust applications
- ⚡ Async/Await Support - Fully asynchronous with Tokio
- 🔧 Configurable - Flexible configuration options
## Quick Start
### 1. Add to your project
```toml
[dependencies]
gemini_crate = "0.1.0"
tokio = { version = "1.0", features = ["full"] }
dotenvy = "0.15"
```
### 2. Set up your API key

Create a `.env` file in your project root:

```env
GEMINI_API_KEY=your_gemini_api_key_here
```

Get your API key from Google AI Studio.
### 3. Basic usage
```rust
use gemini_crate::client::GeminiClient;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Load environment variables
    dotenvy::dotenv().ok();

    // Create client
    let client = GeminiClient::new()?;

    // Generate text
    let response = client
        .generate_text("gemini-2.5-flash", "What is the capital of France?")
        .await?;

    println!("Response: {}", response);
    Ok(())
}
```
## Usage Examples
### List Available Models
```rust
use gemini_crate::client::GeminiClient;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    dotenvy::dotenv().ok();
    let client = GeminiClient::new()?;

    let models = client.list_models().await?;
    for model in models.models {
        println!("- {} ({})", model.name, model.display_name);
        println!("  Methods: {:?}", model.supported_generation_methods);
    }
    Ok(())
}
```
### Error Handling
```rust
use gemini_crate::{client::GeminiClient, errors::Error};

#[tokio::main]
async fn main() {
    dotenvy::dotenv().ok();

    let client = match GeminiClient::new() {
        Ok(c) => c,
        Err(Error::Config(msg)) => {
            eprintln!("Configuration error: {}", msg);
            eprintln!("Make sure GEMINI_API_KEY is set in your .env file");
            return;
        }
        Err(e) => {
            eprintln!("Failed to create client: {}", e);
            return;
        }
    };

    match client.generate_text("gemini-2.5-flash", "Hello!").await {
        Ok(response) => println!("Success: {}", response),
        Err(Error::Network(e)) => eprintln!("Network error: {}", e),
        Err(Error::Api(msg)) => eprintln!("API error: {}", msg),
        Err(e) => eprintln!("Other error: {}", e),
    }
}
```
### Batch Processing
```rust
use gemini_crate::client::GeminiClient;
use futures::future::try_join_all;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    dotenvy::dotenv().ok();
    let client = GeminiClient::new()?;

    let questions = vec![
        "What is the capital of Japan?",
        "Explain photosynthesis briefly",
        "What's the largest planet?",
    ];

    // Run all requests concurrently; fail fast on the first error
    let tasks = questions.into_iter().map(|question| {
        client.generate_text("gemini-2.5-flash", question)
    });
    let responses = try_join_all(tasks).await?;

    for (i, response) in responses.iter().enumerate() {
        println!("Response {}: {}", i + 1, response);
    }
    Ok(())
}
```
## Available Models
The library supports all current Gemini models:
| Model | Best For | Speed | Context |
|---|---|---|---|
| `gemini-2.5-flash` | General tasks | Fast | 1M tokens |
| `gemini-2.5-pro` | Complex reasoning | Medium | 2M tokens |
| `gemini-flash-latest` | Latest features | Fast | Variable |
| `gemini-pro-latest` | Latest pro features | Medium | Variable |
Use `client.list_models()` to see all available models and their capabilities.
## Examples
Run the included examples:
```sh
# Interactive chat
cargo run --example simple_chat

# List all models
cargo run --example list_models

# Batch processing demo
cargo run --example batch_processing
```
## Error Types
The library provides comprehensive error handling:
- `Error::Network` - Network connectivity issues
- `Error::Api` - Gemini API errors (rate limits, invalid requests)
- `Error::Json` - Response parsing errors
- `Error::Config` - Configuration issues (missing API key)
## Best Practices
### 1. Environment Setup
```env
# .env file
GEMINI_API_KEY=your_api_key_here
RUST_LOG=info  # Optional: for debugging
```
### 2. Rate Limiting
```rust
use std::time::Duration;
use tokio::time::sleep;

// Add delays between requests
for prompt in prompts {
    let response = client.generate_text("gemini-2.5-flash", prompt).await?;
    println!("{}", response);
    sleep(Duration::from_millis(500)).await; // Be nice to the API
}
```
### 3. Model Selection
```rust
// For quick responses
let model = "gemini-2.5-flash";

// For complex reasoning
let model = "gemini-2.5-pro";

// For latest features
let model = "gemini-flash-latest";
```
## Network Reliability
The library is designed for unreliable connections (like Starlink):
- ✅ Automatic retry with exponential backoff
- ✅ Transient error detection
- ✅ Timeout handling
- ✅ Network dropout recovery
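Exponential backoff means each retry waits roughly twice as long as the previous one, up to a cap. A minimal sketch of how such delays could be computed (the constants and the `backoff_delay` helper are illustrative; the crate's actual retry parameters are internal):

```rust
use std::time::Duration;

/// Base delay doubled per attempt, capped at `max`.
fn backoff_delay(attempt: u32, base: Duration, max: Duration) -> Duration {
    base.saturating_mul(2u32.saturating_pow(attempt)).min(max)
}

fn main() {
    let base = Duration::from_millis(250);
    let max = Duration::from_secs(8);
    // attempt 0: 250ms, 1: 500ms, 2: 1s, 3: 2s, 4: 4s, 5: 8s (capped)
    for attempt in 0..6 {
        println!("attempt {attempt}: wait {:?}", backoff_delay(attempt, base, max));
    }
}
```

Production implementations usually also add random jitter to each delay so that many clients retrying at once do not synchronize.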
## Configuration
### Environment Variables
- `GEMINI_API_KEY` (required) - Your Gemini API key
### Custom Configuration
```rust
use gemini_crate::{client::GeminiClient, config::Config};

let config = Config::from_api_key("your_api_key".to_string());
let client = GeminiClient::with_config(config);
```
## Documentation
- Full Usage Guide - Comprehensive examples and patterns
- API Documentation - Complete API reference
- Examples - Ready-to-run example applications
## Requirements
- Rust 2024 edition
- Tokio async runtime
- Valid Google Gemini API key
## Contributing
- Fork the repository
- Create a feature branch
- Add tests for new functionality
- Ensure all tests pass: `cargo test`
- Run clippy: `cargo clippy`
- Submit a pull request
## License
Licensed under either of:
- Apache License, Version 2.0 (LICENSE-APACHE)
- MIT License (LICENSE-MIT)
at your option.
## Troubleshooting
### Common Issues
"GEMINI_API_KEY must be set"
- Ensure your
.envfile is in the project root - Verify the API key is correct
- Call
dotenvy::dotenv().ok()before creating the client
"Model not found"
- Use
client.list_models()to see available models - Update to current model names (avoid deprecated ones like
gemini-pro)
**Network timeouts**
- The library has built-in retry logic
- For Starlink connections, consider application-level timeouts
- Check internet connectivity
For more help, see the full troubleshooting guide.