`htop` for your LLM spend — proxy-only.
TokLog is a local-first HTTP proxy for LLM spend visibility and control.
Route OpenAI-, Anthropic-, and Gemini-compatible traffic through a local proxy. TokLog logs usage locally, attributes cost by model/provider/program/tag, and turns raw traffic into actionable waste reports.
No hosted backend. No account. No prompt egress by default.
```
pip install toklog
tl proxy setup
tl proxy start --background
```

After setup, clients that support base URL overrides can route through TokLog with no app-specific SDK integration.
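For example, the OpenAI Python SDK honors a base URL override via the `OPENAI_BASE_URL` environment variable. The sketch below assumes the proxy listens on `127.0.0.1:8787` — that port is an assumption, not TokLog's documented default; use whatever `tl proxy status` reports.

```python
import os

# Assumed address: point any OPENAI_BASE_URL-aware client at the local proxy.
# The port (8787) and "/v1" path are illustrative, not TokLog's defaults.
os.environ["OPENAI_BASE_URL"] = "http://127.0.0.1:8787/v1"

# Clients that read this variable (or accept a base_url argument) now send
# their traffic through the proxy with no code changes to the application.
print(os.environ["OPENAI_BASE_URL"])
```

The same pattern works for any SDK or tool that exposes a base URL setting, which is why no app-specific integration is needed.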
- Proxy-based capture — intercepts LLM traffic at the HTTP layer
- Cross-language — works with Python, TypeScript, Go, curl, and anything else that can point at a base URL
- Cross-provider — OpenAI, Anthropic, Gemini
- Local logs — normalized JSONL logs under `~/.toklog/logs/`
- Spend reports — model, provider, endpoint, program, and tag breakdowns
- Waste detection — highlights expensive patterns worth fixing first
- Shareable output — terminal and exported reports
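Because the logs are plain JSONL, they are easy to post-process yourself. A minimal sketch, assuming each line carries `model` and `cost_usd` fields — those field names are illustrative, not TokLog's documented schema:

```python
import json
from collections import defaultdict

# Sample lines standing in for ~/.toklog/logs/ content; the schema
# ("model", "cost_usd") is an assumption for illustration only.
sample_lines = [
    '{"model": "gpt-4o", "cost_usd": 0.12}',
    '{"model": "claude-3-5-sonnet", "cost_usd": 0.30}',
    '{"model": "gpt-4o", "cost_usd": 0.08}',
]

# Aggregate spend per model, one JSON object per line.
spend = defaultdict(float)
for line in sample_lines:
    record = json.loads(line)
    spend[record["model"]] += record["cost_usd"]

print(dict(spend))
```

Adapt the field names to whatever your log lines actually contain.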
Full spend breakdown — models, processes, context composition, waste detectors.
```
tl report              # last 7 days
tl report --last 30d
```

Cumulative savings opportunities — shows how much waste each detector has found since install.
```
tl gain
```

Health check — verifies config, proxy, env vars, logging, and traffic.
```
tl doctor
```

Generate a self-contained HTML report you can share — no server needed.
```
tl share         # save to ~/.toklog/reports/
tl share --open  # save and open in browser
```

Check if the proxy daemon is running and where it's listening.

```
tl proxy status
```

```
tl proxy setup   # interactive setup wizard
tl proxy start --background
tl proxy stop
tl tail          # live stream of logged calls
tl categories    # list detected call categories
tl pricing       # show model pricing table
tl reset         # clear all logs and config
```

Set a daily spend limit. When the budget is exceeded, the proxy returns HTTP 429 immediately — no upstream request is made.
CLI flag (takes precedence over config):

```
tl proxy start --budget 10.00
```

Or set it in `~/.toklog/config.json`:
```
{
  "proxy": {
    "budget_usd": 10.00
  }
}
```

Requests get a 429 Too Many Requests response with a JSON error body:
```
{
  "error": {
    "message": "Daily budget of $10.00 exceeded",
    "type": "budget_exceeded"
  }
}
```

No tokens are consumed. No upstream contact is made.
```
tl proxy status  # shows current spend vs budget limit
tl report        # includes budget bar and rejection warnings
```

- Resets at midnight local time
- No budget configured = no enforcement (fully backward compatible)
- GET requests (e.g. `/v1/models`) are never blocked
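One way to picture the midnight reset is to bucket spend by local calendar date, so crossing midnight starts a fresh total. A minimal sketch (purely illustrative, not TokLog's implementation):

```python
from datetime import datetime, timedelta

# One spend bucket per local calendar day; a new date key means the
# accumulated total starts from zero again.
def budget_window_key(now: datetime) -> str:
    return now.date().isoformat()

before = datetime(2024, 6, 1, 23, 59)
after = before + timedelta(minutes=2)  # crosses local midnight
print(budget_window_key(before), budget_window_key(after))
```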
TBFUL-1.0 — free for non-commercial and small-scale use. Commercial license required above $10k annual LLM spend.