TokLog


htop for your LLM spend — proxy-only.

TokLog is a local-first HTTP proxy for LLM spend visibility and control.

Route OpenAI-, Anthropic-, and Gemini-compatible traffic through a local proxy. TokLog logs usage locally, attributes cost by model/provider/program/tag, and turns raw traffic into actionable waste reports.

No hosted backend. No account. No prompt egress by default.


Install

pip install toklog
tl proxy setup
tl proxy start --background

After setup, clients that support base URL overrides can route through TokLog with no app-specific SDK integration.
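
For example, the OpenAI Python SDK only needs a base_url pointing at the proxy. A minimal sketch; the address below is illustrative, so run tl proxy status to see where your proxy is actually listening.

from openai import OpenAI

# Point the client at the local TokLog proxy instead of api.openai.com.
# The host/port here is an assumption; check `tl proxy status` for the real address.
client = OpenAI(base_url="http://127.0.0.1:8080/v1")

resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "hello"}],
)
print(resp.choices[0].message.content)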


What it does

  • Proxy-based capture — intercepts LLM traffic at the HTTP layer
  • Cross-language — works with Python, TypeScript, Go, curl, and anything else that can point at a base URL
  • Cross-provider — OpenAI, Anthropic, Gemini
  • Local logs — normalized JSONL logs under ~/.toklog/logs/ (see the sketch after this list)
  • Spend reports — model, provider, endpoint, program, and tag breakdowns
  • Waste detection — highlights expensive patterns worth fixing first
  • Shareable output — terminal and exported reports
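
Because the logs are plain JSONL, they are easy to post-process outside the CLI. A minimal sketch in Python that walks the log directory and prints every entry; the fields inside each record follow TokLog's own schema, so inspect a few before filtering on specific keys.

import json
from pathlib import Path

# Walk ~/.toklog/logs/ and print every normalized log entry.
# The *.jsonl extension and the per-entry field names are assumptions;
# check a real log file before relying on specific keys.
log_dir = Path.home() / ".toklog" / "logs"
for log_file in sorted(log_dir.glob("*.jsonl")):
    for line in log_file.read_text().splitlines():
        if line.strip():
            print(json.loads(line))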

Core commands

tl report

Full spend breakdown — models, processes, context composition, waste detectors.

tl report           # last 7 days
tl report --last 30d


tl gain

Cumulative savings opportunities — shows how much waste each detector has found since install.

tl gain


tl doctor

Health check — verifies config, proxy, env vars, logging, and traffic.

tl doctor


tl share

Generate a self-contained HTML report you can share — no server needed.

tl share             # save to ~/.toklog/reports/
tl share --open      # save and open in browser


tl proxy status

Check if the proxy daemon is running and where it's listening.

tl proxy status


Other commands

tl proxy setup          # interactive setup wizard
tl proxy start --background
tl proxy stop
tl tail                 # live stream of logged calls
tl categories           # list detected call categories
tl pricing              # show model pricing table
tl reset                # clear all logs and config

Budget Kill Switch

Set a daily spend limit. When the budget is exceeded, the proxy returns HTTP 429 immediately — no upstream request is made.

Enable it

CLI flag (takes precedence over config):

tl proxy start --budget 10.00

Or set it in ~/.toklog/config.json:

{
  "proxy": {
    "budget_usd": 10.00
  }
}

What happens at the limit

Requests get a 429 Too Many Requests response with a JSON error body:

{
  "error": {
    "message": "Daily budget of $10.00 exceeded",
    "type": "budget_exceeded"
  }
}

No tokens are consumed. No upstream contact is made.
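
Client code sees the rejection as an ordinary 429, so existing rate-limit handling applies. A sketch with the OpenAI Python SDK, which raises RateLimitError on 429 responses; the proxy address is again illustrative.

import openai
from openai import OpenAI

# Illustrative proxy address; see tl proxy status for the real one.
client = OpenAI(base_url="http://127.0.0.1:8080/v1")

try:
    client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "hello"}],
    )
except openai.RateLimitError as err:
    # TokLog answered with 429 and the budget_exceeded body; no tokens were spent.
    print("Blocked by TokLog budget:", err)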

Monitoring

tl proxy status   # shows current spend vs budget limit
tl report         # includes budget bar and rejection warnings

Behavior notes

  • Resets at midnight local time
  • No budget configured = no enforcement (fully backward compatible)
  • GET requests (e.g. /v1/models) are never blocked

License

TBFUL-1.0 — free for non-commercial and small-scale use. Commercial license required above $10k annual LLM spend.
