
Evaluate LLM provider costs and identify free/local options #27

@adeebahmed

Description

Before building multi-model support, understand the full cost picture and ensure a free path exists.

Scope

  • Ollama (local): Completely free. What models run well on typical hardware? (Llama 3, Mistral, Phi-3, Gemma). What are the quality trade-offs vs. hosted models for financial reasoning tasks?
  • Claude (Anthropic): Pricing per 1M tokens for Haiku vs. Sonnet vs. Opus. What's the realistic monthly cost for a personal finance app at typical usage?
  • OpenAI: GPT-4o-mini vs. GPT-4o pricing. Compare to Claude.
  • Free tiers: Any hosted models with free tiers sufficient for personal use? (Groq, Together AI, Google Gemini free tier)
  • Decision output: MVP should default to Ollama (free, local, privacy-preserving). Hosted models as opt-in with API key. Document the recommended default model per provider.
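To make the cost comparison concrete, here is a rough back-of-envelope estimator for the hosted options above. The per-1M-token prices in the table are illustrative placeholders, not current list prices — they should be replaced with figures from each provider's pricing page as part of this research task — and the usage pattern (requests/day, tokens per request) is an assumed example.

```python
# Rough monthly-cost estimator for LLM providers.
# Prices are ILLUSTRATIVE placeholders; verify against each
# provider's current pricing page before drawing conclusions.

PRICES_PER_1M_TOKENS = {  # model: (input USD, output USD) per 1M tokens
    "claude-haiku": (0.25, 1.25),
    "claude-sonnet": (3.00, 15.00),
    "gpt-4o-mini": (0.15, 0.60),
    "gpt-4o": (2.50, 10.00),
    "ollama-local": (0.00, 0.00),  # local inference: no per-token fee
}

def monthly_cost(model, requests_per_day, in_tokens, out_tokens, days=30):
    """Estimate USD/month for a given usage pattern."""
    in_price, out_price = PRICES_PER_1M_TOKENS[model]
    total_in = requests_per_day * in_tokens * days
    total_out = requests_per_day * out_tokens * days
    return (total_in * in_price + total_out * out_price) / 1_000_000

# Example pattern: 50 requests/day, ~2k input + ~500 output tokens each.
for model in PRICES_PER_1M_TOKENS:
    print(f"{model}: ${monthly_cost(model, 50, 2000, 500):.2f}/mo")
```

Even with placeholder prices, the shape of the result supports the decision above: small hosted models (Haiku, GPT-4o-mini) land in the dollars-per-month range for personal use, larger ones an order of magnitude higher, and Ollama stays at zero.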

Metadata

    Labels

      • ai: AI/ML features and model work
      • mvp: Required for MVP release
      • research: Research and discovery tasks
