Performance benchmarks: document throughput, limits, and known bottlenecks #151

@Mnehmos

Description

Problem

Teams evaluating NLS for production have no data on compile times, runtime performance, or known limits (e.g. max file size, recursion depth, number of @use modules).

Acceptance Criteria

  • Benchmark suite measuring parse time, compile time, and runtime throughput for representative workloads
  • docs/PERFORMANCE.md documenting results and known limits
  • Benchmarks run in CI and tracked over time (regression detection)
  • Documented: what happens when limits are exceeded (graceful error vs crash)
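As a starting point for the first criterion, here is a minimal harness sketch. It is not tied to NLS internals: the workload is a stand-in lambda, and any wiring to the actual compiler (e.g. invoking an `nls` CLI or API) is an assumption to be filled in.

```python
import statistics
import time


def benchmark(fn, *, warmup=3, runs=20):
    """Time fn() over several runs and report summary stats in milliseconds.

    Warmup iterations are discarded so caches/JIT effects don't skew results.
    """
    for _ in range(warmup):
        fn()
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        samples.append((time.perf_counter() - start) * 1000.0)
    samples.sort()
    return {
        "min_ms": samples[0],
        "median_ms": statistics.median(samples),
        "p95_ms": samples[max(0, int(len(samples) * 0.95) - 1)],
    }


# Stand-in workload; replace with the parse/compile/run step under test
# (for example, a call into the hypothetical `nls compile` entry point).
result = benchmark(lambda: sum(i * i for i in range(10_000)))
```

Emitting the result dict as JSON per commit would give CI a simple artifact to diff for regression detection.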

Why This Blocks v1.0

Engineering teams need to know what they're getting into before betting a project on it.

Metadata

Assignees

No one assigned

    Labels

    documentation (Improvements or additions to documentation), priority-low (Nice to have), tooling (Development tools)

    Projects

    No projects

    Relationships

    None yet

    Development

    No branches or pull requests