
[codex] add benchmark harness and docs #10

Merged
alvinwan merged 2 commits into main from dev/alvin/benchmark-speed on Apr 5, 2026
Conversation

@alvinwan (Owner) commented Apr 5, 2026

Summary

  • move speed benchmarking into a dedicated benchmarks/ area
  • add a benchmark harness for repo fixtures, TexSoup package runs, and baseline single-file comparisons
  • keep the top-level README focused by linking to the benchmark notes instead of inlining the latency table

Validation

  • PYTHONPATH=. .venv/bin/python benchmarks/benchmark_speed.py --help
  • PYTHONPATH=. .venv/bin/python benchmarks/benchmark_speed.py --pyminifier-root /private/tmp/pymini-pyminifier-src/pyminifier-2.1

@alvinwan alvinwan marked this pull request as ready for review April 5, 2026 07:08
@alvinwan alvinwan merged commit 90f2504 into main Apr 5, 2026
10 checks passed
@alvinwan alvinwan deleted the dev/alvin/benchmark-speed branch April 5, 2026 07:08

@chatgpt-codex-connector (Bot) left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: d454edd2dc

ℹ️ About Codex in GitHub

Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you:

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".

start = perf_counter()
result = transform(source)
samples.append(perf_counter() - start)
avg = mean(samples)

P2: Validate iteration counts before computing mean

If any timed iteration flag is set to 0 (for example --example-iterations 0), samples stays empty and mean(samples) raises StatisticsError, terminating the benchmark run with a traceback. This makes the harness brittle for common tuning workflows (e.g., intentionally skipping a benchmark group) and can break automated experiment scripts that pass through zero-valued parameters. Guard these arguments as positive integers (or handle zero by skipping that section) before calling mean.

Useful? React with 👍 / 👎.
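
Below is a minimal sketch of the guard Codex suggests, not the harness's actual code: it assumes the iteration counts are parsed with argparse, the positive_int helper and the transform/source stand-ins are illustrative names, and only the --example-iterations flag is taken from the review comment above.

import argparse
from statistics import mean
from time import perf_counter


def positive_int(value: str) -> int:
    """argparse type that rejects zero or negative iteration counts."""
    count = int(value)
    if count < 1:
        raise argparse.ArgumentTypeError(f"expected a positive integer, got {value!r}")
    return count


def transform(source: str) -> str:
    """Stand-in for whatever the harness actually times."""
    return source


parser = argparse.ArgumentParser(description="speed benchmark harness (sketch)")
parser.add_argument("--example-iterations", type=positive_int, default=5)
args = parser.parse_args()

source = "example input"
samples = []
for _ in range(args.example_iterations):
    start = perf_counter()
    result = transform(source)
    samples.append(perf_counter() - start)

# positive_int guarantees at least one sample, so mean() cannot raise StatisticsError
avg = mean(samples)
print(f"avg over {len(samples)} runs: {avg:.6f}s")

Treating zero as "skip this benchmark group" instead would mean accepting the value at parse time and checking the count before the timing loop, moving on to the next group when it is zero.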
