💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: d454edd2dc
ℹ️ About Codex in GitHub
Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you
- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review".
If Codex has suggestions, it will comment; otherwise it will react with 👍.
Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".
    start = perf_counter()
    result = transform(source)
    samples.append(perf_counter() - start)
avg = mean(samples)
Validate iteration counts before computing the mean
If any timed-iteration flag is set to 0 (for example --example-iterations 0), samples stays empty and mean(samples) raises statistics.StatisticsError, terminating the benchmark run with a traceback. This makes the harness brittle in common tuning workflows (e.g., intentionally skipping a benchmark group) and can break automated experiment scripts that pass zero-valued parameters through. Validate these arguments as positive integers, or handle zero by skipping that section, before calling mean.
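The suggested guard can be sketched as follows. This is an illustrative sketch, not the harness's actual code: the flag name --iterations, the transform callable, and the run_benchmark helper are all assumptions. It shows both mitigations the comment mentions: rejecting non-positive counts at argument parsing time, and skipping the group instead of raising when samples is empty.

```python
# Sketch: guard iteration counts before computing mean(samples).
# Flag/function names (--iterations, transform, run_benchmark) are
# illustrative assumptions, not taken from benchmark_speed.py.
import argparse
from statistics import mean
from time import perf_counter


def positive_int(value: str) -> int:
    """argparse type= callback that rejects zero and negative counts."""
    n = int(value)
    if n <= 0:
        raise argparse.ArgumentTypeError(f"expected a positive integer, got {n}")
    return n


def run_benchmark(transform, source, iterations: int):
    """Time transform(source) over `iterations` runs.

    Returns the mean sample time, or None when the group is skipped
    (iterations == 0) instead of letting mean([]) raise StatisticsError.
    """
    samples = []
    for _ in range(iterations):
        start = perf_counter()
        transform(source)
        samples.append(perf_counter() - start)
    return mean(samples) if samples else None


parser = argparse.ArgumentParser()
parser.add_argument("--iterations", type=positive_int, default=10)
```

With the type= callback, a zero-valued flag fails fast with a clear usage error instead of a traceback deep inside the timing loop; the None return in run_benchmark covers callers that deliberately skip a group.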
Useful? React with 👍 / 👎.
Summary
benchmarks/area
Validation
PYTHONPATH=. .venv/bin/python benchmarks/benchmark_speed.py --help
PYTHONPATH=. .venv/bin/python benchmarks/benchmark_speed.py --pyminifier-root /private/tmp/pymini-pyminifier-src/pyminifier-2.1