# Benchmark Framework

Performance testing framework for spatial index implementations. Implementations are auto-discovered from `packages/@jim/spandex/src/index/`.
## Quick Reference

```shell
# Run benchmarks (quick, ~2 min)
deno task bench

# Update documentation (quick, ~2 min)
deno task bench:update

# Statistical analysis (slow, ~30 min)
deno task bench:analyze 5 docs/analyses/benchmark-statistics.md
```
## When to Run Which Benchmark

### `deno task bench` - Quick Validation

Use when:

- Verifying implementation changes
- Checking relative performance
- Quick local testing

**Duration:** ~2 minutes

**Output:** Terminal only
### `deno task bench:update` - Documentation Update

Use when:

- After implementation changes
- Before committing code
- When `BENCHMARKS.md` is outdated

**Duration:** ~2 minutes

**Output:** Regenerates `BENCHMARKS.md`

Note: Auto-discovers active implementations from `packages/@jim/spandex/src/index/`.

⚠️ **IMPORTANT:** Always run this before completing tasks if implementations changed.
### `deno task bench:analyze` - Statistical Analysis

Use when:

- After major implementation changes
- Replacing an algorithm (e.g., Hilbert → Morton)
- Running research experiments that require statistical rigor
- You need CV% and confidence intervals

**Duration:** 20-30 minutes (5 runs recommended)

**Output:** Overwrites `docs/analyses/benchmark-statistics.md`

Note: ALWAYS outputs to `benchmark-statistics.md` (don't create variants).

⚠️ **WARNING:** This is SLOW. Not for quick checks - use `deno task bench` instead.

For quicker but less rigorous validation (10-15 min), use 3 runs:

```shell
deno task bench:analyze 3 docs/analyses/benchmark-statistics.md
```
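The stability metrics mentioned above (CV% and confidence intervals) can be sketched in a few lines. This is an illustrative computation, not the analyzer's actual code, and the timings are made up:

```typescript
// Coefficient of variation (CV%) and a normal-approximation 95% confidence
// interval over per-run mean timings. Illustrative only: the real analyzer's
// internals may differ, and the sample numbers below are invented.
function runStats(samples: number[]): { mean: number; cvPct: number; ci95: [number, number] } {
  const n = samples.length;
  const mean = samples.reduce((a, b) => a + b, 0) / n;
  // Sample variance (n - 1 denominator) since these are a sample of runs
  const variance = samples.reduce((a, b) => a + (b - mean) ** 2, 0) / (n - 1);
  const sd = Math.sqrt(variance);
  const halfWidth = 1.96 * (sd / Math.sqrt(n)); // z-based approximation
  return { mean, cvPct: (sd / mean) * 100, ci95: [mean - halfWidth, mean + halfWidth] };
}

const timingsMs = [12.1, 11.8, 12.4, 12.0, 11.9]; // five runs of one scenario
const { mean, cvPct } = runStats(timingsMs);
console.log(`mean=${mean.toFixed(2)}ms CV=${cvPct.toFixed(1)}%`); // → mean=12.04ms CV=1.9%
```

A low CV% across runs indicates stable measurements; this is why 5 runs are recommended over 3 when rigor matters.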
## Workflow Integration

### During Development

```shell
# Iterate quickly
deno task bench  # Quick feedback (~2 min)
```

### Before Completing a Task

```shell
# Update both benchmark docs
deno task bench:update                                           # Updates BENCHMARKS.md (~2 min)
deno task bench:analyze 5 docs/analyses/benchmark-statistics.md  # Updates stats (~30 min)
```

Both must be current before completing/committing work.
## Advanced Usage

### Include Archived Implementations

```shell
deno task bench:archived
# or
deno bench benchmarks/performance.ts -- --include-archived
```

### Exclude Implementations

```shell
# Exclude a single implementation
deno bench benchmarks/performance.ts -- --exclude=CompactRTree

# Repeat the flag to exclude several
deno bench benchmarks/performance.ts -- --exclude=A --exclude=B

# Combine with --include-archived
deno bench benchmarks/performance.ts -- --include-archived --exclude=LinearScan
```
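One plausible way repeated `--exclude=` flags filter the discovered list is a simple set difference. This is a hypothetical sketch (the helper name and parsing details are not from the real CLI):

```typescript
// Hypothetical filter: collect every --exclude=NAME argument and drop those
// names from the discovered implementation list. The actual benchmark CLI's
// argument handling may differ.
function applyExcludes(discovered: string[], args: string[]): string[] {
  const excluded = new Set(
    args
      .filter((a) => a.startsWith("--exclude="))
      .map((a) => a.slice("--exclude=".length)),
  );
  return discovered.filter((name) => !excluded.has(name));
}

const kept = applyExcludes(
  ["CompactRTree", "LinearScan", "RTree"],
  ["--exclude=CompactRTree", "--exclude=LinearScan"],
);
console.log(kept); // only RTree remains
```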
## Auto-Discovery

**Active:** Auto-discovered from `packages/@jim/spandex/src/index/` (all `.ts` files).

**Archived:** Removed from the filesystem but preserved in git history. To benchmark an archived implementation, temporarily restore it to `archive/src/implementations/` and run with `--include-archived`.
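The discovery rule above ("every `.ts` file is an active implementation") amounts to a filename filter. A minimal sketch, with a hardcoded listing standing in for the real directory read:

```typescript
// Zero-config discovery rule, as described above: keep .ts files, strip the
// extension to get the implementation name. The file list here is hardcoded
// for illustration; the real tool enumerates the directory at runtime.
function activeImplementations(files: string[]): string[] {
  return files
    .filter((f) => f.endsWith(".ts"))
    .map((f) => f.replace(/\.ts$/, ""))
    .sort();
}

console.log(activeImplementations(["RTree.ts", "LinearScan.ts", "notes.md"]));
// notes.md is ignored; yields LinearScan and RTree
```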
## Principles

- **Active by default** - auto-discover from `packages/@jim/spandex/src/index/`
- **Selective exclusion** - use the `--exclude=` flag for filtering
- **No configuration needed** - zero-config auto-discovery
## Output Files

### `BENCHMARKS.md` (Quick Update)

- **Generated by:** `deno task bench:update`
- **Purpose:** Performance comparison tables for users
- **Format:** Markdown tables with relative performance (Nx faster/slower)
- **Audience:** Library users making algorithm choices

⚠️ **NEVER EDIT MANUALLY** - your changes will be lost on regeneration.
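An "Nx faster/slower" cell is just a ratio of mean timings. A sketch of how such a label could be derived (illustrative only; `BENCHMARKS.md` itself is generated, never hand-computed):

```typescript
// Derive a relative-performance label from two mean timings. The function
// name and formatting are hypothetical, not the generator's actual code.
function relativeLabel(baselineMs: number, candidateMs: number): string {
  const ratio = baselineMs / candidateMs;
  return ratio >= 1
    ? `${ratio.toFixed(2)}x faster`  // candidate beat the baseline
    : `${(1 / ratio).toFixed(2)}x slower`;
}

console.log(relativeLabel(10, 4)); // → 2.50x faster
console.log(relativeLabel(4, 10)); // → 2.50x slower
```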
### `docs/analyses/benchmark-statistics.md` (Statistical Analysis)

- **Generated by:** `deno task bench:analyze`
- **Purpose:** Statistical validation (CV%, confidence intervals)
- **Format:** Win rates, scenario breakdowns, stability metrics
- **Audience:** Researchers and contributors validating experiments

⚠️ **ALWAYS OVERWRITES** - don't create separate files for experiments.
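One common definition of a win rate, sketched here as an illustration (the analyzer's actual metric definition may differ): per scenario, the implementation with the lowest mean time "wins", and win rate is wins divided by scenario count.

```typescript
// Hypothetical win-rate computation over per-scenario mean timings.
// All names and numbers below are invented for illustration.
type ScenarioResult = Record<string, number>; // implementation name → mean ms

function winRates(scenarios: ScenarioResult[]): Record<string, number> {
  const wins: Record<string, number> = {};
  for (const scenario of scenarios) {
    // Lowest mean time wins the scenario
    const [winner] = Object.entries(scenario).sort((a, b) => a[1] - b[1])[0];
    wins[winner] = (wins[winner] ?? 0) + 1;
  }
  for (const name of Object.keys(wins)) wins[name] /= scenarios.length;
  return wins;
}

const rates = winRates([
  { RTree: 3.1, LinearScan: 9.8 },
  { RTree: 4.0, LinearScan: 2.2 }, // tiny datasets can favor a linear scan
]);
console.log(rates); // each wins one of two scenarios: 0.5 and 0.5
```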
## See Also

- Implementation Lifecycle - adding/archiving implementations
- `analyses/benchmark-statistics.md` - latest statistical analysis