HTTP benchmarking CLI for load testing web services with concurrent connections, latency stats, and Lua scripting hooks.
AI Analysis
wrk is a command-line HTTP benchmarking tool for driving high-concurrency load against a web service from a single machine. It focuses on throughput, latency, and error-rate measurement, with Lua hooks when you need more than a fixed request.
What It Enables
- Run repeatable HTTP or HTTPS benchmarks with configurable threads, open connections, duration, headers, timeouts, and optional latency breakdowns.
- Exercise custom request patterns with Lua, including POST bodies, dynamic request generation, delays, per-response inspection, and custom end-of-run summaries.
- Compare request rate, transfer rate, latency, and error counts before and after deploys, config changes, or infrastructure tuning.
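The knobs above map directly onto wrk's documented flags (`-t`, `-c`, `-d`, `-H`, `--timeout`, `--latency`, `-s`). A minimal sketch of assembling an invocation for an automated run; the URL, header, and numeric values are placeholders, and `build_wrk_args` is a hypothetical helper, not part of wrk:

```python
import shlex

def build_wrk_args(url, threads=4, connections=64, duration="30s",
                   timeout="2s", headers=None, latency=True, script=None):
    """Assemble a wrk command line; flags mirror wrk's documented options."""
    args = ["wrk", "-t", str(threads), "-c", str(connections),
            "-d", duration, "--timeout", timeout]
    if latency:
        args.append("--latency")   # print the detailed latency distribution
    for header in (headers or []):
        args += ["-H", header]     # e.g. "Authorization: Bearer ..."
    if script:
        args += ["-s", script]     # Lua script for POST bodies, dynamic requests
    args.append(url)               # target URL goes last
    return args

cmd = build_wrk_args("http://127.0.0.1:8080/", threads=2, connections=100,
                     headers=["Accept: application/json"])
print(shlex.join(cmd))
```

Returning an argv list rather than a shell string keeps the command safe to hand to `subprocess.run` without quoting bugs.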
Agent Fit
- The CLI surface is small, non-interactive, and easy to rerun in scripts or CI when an agent needs a quick load or regression check against a known endpoint.
- Automation support is weaker than the benchmark engine itself: results are plain-text summaries by default, there is no built-in JSON output flag, and machine-readable reporting usually means a custom Lua `done()` handler or parsing stdout.
- Best used as a verification primitive in deploy and performance loops where the agent already knows the target URL and the thresholds that should fail the workflow.
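Because there is no JSON flag, one plausible bridge is a small stdout parser. The regexes below target wrk's standard summary lines (`Requests/sec:`, `N requests in`, `Non-2xx or 3xx responses:`); the sample text is an illustrative fragment, not output from a real run:

```python
import re

def parse_wrk_summary(text):
    """Extract headline numbers from wrk's plain-text summary."""
    metrics = {}
    m = re.search(r"Requests/sec:\s+([\d.]+)", text)
    if m:
        metrics["requests_per_sec"] = float(m.group(1))
    m = re.search(r"(\d+) requests in", text)
    if m:
        metrics["requests"] = int(m.group(1))
    # wrk only prints this line when non-2xx/3xx responses occurred
    m = re.search(r"Non-2xx or 3xx responses:\s+(\d+)", text)
    metrics["bad_responses"] = int(m.group(1)) if m else 0
    return metrics

sample = """\
  2 threads and 100 connections
  120000 requests in 30.00s, 95.00MB read
  Non-2xx or 3xx responses: 37
Requests/sec:   4000.00
Transfer/sec:      3.17MB
"""
print(parse_wrk_summary(sample))
# → {'requests_per_sec': 4000.0, 'requests': 120000, 'bad_responses': 37}
```

The resulting dict can be logged as JSON or compared against stored baselines across deploys.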
Caveats
- Client-side limits matter: the wrk README notes that ephemeral-port exhaustion, socket recycling, and listen-backlog settings can distort results when the load generator itself, rather than the server, is the bottleneck.
- A completed run does not fail on bad HTTP statuses by itself: non-2xx/3xx responses and socket errors are only counted in the text summary, so agents need explicit parsing or scripted reporting to turn benchmark output into pass-or-fail signals.
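One way to close that gap is to scan the summary for error lines and apply thresholds. The line formats below match wrk's standard output (`Non-2xx or 3xx responses:` and `Socket errors:`), while `run_passes` and the zero thresholds are illustrative choices, not anything wrk provides:

```python
import re

def run_passes(output, max_bad_responses=0, max_socket_errors=0):
    """Return True if a wrk run's text output stays under the error thresholds."""
    bad = re.search(r"Non-2xx or 3xx responses:\s+(\d+)", output)
    bad_count = int(bad.group(1)) if bad else 0  # absent line means zero
    sock = re.search(
        r"Socket errors: connect (\d+), read (\d+), write (\d+), timeout (\d+)",
        output)
    sock_count = sum(int(g) for g in sock.groups()) if sock else 0
    return bad_count <= max_bad_responses and sock_count <= max_socket_errors

# Illustrative output fragments, not real runs:
clean = "  120000 requests in 30.00s, 95.00MB read\nRequests/sec: 4000.00\n"
noisy = clean + "  Socket errors: connect 0, read 12, write 0, timeout 3\n"
print(run_passes(clean), run_passes(noisy))  # → True False
```

In a CI wrapper the boolean would typically become the process exit code, so a noisy run fails the workflow step.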