# How-to guides
Task-oriented recipes for users who already know what they want to accomplish.
- Submit detections as numpy arrays or DLPack tensors — skip JSON for tight model→evaluator loops.
- Evaluate on a background thread — `BackgroundEvaluator` for training loops where the kernel measurably stalls the main thread.
- Evaluate with boundary IoU — the `Boundary()` kernel and its dilation knob.
- Evaluate with `vernier eval` — the static CLI binary for CI pipelines without a Python interpreter.
- Configure the evaluator — the entry-point survey of `iou`/`parity_mode`/`max_dets`/`use_cats`/`cast_inputs` and how they compose.
- Custom evaluation grids — the ADR-0040 axes (`iou_thresholds`, `recall_thresholds`, `area_ranges`) and the `evaluate_tables` route.
- Distributed evaluation across ranks — rank-local + gather across instance / semantic / panoptic.
- Evaluate keypoints with OKS — `Keypoints()` with per-category sigmas.
- Use result tables — the per-image / per-class / per-detection / per-pair polars DataFrames.
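The zero-copy submission path in the first guide can be illustrated with NumPy's standard DLPack support. Only the array interchange below is standard; how the evaluator consumes the buffer is covered in the guide itself.

```python
import numpy as np

# Detections packed as an (N, 6) array: [image_id, x1, y1, x2, y2, score]
dets = np.array([[0, 10.0, 20.0, 50.0, 80.0, 0.9],
                 [0, 12.0, 22.0, 48.0, 76.0, 0.4]], dtype=np.float32)

# Any producer implementing __dlpack__ (PyTorch, CuPy, JAX, ...) can hand
# its buffer over without a copy; np.from_dlpack performs the exchange.
roundtrip = np.from_dlpack(dets)

assert roundtrip.shape == (2, 6)
assert np.shares_memory(roundtrip, dets)  # a view, not a JSON round-trip
```

This is the point of skipping JSON in tight model→evaluator loops: the detections never leave their original buffer.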
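The background-thread guide rests on a standard pattern: the training loop enqueues batches and returns immediately, while a worker thread runs the (stalling) kernel. A minimal stdlib sketch of that pattern — the class name and `kernel` callable here are illustrative, not the library's `BackgroundEvaluator` API:

```python
import queue
import threading

class BackgroundEvaluatorSketch:
    """Feed batches from the training loop; evaluate them off-thread."""

    def __init__(self, kernel):
        self._kernel = kernel
        self._queue = queue.Queue()
        self._results = []
        self._worker = threading.Thread(target=self._run, daemon=True)
        self._worker.start()

    def submit(self, batch):
        # Returns immediately; the main thread never waits on the kernel.
        self._queue.put(batch)

    def _run(self):
        while True:
            batch = self._queue.get()
            if batch is None:  # sentinel: drain complete
                return
            self._results.append(self._kernel(batch))

    def finish(self):
        # Signal shutdown, wait for the worker, and hand back results in order.
        self._queue.put(None)
        self._worker.join()
        return self._results

ev = BackgroundEvaluatorSketch(kernel=lambda b: sum(b))
for batch in ([1, 2], [3, 4]):
    ev.submit(batch)
totals = ev.finish()  # [3, 7]
```

A single worker draining a FIFO queue keeps results in submission order, which is why `finish()` can return a plain list.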
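The boundary-IoU guide's "dilation knob" controls how wide a band around each mask contour is compared. A pure-NumPy sketch of the standard boundary-IoU idea (intersection over union of the inner contour bands); the library's `Boundary()` kernel is its own implementation, this is only illustrative:

```python
import numpy as np

def boundary_band(mask: np.ndarray, dilation: int) -> np.ndarray:
    """Pixels of `mask` within `dilation` steps of its contour (inner band)."""
    eroded = mask.copy()
    for _ in range(dilation):
        padded = np.pad(eroded, 1, constant_values=False)
        # 4-neighbourhood erosion: keep a pixel only if all neighbours are set
        eroded = (padded[1:-1, 1:-1] & padded[:-2, 1:-1] & padded[2:, 1:-1]
                  & padded[1:-1, :-2] & padded[1:-1, 2:])
    return mask & ~eroded

def boundary_iou(a: np.ndarray, b: np.ndarray, dilation: int = 2) -> float:
    ba, bb = boundary_band(a, dilation), boundary_band(b, dilation)
    union = (ba | bb).sum()
    return float((ba & bb).sum() / union) if union else 1.0

gt = np.zeros((32, 32), dtype=bool); gt[4:28, 4:28] = True
pred = np.zeros((32, 32), dtype=bool); pred[6:30, 6:30] = True
score = boundary_iou(gt, pred, dilation=2)
```

Because only the contour bands enter the ratio, large interior overlap no longer masks sloppy boundaries — the reason to prefer this kernel for segmentation-quality work.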
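The OKS guide uses the COCO keypoint similarity: a per-keypoint Gaussian of the squared distance, normalised by object area and a per-keypoint sigma, averaged over labeled keypoints. A self-contained sketch of that formula — the sigma values below are illustrative placeholders, not the library's per-category defaults:

```python
import numpy as np

def oks(gt_xy, pred_xy, visible, area, sigmas):
    """COCO-style Object Keypoint Similarity over the labeled keypoints."""
    d2 = np.sum((gt_xy - pred_xy) ** 2, axis=-1)   # squared pixel distances
    k2 = (2.0 * sigmas) ** 2                       # per-keypoint constants
    e = d2 / (2.0 * area * k2 + np.spacing(1))     # normalised error
    labeled = visible > 0
    return float(np.exp(-e)[labeled].mean()) if labeled.any() else 0.0

sigmas = np.array([0.026, 0.025, 0.035])           # illustrative values
gt = np.array([[10.0, 10.0], [20.0, 12.0], [30.0, 14.0]])
pred = gt + np.array([[0.5, 0.0], [0.0, 0.5], [1.0, 1.0]])
score = oks(gt, pred, visible=np.ones(3), area=100.0, sigmas=sigmas)
```

The area term makes the metric scale-invariant: the same pixel error is penalised more on a small object than a large one, which is why per-category sigmas matter.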