Operations
Investigate a spend spike with current vs baseline: a practical playbook
Use a repeatable current-vs-baseline workflow to isolate the real driver behind spend jumps in minutes.
Full guide: LLM cost attribution: endpoint, prompt version, tenant, and user
What to alert on
- cost/request drift by endpointTag or promptVersion
- unexpected tenant concentration in Top Users
- request burst with falling success ratio
- budget warning, spend-alert, and exceeded state transitions
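The first alert condition above can be sketched as a simple check over per-dimension aggregates. This is a minimal illustration, assuming rows have already been grouped by a dimension key such as endpointTag or promptVersion; the field names (`cost`, `requests`) and the 25% threshold are assumptions, not a fixed schema.

```python
# Sketch of a cost/request drift check per dimension key (e.g. endpointTag
# or promptVersion). Field names and threshold are illustrative.

def cost_per_request(cost: float, requests: int) -> float:
    return cost / requests if requests else 0.0

def drift_alerts(current: dict, baseline: dict, threshold: float = 0.25) -> list:
    """Flag dimension keys whose cost/request moved more than `threshold`
    (relative change) between the baseline and current windows."""
    alerts = []
    for key, cur in current.items():
        base = baseline.get(key)
        if not base:
            continue  # new key: no baseline to drift from
        cur_cpr = cost_per_request(cur["cost"], cur["requests"])
        base_cpr = cost_per_request(base["cost"], base["requests"])
        if base_cpr and abs(cur_cpr - base_cpr) / base_cpr > threshold:
            alerts.append(f"cost/request drift on {key}: {base_cpr:.4f} -> {cur_cpr:.4f}")
    return alerts
```

A real alerting pipeline would also carry the window boundaries and request counts so a reviewer can judge whether the drift is statistically meaningful.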
Execution checklist
- Confirm spike type: volume, token, deploy, or abuse signal.
- Assign one incident owner and one communication channel.
- Apply immediate containment before deep optimization.
- Document the dominant endpoint, tenant, and promptVersion driver.
- Convert findings into one permanent guardrail update.
Why equal windows are mandatory for spike analysis
Comparing the current 7 days to an arbitrary baseline creates false narratives. The baseline must be the immediately previous window with the same duration.
Equal windows keep traffic seasonality and business cadence aligned, so deltas are decision-ready.
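The equal-and-contiguous rule is easy to enforce in code: derive the baseline from the current window instead of letting a caller pick it. A minimal sketch:

```python
from datetime import datetime, timedelta

def baseline_window(current_start: datetime, current_end: datetime):
    """Return the immediately previous window with the same duration,
    ending exactly where the current window begins (equal and contiguous)."""
    duration = current_end - current_start
    return current_start - duration, current_start
```

Because the baseline is computed rather than chosen, a 7-day incident window always compares against the preceding 7 days, keeping weekday/weekend seasonality aligned.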
Turn diagnosis into action
Identify the cost driver, validate it with attribution, then apply one durable control before the next billing cycle.
Re-run this workflow on your own spend data
Apply the same current-vs-baseline steps to your own telemetry, then validate each conclusion against your own cost signals.
Default setup that works in production
- Current window: selected incident period (for example last 7d)
- Baseline window: previous equal period (previous 7d)
- Compare dimensions: endpointTag, promptVersion, tenantId, userId
- Primary metric: cost delta first, then cost/request and token deltas
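The "cost delta first" rule above can be sketched as a ranking over per-dimension cost totals. This is an illustrative implementation, assuming raw usage rows as dicts with a `cost` field and a dimension column such as `endpointTag`; the row shape is an assumption, not a fixed schema.

```python
def cost_delta_by_dimension(current_rows, baseline_rows, dim: str):
    """Aggregate cost per dimension value and return (value, delta) pairs
    sorted by absolute cost delta, largest driver first."""
    def totals(rows):
        out = {}
        for r in rows:
            out[r[dim]] = out.get(r[dim], 0.0) + r["cost"]
        return out
    cur, base = totals(current_rows), totals(baseline_rows)
    keys = set(cur) | set(base)
    deltas = {k: cur.get(k, 0.0) - base.get(k, 0.0) for k in keys}
    return sorted(deltas.items(), key=lambda kv: abs(kv[1]), reverse=True)
```

Run it once per compare dimension (endpointTag, promptVersion, tenantId, userId); the top entry of each ranking is the candidate driver to validate next with cost/request and token deltas.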
15-minute execution order
- Confirm current and baseline windows are equal and contiguous.
- Rank top endpoint drivers by absolute cost delta.
- Check promptVersion and token shifts on the top endpoint.
- Check tenant/user concentration before declaring abuse.
- Contain first, then document one permanent guardrail.
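The concentration check in the step above ("check tenant/user concentration before declaring abuse") can be sketched as a share-of-total computation. The `tenantId`/`cost` row fields are illustrative assumptions:

```python
def top_tenant_share(rows):
    """Return the tenant with the highest total cost and its share of the
    window's total cost. A sudden jump in this share points at tenant
    concentration rather than broad organic growth."""
    totals = {}
    for r in rows:
        totals[r["tenantId"]] = totals.get(r["tenantId"], 0.0) + r["cost"]
    total = sum(totals.values())
    tenant = max(totals, key=totals.get)
    return tenant, (totals[tenant] / total if total else 0.0)
```

Compare the share across the current and baseline windows: a tenant moving from 20% to 80% of spend is a concentration signal worth containing; a stable share suggests the spike is spread across the fleet.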
Common false positives to filter out
- Demo/test traffic mixed into production windows
- Backfill events arriving late and polluting the period
- Unpriced unknown-model rows creating temporary deltas
- Retry bursts counted as new work due to poor request idempotency
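The four false positives above can all be filtered before comparison. A minimal sketch, assuming illustrative `environment` and `priced` fields on each row and an `externalRequestId` for retry deduplication:

```python
def filter_billable(rows):
    """Drop common false positives before spike analysis: non-production
    traffic, unpriced unknown-model rows, and retry duplicates that share
    an externalRequestId. Field names are illustrative, not a fixed schema."""
    seen = set()
    clean = []
    for r in rows:
        if r.get("environment") != "production":
            continue  # demo/test traffic mixed into the window
        if not r.get("priced", True):
            continue  # unpriced unknown-model rows create temporary deltas
        rid = r.get("externalRequestId")
        if rid in seen:
            continue  # retry burst counted once, not as new work
        seen.add(rid)
        clean.append(r)
    return clean
```

Late-arriving backfill events are best handled upstream by stamping rows with their original event time and re-running the comparison after the backfill settles, rather than filtering in this pass.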
FAQ
Is userId required?
No. userId is optional, but recommended for user-level attribution within a tenant. If needed, send a hashed identifier instead of raw personal data.
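Hashing a user identifier before sending it can be a one-liner. A minimal sketch, assuming a workspace-level salt kept constant so the same user always maps to the same hash:

```python
import hashlib

def hashed_user_id(raw_user_id: str, salt: str) -> str:
    """Derive a stable, non-reversible identifier for attribution.
    Keep the salt constant per workspace so one user maps to one hash."""
    return hashlib.sha256((salt + raw_user_id).encode("utf-8")).hexdigest()
```

SHA-256 here is a privacy-preserving mapping, not a security boundary: anyone holding the salt can recompute the hash for a known user.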
Where should token usage values come from?
Prefer provider usage fields first. If unavailable, use tokenizer estimates and mark uncertainty in your workflow.
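The fallback order above can be encoded directly, so estimated counts are always flagged. A sketch, assuming a provider usage dict with a `total_tokens` field and a rough 4-characters-per-token heuristic for the fallback (both are assumptions; real providers and tokenizers differ):

```python
def token_usage(provider_usage, text: str):
    """Prefer the provider's reported token count; otherwise fall back to
    a coarse ~4-chars-per-token estimate and flag the value as estimated."""
    if provider_usage and "total_tokens" in provider_usage:
        return provider_usage["total_tokens"], False
    return max(1, len(text) // 4), True
```

Carrying the `estimated` flag through your pipeline lets spike analysis exclude or down-weight estimated rows when exact and estimated counts disagree.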
How should retries be handled?
Keep the same externalRequestId for the same logical request so idempotency remains stable across retries.
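The retry rule can be sketched as a wrapper that mints the externalRequestId once per logical request and reuses it on every attempt. The `send` callable and payload shape are illustrative assumptions:

```python
import uuid

def call_with_retries(send, payload: dict, max_attempts: int = 3):
    """Generate externalRequestId once per logical request and reuse it on
    every retry, so retried attempts are never counted as new work."""
    payload = {**payload, "externalRequestId": str(uuid.uuid4())}
    last_error = None
    for _ in range(max_attempts):
        try:
            return send(payload)
        except Exception as err:  # sketch: retry on any transport error
            last_error = err
    raise last_error
```

Because the id is attached before the loop, all attempts share it, and downstream deduplication by externalRequestId collapses the retry burst into one billable request.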
Can telemetry break production flow?
It should not. Use short timeouts, catch errors, and keep telemetry asynchronous so provider calls keep running.
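A minimal fire-and-forget sketch of that pattern, assuming a `send` callable that accepts a `timeout` keyword (an assumption, not a specific client API):

```python
import threading

def emit_telemetry(event: dict, send, timeout_s: float = 0.5) -> None:
    """Fire-and-forget telemetry: run the network call on a daemon thread
    with a short timeout and swallow errors, so the provider call that
    produced `event` is never blocked or failed by telemetry."""
    def _send():
        try:
            send(event, timeout=timeout_s)
        except Exception:
            pass  # never let a telemetry failure surface to the caller
    threading.Thread(target=_send, daemon=True).start()
```

In high-throughput services a bounded queue with a background worker is the sturdier version of the same idea; the invariant is identical: the hot path only enqueues and never waits on telemetry I/O.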
Evaluation resources
For security and procurement reviews, use our trust summary before final tool selection.