Budget controls
How to configure LLM budget alerts in 10 minutes (operator setup)
Use this page when you need the exact setup sequence. For policy design and the escalation model, see the budget-alert pillar page.
Full guide: LLM budget alert policy: thresholds and escalation
What to configure first
- Daily and monthly budget thresholds
- Warning ratio (example: 80 percent)
- Exceeded state behavior and owner notification channel
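The first-pass configuration can be captured in one small object. This is a minimal sketch; the field names (dailyLimitUsd, warningRatio, exceededBehavior, ownerChannel) are illustrative assumptions, not a specific product's schema.

```python
# Illustrative budget-alert configuration; adapt field names to your tooling.
DEFAULT_BUDGET_CONFIG = {
    "dailyLimitUsd": 50.0,
    "monthlyLimitUsd": 1000.0,
    "warningRatio": 0.80,          # warning fires at 80 percent of the limit
    "exceededBehavior": "notify",  # "notify" | "throttle" | "block"
    "ownerChannel": "#llm-budget-alerts",
}

def warning_threshold_usd(config: dict, period: str = "daily") -> float:
    """Absolute spend at which the warning alert should fire."""
    return config[f"{period}LimitUsd"] * config["warningRatio"]
```

With the defaults above, the daily warning fires at 40 USD and the monthly warning at 800 USD.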
10-minute setup flow
- Set workspace budget limits in Settings.
- Enable alert delivery channel and schedule.
- Send sample telemetry and verify the budgetWarning/budgetExceeded flags.
- Document response owner for exceeded state.
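The verification step above checks two flags. A minimal sketch of how they can be derived from current spend, assuming the flag names mentioned above but inventing the boundary logic:

```python
def budget_flags(spend_usd: float, limit_usd: float,
                 warning_ratio: float = 0.80) -> dict:
    """Derive the two alert flags from current spend.
    budgetWarning covers the band between the warning threshold and the limit;
    budgetExceeded takes over once the limit itself is crossed."""
    return {
        "budgetWarning": limit_usd * warning_ratio <= spend_usd < limit_usd,
        "budgetExceeded": spend_usd >= limit_usd,
    }
```

For a 50 USD daily limit, sample telemetry showing 45 USD of spend should set budgetWarning, and 55 USD should set budgetExceeded.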
Threshold defaults (start simple, then refine)
Avoid over-engineering alerts on day one. A clear warning threshold plus a clear exceeded threshold beats a complex policy nobody trusts.
Once you have stable baselines, add burn-rate checks and per-endpoint or per-tenant thresholds for your top drivers.
- Warning: early signal while you still have time to contain spend.
- Exceeded: incident trigger that requires an owner decision (approve overrun or degrade).
- Burn-rate: detects "something changed" even when totals are still small.
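Once baselines are stable, the burn-rate check above can be sketched as a simple multiplier test. The 3x default is an assumption to tune against your own baseline noise:

```python
def burn_rate_alert(recent_usd_per_hour: float,
                    baseline_usd_per_hour: float,
                    multiplier: float = 3.0) -> bool:
    """Flag 'something changed': recent burn rate far above baseline,
    even when absolute totals are still small."""
    if baseline_usd_per_hour <= 0:
        # No baseline yet: any spend is a change worth seeing.
        return recent_usd_per_hour > 0
    return recent_usd_per_hour >= baseline_usd_per_hour * multiplier
```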
What alerts should trigger operationally
- Check endpoint and tenant concentration immediately.
- Verify recent promptVersion rollouts.
- Review retry storms and abnormal traffic bursts.
Alert payload (what the on-call needs to respond fast)
- Budget state (warning/exceeded) + current burn-rate vs baseline
- Top endpointTag drivers (cost/request deltas)
- Top tenants/users and concentration percentage
- PromptVersion changes in the same window (deploy correlation)
- Retry ratio and latency shifts (multiplier detection)
- Unknown-model ratio (pricing coverage issues)
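Put together, the payload fields above might look like this. The structure is an illustrative assumption, not a fixed schema; only the field list comes from this guide:

```python
# Example on-call alert payload covering the fields listed above.
example_payload = {
    "budgetState": "exceeded",  # "warning" | "exceeded"
    "burnRateUsdPerHour": {"current": 4.2, "baseline": 0.9},
    "topEndpointTags": [{"tag": "chat-completion", "costDeltaUsd": 31.0}],
    "topTenants": [{"tenant": "acme", "sharePct": 62}],
    "promptVersionChanges": ["v41 -> v42"],
    "retryRatio": 0.18,          # retries / total requests in the window
    "unknownModelRatio": 0.02,   # requests without pricing coverage
}
```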
Runbook: what to do when exceeded triggers
- Confirm alert is real (dataMode, environment, time window).
- Identify the dominant driver (endpointTag + tenant/user).
- Contain first: cap output, throttle non-critical endpoints, or route to cheaper tiers.
- Stop multipliers: reduce retries, fix timeouts, block abuse patterns.
- Write one post-incident policy update so the same incident does not repeat.
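The "contain first" step can be sketched as a request-shaping function. The parameter names ("maxTokens", "modelTier") and the 256-token cap are hypothetical placeholders for whatever knobs your stack exposes:

```python
def contain_request(params: dict, non_critical: bool) -> dict:
    """Containment sketch: cap output tokens, and route non-critical
    traffic to a cheaper model tier while the incident is open."""
    contained = dict(params)
    contained["maxTokens"] = min(contained.get("maxTokens", 1024), 256)
    if non_critical:
        contained["modelTier"] = "cheap"
    return contained
```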
Common mistake
Teams configure alerts but do not define an action owner. Treat alerts as workflow triggers, not inbox noise.
What to alert on
- burn-rate acceleration vs baseline
- endpointTag concentration changes in short windows
- unexpected tenant concentration in Top Users
- budget warning, spend-alert, and exceeded state transitions
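The concentration conditions above reduce to one metric: the share of spend held by the top contributors. A minimal sketch, assuming you already have per-tenant (or per-endpointTag) cost totals for a window:

```python
def top_share(cost_by_key: dict, top_n: int = 1) -> float:
    """Fraction of window spend held by the top-N keys (tenants or
    endpointTags). A sudden jump between short windows is an alert condition."""
    total = sum(cost_by_key.values())
    if total == 0:
        return 0.0
    top = sorted(cost_by_key.values(), reverse=True)[:top_n]
    return sum(top) / total
```

Comparing `top_share` for the last hour against the same hour yesterday catches concentration shifts before absolute budgets are threatened.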
Execution checklist
- Confirm alert is real: dataMode, environment, and time window.
- Identify dominant endpointTag and tenant/user contributors.
- Contain: cap output, lower max tokens, or throttle non-critical paths.
- Assign one incident owner and one communication channel.
- Update policy thresholds or ownership to prevent repeat incidents.
FAQ
Is userId required?
No. userId is optional but recommended for tenant-level attribution. If needed, send a hashed identifier instead of the raw value.
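A hashed identifier can be produced with a one-way hash plus a salt. This is a sketch; the salt handling and truncation policy are your own choices, not a required scheme:

```python
import hashlib

def hashed_user_id(raw_user_id: str, salt: str) -> str:
    """Stable, non-reversible identifier to send as userId.
    The salt prevents hashes from being correlated across unrelated systems."""
    return hashlib.sha256(f"{salt}:{raw_user_id}".encode("utf-8")).hexdigest()
```

The same input always maps to the same hash, so tenant-level attribution still works without exposing the raw identifier.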
Where should token usage values come from?
Prefer provider usage fields first. If unavailable, use tokenizer estimates and mark uncertainty in your workflow.
How should retries be handled?
Keep the same externalRequestId for the same logical request so idempotency remains stable across retries.
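A retry wrapper that preserves one externalRequestId might look like this. `send` stands in for your provider or telemetry call; the retry policy itself is an illustrative assumption:

```python
import uuid

def call_with_retries(send, payload: dict, max_attempts: int = 3):
    """Retry one logical request while keeping a single externalRequestId,
    so the backend can deduplicate the attempts into one request."""
    payload = dict(payload)
    payload.setdefault("externalRequestId", str(uuid.uuid4()))
    for attempt in range(1, max_attempts + 1):
        try:
            return send(payload)
        except Exception:
            if attempt == max_attempts:
                raise  # out of attempts; surface the real error
```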
Can telemetry break production flow?
It should not. Use short timeouts, catch errors, and keep telemetry asynchronous so provider calls keep running.
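One way to keep telemetry off the critical path is a fire-and-forget thread, sketched below. Any network timeout should live inside `send` itself; the wrapper only guarantees the caller never blocks or sees a telemetry error:

```python
import threading

def emit_telemetry(send, event: dict) -> None:
    """Fire-and-forget emission: run send on a daemon thread and swallow
    errors, so provider calls keep running regardless of telemetry health."""
    def _run():
        try:
            send(event)
        except Exception:
            pass  # telemetry must never break production flow
    threading.Thread(target=_run, daemon=True).start()
```

A queue-and-batch worker is a common next step once volume grows, but the same rule applies: errors stay inside the telemetry path.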
Related guides
Evaluation resources
For security and procurement reviews, use our trust summary before final tool selection.