OpenAI token pricing changes: keep your cost table updated
Model prices change over time. If your cost model is not versioned, your historical reporting becomes unreliable.
Full guide: "LLM pricing tables: keep costs accurate and handle unknown models"
Core rule: snapshot request cost at ingest
Store inputCostUsd, outputCostUsd, and totalCostUsd on each request row.
Historical rows should remain stable even if provider price tables change later.
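As a minimal sketch of snapshotting at ingest: the rate is looked up once, the three cost fields are computed, and the result is frozen on the request row so later catalog changes cannot alter it. The RATES table and its per-million-token figures are illustrative assumptions, not real prices; the field names match the payload example later in this guide.

```python
from decimal import Decimal

# Hypothetical per-million-token rates; in practice these come from your
# versioned pricing catalog, not a hardcoded dict.
RATES = {
    ("openai", "gpt-4o-mini"): {"input": Decimal("0.15"), "output": Decimal("0.60")},
}

def snapshot_cost(provider, model, input_tokens, output_tokens):
    """Compute cost once at ingest and store it on the request row.

    Historical rows keep these frozen values even if RATES changes later.
    """
    rate = RATES[(provider, model)]
    input_cost = rate["input"] * input_tokens / 1_000_000
    output_cost = rate["output"] * output_tokens / 1_000_000
    return {
        "inputCostUsd": input_cost,
        "outputCostUsd": output_cost,
        "totalCostUsd": input_cost + output_cost,
    }

# Matches the token counts in the payload example below.
row = snapshot_cost("openai", "gpt-4o-mini", 320, 120)
```

Decimal is used instead of float so per-request micro-costs sum cleanly across millions of rows.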
Pricing sync workflow
- Pull source pricing feed on schedule.
- Update model catalog entries with effective timestamps.
- Mark unknown or disabled models explicitly.
- Validate catalog updates before applying to ingestion path.
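The validation step above can be sketched as a simple guard that runs before a catalog update reaches the ingestion path. The field names (inputPerMTokUsd, source, verifiedAt, and so on) are assumptions chosen to mirror the rules in this guide, not a fixed schema.

```python
# Required fields include the pricing source and verification timestamp,
# per the catalog rules in this guide. Names are illustrative.
REQUIRED_FIELDS = {
    "provider", "model", "effectiveFrom",
    "inputPerMTokUsd", "outputPerMTokUsd",
    "source", "verifiedAt",
}

def validate_catalog_update(entry):
    """Reject malformed catalog entries before they can price live traffic."""
    missing = REQUIRED_FIELDS - entry.keys()
    if missing:
        raise ValueError(f"catalog entry missing fields: {sorted(missing)}")
    if entry["inputPerMTokUsd"] < 0 or entry["outputPerMTokUsd"] < 0:
        raise ValueError("rates must be non-negative")
    return entry

valid_entry = validate_catalog_update({
    "provider": "openai",
    "model": "gpt-4o-mini",
    "effectiveFrom": "2024-07-01T00:00:00Z",
    "inputPerMTokUsd": 0.15,
    "outputPerMTokUsd": 0.60,
    "source": "provider-pricing-page",
    "verifiedAt": "2024-07-02T09:00:00Z",
})
```

Rejecting bad entries here, rather than at ingest, keeps pricing failures out of the hot path.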
Failure modes to avoid
- Backfilling historical requests with new prices
- Silent unknown-model fallback without visibility
- Mixing test/demo traffic into production pricing decisions
Version rates by effective date (not by deploy date)
Pricing changes are a time-series problem. The same model name can have different effective rates at different dates.
Store effectiveFrom timestamps and apply the correct rate at ingest time so charts remain consistent.
- Keep one catalog row per provider/model/effectiveFrom.
- Do not overwrite older catalog rows; append new versions.
- Record the pricing source and verification timestamp.
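The append-only, effective-dated catalog above can be sketched as a lookup that picks the newest row whose effectiveFrom is on or before the request timestamp. The catalog rows and rates here are invented for illustration.

```python
from datetime import datetime, timezone

# Append-only catalog: one row per provider/model/effectiveFrom.
# Older rows are never overwritten; a price change appends a new row.
CATALOG = [
    {"provider": "openai", "model": "gpt-4o-mini",
     "effectiveFrom": datetime(2024, 1, 1, tzinfo=timezone.utc),
     "inputPerMTokUsd": 0.25, "outputPerMTokUsd": 1.00},
    {"provider": "openai", "model": "gpt-4o-mini",
     "effectiveFrom": datetime(2024, 7, 1, tzinfo=timezone.utc),
     "inputPerMTokUsd": 0.15, "outputPerMTokUsd": 0.60},
]

def rate_at(provider, model, ts):
    """Return the catalog row in effect at the request timestamp."""
    candidates = [
        r for r in CATALOG
        if r["provider"] == provider and r["model"] == model
        and r["effectiveFrom"] <= ts
    ]
    if not candidates:
        raise LookupError(f"no rate for {provider}/{model} at {ts}")
    return max(candidates, key=lambda r: r["effectiveFrom"])

# A March request prices at the January rate; an August request at the July rate.
march = rate_at("openai", "gpt-4o-mini",
                datetime(2024, 3, 15, tzinfo=timezone.utc))
august = rate_at("openai", "gpt-4o-mini",
                 datetime(2024, 8, 15, tzinfo=timezone.utc))
```

Because the lookup keys on request time rather than deploy time, re-ingesting an old request still resolves the same rate it was originally priced at.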
Unknown models are an operations queue
Unknown models are not just a data issue; they are a reporting risk. If you cannot price traffic, you cannot trust cost totals.
Treat unknown-model rows as a daily triage workflow with an owner and SLA.
- Alert when unknown-model ratio exceeds threshold.
- Create a pricing request with evidence and effective date.
- Approve and publish the catalog update.
- Verify new ingests are priced without rewriting history.
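The alerting step in the triage workflow above can be sketched as a ratio check over recent request rows. The pricingStatus field and the 1% threshold are illustrative assumptions; tune the threshold to your traffic volume.

```python
def unknown_model_ratio(rows):
    """Fraction of recent requests that could not be priced."""
    if not rows:
        return 0.0
    unknown = sum(1 for r in rows if r.get("pricingStatus") == "unknown_model")
    return unknown / len(rows)

# Example: page the pricing owner when more than 1% of traffic is unpriced.
ALERT_THRESHOLD = 0.01

rows = ([{"pricingStatus": "priced"}] * 98
        + [{"pricingStatus": "unknown_model"}] * 2)
ratio = unknown_model_ratio(rows)
should_alert = ratio > ALERT_THRESHOLD
```

The point of the explicit pricingStatus marker is visibility: a silent fallback price would make this ratio impossible to compute.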
Token classes: cached, reasoning, and other usage fields
Providers may expose multiple usage fields that do not map cleanly to simple input/output billing. Keep raw usage fields and normalize with documented rules.
When usage is missing, treat it as uncertainty rather than zero.
- Store raw usage fields (source of truth) alongside normalized totals.
- Document mapping rules and update them when provider fields change.
- Reconcile against invoice exports to validate the mapping.
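A normalization sketch under those rules might look like the following. The input field names follow OpenAI-style usage objects (prompt_tokens, completion_tokens, prompt_tokens_details.cached_tokens), but treat the mapping as an assumption to be documented and validated against invoice exports, since providers change these fields.

```python
def normalize_usage(raw):
    """Map raw provider usage fields to normalized billing classes.

    Keep `raw` stored as the source of truth; this function only derives
    normalized totals from it.
    """
    if raw is None:
        # Missing usage is uncertainty, not zero: flag it instead of
        # silently pricing the request at $0.
        return {"inputTokens": None, "cachedInputTokens": None,
                "outputTokens": None, "usageMissing": True}
    cached = raw.get("prompt_tokens_details", {}).get("cached_tokens", 0)
    return {
        # Uncached portion of the prompt, billed at the full input rate.
        "inputTokens": raw.get("prompt_tokens", 0) - cached,
        # Cached prompt tokens are often billed at a discounted rate.
        "cachedInputTokens": cached,
        # Completion count may include reasoning tokens for some models.
        "outputTokens": raw.get("completion_tokens", 0),
        "usageMissing": False,
    }

normalized = normalize_usage({
    "prompt_tokens": 320,
    "completion_tokens": 120,
    "prompt_tokens_details": {"cached_tokens": 100},
})
```

Splitting cached from uncached input at normalization time lets the pricing layer apply distinct rates without re-parsing raw usage.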
Reconciliation: keep finance trust
- Export provider usage totals for the billing period.
- Compare to internal aggregates for the same UTC window.
- Investigate deltas: unknown models, missing usage, window mismatches.
- Record the outcome with owner and timestamp.
- Attach reconciliation notes to your monthly CFO pack.
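The comparison step in the checklist above can be sketched as a per-model diff between internal aggregates and the provider export for the same UTC window. The tolerance value is an assumption; set it to whatever rounding slack your provider's export introduces.

```python
def reconcile(internal_totals, provider_totals, tolerance_usd=0.01):
    """Diff internal cost aggregates against a provider export.

    Both inputs map model name -> total USD for the same UTC window.
    Returns only the models whose delta exceeds the tolerance; an empty
    dict means the window reconciles.
    """
    deltas = {}
    for model in set(internal_totals) | set(provider_totals):
        diff = internal_totals.get(model, 0.0) - provider_totals.get(model, 0.0)
        if abs(diff) > tolerance_usd:
            deltas[model] = round(diff, 6)
    return deltas

clean = reconcile({"gpt-4o-mini": 12.34}, {"gpt-4o-mini": 12.34})
drift = reconcile({"gpt-4o-mini": 10.00}, {"gpt-4o-mini": 9.00})
```

Each nonzero delta then gets an investigation note (unknown models, missing usage, window mismatch) with an owner and timestamp, per the checklist above.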
What to send (payload example)
{
  "externalRequestId": "req_01HZXB6MQZ2WQ9D2KCF9M4V2QY",
  "provider": "openai",
  "model": "gpt-4o-mini",
  "endpointTag": "catalog.pricing_reconcile",
  "promptVersion": "pricing_v2",
  "userId": "tenant_acme_hash",
  "inputTokens": 320,
  "outputTokens": 120,
  "latencyMs": 892,
  "status": "success",
  "dataMode": "real",
  "environment": "prod"
}
Common mistakes
- Overwriting historical price snapshots instead of versioning by effective date.
- Ignoring unknown-model rows until dashboards become untrustworthy.
- Missing cached/reasoning token nuances and mispricing requests.
- Mixing test/demo traffic into production pricing reconciliation.
How to verify in Opsmeter Dashboard
- Open Catalog to confirm model mapping and pricing effective dates.
- Check unknown-model visibility and resolve pending pricing rows.
- Spot-check cost snapshots on recent requests to validate ingestion accuracy.
- Reconcile aggregates against provider usage exports for the same window.
Evaluation resources
For security and procurement reviews, use our trust summary before final tool selection.