Opsmeter
AI Cost & Inference Control
Integration docs

Developer Docs

Ship telemetry fast with LLM cost tracking and OpenAI cost monitoring. Request-level budget alerts are available on Pro+ plans.

Updated for 2026 · API v1 · GitHub

Provider and model catalog

Before sending an ingest payload, always fetch GET /v1/public/pricing-models and use the provider/model strings exactly as returned.

Rule: 1) pull the catalog, 2) copy the provider/model strings exactly, 3) send the payload.

If the provider/model pair is not found in the catalog, ingest falls back to unknown and costUsd = 0 until pricing is added. The sketch below walks through the full sequence.
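
The following is a minimal sketch of the catalog-then-ingest flow in Python. Only GET /v1/public/pricing-models is documented on this page; the base URL, bearer-token auth, the POST /v1/ingest route, and the provider/model/token field names are placeholder assumptions, so substitute the values and schema from your Opsmeter dashboard.

```python
# Minimal sketch of the catalog-then-ingest flow. Everything except
# GET /v1/public/pricing-models is an assumption: the base URL, bearer auth,
# the POST /v1/ingest route, and the payload field names are placeholders.
import os
import requests

BASE_URL = "https://api.opsmeter.example"  # assumption: replace with your real base URL
HEADERS = {"Authorization": f"Bearer {os.environ['OPSMETER_API_KEY']}"}  # assumption: bearer auth

# 1) Pull the catalog.
resp = requests.get(f"{BASE_URL}/v1/public/pricing-models", headers=HEADERS, timeout=10)
resp.raise_for_status()
catalog = resp.json()

# 2) Copy the provider/model strings exactly as returned (no renaming or re-casing).
entry = next(m for m in catalog if m["provider"] == "openai" and m["model"] == "gpt-4o-mini")

# 3) Send the ingest payload using those exact strings.
payload = {
    "provider": entry["provider"],
    "model": entry["model"],
    "inputTokens": 1200,   # assumption: token field names may differ in your schema
    "outputTokens": 350,
}
ingest = requests.post(f"{BASE_URL}/v1/ingest", json=payload, headers=HEADERS, timeout=10)
ingest.raise_for_status()
print(ingest.json())  # an unknown provider/model would come back with costUsd = 0
```

If the print at the end shows costUsd = 0 for a model you expected to be priced, recheck step 2: the strings must match the catalog byte for byte.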

OpenAI

Model           Input Price            Output Price            Effective from
gpt-4o-mini     $0.150 / 1M tokens     $0.600 / 1M tokens      2026-02-06 13:38
gpt-4-1         $2.000 / 1M tokens     $8.000 / 1M tokens      2026-02-06 13:38
gpt-4-1-mini    $0.400 / 1M tokens     $1.600 / 1M tokens      2026-02-06 13:38
gpt-4o          $2.500 / 1M tokens     $10.000 / 1M tokens     2026-02-06 13:38
ada             $0.400 / 1M tokens     $0.400 / 1M tokens      2026-02-06 13:38
ada-batch       $0.200 / 1M tokens     $0.200 / 1M tokens      2026-02-06 13:38
babbage         $0.500 / 1M tokens     $0.500 / 1M tokens      2026-02-06 13:38
babbage-batch   $0.250 / 1M tokens     $0.250 / 1M tokens      2026-02-06 13:38
Showing models 1-8 of 160 in the catalog.
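
To make the per-1M-token units concrete, the sketch below estimates a request's cost from the table above. It assumes costUsd is simply tokens multiplied by the listed USD price and divided by 1,000,000, summed over input and output; Opsmeter's exact rounding behavior is not specified on this page.

```python
# Assumed arithmetic (not documented here): costUsd =
# (input_tokens * input_price + output_tokens * output_price) / 1_000_000,
# where the prices are the catalog's USD-per-1M-token figures.
def estimate_cost_usd(input_tokens: int, output_tokens: int,
                      input_price_per_1m: float, output_price_per_1m: float) -> float:
    return (input_tokens * input_price_per_1m
            + output_tokens * output_price_per_1m) / 1_000_000

# gpt-4o-mini from the catalog above: $0.150 in / $0.600 out per 1M tokens.
print(estimate_cost_usd(1200, 350, 0.150, 0.600))  # approximately 0.00039 USD
```

Treat the result as an estimate for sanity-checking dashboards; the costUsd recorded by ingest is authoritative.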