The LLM Capabilities and Pricing API is Live

Update: The Parsera API has been sunsetted. RubyLLM now uses models.dev for model capabilities and pricing.

The LLM Capabilities API I announced last month is live. Browse the models or hit the API directly.

id: gpt-4o-mini
name: GPT-4o mini
provider: openai
context_window: 128000
max_output_tokens: 16384

modalities:
  input:
    - text
    - image
  output:
    - text

capabilities:
  - function_calling
  - structured_output
  - streaming
  - batch

pricing:
  text_tokens:
    standard:
      input_per_million: 0.15
      output_per_million: 0.6
      cached_input_per_million: 0.075

Parsera scrapes provider docs and keeps the data current. Context windows, pricing, capabilities, and modalities are all in one place.

Already in RubyLLM

RubyLLM 1.3.0 pulls from this API directly:

RubyLLM.models.refresh!

model = RubyLLM.models.find("gpt-4.1-nano")
puts model.context_window        # => 1047576
puts model.capabilities          # => ["batch", "function_calling", "structured_output"]
puts model.pricing.text_tokens.standard.input_per_million  # => 0.1
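Since pricing is expressed per million tokens, estimating the cost of a request is a one-liner. A minimal sketch using the GPT-4o mini rates shown above, with hypothetical token counts:

```ruby
# Per-million-token rates from the pricing example above.
input_per_million  = 0.15
output_per_million = 0.6

# Hypothetical request sizes for illustration.
input_tokens  = 2_000
output_tokens = 500

# Scale each side by its rate and sum.
cost = (input_tokens  / 1_000_000.0) * input_per_million +
       (output_tokens / 1_000_000.0) * output_per_million
puts format("$%.6f", cost)  # => $0.000600
```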

The API is open to everyone: any language, any framework. Found a missing model? Report it.
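Consuming the data outside RubyLLM is just JSON parsing. A sketch in plain Ruby, using a hypothetical excerpt of a response (field names follow the YAML example above; the endpoint URL is not reproduced here):

```ruby
require "json"

# Hypothetical excerpt of an API response for one model.
payload = <<~JSON
  {
    "id": "gpt-4o-mini",
    "context_window": 128000,
    "capabilities": ["function_calling", "structured_output", "streaming", "batch"],
    "pricing": { "text_tokens": { "standard": { "input_per_million": 0.15 } } }
  }
JSON

model = JSON.parse(payload)
puts model["context_window"]  # => 128000
puts model.dig("pricing", "text_tokens", "standard", "input_per_million")  # => 0.15
```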

Providers should expose this data themselves. Until they do, this works.
