I released RubyLLM 1.15 today.
It ships image editing, cost tracking, cleaner token accounting, inferred tool parameters, additive callbacks, and Rails fixes.
The theme is simple: stop making me write glue code. If the computer can infer it, RubyLLM should infer it. If a provider reports usage, RubyLLM should turn it into cost. If Rails already has a blob, RubyLLM should not download it and upload it again.
Image Editing
RubyLLM.paint could already generate images:
image = RubyLLM.paint("A watercolor robot holding a Ruby gem")
Now, passing with: turns it into an image edit:
image = RubyLLM.paint(
  "Turn the logo green and keep the background transparent",
  model: "gpt-image-1",
  with: "logo.png"
)
Same method, same attachment shape.
The source can be a path, a URL, an IO-like object, or an Active Storage attachment. Multiple source images work too:
image = RubyLLM.paint(
  "Combine these references into a postcard illustration",
  model: "gpt-image-1",
  with: ["person.png", "style-reference.png"]
)
And if you need to constrain the edit, pass a mask:
image = RubyLLM.paint(
  "Replace only the background with a sunset sky",
  model: "gpt-image-1",
  with: "portrait.png",
  mask: "portrait-mask.png"
)
That’s it. paint paints. Sometimes from scratch, sometimes from an existing image.
Cost Tracking
RubyLLM has tracked tokens since 1.0. But “this used 18,432 tokens” is only half the answer. The next question is always: how much did that cost?
Calculating that was never hard. Take the input tokens, output tokens, cached tokens, maybe reasoning tokens. The pricing is already in RubyLLM’s model registry. Multiply by the per-million rate.
But why should every app have to write that code?
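That glue typically looked something like this. A sketch, not RubyLLM code, and the per-million rates are made up for illustration:

```ruby
# Hand-rolled cost math every app used to carry around.
# Rates are hypothetical per-million-token prices, not real pricing.
INPUT_RATE_PER_MILLION  = 3.00
OUTPUT_RATE_PER_MILLION = 15.00

def handrolled_cost(input_tokens:, output_tokens:)
  (input_tokens / 1_000_000.0) * INPUT_RATE_PER_MILLION +
    (output_tokens / 1_000_000.0) * OUTPUT_RATE_PER_MILLION
end

handrolled_cost(input_tokens: 12_000, output_tokens: 6_432)
```

Every app wrote some version of this, looked up the rates somewhere, and forgot to update them when pricing changed.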
RubyLLM already has the usage. RubyLLM already knows the model. RubyLLM already ships the model registry. So now it does the boring math for you.
Now you can ask:
response = chat.ask("Summarize Ruby's object model.")
response.cost.total
chat.cost.total
agent.cost.total
Same for images:
image = RubyLLM.paint("A small watercolor robot", model: "gpt-image-1")
image.tokens.input
image.tokens.output
image.cost.input
image.cost.output
image.cost.total
If RubyLLM does not have pricing for part of the usage, the cost is nil. Better no answer than a fake one.
A chat with ten messages can tell you the total. An agent can tell you the total. A generated image can tell you the total. No more handrolled sums.
Token Counts That Mean What They Say
Prompt caching made token counts messy.
Some providers include cache reads in prompt tokens. Some report cache creation separately. Some don’t. If you multiply the wrong number by the wrong price, your cost tracking is wrong before it starts.
So 1.15 separates the different kinds of tokens before exposing them:
response.tokens.input # standard input tokens
response.tokens.output # billable output tokens
response.tokens.cache_read # prompt cache reads
response.tokens.cache_write # prompt cache writes
tokens.input now means normal input tokens. Cache reads and cache writes are separate. tokens.output always means billable output tokens.
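With the buckets separated, cost is a per-bucket sum, each bucket multiplied by its own rate. A sketch with illustrative rates (not any provider's real pricing; cache reads are typically billed at a steep discount):

```ruby
# One rate per token bucket, hypothetical numbers in USD per million tokens.
RATES = {
  input:       3.00,
  output:      15.00,
  cache_read:  0.30,
  cache_write: 3.75
}.freeze

def cost_for(tokens)
  tokens.sum { |bucket, count| (count / 1_000_000.0) * RATES.fetch(bucket) }
end

tokens = { input: 4_000, output: 1_200, cache_read: 14_000, cache_write: 0 }
cost_for(tokens)
```

With these made-up rates, lumping the 14,000 cache reads into input would bill them at $3 per million instead of $0.30 per million, a 10x error on most of the prompt.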
The old top-level helpers still work. New code should use response.tokens.*.
No new Rails migration is required if you already ran the 1.9 token migration. If you display token counts directly, read the 1.15 upgrade notes.
Less Tool Boilerplate
Tools in RubyLLM are Ruby classes. But for very simple tools, RubyLLM still made you repeat yourself:
class Weather < RubyLLM::Tool
  description "Gets current weather for a location"

  param :latitude   # why?
  param :longitude  # DRY!

  def execute(latitude:, longitude:)
    # ...
  end
end
That is silly. The method signature already says there is a latitude and a longitude.
Now this works:
class Weather < RubyLLM::Tool
  desc "Gets current weather for a location"

  def execute(latitude:, longitude:, units: "metric")
    # ...
  end
end
Required keywords become required string parameters. Optional keywords become optional string parameters.
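Plain Ruby makes this inference cheap: Method#parameters reports :keyreq for required keywords and :key for optional ones. This is a sketch of the mechanism, not RubyLLM's internals:

```ruby
# Infer tool parameters from a method's keyword arguments.
# :keyreq => required keyword, :key => optional keyword.
def infer_params(method)
  method.parameters.filter_map do |kind, name|
    case kind
    when :keyreq then { name: name, type: "string", required: true }
    when :key    then { name: name, type: "string", required: false }
    end
  end
end

class Weather
  def execute(latitude:, longitude:, units: "metric"); end
end

infer_params(Weather.instance_method(:execute))
```

The signature alone gets you names and requiredness, which is exactly what the simple case needs.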
Ruby method signatures don’t tell us JSON Schema types or descriptions, so if those matter, keep using param:
param :units, type: :string, desc: "metric or imperial", required: false
And when you need nested objects, arrays, enums, or full schema control, use params. Nothing changed there.
Also:
Also:
- desc is now an alias for description
- param accepts description: as an alias for desc:
- the tool generator now emits desc
- full backwards compatibility is retained
Callbacks That Stack
The old on_* callbacks were replace-style callbacks. Register another one and you replaced the previous one.
That caused an obvious problem: Rails persistence wants callbacks, and your app also wants callbacks. Logging wants callbacks. Analytics wants callbacks. Replacing the previous callback is the wrong default.
So 1.15 adds additive callbacks:
chat.before_message { ... }
chat.after_message { |message| ... }
chat.before_tool_call { |tool_call| ... }
chat.after_tool_result { |result| ... }
Register five callbacks, all five run.
Rails persistence uses these internally now. Your app can layer its own callbacks on top without breaking persistence.
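The difference fits in a few lines: replace-style stores one handler, additive style appends to a list and runs them all. A sketch of the pattern, not RubyLLM's implementation:

```ruby
# Additive callbacks: each registration appends, emit runs them in order.
class Chat
  def initialize
    @after_message = []
  end

  def after_message(&block)
    @after_message << block
  end

  def emit(message)
    @after_message.each { |cb| cb.call(message) }
  end
end

chat = Chat.new
seen = []
chat.after_message { |m| seen << [:persist, m] } # e.g. Rails persistence
chat.after_message { |m| seen << [:log, m] }     # e.g. app logging
chat.emit("hello")
```

Both blocks fire for every message, so persistence and your own logging no longer fight over a single slot.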
The old on_* callbacks are deprecated. They’ll go away in RubyLLM 2.0.
Rails Fixes
Rails got a lot of boring, important fixes:
- Action Text-backed message content is converted to plain text before being sent to the model.
- ActiveRecord support no longer sits in the core gem eager-load path, fixing standalone require "ruby_llm" with Zeitwerk eager loading.
- The acts_as API follows Rails association inference more closely.
- Existing Active Storage blobs and attachments passed through with: are reused instead of downloaded and re-uploaded.
Providers and Models
Empty tool results are now handled consistently across Anthropic, Bedrock, and Gemini. When a tool returns nothing, RubyLLM sends a small placeholder instead of provider-invalid empty content.
Streaming and non-streaming token usage is normalized across OpenAI, OpenRouter, Bedrock, and Gemini before cost calculation.
The model registry has been refreshed too: cache read/write pricing, reasoning output pricing, GPT Image pricing, and new aliases including Claude Opus 4.7, DeepSeek V4, Gemini Embedding 2, Gemma 4, and GPT-5.5.
Use It
gem 'ruby_llm', '~> 1.15'
Then:
bundle update ruby_llm
Full release notes on GitHub.