RubyLLM 1.3.0 ships three things: attachments that figure themselves out, isolated configuration contexts for multi-tenant apps, and the end of manually tracking model capabilities.
Attachments
Before, you had to tell RubyLLM what kind of file you were sending:
chat.ask "What's in this image?", with: { image: "diagram.png" }
chat.ask "Describe this meeting", with: { audio: "meeting.wav" }
chat.ask "Summarize this document", with: { pdf: "contract.pdf" }
Now just hand it the file:
chat.ask "What's in this file?", with: "diagram.png"
chat.ask "Describe this meeting", with: "meeting.wav"
chat.ask "Summarize this document", with: "contract.pdf"
# Multiple files, mixed types
chat.ask "Analyze these files", with: [
"quarterly_report.pdf",
"sales_chart.jpg",
"customer_interview.wav",
"meeting_notes.txt"
]
# URLs work too
chat.ask "What's in this image?", with: "https://example.com/chart.png"
RubyLLM detects the type and does the right thing. You shouldn’t have to think about file types when the computer can figure it out.
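Detection like this can be as simple as mapping file extensions to attachment categories. Here is a minimal, illustrative sketch of the idea — not RubyLLM's actual implementation, which also inspects MIME types; the category names are assumptions:

```ruby
# Illustrative only: map a file path or URL to an attachment category
# by extension. Real detection would also consider MIME types.
EXTENSION_CATEGORIES = {
  %w[.png .jpg .jpeg .gif .webp] => :image,
  %w[.wav .mp3 .m4a .flac]       => :audio,
  %w[.pdf]                       => :pdf,
  %w[.txt .md .csv]              => :text
}.freeze

def attachment_category(source)
  ext = File.extname(source.to_s).downcase
  EXTENSION_CATEGORIES.each do |extensions, category|
    return category if extensions.include?(ext)
  end
  :unknown
end

attachment_category("diagram.png")                   # => :image
attachment_category("https://example.com/chart.png") # => :image
attachment_category("meeting.wav")                   # => :audio
```

Note that File.extname works on URLs as well as local paths, which is why the same lookup covers both cases above.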
Configuration Contexts
Global config is fine until you need different API keys per customer. Passing config objects everywhere is tedious. So we built contexts:
tenant_context = RubyLLM.context do |config|
config.openai_api_key = tenant.openai_key
config.anthropic_api_key = tenant.anthropic_key
config.request_timeout = 180
end
response = tenant_context.chat.ask("Process this customer request...")
# Global configuration stays untouched
RubyLLM.chat.ask("This still uses your default settings")
Each context is isolated, thread-safe, and garbage-collected when you’re done with it. Works for multi-tenancy, A/B testing providers, or anything where you need scoped configuration.
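The pattern behind contexts is straightforward: copy the global config, let the block mutate the copy, and return an object bound to that copy. A simplified sketch of the idea — the class and method names here are hypothetical, not the gem's internals:

```ruby
# Simplified illustration of isolated configuration contexts.
# MiniLLM and Context are made-up names, not RubyLLM internals.
require 'ostruct'

class MiniLLM
  @config = OpenStruct.new(request_timeout: 120)

  class << self
    attr_reader :config

    # Yield a copy of the global config; mutations stay local to the context.
    def context
      scoped = config.dup
      yield scoped
      Context.new(scoped)
    end
  end

  Context = Struct.new(:config)
end

tenant = MiniLLM.context { |c| c.request_timeout = 180 }

tenant.config.request_timeout  # => 180
MiniLLM.config.request_timeout # => 120, global untouched
```

Because each context holds its own config copy, nothing is shared between tenants and nothing leaks back into the global default.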
Ollama
Your dev machine shouldn’t phone home to OpenAI every time you want to test something:
RubyLLM.configure do |config|
config.ollama_api_base = 'http://localhost:11434/v1'
end
chat = RubyLLM.chat(model: 'mistral', provider: 'ollama')
response = chat.ask("Explain Ruby's eigenclass")
Same API, local model. Good for development, testing, compliance constraints, and keeping costs down.

OpenRouter
One API key, hundreds of models:
RubyLLM.configure do |config|
config.openrouter_api_key = ENV['OPENROUTER_API_KEY']
end
chat = RubyLLM.chat(model: 'anthropic/claude-3.5-sonnet', provider: 'openrouter')
No More Manual Model Tracking
Update: RubyLLM has since moved from Parsera to models.dev for model data.
We’ve been maintaining model capabilities and pricing by hand since 1.0. Every time a provider changes pricing or ships a new model, someone updates a file. That’s over.
We partnered with Parsera to build a continuously updated API that scrapes model information from provider docs. RubyLLM.models.refresh! now pulls from that API. Context windows, pricing, capabilities, and modalities are always current.
We kept our capabilities.rb files for older models that providers don’t document well anymore. Between the two sources, virtually every model worth using is covered.
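The two-source lookup can be pictured as a simple merge: prefer the continuously updated API's data, and fall back to the local capabilities file for models the API no longer covers. An illustrative sketch — the data and helper name are made up for this example:

```ruby
# Illustrative two-source model lookup: prefer freshly fetched data,
# fall back to locally maintained capabilities. All names hypothetical.
REMOTE_MODELS = {
  'gpt-4o' => { context_window: 128_000, input_price: 2.50 }
}.freeze

LOCAL_CAPABILITIES = {
  'gpt-4o'           => { context_window: 128_000 },
  'text-davinci-003' => { context_window: 4_097 } # older, undocumented upstream
}.freeze

def model_info(id)
  REMOTE_MODELS[id] || LOCAL_CAPABILITIES[id] ||
    raise(KeyError, "unknown model: #{id}")
end

model_info('gpt-4o')           # fresh data, includes pricing
model_info('text-davinci-003') # served from the local fallback
```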
Rails
ActiveStorage now works properly with attachments:
class Message < ApplicationRecord
acts_as_message
has_many_attached :attachments
end
chat_record.ask("Analyze this upload", with: params[:uploaded_file])
chat_record.ask("What's in my document?", with: user.profile_document)
chat_record.ask("Review these files", with: params[:files])
Full parity with the plain Ruby implementation.
Also in 1.3.0
- Custom embedding dimensions: RubyLLM.embed("text", model: "text-embedding-3-small", dimensions: 512)
- Enterprise OpenAI: organization and project ID support
- Ruby 3.1–3.4, Rails 7.1–8.0: Officially tested
- 13 new contributors across foreign key fixes, HTTP proxy support, and more
Thanks to @papgmez, @timaro, @rhys117, @bborn, @xymbol, @roelbondoc, @max-power, @itstheraj, @stadia, @tpaulshippy, @Sami-Tanquary, and @seemiller.
gem 'ruby_llm', '1.3.0'
Full backward compatibility. GitHub.