Ruby Concurrency: What Actually Happens
Every 'what happens when' question about Ruby concurrency, answered with diagrams.
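One flavor of "what happens when," as a minimal runnable sketch (no RubyLLM involved, just stdlib): CPU-bound threads serialize under the GVL, while IO-bound threads overlap their waits.

```ruby
require "benchmark"

cpu = -> { 5_000_000.times { Math.sqrt(42) } }
io  = -> { sleep 0.5 }

# CPU-bound: the GVL lets only one thread run Ruby code at a time,
# so two threads take roughly as long as doing the work twice.
puts Benchmark.realtime { 2.times.map { Thread.new(&cpu) }.each(&:join) }

# IO-bound: threads release the GVL while waiting on IO,
# so both sleeps overlap and the total stays near 0.5s.
puts Benchmark.realtime { 2.times.map { Thread.new(&io) }.each(&:join) }
```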
I tried Async::Job for my LLM apps, hit its limits, and patched Solid Queue to run jobs as fibers instead.
The Ruby community doesn't have a great documentation theme. So I made one. Jekyll VitePress Theme brings VitePress's docs UX to Jekyll.
RubyLLM 1.14 ships a Tailwind chat UI, Rails generators for agents and tools, and a simplified config DSL. Watch the full setup in 1:46.
A pragmatic, code-first argument for Ruby as the best language to ship AI products in 2026.
Agents aren't magic. They're LLMs that can call your code. RubyLLM 1.12 adds a clean DSL to define and reuse them.
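The 1.12 agent DSL itself isn't reproduced here; as a grounded stand-in, this sketch uses RubyLLM's documented Tool API — the "LLMs that can call your code" mechanism agents build on. The weather values are hardcoded placeholders.

```ruby
require "ruby_llm"

class Weather < RubyLLM::Tool
  description "Gets the current weather for a set of coordinates"
  param :latitude, desc: "Latitude of the location"
  param :longitude, desc: "Longitude of the location"

  def execute(latitude:, longitude:)
    # Call your own weather service here; hardcoded for the sketch.
    { temperature_c: 21, conditions: "clear" }
  end
end

chat = RubyLLM.chat
chat.with_tool(Weather).ask "What's the weather at 52.52, 13.40?"
```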
Nano Banana hides behind Google's chat endpoint. Here's the straight line to ship it with RubyLLM.
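A sketch of that straight line, assuming RubyLLM's paint helper routes to Gemini's image model; the model id follows Google's public naming, but check the post for the exact one.

```ruby
require "ruby_llm"

# Model id is an assumption (Nano Banana = Gemini's flash image model);
# RubyLLM handles the chat-endpoint plumbing for you.
image = RubyLLM.paint(
  "a nano banana piloting a paper spaceship",
  model: "gemini-2.5-flash-image-preview"
)
image.save("nano_banana.png")
```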
Structured output that works, Rails generators that didn't exist, and why we shipped Wednesday, Friday, and Friday again.
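The structured-output piece, sketched with a plain JSON Schema hash; with_schema is RubyLLM's structured-output entry point, and the example values are made up.

```ruby
require "ruby_llm"

schema = {
  type: "object",
  properties: {
    name: { type: "string" },
    age:  { type: "integer" }
  },
  required: %w[name age]
}

chat = RubyLLM.chat
response = chat.with_schema(schema).ask("Extract the person: Alice is 30.")
response.content # => { "name" => "Alice", "age" => 30 } (parsed, not a raw string)
```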
How Ruby's async ecosystem transforms resource-intensive LLM applications into efficient, scalable systems, without rewriting your codebase.
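The core idea in miniature, assuming the async gem and a standard RubyLLM chat call: each blocking HTTP request parks its fiber, so many LLM calls proceed concurrently on a single thread.

```ruby
require "async"
require "ruby_llm"

prompts = ["Summarize document A", "Summarize document B", "Summarize document C"]

Async do
  # Each Async block is a fiber; while one waits on the LLM's response,
  # the scheduler runs the others, so all three calls overlap.
  tasks = prompts.map do |prompt|
    Async { RubyLLM.chat.ask(prompt) }
  end
  tasks.each { |task| puts task.wait.content }
end
```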
Attachments figure themselves out, contexts isolate configuration per tenant, and model data stays current automatically.
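The first two of those, sketched against the 1.3 API; the file path and tenant key are placeholders, and the model-refresh side appears in the next sketch.

```ruby
require "ruby_llm"

# Attachments: pass a path via with: and the type is detected automatically.
chat = RubyLLM.chat
chat.ask "What's in this document?", with: "reports/q1.pdf"

# Contexts: an isolated configuration that never touches the global one,
# e.g. one API key per tenant.
context = RubyLLM.context do |config|
  config.openai_api_key = ENV.fetch("TENANT_API_KEY") # placeholder key source
end
context.chat.ask "Hello from this tenant's configuration"
```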
The standard API for LLM model information I announced last month is now live and already integrated into RubyLLM 1.3.0.
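What "integrated" looks like from the Ruby side; the attribute names below are assumptions, but the refresh-then-inspect flow is the shape of RubyLLM's model registry.

```ruby
require "ruby_llm"

# Refresh the local registry from the model-info API, then inspect a model.
RubyLLM.models.refresh!

model = RubyLLM.models.find("gpt-4o")
model.context_window # capability data no provider API exposes directly
model.pricing        # attribute names are assumptions; see the registry docs
```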
No provider exposes model capabilities and pricing through their API. So we're building one.
One Ruby API for OpenAI, Claude, Gemini, and more. Chat, tools, streaming, Rails integration. No ceremony.
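The whole pitch in a few lines, using RubyLLM's documented surface: one configure block, one chat object, streaming via a plain Ruby block.

```ruby
require "ruby_llm"

RubyLLM.configure do |config|
  config.openai_api_key    = ENV["OPENAI_API_KEY"]
  config.anthropic_api_key = ENV["ANTHROPIC_API_KEY"]
end

# Same interface regardless of provider.
chat = RubyLLM.chat # uses the configured default model
chat.ask "What's the best way to learn Ruby?"

# Streaming: the block receives chunks as they arrive.
chat.ask "Tell me a story about a gem" do |chunk|
  print chunk.content
end
```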