Announcing llm_composer: an Elixir library for working with LLMs
Hi everyone! At Doofinder we've been building llm_composer for new apps and thought it would be useful to share it with the community. llm_composer is an Elixir library that simplifies working with large language models (LLMs) such as OpenAI (GPT), OpenRouter, Ollama, AWS Bedrock, and Google Gemini/Vertex AI.
Key features
- Multi-provider support (OpenAI, OpenRouter, Ollama, Bedrock, Google Gemini/Vertex AI)
- System prompts and message history management
- Streaming responses
- Function calls with optional auto-execution
- Structured outputs with JSON Schema validation
- Built-in cost tracking (currently for OpenRouter)
- Easy extensibility for custom use cases
Provider router and failover
A core feature is the provider router, which handles failover automatically: it uses the primary provider until a request fails, then falls back to the next provider in the list, applying an exponential backoff strategy. This makes llm_composer resilient in production environments where provider APIs can be temporarily unavailable (see the multi-provider example below).
Under the hood
llm_composer uses Tesla as the HTTP client. For production, especially when using streaming, we recommend running with Finch for optimal performance.
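If you go that route, here is a minimal sketch using Tesla's standard global adapter configuration (the `MyAppFinch` pool name is just an example; llm_composer may also expose its own adapter option, so check its docs):

```elixir
# config/config.exs: use Finch as Tesla's HTTP adapter
import Config

config :tesla, adapter: {Tesla.Adapter.Finch, name: MyAppFinch}
```

Finch itself must be started under your application's supervision tree, e.g. with `{Finch, name: MyAppFinch}` as a child spec.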
Where to find it
- HexDocs (llm_composer v0.11.1): https://hexdocs.pm/llm_composer/readme.html
- GitHub: https://github.com/doofinder/llm_composer
Get started
- Add `{:llm_composer, "~> 0.11.1"}` to your `mix.exs` dependencies (see the snippet below).
- Check the docs on HexDocs or the README on GitHub for configuration and examples (providers, streaming, function calls).
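In context, the dependency entry is a standard Mix deps addition:

```elixir
# mix.exs
defp deps do
  [
    {:llm_composer, "~> 0.11.1"}
  ]
end
```

Then run `mix deps.get` to fetch it.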
Examples
Here are a couple of simple examples to give you a feel for how llm_composer works. For more advanced use cases (streaming, structured outputs, custom functions, Vertex AI, etc.), check the full docs.
Simple chat with OpenAI
```elixir
# Configure your OpenAI API key
Application.put_env(:llm_composer, :open_ai, api_key: "<your api key>")

defmodule MyChat do
  # Settings define the provider chain, model, and system prompt
  @settings %LlmComposer.Settings{
    providers: [
      {LlmComposer.Providers.OpenAI, [model: "gpt-4.1-mini"]}
    ],
    system_prompt: "You are a helpful assistant."
  }

  def say_hi() do
    {:ok, res} = LlmComposer.simple_chat(@settings, "hi there")
    IO.inspect(res.main_response)
  end
end
```
This sets up a basic chatbot that responds using OpenAI.
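Calling it from iex would look something like this (the printed value is illustrative; the exact shape of `main_response` depends on the provider and library version):

```elixir
iex> MyChat.say_hi()
# Prints the model's reply via IO.inspect, e.g. something like:
# "Hello! How can I help you today?"
```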
Multi-provider with router and failover
```elixir
# Providers are tried in order: OpenAI first, Google Gemini as fallback
settings = %LlmComposer.Settings{
  providers: [
    {LlmComposer.Providers.OpenAI, [model: "gpt-4.1-mini"]},
    {LlmComposer.Providers.Google, [model: "gemini-2.5-flash"]}
  ],
  system_prompt: "You are a helpful assistant."
}

{:ok, res} = LlmComposer.simple_chat(settings, "hello")
```
Here the provider router will try OpenAI first, and if it fails, it will fall back to Google Gemini with exponential backoff retries.
For more details and complete guides, check out the docs:
- HexDocs: https://hexdocs.pm/llm_composer/readme.html
- GitHub: https://github.com/doofinder/llm_composer
Thanks for reading! We hope llm_composer helps you build robust LLM-powered apps in Elixir.