[[tags: egg]]
== llm
A provider-agnostic LLM chat API client for CHICKEN Scheme with tool calling support.
[[toc:]]
=== Description
This egg provides a high-level interface for interacting with Large Language Model APIs. It supports:
* Multi-turn conversations with history management
* Tool/function calling with automatic execution
* File attachments (images, PDFs, text files)
* Token usage and cost tracking
* Provider abstraction for multiple LLM backends
OpenAI is included as the default provider, and Anthropic is also available. The modular architecture allows adding other providers (Mistral, etc.) without changing application code.
=== Author
Rolando Abarca
=== Repository
[[https://forgejo.rolando.cl/cpm/llm-egg]]
=== Requirements
* [[medea]]
* [[base64]]
* [[uri-common]]
* [[http-client]]
* [[intarweb]]
* [[openssl]]
* [[srfi-1]]
* [[srfi-13]]
* [[logger]]
=== Documentation
==== Quick Start
<enscript highlight="scheme">
(import llm)

;; Simple one-shot text generation
(print (llm/generate-text "What is 2 + 2?"))

;; Generate an image and save it to a file
(llm/generate-image-save "A cute cat wearing a hat" '(file . "/tmp/cat.png"))

;; For multi-turn conversations, use llm/chat and llm/send
(define conv (llm/chat system: "You are a helpful assistant."))
(let-values ([(conv ok?) (llm/send conv "What is 2 + 2?")])
  (when ok?
    (print (llm/get-last-response conv))))
</enscript>
==== Environment Variables
; {{OPENAI_API_KEY}} : Required for the OpenAI provider.
; {{ANTHROPIC_API_KEY}} : Required for the Anthropic provider.
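For quick experiments you can also set the key from Scheme before the first request. A minimal sketch using CHICKEN's {{(chicken process-context)}} module (the key value below is a placeholder):

<enscript highlight="scheme">
(import (chicken process-context))

;; Placeholder value for illustration; use your real API key.
(set-environment-variable! "OPENAI_API_KEY" "sk-your-key-here")
</enscript>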
==== Module: llm
The main module providing the public API.
<procedure>(llm/chat #!key system tools history model temperature max-tokens on-response-received on-tool-executed provider)</procedure>
Create a new conversation. Returns a conversation state alist.
; {{system}} : Optional system prompt string.
; {{tools}} : Optional list of tool name symbols (kebab-case) to enable.
; {{history}} : Optional existing message history to continue.
; {{model}} : Optional model name string (default: provider's default model).
; {{temperature}} : Optional temperature float (default: 1).
; {{max-tokens}} : Optional maximum completion tokens (default: 4000).
; {{on-response-received}} : Optional callback {{(lambda (message) ...)}}.
; {{on-tool-executed}} : Optional callback {{(lambda (name args result) ...)}}.
; {{provider}} : Optional provider record (default: {{openai-provider}}).
<enscript highlight="scheme">
;; Basic conversation
(define conv (llm/chat system: "You are a helpful assistant."))

;; With all options
(define conv (llm/chat
              system: "You can check the weather."
              tools: '(get-weather)
              model: "gpt-4o"
              temperature: 0.7
              max-tokens: 1000
              on-response-received: (lambda (msg) (print "Response: " msg))
              on-tool-executed: (lambda (name args result) (print "Tool called: " name))))
</enscript>
<procedure>(llm/send conversation message #!key file)</procedure>
Send a message and get a response. Returns two values: the updated conversation and a success boolean.
; {{conversation}} : Conversation state from {{llm/chat}}.
; {{message}} : String message to send.
; {{file}} : Optional local file path to attach (image, PDF, or text file).
<enscript highlight="scheme">
;; Text only
(let-values ([(conv ok?) (llm/send conv "What is 2+2?")])
  (if ok?
      (print (llm/get-last-response conv))
      (print "Request failed")))

;; With image attachment
(let-values ([(conv ok?) (llm/send conv "What's in this image?" file: "photo.jpg")])
  (when ok?
    (print (llm/get-last-response conv))))

;; With PDF attachment
(let-values ([(conv ok?) (llm/send conv "Summarize this document" file: "report.pdf")])
  (when ok?
    (print (llm/get-last-response conv))))
</enscript>
<procedure>(llm/get-last-response conversation)</procedure>
Get the text content of the last assistant message from the conversation. Returns {{#f}} if no assistant message exists.
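For example (a sketch, continuing a conversation {{conv}} created with {{llm/chat}}):

<enscript highlight="scheme">
(let-values ([(conv ok?) (llm/send conv "Hello!")])
  ;; llm/get-last-response returns #f when there is no assistant message yet
  (print (or (llm/get-last-response conv) "No assistant reply yet")))
</enscript>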
<procedure>(llm/register-tool! name schema implementation)</procedure>
Register a tool in the global registry.
; {{name}} : Symbol identifier in kebab-case (e.g., {{'get-weather}}).
; {{schema}} : Alist defining the function schema (use snake_case in {{function.name}}).
; {{implementation}} : Procedure {{(lambda (params-alist) ...)}} returning a result alist.
<enscript highlight="scheme">
(llm/register-tool!
 'get-weather
 '((type . "function")
   (function . ((name . "get_weather")
                (description . "Get current weather for a location")
                (parameters . ((type . "object")
                               (properties . ((location . ((type . "string")
                                                           (description . "City name")))))
                               (required . #("location")))))))
 (lambda (params)
   (let ((location (alist-ref 'location params)))
     `((success . #t)
       (temperature . 72)
       (conditions . "sunny")
       (location . ,location)))))
</enscript>
<procedure>(llm/get-registered-tools #!optional tool-names)</procedure>
Get registered tool schemas as a vector. If {{tool-names}} (a list of kebab-case symbols) is provided, only those tools are returned.
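For example, assuming the {{get-weather}} tool registered above:

<enscript highlight="scheme">
;; All registered tool schemas, as a vector
(llm/get-registered-tools)

;; Only the schemas for the listed tools
(llm/get-registered-tools '(get-weather))
</enscript>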
<procedure>(llm/get-cost conversation)</procedure>
Get the total cost of the conversation in USD.
<procedure>(llm/get-tokens conversation)</procedure>
Get total token counts as a pair {{(input-tokens . output-tokens)}}.
(let-values ([(conv ok?) (llm/send conv "Hello!")])
(when ok?
(let ((tokens (llm/get-tokens conv))
(cost (llm/get-cost conv)))
(print "Input tokens: " (car tokens))
(print "Output tokens: " (cdr tokens))
(print "Total cost: $" cost))))
<parameter>llm/use-provider</parameter>
Parameter to get or set the current default provider. Re-export of {{current-provider}} from {{llm-provider}}.
<enscript highlight="scheme">
;; Get the current provider
(llm/use-provider) ; => openai-provider

;; Set a different default provider
(llm/use-provider some-other-provider)
</enscript>
===== Simple Convenience Functions
<procedure>(llm/generate-text prompt #!key system model temperature max-tokens history provider)</procedure>
Simple one-shot text generation. Creates a conversation, sends the prompt, and returns the response text.
; {{prompt}} : String prompt to send.
; {{system}} : Optional system message.
; {{model}} : Optional model name.
; {{temperature}} : Optional temperature (0-2).
; {{max-tokens}} : Optional max completion tokens.
; {{history}} : Optional list of previous messages.
; {{provider}} : Optional provider (default: {{current-provider}}).
Returns the response text as a string, or {{#f}} on error.
<enscript highlight="scheme">
;; Simple usage
(print (llm/generate-text "What is 2+2?"))

;; With options
(print (llm/generate-text "Explain quantum computing"
                          system: "You are a physics teacher."
                          model: "gpt-4o"
                          temperature: 0.7))
</enscript>
<procedure>(llm/generate-image prompt #!key model size quality output-format n provider)</procedure>
Generate an image from a text prompt.
; {{prompt}} : String describing the image to generate.
; {{model}} : Optional model ({{"gpt-image-1"}}, {{"dall-e-3"}}, {{"dall-e-2"}}). Default: {{"gpt-image-1"}}.
; {{size}} : Optional size ({{"1024x1024"}}, {{"1024x1536"}}, {{"1536x1024"}}). Default: {{"1024x1024"}}.
; {{quality}} : Optional quality ({{"high"}}, {{"medium"}}, {{"low"}} for gpt-image-1; {{"standard"}}, {{"hd"}} for dall-e-3). Default: {{"high"}}.
; {{output-format}} : Optional format ({{"png"}}, {{"jpeg"}}). Default: {{"png"}}.
; {{n}} : Optional number of images to generate. Default: 1.
; {{provider}} : Optional provider (default: {{current-provider}}).
Returns a list of base64-encoded image strings, or {{#f}} on error. Raises an error if the provider doesn't support image generation.
<enscript highlight="scheme">
;; Generate an image (returns a list)
(let ((images (llm/generate-image "A sunset over mountains")))
  (print "Generated " (length images) " image(s)")
  (print "First image length: " (string-length (car images))))

;; With options
(let ((images (llm/generate-image "A red apple"
                                  model: "dall-e-3"
                                  size: "1024x1024"
                                  quality: "hd")))
  (print "Generated " (length images) " image(s)"))
</enscript>
<procedure>(llm/generate-image-save prompt destination #!key model size quality provider)</procedure>
Generate an image and save it to a file.
; {{prompt}} : String describing the image to generate.
; {{destination}} : Pair {{(type . path)}} where type is {{'file}}.
; {{model}}, {{size}}, {{quality}}, {{provider}} : Same as {{llm/generate-image}}.
Returns the file path on success, or {{#f}} on failure.
<enscript highlight="scheme">
;; Save an image to a file
(llm/generate-image-save "A blue circle on white background"
                         '(file . "/tmp/circle.png"))

;; With options
(llm/generate-image-save "A landscape painting"
                         '(file . "/tmp/landscape.png")
                         model: "gpt-image-1"
                         quality: "high")
</enscript>
<procedure>(llm/transcribe-audio file-path #!key model response-format language provider)</procedure>
Transcribe audio to text.
; {{file-path}} : Path to the audio file (mp3, wav, etc.).
; {{model}} : Optional model ({{"gpt-4o-transcribe"}}, {{"whisper-1"}}). Default: {{"gpt-4o-transcribe"}}.
; {{response-format}} : Optional format ({{"text"}}, {{"json"}}, {{"verbose_json"}}). Default: {{"text"}}.
; {{language}} : Optional language code (e.g., {{"en"}}, {{"es"}}).
; {{provider}} : Optional provider (default: {{current-provider}}).
Returns the transcription as a string (for {{"text"}} format) or alist (for JSON formats), or {{#f}} on error. Raises an error if the provider doesn't support audio transcription.
<enscript highlight="scheme">
;; Simple transcription
(print (llm/transcribe-audio "/path/to/audio.mp3"))

;; With options
(print (llm/transcribe-audio "/path/to/audio.mp3"
                             model: "whisper-1"
                             language: "en"))
</enscript>
==== Module: llm-provider
Defines the provider abstraction layer.
<record>llm-provider</record>
Record type for LLM providers, with the following fields:
; {{name}} : Symbol identifying the provider (e.g., {{'openai}}, {{'anthropic}}).
; {{default-model}} : String with the default model name for this provider.
; {{prepare-message}} : Procedure {{(message include-file?) -> provider-format-message}}.
; {{build-payload}} : Procedure {{(messages tools model temp max-tokens) -> payload-alist}}.
; {{call-api}} : Procedure {{(endpoint payload) -> response-alist}}.
; {{parse-response}} : Procedure {{(response-data) -> normalized-response}}.
; {{format-tool-result}} : Procedure {{(tool-call-id result) -> tool-message}}.
; {{get-model-pricing}} : Procedure {{(model-name) -> pricing-alist}}.
; {{extract-tool-calls}} : Procedure {{(response-message) -> list-of-tool-calls}}.
; {{generate-image}} : Optional procedure {{(prompt params-alist) -> list-of-base64-strings}} or {{#f}}.
; {{transcribe-audio}} : Optional procedure {{(file-path params-alist) -> string-or-alist}} or {{#f}}.
<procedure>(make-llm-provider name default-model prepare-message build-payload call-api parse-response format-tool-result get-model-pricing extract-tool-calls generate-image transcribe-audio)</procedure>
Constructor for provider records; see ''Creating a New Provider'' below for a complete example.
<procedure>(llm-provider? obj)</procedure>
Predicate to check whether {{obj}} is an {{llm-provider}} record.
<procedure>(llm-provider-name provider)</procedure>
<procedure>(llm-provider-default-model provider)</procedure>
<procedure>(llm-provider-prepare-message provider)</procedure>
<procedure>(llm-provider-build-payload provider)</procedure>
<procedure>(llm-provider-call-api provider)</procedure>
<procedure>(llm-provider-parse-response provider)</procedure>
<procedure>(llm-provider-format-tool-result provider)</procedure>
<procedure>(llm-provider-get-model-pricing provider)</procedure>
<procedure>(llm-provider-extract-tool-calls provider)</procedure>
<procedure>(llm-provider-generate-image provider)</procedure>
<procedure>(llm-provider-transcribe-audio provider)</procedure>
Accessor procedures for provider record fields. {{llm-provider-generate-image}} and {{llm-provider-transcribe-audio}} return {{#f}} if the provider doesn't support those capabilities.
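For example, inspecting the bundled OpenAI provider:

<enscript highlight="scheme">
(import llm-provider llm-openai)

(llm-provider-name openai-provider)          ; => openai
(llm-provider-default-model openai-provider) ; => "gpt-5-nano"

;; Capability check before calling an optional feature
(when (llm-provider-generate-image openai-provider)
  (print "image generation supported"))
</enscript>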
<parameter>current-provider</parameter>
Parameter holding the current default provider. Initially {{#f}} until set by importing {{llm}} (which sets it to {{openai-provider}}).
==== Module: llm-openai
OpenAI provider implementation.
<constant>openai-provider</constant>
The OpenAI provider instance. This is the default provider when using the {{llm}} module.
<parameter>openai-http-client</parameter>
Parameter holding the HTTP client procedure. Can be overridden for testing or custom HTTP handling. The procedure signature is {{(endpoint payload) -> response-alist}}.
<enscript highlight="scheme">
;; Mock the HTTP client for testing
(openai-http-client
 (lambda (endpoint payload)
   '((choices . #(((message . ((content . "Test response")
                               (role . "assistant")))
                  (finish_reason . "stop"))))
     (usage . ((prompt_tokens . 10)
               (completion_tokens . 5))))))
</enscript>
<procedure>(openai-call-api endpoint payload)</procedure>
Make an API request using the current {{openai-http-client}}.
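A minimal sketch of a direct call; the endpoint path and payload shape below are assumptions for illustration, not values taken from the egg's internals:

<enscript highlight="scheme">
;; Hypothetical request; "/v1/chat/completions" is assumed here.
(openai-call-api "/v1/chat/completions"
                 '((model . "gpt-4o")
                   (messages . #(((role . "user")
                                  (content . "Hello"))))))
</enscript>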
<procedure>(openai-generate-image prompt params)</procedure>
Generate an image using OpenAI's image generation API.
; {{prompt}} : String describing the image.
; {{params}} : Alist of parameters ({{model}}, {{size}}, {{quality}}, {{output_format}}, {{n}}).
Returns a list of base64-encoded image strings, or {{#f}} on failure.
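A sketch of a direct call, assuming the parameter keys listed above are plain symbols in the alist:

<enscript highlight="scheme">
;; Low-level call, bypassing llm/generate-image
(openai-generate-image "A red square on a white background"
                       '((model . "gpt-image-1")
                         (size . "1024x1024")
                         (quality . "low")
                         (output_format . "png")
                         (n . 1)))
</enscript>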
<procedure>(openai-transcribe-audio file-path params)</procedure>
Transcribe audio using OpenAI's transcription API.
; {{file-path}} : Path to the audio file.
; {{params}} : Alist of parameters ({{model}}, {{response_format}}, {{language}}).
Returns the transcription as a string (for the {{"text"}} format) or an alist (for JSON formats), or {{#f}} on failure.
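Similarly, a sketch of a direct transcription call with the documented parameter keys:

<enscript highlight="scheme">
(openai-transcribe-audio "/path/to/audio.mp3"
                         '((model . "whisper-1")
                           (response_format . "text")
                           (language . "en")))
</enscript>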
<constant>*openai-default-model*</constant>
Default model string: {{"gpt-5-nano"}}.
<constant>*openai-default-temperature*</constant>
Default temperature: {{1}}.
<constant>*openai-default-max-tokens*</constant>
Default max completion tokens: {{4000}}.
==== Module: llm-anthropic
Anthropic provider implementation.
<constant>anthropic-provider</constant>
The Anthropic provider instance. Supports chat completions with tool calling, and file attachments (images and PDFs). Does not support image generation or audio transcription.
<enscript highlight="scheme">
(import llm llm-anthropic)

;; Use Anthropic as the default provider
(llm/use-provider anthropic-provider)

;; Or use it for a specific conversation
(define conv (llm/chat system: "You are Claude."
                       provider: anthropic-provider))
</enscript>
<parameter>anthropic-http-client</parameter>
Parameter holding the HTTP client procedure. Can be overridden for testing or custom HTTP handling. The procedure signature is {{(endpoint payload) -> response-alist}}.
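It can be mocked just like {{openai-http-client}}. A sketch, assuming the parsed response alist mirrors Anthropic's documented JSON shape (content blocks plus a {{usage}} object):

<enscript highlight="scheme">
;; Mock for testing; the exact response shape expected by the parser
;; is an assumption based on Anthropic's public API format.
(anthropic-http-client
 (lambda (endpoint payload)
   '((content . #(((type . "text")
                   (text . "Test response"))))
     (role . "assistant")
     (stop_reason . "end_turn")
     (usage . ((input_tokens . 10)
               (output_tokens . 5))))))
</enscript>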
<procedure>(anthropic-call-api endpoint payload)</procedure>
Make an API request using the current {{anthropic-http-client}}. Note that the {{endpoint}} parameter is ignored; Anthropic only uses the {{/v1/messages}} endpoint.
<constant>*anthropic-default-model*</constant>
Default model string: {{"claude-sonnet-4-5-20250929"}}.
<constant>*anthropic-default-temperature*</constant>
Default temperature: {{1}}.
<constant>*anthropic-default-max-tokens*</constant>
Default max completion tokens: {{4096}}.
===== Anthropic API Differences
The Anthropic provider handles the following API differences automatically (a tool-schema sketch follows the list):
* System messages are extracted and sent in the top-level {{system}} field
* Tools use {{input_schema}} instead of {{parameters}}
* Tool calls appear as {{tool_use}} content blocks instead of a {{tool_calls}} array
* Tool results use {{role: "user"}} with {{type: "tool_result"}} content blocks
* Images use {{type: "image"}} with {{source.data}} instead of {{image_url}}
* PDFs use {{type: "document"}} with {{source.data}} instead of {{file}}
* Authentication uses {{x-api-key}} header instead of {{Authorization: Bearer}}
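For instance, the {{get-weather}} tool registered earlier in OpenAI form would be translated to roughly the following Anthropic shape before a request is sent (a sketch based on Anthropic's documented tool format, not the provider's literal output):

<enscript highlight="scheme">
;; Approximate Anthropic-format tool entry for get-weather
'((name . "get_weather")
  (description . "Get current weather for a location")
  (input_schema . ((type . "object")
                   (properties . ((location . ((type . "string")
                                               (description . "City name")))))
                   (required . #("location")))))
</enscript>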
==== Module: llm-common
Shared utilities used by providers.
<procedure>(detect-mime-type path)</procedure>
Detect the MIME type from a file extension. Supports JPEG, PNG, GIF, WebP, PDF, and common text files.
<procedure>(image-mime-type? mime)</procedure>
Returns {{#t}} if {{mime}} starts with {{"image/"}}.
<procedure>(pdf-mime-type? mime)</procedure>
Returns {{#t}} if {{mime}} is {{"application/pdf"}}.
<procedure>(vision-mime-type? mime)</procedure>
Returns {{#t}} if {{mime}} is an image or PDF type (the types supported by vision models).
<procedure>(read-file-base64 path)</procedure>
Read a file and return its contents as a base64-encoded string.
<procedure>(read-text-file path)</procedure>
Read a text file and return its contents as a string.
<procedure>(kebab->snake symbol)</procedure>
Convert a kebab-case symbol to snake_case. Example: {{'get-current-time}} becomes {{'get_current_time}}.
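A quick sketch of these helpers in use (expected results follow from the descriptions above):

<enscript highlight="scheme">
(import llm-common)

(detect-mime-type "photo.jpg")      ; => "image/jpeg"
(image-mime-type? "image/jpeg")     ; => #t
(pdf-mime-type? "application/pdf")  ; => #t
(vision-mime-type? "text/plain")    ; => #f
(kebab->snake 'get-current-time)    ; => get_current_time
</enscript>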
==== Creating a New Provider
To add support for a new LLM backend, implement the provider interface:
<enscript highlight="scheme">
(import llm llm-provider llm-common medea)

(define (my-prepare-message msg include-file?)
  ;; Convert the internal message format to the API-specific format
  msg)

(define (my-build-payload messages tools model temp max-tokens)
  ;; Build the API request body
  `((model . ,model)
    (messages . ,(list->vector messages))))

(define (my-call-api endpoint payload)
  ;; Make the HTTP request and return the response alist
  (error 'my-call-api "not implemented"))

(define (my-parse-response response-data)
  ;; Return an alist with: success, message, content, tool-calls,
  ;; finish-reason, input-tokens, output-tokens
  `((success . #t)
    ;; message: the raw assistant message in the provider's own format
    (message . ((role . "assistant") (content . "response text")))
    (content . "response text")
    (tool-calls . #f)
    (finish-reason . "stop")
    (input-tokens . 100)
    (output-tokens . 50)))

(define (my-extract-tool-calls message)
  ;; Return a list of ((id . "...") (name . "...") (arguments . "json-string"))
  '())

(define (my-format-tool-result tool-call-id result)
  ;; Return a tool result message in the API's format
  `((role . "tool")
    (tool_call_id . ,tool-call-id)
    (content . ,(json->string result))))

(define (my-get-model-pricing model-name)
  ;; Return ((input-price-per-1m . N) (output-price-per-1m . M)) in USD
  '((input-price-per-1m . 1.00)
    (output-price-per-1m . 3.00)))

(define my-provider
  (make-llm-provider
   'my-provider         ;; name
   "my-default-model"   ;; default-model
   my-prepare-message
   my-build-payload
   my-call-api
   my-parse-response
   my-format-tool-result
   my-get-model-pricing
   my-extract-tool-calls
   #f                   ;; generate-image (not supported)
   #f))                 ;; transcribe-audio (not supported)

;; Use it globally
(llm/use-provider my-provider)

;; Or per conversation
(llm/chat system: "Hello" provider: my-provider)
</enscript>
=== Examples
==== Simple Chat
<enscript highlight="scheme">
(import llm)

(define conv (llm/chat system: "You are a helpful assistant."))

(let-values ([(conv ok?) (llm/send conv "Tell me a joke.")])
  (when ok?
    (print (llm/get-last-response conv))
    (print "Cost: $" (llm/get-cost conv))))
</enscript>
==== Multi-turn Conversation
<enscript highlight="scheme">
(import llm)

(define conv (llm/chat system: "You are a math tutor."))

(let-values ([(conv ok?) (llm/send conv "What is calculus?")])
  (when ok?
    (print (llm/get-last-response conv)))
  (let-values ([(conv ok?) (llm/send conv "Can you give me a simple example?")])
    (when ok?
      (print (llm/get-last-response conv)))
    (print "Total tokens: " (llm/get-tokens conv))))
</enscript>
==== Simple Text Generation
<enscript highlight="scheme">
(import llm)

;; One-liner for quick prompts
(print (llm/generate-text "What is the capital of France?"))

;; With a system prompt
(print (llm/generate-text "Explain recursion"
                          system: "You explain concepts simply, using analogies."))
</enscript>
==== Using Anthropic Provider
<enscript highlight="scheme">
(import llm llm-anthropic)

;; Set Anthropic as the default provider
(llm/use-provider anthropic-provider)

;; Now all calls use Claude
(print (llm/generate-text "Hello, Claude!"))

;; Or use it for a specific conversation while keeping OpenAI as default
(import llm-openai)
(llm/use-provider openai-provider) ;; OpenAI is the default again

(define conv (llm/chat system: "You are a helpful assistant."
                       provider: anthropic-provider)) ;; this one uses Claude

(let-values ([(conv ok?) (llm/send conv "What model are you?")])
  (when ok?
    (print (llm/get-last-response conv))
    (print "Cost: $" (llm/get-cost conv))))
</enscript>
==== Image Generation
<enscript highlight="scheme">
(import llm)

;; Generate and save an image
(let ((path (llm/generate-image-save
             "A serene Japanese garden with a red bridge"
             '(file . "/tmp/garden.png")
             quality: "high")))
  (when path
    (print "Image saved to: " path)))

;; Get base64 data directly (returns a list)
(let ((images (llm/generate-image "A cute cartoon robot")))
  (when images
    (print "Generated " (length images) " image(s)")))
</enscript>
==== Audio Transcription
<enscript highlight="scheme">
(import llm)

;; Simple transcription
(let ((text (llm/transcribe-audio "/path/to/recording.mp3")))
  (when text
    (print "Transcription: " text)))

;; With a language hint
(let ((text (llm/transcribe-audio "/path/to/spanish-audio.mp3"
                                  language: "es")))
  (when text
    (print text)))
</enscript>
==== Tool Calling
<enscript highlight="scheme">
(import llm srfi-19)

(llm/register-tool!
 'get-current-time
 '((type . "function")
   (function . ((name . "get_current_time")
                (description . "Get the current date and time")
                (parameters . ((type . "object")
                               (properties . ())
                               (required . #()))))))
 (lambda (params)
   `((success . #t)
     (time . ,(date->string (current-date) "~Y-~m-~d ~H:~M:~S")))))

(define conv (llm/chat
              system: "You can tell the user the current time."
              tools: '(get-current-time)))

(let-values ([(conv ok?) (llm/send conv "What time is it?")])
  (when ok?
    (print (llm/get-last-response conv))))
</enscript>
=== License
BSD-3-Clause
=== Version History
; 0.0.6 : Fix Anthropic message handling for tool calls.
; 0.0.5 : Add Anthropic provider with chat, tool calling, and file attachment support. Add {{default-model}} field to provider record. Update model pricing.
; 0.0.4 : Add image generation, audio transcription, and simple text generation APIs.
; 0.0.3 : Provider abstraction and modular architecture.
; 0.0.1 : Initial release with OpenAI support.