Introducing agents-exe

On Fri, 21 Mar 2025, by @lucasdicioccio, 1075 words, 1 code snippet, 8 links, 0 images.

I’ve spent a bit of time catching up on recent advances in Artificial Intelligence, particularly generative-AI. The readings page is now beefy. Among the hot topics of generative-AI, there’s a clear distinction between model builders (which require abnormally large capital) and application builders (which require little capital). Here, I take the approach of writing an application, in the form of an “Agent” framework.

You can find agents-exe on my GitHub repository. Soon you’ll be able to run agents-exe directly from Docker or Podman as I’ll likely distribute a container.

Overall Vision

The vision behind agents-exe is to explore the space of autonomous agents, which I would loosely define as agent = puppet master + experts, with experts = prompt + specialized tool. In this vision, we need to iterate quickly on creating expert prompts and tools. Thus, I wanted something where adding a new agent or a new tool is low-effort, and where running all of that on my machine requires little effort.

I built agents-exe with the following user experience in mind: experiment with prompting using just a binary and some file conventions, then run an agent as a puppet-master of specialized agents. Ultimately, a specialized agent should be capable of crafting other agents and tools (you’ll find Agent Smith and Tool Smith), but this is not a goal per se. Rather, a goal would be to support cases where you create your own set of agents, possibly a per-project set.

Implementation

I’ve implemented agents-exe in Haskell, my go-to language for systems when I don’t need to convince people that Haskell is the right tool for many jobs. Indeed, I have quite a lot of personal libraries to help with contravariant tracing, background values, observability metrics, etc.

The best place to read the implementation is the code itself, so I’ll only give high-level architectural hints here.

The key architectural bits are:

  • an internal definition of what an agent and a tool description are
  • file-loaders and mapping functions turning agent files into in-memory agents (prompts and models)
  • file-loaders and mapping functions turning bash files into tools (OpenAI functions)
  • an “agent” module that codifies a few callbacks so that we can tune how to get the next user request, ask to stop, or handle edge cases
  • a set of ‘main functions’ bundling all these primitives together, depending on whether you want to run single prompts, have a conversation, expose an agent as an MCP tool to Claude, etc.
  • auditing of every single API call and every bash-tool call

In technical parlance, we want to map back and forth between files on disk on one hand, and prompts and tool-definitions on the other hand. We also want our code to be compositional enough that “an agent with tools can also be a tool for an agent”.
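To make these bits concrete, here is a minimal sketch of what the in-memory definitions and the agent-as-tool composition could look like. The type and field names are illustrative, not the actual ones from the repository.

{-# LANGUAGE OverloadedStrings #-}

module AgentSketch where

import Data.Text (Text)

-- A tool is something the LLM can call: a description the model sees,
-- plus an executable action (e.g., a wrapped bash script).
data Tool = Tool
  { toolName        :: Text
  , toolDescription :: Text
  , toolRun         :: Text -> IO Text  -- JSON-encoded arguments in, result out
  }

-- An agent is a prompt, a model, and the tools it may delegate to.
data Agent = Agent
  { agentName    :: Text
  , systemPrompt :: Text
  , model        :: Text
  , tools        :: [Tool]
  }

-- Compositionality: an agent (with its own tools) can itself be exposed
-- as a tool for a parent puppet-master agent.
agentAsTool :: (Agent -> Text -> IO Text) -> Agent -> Tool
agentAsTool runAgent agent =
  Tool
    { toolName        = agentName agent
    , toolDescription = "delegate a task to the " <> agentName agent <> " expert"
    , toolRun         = runAgent agent
    }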

Currently, covering many LLM providers is a secondary goal: I stick with OpenAI for simplicity. Extra provider support would be more than welcome, although we first need to insert an indirection layer.
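As a rough illustration of that indirection, and assuming hypothetical names rather than anything in the current code, a provider could be reduced to a record of functions:

module ProviderSketch where

import Data.Aeson (Value)
import Data.Text (Text)

-- One chat message as sent to a provider.
data ChatMessage = ChatMessage { role :: Text, content :: Text }

-- A provider boils down to a name and a completion function; supporting
-- another backend then means supplying another record.
data Provider = Provider
  { providerName :: Text
  , complete     :: Text          -- model identifier
                 -> [ChatMessage] -- conversation so far
                 -> [Value]       -- JSON tool definitions
                 -> IO Value      -- raw JSON completion
  }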

Results so far

So far, my agents have done nothing spectacular, but the framework works well: I see no blockers or weak spots if I were to productize it in one way or another. My agents can do some basic network checks, collect and reformat data, and notify me when they progress. More mundane work is required to better support environment-tuning (e.g., to pass secrets), but there is nothing out of reach at this point. I’ll likely add some Postgres data-sinks so that monitoring and exploring results with Postgrest-Table is straightforward.

Lessons and speculation

Some lessons and speculation based on writing a framework in a few days.

barrier to entry in agentic systems is low

Overall, my stint took me less than a week and I got an acceptable “framework” for little effort. The initial skeleton was actually generated by OpenAI; although the code would not compile, it got the basics right (see the annex below for where I started from).

I spent around 10 US dollars in OpenAI credits while poking around, with a maximum of 3.72 US dollars in one day to get to the point where my agent knew how to create agents.json files for sub-agents and wrapped bash files, so that only minimal editing is then required. Fine-tuning would likely bring us to the point where the agent can autonomously create new agents and their specialized tools.

Overall, I think companies wrapping LLMs are set for a market that resembles the “mobile app” one: a lot of competition, with customers finding it difficult to decide between general but undifferentiated applications and specialized ones. Tuning experts will bring some value, but it feels like building such experts will itself have a low barrier to entry.

functional programming is a good fit for agentic systems

Or, at least, there is no reason to go back to objects with mutable state. Down to earth, agent executions are trees of function calls. Each function call may or may not run concurrently; you are likely to hit timeouts, erroneous tool-calls when the LLM requests wrong parameters, and situations where you want to retry calls. Overall, a non-negligible amount of the value an agent framework brings is around bookkeeping of ongoing processes and tasks.

As a result, programming languages with a rich runtime, like Haskell with its Asyncs, atomic IORefs, and STM, or Erlang with its native supervision-tree primitives, have solid ground on which to build non-trivial agentic applications. Indeed, they make it easy to define, capture, and resume the state you care about whilst keeping control of side-effects.
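As a minimal sketch of that bookkeeping, assuming the async package and hypothetical tool-call types (not the ones from agents-exe), running the tool calls requested by an LLM concurrently, each with a timeout and a single retry, could look like this:

module ToolCallSketch where

import Control.Concurrent.Async (mapConcurrently)
import Control.Exception (SomeException, try)
import qualified Data.Text as T
import System.Timeout (timeout)

-- A requested tool call and its possible outcomes.
data ToolCall = ToolCall { callName :: T.Text, callArgs :: T.Text }

data ToolResult = ToolOutput T.Text | ToolTimeout | ToolError T.Text

-- Run every tool call requested by the LLM concurrently,
-- each with a 30-second timeout and a single retry on exception.
runToolCalls :: (ToolCall -> IO T.Text) -> [ToolCall] -> IO [ToolResult]
runToolCalls runOne calls = mapConcurrently (retryOnce . withTimeout) calls
  where
    withTimeout call = do
      outcome <- timeout (30 * 1000000) (runOne call)
      pure (maybe ToolTimeout ToolOutput outcome)

    retryOnce action = do
      firstTry <- try action
      case firstTry :: Either SomeException ToolResult of
        Right res -> pure res
        Left _ -> do
          secondTry <- try action
          case secondTry :: Either SomeException ToolResult of
            Right res -> pure res
            Left err  -> pure (ToolError (T.pack (show err)))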

opportunities and risks

Even though the price of tokens is going down, the price per prompt will likely remain a linear function of the number of tokens. That is, token-saving systems and patterns will remain relevant for quite some time: for instance, performing HTML-to-text conversion when reading a web page, or summarizing content (via other specialized agents). In agents-exe, this means exploiting further the tree structure of iterative calls to experts and teaching agents to delegate tasks properly.
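As a tiny illustration of the HTML-to-text idea, assuming the tagsoup package (which is not necessarily what agents-exe uses):

module HtmlToTextSketch where

import qualified Data.Text as T
import Text.HTML.TagSoup (innerText, parseTags)

-- Strip markup before handing a page to the LLM: the visible text is
-- usually a small fraction of the raw HTML, so far fewer tokens are spent.
htmlToText :: T.Text -> T.Text
htmlToText = T.unwords . T.words . innerText . parseTags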

One thing that will grow with AI usage is security risk. With the multiplication of MCP tools, the data-leak and denial-of-service risks are pretty high. The jury is still out on whether risk will grow sub-linearly, linearly, or super-linearly with respect to AI tools. Indeed, two effects will compete: AI-agents will get better at being cautious, which reduces risk, but the usage and combination of usages of AI-agents will increase risk.

Summary

I quite enjoyed building agents-exe and I’ll likely keep iterating on it. I have a sort of unordered roadmap: supporting MCP tools as a client, providing yet more integration patterns, and supporting more LLM providers. What is not clear to me is: what should the end goal for these frameworks be, given that the barrier to entry is low?

Annexes

the one-shotted code

This is the code ChatGPT eventually spat out after a bit of discussion about what type of Agent I intended to write.

As far as I remember, the code would not compile for minor reasons (iirc, the errors revolved around the usage of the Aeson library).

I removed Req as I prefer the lower-level HTTP client APIs.

{-# LANGUAGE OverloadedStrings #-}

module Main where

import Network.HTTP.Req
import Data.Aeson
import Data.Aeson.Types (parseMaybe)
import qualified Data.Text as T
import qualified Data.ByteString.Char8 as B
import System.Process (readProcess)

-- Define API Key (store securely in real applications)
apiKey :: B.ByteString
apiKey = "sk-your-api-key"

-- Define LLM API call
callLLM :: T.Text -> IO Value
callLLM prompt = runReq defaultHttpConfig $ do
    let payload = object ["model" .= ("gpt-4-turbo" :: T.Text), "messages" .= [object ["role" .= ("user" :: T.Text), "content" .= prompt]]]
    response <- req POST (https "api.openai.com" /: "v1" /: "chat" /: "completions")
        (ReqBodyJson payload) jsonResponse (header "Authorization" ("Bearer " <> apiKey) <> header "Content-Type" "application/json")
    return $ responseBody response

-- Parse the LLM response to check for tool usage
parseLLMResponse :: Value -> Maybe T.Text
parseLLMResponse = parseMaybe $ withObject "response" $ \obj -> do
    choices <- obj .: "choices"
    case choices of
        (firstChoice:_) -> do
            message <- firstChoice .: "message"
            message .: "content"
        [] -> fail "no choices in response"

-- Define tool execution (e.g., run a Bash script)
runTool :: T.Text -> IO T.Text
runTool input = T.pack <$> readProcess "./get-price.sh" [T.unpack input] ""

-- Main agent loop
agentLoop :: T.Text -> IO ()
agentLoop query = do
    llmResponse <- callLLM query
    case parseLLMResponse llmResponse of
        Just toolCall -> do
            putStrLn $ "LLM requested tool: " ++ T.unpack toolCall
            result <- runTool toolCall
            putStrLn $ "Tool result: " ++ T.unpack result
            agentLoop result -- Looping by feeding tool result back to LLM
        Nothing -> putStrLn "LLM did not request a tool."

main :: IO ()
main = do
    putStrLn "Enter query:"
    query <- T.pack <$> getLine
    agentLoop query