2024-05-05 AI Generation Pipeline: systematic code and text generation with AI in your build process

#ChatGPT #AI #Productivity #Development

AI Based Code Generation Pipeline - generate code and texts systematically with an LLM in your build process

Would you like to call an AI from the command line or a script to generate or transform code or text: generating code from request examples, collecting summaries of text documents and keeping them automatically up to date, generating documentation with nice tables from Java source code annotations, perhaps even generating small applications or application fragments from a specification and updating them automatically when the specification changes? These are things you can do with the AI Generation Pipeline. It is often useful for tasks that would be hard to automate traditionally but are easy to automate with AI, given the right tools. For instance, "generate a markdown table of the configuration attributes mentioned in AppConfig.java, but only if that file has changed" could be as easy as

aigenpipeline -p markdowntableprompt.txt -o AppConfigAttributes.md AppConfig.java

This blog gives you some background and an overview of my AI Generation Pipeline, a command line tool and framework that lets you easily delegate small jobs to an AI in a repeatable way, e.g. in a build process or on cue by calling a script, while only calling the AI if the input files or instructions have changed. It can use the OpenAI API, Anthropic's Claude, or local models with similar interfaces as backends.

The idea

I very often enjoy asking ChatGPT for answers, discussion, and help, and it regularly saves me a couple of hours. But I do not yet trust AI to do complicated multi-step jobs, though I see many attempts at that, like Devin or Aider, and I had my own Co-Developer GPT Engine do some (at least for me) quite impressive things, like porting itself from the Undertow web server to the Jetty server, admittedly with some insistent pushing. LLMs are, however, already smart enough for a lot of small to medium tasks, and I want it to be easy to use them in my projects for those things, without having to copy texts back and forth to ChatGPT or similar. So I wrote the AI Generation Pipeline's command line tool to make that easy and to use it at scale in a project. You are invited to use it, too - it's free and open source.

Of course there are already tools to access AI from the command line, like Simon Willison's really impressive do-everything-and-its-grandmother tool LLM or the lean, OpenAI-focused chatgpt script from my ChatGPT tools, but there is more I'd want from a tool for this purpose:

  • I'd like to give it a couple of input files and prompt files; it should label and transmit those files to the AI and write the output to the file that is to be generated.
  • It should regenerate the output file whenever the input and/or prompt files change.
  • If neither input nor prompt files have changed, it should do nothing, since calling the AI costs money and time, and it's often necessary to verify the generated content.

That's what a basic call to the tool does:

aigenpipeline -o outputfile -p promptfile input1 input2 ...
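
For instance, with made-up file names, a call that keeps a summary of two documents up to date could look like this:

aigenpipeline -p summarize.prompt -o summary.md chapter1.md chapter2.md

As long as neither summarize.prompt nor the two chapters change, repeating the call does nothing; how the tool detects that is described in the versioning section below.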

As I'm using it more and more, it keeps expanding beyond that functionality, but let's first go into the basics.

Handling of generated files / Versioning

Since AIs are not as cheap, reliable, deterministic, and fast as a compiler, I want to avoid unnecessary calls to the AI and make sure that AI-generated files are only regenerated when their inputs change. Thus, it doesn't seem appropriate to have an AI generate files "on the fly" in a build process; rather, the generated files should also go into the version control system (e.g. Git). This way, new or changed generated files can easily be inspected. Unfortunately, it is often not sufficient to rely on timestamps to determine whether any inputs have changed after the output was last generated, since e.g. in Git the file timestamps are meaningless.

Thus, the tool normally puts version comments in the output files like this:

/* AIGenVersion(35ff6cec, 1html.prompt-2.3, README.md-2deb7062, dialogelements.txt-4684af8d) */

The version comment contains both a version number for the output file (here 35ff6cec) and the versions of the input files and prompt files used to create it. If the versions of the input files and prompt files have not changed, the output file is not regenerated.

The numbers are usually abbreviated SHA-256 hashes of the file contents: 35ff6cec is the hash of the generated output, 2deb7062 of the README.md file, and so on. If you often change details of a file but don't want every tiny change to regenerate the files created from it, you can also give the file an explicit version, like the 1html.prompt-2.3 in the example, by putting an AIGenVersion(2.3) into that file. Then you update that number only on major changes that should trigger regeneration of all files generated using that file.
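
To illustrate, here is a minimal sketch of a prompt file with such an explicit version (the file name and prompt text are made up, and I'm assuming the AIGenVersion marker can sit on any line of the file):

AIGenVersion(2.3)
Create a single HTML page from the following description: ...

Bumping that number to e.g. 2.4 then triggers regeneration of all files generated using this prompt, while smaller edits to the prompt text do not.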

Background: how it talks to the AI

If you give the option -v, it'll print the request sent to the AI. The tool uses the chat completion API of OpenAI, or a similar interface of other LLMs, and follows my "put it into the AI's mouth" pattern: we send a made-up "conversation" whose logical continuation would be the AI printing the required content of the output file. In the actual request, the placeholders below are of course replaced by the real file contents:

---------- system ----------
(This is replaced by some basic instructions the AI should follow)
---------- user ----------
Retrieve the content of the input file 'input1'
---------- assistant ----------
(This is replaced by the content of the input1 file.)
---------- user ----------
Retrieve the content of the input file 'input2'
---------- assistant ----------
(This is replaced by the content of the input2 file.)
---------- user ----------
(Here come the instructions from the prompt file which the AI should follow.)

(BTW: you can think of that as a kind of "conversation engineering", sort of an extension of "prompt engineering".)

It is also possible to give the current output file to the AI as an additional input, so that it can check and update the file according to the new input files. In that case, the user message announcing that file would be e.g.:

Retrieve the current content of the output file 'output.txt'. 
Later you will take this file as basis for the output, check it and possibly modify it, 
but minimize changes.

(Note that the AI doesn't actually retrieve those files; the tool does that before contacting the AI. The conversation is, however, structured as if the AI did retrieve them, since that nicely structures the conversation and makes it less likely that the AI confuses file content with instructions - aka prompt injection.)

How to get the tool

The source is available on GitHub; see the releases for the latest version and instructions on how to install it. A zip file containing a Java jar file and the shell script aigenpipeline to run it is available from the central Maven repository. Full instructions are available on the project site; short instructions are printed when calling the tool with the -h / --help option.
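
On Linux or macOS, getting started might look roughly like this (the actual zip file name depends on the release you download):

unzip aigenpipeline-*.zip
./aigenpipeline -h

The zip contains the jar and the aigenpipeline shell script; put the script somewhere on your PATH for regular use.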

Examples for using the tool

The folder examples in the sources contains various examples: simple rewriting, text translation, implementing a domain-specific language, generating code from examples, creating an actual mini application from a specification, and accessing different LLMs (OpenAI, Anthropic Claude, local models).

Additional functionality

The tool can also replace parts of files instead of regenerating a whole file. This makes it easy to mix manually written and AI-generated content in one file, possibly coming from several sources. For example, you can generate a table from a JSON file and embed it into a document: you can freely edit the document, but use the tool to update the table whenever the JSON file changes.

If the tool generates something you don't understand, you can also ask it: just run the last command line again with --explain and your question. Instead of regenerating the output file, the tool reproduces the chat that was used to generate the output file and adds your question to it. That can also be used to have the AI suggest improvements to the prompts.
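
Taking up the introductory example, such a call might look like this (the question is made up, and I'm assuming --explain takes the question as its argument):

aigenpipeline -p markdowntableprompt.txt -o AppConfigAttributes.md AppConfig.java --explain "Why is the table missing some attributes?"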

Planned functionality

Since I'm using it more and more, I get a fresh, interesting idea for it now and then. One thing I'd like to do soon is to embed the prompt and command line arguments into the output file itself, instead of having to use a separate prompt file and a shell script. So you'd basically declare in a file that it, or a part of it, should be generated, and from which inputs and prompts.