Improving the experience of working with Julia packages with llms.txt

With the 1.0.0 rewrite of gRPCClient.jl, I realized that there are not going to be any examples of using the current version of the package in LLM training data. To bootstrap the package for use with LLMs / agents, I used Documenter.jl to combine all of my documentation Markdown files plus the inline docstrings into a single output Markdown file, which I named llms.txt. Essentially, the idea is to follow the loose standard defined here: https://llmstxt.org/

I then made sure that my documentation action copies the llms.txt to the root directory of the documentation site, i.e. https://juliaio.github.io/gRPCClient.jl/llms.txt.
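For anyone who wants to try something similar, the combine-and-copy step can be sketched roughly like this. This is not my actual CI configuration; the paths and file names are assumptions (a typical `docs/src` layout with a `docs/build` output directory), and the fixture pages exist only to make the sketch self-contained:

```shell
# Hypothetical sketch: combine documentation Markdown pages into a single
# llms.txt placed at the root of the built docs site.
# Paths (docs/src, docs/build) are assumptions about a typical Documenter.jl layout.
set -eu

# Fixture pages so the sketch runs standalone; in a real package these
# already exist under docs/src.
mkdir -p docs/src docs/build
printf '# Home\n\nIntro page.\n' > docs/src/index.md
printf '# API\n\nReference page.\n' > docs/src/reference.md

# Concatenate every Markdown source page into one file, separated by
# blank lines, and write it into the built site's root directory.
out=docs/build/llms.txt
: > "$out"
for f in docs/src/*.md; do
  cat "$f" >> "$out"
  printf '\n\n' >> "$out"
done
```

In a real setup you would run this (or the equivalent) after the Documenter.jl build, so the file is deployed alongside the generated HTML and ends up reachable at `<site root>/llms.txt`.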

Have any other package maintainers tried something like this before? Regardless of your personal feelings on LLM-assisted coding, improving the experience of using these tools to write Julia code seems like a clear win; I say that as someone who finds a great deal of satisfaction in working with Julia as a language, especially when I'm writing it by hand. Anything that could help drive adoption of Julia would be a good thing in my eyes. Keep in mind this could also help potential users discover available packages and have up-to-date information about them, even if they do not use LLMs to generate code beyond examples.

Once I have more experience with how well this works for gRPCClient.jl, I will report back with my findings.

Let's try to keep the discussion on topic here instead of letting it devolve into a general critique of "AI". There is a time and place for that, but it should be done in a separate thread.
