One additional issue here: I had been developing this code inside modules and importing the Tachikoma functions with `import Tachikoma: should_quit, update!, view, init!, cleanup!, task_queue` instead of doing `using Tachikoma`. It turns out that some of the exported functions clash with names in `Base`, especially `Base.view`.
I've stopped exporting the functions you need to extend, like `view` and others. You now need to `import` those definitions yourself; alternatively, I've added a convenience macro, `@tachikoma_app`, which does that for you.
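As a sketch, the two options might look like this (the function and macro names are taken from this thread; the exact API may differ):

```julia
using Tachikoma

# Option 1: explicitly import the functions you intend to extend,
# so your methods attach to Tachikoma's generics rather than
# accidentally shadowing or clashing with Base.view etc.
import Tachikoma: init!, update!, view, cleanup!, should_quit

# Option 2: let the convenience macro set up those imports for you:
@tachikoma_app
```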
I’ve updated docs to reflect this.
Thanks for your patience as I work out some of these bumps along the way.
Hey, for anyone who wants to try this on Windows: I've fixed a bug that was preventing output in Windows terminals. The Windows terminal support is not nearly as full-featured or robust as the offerings on Linux/Mac, where we have kitty, xterm, iTerm2, etc.
It may be difficult to support all the bells and whistles in these environments, but the basics are there. If anyone wants to explore what's possible and which terminal options are best in the Windows OS ecosystem, get in touch. My best recommendation right now would be to install WSL2 and get kitty running; I haven't tried it, but I assume it would work very well.
```
(jl_hfHFSl) pkg> add Tachikoma
   Resolving package versions...
ERROR: Unsatisfiable requirements detected for package Tachikoma [468859d6]:
 Tachikoma [468859d6] log:
 ├─possible versions are: 1.0.0 or uninstalled
 ├─restricted to versions * by an explicit requirement, leaving only versions: 1.0.0
 └─restricted by julia compatibility requirements to versions: uninstalled — no versions left
```
I am unable to install Tachikoma on Julia 1.10.10 on Windows 11. It seems like there is a problem with the dependency/compatibility requirements in the package metadata.
I'll take a look; the package just got put into the General registry this morning! I might need to produce a new release for the recent changes. But could you try adding it from the GitHub repo first? `add https://github.com/kahliburke/Tachikoma.jl`
Can you take a look with the 1.0.1 release and see what happens? If you get in touch, I'll also try to work through your issue 1:1. I now have a CI job on GitHub that builds against 1.10, so I don't know exactly where the issue is coming from, but I'd start by looking at your jl_hfHFSl project and inspecting what `status` returns. Could you remove any of these packages and add them again?
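For reference, those troubleshooting steps can be done with the Pkg API from a normal Julia prompt. This is a sketch: the project name comes from the error output above, and the path passed to `activate` is illustrative — point it at wherever that environment actually lives.

```julia
using Pkg

Pkg.activate("jl_hfHFSl")  # illustrative path to the affected project
Pkg.status()               # inspect which versions actually resolved
Pkg.rm("Tachikoma")        # drop the stale resolution
Pkg.add("Tachikoma")       # re-resolve; should now pick up the 1.0.1 release
```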
Was this developed from scratch using Claude or similar? Or is it an AI translation / port of some kind? I always thought one should use ncurses and such established libraries because correctly handling all the different terminals was so tricky. But maybe with AI it’s not so difficult anymore to cover all that code quickly? When I tried it in VSCode and wezterm it worked pretty flawlessly, even with mouse interaction. Nice to have this available in Julia!
I developed it from scratch, and I did use AI tooling. But I put a lot of thought into it and have tried to construct something fairly robust, with many tests. It is of course influenced by other existing TUI frameworks such as ratatui, but I wanted to add significant features on top. Awesome to hear that it worked flawlessly for you (or close to it!). I've done some stress testing with it, and with the right terminal support it can be really fast.
One example is the built-in screen-recording functionality: press ctrl-r (the default keybinding, which can of course be changed) and you'll start recording the interaction; press it again and you can export an animated GIF or SVG. The framework also produces a smaller 'source' file in what I've called the .tach format. This can be used to produce animations in various formats at a later time and is generally small, using zstd compression automatically.
I use this internally for producing all the documentation. Everything shown, from static GIFs of widgets to every single animation, demo, and app example, is generated 100% programmatically. Most of them are created directly from the Julia code snippets in the Markdown files. I did this so there would never be any question that whatever you see in the docs is 100% authentically from the framework.
One thing I've also found is that AI tooling can understand and interact with the framework quite readily. I can have an agent whip up a quick UI for various purposes in minutes. Combined with Kaimon.jl for the MCP connection, this is a pretty powerful way to construct software. Kaimon.jl itself uses Tachikoma.jl for its TUI, for example.
The sixel support works pretty well, but it's not that fast. There may be ways to optimize it further, but for better performance right now the kitty graphics protocol using shared memory is really nice. Here's a little project for fun that I was working on:
My main limitation for Windows support is that I don't run it or have access to a Windows environment at the moment. I realize I could set up some cloud server, but I just haven't gotten to it. Not a huge Windows fan, so my motivation is lower, to be honest.
But I will certainly support others who want to put in a little time to improve things there. There were a couple of minor issues in the first release, but it should work well now. There is an environment variable that Tachi uses to configure the graphics support. There is some attempt at auto-detection, but it's likely not working everywhere in all terminals. Try:
`export TACHIKOMA_GFX=sixel` (or the Windows equivalent) and see if you can get some of the demos that use it to work?
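For instance (the environment variable name comes from the post above; the PowerShell form is the standard Windows way to set an environment variable):

```shell
# Ask Tachikoma to use the sixel graphics backend (bash/zsh):
export TACHIKOMA_GFX=sixel
echo "$TACHIKOMA_GFX"   # prints: sixel

# PowerShell equivalent on Windows:
#   $env:TACHIKOMA_GFX = "sixel"
```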
Regarding AI-assisted building of UIs: how can one write interaction tests with Tachikoma? I mean faking user input and testing that it leads to the desired outcome. The closed feedback loop is important for getting good results from agents, and user interaction is always a bit trickier in that regard.
Sorry, not sure if I'm misunderstanding you or if you misunderstood my previous comment?
I was describing my experiences working with AI tooling and some of the testing capabilities that the framework offers. I was not trying to imply that a user would never need to test or interact with the resulting interface.
My point about the AI tooling is that you can generate prototypes rapidly and, through a feedback cycle, iterate on and refine them in a tight loop. The resulting interfaces are in a form amenable to automated unit testing, to ensure that specific inputs map to the correct behaviors. Having this capability built into the framework is a benefit, in my opinion. AI tooling can easily write and maintain these tests. The fact that the UI is text-based is an advantage, as it maps very well to an LLM's representation.
```julia
APP_EVENTS["my_app"] = EventScript(
    (1.0, key('r')),            # fire 1s from start
    rep(key('r'), 3),           # 3 more rolls, each 1s later → t=2, 3, 4
    (1.0, key('b')),            # bank 1s after the last roll → t=5
    pause(2.0),                 # wait 2s with no event → cursor at t=7
    seq(key(:down), key(:up)),  # two nav events, 1s apart → t=8, 9
)
```