If you use AI coding assistants (Claude Code, Claude Desktop, VS Code Copilot), you’ve probably noticed they pay Julia’s startup and compilation costs on every single code execution and test run. That adds up fast during iterative development.
julia-mcp is an MCP server that keeps Julia sessions alive across calls. Variables, functions, and loaded packages persist between executions, and the assistant can iterate on code without waiting for recompilation each time.
Works for tests as well!
Key design choices:
Fully automatic — sessions start on demand, recover from crashes, and clean up on shutdown. The only setup is a one-time MCP registration.
Isolated per project and per AI session — each project directory gets its own Julia process. No cross-contamination between environments and sessions.
Pure stdio — no TCP ports or sockets to manage.
Revise.jl is loaded automatically if available, so code changes are picked up without restarting.
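To give a concrete sense of how these choices can fit together, here is a minimal Python sketch of a per-project session pool. It is illustrative only, not julia-mcp’s actual implementation: the helper names, the sentinel-based protocol, and the embedded Julia loop are assumptions made up for this example.

```python
# Illustrative sketch only; not julia-mcp's actual code.
import subprocess

# Tiny evaluator loop run inside each persistent Julia process:
# load Revise.jl if it is installed, then repeatedly read code until a
# sentinel line, evaluate it in Main, and print the result plus a done marker.
JULIA_LOOP = """
try; @eval using Revise; catch; end
while true
    buf = IOBuffer()
    while true
        line = readline(stdin)
        line == "<<<end>>>" && break
        println(buf, line)
    end
    try
        show(stdout, MIME"text/plain"(), include_string(Main, String(take!(buf))))
    catch err
        print(stdout, "ERROR: ", sprint(showerror, err))
    end
    println(stdout)
    println(stdout, "<<<done>>>")
    flush(stdout)
end
"""

# One long-lived Julia process per project directory.
sessions: dict[str, subprocess.Popen] = {}

def get_session(project_dir: str) -> subprocess.Popen:
    """Return the project's Julia process, starting or restarting it as needed."""
    proc = sessions.get(project_dir)
    if proc is None or proc.poll() is not None:  # not started yet, or it crashed
        proc = subprocess.Popen(
            ["julia", f"--project={project_dir}", "-e", JULIA_LOOP],
            stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True,
        )
        sessions[project_dir] = proc
    return proc

def run_in_session(project_dir: str, code: str) -> str:
    """Send code to the persistent session and collect output up to the done marker."""
    proc = get_session(project_dir)
    proc.stdin.write(code + "\n<<<end>>>\n")
    proc.stdin.flush()
    out = []
    for line in proc.stdout:
        if line.strip() == "<<<done>>>":
            break
        out.append(line)
    return "".join(out)
```

Because the Julia process stays alive between calls, anything defined or loaded in one call is still there for the next one, which is where the compilation savings come from.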
There are other projects in this space — MCPRepl.jl, REPLicant.jl, DaemonConductor.jl — each with a different take. julia-mcp’s angle is zero manual management: you register it once and forget about it.
Setup for Claude Code:
cd /any_directory
git clone https://github.com/aplavin/julia-mcp.git
claude mcp add --scope user julia -- uv run --directory /any_directory/julia-mcp python server.py
The server is written in Python (the MCP protocol ecosystem is most mature there); the only dependencies are uv and Julia itself.
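As a rough idea of what the stdio side of such a server looks like (again a sketch, not the actual server.py), the official MCP Python SDK keeps the wrapper thin. The tool name execute_julia is hypothetical, and run_in_session refers to the helper from the earlier sketch:

```python
# Illustrative sketch, continuing the session-pool example above.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("julia")

@mcp.tool()
def execute_julia(code: str, project_dir: str) -> str:
    """Run Julia code in the persistent session for the given project directory."""
    return run_in_session(project_dir, code)  # helper from the earlier sketch

if __name__ == "__main__":
    mcp.run()  # speaks MCP over stdio by default; no ports or sockets to manage
```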
I love seeing all the experimentation here. I tried a similar path (using a Jupyter kernel for a persistent Julia session with MCP), but ultimately I wanted a REPL that both I and Claude could interact with seamlessly. It turns out Claude is quite good at reading from and writing to tmux sessions, so I basically ended up with a skill and a few scripts that have it launch a juliarepl_project tmux session.
This is definitely more of a DIY approach, but I’d recommend it to people who enjoy heavily customizing their workflows.
I think the terminology surrounding MCP is a bit weird, to be honest. The “server” name especially can sound confusing in the common case where the LLM client starts a new “server” for each session anyway; the servers aren’t shared and just communicate through stdio.
Yes, the focus here is definitely not on a “shared” REPL between the LLM and a person. julia-mcp is intended to be a better (more efficient) way for the LLM to run Julia code and tests; the design goal is to be fully automatic, with no manual management.
Nothing more, nothing less (: