Hi asprionj - I have a similar desired use case: compiling a shared library and then calling that library from Julia. I assume that, since you're now using JuliaC to create a shared library, your use case no longer involves calling that library from within Julia? Because this still seems to be an open issue.
Hi, I don’t know about the status of allowing multiple Julia runtimes at the same time. I vaguely remember reading something about it, in the context of more restrictive (or just “clean”) bindings, but I’m definitely not able to answer questions on this topic…
And yes, my use case is to use the created library from a different environment / language (e.g. Java), not to call it from within Julia again.
Hi everyone, a quick update about the segfault issue is to be found here: HiGHS instance and model when used in a shared library - #5 by asprionj
Seems to be a fundamental issue between Java and Julia…
Hi, I’ve got the same questions concerning threading when embedding a PackageCompiler generated shared library within Java.
But what might help you, as it helped me, is to make sure that Java signals do not interfere with Julia:
```bash
# Signal chaining - MUST be set before the JVM starts.
# Both Julia and the JVM use SIGSEGV internally; without this,
# Julia intercepts JVM signals.
LIBJSIG="/usr/lib/jvm/java-25-openjdk-amd64/lib/libjsig.so"
if [ -f "$LIBJSIG" ]; then
    export LD_PRELOAD="$LIBJSIG"
else
    echo "Warning: libjsig.so not found at $LIBJSIG"
    echo "Julia/JVM signal conflicts may cause crashes."
fi
```
Make sure to adjust to your JDK/distribution. For more on this see: Julia, The JVM, and Signals
And ChatGPT’s verdict:
JuliaC.jl makes building and bundling a Julia-based shared library (with exported C entrypoints) much easier and more distributable, but it does not remove or change the core embedding restrictions you’ll face when loading a Julia runtime into the JVM (single init, thread affinity, signal/runtime interactions).
I’m starting to believe that it’s easier to accept some kind of IPC mechanism and a separate process than to try to force Julia into something it’s not designed to be.
I’d like to learn about this more. Does anyone have tips, tutorials, and suggested libraries for calling Julia via IPC?
A full-blown example: GitHub - ufechner7/pykitemodels: Kite power system models for Python
In this package, I wrap a Julia package using JSON over HTTP for use in Python.
The call overhead is about 100µs. (Of course, it depends on the size of the data you exchange and on the speed of your CPU).
See also:
While this is not the fastest possible approach, it is the simplest and most straightforward one. On the Python side you do not need any extra package, because HTTP and JSON support are included by default. All you need to do is define a struct with the parameters of the function you want to call.
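For illustration, a minimal sketch of the Julia side of such a wrapper, assuming HTTP.jl, JSON3.jl and StructTypes.jl; the `SimParams` struct, the `simulate` function and the port are hypothetical placeholders for whatever you actually wrap:

```julia
using HTTP, JSON3, StructTypes

# Hypothetical parameter struct; adjust the fields to the function you wrap.
struct SimParams
    wind_speed::Float64
    duration::Float64
end
StructTypes.StructType(::Type{SimParams}) = StructTypes.Struct()

# Stand-in for the actual package functionality.
simulate(p::SimParams) = (energy = 0.5 * p.wind_speed^2 * p.duration,)

function handler(req::HTTP.Request)
    params = JSON3.read(req.body, SimParams)   # parse the JSON body into the struct
    return HTTP.Response(200, JSON3.write(simulate(params)))
end

HTTP.serve(handler, "127.0.0.1", 8080)         # blocks; the client just POSTs JSON here
```

On the Python side, a plain `urllib.request` POST with a JSON body is then all that is needed.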
Another alternative is to export Julia models using FMIExport. You then get an FMU component that can be used in Python or Simulink or whatever. But these components are very large.
My tip is to use AppBundler.jl to pass notarization on macOS. You can also copy its approach manually.
Thanks @laborg, we will surely give this a try!
We basically also went for separate processes, just to be able to continue working. Concerning how to do inter-process communication (IPC): there are many options, and on the Julia side, InterProcessCommunication.jl covers many of them.
An HTTP server surely is “convenient”, but it pulls in a ton of dependencies, which one might not want when the goal is a slim (and probably --trim-able) library.
Shared memory would probably be a better option. And, for a simple “poor-man’s version” of it, you can just use files: on hard disk, if the calling frequency is low and latency is not an issue, or on a ramdisk, which basically turns this into in-memory communication as well.
Simply put:
- Have a `while true` loop that checks for the existence of a file (and/or its `flock` status). If the file is there (or not locked), read the input from it, execute the actual functionality, write out an output file, and continue (see the sketch after this list).
- If there is no file and thus no calculation to be done, just `sleep` for e.g. a few milliseconds (to not block computational resources).
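A minimal sketch of such a polling worker in Julia; the file paths and the `process_request` function are hypothetical placeholders:

```julia
# Hypothetical, agreed-on file locations for the exchange.
const REQUEST_FILE  = "/tmp/ipc/request.dat"
const RESPONSE_FILE = "/tmp/ipc/response.dat"

process_request(input::String) = uppercase(input)   # stand-in for the real work

while true
    if isfile(REQUEST_FILE)
        input = read(REQUEST_FILE, String)
        rm(REQUEST_FILE)                            # consume the request
        tmp = RESPONSE_FILE * ".tmp"
        write(tmp, process_request(input))
        mv(tmp, RESPONSE_FILE; force=true)          # atomic rename, see below
    else
        sleep(0.005)                                # a few milliseconds, don't busy-wait
    end
end
```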
On Linux, the mv or rename operations on a file are atomic, so you can just write to a temporary file, then mv it to the agreed-on location. That way, in a single atomic operation, the file starts to exist and can be directly and safely read by the other side.
On Windows, you have to get a bit more creative: try ... catch when attempting to open the file, and possibly add some safeguard in the content. For example, agree on a terminating sequence in the file content, and always check that it is there before actually using the data (see the sketch below).
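A sketch of that safeguard, with a hypothetical end marker:

```julia
const TERMINATOR = "\n<END>\n"      # agreed-on end marker; an arbitrary choice

# Writer: append the terminator after the payload.
write_request(path, payload) = write(path, payload * TERMINATOR)

# Reader: only accept the file once the terminator is present.
function try_read_request(path)
    isfile(path) || return nothing
    content = try
        read(path, String)          # may fail while the writer still holds the file
    catch
        return nothing
    end
    endswith(content, TERMINATOR) || return nothing   # still being written
    return chopsuffix(content, TERMINATOR)
end
```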
ZMQ.jl has only minimal dependencies
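For reference, a minimal request/reply server with ZMQ.jl; the port and the echo “protocol” are arbitrary choices for the sketch:

```julia
using ZMQ

ctx  = Context()
sock = Socket(ctx, REP)                 # reply socket: recv a request, send an answer
ZMQ.bind(sock, "tcp://127.0.0.1:5555")

while true
    msg = recv(sock, String)            # blocks until a request arrives
    send(sock, "echo: " * msg)          # the actual functionality would go here
end
```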
Yes, a message queue is a good way to do IPC if the interaction is more complex. If there is only sequential interaction between two entities, it might be overkill, though. You then basically have three async engines: one in the messaging library (such as ZeroMQ), and one in each of the two parties that actually communicate. With shared-memory IPC (or a webserver), you don’t have the additional dependency/tool (the messaging library) with its async engine.
I’ve not used them with Julia specifically yet, but pipes provide low latency and good throughput for IPC/RPC-typical message sizes:
goldsborough/ipc-bench: Benchmarks for Inter-Process-Communication Techniques
Evaluation_of_Inter_Process_Communication_Mechanisms.pdf
Generally, separate processes provide a lot of advantages for enabling polyglot compositions of tools (a la Unix), as well as ease of packaging and deployment. About the only case I’ve run into where you need to be in-process is something like sharing an OpenGL context, where you need the actual pointer/handle.
Thanks for the suggestion. I found a useful post about using named pipes in Julia:
Just note that this would be Unix-specific, since it uses mkfifo to set up the named pipe. You might also want to run a higher-level protocol, like JSON-RPC, over it, depending on your use case.
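A minimal Unix-only sketch: create the FIFO once with mkfifo, then both sides treat it like an ordinary file. The path is an arbitrary choice, and in practice the reader and writer would be separate processes:

```julia
pipe_path = "/tmp/julia_ipc_pipe"
isfifo(pipe_path) || run(`mkfifo $pipe_path`)   # create the named pipe once

# Reader side: open() blocks until a writer connects.
@async open(pipe_path, "r") do io
    for line in eachline(io)
        println("received: ", line)
    end
end

# Writer side: a plain file write.
open(pipe_path, "w") do io
    println(io, "hello over the fifo")
end
```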
For the higher-level protocol: consider Protocol Buffers; Julia support is provided by ProtoBuf.jl. Works fine. Binary wire format, so compact and fast to (de)serialize. Also, the native data types and their (de)serialization code are auto-generated.
Just note that it is a pure message format, not an RPC. There is an RPC built on top of protobuf (gRPC), but ProtoBuf.jl does not (yet?) support that. So, if you want to use it, make sure to prepend the length information, e.g. in the first four bytes. Then, always read those first, so you know the length of the message and thus how large an IO buffer to read (see the sketch below).
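A sketch of that length-prefixed framing, assuming ProtoBuf.jl’s encoder/decoder API and a message type `MyMsg` generated from your .proto file:

```julia
using ProtoBuf   # assumes the code for MyMsg was generated from a .proto file

# Write one message: 4-byte big-endian length prefix, then the payload.
function write_framed(io::IO, msg)
    buf = IOBuffer()
    ProtoBuf.encode(ProtoEncoder(buf), msg)
    payload = take!(buf)
    write(io, hton(UInt32(length(payload))))
    write(io, payload)
    flush(io)
end

# Read one message back: length first, then exactly that many bytes.
function read_framed(io::IO, ::Type{T}) where {T}
    len = ntoh(read(io, UInt32))
    payload = read(io, Int(len))
    return ProtoBuf.decode(ProtoDecoder(IOBuffer(payload)), T)
end
```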
This is sooo much more complicated to use than what I suggested with HTTP and JSON…
You need:
- an extra .proto file
- an extra compiler for the .proto files
- an extra compilation step, both for client and server
- extra client libraries
Of course, there might be use cases where this effort is justified.
This is actually really simple. I implemented it today, and ProtoBuf.jl does what it should do without any hiccup. You get well-defined, versionable messages, and it’s more efficient. I also thought this would be complicated, but I won’t look back…
As @laborg already indicated, usage is rather simple. As a response to your individual points: with HTTP and JSON, you need:
- A separate definition of the JSON structure/objects the API uses, which is NOT executable.
- Two developers (one on each side) who actually stick to those definitions.
- An extra development step for every simple change in these definitions.
- Two extra libraries (HTTP and JSON).
And then in addition:
- Think about the complexity of an entire webserver you add.
- Think about everything that can go wrong with a text-based message format (JSON).
- Performance, esp. for larger data: parsing JSON is anything but simple and “free”. I did some comparisons a while ago (all in Julia) and got, for a typical test case:
```
  1.288 ms (35973 allocations: 3.12 MiB)    # fastest JSON library (out of pre-2.0 JSON.jl, JSON3.jl, Serde.jl)
  4.409 ms (128357 allocations: 5.81 MiB)   # BSON
109.900 μs (2768 allocations: 350.97 KiB)   # protobuf
```