Following the discussion in Running Julia from Java - What is crazier?, I am introducing a new project for calling Julia from within Java. Although it is in its early stages, the package implements basic external calling of Julia from Java, and a skeleton JSR 223 (Java scripting interface) implementation is ready for future development.
The package simply starts an external Julia process and injects a script that listens for TCP connections (multiple ones if needed) on a given port, accepts statements and expressions, and returns the results in JSON format. The JSON is then parsed into Java objects on the Java side. The performance overhead consists of the stream connections and JSON parsing (both encoding and decoding).
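For readers new to this style of bridge, here is a minimal sketch of what the client side of such an exchange looks like using only the standard library. The port number and the line-per-request wire format are illustrative assumptions, not JuliaCaller's actual API; the real package wraps this exchange for you.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;

public class JuliaTcpSketch {
    public static void main(String[] args) throws Exception {
        // Assumes a Julia process is already listening on localhost:8001
        // (the port is a placeholder) and answers one JSON line per request.
        try (Socket socket = new Socket("localhost", 8001);
             PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(socket.getInputStream()))) {

            // Send a Julia expression as a line of text.
            out.println("sqrt(2.0)");

            // Read the JSON-encoded result back and hand it to any JSON
            // parser on the Java side (org.json, Gson, Jackson, ...).
            String jsonReply = in.readLine();
            System.out.println("raw JSON from Julia: " + jsonReply);
        }
    }
}
```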
The JuliaCaller project is hosted in a GitHub repo under the Apache license.
Hi,
I’ve been digging into JuliaCaller a bit as I am comparing the various Julia-from-Java solutions out there, and I had a quick question: what made you go for TCP/JSON as opposed to “basic” process-to-process communication (i.e. via Java streams, as done in JaJuB)? Did you notice any benefit (in terms of coding, performance, etc.)? I’d see a benefit to the network layer if your Julia instances were indeed cloud-based, but they seem local as of now (due to the code injection?). But the JSON format sounds like a pain when having to transport several gigs of data between languages as plain text. Thoughts?
Edit: JSON does sound very appealing for handling dicts out of the box, though.
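To illustrate that point: a Julia `Dict` serializes naturally to a JSON object, which any Java JSON library turns into a map-like structure with no custom marshalling. A hedged sketch using org.json; the exact wire format the bridge emits is an assumption here:

```java
import org.json.JSONObject;

public class DictSketch {
    public static void main(String[] args) {
        // Suppose the Julia side evaluated Dict("a" => 1, "b" => 2.5)
        // and the bridge returned it as this JSON text (assumed format):
        String jsonFromJulia = "{\"a\": 1, \"b\": 2.5}";

        // One parser call yields a map-like object on the Java side.
        JSONObject dict = new JSONObject(jsonFromJulia);
        System.out.println(dict.getInt("a"));     // 1
        System.out.println(dict.getDouble("b"));  // 2.5
    }
}
```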
This is a fast, ready-to-go project for calling Julia from within Java, and I didn’t think it would be used in a project big enough to move gigabytes of data. I wanted to try what I did in the RCaller project, a library for calling R from within Java using the same logic.
Hi,
I am able to run JuliaCaller on my local system; I tried a couple of examples and they work fine.
But when I tried to run a .jl file that is on my local system, like this,