I cloned the Julia repo to my SmartOS box, intending to build it. I found out that:
it has a Makefile, not a configure script or a CMakeLists.txt file or the like;
it failed to build;
somewhere in the source code, there’s a file with lots of #defines relating to specific operating systems.
Is there a porting guide for people who want to get Julia to run on new OSes or processors? Is someone willing to get it to build with CMake or some such build system, to make it easier to port?
Besides illumos, I’d like to get it to run on DragonFly BSD, and I’ve been wanting to buy a big-endian Power box (Talos II, unless they come out with something new by the time I have enough money) running one or more of Adélie Linux (musl), FreeBSD, or OpenBSD to test portability of software I write.
Is someone willing to get it to build with CMake or some such build system, to make it easier to port?
Before starting actual CMake support work, I’d suggest opening an issue or posting a discussion on discourse.
Julia does not currently support builds using MSVC.
As a Windows user, I’m interested in adding support for CMake and similar cross-platform build systems.
[root@orshemesh ~]# pkgin se libuv
R-fs-1.6.3 Cross-platform file system operations based on 'libuv'
libuv-1.48.0 = Cross-platform asynchronous I/O
lua51-luv-1.43.0.0nb1 Bare libuv bindings for Lua
lua52-luv-1.43.0.0nb1 Bare libuv bindings for Lua
lua53-luv-1.43.0.0nb1 Bare libuv bindings for Lua
lua54-luv-1.43.0.0nb1 Bare libuv bindings for Lua
py310-uvloop-0.20.0 Fast implementation of asyncio event loop on top of libuv
py311-uvloop-0.20.0 Fast implementation of asyncio event loop on top of libuv
py312-uvloop-0.20.0 Fast implementation of asyncio event loop on top of libuv
I develop on Linux and have compiled some of my C++ projects on Windows, using Ninja (which was a lot faster than NMake when I first tried it) and the MSVC compiler. I’ve used the MSVC GUI to debug programs, but I build them on the command line. I’ve also used GCC on Windows; IIRC GCC has 10-byte (80-bit) long doubles but MSVC doesn’t.
The discussion on GitHub mentions Meson. I have no experience with Meson, so I can’t compare it with CMake, but if Meson makes cross-platform building easier, it’s fine with me. Meson depends on Python; CMake doesn’t.
I understand that sentiment; it is born of the feeling “Wouldn’t my life be much easier if Julia used a standard build system, like CMake or Meson?”
I am a big fan of Meson (and, after years of fighting with CMake for LLVM, I have a strong dislike for CMake).
But it is important to ask the question: why didn’t the Julia developers use CMake? We really have three problems to solve:
Dependency management (and control). There is tight coupling between a few dependencies (most notably LLVM and libuv), and we want to minimize the possible configuration set instead of having developers run into the same bugs multiple times just because they don’t yet have a patch for a dependency.
We need to build the Julia runtime and the compiler (this would be trivial for any build-system and even with Make it’s not a big deal)
We need to bootstrap Julia itself, the Julia standard lib etc…
If you propose switching the build system to Meson or CMake, you need to answer the question: how would this make things easier for everyone developing the Julia core?
Right now, as someone who just wants to work on a standard lib, all I have to do is enter “make” and I get a reasonable approximation of the build that our CI system would also produce.
If you want to port Julia to a new platform, the build system really shouldn’t be the hardest part; you will likely need to turn off the BinaryBuilder-provided caches and build everything from scratch.
I don’t suggest doing that, since it has already been done multiple times, and the outcome wouldn’t be much different from the last time it happened. Valentin elaborated a little more on the motivations.
I looked at the pull request and there’s a lot of files changed. Is there a list of steps to port Julia to a new platform? Try to compile, and if you get this error, do that?
How hard would it be to add powerpc64be, which I mentioned above? Powerpc64le is already available.
Big-endian is hard: we have no big-endian support at all, so it is not just a matter of porting to a new architecture, but also of rooting out all the small little-endian assumptions in the runtime, the language, and the Julia libraries.
Note that we currently don’t have an active maintainer for ppc64le and are considering dropping support for it due to the ongoing maintenance burden.
In principle, yes, but it does require profound knowledge of both Julia and PPC64. It is very unlikely that we will be able to support -BE, and there is very little economic incentive to do so.
[This does not apply to “SmartOS, with its remarkable blend of the illumos kernel’s power”, a fork of Solaris: even though Solaris was BE [on SPARC], SmartOS only supports LE. SmartOS might be ruled out for other reasons, though.]
Not just little incentive for BE on Power8+ (nor really for LE there), but no real incentive for any other BE arch, and supporting the first BE arch would be the most difficult, though not impossible…
The only reason I can see for supporting BE is mainframes (and I don’t think it’s worth it for Julia), i.e. IBM z/Architecture aka s390x (Rust supports it at tier 2, but then for Linux running on the mainframe). This does not apply to Power8 [Linux], since it’s bi-endian. SPARC and AIX are also big-endian, but dead.
We will likely never support z/OS (even if it and its hardware were to change to bi- or little-endian); and running Linux under it is neat, but I doubt it’s very useful in the cloud era.
Why don’t you just run that software on LE (POWER) Linux?
Intriguingly, IBM’s Endianness Guidance for Open-Source Projects [for targeting s390x] doesn’t even mention their POWER arch under either the big-endian or little-endian example architectures.
I’ve always felt the strength of the z architecture is plainly demonstrated in use of z/OS …
When using a z machine as a Linux server farm, those benefits are far harder to quantify.
I.e. this is ruled out, seemingly forever: IBM LinuxONE
IBM LinuxONE is an enterprise-grade Linux® server, powered by the IBM Telum® processor
No processor needs to be run in big or little endianness. The reason for that is simple: anything you can do in one mode, you can also do in the other. …
In ARM it seems that this instruction is called the rev instruction. Let me quote Peter Harris, a Mali GPU distinguished engineer at Arm:
I need to run big-endian code, but I don’t know how to set the endian option in the CP15 registers. Could anyone suggest how to set the EE bit?
It’s a very expensive way of doing it. If you are going to do functions to load single fields from memory then just use little-endian loads and use the “rev” instruction to reverse the result - it’s only one additional instruction and much faster …
ARM cores support both modes, but are most commonly used in, and typically default to little-endian mode. Most Linux distributions for ARM tend to be little-endian only. The x86 architecture is little-endian. …
Modern ARM processors support a big-endian format known architecturally as BE-8 that is only applied to the data memory system. Older ARM processors used a different format known as BE-32 that applied to both instructions and data. BE-8 corresponds to what most other computer architectures call big-endian.
Because that wouldn’t test big-endian portability; it would test only that it runs on Power, including that it works with 128-bit floats. (The only place the 128-bit vs. 80-bit float difference matters is in the test of the Euler spiral.)
Convertgeoid, one of the Bezitopo programs, reads and writes geoid files in various formats. One of them, NGS, is available in both big-endian and little-endian versions. The government agency that publishes it says that Unix is big-endian and Windows is little-endian. There are many little-endian Unices. I’ve tested convertgeoid on big- and little-endian files, but not on big-endian hardware.
PerfectTIN, which shares some code with Bezitopo, uses the Plytapus library, which I took over from a French Canadian developer and renamed, to read and write PLY files. PLY binary files can be big- or little-endian. Again, I’ve tested it on big-endian and little-endian files, but not on big-endian hardware.
That’s misinformed. I think you have a dream, and I want to kill that big-endian dream and tell you why. Which BE Unix do you think is important, and why? I’m just questioning that: why not just use an LE Unix/Linux?
Testing can only show the presence of bugs, never their absence. Even if you could fix such bugs (and support 128-bit floats) in Julia, even with some formal proof, you wouldn’t fix the potential bugs in the Julia package ecosystem. I think it’s just not worthwhile to even try. Note that in Java this might be easier, even already done (sandboxed Java can also call C; just less is done with JNI, which then has the same problem). But in Julia you have ccall, and it seems to me that if you can call C or Rust or whatever, then such code would have to be compiled for BE too, i.e. you duplicate all JLLs for at least the POWER platform, LE plus BE, assuming you have LE already; and I’m not even sure LE there is well supported in JLLs.
E.g. “big endian”, seemingly, format from 2023:
Also in the OK image format:
It’s worthwhile to support all file formats in Julia, on little-endian hardware. Many formats are big-endian, aka network order, for historical reasons; with little-endian hardware now more common, I’m confused why LE isn’t used, as it is for QOA, the Quite OK Audio format (and the related image format) I mentioned above. (QOA still needs to be wrapped for Julia, while QOI already has been: QOI.jl/test/runtests.jl at 76aab3535ed1ba05c711f6005344d5ef58ac6330 · KristofferC/QOI.jl · GitHub.) All formats, big or little, can be supported in Julia; I suppose the formats you mention store in the header which order is used. Or not, as with Photoshop, if I recall: it saved in big-endian on then/classic MacOS, but in little-endian on Windows, meaning the files couldn’t be opened on the other platform (or at least not without taking it into account). It must be a solved problem by now, somehow. Note that all current macOS hardware, for decades now, is little-endian.
So inform them. I doubt you’ll get an answer. I told them that the Alaska file and the Lower 48 file disagree at Kitimat. Never got an answer.
The build scripts of Bezitopo and PerfectTIN, which are in CMake, have endianness tests, and the code refers to the results of those tests. Those tests can’t be exercised with full coverage without running on big-endian hardware.