Are you on master? I had the same error with the tagged version, but master works for me.
Thanks, that helped, but I think there is a bug on Windows.
julia -J C:\Users\j\.julia\packages\PackageCompiler\oT98U\sysimg\sys.dll
ERROR: could not load library "C:\Users\j\.julia\packages\PackageCompiler\oT98U\sysimg\sys.dll"
The specified module could not be found.
This happens because the sys.dll depends on a non-existent msys-gcc-s-seh-1.dll, whilst the official sys.dll depends on libgcc_s_seh-1.dll (and on Windows one cannot just rename a dll).
I’m on Windows and it worked for me…
Hmm, can you check the dependencies of your sys.dll? I use Dependencies for that.
Not anytime soon, about to travel again.
OK, thanks anyway. I’ll figure it out.
Answering @JeffreySarnoff: it should be able to produce something even if some statements trip it … Fezzik compiles everything it can and prints a warning for anything that it cannot parse or compile.
If the process ended without an error, just tons of warnings, then the new system image should be more responsive, even though it failed to compile a few things.
This is why it is called a “brute force package compiler”.
Yah – how does one encourage it to fold in packages? Is it enough to do using Pkg1, Pkg2, or is it necessary to interact with each package we would precompile? And in good situations, how much more responsive is it [qualitatively]?
After the one-time setup, Fezzik traces all compiler activity. So after setup, just do something that you would normally do: run a simulation, plot something, load modules, interact with Pkg, read some help files, use Atom, etc. When you think you have covered everything, start a new Julia session and do brute_build_julia().
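A minimal sketch of that workflow (assuming the one-time setup call is Fezzik.auto_trace(), as in the readme; the packages in the middle session are just placeholders for whatever you actually use):

    # One-time setup (assumed to be Fezzik.auto_trace(), per the readme):
    using Fezzik
    Fezzik.auto_trace()

    # Restart Julia, then work as you normally would so the tracer sees it:
    using Plots                 # placeholder workload
    plot(rand(10))
    using Pkg; Pkg.status()

    # In a fresh session, bake everything that was traced into the sysimage:
    using Fezzik
    brute_build_julia()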
This process can be done several times. On Mac and Linux, Fezzik replaces the original sysimg with the modified one, so repeating the process is incremental. On Windows you will have to manually overwrite the sysimg (just follow the instructions at the end of the build process).
Considerably more responsive, to the point that I cannot work without it.
You are welcome to time it.
Quick questions @TsurHerman : if you do a Fezzik with, say, version 0.2 of your in-development package, and then you continue developing to version 0.3, can you update the brute-forced sysimg with the new version? Does the process build up any sort of cruft? Is performance the same if you Fezzik many times, overwriting stuff, than if you do it just once?
(I’m tentatively assuming the answers are yes, yes, and yes, but just to be sure…)
Actually, it is better to blacklist your in-development package (see the readme).
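Something along these lines (a sketch only; I am assuming the blacklist call takes the package name as a string, and MyDevPackage is a placeholder; check the readme for the exact form):

    using Fezzik
    Fezzik.blacklist("MyDevPackage")   # assumed signature; keeps the package out of the trace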
But let’s say you have a fairly stable package at version 0.2 and it is baked into the sysimg. Now let’s say you updated some minor things within that module and “re-evaluated” them using Juno’s built-in eval-in-module or by other means. Then there will be a compiler log of that compilation, and the next time you “Fezzik”, those statements will be compiled again, shadowing the previous version in the sysimg. So the answer is yes (probably, because I haven’t tested it … if you test it, please let me know whether this was true).
Usually the first brute-force build catches a lot of the big things; subsequent brute builds usually add some small stuff … interactions with Atom, with the help system, with git, etc. So even if your in-development module is not precompiled, it will load much faster, because all the little things it uses are already compiled.
In some versions of Julia 1.0.x, incremental building fails after a few sessions due to some dynamic initial state of packages, etc.
It is easy to just do revert() and start over.
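For completeness, the recovery step looks like this (revert() written as in the post; it restores the original system image):

    using Fezzik
    revert()    # back to the stock sysimage
    # ...then trace your usual workload and run brute_build_julia() again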
Thanks for pointing me to Revise; I think it’ll help a lot. I’ve also noticed massive differences in performance on different machines. Julia is really painful on my 2012 MacBook Pro (the smaller model with only two cores and 8 GB RAM), but it’s much faster on my 2015 MacBook Pro with four cores and 16 GB RAM. I imagine this is due to compilation going faster with more and higher-performance CPUs.
Code compilation doesn’t use multiple CPUs, so this is probably a mixture of an overall faster CPU core, higher IPC (instructions per clock), more powerful CPU instructions, and possibly (probably) faster RAM.
PackageCompiler is nice and all, but if we can compile these packages on our devices, which is slow and takes hours, why can’t the central repository simply store these precompiled packages and make them available to install?
Maybe a workaround could be to use RCall just for the plotting part. I haven’t tested it, though.
It might be doable, but it has a number of downsides. For one thing, it’d require storing 200 MB+ per binary. Multiply that by the number of platforms you wish to support, and most packages are already over 1 GB of storage requirements. Multiply that by the number of versions you want to support (at least one stable, and maybe one dev/master). Things get large, fast. I’m not sure whether free GitHub plans allow multiple GB of storage for each repo under a single user/organization.
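A quick back-of-the-envelope with the figures above (the platform and version counts are just illustrative assumptions):

    size_per_binary_mb = 200    # rough size of one compiled binary, per the estimate above
    platforms = 3               # e.g. Linux, macOS, Windows (assumed)
    versions = 2                # one stable plus one dev/master (assumed)
    total_mb = size_per_binary_mb * platforms * versions
    println(total_mb, " MB per package")   # 1200 MB, i.e. well over 1 GB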
And even if each package did this, now you’d have to open a new Julia session for each package that you want to load quicker. Want to load 2 or more packages quickly? You have to hope someone else wanted exactly that combination of packages built into a binary together, and built for your platform.