Hi,
Most of the stories I have heard so far from people using Julia involve supercomputers.
Has anyone used Julia for serious computer science research or projects on just a standard laptop or desktop computer? I would like to hear your stories and experiences.
My actual interest is image processing; however, most work in high-impact publications and journals now uses deep learning models.
Thank you very much.
By far the majority of Julia work isn't on supercomputers. Those are more likely to get headlines, but at a very rough estimate, well over 90% of Julia users are almost certainly on something between a laptop and a single-node server. I probably wouldn't recommend ML on a laptop (thermal issues), but that's language-agnostic.
Hello @green and welcome. I might like to field that one. I work in building supercomputers for Dell. Not much direct Julia use to be honest.
I would call Julia "technical computing" - it is a tool used by scientists and engineers. A lot of work like that is begun on laptops and deskside computers. That sort of work actually continues on laptops/desksides if they are powerful enough for the work being done. Indeed, these days many companies use cloud computing to "scale out" simulation work like that.
I would say the reason you get the impression Julia is mainly used on supercomputers is that the case studies of supercomputer use are impressive - so they tend to grab attention. And why not?
There is also a huge category of "departmental supercomputers" which do not make the headlines but are my bread and butter - many technical companies run their own supercomputers which are not comparable in size to national or European-scale facilities, but they still get useful work done.
Now I open up the conversation - what features in Julia help the scientist/engineer migrate their code efficiently from laptops/desksides to clustered supercomputers?
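To make that concrete, here is a minimal sketch of the kind of portability I have in mind, using the standard Distributed.jl API (the `simulate` function and the worker count are just placeholders I made up):

```julia
using Distributed

# On a laptop: spin up local worker processes.
# On a cluster: replace this with an addprocs call from ClusterManagers.jl,
# which hands the same interface a set of remote nodes instead.
addprocs(4)

# Make the work function available on every worker process.
@everywhere function simulate(n)
    # stand-in for a real per-sample computation
    sum(sin, 1:n)
end

# pmap farms the iterations out to whatever workers exist,
# whether they live on this machine or across a cluster.
results = pmap(simulate, 1:100)
```

The point being that the script itself stays the same; only the way workers are added changes when you move from a deskside box to a cluster. I'd be curious which other features people lean on.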
Also, throwing a rock into the greenhouse - many workloads are embarrassingly parallel. You use a single server, maybe using all the cores with threading, but you don't use the low-latency fabric to compute across servers. We have servers today with easily a terabyte of RAM, 64-plus cores, and 4 or 8 GPUs. That is a lot of firepower before you even move beyond the box.
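As a rough sketch of what I mean by filling a single box first (assuming Julia is started with multiple threads; the workload here is a placeholder):

```julia
using Base.Threads

# Start Julia with e.g. `julia --threads=64` to use all the cores in the box.
n = 100_000
results = Vector{Float64}(undef, n)

# Each iteration is independent (embarrassingly parallel), so the loop
# splits cleanly across threads with no inter-node communication at all.
Threads.@threads for i in 1:n
    results[i] = sum(sqrt, 1:i)
end
```

No MPI, no low-latency fabric - just one fat node doing a lot of work.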