What do you mean by insanely slow? The time to generate the first plot in a new session, or the time to generate any plot (after initial compilation)?
If the issue is the former, look into custom sysimages (or maybe try nightly now that native code caching has landed).
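For the sysimage route, a minimal sketch with PackageCompiler.jl looks like this (the script name `precompile_plots.jl` and the output path are just examples, not fixed conventions):

```julia
# One-off build of a custom sysimage with Plots baked in, so the
# first plot in a new session is fast. The build itself takes minutes.
using PackageCompiler

create_sysimage(
    [:Plots];
    sysimage_path = "plots_sysimage.so",
    # A script exercising the plotting calls you want precompiled,
    # e.g. containing: using Plots; p = plot(rand(10)); display(p)
    precompile_execution_file = "precompile_plots.jl",
)
```

Then start Julia with `julia --sysimage plots_sysimage.so` and the first plot should appear almost immediately.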
If it’s the latter, notice that you are plotting 5 million points if my math is right, which is bound to take some time. You can look at GPU-accelerated plotting via GLMakie, or simply plot fewer points (there are only around 2 million pixels on a 1,920x1,080 screen, so you’re unlikely to distinguish 5 million points anyway!)
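A rough sketch of the GLMakie route, assuming GLMakie is installed and a working OpenGL context is available (the random-walk data is just a stand-in for the real series):

```julia
# GPU-accelerated scatter of millions of points with GLMakie.
using GLMakie

n = 5_000_000
x = range(0, 1; length = n)
y = cumsum(randn(n)) ./ sqrt(n)   # placeholder data: a random walk

fig = Figure()
ax = Axis(fig[1, 1])
# markersize is in pixels; this is the regime where the GPU backend
# tends to outperform CPU-rendered backends.
scatter!(ax, x, y; markersize = 1)
display(fig)
```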
You can try a different backend for Plots.jl to see which is faster; off the top of my head, I’m not sure which backend will be best for this task.
In my experience, one of the Makie packages (e.g. GLMakie or WGLMakie, as @nilshg suggested) will probably be best if you want faster plots with millions of points, though they have a higher time-to-first-plot (TTFP) than Plots.jl.
Do you really need to plot all those points? With that many ensembles, it is usually better to plot quantiles/means/etc.
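As a sketch of that idea, here is how you might summarize an ensemble with per-timestep mean and quantiles instead of drawing every trajectory (the `timesteps × trajectories` matrix layout is an assumption about how the paths are stored):

```julia
# Summarize an ensemble instead of plotting every trajectory.
using Statistics

nsteps, ntraj = 1_000, 100
paths = cumsum(randn(nsteps, ntraj); dims = 1)  # toy Brownian paths

# Per-timestep statistics across the ensemble:
m   = vec(mean(paths; dims = 2))
q05 = [quantile(view(paths, i, :), 0.05) for i in 1:nsteps]
q95 = [quantile(view(paths, i, :), 0.95) for i in 1:nsteps]

# Then plot 3 curves (or a ribbon) instead of 100, e.g. with Plots.jl:
#   plot(m; ribbon = (m .- q05, q95 .- m))
```

That turns 100,000 plotted points into 3,000 while keeping the information most readers actually take from the figure.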
I find the SciML ecosystem is the easiest way to get things done, and its plotting recipes work great with ensembles. And if you set up the Brownian motion as a drift-free SDE, there are better algorithms for simulating it.
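Roughly, the drift-free setup could look like this with DifferentialEquations.jl (solver and `dt` choices here are illustrative, not a recommendation):

```julia
# Drift-free Brownian motion as an SDE ensemble with SciML.
using DifferentialEquations

f(u, p, t) = 0.0        # zero drift
g(u, p, t) = 1.0        # unit diffusion -> standard Brownian motion
prob = SDEProblem(f, g, 0.0, (0.0, 1.0))

ensemble = EnsembleProblem(prob)
sol = solve(ensemble, EM(), EnsembleThreads(); dt = 0.001, trajectories = 100)

# The ensemble plot recipes then summarize instead of overplotting:
summ = EnsembleSummary(sol)
# plot(summ)   # with Plots.jl: mean with quantile ribbon, one band per variable
```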
The graphical subsystems of operating systems are overwhelmed by representing this number of points as lines (with typical attributes like line width, color, line type, line caps, etc.). At best, one could render the lines into an image in the simplest form with GR's internal functions, but it would make more sense to aggregate the data:
With this method, plots can then be created efficiently in all output drivers (SVG, PDF, PS, image formats), optionally also with the newly presented interaction options (zoom, pan, hover effects): `GRDISPLAY=plot julia ...`
@jheinen, thanks for your time and additional feedback. (My initial trials with enabling interactive zoom in GR failed on Windows, but that will be a separate thread.)
The original post had 100 different curves, with different colors.
If we stick to that scenario, why is GR so fast (~1 s or so) compared to the Plots.jl gr() backend? Is it auto-decimating the data?
On another note, it would be very useful to have an additional keyword argument in Plots.jl's `plot()` to turn automatic decimation on, or to plot big data at a user-specified step (every n-th data point). There is a related issue open.
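Until something like that keyword exists, the every-n-th-point behavior is easy to get by striding the indices yourself. A small sketch (`decimate` is a hypothetical helper name, not a Plots.jl API):

```julia
# Plot every n-th point by striding indices; no copy is made.
decimate(v::AbstractVector, n::Integer) = @view v[1:n:end]

x = range(0, 10; length = 5_000_000)
y = sin.(x)

n = 100                        # keep every 100th point
xs, ys = decimate(x, n), decimate(y, n)
# plot(xs, ys)   # 50_000 points instead of 5_000_000
```

Note that naive striding can drop narrow spikes between samples; a min/max-per-bucket aggregation is safer when the data has sharp features.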