My laptop struggles a bit when I plot large numbers of points with Makie. It's not just while it's plotting: it carries on using the GPU after the plot has completed. I'm using Windows 10 on a laptop with a Core i7 processor and Intel UHD graphics. There's also an NVidia GeForce GTX 16 but I don't think that gets used in this case.
For example, if I plot:
```julia
fig = Makie.Figure()
ax1 = fig[1, 1] = Axis(fig)
ax2 = fig[2, 1] = Axis(fig)
hm = heatmap!(ax1, rand(8000, 257))
```
Then GPU usage is around 30%, according to Windows Task Manager, until the plot is closed. If I now add
```julia
pl = lines!(ax2, rand(2048000))
```
then GPU usage goes up to around 100% and doesn't go down until the window's closed. I realise that's a lot of points, and eventually I'll get round to reducing the number, but is it expected behaviour that the GPU carries on being used even when the plot isn't being updated? Curiously, in my actual application the plots get updated once a second and the GPU usage is actually less, at about 50%.
Rendering happens in a renderloop, so the plot gets redrawn ~30 times a second even if nothing changes.
You can try to play around with this function to change the behavior of the loop: GLMakie.jl/rendering.jl at 9fc8bd1b26c919aed155654424a473f1c973752a · JuliaPlots/GLMakie.jl · GitHub
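For later GLMakie versions there is a configuration entry point for this, so editing rendering.jl shouldn't be necessary — a sketch, assuming a version that exposes these keywords (the exact API has moved between `set_window_config!` and `activate!` across releases, so check the docs for your installed version):

```julia
using GLMakie

# Cap the renderloop at 10 frames per second. On some older versions
# the equivalent call was GLMakie.set_window_config!(framerate = 10.0).
GLMakie.activate!(framerate = 10.0)

# Alternatively, on versions that support it, only draw a frame when
# something actually changed:
GLMakie.activate!(render_on_demand = true)
```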
Thanks for the reply, that explains what I’m seeing. Julia’s been great for prototyping my application and I’d like to stay with it and not have to move to C++ if possible. Hopefully with fewer points the GPU usage will be bearable.
brings the GPU usage right down and the plot update is still impressively fast. It took me a while to find the function as I was using Makie instead of GLMakie directly, but it’s solved my problem. Thanks again - it’s a great package and the system of observables has made it easy to produce a real-time display.
Is it practical, or would it make sense, to have an adaptive framerate system in Makie by default? As in, bump the framerate when there's interaction or an animation running. The reason is that most of the time nothing is moving on most plots, so we could save energy/resources when it's not needed.
edit: the user would then be able to set the maximum framerate, or something along those lines.
I guess that you could handle this by controlling your observable updates.
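To illustrate that idea — a minimal sketch in plain Julia (no packages; the type and function names here are made up), where with Makie you would assign to the observable whenever `push_update!` returns `true`:

```julia
# Rate-limit updates: only forward a new value if enough time has
# passed since the last forwarded one; otherwise remember it for later.
mutable struct Throttled
    min_interval::Float64   # minimum seconds between forwarded updates
    last_update::Float64    # wall-clock time of the last forwarded update
    latest::Any             # most recent value seen (forwarded or not)
end

Throttled(min_interval) = Throttled(min_interval, -Inf, nothing)

function push_update!(t::Throttled, value)
    t.latest = value
    now = time()
    if now - t.last_update >= t.min_interval
        t.last_update = now
        return true    # caller should update the observable now
    end
    return false       # too soon; value is kept in t.latest
end

t = Throttled(0.5)               # at most two forwarded updates per second
first_ok  = push_update!(t, 1)   # forwarded
second_ok = push_update!(t, 2)   # arrives immediately after, so skipped
```

Observables.jl also ships a `throttle(dt, obs)` helper that packages up roughly this pattern.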
We've thought about this before. The requirement would be to intercept all observable signals that cause visual changes and hook them up to the refresh event somehow. The implementation could get really messy with our current attribute system, where every attribute has its own observable. We're already not so great at disconnecting stale connections to free up resources, and this would add to that problem for now.
Just to play devil's advocate for a moment: would it be OK to intercept all observable signals without trying to figure out whether they change the visualisation? I suspect the complexity lies in identifying whether drawing on a given signal would cause a (pardon the pun) observable change. It may be reasonable enough to redraw whenever you're interacting with a plot or adding values in a non-visible region, even if nothing visible is changing.
Hi @sdanisch. I think this problem has got a lot worse, for me at least, with the latest release. I tried
with different values of framerate using two versions of GLMakie and noted the GPU usage. The results were (approximately):
```julia
# GLMakie 0.2.9
# framerate=2,   GPU 1%
# framerate=10,  GPU 5%
# framerate=30,  GPU 8%

# GLMakie 0.3.2
# framerate=2,   GPU 45-100% (very variable)
# framerate=10,  GPU 83-100%
# framerate=30,  GPU 47-100%
# framerate=0.1, GPU 1-100% (window doesn't respond, unsurprisingly)
```
With the new version the GPU usage varies a lot where it was fairly steady with the old version. I’ve had to reduce the framerate to 2 on my real application (it was fine at 10) which is just about OK but makes it a bit unresponsive. I’m using Windows 10 and a UHD display with Intel UHD graphics 630 and NVidia GTX 1650 on a Dell XPS15.
Hm, maybe that's a regression, since we now render that heatmap as an 8000x257 quad mesh.
That’s 4,112,000 triangles, so quite a bit to chew for a GPU.
We should try to bring back the fast path that renders a regular heatmap as a single quad. Meanwhile, you can try using `image` instead, which should still render everything as a single quad.
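On the earlier example data the swap is a one-line change — a sketch, assuming current Makie keyword names (`interpolate` is the image attribute controlling smoothing between cells):

```julia
using GLMakie

fig = Figure()
ax = Axis(fig[1, 1])
data = rand(8000, 257)

# heatmap! tessellates one quad (two triangles) per cell here,
# i.e. ~4.1 million triangles for this array:
# hm = heatmap!(ax, data)

# image! uploads the matrix as a single texture on one quad instead.
# interpolate = false keeps the blocky cell-per-value look that
# heatmap gives by default.
img = image!(ax, data, interpolate = false)

fig
```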
Just confirmed: `image` stays at 1% GPU usage, while `heatmap` for the same array is at 100% for me.
That works fine. I confess I'm not sure what the difference between a `heatmap` and an `image` is anyway. In fact, the merged `DataInspector()` seems to work better with `image` as it gives x, y co-ordinates instead of indices into the data array.
IIRC the point of the heatmap/image differentiation was to simplify compatibility with other backends (like GR, not something as manual as Cairo or OpenGL).
`DataInspector` treats both the same; it goes off of `interpolation`. My thought was that with interpolation you probably want an exact position + color, and without it you probably want something discrete relevant to the cell you hover. Maybe getting the (discrete) position would be better here? Maybe both, e.g. `($x, $y, H[$i, $j] = $z)`?
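As an illustration of what such a combined label could look like as a format string — plain Printf, and `combined_label` is a made-up helper, not actual DataInspector code:

```julia
using Printf

# Hypothetical label builder combining continuous position, discrete
# index, and value, in the spirit of "($x, $y, H[$i, $j] = $z)".
function combined_label(name, x, y, i, j, z)
    pos = @sprintf("%0.4f, %0.4f", x, y)   # 4 decimal places for position
    "($pos, $name[$i, $j] = $z)"
end

combined_label("H", 1.23456, 7.89012, 12, 34, 0.5)
# → "(1.2346, 7.8901, H[12, 34] = 0.5)"
```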
I see. I hadn't noticed that `interpolation` is on by default for `image`. For my purposes I'd prefer the discrete position with `interpolation` off, but in other cases indices might be better, so having both would be good. Is it possible to have more significant figures in the position? I'm only getting 4 s.f., and when I zoom in on a high-resolution picture I could do with more. Having `DataInspector` available has made Makie much more convenient for me - thanks!
No, those are currently hard-coded. You could replace the method that generates the displayed string, though:

```julia
using Printf  # for @sprintf

function Makie.color2text(name, x::Float64, y::Float64, color)
    idxs = @sprintf("%0.6f, %0.6f", x, y)
    "$name[$idxs] = $(Makie.color2text(color))"
end
```
I had to change the function arguments to `Makie.color2text(name, x::Float32, y::Float32, c)` but it works. Thanks, that's amazing!