Let’s say you’re training a neural net (for example) and you want to see how the results look after each epoch. You can call imshow (from ImageView) on each epoch, but that opens a new window, displays the data statically, and then waits for you to close it, so you’d end up with lots of little windows. Is there a way to make it look like an animation within a single window?
You can do this kind of thing with PIL in Python (see the “Results” section of this Python GAN project, for example: https://github.com/shinseung428/gan_numpy). Is it possible to display dynamically changing data within a single output window with ImageView?
I usually visualize data with the GR backend to Plots.jl; it’s fast and doesn’t open more than one window. I can easily plot at 50+ Hz with this setup.
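For reference, here’s a minimal sketch of that approach (the specifics are my own illustration, not from the post above): with the GR backend, repeatedly displaying a plot reuses the same window, so a loop behaves like an animation.

```julia
using Plots
gr()  # select the GR backend; it reuses a single plot window

# Each heatmap replaces the previous frame in the same window
# rather than opening a new one.
for i in 1:100
    display(heatmap(rand(28, 28), title = "epoch $i"))
    sleep(0.02)  # roughly the 50 Hz update rate mentioned above
end
```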
I went with this approach based on the discussion in the link you included. This is the autoencoder example from Flux.jl’s model zoo, modified to display the data as it changes:
using Flux, Flux.Data.MNIST
using Flux: @epochs, onehotbatch, argmax, mse, throttle
using Base.Iterators: partition
using Juno: @progress
using CuArrays
using Images, ImageView
# Encode MNIST images as compressed vectors that can later be decoded back into
# images.
imgs = MNIST.images()
# Partition into batches of size 1000
data = [float(hcat(vec.(imgs)...)) for imgs in partition(imgs, 1000)]
data = gpu.(data)
N = 32 # Size of the encoding
# You can try to make the encoder/decoder network larger
# Also, the output of encoder is a coding of the given input.
# In this case, the input dimension is 28^2 and the output dimension of
# encoder is 32. This implies that the coding is a compressed representation.
# We can make lossy compression via this `encoder`.
encoder = Dense(28^2, N, relu) |> gpu
decoder = Dense(N, 28^2, relu) |> gpu
m = Chain(encoder, decoder)
loss(x) = mse(m(x), x)
img(x::Vector) = Gray.(reshape(clamp.(x, 0, 1), 28, 28))
function sample()
    # 20 random digits
    before = [imgs[i] for i in rand(1:length(imgs), 20)]
    # Before and after images
    after = img.(map(x -> cpu(m)(float(vec(x))).data, before))
    # Stack them all together
    hcat(vcat.(before, after)...)
end
#evalcb = throttle(() -> @show(loss(data[1])), 5)
s = sample()
guidict = imshow(s)
sleep(0.1) #<- why is this needed?
# callback function: print the loss (throttled) and refresh the display.
# Note: throttle must wrap a function once, outside the callback;
# calling throttle(@show(...), 1) would evaluate @show immediately
# and discard the throttled function it returns.
show_loss = throttle(() -> @show(loss(data[1])), 1)
function evalcb()
    show_loss()
    canvas = guidict["gui"]["canvas"]
    s = sample()
    imshow(canvas, s)
end
opt = ADAM(params(m))
@epochs 10 Flux.train!(loss, zip(data), opt, cb = evalcb)
That works, but what I don’t understand is why the sleep call after the initial imshow is needed. Without it, the popup graphics window created by that first imshow doesn’t show up until after the program has run. (BTW: I’m running this with julia -i autoencoder.jl.)
Here’s a smaller testcase that doesn’t use Flux:
using ImageView, Images
img = rand(640,480)
guidict = ImageView.imshow(img)
#sleep(0.1)
canvas = guidict["gui"]["canvas"]
for i = 1:100
    ImageView.imshow(canvas, rand(640,480))
end
If you leave the sleep commented out, you will only see the final canvas displayed; if you uncomment it, you’ll see each of the random matrices displayed. (Again, running this with julia -i imshow_test.jl.)
I’m wondering why that sleep is needed. When you call imshow, does it fork a process to display the child window, and since imshow returns immediately, the guidict isn’t yet set up (or something like that)?
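My guess (unverified, just an assumption on my part) is that it’s not a forked process but the GUI event loop: the window and its redraws are handled by background tasks, and sleep yields control to the task scheduler so those tasks get a chance to run. Under that assumption, yielding inside the loop should also make every frame visible, e.g.:

```julia
using ImageView, Images

img = rand(640, 480)
guidict = ImageView.imshow(img)
canvas = guidict["gui"]["canvas"]

for i = 1:100
    ImageView.imshow(canvas, rand(640, 480))
    sleep(0.01)  # yield to the task scheduler so the canvas can redraw
end
```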