Error deserializing a remote exception from worker 2

Since yesterday morning, I have kept getting the following error (or something similar) every time I try to run anything that involves a Figure in a Pluto notebook. The notebook also takes a very long time to load.

I have tried updating the Pluto environment (roughly as sketched below), with no success, and I have also carried out some other updates. Has anyone encountered something similar?
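For context, the update I tried was along these lines (a minimal sketch, assuming Pluto is installed in the default global environment; the exact environment may differ on your machine):

using Pkg
Pkg.update("Pluto")    # pull the latest registered Pluto release into the active environment
Pkg.status("Pluto")    # confirm which Pluto version is now installed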

Note that the same Pluto notebooks were running with no problems just a few days ago.


Error deserializing a remote exception from worker 2
Remote(original) exception of type MethodError
Remote stacktrace :
Stacktrace:
  [1] #56
    @ C:\Users\kurtj\.julia\packages\Pluto\9zGI7\src\runner\PlutoRunner.jl:993
  [2] iterate
    @ .\generator.jl:47 [inlined]
  [3] collect_to!
    @ .\array.jl:892
  [4] collect_to_with_first!
    @ .\array.jl:870 [inlined]
  [5] _collect
    @ .\array.jl:864
  [6] collect_similar
    @ .\array.jl:763
  [7] map
    @ .\abstractarray.jl:3282
  [8] #format_output#52
    @ C:\Users\kurtj\.julia\packages\Pluto\9zGI7\src\runner\PlutoRunner.jl:992
  [9] formatted_result_of
    @ C:\Users\kurtj\.julia\packages\Pluto\9zGI7\src\runner\PlutoRunner.jl:864
  [10] top-level scope
    @ none:1
  [11] eval
    @ .\boot.jl:385
  [12] #invokelatest#2
    @ .\essentials.jl:887
  [13] invokelatest
    @ .\essentials.jl:884
  [14] #110
    @ C:\Users\kurtj\AppData\Local\Programs\Julia-1.10.0\share\julia\stdlib\v1.10\Distributed\src\process_messages.jl:286
  [15] run_work_thunk
    @ C:\Users\kurtj\AppData\Local\Programs\Julia-1.10.0\share\julia\stdlib\v1.10\Distributed\src\process_messages.jl:70
  [16] #109
    @ C:\Users\kurtj\AppData\Local\Programs\Julia-1.10.0\share\julia\stdlib\v1.10\Distributed\src\process_messages.jl:286
…and 1 more exception.

Stacktrace:
  [1] deserialize(s::Distributed.ClusterSerializer{Sockets.TCPSocket}, t::Type{CapturedException})
    @ Distributed C:\Users\kurtj\AppData\Local\Programs\Julia-1.10.0\share\julia\stdlib\v1.10\Distributed\src\clusterserialize.jl:225
  [2] handle_deserialize(s::Distributed.ClusterSerializer{Sockets.TCPSocket}, b::Int32)
    @ Serialization C:\Users\kurtj\AppData\Local\Programs\Julia-1.10.0\share\julia\stdlib\v1.10\Serialization\src\Serialization.jl:878
  [3] deserialize(s::Distributed.ClusterSerializer{Sockets.TCPSocket}, t::DataType)
    @ Serialization C:\Users\kurtj\AppData\Local\Programs\Julia-1.10.0\share\julia\stdlib\v1.10\Serialization\src\Serialization.jl:1513
  [4] handle_deserialize(s::Distributed.ClusterSerializer{Sockets.TCPSocket}, b::Int32)
    @ Serialization C:\Users\kurtj\AppData\Local\Programs\Julia-1.10.0\share\julia\stdlib\v1.10\Serialization\src\Serialization.jl:878
  [5] deserialize
    @ C:\Users\kurtj\AppData\Local\Programs\Julia-1.10.0\share\julia\stdlib\v1.10\Serialization\src\Serialization.jl:814 [inlined]
  [6] deserialize_msg(s::Distributed.ClusterSerializer{Sockets.TCPSocket})
    @ Distributed C:\Users\kurtj\AppData\Local\Programs\Julia-1.10.0\share\julia\stdlib\v1.10\Distributed\src\messages.jl:87
  [7] #invokelatest#2
    @ .\essentials.jl:887 [inlined]
  [8] invokelatest
    @ .\essentials.jl:884 [inlined]
  [9] message_handler_loop(r_stream::Sockets.TCPSocket, w_stream::Sockets.TCPSocket, incoming::Bool)
    @ Distributed C:\Users\kurtj\AppData\Local\Programs\Julia-1.10.0\share\julia\stdlib\v1.10\Distributed\src\process_messages.jl:176
  [10] process_tcp_streams(r_stream::Sockets.TCPSocket, w_stream::Sockets.TCPSocket, incoming::Bool)
    @ Distributed C:\Users\kurtj\AppData\Local\Programs\Julia-1.10.0\share\julia\stdlib\v1.10\Distributed\src\process_messages.jl:133
  [11] (::Distributed.var"#103#104"{Sockets.TCPSocket, Sockets.TCPSocket, Bool})()
    @ Distributed C:\Users\kurtj\AppData\Local\Programs\Julia-1.10.0\share\julia\stdlib\v1.10\Distributed\src\process_messages.jl:121

Hi @kurtspiteri, do you have a small example we could use to try to reproduce the issue? Also, what’s the output of running versioninfo()?
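Something like the following in the Julia REPL should print the relevant details (the Pkg.status call assumes Pluto is installed in your default environment):

using InteractiveUtils   # provides versioninfo(); already loaded in an interactive REPL
versioninfo()            # Julia version, OS, CPU, and threading info
import Pkg
Pkg.status("Pluto")      # the Pluto version currently installed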

Good evening, and thank you for your reply. I was running Julia version 1.10, with the following Pluto notebook:

I uninstalled and reinstalled Julia (I am now running version 1.10.2). The original issue seems to be gone for now, though I do not know whether this has fully solved the problem.