EXCEPTION_ACCESS_VIOLATION when running ECOS from multiple threads

I am conducting a Monte Carlo analysis on a convex optimization problem using the ECOS solver, accessed through the thin wrapper provided by ECOS.jl. When I run the Monte Carlo simulations sequentially, everything functions correctly. However, as soon as I parallelize the code using the Threads.@threads macro, I encounter the following error:

Please submit a bug report with steps to reproduce this fault, and any error messages that follow (in their entirety). Thanks.
Exception: EXCEPTION_ACCESS_VIOLATION at 0x7038cd20 -- restore at C:\Users\Admin\.julia\artifacts\4f69c89ace9d623a1e0debfffea6c6f59b49b9b1\bin\libecos.dll (unknown line)
in expression starting at M:\Project\mytry.jl:11
restore at C:\Users\Admin\.julia\artifacts\4f69c89ace9d623a1e0debfffea6c6f59b49b9b1\bin\libecos.dll (unknown line)
unset_equilibration at C:\Users\Admin\.julia\artifacts\4f69c89ace9d623a1e0debfffea6c6f59b49b9b1\bin\libecos.dll (unknown line)
ECOS_cleanup at C:\Users\Admin\.julia\artifacts\4f69c89ace9d623a1e0debfffea6c6f59b49b9b1\bin\libecos.dll (unknown line)
ECOS_cleanup at C:\Users\Admin\.julia\packages\ECOS\woB52\src\gen\libecos_api.jl:249 [inlined]
macro expansion at M:\Project\mytry.jl:31 [inlined]
#3#threadsfor_fun#1 at .\threadingconstructs.jl:252
#3#threadsfor_fun at .\threadingconstructs.jl:219 [inlined]
#1 at .\threadingconstructs.jl:154
unknown function (ip: 000001bbcba77fdb)
jl_apply at C:/workdir/src\julia.h:2157 [inlined]
start_task at C:/workdir/src\task.c:1202
Allocations: 7413432 (Pool: 7413140; Big: 292); GC: 12

I created an MWE (this is the mytry.jl mentioned in the error above):

using LinearAlgebra
using SparseArrays
using ECOS

dtot = 300
ll = 300
ln = 100
ncones = 2


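# Each iteration builds a random conic problem, solves it through the low-level ECOS C API, and frees the workspace.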
Threads.@threads for i in 1:100

    c = randn(dtot,)

    M = [sparse(-1.0I, 3, 3) for i in 1:ncones]
    Gcones = [spzeros(3 * ncones, dtot - 3 * ncones) blockdiag(M...)]
    G = [sprandn(ln, dtot, 0.1); Gcones]
    h = [randn(ln,); zeros(3 * ncones,)]

    A = sprandn(ll, dtot, 0.1)
    b = randn(ll,)

    q::Vector{Int64} = 3 * ones(ncones,)

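    # ECOS takes G and A in compressed-column (CSC) form with 0-based indices,
    # hence the .- 1 on colptr/rowval; q holds the dimension of each second-order cone.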
    ECOS_work = ECOS.ECOS_setup(dtot, ln + 3 * ncones, ll, ln, ncones, q, 0,
        G.nzval, G.colptr .- 1, G.rowval .- 1, A.nzval, A.colptr .- 1, A.rowval .- 1, c, h, b)
    ECOS.unsafe_add_settings(ECOS_work, Dict([(:verbose, false)]))
    ECOSflag = ECOS.ECOS_solve(ECOS_work)
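    # Copy the solution out of the C workspace before ECOS_cleanup frees it.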
    ecos_prob = unsafe_load(ECOS_work)::ECOS.pwork
    y = copy(unsafe_wrap(Array, ecos_prob.x, ecos_prob.n))
    ECOS.ECOS_cleanup(ECOS_work, 0)
end

Do you have any suggestions about what could cause this error and how I can solve it?

I assume that this means ECOS is not thread-safe. If so, there is no work-around.

I thought the same, but I used to run ECOS in parallel in MATLAB and it worked fine. For this reason I was wondering whether I am missing something.
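For now I can fall back to process-based parallelism, which is closer to what MATLAB's parfor does (separate worker processes, each loading its own copy of libecos). A rough, untested sketch of that, reusing the same problem construction as the MWE above:

using Distributed
addprocs(4)                               # worker processes instead of threads

@everywhere using LinearAlgebra, SparseArrays, ECOS

@everywhere function solve_once(dtot, ll, ln, ncones)
    c = randn(dtot)
    M = [sparse(-1.0I, 3, 3) for _ in 1:ncones]
    Gcones = [spzeros(3 * ncones, dtot - 3 * ncones) blockdiag(M...)]
    G = [sprandn(ln, dtot, 0.1); Gcones]
    h = [randn(ln); zeros(3 * ncones)]
    A = sprandn(ll, dtot, 0.1)
    b = randn(ll)
    q = fill(3, ncones)                   # dimensions of the second-order cones
    work = ECOS.ECOS_setup(dtot, ln + 3 * ncones, ll, ln, ncones, q, 0,
        G.nzval, G.colptr .- 1, G.rowval .- 1, A.nzval, A.colptr .- 1, A.rowval .- 1, c, h, b)
    ECOS.unsafe_add_settings(work, Dict(:verbose => false))
    ECOS.ECOS_solve(work)
    prob = unsafe_load(work)::ECOS.pwork
    x = copy(unsafe_wrap(Array, prob.x, prob.n))
    ECOS.ECOS_cleanup(work, 0)
    return x
end

results = pmap(_ -> solve_once(300, 300, 100, 2), 1:100)

That said, I would still prefer a threaded version.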

Hmm. I can’t reproduce this on macOS.

Could you try:

Threads.@threads :static for i in 1:100

I tried adding :static and nothing changed: same error.
However, I ran the same code on a Linux machine (Ubuntu 22.04) and it works without any issues. Then I tried on another Windows computer and got the error again, so it seems to be OS-dependent.
Could it be something related to how ECOS is compiled on Windows?

Potentially. Our build script is very simple:

Our build is a bit out of date, but we’re not missing anything substantial:

I tried compiling ECOS myself and providing the resulting shared library as a custom binary (overriding the one shipped by ECOS_jll). I built it both with the same command Yggdrasil uses (i.e., with MSYS2 and make shared) and with the Intel compiler directly on Windows, the latter with some headache, since the CMake files in the GitHub repository are not ready to be used with icx. However, the error is still there.
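For anyone who wants to try the same thing, this is roughly how a local libecos can be swapped in; it is a sketch based on the JLL override-directory mechanism, and the exact layout may differ:

using Pkg
Pkg.develop("ECOS_jll")   # checks the JLL out to ~/.julia/dev/ECOS_jll

# Outside Julia: create ~/.julia/dev/ECOS_jll/override/bin and copy the custom
# libecos.dll into it (mirroring the artifact's bin/ layout), then restart Julia
# so the dev'd JLL uses the override directory instead of the downloaded artifact.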