You’re getting a BoundsError, which I don’t think has anything to do with the @threads macro, unless you’re doing something not thread-safe like reading from and writing to the same memory location on different threads. Unfortunately, the line that errored (31) is not included in your screenshot, so I can’t say much else.
It might be helpful to condense your code into a minimum working example (MWE) that reproduces the error. Or at the least, post the full `DefineYnImproved` method in a code block, i.e.,

```julia
Your code here
```
I think it’s also recommended to copy full error messages and display them in a code block as well.
I noticed that the error message you pasted above failed on a different line, line 12:

```julia
Yn[i,i] += Dt ./ (2*netlist.l[m]);
```

instead of line 31.
Because each thread is working with a different `i`, the increment to `Yn` should be thread-safe.
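To illustrate why that pattern is safe, here is a minimal sketch of a per-row threaded update on a dense matrix. The sizes, `Dt`, and the `l` vector are made up, since the real `Yn` and `netlist` are not shown in the thread:

```julia
using Base.Threads

n = 4
Yn = zeros(n, n)            # stand-in for the real system matrix
Dt = 0.1
l = [2.0, 3.0, 4.0, 5.0]    # stand-in for netlist.l

@threads for i in 1:n
    # Each iteration writes only to row i of the dense matrix,
    # so no two threads touch the same memory location.
    Yn[i, i] += Dt / (2 * l[i])
end
```

Because every `(i, i)` slot already exists in the dense array, these writes are to disjoint, pre-allocated memory, which is the key property that makes the loop thread-safe.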
Looking more closely at the offending lines and the stack trace, it looks like `Yn[i,i]` (line 12) or `Yn[i,netlist.cNeg[m]]` (line 31) is being indexed out of bounds:
(The other indexing operations in those lines are on `Vector`s, not `Matrix`es.) So it appears to me that multithreading is not to blame; rather, you are initializing `Yn` in some way that doesn't mesh with `nbnodes` and/or `netlist.cNeg[m]`. It's hard to say more without knowing anything about `Yn`, `nbnodes`, and `netlist`, which appear to be global variables.
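For illustration (with a hypothetical size, since `Yn` isn't shown): indexing a `Matrix` with a column index of 0, or one past its size, throws exactly this kind of `BoundsError`, and `checkbounds` lets you test an index without triggering it:

```julia
Yn = zeros(3, 3)   # hypothetical 3×3 system matrix

Yn[2, 3]           # in bounds: fine
# Yn[2, 0]         # would throw BoundsError (column index 0)
# Yn[2, 4]         # would throw BoundsError (4 > size(Yn, 2))

# Test an index without erroring:
checkbounds(Bool, Yn, 2, 0)   # false
checkbounds(Bool, Yn, 2, 2)   # true
```

If `netlist.cNeg[m]` can be 0 (e.g. for a ground node), line 31 will fail the moment the loop reaches that `m`, regardless of threading.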
Right, `netlist.cNeg` is not 0, but its second element is:

```julia
julia> a = [8 0]; a
```
Yet it seems like I was wrong in my assessment, because I just copied and pasted all your code into the REPL and it ran to completion without any errors. (This is with @threads.) I’m not sure why it’s not working for you. My only guess is that you accidentally modified some global state during your testing. Either that or there really is some multithreading issue that isn’t easily reproduced.
Just recently, I found that the code with @threads runs successfully only occasionally and fails most of the time. It is quite confusing…
Each thread works on a different row of the `Yn` matrix, so I think it is already thread-safe.
If it had finished, it would have values at all indices 1:2:1000, hence length 500. On that run it crashed about 2/3 of the way through, but this will vary.
Yes. In a normal dense array there are pre-existing places to write. In a sparse array there are not: any write of a new nonzero may need to move other data around, or grow an array (i.e. copy it to a new, larger block of memory), and if different threads try this simultaneously you may get wrong answers even if you don't get an error.
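A small sketch of why (toy sizes, for illustration): setting a new nonzero in a `SparseMatrixCSC` grows its internal storage, which is not safe to do from several threads at once:

```julia
using SparseArrays

S = spzeros(4, 4)
nnz(S)             # 0 stored values – nowhere to write yet

S[1, 1] = 1.0      # inserting a value grows the internal
nnz(S)             # rowval/nzval arrays: now 1 stored value

S[2, 2] = 2.0      # grows (and may reallocate/shift) them again
nnz(S)             # now 2 stored values

# Two threads doing such insertions concurrently can corrupt the
# colptr/rowval/nzval invariants, giving wrong answers or a crash.
```

This is why the threaded loop that is perfectly safe on a dense `Yn` becomes a data race the moment `Yn` is sparse: the writes are no longer to independent, pre-allocated slots.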