How are the splines represented right now? For that case, it’d make sense to create a mutable struct Spline holding all the required data, including a good guess for the interval. From the user perspective, it’d be a matter of defining
Just my two cents here. All these alternatives can, in the end, be quite confusing for the user, because of the non-transparent handling of the initial approximation and the possible irreproducibility of results. I think a simpler way to deal with this would be to just have a struct that carries the data of a fit, and a function that updates the fit:
julia> struct MyFit{T}
           data::Vector{T}
           initial_approximation::Vector{T}
           # other data
       end

julia> MyFit(data::Vector{T}) where {T} = MyFit{T}(data, rand(T, length(data)))
MyFit

julia> function fit!(fit::MyFit)
           x_ini = fit.initial_approximation
           # do stuff and update the fit and the initial approximation
           return fit
       end
fit! (generic function with 1 method)

julia> my_fit = MyFit(rand(10)); # initialize

julia> fit!(my_fit) # fit - updating fit and initial_approximation

julia> fit!(my_fit) # do it again, as many times as needed
Then you can have multiple instances of fits of different things simultaneously without getting confused. With a callable struct you can also have that, but I don’t see any advantage in terms of user interaction.
With the struct it is also transparent that the user can copy a MyFit object and work with it independently, perhaps updating the data while keeping the initial approximation.
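For instance (a hypothetical one-liner, using the two-argument constructor Julia generates automatically for the struct), a fit with fresh data that reuses the previous initial approximation:

julia> independent_fit = MyFit(rand(10), copy(my_fit.initial_approximation));

The copy ensures the two fits don’t mutate the same approximation vector.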
@Vasily_Pisarev beat me by being concise. Note that the struct does not even need to be mutable if the contained data is mutable.
I don’t think making the reads and writes separately atomic is sufficient? Admittedly, the atomic API goes a bit over my head, but I imagine the following is still possible with your version:
Task 1 reads tmp1 = count[]
Task 2 reads tmp2 = count[]
Now tmp1 == tmp2
Task 1 writes count[] = tmp1 + 1
Task 2 writes count[] = tmp2 + 1
And thus you only incremented the counter by 1, not 2, even though there were two calls.
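For what it’s worth, a minimal sketch of the usual fix, assuming a Threads.Atomic counter (the name count is illustrative): the increment has to be a single atomic read-modify-write, not an atomic read followed by an atomic write.

const count = Threads.Atomic{Int}(0)

Threads.@threads for _ in 1:1000
    Threads.atomic_add!(count, 1)  # one indivisible read-modify-write, no lost updates
end

count[]  # == 1000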
Yes, you are right. This is why I typically just try to avoid stuff like atomics and write algorithms that don’t require one to reason about whether there’s a race condition or not.
So I guess that’s part of why an AtomicRef hasn’t been provided by Base: the getindex/setindex! interface isn’t sufficient.
I’ve edited my previous comment to not use a getindex/setindex! pattern.
I would disagree, because it could be misinterpreted that these are equivalents of static variables, which is not true. Besides the subtler scoping differences, none of these are static storage; that’s feasible for executables, not interactive processes. For the purpose of a method referencing an object across several calls, closures and global variables (possibly encapsulated in a submodule) are reasonable. I personally wouldn’t use vanishing local variables or @eval interpolation to reference mutable data like this, because that makes reflection much harder.
I hope that with appropriate wording, we can avoid this misunderstanding now that you have identified it. I can imagine that in at least some approaches, the memory will end up in static memory after (pre)compilation, possibly with --trim, if not already, then after future optimizations.
[…] closures and global variables (possibly encapsulated in a submodule) are reasonable.
I think we can count the encapsulation in a (sub)module as another option.
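For example, a minimal sketch of that option (CounterState and counter are illustrative names):

module CounterState
    const count = Ref(0)
end

counter() = (CounterState.count[] += 1)

The submodule keeps count out of the enclosing namespace while leaving it inspectable as CounterState.count.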
Does anyone already know the performance difference between the solutions and which solution needs a RefValue for optimum performance? Or do we need to benchmark them?
The solutions from this thread:
Closure capturing a local variable
Local-declared variable inside a top-level begin block
Global-declared method definition inside a local let block
A global variable and method in a separate (sub)module
Interpolate and assign a specific RefValue instance into a globally scoped method definition
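For concreteness, minimal sketches of two of these patterns, the let block and the interpolation (let_counter and eval_counter are illustrative names):

let count = Ref(0)
    global let_counter() = (count[] += 1)  # global method closing over a let-local Ref
end

@eval eval_counter() = ($(Ref(0))[] += 1)  # a specific RefValue instance spliced into the method body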
Any approach that captures a variable from an outer scope needs a Ref for performance. The principle is that a captured variable should not be reassigned, that is, you should never do count += 1 if count is captured from the enclosing scope; that will force boxing, which kills performance. The Ref solves the problem because count[] += 1 is mutation, not reassignment.
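A minimal sketch of the difference (boxed_counter and ref_counter are illustrative names):

function boxed_counter()
    count = 0
    return () -> (count += 1)    # reassigns the captured variable: count gets boxed
end

function ref_counter()
    count = Ref(0)
    return () -> (count[] += 1)  # mutates the Ref’s contents: no boxing
end

You can check this with @code_warntype on the returned closures: the first carries a Core.Box, the second is type-stable.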
Thus, the following need Ref for performance:
Closure capturing a local variable
Local-declared variable inside a top-level begin block
Global-declared method definition inside a local let block
A global variable and method in a separate (sub)module
The following needs Ref to work at all:
Interpolate and assign a specific RefValue instance into a globally scoped method definition
The following does not need Ref (or you could say it replaces the RefValue with a dedicated mutable struct; after all, RefValue is nothing but a mutable struct with getindex/setindex! methods):
A mutable struct holding the state, such as the callable-struct approach discussed above
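A minimal sketch of that variant (CallCounter is an illustrative name):

mutable struct CallCounter
    count::Int
end

(c::CallCounter)() = (c.count += 1)  # field mutation, nothing captured and nothing boxed

const counter = CallCounter(0)
counter()  # 1
counter()  # 2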
I thought from 15276 that some of the simpler cases are nowadays optimized by the compiler to work without a performance penalty even without a RefValue. But that does not seem to be the case.
Some cases, like “Local-declared variable inside a top-level begin block”, seemed easy to optimize, at least if there is nothing else in the begin block besides the local-declared variable and the method, but it is probably more difficult than it seems.