# Calculus with in-place functions

I want to understand how in-place functions are treated when used as in this example:

```julia
function g!(u)
    u[:] .= 2 * u[:]
end
```

```
julia> u = [1];

julia> g!(u) .* (u .+ g!(u))
1-element Array{Int64,1}:
 32
```


To my understanding, this is a completely awkward result. It seems that the functions g! are evaluated first, independent of associative/commutative properties. Am I the only one having problems with that, or is this common knowledge/a general rule?
(I’m just asking because it took me weeks to recognize that such behavior might show up.)

EDIT: For all newcomers digging this out: writing this as

```julia
function g!(u)
    u .= 2 .* u
end
```

is allocation-free and therefore much more performant, as DNF pointed out:


Using prefix notation (and ignoring broadcasting for now) you have

```julia
*(g(u), +(u, g(u)))
```

so I wonder how you imagine that anything but the `g`s could be evaluated first: they are needed as inputs for both `+` and `*`.
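A quick way to see this is to wrap the operands in helpers that record when they are called (the names `f` and `h` are made up here, just for logging):

```julia
# Hypothetical helpers that log the order in which they are evaluated.
order = String[]
f(x) = (push!(order, "f"); x)
h(x) = (push!(order, "h"); x)

f(1) * (2 + h(3))
order   # arguments are evaluated left to right: ["f", "h"]
```

Both calls happen before `+` and `*` run, because their results are the inputs to those operations.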

I don’t know what you mean here. If you are thinking about precedence, that plays no role here as your expression is fully parenthesized.

Generally, it is best to be more careful about mutating functions, or at least signal it with a !. If you must use in-place operations, don’t mix them with other complex expressions. Julia is just doing what you asked for here.


Argument-mutating functions do what they say. Hence it is not possible to use them in these kinds of expressions. Period, I’m afraid. BTW: Your function should also indicate its argument-mutating property by its name, g!.


Actually it’s easy to imagine that everything is evaluated starting from the deepest nested brackets. Namely, first g(u), then +(u, g(u)), then another g(u), and then the final multiplication. I’m not saying this is better (or worse) than evaluating g(u) twice and only after that computing the + and * operations, just pointing at another option.

Yep…
This is how I would have expected it:

```julia
function g!(u)
    u[:] .= 2 * u[:]
end
```

```
julia> u = [1];

julia> a = u .+ g!(u);

julia> b = g!(u) .* a;

julia> b
1-element Array{Int64,1}:
 16
```
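If the goal is for the in-place call not to interfere with the rest of the expression, one option (a sketch, not the only way) is to let `g!` act on a copy, so the original array is never mutated mid-expression:

```julia
function g!(u)
    u .= 2 .* u
end

u = [1]
v = g!(copy(u))   # g! mutates the copy; u itself stays [1]
v .* (u .+ v)     # 2 * (1 + 2) == [6]
```

This costs one allocation for the copy, but the result no longer depends on evaluation order.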



Just to clarify: in principle I don’t care how this is handled, as long as I know. However, for newcomers this is far from being clear, IMHO.

True, usually I do; I just forgot when I wrote the example. I will change the example accordingly, thanks.

I thought it was pretty well-defined, with the evaluation order being left to right, but I could not find this in the manual just now.

Yes, this is why I ask, because as I see it, this statement is not truly consistent with:

```
julia> u = [1]; u .+ u .+ g!(u)
1-element Array{Int64,1}:
 6

julia> u = [1]; u .+ u .+ g!(u) .+ g!(u)
1-element Array{Int64,1}:
 16
```


A simple left-to-right evaluation in the case of `u = [1]; u .+ u .+ g!(u)` should lead to `[4]`. But Julia in any case computes every `g!` first.
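Part of what makes the output look like "everything runs first" is aliasing: binding an array to another name does not take a snapshot, so every reference captured earlier still sees the mutation by the time the fused broadcast reads it. A minimal demonstration:

```julia
function g!(u)
    u .= 2 .* u
end

u = [1]
a = u      # a is another name for the same array, not a copy
g!(u)
a === u    # true: a aliases u, so a == [2] after the call
```

So even though the arguments really are evaluated left to right, all of them end up pointing at the same, already-mutated array.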


I think this is because Julia uses a single `+` call with multiple arguments here, e.g.

```
julia> ex = :(a + b + c)
:(a + b + c)

julia> ex.head
:call

julia> ex.args
4-element Array{Any,1}:
 :+
 :a
 :b
 :c
```


As I said above, I think it is bad style to depend on these things. IMO, having to think about precedence and evaluation order in mutating code is a code smell.


This is an aside to the main point of this discussion, but this implementation ruins the point of in-place mutation, which is normally avoiding allocations. The slicing causes allocations to happen. Observe:

```julia
using BenchmarkTools

foo!(x) = (x[:] .= 2*x[:])
bar!(x) = (x .= 2*x)
baz!(x) = (x .= 2 .* x)
```

```
julia> @btime foo!(x) setup=(x=rand(1000));  # your implementation
  1.298 μs (3 allocations: 15.92 KiB)

julia> @btime bar!(x) setup=(x=rand(1000));
  709.508 ns (1 allocation: 7.94 KiB)

julia> @btime baz!(x) setup=(x=rand(1000));
  97.624 ns (0 allocations: 0 bytes)
```
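For this particular update, scaling an array in place, the standard library’s `LinearAlgebra.rmul!` is another allocation-free option, shown here as a small sketch:

```julia
using LinearAlgebra

x = collect(1.0:4.0)
rmul!(x, 2)   # scales x in place: [2.0, 4.0, 6.0, 8.0]
```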


I completely agree, but let me explain why I brought this up.
When I want to solve a DAE with DifferentialEquations.jl, I can do this either in the form of an ODEProblem using a mass_matrix = M, which solves M*du = f!(du, u).
Or, as I thought, I can translate it to a DAEProblem via 0 = f!(du, u) .- M*du. And this failed, since I was not aware of this kind of interference. So I necessarily need to allocate at least one more time to rewrite my problem.
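One way to compute such a residual without a fresh allocation per call is a preallocated buffer together with 5-argument `mul!`. This is only a generic sketch; the names `residual!` and `f!` are made up here and are not DifferentialEquations.jl API:

```julia
using LinearAlgebra

# Sketch: residual 0 = f(u) - M*du with one preallocated buffer.
# f! writes f(u) into its first argument; M is the mass matrix.
function residual!(resid, f!, M, du, u)
    f!(resid, u)                # resid .= f(u)
    mul!(resid, M, du, -1, 1)   # resid .= resid .- M*du, no allocation
    return resid
end

f!(r, u) = (r .= u .^ 2)        # made-up right-hand side for illustration
M = [1.0 0.0; 0.0 2.0]
u = [1.0, 2.0]; du = [0.5, 0.5]
resid = similar(u)
residual!(resid, f!, M, du, u)  # [1.0 - 0.5, 4.0 - 1.0] == [0.5, 3.0]
```

The buffer `resid` is allocated once up front, so repeated residual evaluations stay allocation-free.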

But in summary, I have no questions left. Thanks a lot for your patience. Yeah, next time I will spend more time on providing a 100% acceptable, performant and still minimal example (but true, the one I gave was one of the worst possible).

It wasn’t intended as a criticism—the example was fine for illustrating the question—but as information to help you improve your code. I assumed that you weren’t aware of this performance trap.


I am not sure; it would be helpful to see the original problem.

Generally, if the whole calculation can be captured in an elementwise update function f, then

```julia
a .= f.(a, b, c)
```

should work.
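For example, with a made-up elementwise update `f`, the whole right-hand side fuses into a single allocation-free in-place pass:

```julia
# Hypothetical elementwise update; any scalar function works here.
f(a, b, c) = 2a + b - c

a = [1.0, 2.0]
b = [3.0, 4.0]
c = [0.5, 0.5]
a .= f.(a, b, c)   # fused, in-place update of a
a                  # [4.5, 7.5]
```

Because `f` only sees scalars, there is no partially-mutated array for later terms to observe.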


Indeed, good point…

I didn’t understand it as such either.