Generally speaking, yes. Addition and multiplication are heavily optimized, since they are so fundamental, and the hardware itself (the CPU) usually implements these instructions very efficiently. Raising a number to an arbitrary power, on the other hand, involves more computational steps. I don't know how this is implemented in actual hardware, but even if you did the calculation "by hand", you would probably have to come up with some generic algorithm that expresses `^` as a combination of sums and multiplications, except in some special cases, e.g. squaring a number.
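For integer exponents, one such generic algorithm is exponentiation by squaring, which gets by with O(log n) multiplications. A minimal sketch, just to illustrate the idea (`pow_by_squaring` is a made-up name here, not what `^` actually lowers to, especially for floating-point exponents):

```julia
# Exponentiation by squaring: compute x^n for integer n >= 0
# using only multiplications, O(log n) of them.
function pow_by_squaring(x, n::Integer)
    n >= 0 || throw(DomainError(n, "only non-negative exponents in this sketch"))
    result = one(x)
    while n > 0
        isodd(n) && (result *= x)  # include this binary digit's power of x
        x *= x                     # x, x^2, x^4, x^8, ...
        n >>= 1                    # move to the next binary digit of n
    end
    return result
end

pow_by_squaring(3.0, 5)  # 243.0
```

(Julia's own `Base.power_by_squaring` does something similar for integer exponents.)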
But that would just be an issue of runtime, and in principle the same holds true for `log`. So, as you said, it might be that the two `log` operations plus one `^` take longer than one `log` and two `^` (here that doesn't seem to be the case, see the benchmark below). The other issues are accuracy and stability.
As @Oscar_Smith already mentioned, if you plug in the "right" values for the base and exponent, you can quickly run into situations where the result of `x ^ a` gets very large or very small. Of course, you take the log in the end, so the final result should be manageable again, but the intermediate value might not be representable as a floating-point number, or at least not accurately enough.
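To make this concrete, here is a small example, with values picked purely to push the intermediate `x ^ a` out of the `Float64` range:

```julia
x, a = 1e300, 2.0

log(x ^ a)   # x^a overflows to Inf, so the whole expression returns Inf
a * log(x)   # ≈ 1381.55, computed without any huge intermediate value
```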
Another point regarding accuracy: generally speaking, the more operations (adding, multiplying, …) you perform on a number, the more the inherent tiny round-off errors compound. A single multiplication has a certain maximum error, but if `^` consists of multiple operations, it might in principle have a larger error (I don't know whether this is true for `^` specifically).
Even if you just sum a long list of numbers, the error can grow larger the longer the list gets. For that case there are specialized methods to account for such errors, e.g. the [Kahan summation algorithm](https://en.wikipedia.org/wiki/Kahan_summation_algorithm).
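For reference, a minimal sketch of the idea behind Kahan (compensated) summation:

```julia
# Kahan summation: a running compensation term `c` captures the
# low-order bits that a plain `s + x` would lose at each step.
function kahan_sum(xs)
    s = zero(eltype(xs))  # running sum
    c = zero(eltype(xs))  # running compensation
    for x in xs
        y = x - c         # corrected next term
        t = s + y         # big + small: low-order bits of y get lost here...
        c = (t - s) - y   # ...and are recovered algebraically into c
        s = t
    end
    return s
end
```

(Julia's built-in `sum` already uses pairwise summation for arrays, which also keeps the error growth small.)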
When solving an ODE, you essentially also perform a lot of operations, and compounding errors could well lead to instabilities: the state at very late times could look quite different compared to early times, even if there are only small errors along the way. What exactly happens in your case would also depend on other factors and on the details of the ODE.
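As a toy illustration of how small per-step errors can accumulate over a long integration, consider forward Euler applied to u' = u (nothing to do with your specific ODE, just the principle):

```julia
# Forward Euler on u' = u, u(0) = 1: each step makes only a tiny error,
# but over many steps the solution drifts away from the exact exp(t).
function euler_exp(t_end, n)
    h = t_end / n
    u = 1.0
    for _ in 1:n
        u += h * u
    end
    return u
end

euler_exp(10.0, 1000) - exp(10.0)  # clearly nonzero accumulated error
```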
So yeah, I think it's safe to assume that rewriting the log as a product and sum only has benefits, as long as `x` and `y` are positive so that the rewritten form is well-defined (e.g. `log((-2.0)^2)` works, while `2 * log(-2.0)` throws a `DomainError`). At least I can't think of any other downsides right now…
PS: Here is a quick benchmark comparing the two approaches:
```julia
julia> using BenchmarkTools

julia> f(x, y, a, b) = log( x ^ a * y ^ b )
f (generic function with 1 method)

julia> g(x, y, a, b) = a * log(x) + b * log(y)
g (generic function with 1 method)

julia> r = rand(4)
4-element Vector{Float64}:
 0.30426897441393985
 0.24398687655350038
 0.6439939608519193
 0.5045082195046893

julia> @btime f($r[1], $r[2], $r[3], $r[4])
  56.441 ns (0 allocations: 0 bytes)
-1.477931723609483

julia> @btime g($r[1], $r[2], $r[3], $r[4])
  13.024 ns (0 allocations: 0 bytes)
-1.4779317236094829
```
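So in this run, `g` is roughly four times faster than `f`, and the two results agree to within about one ulp.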