Let’s say I want to calculate the velocity of a free-falling body in vacuum. I want to do this either as a function of the gravitational constant and time, or as a function of distance and time with a fixed gravitational constant.
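For concreteness, something like this (a sketch, assuming free fall from rest, so that v = g*t, and over a distance d covered in time t, v = 2*d/t):
# Two ways to compute the velocity of a body falling from rest.
# Both methods have the signature (Float64, Float64), so the second
# definition silently replaces the first:
velocity(g::Float64, t::Float64) = g * t      # from gravitational constant and time
velocity(d::Float64, t::Float64) = 2 * d / t  # from distance and time, fixed g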
If I do this with plain Float64 arguments, both versions have the same signature, so I’m obviously left with only one method. The first thought I had was a type alias, something like
const Distance = Float64
const GravConst = Float64
But as I’ve learned, you cannot dispatch on type aliases: an alias is just another name for the same type, so both methods still end up with identical signatures (there’s a thread on this in this forum). So the next obvious thing would be a custom type like
struct Distance d::Float64 end
struct GravConst g::Float64 end
I can dispatch on that, but of course it doesn’t work out of the box, because multiplication and division are not defined for these types. In reality, all operations should work on the new types, so I would have to define multiplication, addition, etc. so that the types behave like normal numbers, but that would be highly annoying.
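Just to illustrate, the boilerplate would start something like this (only a tiny fraction of what would be needed):
# Hand-written overloads so the wrapper types work in formulas;
# this is only the very beginning of what would be required:
Base.:*(g::GravConst, t::Float64) = g.g * t
Base.:*(t::Float64, g::GravConst) = g.g * t
Base.:/(d::Distance, t::Float64) = d.d / t
# ... and so on for +, -, ^, sqrt, promotion rules, conversions, ...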
Is there a good way to define custom types that behave like normal numbers but can be used to dispatch?
As a side note, sqrt(2*d/9.81) is the time a free-falling body, starting from rest, takes to travel a distance d.
I’m afraid you have to explicitly access the field of your types where you store the value
velocity(dist::Distance, t) = dist.d / t
if you want to avoid overloading the needed methods. This can be automated to some extent, and really only needs to be done for arithmetic operations and powers; anything else takes dimensionless arguments. Then, however, you find yourself in need of defining and handling a whole zoo of types: after all, the division of a Distance by a Time should yield a Velocity, and so on. And you haven’t even considered units yet.
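As a rough sketch of that automation (assuming the Distance wrapper from above, and deliberately letting every result fall back to a plain number, which is exactly the type-zoo problem):
# Automate the overloads with a loop over operators; the results
# fall back to plain numbers here, sidestepping the question of
# what type Distance / Time should actually produce:
for op in (:+, :-, :*, :/, :^)
    @eval Base.$op(x::Distance, y::Number)   = $op(x.d, y)
    @eval Base.$op(x::Number,   y::Distance) = $op(x, y.d)
    @eval Base.$op(x::Distance, y::Distance) = $op(x.d, y.d)
end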
Yeah, I missed the g in the second formula, thanks for pointing that out; I corrected it. And you are right about the units, of course. Velocity was only an example; in reality I need electric field, chemical potential, particle density, etc.
I had a look at Unitful.jl in the past and thought it only dealt with units. But since you suggested it and showed an example using dimensions, I looked again. Derived dimensions look good, but there’s still one catch: you can’t have two distinguishable types with the same dimension, so you can’t do
@derived_dimension Area Unitful.𝐋^2
@derived_dimension AnotherArea Unitful.𝐋^2
Well, you can do that, but you can’t distinguish the two in dispatch. I only partially understand how Unitful.jl works, but it seems logical that this can’t work, since the created type is essentially defined by its dimension. My idea was to write methods for the calculations and make it clear through type annotations what kind of physical property they take as arguments. This should avoid errors, in particular when combined with unit checking.
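A minimal sketch of what I mean (f is just a hypothetical test function):
using Unitful

@derived_dimension Area Unitful.𝐋^2
@derived_dimension AnotherArea Unitful.𝐋^2

# Both aliases expand to the same quantity type, so the second
# method definition silently replaces the first:
f(x::Area) = "Area"
f(x::AnotherArea) = "AnotherArea"

f(1.0u"m^2")  # "AnotherArea" -- only one method survives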
I found another thread on this forum about a very similar question. AbstractNumbers.jl is suggested there and seems to be exactly what I’m looking for. I guess combined with Unitful.jl it will do everything I want.
I just played around a bit; the example on the GitHub landing page is not working (there’s also a pull request to correct it). Looking in the test folder shows it should be something like
using AbstractNumbers

struct MyReal{T<:Real} <: AbstractNumbers.AbstractReal{T}
    real::T
end
AbstractNumbers.number(x::MyReal) = x.real
AbstractNumbers.basetype(::Type{<: MyReal}) = MyReal

x = MyReal{Float64}(5)
sin(3*x^2)
However, there seems to be quite a significant performance hit:
using BenchmarkTools

foo(x) = sin(3*x^2) + log(x*3.2) - exp(sqrt(x) - 0.2)

a = MyReal{Float64}.(rand(100))
b = rand(100)
aa = MyReal{Float64}(3)
bb = 3.0

@btime foo.($a)
@btime foo.($b)
@btime foo($aa)
@btime foo($bb)
Which gives me (I hope I did the benchmarking right):
Replace x^2 in the definition of foo by x*x and the difference in performance disappears… Why? Because for ordinary floats, the compiler takes the opportunity to inline the call to pow as a straightforward multiplication, while it isn’t “smart enough” to figure it out for MyReal. What you are seeing is the overhead of that call. Look at @code_llvm foo(aa) and @code_llvm foo(bb) respectively.
(And no, that was absolutely not clear to me right away.)
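A quick way to check (foo2 is just a hypothetical name for foo with the power spelled out; aa and bb as in the benchmark above):
using BenchmarkTools

# Same function with x^2 replaced by x*x; the performance gap
# between MyReal and Float64 should now disappear:
foo2(x) = sin(3*x*x) + log(x*3.2) - exp(sqrt(x) - 0.2)

@btime foo2($aa)
@btime foo2($bb)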
Interesting. I can confirm the difference goes away. Thanks for the effort to figure this out!
So depending on the package used with an AbstractNumber, there could be a performance loss because the compiler is “not smart enough”. I guess there could be quite a few cases where compiler optimisation doesn’t work for an AbstractNumber.
Nevertheless, for what I intend (solving an ODE), calculating the derivative function should be the time-consuming step, and I can test whether performance there is on par with a plain Float64.