Structs and parametric constructors

Hello everybody,

I have a bunch of questions concerning parametric constructors for structs.

Some background: I was writing a toy code to compute a Newton fractal. For that, it makes sense to bundle the function whose roots are computed, its derivative, and the locations of the roots (and some more parameters) into a struct.

After reading this Discourse thread and then by trial and error, I came up with the following code (reduced for this thread):

struct fractal{F<:Function,G<:Function}
  xx
  yy
  myfunction::F
  derivative::G
  number_of_roots::Int
  roots
end

function make_fractal(xx,yy)

  function myfunction(z::Complex{Float64}) 
    return z^3-1.0
  end
  function derivative(z::Complex{Float64})
    return 3.0*z^2
  end
  roots = (complex(1.0,0.0),
             complex(-0.5,0.5*sqrt(3)),
             complex(-0.5,-0.5*sqrt(3)) )
  return fractal(xx,yy,myfunction,derivative,3,roots)

end

xx=range(-3.5, stop=2.0, length=2000)
yy=range(-2.5, stop=2.5, length=2000)

Fractal=make_fractal(xx,yy)

function iterationstep(Fractal::fractal,z)
  return z-Fractal.myfunction(z)/Fractal.derivative(z)
end

zz=complex(1.0,2.0)
zz=iterationstep(Fractal,zz)

This works nicely in the sense that @code_warntype is completely happy and myfunction() and derivative() get inlined:

julia> @code_warntype iterationstep(Fractal,zz)
Body::Complex{Float64}
31 1 ─ %1  = (Base.getfield)(z, :re)::Float64                                                   │╻╷╷╷╷╷ myfunction
   │   %2  = (Base.getfield)(z, :re)::Float64                                                   ││╻      literal_pow
   │   %3  = (Base.mul_float)(%1, %2)::Float64                                                  │││╻      *
   │   %4  = (Base.getfield)(z, :im)::Float64                                                   ││││╻      *
   │   %5  = (Base.getfield)(z, :im)::Float64                                                   │││││╻      imag
...
...

(Remark: having myfunction::Function and derivative::Function, i.e. unparametrized fields, in the struct above and calling iterationstep(Fractal,zz) as above disables type inference, as already noted in the mentioned Discourse thread.)

Unfortunately I have only a vague understanding of what I did and why this works. Reading the manual section gives me the impression that I should have specified parametric types for all fields in the struct? But I have untyped fields in there. How does Julia decide which fields in the constructor call

fractal(xx,yy,myfunction,derivative,3,roots)

are F and G? By position? Then what about multiple Float64, Int, or Vector{Float64} fields? Or is this about abstract versus concrete types?

Also, changing the struct to

struct fractal{F<:Function}
  xx
  yy
  myfunction::F
  derivative::F
  number_of_roots::Int
  roots
end

which would appear to be reasonable, as myfunction() and derivative() have the same argument and return types, throws an error:

julia> include("mwe.jl")
ERROR: LoadError: MethodError: no method matching fractal(::StepRangeLen{Float64,Base.TwicePrecision{Float64},Base.TwicePrecision{Float64}}, ::StepRangeLen{Float64,Base.TwicePrecision{Float64},Base.TwicePrecision{Float64}}, ::getfield(Main, Symbol("#myfunction#3")), ::getfield(Main, Symbol("#derivative#4")), ::Int64, ::Tuple{Complex{Float64},Complex{Float64},Complex{Float64}})
Closest candidates are:
  fractal(::Any, ::Any, ::F<:Function, ::F<:Function, ::Int64, ::Any) where F<:Function at /mnt/ramfs/mwe.jl:2
Stacktrace:
 [1] make_fractal(::StepRangeLen{Float64,Base.TwicePrecision{Float64},Base.TwicePrecision{Float64}}, ::StepRangeLen{Float64,Base.TwicePrecision{Float64},Base.TwicePrecision{Float64}}) at /mnt/ramfs/mwe.jl:21
 [2] top-level scope at none:0

which I do not understand.

On top of that, would it be advisable to actually type the untyped fields? How would I do that for fractal.xx and fractal.roots (one could switch fractal.roots from a tuple to a vector)? Please note that a different myfunction() might of course have a different number of roots.

I am aware that in the already mentioned thread the suggestion was to use something like

struct fractal
  xx
  yy
  myfunction::Function
  derivative::Function
  number_of_roots::Int
  roots
end
...
...
function iterationstep(f,d,z)
  return z-f(z)/d(z)
end
zz=iterationstep(Fractal.myfunction,Fractal.derivative,zz)

which of course works fine, infers types, and inlines correctly.

Still, I would appreciate any help to understand the parametric constructors better.

Thank you very much for reading this far!

Best Regards


By the way, also have a look at my Newton fractal package.

You are welcome to make a pull request on it too. This is what my type looks like:

    Fatou.Define(E::Any;                  # primary map, (z, c) -> F
      Q::Expr     = :(abs2(z)),           # escape criterion, (z, c) -> Q
      C::Expr     = :((angle(z)/(2π))*n^p),# coloring, (z, n=iter., p=exp.) -> C
      ∂    = π/2, # Array{Float64,1}      # Bounds, [x(a),x(b),y(a),y(b)]
      n::Integer  = 176,                  # horizontal grid points
      N::Integer  = 35,                   # max. iterations
      ϵ::Number   = 4,                    # basin ϵ-Limit criterion
      iter::Bool  = false,                # toggle iteration mode
      p::Number   = 0,                    # iteration color exponent
      newt::Bool  = false,                # toggle Newton mode
      m::Number   = 0,                    # Newton multiplicity factor
      O::String   = F,                    # original Newton map
      mandel::Bool= false,                # toggle Mandelbrot mode
      seed::Number= 0.0+0.0im,            # Mandelbrot seed value
      x0          = nothing,              # orbit starting point
      orbit::Int  = 0,                    # orbit cobweb depth
      depth::Int  = 1,                    # depth of function composition
      cmap::String= "")                   # imshow color map
`Define` the metadata for a `Fatou.FilledSet`.

I created this package before I knew much about parametric types, but maybe I should revisit it.

This is based on the order in which your fields are defined in the struct.

Hello,

well, yes. But {F<:Function, G<:Function} are “positional parameters” one and two in the type definition, while the struct has six fields. This confuses me.
The manual only has examples where all fields have (the same) parametric type; there is no situation like this, and it is also not explained in a way I understand?

What if I have e.g. multiple Vector{Float64} fields, or one more function? Under which circumstances can I leave them untyped? Is there a risk of assigning the wrong type to a field?

Edit: Thinking about it, and somewhat echoing this thread, maybe I am looking for something more explicit like a Fortran interface definition, which leaves less (?) room for ambiguity.

Edit 2: So this would make the matching between the parametric type parameters and the struct fields positional and then first-match? Is that right? Does it take the actual types of the constructor arguments into account when matching (or is that something which cannot happen by language design)?

One thing I noticed is that in your approach it is assumed the user has already supplied a Function, and you are trying to specialize on this pre-supplied function. In my approach with Fatou, however, I don’t assume the user already has a Function object; instead I construct the necessary Function by computing the derivative and then generating a typed special-purpose function to pass along to the iteration scheme.

"""Compute(::Fatou.Define)::Union{Matrix{UInt8},Matrix{Float64}}
`Compute` the `Array` for `Fatou.FilledSet` as specefied by `Fatou.Define`."""
function Compute(K::Define)::Tuple{Matrix{UInt8},Matrix{Complex{Float64}}}
    # define function for computing orbit of a z0 input
    function nf(z0::Complex{Float64})
        K.mandel ? (z = K.seed) : (z = z0)
        zn = 0x00
        while (K.newt ? (K.Q(z,z0)::Float64>K.ϵ)::Bool : (K.Q(z,z0)::Float64<K.ϵ))::Bool && K.N>zn
            z = K.F(z,z0)::Complex{Float64}
            zn+=0x01
        end; #end
        # return the normalized argument of z or iteration count
        return (zn::UInt8,z::Complex{Float64})
    end
    # generate coordinate grid
    Kyn = round(UInt16,(K.∂[4]-K.∂[3])/(K.∂[2]-K.∂[1])*K.n)
    x = range(K.∂[1]+0.0001,stop=K.∂[2],length=K.n)
    y = range(K.∂[4],stop=K.∂[3],length=Kyn)
    Z = x' .+ im*y # apply Newton-Orbit function element-wise to coordinate grid
    (matU,matF) = (Array{UInt8,2}(undef,Kyn,K.n),Array{Complex{Float64},2}(undef,Kyn,K.n))
    @time @threads for j = 1:length(y); for k = 1:length(x);
        (matU[j,k],matF[j,k]) = invokelatest(nf,Z[j,k])::Tuple{UInt8,Complex{Float64}}
    end; end
    return (matU,matF)
end

Yes. The whole exercise is about getting the compiler to specialize on the function types instead of using the Any type and to learn what the syntax (parametric type syntax) and language rules for that syntax are.

The first iteration of the toy code was without using any structs, passing everything as arguments (including myfunction() and derivative() to iterationstep()). This is not nice, as there is a risk of mismatching polynomial, derivative, and roots.

Without the specialization, @benchmark shows that iterationstep() is about two times slower.

I decided to do everything externally and then have different constructors which return the type with all fields set up for the specific polynomial in a matching and consistent way. I agree with you that for a code which is about the actual fractal families, it is better to do all of that internally on the fly as you do, possibly also interfacing to a CAS when necessary.


It would be good to combine these techniques. As you can see below, the Fatou.Define type also has Function objects stored inside it (the derivative and original function are not needed, since the Newton map is generated as a single expression automatically in the constructor step). There are 3 functions: the iteration map, the escape criterion, and the coloring function. I’d like to investigate the parametric specializing here.

mutable struct Define
    E::Any # input expression
    F::Function # primary map
    Q::Function # escape criterion
    C::Function # complex fixed point coloring
    ∂::Array{Float64,1} # bounds
    n::UInt16 # number of grid points
    N::UInt8 # number of iterations
    ϵ::Float64 # epsilon Limit criterion
    ...

The constructor of Fatou.Define builds the necessary functions from Expr input in this step

!newt ? (f = genfun(E,[:z,:c]); q = genfun(Q,[:z,:c])) :
        (f = genfun(newton_raphson(E,m),[:z,:c]); q = genfun(Expr(:call,:abs,E),[:z,:c]))
c = genfun(C,[:z,:n,:p])

and the expression transformed into a Newton scheme is obtained from this function

function newton_raphson(F,m)
    f = RExpr(F)
    return Algebra.:-(R"z",Algebra.:*(m,Algebra.:/(f,Algebra.df(f,:z)))) |> factor |> parse
end

The SyntaxTree.genfun method is used to make the expression into a function,

genfun(expr,args::Union{Vector,Tuple}) = eval(:(($(args...),)->$expr))
genfun(expr,args::Symbol...) = genfun(expr,args)
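
For instance (a hypothetical REPL check; the name sq is mine), genfun produces an ordinary anonymous function:

julia> sq = genfun(:(x^2), [:x]);   # builds eval(:((x,)->x^2))

julia> sq(3)
9

Because the function is created with eval, calling it from within an already-compiled function runs into world-age restrictions, which is presumably why the Compute code above calls nf through invokelatest.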

It would be an interesting experiment to try out the Function specialization technique with Fatou.Define

This is a performance benchmark I did on a smaller example

julia> struct g{F<:Function}
       h::F end

julia> struct G
       h::Function end

julia> a = g(x->x^2-2x^7+1)
g{getfield(Main, Symbol("##21#22"))}(getfield(Main, Symbol("##21#22"))())

julia> b = G(x->x^2-2x^7+1)
G(getfield(Main, Symbol("##23#24"))())

julia> gh(x::g{F},a) = x.h(a)
gh (generic function with 1 method)

julia> Gh(x::G,a) = x.h(a)
Gh (generic function with 1 method)

julia> @btime gh(a,17+im)
  32.813 ns (2 allocations: 64 bytes)
-761386735 - 332091598im

julia> @btime Gh(b,17+im)
  33.525 ns (2 allocations: 64 bytes)
-761386735 - 332091598im

I think you missed a where F here. But in any case, cf. the following (note the @noinline):

julia> @noinline gh(x::g{F},a) where F = x.h(a)
gh (generic function with 1 method)

julia> @noinline Gh(x::G,a) = x.h(a)
Gh (generic function with 1 method)

julia> @btime gh($a,17+im)
  15.977 ns (0 allocations: 0 bytes)
-761386735 - 332091598im

julia> @btime Gh($b,17+im)
  34.889 ns (2 allocations: 64 bytes)
-761386735 - 332091598im

Well, then let me give you the numbers.

With

struct fractal
  xx
  yy
  myfunction::Function
  derivative::Function
  number_of_roots::Int
  roots
end

I get

julia> @code_warntype iterationstep(Fractal,zz)
Body::Any
31 1 ─ %1 = (Base.getfield)(Fractal, :myfunction)::Function                                               │╻ getproperty
   │   %2 = (%1)(z)::Any                                                                                  │ 
   │   %3 = (Base.getfield)(Fractal, :derivative)::Function                                               │╻ getproperty
   │   %4 = (%3)(z)::Any                                                                                  │ 
   │   %5 = (%2 / %4)::Any                                                                                │ 
   │   %6 = (z - %5)::Any                                                                                 │ 
   └──      return %6              
julia> @benchmark iterationstep($Fractal,$zz)
BenchmarkTools.Trial: 
  memory estimate:  224 bytes
  allocs estimate:  7
  --------------
  minimum time:     122.678 ns (0.00% GC)
  median time:      131.516 ns (0.00% GC)
  mean time:        147.694 ns (10.15% GC)
  maximum time:     50.438 μs (99.71% GC)
  --------------
  samples:          10000
  evals/sample:     911

and using

struct fractal{F<:Function,G<:Function}
  xx
  yy
  myfunction::F
  derivative::G
  number_of_roots::Int
  roots
end

the numbers are

julia> @benchmark iterationstep($Fractal,$zz)
BenchmarkTools.Trial: 
  memory estimate:  0 bytes
  allocs estimate:  0
  --------------
  minimum time:     25.975 ns (0.00% GC)
  median time:      26.199 ns (0.00% GC)
  mean time:        26.472 ns (0.00% GC)
  maximum time:     49.658 ns (0.00% GC)
  --------------
  samples:          10000
  evals/sample:     995

I am not showing the nice but lengthy @code_warntype output, and I note that “factor two” was a bit optimistic.

However, the original question was about the matching rules for the type parameters. I now realise that my confusion originated from the failing

struct fractal{F<:Function}
  xx
  yy
  myfunction::F
  derivative::F
  number_of_roots::Int
  roots
end

which was actually my first try based on the examples in the manual. In the working case

struct fractal{F<:Function,G<:Function}
  xx
  yy
  myfunction::F
  derivative::G
  number_of_roots::Int
  roots
end

of course the myfunction::F vs. derivative::G makes the match unique. There is no positional matching. So the identifiers in the struct, i.e. F and G, have to be unique, which is enforced. Presumably one can make sure that the types they reference are the same with an inner constructor?
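
For instance (an untested sketch; the name fractal2 is hypothetical), sharing the where parameter across both arguments of an inner constructor would enforce exactly that:

struct fractal2{F<:Function}
  myfunction::F
  derivative::F
  # both arguments must share one concrete F, otherwise this method does not match
  fractal2(mf::F, df::F) where {F<:Function} = new{F}(mf, df)
end

fractal2(sin, sin) works, while fractal2(sin, cos) throws a MethodError, since typeof(sin) and typeof(cos) cannot both be one and the same F.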

Edit: typos …

Yes, indeed. I just pushed an update to Fatou where I used an inner constructor, like this:

struct fractal{F<:Function,G<:Function}
  xx
  yy
  myfunction::F
  derivative::G
  number_of_roots::Int
  roots
  ## inner constructor here
  fractal(x,y,mf::Function,df::Function,nr,r) = new{typeof(mf),typeof(df)}(x,y,mf,df,nr::Int,r)
end

And what is the real-world speed difference for you?

Let me summarize what I now believe the answer to the original rather confused question is:

The parametric type parameters in the struct definition, i.e. {F<:Function,G<:Function}, are matched to the actual outer constructor arguments on a first-match basis, using the actual argument types and consuming each type parameter afterwards. The match to the fields in the struct is based purely on the uniqueness of the identifiers, i.e. F and G in the example; no positional matching applies. With these rules there is no risk of mis-identifying xx or yy, which are ranges, in the constructor call.
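
A quick REPL check of that claim with a throwaway struct (hypothetical names):

julia> struct demo{F<:Function,G<:Function}
         f::F
         g::G
       end

julia> demo(sin, cos) |> typeof
demo{typeof(sin),typeof(cos)}

F and G are deduced from the arguments that land in the annotated fields f::F and g::G, not from their positions among the parameters.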

Edit: Deleted a stupid mistake stemming from translating between actual code and MWE.

I also found out that changing roots to an array and typing roots as <:AbstractArray was beneficial, as the compiler suddenly was able to infer the type in the following function, which tries to figure out which root the iteration converged to:

  ergebnis=0
  for ii=1:Fractal.number_of_roots
    diff=Fractal.roots[ii]-z;
    if ( abs(diff) < Fractal.accuracy*1e2 )
      ergebnis=ii
      break
    end
  end
  return ergebnis
end

Without that I had some Anys in @code_warntype there.

When I only add the Function parameters to the Define type, I get basically 0% to 10% improvement.

My original timings look like this

julia> newton(:(z^3-1);n=500)|>fatou;
  0.240635 seconds (1.69 M allocations: 58.407 MiB)

With the Define{FT<:Function,QT<:Function,CT<:Function} definition, I don’t get much improvement

julia> newton(:(z^3-1);n=500)|>fatou;
  0.221504 seconds (1.53 M allocations: 50.724 MiB)

However, because in my situation there is additional program logic, I added additional Bool type parameters to the Fatou.Define{FT,QT,CT,M,N} object, which gave me a 10x performance improvement. This time the parameters additionally include a Bool pair to decide between Mandelbrot and Newton fractal mode.

julia> newton(:(z^3-1);n=500)|>fatou;
  0.020936 seconds (542 allocations: 36.784 KiB)

This is because of my iteration function definition, which depends on these true-or-false values; by including them in the Fatou.Define type parameters, the function can specialize on those values:

# define function for computing orbit of a z0 input
function nf(K::Define{FT,QT,CT,M,N},z0::Complex{Float64}) where {FT,QT,CT,M,N}
    M ? (z = K.seed) : (z = z0)
    zn = 0x00
    while (N ? (K.Q(z,z0)::Float64>K.ϵ)::Bool : (K.Q(z,z0)::Float64<K.ϵ))::Bool && K.N>zn
        z = K.F(z,z0)::Complex{Float64}
        zn+=0x01
    end; #end
    # return the normalized argument of z or iteration count
    return (zn::UInt8,z::Complex{Float64})
end

So now it computes much, much faster. However, in my case having a Function parameter alone did not do the trick; it was also necessary to include the other parameters.
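
To illustrate the Bool-parameter effect in isolation, here is a minimal hypothetical sketch (not the actual Fatou code): the branch on the type parameter is resolved at compile time, so each mode gets its own specialized method body.

struct Mode{Newton}   # Newton is a Bool value stored in the type
    ϵ::Float64
end

# for Mode{true} the compiler constant-folds the ternary away entirely
step(m::Mode{N}, z) where N = N ? z - (z^3 - 1)/(3z^2) : z^2 + m.ϵ

step(Mode{true}(1e-9), 1.0 + 2.0im)   # specializes on Mode{true}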

Sprechen Sie Deutsch? (Do you speak German?)

OK, from this and the roots-related function above (which is still missing the function definition … sorry), the lesson for me is to take the performance tips to heart and to actually check with @code_warntype :)

Yes. This is basically the explanation for the mess in the previous post: I had to (or rather tried to) translate everything from one language to the other and juggled several versions at the same time … Sorry about that.

Edit: OK, here is the whole untranslated function (to avoid further mess-ups); it is probably better than editing the above post again, which I was contemplating.

function vergleich_zu_nullstellen(Fraktal,z)        # "comparison to roots"
  ergebnis=0                                        # ergebnis: result
  if (Fraktal.index < 5)
    for ii=1:Fraktal.anzahl_nullstellen             # anzahl_nullstellen: number of roots
      diff=Fraktal.nullstellen[ii]-z                # nullstellen: roots
      if ( abs(diff) < Fraktal.genauigkeit*1e2 )    # genauigkeit: accuracy
        ergebnis=ii
        break
      end
    end
  else
    ergebnis=Int(round(real(z)/(pi/2)))
  end
  return ergebnis
end

This includes an additional if; as you probably know, the Newton fractal can also be defined for e.g. sin(), whose roots are all real, and I should simplify that, I know.


Well, I am a German speaker also, but probably not everyone on this website is.

What you could do is write a single while loop, which has the if statement inside the condition, as I have done in my code shown above.

Amazingly, with this improvement the difference is enormous for larger n. With the old definition:

julia> newton(:(z^3-1);n=1500)|>fatou;
 19.215167 seconds (65.07 M allocations: 1.817 GiB, 7.66% gc time)

However, with the new definition:

julia> newton(:(z^3-1);n=1500)|>fatou;
  0.175065 seconds (541 allocations: 36.706 KiB)

julia> newton(:(z^3-1);n=2500)|>fatou;
  0.490495 seconds (542 allocations: 36.784 KiB)

julia> newton(:(z^3-1);n=3500)|>fatou;
  1.004239 seconds (541 allocations: 36.706 KiB)

Clearly, the memory consumption and allocations are greatly reduced. I believe this is because in my original version the @threads parallel processing had to access the Fatou.Define object's memory every time to determine the Bool parameters for Mandelbrot and Newton mode. Having these Bool and Function values available in the type parameters of Fatou.Define enables the compiler to leave out the full reference to the original Fatou.Define object, I suspect, and it is probably inlining everything now.

So I think the trick is to include enough type parameters to make accessing the original struct unnecessary, so that the bigger object does not need to be re-read during parallel computations.


I may be restating the conclusion you came to, but for further clarity, the order of the type parameters has no relation to the order of the fields in the struct.

Your struct here would be just as valid with the type parameters rearranged as:

struct fractal{G<:Function,F<:Function}
  xx
  yy
  myfunction::F
  derivative::G
  number_of_roots::Int
  roots
end

When calling the type constructor, that is when the argument order matters; each field is bound to the corresponding argument (e.g. the first arg in the constructor gets bound to xx, the second arg is bound to yy, etc.).
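
For example, with a throwaway struct using plain types (hypothetical names):

julia> struct demo2{B,A}
         x::A
         y::B
       end

julia> demo2(1, 2.0)
demo2{Float64,Int64}(1, 2.0)

The first argument is bound to x and determines A, the second is bound to y and determines B; the resulting type lists the parameters in their declaration order {B,A}.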

Additionally, in case it wasn’t clear, the reason the single-parameter fractal{F<:Function} version didn’t work is that F represents a single, unique type. Assuming your myfunction and derivative are different,

typeof(myfunction) != typeof(derivative)

and no method matching a type signature with different functions for myfunction and derivative will be found.
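
This is easy to check at the REPL, since every function (even a named one) has its own singleton type:

julia> typeof(sin) == typeof(cos)
false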


Hello,

that is nice to hear!

Thank you for spelling this out again. I believe I have somewhat of a problem with the syntax. For me it would currently be much clearer if the parametric types were given in the outer constructor definition, and the outer constructor interface had to be explicitly defined once (I realize that usually means more pointless typing), something like

fractal(xx<:Type,yy<:Type,myfunction<:Type,derivative<:Type,3<:Type,roots<:Type)

Probably I just need to work more with the actual syntax, i.e. the `{…}`, and get accustomed to it :)

But maybe you or someone else can answer a final question. I have now switched everything to using the ranges in Fractal.xx and Fractal.yy, which made things slow as they were untyped. It turns out that apparently (including here the unrelated typing of roots for completeness, see the posts above)

struct fractal{a<:Any,b<:Any,F<:Function,G<:Function,r<:AbstractArray}
  xx::a
  yy::b
  myfunction::F
  derivative::G
  number_of_roots::Int
  roots::r
end

is sufficient to restore performance, so I assume the compiler is then able to infer the types again. Or, to give a self-contained example:

using BenchmarkTools
struct gamma
       a
end
struct alpha{a<:Any}
       a::a
end
function delta(Gamma)
       zz=sqrt(Gamma.a[3])
end
Gamma=gamma(range(1.0,stop=2.0,length=10))
@btime delta($Gamma)
Gamma=alpha(range(1.0,stop=2.0,length=10))
@btime delta($Gamma)

which gives

julia> include("new.jl")
  42.343 ns (2 allocations: 32 bytes)
  16.811 ns (0 allocations: 0 bytes)

@code_warntype indicates that the first case defaults to Any for everything while in the second case the whole range function appears to be inlined.

What would be a better type for the ranges, if there is one? And why do I have to add a seemingly trivial <:Any by hand? I am sure there is a good reason, but I would like to know it. And why should I not type everything as <:Any then (OK, catching errors, but for performance)?


This is because your second example includes the type of a in the type of alpha. Technically, you don’t need to have <:Any in your definition; you can just leave it out, because Any is assumed:

struct alpha{a}
       a::a
end

It should have the same behavior. This definition stores the type of a as part of the type:

julia> alpha(range(1.0,stop=2.0,length=10)) |> typeof
alpha{StepRangeLen{Float64,Base.TwicePrecision{Float64},Base.TwicePrecision{Float64}}}

The difference between gamma and alpha is that alpha types will be able to specialize on their type parameter, so that code generated for that type can be inlined. With gamma, the information about the type of a is not present when the code is generated. The type parameter helps the compiler infer the field type.
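
One way to see the difference is fieldtype:

julia> fieldtype(gamma, :a)
Any

julia> fieldtype(alpha{Int}, :a)
Int64

For gamma the compiler only ever knows Any, whereas for a concrete alpha{...} the field type is fully known.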

I understand that perfectly :) I should reword my question:

Why do I have to do that manually in the first place?

Adding <:Any is something I would expect as the default behaviour, avoiding the performance pitfalls. This is completely astonishing to me, especially since doing this by default appears to be trivial, so there must be a very compelling reason not to do it, which I would like to know.

This is because Julia compiles specialized methods for specific types. Since your gamma definition has no information about the type of the field a, the compiler will not be able to infer its type. However, because you specifically included the type parameter for alpha, the compiler can infer more about it. You don’t have to use a parameter, because you could specify what the type should be like this too:

struct alpha
  a::StepRangeLen{Float64,Base.TwicePrecision{Float64},Base.TwicePrecision{Float64}}
end

But then it is not parametric; the field is fixed to a single concrete type.

There is no specific reason not to do it, but it also isn’t necessary, since it is automatically assumed.


I think you may have misunderstood something: performance suffers when you don’t parametrize the type at all (see the performance tips). Whether you then constrain the type parameter (or leave it unconstrained, either implicitly or with <: Any) is orthogonal to performance.
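
As a sketch of that point (hypothetical names), all three of the following give the compiler the same concrete field-type information once the parameter is known; the constraint only restricts which parameters are allowed:

struct p1{T}                  # unconstrained, implicitly T<:Any
    a::T
end

struct p2{T<:Any}             # explicit, identical to p1
    a::T
end

struct p3{T<:AbstractRange}   # constrained: only ranges allowed, same performance
    a::T
end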
