It’s interesting that a method can be defined and attached to a type as described in the documentation about functors.
I’m curious how it is ever useful though. Why not just define a regular function so it has a name that describes the intention of the logic? Using a functor is like an anonymous function but the implementation is “hidden”…
I use it to define algorithm structs that are callable. The type wraps the parameters and pre-allocated memory needed by the algorithm. It is then called with the algorithm's inputs as arguments and returns the outputs. It is just a nice way to pre-allocate and set parameters, and then "call" the algorithm.
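For concreteness, a minimal sketch of what such an algorithm struct can look like (the SmoothingFilter name and the moving-average logic are made up purely for illustration):

# A toy "algorithm struct": it stores the algorithm's parameter and a
# pre-allocated work buffer, and is callable on the input data.
struct SmoothingFilter
    window::Int              # algorithm parameter
    buffer::Vector{Float64}  # pre-allocated scratch space
end

# Calling the struct runs the algorithm on its input and returns the output.
function (f::SmoothingFilter)(x::AbstractVector{<:Real})
    n = length(x)
    @assert n <= length(f.buffer)
    for i in 1:n
        lo, hi = max(1, i - f.window), min(n, i + f.window)
        f.buffer[i] = sum(view(x, lo:hi)) / (hi - lo + 1)
    end
    return f.buffer[1:n]
end

smooth = SmoothingFilter(2, zeros(1000))  # set parameters and pre-allocate up front
y = smooth(rand(100))                     # then "call" the algorithm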
Doesn’t that constrain you to having one algorithm per struct? Why not define a struct, pre-allocate via a constructor, and write a function for each algorithm?
Perhaps this is “inside-out” thinking. The algorithm is the main thing, but you want to use some data structure only inside this algorithm. Since we cannot define structs inside functions, using functors would be the only way to achieve that. Right?
Well, I think you can achieve the same thing with a function: pre-allocate separately in a non-callable struct and then pass it to a function named after the algorithm. But why have two separate entities when they can be combined? It’s just a mental-model and API thing rather than an absolute necessity, for me at least.
An algorithm struct can also be composed of other sub-algorithms with different parameters and pre-allocated data structures, where the sub-algorithms are called during the call of the main algorithm. So it is just a nice way to organize the code for me.
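A sketch of that kind of composition, continuing the hypothetical SmoothingFilter above (the Pipeline name is made up):

# A main algorithm struct holding sub-algorithm structs, each with its own
# parameters and pre-allocated state.
struct Pipeline{A,B}
    prefilter::A
    postfilter::B
end

function (p::Pipeline)(x)
    y = p.prefilter(x)      # sub-algorithms are themselves callable
    return p.postfilter(y)
end

pipe = Pipeline(SmoothingFilter(2, zeros(1000)), SmoothingFilter(5, zeros(1000)))
z = pipe(rand(100))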
We use them to define objects called transfer functions. These represent functions in the mathematical sense as well, so it becomes very logical to call them as such. The same goes for a polynomial type, etc.
Flux does this to make its neural network layers behave like functions: same concept, very useful and intuitive.
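A rough sketch of the idea, as a simplified hand-rolled layer (not Flux's actual code):

# A "layer" type that behaves like a function, in the spirit of Flux's layers.
struct DenseLayer{M,V,F}
    W::M
    b::V
    σ::F
end

(l::DenseLayer)(x) = l.σ.(l.W * x .+ l.b)

layer = DenseLayer(randn(5, 10), zeros(5), tanh)
y = layer(randn(10))   # call the layer like a function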
With a function you can only do one thing: call it. With a functor you can define additional methods to expose other information and functionality, but still have the convenience of a function-call API.
For example, there are various packages (Polynomials, ApproxFun, etc.) that provide a data structure wrapping a polynomial p. It is natural to make these callable so that you can use the syntax p(x) to evaluate the polynomial, but that’s not all you want to do with polynomials so you need a struct and not an anonymous function.
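A minimal version of that pattern could look like this (a toy type for illustration, not the actual Polynomials.jl implementation):

# Toy polynomial type: coefficients in increasing order of degree,
# so Poly([1, 2, 3]) represents 1 + 2x + 3x^2.
struct Poly{T}
    coeffs::Vector{T}
end

# Call syntax: evaluate the polynomial at x via Horner's rule.
(p::Poly)(x) = foldr((c, acc) -> muladd(acc, x, c), p.coeffs; init = zero(x))

# ...but a struct also supports operations other than evaluation:
degree(p::Poly) = length(p.coeffs) - 1
derivative(p::Poly) = Poly([i * c for (i, c) in enumerate(p.coeffs[2:end])])

p = Poly([1, 2, 3])   # 1 + 2x + 3x^2
p(2.0)                # 17.0
degree(p)             # 2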
I use this all the time. A simple example is for passing a loss function to Optim.jl that needs attached data and preallocated arrays.
I find it a lot clearer and more self-contained to define a struct than to pass arguments in an anonymous function. I think Julia is doing the same thing with a closure anyway; you just can’t use the #XX struct for anything else. With a functor you could print details about the model with show(x), or whatever.
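A sketch of what I mean (the loss model and field names here are hypothetical; Optim just needs something callable):

using Optim

# A callable "loss" struct carrying the attached data plus a pre-allocated buffer.
struct LeastSquaresLoss{TX,TY}
    x::TX                   # attached data
    y::TY
    resid::Vector{Float64}  # pre-allocated residual buffer
end

function (loss::LeastSquaresLoss)(θ)
    loss.resid .= loss.y .- θ[1] .- θ[2] .* loss.x   # model: y ≈ θ[1] + θ[2]*x
    return sum(abs2, loss.resid)
end

x, y = rand(100), rand(100)
loss = LeastSquaresLoss(x, y, similar(y))
result = optimize(loss, [0.0, 0.0])   # Optim only sees a callable object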
For this purpose, I would tend to just use a closure. The distinction here is that Optim only needs to call your function and can’t do anything else with the attached data, so there is no benefit to defining a callable struct … unless you are going to do other things with the object besides passing it to Optim.
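For comparison, the closure version of the same hypothetical least-squares loss:

using Optim

# The data and buffer are captured by the closure rather than stored in an
# explicit struct.
function make_loss(x, y)
    resid = similar(y)                     # pre-allocated once, then captured
    return θ -> begin
        resid .= y .- θ[1] .- θ[2] .* x
        sum(abs2, resid)
    end
end

x, y = rand(100), rand(100)
result = optimize(make_loss(x, y), [0.0, 0.0])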
I see. It does seem quite natural to evaluate polynomials as p(x) rather than evaluate(p, x). Looks cleaner and less verbose.
An additional thought: the lack of an explicit function name seems to coincide with a common implicit name, evaluate, which works for polynomials, algorithms, Flux layers, etc. So I would describe the action of “calling the struct” as “evaluating the struct for the given arguments.”
I’m providing a complex parametrisation method for users of my package. Defining the struct with default constructors is much clearer to users than telling them to use a closure with 8 arguments.
Then I can define show() methods, and use the same struct to build a user interface.
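A sketch of that kind of API (the Parametrisation name and its fields are invented for illustration):

# A keyword constructor with defaults is easier to document than a closure
# taking 8 positional arguments.
Base.@kwdef struct Parametrisation
    order::Int      = 3
    scale::Float64  = 1.0
    offset::Float64 = 0.0
    periodic::Bool  = false
end

# Call it like a function...
(p::Parametrisation)(t) = p.scale * t^p.order + p.offset

# ...and give it a readable printed form for show() and the user interface.
function Base.show(io::IO, p::Parametrisation)
    print(io, "Parametrisation(order = ", p.order, ", scale = ", p.scale,
          ", periodic = ", p.periodic, ")")
end

p = Parametrisation(order = 2)
p(0.5)   # evaluate at t = 0.5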
Functors can be understood in terms of multimethods and a generic call() function.
We can say that the f(x) notation is syntactic sugar for a special expression call(f, x), that the built-in call() is defined only for argument lists whose first element is bound to a lambda expression, and that defining a functor is just adding a method to call() whose first argument is of your struct type.
Much like adding a method to getindex() for a new type defines what x[3] means when x is of that type.
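In current Julia syntax, both read as ordinary method definitions (MyTable is a made-up type for illustration):

struct MyTable
    data::Vector{Float64}
end

# "Functor": a method for the call syntax, defining what t(x) means.
(t::MyTable)(x) = searchsortedfirst(t.data, x)

# Same idea for indexing: a getindex method defining what t[3] means.
Base.getindex(t::MyTable, i::Integer) = t.data[i]

t = MyTable([0.1, 0.5, 0.9])
t(0.4)   # call syntax
t[3]     # index syntax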
It’s not so unusual to be able to modify a mutable value captured by a closure. Even C++ lets you do that.
The behavior I still find truly surprising is that Julia allows rebinding new values to variable names that “belong” to an outer scope:
julia> function foo()
           i = 10
           some_closure = () -> (i = "Surprise")
           some_closure()   # call the closure
           return i
       end
foo (generic function with 1 method)

julia> foo()
"Surprise"
Function-like structs are just explicit closures. They have the advantage that you have full control over the layout, types, etc., and can also add more methods and functionality, at the cost of some verbosity.
I mean this in a very literal sense: a very early compilation step (lowering) transforms inner function definitions into definitions of callable structs, long before the optimizer does its work. Closures are mere syntactic sugar over callable structs, and I recommend using explicit callable structs instead of implicit closures whenever the captured state is nontrivial.
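Roughly, what lowering does can be written out by hand (the names here are illustrative; the compiler uses generated names like var"#3#4"):

# Implicit closure:
make_adder(a) = x -> x + a

# Roughly what lowering produces: an explicit callable struct whose field
# stores the captured variable.
struct Adder{T}
    a::T
end
(f::Adder)(x) = x + f.a

add2 = make_adder(2)   # closure
add3 = Adder(3)        # explicit "closure"
add2(10), add3(10)     # (12, 13)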