Introspecting into a function?

Hi, I’m not sure if I’m using the right terminology here, but essentially what I want to do is to make a function (or macro) which takes another function as its argument and looks inside that argument function to see what it does.

So here’s an example case. I’ve defined a method *(f1::Function, f2::Function) = t -> f1(t) * f2(t)

And then I want my function D(f::Function) to look inside f without evaluating it and see whether f is the multiplication of two functions. I.e., if f = sin * cos, then D(f) should return D(sin) * cos + sin * D(cos).

Does anyone have any ideas on how to go about doing this? I’ve had a hard time with the documentation on this.


To give some background on why I’m interested in this, I recently watched this talk: Physics in Clojure, where the author talks about his experience porting scmutils, the library from the book Structure and Interpretation of Classical Mechanics (something of a sequel to Structure and Interpretation of Computer Programs), from Scheme to Clojure.

This got me really interested in attempting this with Julia, though it’s a bit of an uphill battle: I’m not proficient enough with Scheme and Clojure to understand the author’s source code for scmutils, and the documentation is very sparse. However, I can infer how the programs are supposed to behave, and so it’s turned into a fun little side project to attempt to reimplement scmutils in Julia.

Waiting for Cassette.jl (GitHub - JuliaLabs/Cassette.jl: Overdub Your Julia Code) might be your best bet.

You could also try out https://github.com/SimonDanisch/Sugar.jl.

4 Likes

Won’t Cassette require evaluation of the function, which the OP doesn’t want?

@Mason Have you considered macros? If *(f1::Function, f2::Function) = @store_definition t -> f1(t) * f2(t), you can turn the anonymous function into a callable object (containing both the anonymous function and its definition). Then it’s straightforward to check whether it’s a multiplication.

Or you could straight up have *(f1::Function, f2::Function) = Multiply(f1, f2), where Multiply is a callable object.
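
To make that concrete, here’s a minimal sketch of the callable-object idea; the Multiply type and the commented-out product rule are my own illustration, not code from the thread:

import Base: *

# A callable struct that remembers which two functions it multiplies.
struct Multiply{F,G} <: Function
    f::F
    g::G
end

# Calling the object evaluates the product pointwise.
(m::Multiply)(t) = m.f(t) * m.g(t)

*(f1::Function, f2::Function) = Multiply(f1, f2)

# D can then dispatch on structure instead of introspecting code, e.g.
# D(m::Multiply) = D(m.f) * m.g + m.f * D(m.g)   # product rule (needs a + for functions too)

With this, sin * cos is a Multiply(sin, cos) whose parts D can inspect directly.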

Do you need your arguments to be “black box” functions? If not, there are a couple of approaches I can think of:

  • Create a “functiony” datatype (and maybe come up with a better term for it :slight_smile: ), representing multiplication of functions, and maybe other combinators if you need. Then you can define a multiplication of these things to just look at the existing structure and extend accordingly.
  • Create functions as expressions, and evaluate them later. Multiplication can then look at the code directly. I’m doing something like this for Soss (see the sketch after this list).
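
Here’s a minimal sketch of what that second approach could look like, assuming both functions share the same argument name; the ExprFunction type is my own illustrative name, not something from Soss:

# Keep the body as an expression so other code can inspect it directly.
struct ExprFunction
    arg::Symbol
    body::Expr
end

# Multiplication just builds a bigger expression; nothing is evaluated yet.
Base.:*(f::ExprFunction, g::ExprFunction) = ExprFunction(f.arg, :($(f.body) * $(g.body)))

# Evaluate only when a value is actually needed.
(f::ExprFunction)(t) = eval(:(let $(f.arg) = $t; $(f.body) end))

Then ExprFunction(:t, :(sin(t))) * ExprFunction(:t, :(cos(t))) carries the expression :(sin(t) * cos(t)) around, which a derivative function can pattern-match on.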

Can you back up and summarize what underlying problem you are trying to solve? Why do you need to write functions that analyze other functions without executing them?

3 Likes

My apologies, I’ll edit my original post.

Great, Cassette looks exciting but for now at least, Sugar might do the trick.

Evaluation isn’t actually a huge deal; I just don’t want to have to call D as D((sin*cos)(t)). I want it to act on the function sin*cos itself.

A natural way to do this in Julia, without introspection, is e.g. D(f) = x -> ForwardDiff.derivative(f, x) using the ForwardDiff package for automatic differentiation. Then you can do D(sin ∘ cos) etcetera. (Note that Julia 0.6 has a generic function-composition operator ∘, typed by tab-completing \circ in the REPL etc.)
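
Spelled out as a runnable snippet (assuming the ForwardDiff package is installed):

using ForwardDiff

# D(f) returns the numeric derivative of f via forward-mode automatic differentiation.
D(f) = x -> ForwardDiff.derivative(f, x)

D(sin ∘ cos)(1.0)   # ≈ -sin(1.0) * cos(cos(1.0))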

I deal with both numeric and symbolic inputs, though. Using multiple dispatch, I already have it so that (D(f::Function))(t::Number) is the automatic derivative of f, but I need (D(f::Function))(t::Union{Expr, Symbol}) to perform symbolic differentiation using the calculus rules I’ve written.

From what I’ve seen here, though, the introspection is apparently not worth it. I think I have another idea that I’ll try out, and if it works I’ll report back.

Okay, so I got around my problem, and I did it without needing to introspect into functions. In case anyone’s interested in how I did this: I made a derivative function D using forward-mode automatic differentiation, but with symbolic arguments instead of numeric ones. This turned out to be really easy to implement and much better than the method I was using before, which involved trying to do pattern matching and term rewriting to apply differentiation rules.

1 Like

That sounds nice - which symbolic package did you use? Would you be interested in sharing the code?

I’m actually not using a symbolic package; I wanted total control over evaluation and simplification, so I rolled my own. It’s still in the early stages, but I’ll share some snippets here to show you what’s going on. I think I’ll put the whole thing up on a GitHub repo soon.

The idea is that I’ve added methods to all the basic mathematical functions such that

julia> :x + :y 
:(x + y)
julia> :x - 5 
:(x + -5)   #I'm not sure I'm super happy with this choice but it makes simplification much easier.
julia> sin(:x)
:(sin(x))

etc.
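
The post doesn’t show those method definitions, but they presumably look something like this (my own sketch, covering just the three cases above):

import Base: +, -, sin

# Build quoted expressions out of symbols and numbers.
+(a::Symbol, b::Symbol) = :($a + $b)
-(a::Symbol, b::Number) = :($a + $(-b))   # rewrite subtraction as adding a negative
sin(a::Symbol)          = :(sin($a))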

And then I made a simplification algorithm which looks for common mathematical patterns in the expression tree and applies replacement rules to them, looping until the expression stops changing, i.e.

julia> :(x^1) |> expansion_loop
x
julia> :(0 * x) |> expansion_loop
0
julia> :(1 * x) |> expansion_loop
x
julia> :(0 + x) |> expansion_loop
x

etc.
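
The actual rule set isn’t shown, but the general shape of such a fixed-point rewriter might be something like this (the rules below are illustrative, not the poster’s actual expansion_loop):

# One pass of simplification over an expression tree.
function simplify_once(ex)
    ex isa Expr || return ex
    args = map(simplify_once, ex.args)                 # simplify subexpressions first
    if ex.head == :call && args[1] == :* && 0 in args[2:end]
        return 0                                       # 0 * x  =>  0
    elseif ex.head == :call && args[1] == :*
        rest = filter(a -> a != 1, args[2:end])        # 1 * x  =>  x
        return isempty(rest) ? 1 : length(rest) == 1 ? rest[1] : Expr(:call, :*, rest...)
    elseif ex.head == :call && args[1] == :+
        rest = filter(a -> a != 0, args[2:end])        # 0 + x  =>  x
        return isempty(rest) ? 0 : length(rest) == 1 ? rest[1] : Expr(:call, :+, rest...)
    elseif ex.head == :call && args[1] == :^ && length(args) == 3 && args[3] == 1
        return args[2]                                 # x^1  =>  x
    end
    Expr(ex.head, args...)
end

# Keep applying the rules until the expression no longer changes.
function expansion_loop(ex)
    while true
        new_ex = simplify_once(ex)
        new_ex == ex && return ex
        ex = new_ex
    end
end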

Finally, to take derivatives of symbolic functions I made a struct

# Anything that can appear in a symbolic expression: a number, a symbol, or an expression.
const Mathy = Union{Number, Symbol, Expr}

struct Dual_Number
    real::Mathy
    infinitesimal::Mathy
end

# Smart constructor: collapse to just the real part when the infinitesimal part is zero.
function dual_number(a::Mathy, b::Mathy)
    if b == 0
        a
    else
        Dual_Number(a, b)
    end
end

# A single argument is treated as a pure infinitesimal.
dual_number(a) = dual_number(0, a)

and then screwed around with how dual numbers are displayed, i.e.

function Base.show(io::IO, dual::Dual_Number)
    # Use a "-" sign when the infinitesimal part is a negative number.
    if (dual.infinitesimal isa Number) && (dual.infinitesimal < 0)
        op = "-"
    else
        op = "+"
    end
    real_string = dual.real == 0 ? "" : "$(dual.real) $op "
    # Drop a zero infinitesimal, print a unit one as "ϵ", and parenthesize sums/differences.
    infinitesimal_string = dual.infinitesimal == 0 ? "" :
                           abs(dual.infinitesimal) == 1 ? "ϵ" :
                           (dual.infinitesimal isa Expr) && (dual.infinitesimal.args[1] == :+ || dual.infinitesimal.args[1] == :-) ? "($(dual.infinitesimal))ϵ" :
                           "$(dual.infinitesimal)ϵ"
    print(io, "$real_string$infinitesimal_string")
end
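
For example, the display then behaves roughly like this (my own illustration of the output, not taken from the post):

julia> dual_number(:x, 1)
x + ϵ

julia> dual_number(3, -1)
3 - ϵ

julia> dual_number(:x, 0)    # a zero infinitesimal part collapses back to the plain symbol
:x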

Next, I’ve made methods for some standard mathematical functions for working with dual numbers; they are agnostic about whether the parts are numbers, symbols, or expressions (because I can compose all of them):


import Base: +, -, *, /, ^, conj, log, sin, cos   # extend these operators for Dual_Number

conj(a::Dual_Number) = dual_number(a.real, -(a.infinitesimal))

+(a::Dual_Number, b::Dual_Number) = dual_number(a.real + b.real, a.infinitesimal + b.infinitesimal)
+(a::Dual_Number, b::Mathy) = dual_number(a.real + b, a.infinitesimal)
+(a::Mathy, b::Dual_Number) = dual_number(a + b.real, b.infinitesimal)

-(a::Dual_Number, b::Dual_Number) = dual_number(a.real - b.real, a.infinitesimal - b.infinitesimal)
-(a::Dual_Number, b::Mathy) = dual_number(a.real - b, a.infinitesimal)
-(a::Mathy, b::Dual_Number) = dual_number(a - b.real, -(b.infinitesimal))

*(a::Dual_Number, b::Dual_Number) = dual_number(a.real * b.real, a.infinitesimal * b.real + a.real * b.infinitesimal)
*(a::Dual_Number, b::Mathy) = dual_number(a.real * b, a.infinitesimal * b)
*(a::Mathy, b::Dual_Number) = dual_number(a * b.real, a * b.infinitesimal)

/(a::Dual_Number, b::Dual_Number) = b.real != 0 ? (a * conj(b))/(b.real)^2 : Inf
/(a::Dual_Number, b::Mathy) = dual_number(a.real / b, a.infinitesimal / b)
/(a::Mathy, b::Dual_Number) = b.real != 0 ? (a * conj(b))/(b.real)^2 : Inf

^(a::Dual_Number, b::Mathy) = dual_number(a.real^b, b * a.real^(b-1) * a.infinitesimal)
^(a::Dual_Number, b::Integer) = dual_number(a.real^b, b * a.real^(b-1) * a.infinitesimal)
^(a::Mathy, b::Dual_Number) = dual_number(a^b.real, log(a) * a^b.real * b.infinitesimal)   # a^(r + d*ϵ) = a^r + log(a) * a^r * d * ϵ

log(a::Dual_Number) = dual_number(log(a.real), 1/a.real * a.infinitesimal)

sin(a::Dual_Number) = dual_number(sin(a.real), cos(a.real)*a.infinitesimal)
cos(a::Dual_Number) = dual_number(cos(a.real), -sin(a.real)*a.infinitesimal)

The idea here is that if I define ϵ = dual_number(0, 1), then ϵ^2 == 0, so ϵ behaves like a very small (infinitesimal) number: for any function f(x), f(x + ϵ) = f(x) + ϵ f'(x).

I then made a derivative function D which takes a function f and returns a new function that evaluates f(t + ϵ), extracts the infinitesimal part (which is the derivative of f at t), and simplifies the result.

ϵ = dual_number(0, 1)

# Evaluate f at t + ϵ, pull out the infinitesimal (derivative) part, and simplify it.
function D(f::Function)
    t::Mathy -> f(t + ϵ).infinitesimal |> expansion_loop
end

With this I can do stuff like

julia> f(x) = log(sin((x^2)));
julia> D(f)(:x)
:((1 / sin(x ^ 2)) * (cos(x ^ 2) * (2x)))

which is correct(!!!) but could be simplified a bit more.

Sorry, that turned into a lot of code, but I was excited enough that I wanted to share what I had done. Once this is a bit more polished, I’ll maybe write some blog posts about it if people are interested.

3 Likes