Build_function returning an Expr, which is not callable. How do I make it callable?

I am trying to run Optimization.jl (or something else) on an expression I have built up in about 200 lines of code. For example, the output of

println("LOSS FUNCTION:")
println(loss)
println(typeof(loss))

is:

(1.1079731710047782 - xˍt_1)^2 + (2.0 - x_1)^2 + (4.607932711559424 - x_2)^2 + (8.963378140696422 - x_3)^2 + (2.2029940222813673 - xˍt_2)^2 + (4.862596195092209 - xˍt_3)^2 + (xˍt_1 - alpha*x_1)^2 + (xˍt_2 - alpha*x_2)^2 + (xˍt_3 - alpha*x_3)^2
Symbolics.Num

However, the code below fails on the last line:

	lossvars = get_variables(loss)
	println("LOSS VARIABLES:")
	println(lossvars)
	f_expr = build_function(loss, lossvars)
	println(f_expr)
	println(typeof(f_expr))
	n = length(lossvars)
	d = zeros(n)
	println(d)
	println(n)
	println(typeof(f_expr))
	f_expr(d)

The error is:

LoadError: MethodError: objects of type Expr are not callable

The output of the above code is, in case it helps:

LOSS VARIABLES:
Any[xˍt_1, x_1, x_2, x_3, xˍt_2, xˍt_3, alpha]
function (ˍ₋arg1,)
    #= /home/orebas/.julia/packages/SymbolicUtils/NJ0fs/src/code.jl:373 =#
    #= /home/orebas/.julia/packages/SymbolicUtils/NJ0fs/src/code.jl:374 =#
    #= /home/orebas/.julia/packages/SymbolicUtils/NJ0fs/src/code.jl:375 =#
    begin
        (+)((+)((+)((+)((+)((+)((+)((+)((^)((+)(1.1079731710047782, (*)(-1, ˍ₋arg1[1])), 2), (^)((+)(2.0, (*)(-1, ˍ₋arg1[2])), 2)), (^)((+)(4.607932711559424, (*)(-1, ˍ₋arg1[3])), 2)), (^)((+)(8.963378140696422, (*)(-1, ˍ₋arg1[4])), 2)), (^)((+)(2.2029940222813673, (*)(-1, ˍ₋arg1[5])), 2)), (^)((+)(4.862596195092209, (*)(-1, ˍ₋arg1[6])), 2)), (^)((+)(ˍ₋arg1[1], (*)((*)(-1, ˍ₋arg1[7]), ˍ₋arg1[2])), 2)), (^)((+)(ˍ₋arg1[5], (*)((*)(-1, ˍ₋arg1[7]), ˍ₋arg1[3])), 2)), (^)((+)(ˍ₋arg1[6], (*)((*)(-1, ˍ₋arg1[7]), ˍ₋arg1[4])), 2))
    end
end
Expr
[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
7
Expr

For some context, I also tried OptimizationFunction, and I also tried using the @syms macro after a different error message suggested to try that. I have been struggling to optimize this function for a while now, and nothing seems to work. Any help would be appreciated. Thanks.

Welcome @orebas,

P.S. For those reading this thread at a later time: the solution was provided by @ChrisRackauckas and consists of returning the callable function directly, instead of taking additional steps to make the Expr callable:

f_expr = build_function(loss, lossvars, expression = Val{false}) 

Resuming the main/original thread:

You cannot directly call an Expr object (in your specific scenario, it is not a callable function yet).

From the print output you provided, the Expr seems to encode an anonymous function.

Here is an idea for you to try (keep in mind that I didn’t run the suggestion on my end):

f = eval(f_expr)
typeof(f) # should be Function
f(d)

If you run the above in the REPL (or a notebook), you shouldn't run into world-age issues.

If the suggestion above doesn't work, please try to share a working MWE that we can run end-to-end (as it stands, your code starts from a preexisting loss object).

1 Like

Thank you. I tried your suggestion, and got that typeof(f) is

var"#3#4"

and the error is

var"#3#4"
ERROR: LoadError: MethodError: no method matching (::var"#3#4")(::Vector{Float64})
The applicable method may be too new: running in world age 34050, while current world is 34051.

Closest candidates are:
  (::var"#3#4")(::Any) (method too new to be called from this world context.)

I appreciate that a MWE would be helpful, I will try and put one together.

BTW, if I have a .jl file that I'm running, is there a way to run it until the error and then remain in the scope where the error happens? I can't figure out how to inspect variables after an error occurs (hence all the println statements).

1 Like

Here is an MWE (self-contained, it runs as a .jl file)

using ModelingToolkit, DifferentialEquations

function main()
	@variables xˍt_1 x_1 x_2 x_3 xˍt_2 xˍt_3 alpha
	loss_f =
		(2.0 - x_1)^2 + (3.6 - x_2)^2 + (8.9 - x_3)^2 + 
		(1.07 - xˍt_1)^2 + (1.7 - xˍt_2)^2 + 
		(5.0 - xˍt_3)^2 + (xˍt_1 - alpha * x_1)^2 +
		(xˍt_2 - alpha * x_2)^2 +
		(xˍt_3 - alpha * x_3)^2


	lossvars = get_variables(loss_f)
	f_expr = build_function(loss_f, lossvars)
	n = length(lossvars)
	d = zeros(n)
	g = eval(f_expr)
	g(d)
end

main()

and here is the error

ERROR: LoadError: MethodError: no method matching (::var"#3#4")(::Vector{Float64})
The applicable method may be too new: running in world age 33678, while current world is 33679.

Closest candidates are:
  (::var"#3#4")(::Any) (method too new to be called from this world context.)
   @ Main ~/.julia/packages/SymbolicUtils/NJ0fs/src/code.jl:373

To be clear, I am trying to optimize this function loss_f (and others like it) - I tried passing loss_f itself to OptimizationProblem, OptimizationSystem, and OptimizationFunction, in various ways, and nothing worked. So that is my end goal. (I am also hoping to use AD to get gradients/hessians automatically for such a simple function. But right now I literally can't even pass it to the Optimization package.) Thanks for taking a look!

Later Edit (after @ChrisRackauckas correction): my suggestion only explored what can be done downstream to make the Expr object callable. However, the correct approach is to use the package-provided functionality and retrieve the generated function instead of the Expr.

If you have to pick between the two, always go with the well-thought-out, purpose-built functionality the package provides.

I am keeping this for reference and thread-consistency reasons:

using ModelingToolkit, DifferentialEquations

function main()
    @variables xˍt_1 x_1 x_2 x_3 xˍt_2 xˍt_3 alpha
    loss_f =
        (2.0 - x_1)^2 + (3.6 - x_2)^2 + (8.9 - x_3)^2 +
        (1.07 - xˍt_1)^2 + (1.7 - xˍt_2)^2 +
        (5.0 - xˍt_3)^2 + (xˍt_1 - alpha * x_1)^2 +
        (xˍt_2 - alpha * x_2)^2 +
        (xˍt_3 - alpha * x_3)^2


    lossvars = get_variables(loss_f)
    f_expr = build_function(loss_f, lossvars)
    n = length(lossvars)
    d = zeros(n)
    g = eval(f_expr)
    result = @invokelatest g(d)
    @info result
end

main()

Use of eval should always be weighed carefully.

In my example, note the use of the @invokelatest macro. You can consider it a way to bypass the world-age error.

However, the situation is a little more nuanced: if you compile the anonymous function and return it from your main, you'll be able to use it at a later time without running into world-age issues (i.e., the world will have aged by the time of subsequent calls).
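To illustrate that last point with a sketch (which I haven't run against your exact setup): the eval happens inside the function, but the call happens at the top level, so the world has aged by call time and no @invokelatest is needed. The build_loss helper name and the tiny two-variable loss are made-up stand-ins for your real code:

```julia
using ModelingToolkit

function build_loss()
    @variables x_1 alpha
    loss_f = (2.0 - x_1)^2 + (x_1 - alpha * x_1)^2
    lossvars = get_variables(loss_f)
    # eval the Expr here, but return the function instead of calling it
    return eval(build_function(loss_f, lossvars))
end

g = build_loss()
# top-level call: the world has aged since the eval, so this works
g(zeros(2))
```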

No, please use the recommended keyword argument in build_function. The default is expression = Val{true}; change it to expression = Val{false} and you'll get a RuntimeGeneratedFunction that won't have world-age issues. I.e.:

f_expr = build_function(loss, lossvars, expression = Val{false})
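Applied to the MWE from earlier in the thread, the whole thing becomes (same code, with only the keyword argument added and the eval removed; a sketch I haven't re-run):

```julia
using ModelingToolkit

function main()
    @variables xˍt_1 x_1 x_2 x_3 xˍt_2 xˍt_3 alpha
    loss_f =
        (2.0 - x_1)^2 + (3.6 - x_2)^2 + (8.9 - x_3)^2 +
        (1.07 - xˍt_1)^2 + (1.7 - xˍt_2)^2 +
        (5.0 - xˍt_3)^2 + (xˍt_1 - alpha * x_1)^2 +
        (xˍt_2 - alpha * x_2)^2 +
        (xˍt_3 - alpha * x_3)^2

    lossvars = get_variables(loss_f)
    # expression = Val{false} returns a RuntimeGeneratedFunction,
    # which is directly callable and sidesteps world-age issues
    g = build_function(loss_f, lossvars, expression = Val{false})
    g(zeros(length(lossvars)))
end

main()
```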
3 Likes

Thank you guys so much! The second suggestion works. I did not try the first; I am attempting to avoid learning about "world age" until I absolutely must.
I have a follow-up question about optimization; I will start a new topic on that.

1 Like