Generate a re-usable fast function from Symbolics

I am trying to generate a function that takes multiple inputs (both symbolic and numeric) and returns multiple outputs, each with a different size. I want to generate it using build_function with expression = Val{false}, so that it comes back as a RuntimeGeneratedFunction.

But I cannot make it work! I would appreciate it if somebody could help me. Here is pseudocode of what I want to do.

Script 1:

```julia
using Symbolics, RuntimeGeneratedFunctions

@variables θ m g

F = m * g * sin(θ)
T = m * g * cos(θ)

F_expr = build_function([F, T], [θ, m, g], expression = Val{false})

write("myfunc.jl", string(F_expr))
```

Script 2:

```julia
c = include("myfunc.jl")
a, b = c([1, 2, 3])
```

Please format Julia code in fenced code blocks between triple backticks; forums like Discourse support a bit of extended Markdown like this. Please also share whatever error stacktraces you’re running into.

For now, it looks like you’re mixing up the Val{true} option’s eval-able Expr outputs and the Val{false} option’s callable RuntimeGeneratedFunction outputs, and you’re neglecting that build_function actually gives you a 2-tuple to choose from. Review the tutorial here: Getting Started with Symbolics.jl · Building Functions
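For reference, here’s a minimal sketch (assuming a recent Symbolics.jl) of what that 2-tuple looks like with your expressions. The first element is an out-of-place function that returns a new array; the second is an in-place function that writes into a preallocated buffer:

```julia
using Symbolics

@variables θ m g
F = m * g * sin(θ)
T = m * g * cos(θ)

# build_function on an array of expressions returns a 2-tuple of functions.
# With expression = Val{false}, both are callable RuntimeGeneratedFunctions.
f_oop, f_ip! = build_function([F, T], [θ, m, g]; expression = Val{false})

v = f_oop([1.0, 2.0, 3.0])     # out-of-place: allocates and returns a 2-vector

out = zeros(2)
f_ip!(out, [1.0, 2.0, 3.0])    # in-place: mutates `out` instead
```

Your original `c = ...` bound the whole tuple, which is why calling it failed.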

Thank you for your answer. I have read those pages before. My question is mostly about how I can get the RuntimeGeneratedFunction that comes out of build_function working in another file.

For instance, if I do this in the first script:

```julia
using Symbolics

# Import JuliaTarget from Symbolics
import Symbolics: JuliaTarget, build_function

# Define symbolic variables and expressions
@variables x y
f = x^2 + 2x*y + y^2
t = x + y
println("Expression: ", f)

# Generate a Julia function (take the second element of the 2-tuple)
generated_code = build_function([f, t], [x, y]; target = JuliaTarget(), expression = Val{true})[2]
println("Generated code: ", generated_code)

write("generated_function.jl", string(generated_code))
```

and if I use it in the second script as follows:

```julia
using Symbolics
using NaNMath

# Include the generated function
f = include("generated_function.jl")

# Define input and output arrays
out = [0.0, 0.0]
f(out, [1.0, 2.0])
println(out[1])
println(out[2])
```

I can easily get the output I want, but I read in some other documents that a RuntimeGeneratedFunction is faster. Therefore, I set expression = Val{false} so that I get the function in this RuntimeGeneratedFunction format:

```julia
RuntimeGeneratedFunction{(:ˍ₋out, :ˍ₋arg1), Symbolics.var"#_RGF_ModTag", Symbolics.var"#_RGF_ModTag", (0x2e47bddd, 0xf8cec6ab, 0x587ece1d, 0x1361e09f, 0x8e477015), Expr}(:(#= C:\Users\milad\.julia\packages\Symbolics\PxO3a\src\build_function.jl:342 =# @inbounds begin
          #= C:\Users\milad\.julia\packages\Symbolics\PxO3a\src\build_function.jl:342 =#
          begin
              #= C:\Users\milad\.julia\packages\SymbolicUtils\99RP6\src\code.jl:388 =#
              #= C:\Users\milad\.julia\packages\SymbolicUtils\99RP6\src\code.jl:389 =#
              #= C:\Users\milad\.julia\packages\SymbolicUtils\99RP6\src\code.jl:390 =#
              begin
                  begin
                      #= C:\Users\milad\.julia\packages\Symbolics\PxO3a\src\build_function.jl:558 =#
                      #= C:\Users\milad\.julia\packages\SymbolicUtils\99RP6\src\code.jl:437 =# @inbounds begin
                              #= C:\Users\milad\.julia\packages\SymbolicUtils\99RP6\src\code.jl:433 =#
                              ˍ₋out[1] = (+)((+)((NaNMath.pow)(ˍ₋arg1[1], 2), (*)((*)(2, ˍ₋arg1[1]), ˍ₋arg1[2])), (NaNMath.pow)(ˍ₋arg1[2], 2))
                              ˍ₋out[2] = (+)(ˍ₋arg1[1], ˍ₋arg1[2])
                              #= C:\Users\milad\.julia\packages\SymbolicUtils\99RP6\src\code.jl:435 =#
                              nothing
                          end
                  end
              end
          end
      end))
```

But I don’t know how to include it in the second script and execute it.

Faster compared to what? If you mean faster execution compared to the anonymous function eval-ed from the Val{true} expression, then that’s not true.

The real benefit of RuntimeGeneratedFunction is that it can be instantiated and compiled during a method call without hitting the world-age issues an eval-ed expression would. You’re not doing this inside a method, so that benefit doesn’t apply. If you really want to write an anonymous function to another file given a RuntimeGeneratedFunction, then RuntimeGeneratedFunctions.get_expression can get the Expr back before you convert it to a string, but the Val{true} option saves you all that circular work. To put it another way: you already did the right thing, and Val{false} would only do extra work you don’t need and would then reverse.
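Just to make the round trip concrete, a sketch of what that circular work would look like (assuming a recent Symbolics.jl and RuntimeGeneratedFunctions.jl; note it ends where Val{true} starts):

```julia
using Symbolics, RuntimeGeneratedFunctions

@variables x y

# Val{false} gives a 2-tuple of callable RuntimeGeneratedFunctions.
rgf_oop, rgf_ip! = build_function([x^2 + 2x*y + y^2, x + y], [x, y];
                                  expression = Val{false})

# Recover the underlying Expr from the in-place RGF...
ex = RuntimeGeneratedFunctions.get_expression(rgf_ip!)

# ...which is essentially what build_function(...; expression = Val{true})[2]
# would have handed you directly, ready to write to a file.
write("generated_function.jl", string(ex))
```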

There is a potential source of poorer performance in your working code, however. Your f is a non-const global variable, so if you call it in another method, the compiler cannot assume f always references the same function. That results in call overhead and poor type inference in the code following the call. The fix is simple: write const f = include("generated_function.jl"). Note that named function and type definitions in the global scope are implicitly const as well; that’s important for compiler optimizations.
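A self-contained illustration of the const-global effect (using a hypothetical `square` as a stand-in for your generated function):

```julia
# Hypothetical stand-in for the generated function
square(v) = v .^ 2

f_nonconst = square       # non-const global: the binding could change at any time
const f_const = square    # const global: the compiler knows the exact function

g1(v) = f_nonconst(v)     # call through a non-const global: return type inferred as Any
g2(v) = f_const(v)        # call through a const global: fully inferred, no overhead
```

`@code_warntype g1([1.0])` shows the Any-typed result; `g2` infers `Vector{Float64}` cleanly.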


Thank you so much for your response. Your first paragraph answers my question; I thought a RuntimeGeneratedFunction was faster in execution when called from a separate file.

Just two more questions to check that I understand everything correctly (sorry if they are dumb ones). I am building a pipeline for my work in Julia, so I want to make sure everything is correct. I have lengthy symbolic derivations saved in separate files, and I want to use those terms in a main file to implement an optimization.

  1. If I generate the functions with Val{true} and include them in my main file, for instance:

```julia
const f1 = include('function1')
const f2 = include('function2')
const f3 = include('function3')
const f4 = include('function4')

function required_optimization_terms(inputs)
    out1 = f1(inputs[1:x])
    out2 = f2(inputs[x:y])
    out3 = f3(inputs[y:z])
    out4 = f4([z:a])
    # Return a tuple or another custom result
    return (out1, out2, out3, out4)
end
```

and then pass these into an optimization solver, which will call required_optimization_terms n times to define n constraints. Will this still be efficient in execution (compared to the RuntimeGeneratedFunction version)?

  2. If I write a wrapper function to change how the generated function works:

```julia
function f(input)
    output = similar(input)  # Allocate an output container similar to input
    f(output, input)         # Call the original function
    return output            # Return the result
end
```

Does this introduce overhead? I am coming from MATLAB and Python, and it is easier for me to use functions in this form.

Thanks again for your time and help.

I don’t know; I’m not versed in optimization solvers. Putting the wrong syntax aside, your `const f1 = include("function1.jl")` lines won’t lose any efficiency compared to a typical named function definition.

A method that allocates an output before forwarding to an in-place mutating method is common in Julia. However, the two would normally belong to separate functions, with the mutating one carrying a `!` at the end of its name. You don’t have to do that, but allocating vs. in-place mutation is enough of a difference in practice to warrant it. For example, say I want another method that works on two inputs, `f(input1, input2)`. Uh oh, that would overwrite the `f(output, input)` method, and even if it didn’t, `f([0], [1])` would still be ambiguous.
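A minimal sketch of that convention, using a hypothetical `myfunc` in place of your generated function:

```julia
# Mutating method: `!` in the name signals it writes into `out`.
function myfunc!(out, input)
    @. out = input^2 + 1   # in-place broadcast, no allocation
    return out
end

# Allocating convenience method: allocates, then forwards to the mutating one.
function myfunc(input)
    out = similar(input)
    return myfunc!(out, input)
end

myfunc([1.0, 2.0, 3.0])   # returns [2.0, 5.0, 10.0]
```

Because the two live under different names, neither can shadow or ambiguate the other.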

Yes, allocating takes time and memory, so doing more of it takes more. The more garbage (allocations your program can no longer reach) you make, the more work the garbage collector needs to do. If you have an already allocated object holding obsolete values, go ahead and mutate it to save work. Otherwise, you can’t avoid necessary work.
