Hi,

I’m optimizing my package SymbolicRegression.jl, the backend for PySR, a genetic-algorithm-based, gradient-free symbolic regression code.

The normal workflow for this package is to configure the options (such as the choice of operators, mutation probabilities, and choice of algorithm) and then run the search for a long period of time. For example:

```
options = SymbolicRegression.Options(
    binary_operators=(+, *, /, -),
    unary_operators=(cos, exp),
    npopulations=20,
    annealing=false,
    maxsize=30,
    batching=true
)
```

This options struct configures the search and gets passed to nearly every function. Because of this, I think it would improve performance to have Julia compile functions specialized on each choice of parameter.

I am wondering: is there a way to force Julia to compile every user-defined parameter (defined here) into my functions?

As an example, the tips from @marius311 and @Henrique_Becker on this thread helped a lot with optimizing my equation evaluation. Putting the operator choices into the type, as in

`Options{typeof(binary_operators), typeof(unary_operators)}(...)`

where each set of operators is a tuple, means Julia compiles the operator choices into the equation evaluation. This improves performance by quite a bit.
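To make the technique concrete, here is a minimal sketch of the idea (the struct and function names below are my own illustration, not the package's actual definitions): storing the operator tuples as type parameters means every function taking the options gets compiled specialized to the chosen operators.

```
# Hypothetical sketch, not the real SymbolicRegression.jl Options:
struct OptionsSketch{B<:Tuple,U<:Tuple}
    binary_operators::B
    unary_operators::U
end

# Keyword constructor that captures the tuple types as type parameters:
OptionsSketch(; binary_operators, unary_operators) =
    OptionsSketch{typeof(binary_operators),typeof(unary_operators)}(
        binary_operators, unary_operators)

# Because the operator tuple's type is part of the options type, Julia
# compiles a specialized version of this function per operator choice:
apply_binary(options::OptionsSketch, i, x, y) = options.binary_operators[i](x, y)

opts = OptionsSketch(binary_operators=(+, *), unary_operators=(cos,))
apply_binary(opts, 1, 2.0, 3.0)  # -> 5.0
```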

Basically, I would like to extend this technique to every single parameter in the options, since each parameter will remain constant, or only take on a few different values, per run (say, if the user launches multiple equation searches). My first idea is to repeat the above technique for every single parameter, like so:

```
function search(options::Options{T1, T2, T3, ...}) where {T1, T2, T3, ...}
    # Use T1, T2, T3, ... inside this function
end
```
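For scalar parameters, one way to sketch this is to lift `isbits` values (like `Bool` and `Int`) directly into type parameters, so the compiler treats them as constants; again, the names below are hypothetical, not the package's actual fields:

```
# Hypothetical sketch: lift scalar options into the type domain so
# branches on them are resolved at compile time.
struct FullOptions{Annealing,Maxsize,Batching}
    npopulations::Int
end

# Keyword constructor that moves the scalar values into the type:
FullOptions(; annealing::Bool, maxsize::Int, batching::Bool, npopulations::Int=20) =
    FullOptions{annealing,maxsize,batching}(npopulations)

# `Annealing` is a compile-time constant here, so this branch disappears
# from the generated code for each concrete FullOptions type:
function step_description(options::FullOptions{Annealing}) where {Annealing}
    Annealing ? "annealing step" : "plain step"
end

opts = FullOptions(annealing=false, maxsize=30, batching=true)
step_description(opts)  # -> "plain step"
```

The tradeoff is that every new combination of parameter values triggers a fresh compilation, so this trades compile time for run time; that seems acceptable here since the values stay fixed over a long search.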

but my guess is that there is a cleaner way to do this. Any idea how I could set this up?

Thanks,

Miles