Get values of symbols inside a function

Hi guys! I have one problem: if I have an array of symbols, how can I get their values inside a function scope?

Something like this:

function f(syms::Array{Symbol,1})
    a = 1
    b = 2
    c = 3

    # Get the values of the symbols in `syms` and return them.
end

f([:a,:b])

Then it should return [1;2]. I know that eval will not work because it operates at global scope. So, is there any way to do this?

Thanks,
Ronan

Why don’t you use a Dict?

d=Dict(:a=>1,
       :b=>2,
        ...)
[d[s] for s in a]

Because I want to do operations with those variables inside the function, like:

c = a+b
function f(a,b,c)
etc

If I use a Dict, then the code will become very difficult to understand, since I will have many symbols.

You cannot (by design) do this. There have been a fair number of discussions about why, which I won’t rehash here, but what you can do instead is generate the function you want and then call it.

Hi Stefan, thanks for the answer. But I am quite new to Julia; what does it mean to generate the function?

I don’t understand your use case. However, it does sound like it could overlap with my package https://github.com/mauro3/Parameters.jl, which lets you pack and unpack variables.

Stefan means (I think) that you can create a function with meta-programming to save you typing:

function f(a::Vector{Symbol})
  out = []
  for aa in a
    if aa==:a
      push!(out,1)
    elseif aa==:b
      push!(out,2)
    ...
    end
  end
  return out
end

(But again, this you should really do with a Dict.)

Hi Mauro,

No, you are not understanding my use case.

I have tons of variables to simulate a dynamic system. I want to create a system in which I can take all these variables, put them somewhere, and then load them into the local scope. This would help a lot for computing the Runge-Kutta solution of ODEs.

If I use a Dict, for example, then I will have to write something like d[:lat] instead of lat to refer to the latitude. This will work, but I am trying to simplify the maintenance of the code. If I use your package Parameters.jl, then I need to manually assign the values to each variable, and I have something like 200 variables…

The biggest problem is that everything is simulated inside functions and eval works on global scope.

By design, there is no way to create or reference local variables whose names are not known when the function is written. So you either need to use metaprogramming to generate such a function once you know the variable names, or use a Dict instead.
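
Roughly, the idea of generating the function (just a sketch of one way to do it, not necessarily the best): build the function definition as an expression once the names and values are known, eval that definition (eval at global scope is fine when you are only defining a function), and then call the generated function:

# Sketch: generate a lookup function for a known set of name => value pairs,
# so that calling it later involves no eval.
function make_lookup(vals::Dict{Symbol,Int})
    branches = [:(s == $(QuoteNode(k)) && return $v) for (k, v) in vals]
    @eval function lookup(s::Symbol)
        $(branches...)
        error("unknown symbol: ", s)
    end
end

make_lookup(Dict(:a => 1, :b => 2, :c => 3))  # defines `lookup` at global scope
[lookup(s) for s in [:a, :b]]                 # == [1, 2]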

In Parameters.jl, there are the @unpack_MyType macros. Thus you put all your variables into a type and then unpack them:

using Parameters
@with_kw type MyType
  a
  b
  c
...
  xyz # the 200th variable
end
function foo(mt::MyType)
  @unpack_MyType mt
  # now all the fields of MyType are local variables. E.g.:
  return a+b 
end

However, I would caution against this approach. It is reminiscent of the make-all-variables-global style of programming, which is prone to bugs (IMHO). I think it is better to write lots of small functions which operate on only a subset of your 200+ variables, and in those functions explicitly unpack the needed variables; if using Parameters.jl, with @unpack a, b = mt.
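
For instance, a small function in that style could look like this (the function name and the choice of fields are just for illustration):

using Parameters

# One small piece of the model, explicitly unpacking only what it needs:
function small_update(mt::MyType)
  @unpack a, b = mt  # pull out just these two fields as local variables
  return a + b       # placeholder for the real computation
end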

That is very ironic…

@mauro3’s Parameters.jl can be used for this and it is used for this inside of Julia’s ODE solver packages. But I think there’s just a larger design issue here. Why are you mixing variables like lat with the Runge-Kutta steps? The two should be kept separate because

  1. You’re not going to get any extra speed by keeping them together (where would this speed even come from?).
  2. It lets you write re-usable code.

Is there a reason why this can’t be solved with something like DifferentialEquations.jl or ODE.jl? A problem of 200 variables should be a small piece of code which defines a function, and that’s it. Can you describe a little bit what your problem is? I can probably come up with a clean way to write that function (a type + Parameters.jl + an iteration scheme on that type sounds like it may be what you’re looking for?).

That would cut down both on the amount of code you’d have to write and would likely be orders of magnitude more efficient (unless you’re planning to implement things like PI-controlled stepsize adaptivity on higher-order methods yourself, in which case by all means go for it, though there really won’t be a benefit).

Hi Chris,

I think it will be very difficult to explain my use case without showing the code, but I am not allowed to do so now. We are building a satellite simulator. All the systems are very complex and you need to solve many ODEs. The true system is both continuous and discrete (for example, the attitude control algorithm). Coupling these kinds of things is very difficult using a solver such as, for example, Matlab’s ode45. I have 200 variables in the state space that need some kind of integration. Those variables depend on others that are evaluated in a “discrete” manner.

I already wrote everything in Julia. I was just trying to simplify things to make the code more maintainable. Hence, I tried to use some kind of workspace that writes/reads variables into the local scope. But unfortunately it can’t be done in Julia. No problem then! :slight_smile: I will go back to my solution using Dicts.

Thanks everyone!

This is really easy with event handling and callbacks. It’s actually what I am using DifferentialEquations.jl for in a current project (which is much more complex than this since it also does things like change size, but the simple test case is pretty much the application you describe). If you just make a custom type for your values and make it linearly index through the ones which should be continuous, it will “just work”. You can even keep adaptive timestepping by using events (callbacks + tstops at the discretization values), or just use adaptive=false with whatever method to turn off the adaptivity and apply the discrete changes through callbacks. Let me know if you want more details.

I would highly recommend a Parameters.jl method over Dicts since it would give you a lot better performance.

Hi Chris,

Can you provide a minimal example of how I can use this to solve my problem?

Thanks!

type SatelliteData <: AbstractVector{Float64} # assuming the continuous values are Float64
  x # Vector of continuous values
  discrete_field1
  discrete_field2 # Should type these
  ...
end

Now just implement the array interface using x:

Base.getindex(A::SatelliteData,i::Int...) = A.x[i...]
Base.setindex!(A::SatelliteData,x,i::Int...) = (A.x[i...] = x)
Base.length(A::SatelliteData) = length(A.x)
...

Now you can write

function f(t,s,ds)
  # `s` is the input SatelliteData
  # `ds` is the SatelliteData for the change in the continuous values
  # Write the ODE here
  # Discrete values are available via `s.discrete_field1`
end

Now you just have to choose how to handle the discrete changes. You can either do this via events or via tstop + a dumb callback. Let’s do the tstop way. Setup:

const tstop = [ ...] # Values of t for which discrete changes occur

Now you make a dumb callback which checks if it’s a timepoint where discrete changes occur, and applies them:

sat_callback = @ode_callback begin
  if t in tstop
    # Apply discrete changes to `s` here
    # s.discrete_field1 = 2
    ...
  end
  @ode_savevalues # make sure you save after!
end

and then just do the standard

prob = ODEProblem(f,s0,tspan)
solve(prob,callback = sat_callback, tstops = tstop)

and that should work (I might be missing a method in the array interface or something like that, but it should be clear from here how to do that).

This method will adaptively step as far as possible, unless there is a time in tstop that is in the next interval, in which case it will shorten the step to hit that point exactly, and after doing that continuous update it will apply the discrete update (and then save). This implementation using t in tstop has a weak spot, though, since that check isn’t a trivial calculation, so you should replace it with something more related to the problem (i.e. if your tstops are 1:10000, instead just check if t is the same as its rounded value; of course, the optimization here depends on the problem). You may then want to play with dense=false or save_timeseries=false depending on your problem. Check the documentation for solver options there.
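
For example, if the tstops really were the integers 1:10000, the check in the callback could be replaced with something like this (my sketch; adapt it to the actual spacing of your stops):

# Cheaper than `t in tstop` when the stops are exactly the integers 1:10000:
if t == round(t) && 1 <= t <= 10000
  # apply discrete changes to `s` here
end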

Another way, without using tstops, is to just allow the discrete change at the end of each timestep with no adaptivity. This means you just get rid of the conditional in the callback (i.e. have it use the update directly), and pass in adaptive=false with dt = ..... You can even do this without discrete changes every step, and just check t in the callback as above. Adaptive timestepping should usually be better though.

These will work with any algorithm in OrdinaryDiffEq.jl (maybe not Rosenbrock23? I’d have to see how it handles the Jacobian. It may “just work” there, but there’s an optimization in the solver that I can see I’d want to do). After standardizing callbacks, that should extend to Sundials.jl and, if you’re interested in stochastic equations, StochasticDiffEq.jl.

Again, I just typed this out and didn’t test what I wrote, so excuse me if there’s a typo, but it should be pretty simple from here.


Extra Stuff

So that’s one quick way to do it. Implementing it should be very simple: you just write the f (which you probably already have written). There is one downside, though, in that the continuous variables are not named. If you only need the continuous variables to be named inside of f, a quick way to handle this is to use ParameterizedFunctions.jl to define the f.

For example, for the Lotka-Volterra equations:

f = @ode_def LotkaVolterra begin
  dx = a*x - b*x*y
  dy = -c*y + d*x*y
end a=>1.5 b=>1 c=3 d=1

It ends up defining the function on the indices of the input, so it would name s[1] as x and s[2] as y, and everything will work as long as the input type (as determined by the initial condition s0) is indexable (which it is, given the interface we put on it). Also, this will calculate analytical functions for things like the Jacobian, so that could help speed things up (currently not used… but it will be!).
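
In other words, the macro ends up defining something roughly like this (an illustrative sketch, not the literal expansion; the real version handles the a and b parameters through the type):

# Roughly: the names x and y map onto indices of the state vector.
function f(t,s,ds)
  x, y = s[1], s[2]
  ds[1] = 1.5*x - 1*x*y  # dx, with a = 1.5 and b = 1
  ds[2] = -3*y + 1*x*y   # dy, with c = 3 and d = 1
end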

Now let’s say you actually want to make it all index as fields. I’ll put that in a different response because it’s a neat trick, but not truly necessary.

So okay, as an added bonus: what if you wanted to be able to do s.lat, where lat is a continuous variable? How can you set this up? Take the SatelliteData type that we already have. Now I think you can define the following.

Use a helper function:

getfield(s::SatelliteData,sym::Symbol) = getfield(s,Val{sym}) # Helper function

and now define the accessors:

getfield(s::SatelliteData,::Type{Val{:continuous_field1}}) = s.x[1]
setfield!(s::SatelliteData,x,::Type{Val{:continuous_field1}}) = (s.x[1] = x)
getfield(s::SatelliteData,::Type{Val{:continuous_field2}}) = s.x[2]
setfield!(s::SatelliteData,x,::Type{Val{:continuous_field2}}) = (s.x[2] = x)
...

and now s.continuous_field1 should be the same as s[1], which is the same as s.x[1], and so it grabs the first continuous variable. Not only that, but if you only use this in a way where the symbol values are known at compile time, this should all just be syntactic sugar and compile away to zero overhead (I haven’t tested this, but in theory the compiler can and should do it here). So then you can write your ODE like

function f(t,s,ds)
  ds.continuous_field1 = s.continuous_field1 + s.discrete_field1 
  ...
  # Define the ODEs
end

or if you’re lazy and want to get rid of the s. part, use Parameters.jl’s @unpack and @pack.

Again, I would check the @code_llvm before being certain that using it like this will have zero overhead (but it should… it depends on whether Julia realizes it on its own, or whether we have to help it realize it some more :slight_smile:). I say “like this” because this would only work well inside a compiled function: in the REPL it would probably be dynamic dispatch. I didn’t test what I wrote here either, so there may be something subtle in the getfield and setfield! to change.

Actually, a macro for creating this kind of naming on an array type would be nice. NamedArrays.jl?

Again, this is clearly not necessary but is a little added bonus that could make the code easier to read/maintain in the future.

Hi Chris!

You are awesome, man. I would never have expected such a detailed answer. I will study it and implement it in my simulator.

Thank you very much.

Hi Chris!

Two things:

  1. Overloading of getfields does not work; I think it is not supported.
  2. Some of the continuous variables are vectors and matrices. I am thinking of using x::Array{Any,1} so that some components can be matrices and vectors. Do you know if the ODE solver will work? Let’s say I want to integrate d/dt D = X*D, where X and D are 3x3 matrices.

Thanks!
Ronan

getfield, not getfields? Oh, I guess you can’t do it.

You can overload getindex and setindex! via symbols instead, so that s[:continuous_field1] works (i.e. it looks like a Dict).
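
A minimal sketch of that, reusing the layout from before (the field and index choices are just illustrative):

# Make s[:continuous_field1] and s[:continuous_field1] = val work via Val dispatch:
Base.getindex(s::SatelliteData,sym::Symbol) = getindex(s,Val{sym})
Base.setindex!(s::SatelliteData,val,sym::Symbol) = setindex!(s,val,Val{sym})
Base.getindex(s::SatelliteData,::Type{Val{:continuous_field1}}) = s.x[1]
Base.setindex!(s::SatelliteData,val,::Type{Val{:continuous_field1}}) = (s.x[1] = val)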

That should be fine. In fact, one of the problems in the tutorial is on matrices:

https://juliadiffeq.github.io/DiffEqDocs.jl/latest/tutorials/ode_example.html

However, I would try not to use an Array{Any,1} if you want good performance. There are many ways to make the single array work (DifferentialEquations.jl just needs the continuous variables to have a linear index). So one way is to make an indexing scheme that returns views, like

Base.getindex(s::SatelliteData,sym::Symbol) = getindex(s,Val{sym})
Base.getindex(s::SatelliteData,::Type{Val{:continuous_field1}}) = reshape(view(s,1:9),3,3)

and then you can use s[:continuous_field1] to get the matrix stored in the first components, as if it were a dictionary.

Or inside your function f, simply start by defining some views:

function f(t,s,ds)
  a = reshape(view(s,1:9),3,3) # First 9 coordinates are the 3x3 matrix a
  b = @view s[10:20]           # Coordinates 10:20 are the vector b
  ... # use the views
end

Reshapes and views are cheap so this is all sugar with low overhead.

Another thing you can do is instead make the type more complicated:

type SatelliteData <: AbstractVector{Float64}
  continuous_field1
  continuous_field2
  ...
  discrete_field1
  discrete_field2
  ...
end

In the end, the rule is simple: whatever you have linearly indexed is what DifferentialEquations.jl treats as continuous. So define a linear index through the continuous fields, and this will be the “array” DifferentialEquations.jl uses. This is actually a simplified version of what I do with MultiScaleModels.jl.
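
For that last version of the type, a (hypothetical) linear index over just the continuous fields could look like this, assuming for the sketch that continuous_field1 and continuous_field2 are themselves Vectors:

# Walk the continuous fields in order; this index is all the solver ever sees.
Base.length(s::SatelliteData) = length(s.continuous_field1) + length(s.continuous_field2)
Base.size(s::SatelliteData) = (length(s),)
function Base.getindex(s::SatelliteData,i::Int)
  n1 = length(s.continuous_field1)
  i <= n1 ? s.continuous_field1[i] : s.continuous_field2[i-n1]
end
function Base.setindex!(s::SatelliteData,val,i::Int)
  n1 = length(s.continuous_field1)
  i <= n1 ? (s.continuous_field1[i] = val) : (s.continuous_field2[i-n1] = val)
end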


In DifferentialEquations.jl, you can use any “array” with a linear index. The items in the linear index are those recognized by the solvers as the continuous variables. But using Julia’s dispatch, you can define any type you want to be that “array”, and make it have any indexing scheme you like.

This is probably enough information to be an interesting blog post.


Maybe I’ve missed something in your post, but as far as I can tell s[:continuous_field1] will result in dynamic dispatch on the inner getindex call and probably bad performance, perhaps even worse than just using a Dict? Using Val like that is not recommended: see these two sections of the manual.

No, the details matter. It will only dynamically dispatch if you’re using it dynamically. Yes, if you do

var = :continuous_field1
s[var]

that will be dynamic dispatch, so don’t do that with this implementation. But if you only ever use it in a function with constants

s[:continuous_field1] # this is inside of a function, so the symbol is a constant

that shouldn’t dynamically dispatch (it should, when inlining the constant, end up inlining the function; I know for certain it will if you write it as s[Val{:continuous_field1}], and I remember that when I did something similar in the past it was able to inline the pure-symbol form as well when it was a constant). So yes, you have to be careful when using this, but if you restrict it to places where the symbol is a constant then you’re fine.
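
If you want to verify it for your case, one way to check is to wrap the constant-symbol access in a function and inspect the generated code (here sd stands for some SatelliteData instance you have lying around):

g(s) = s[:continuous_field1]  # the symbol is a compile-time constant here
@code_warntype g(sd)          # look for a concrete return type and no dynamic call
@code_llvm g(sd)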

In a function for an ODE as is discussed in this thread, I am certain it will only be used that way, in which case dynamic dispatch is not an issue.

(I actually thought I had already mentioned this, but from a search above I guess I did not.)