Type constraints in Distributions and StatsFuns

Hello all,

I’m having some type-constraint trouble, and I’m hoping someone here might have a suggestion for a workaround. For most distributions, the logpdf is just an algebraic expression that I need to perform some basic manipulations on. There are a few options for doing this, but the one that so far seems most promising is @chakravala’s Reduce.jl, which wraps the external tool Reduce.

Because most of these are so simple, I had hoped to be able to pass a symbolic value to build a distribution, and then pass that and another symbolic value to logpdf to end up with some symbolic expression.

Simple example:

# μ, σ, x are all symbolic
logpdf(Normal(μ,σ),x)

Type constraints are preventing this. Even if I bypass the distribution, I’m stuck with

normlogpdf(μ::Real, σ::Real, x::Number)

which, of course, doesn’t match, because the values are symbolic rather than Real.

I had thought the obvious solution would be to make, e.g., normlogpdf more generic, possibly exporting only a type-constrained version. IIUC, this would present an identical interface with no loss in performance; the only difference would be the existence of an importable generic version, handy for situations like this.

From some Slack discussion, there wasn’t much interest in this suggestion, so I’m trying to find a workaround. Some possibilities:

  • Fork StatsFuns.jl and modify as needed. This introduces a new dependency to track and generally confuses things.
  • Copy lots of StatsFuns code into my package and genericize it there. This way I could still use the original StatsFuns for most purposes, but I’d have generic versions for the algebraic work.
  • Find another CAS that allows type constraints (is there one?)

Any suggestions? Are there other options I’m missing?


You can import relevant functions from StatsFuns into the module of your package and define methods for your own type (which I assume is defined in your own package to represent symbolic expressions, so it won’t be type piracy).
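
For example, something along these lines (a minimal sketch; SymExpr is a hypothetical stand-in for the symbolic type your package owns):

import StatsFuns

struct SymExpr  # hypothetical symbolic-expression type owned by your package
    ex::Expr
end

# Not type piracy: SymExpr is yours, so you may add methods to StatsFuns' functions.
function StatsFuns.normlogpdf(μ::SymExpr, σ::SymExpr, x::SymExpr)
    # build the symbolic normal logpdf: -((x - μ)^2)/(2σ^2) - log(σ) - log(2π)/2
    SymExpr(:(-(($(x.ex) - $(μ.ex))^2) / (2 * $(σ.ex)^2) - log($(σ.ex)) - log(2π) / 2))
end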


Perhaps this is a use case where the f(x) = _f(x) pattern can be useful. You could PR StatsFuns to make every f(x::Real) = _f(x), for example. Then in your package, you can define your own type that wraps a Symbol or Expr as Tamas suggests, and define your version of StatsFuns.f as StatsFuns.f(x::MyType) = StatsFuns._f(x). This will minimize the copying you have to do (in fact you can probably metaprogram it), and you won’t be changing the exported functions’ annotations in StatsFuns, so users can still get clear dispatch errors. One problem with this is that your code can break easily if the StatsFuns devs decide to remove _f at any point, which won’t be a “breaking” change since _f is not exported.
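
Concretely, the refactor would look something like this (an untested sketch; StatsFuns has no _normlogpdf today):

# Unexported, unconstrained body (log2π is a constant exported by StatsFuns):
_normlogpdf(μ, σ, x) = -(abs2((x - μ) / σ) + log2π) / 2 - log(σ)

# Exported, type-constrained wrapper; the public API is unchanged:
normlogpdf(μ::Real, σ::Real, x::Number) = _normlogpdf(μ, σ, x)

# Then, in a downstream package:
StatsFuns.normlogpdf(μ::MyType, σ::MyType, x::MyType) = StatsFuns._normlogpdf(μ, σ, x)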

Alternatively, make MyType <: Real or something. That should take you as far as real numbers can go in StatsFuns.

You can also check out GaussianDistributions.jl (https://github.com/mschauer/GaussianDistributions.jl): Gaussian distributions as state variables and for uncertainty quantification with Unitful.

I think this would work. The only issue I see is that the body of a given logpdf function, for example, is exactly the same whether the input type is ::Real or generic (or some symbolic type). So it’s a lot of code duplication.

I think this is very similar to my suggestion. Do you mean that _f and f would be identical with the exception of a type constraint on f? This was my approach, except I didn’t see a reason to give them separate names. As I write this, I just realized this would allow only exporting f (but not _f). Is that the point? Where else is this pattern used?

I agree metaprogramming would help with this, something like

export f
@generic f(x::T) = ...

becomes

export f
_f(x) = ...                        # Should this be `@inline`d?
f(x::T) = _f(x)

This adds very little to the original source code, and should have zero performance overhead. But it wasn’t well-received, and I still don’t understand why. So I’d really like to better understand any potential downsides to this I’m missing.
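
The macro itself should only take a few lines. Something like this (untested, and ignoring edge cases like keyword arguments, default values, and where clauses):

macro generic(ex)
    ex.head == :(=) || error("@generic expects a short-form method definition")
    call, body = ex.args                # e.g. call = :(f(x::T)), body = the RHS
    fname = call.args[1]
    uname = Symbol(:_, fname)           # f becomes _f
    # argument names with any ::T annotations stripped
    args = [a isa Expr && a.head == :(::) ? a.args[1] : a for a in call.args[2:end]]
    quote
        @inline $(esc(uname))($(map(esc, args)...)) = $(esc(body))   # generic body
        $(esc(call)) = $(esc(uname))($(map(esc, args)...))           # constrained wrapper
    end
end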

This would be great, except MyType isn’t something I control - it comes from something like Reduce.jl.

That looks really useful! I’ll keep it in mind for Gaussian-specific applications. But for the current situation, Normal is a special case; I need to be able to do this with any Distribution.

This was my approach, except I didn’t see a reason to give them separate names. As I write this, I just realized this would allow only exporting f (but not _f). Is that the point?

But it wasn’t well-received, and I still don’t understand why.

This is not the same as what you suggest. f is part of the API, so its type annotations are important. To see why, let’s take an example:

julia> f(x::Real) = sqrt(x)
f (generic function with 1 method)

julia> f(1)
1.0

julia> f("1")
ERROR: MethodError: no method matching f(::String)
Closest candidates are:
  f(::Real) at REPL[1]:1
Stacktrace:
 [1] top-level scope at none:0

julia> f(x) = sqrt(x)
f (generic function with 1 method)

julia> f("1")
ERROR: MethodError: no method matching sqrt(::String)
Closest candidates are:
  sqrt(::Float16) at math.jl:1018
  sqrt(::Complex{Float16}) at math.jl:1019
  sqrt(::Missing) at math.jl:1070
  ...
Stacktrace:
 [1] f(::String) at .\REPL[1]:1
 [2] top-level scope at none:0

You can see that the error in the first case is clearer and shallower, which helps with debugging. So type annotations can encode and communicate to the user the kinds of assumptions a function makes about its inputs. When your code adds multiple layers above f, having to dig deep into the functions called by f is really annoying, especially when you haven’t written the package yourself. So the idea of using _f is that it is not exported, so users will not use it by default, but it lets you bypass the type annotation when you know what you are doing. As I said, though, the problem with this approach is that if you are using the internal function _f, breakage can happen if the developers remove _f. I guess this can be avoided by adding a comment there saying that this function is used in package XYZ, so please don’t remove it.

In a lot of packages, but not for this purpose. Typically, f does some pre-processing, e.g. assertions or other computations, and _f is called, possibly with extra arguments.

This would be great, except MyType isn’t something I control - it comes from something like Reduce.jl.

You can define your own wrapper of the type in Reduce.jl. Then all methods defined in Reduce.jl can be defined for your type by forwarding it to the wrapped type.

This can be made easier by defining a convenience macro in Reduce.jl that enables easy definition of wrappers. So one would do:

@wrap Reduce.ExprType in MyExprType <: Distribution

And it would become:

struct MyExprType <: Distribution
    f::Reduce.ExprType
end
# method forwarding
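
Here the method forwarding would unwrap the field, apply the operation, and re-wrap. A sketch (assuming Reduce.jl defines these operations on the wrapped expression type):

# Hypothetical expansion of the method-forwarding part:
for op in (:+, :-, :*, :/)
    @eval Base.$op(x::MyExprType, y::MyExprType) = MyExprType($op(x.f, y.f))
end
for op in (:log, :exp, :sqrt)
    @eval Base.$op(x::MyExprType) = MyExprType($op(x.f))
end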

I have now re-opened the related issue: Type-constrained functions · chakravala/Reduce.jl#22 (https://github.com/chakravala/Reduce.jl/issues/22)

This may hold for some distributions, but not in general; a lot of PDFs involve numerical constants. Also, while the formulas are invariant, there is no guarantee that the actual implementations will be.


Great point, the _f approach is definitely better. Thanks for the detail :)

Oh, this is interesting. So for my case I think it would be something like

struct FakeReal <: Real
    val::Reduce.RExpr
end

And then I’d need to define operations for FakeReals, which should be a quick loop to @eval the definitions for each operation. Something like (untested)

for f in [:+, :*]
    # qualify with Base so the loop extends the existing functions
    @eval Base.$f(x::FakeReal, y::FakeReal) = $f(x.val, y.val) |> FakeReal
end

@chakravala Any concerns with this approach?

Good point, it’s definitely worth watching out for potential corner cases.

Yes, you could do that, but it wouldn’t be a quick loop if you want to have all the standard functions, which Reduce.jl could automate internally.

So the question is: should Reduce.jl automatically generate all those methods “globally” for FakeReal or is this something which should be handled on a case-by-case basis in your local package?

After some more exploration, I’ve discovered the stats module of SymPy; it can be used from Julia with a few lines of glue code.
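
Roughly something like this (an untested sketch; the wrapper names are placeholders):

using SymPy       # provides @vars, expand, and symbolic log
import PyCall

# sympy.stats isn't wrapped by SymPy.jl by default, so pull it in directly:
const sympy_stats = PyCall.pyimport("sympy.stats")

# Thin wrappers so the Python API reads naturally from Julia:
Normal(name, μ, σ) = sympy_stats.Normal(name, μ, σ)
density(d) = sympy_stats.density(d)

With that in place, you can get a simplified logpdf: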

julia> @vars x μ σ
(x, μ, σ)

julia> s = density(Normal(x, μ, σ)).pdf(x) |> log
   ⎛            2 ⎞
   ⎜    -(x - μ)  ⎟
   ⎜    ──────────⎟
   ⎜          2   ⎟
   ⎜       2⋅σ    ⎟
   ⎜√2⋅ℯ          ⎟
log⎜──────────────⎟
   ⎝    2⋅√π⋅σ    ⎠

julia> s = expand(s,log=true,deep=false,force=true)
                                   2
          log(π)   log(2)   (x - μ) 
-log(σ) - ────── - ────── - ────────
            2        2           2  
                              2⋅σ   

julia> repr(s)
"-log(σ) - log(pi)/2 - log(2)/2 - (x - μ)^2/(2*σ^2)"

I’ve got this connected with Soss.jl to simplify distributions. So, for example, here’s a normal approximation to a Poisson:

julia> @getlogpdf(x ~ Normal(λ,√λ), [x,λ])
                                    2
  log(λ)   log(π)   log(2)   (x - λ) 
- ────── - ────── - ────── - ────────
    2        2        2        2⋅λ   

I’ve heard SymPy is a little slow, but I’m not doing this in an inner loop - it’s a “once and done” thing when the model is defined. So far it seems like the overhead isn’t more than a second or so, which seems well worth the convenience and easy install.

I don’t have a deep understanding of the tradeoffs between the two, but it seems like Reduce may be more powerful in terms of simplification options. And @chakravala, you’ve been a great help in troubleshooting. In general, I’d recommend that anyone interested in a CAS in Julia at least try out Reduce.jl. Thank you!


Alright, so I went ahead and implemented the requested feature, which works on the master branch:

julia> Reduce.@subtype FakeReal <: Real

julia> FakeReal(R"x+1") + FakeReal(R"y")
y + 1 + x

the implementation looks like this at the moment

macro subtype(x)
    x.head ≠ :<: && error("$x is not a subtype expression")
    name = x.args[1]
    Expr(:struct, false, x, Expr(:block, Expr(:(::), :r, :RExpr))) |> eval
    @eval begin
        export $name
        show(io::IO, r::$name) = show(io, r.r)
        extract(r::$name) = r.r
        Algebra.init_subtype($name)
    end
    nothing
end

and

function init_subtype(name)
    Expr(:block,[:(Base.$i(r::$name...)=$i(extract.(r)...)|>$name) for i ∈ [alg;iops]]...) |> eval
    Expr(:block,[:($i(r::$name...)=$i(extract.(r)...)|>$name) for i ∈ [calculus;cnan;cmat]]...) |> eval
    Expr(:block,[:(Base.$i(r::$name)=$i(extract(r))|>$name) for i ∈ [sbas;[:length]]]...) |> eval
    Expr(:block,[:($i(r::$name)=$i(extract(r))|>$name) for i ∈ [sfun;snan;snum;scom;sint;sran;smat]]...) |> eval
end

This provides all of the basic functions from the Reduce.Algebra module, but outside of that, the newly generated subtypes do not yet get the same treatment as RExpr.