Package for mathematical functions?

Is there a package for manipulating (mathematical) functions? I often find myself manipulating functions in algorithms. For example, I often create the sum f+g with domain(f+g) = domain(f) ∩ domain(g), such that (f+g)(x) = f(x) + g(x). Unfortunately, the + operation is not defined for Julia functions:

f(x) = x
g(x) = 2x

f+g # error

and so algorithms always have a branch or a separate method with a specific implementation. It would sometimes be nice to treat functions generically, as if they were numbers. I am mostly interested in functions R^n -> R^m, but complex numbers could be useful too.


Is this the kind of thing that you were thinking of?

julia> function Base.:+(fn1::Function, fn2::Function)
           function mysum(varargs...; kwargs...)
               return fn1(varargs...; kwargs...) + fn2(varargs...; kwargs...)
           end
           return mysum
       end

julia> f(x) = x
f (generic function with 1 method)

julia> g(x) = 2x
g (generic function with 1 method)

julia> f+g
(::var"#mysum#4"{var"#mysum#3#5"{typeof(f),typeof(g)}}) (generic function with 1 method)

julia> (f+g)(10)
30

Yes, but this would be considered type piracy, right? I was looking for a package with a new type like MathFunc, with careful semantics for R^n, C^n, etc., and the notions of domain, image of functions, and so on.
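For the record, here is a minimal sketch of such a non-pirating wrapper (all names hypothetical, not from any package): the type owns its function and an explicit domain, and `+` is defined on the wrapper only, intersecting the domains as described above, so no method is added to `::Function`.

```julia
# Hypothetical minimal wrapper; `MathFunc` here is not a real package type.
struct MathFunc{F,D}
    f::F
    domain::D   # here simply a set of admissible inputs
end

# Calling the wrapper checks membership in the domain first.
(m::MathFunc)(x) = x in m.domain ? m.f(x) : throw(DomainError(x, "outside domain"))

# domain(f+g) = domain(f) ∩ domain(g), as in the original post.
Base.:+(a::MathFunc, b::MathFunc) =
    MathFunc(x -> a.f(x) + b.f(x), intersect(a.domain, b.domain))

f = MathFunc(x -> x,  Set(1:10))
g = MathFunc(x -> 2x, Set(5:15))
(f + g)(7)          # 21, since 7 lies in both domains
# (f + g)(2) would throw a DomainError: 2 ∉ domain(f) ∩ domain(g)
```

This stays piracy-free because both argument types of `Base.:+` are owned by the defining module.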

I wonder if AbstractAlgebra.jl is what you are looking for?

Yes, something along these lines would be useful. My focus is on real or complex analysis, however, not abstract algebra, at least for the moment.

Have you given ApproxFun.jl a try? These types of packages push forward representations of functions in a given basis. In general you’re either looking for lazy evaluation, in which case you can use closures/thunks or Lazy.jl, or you are looking for full function representations, in which case ApproxFun is a good choice.
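The closure route needs no package at all; a two-line sketch (the `lazysum` name is made up) that returns a new function without touching `Base.:+`:

```julia
# A closure combining two functions lazily: nothing is evaluated
# until the returned function is called.
lazysum(f, g) = x -> f(x) + g(x)

f(x) = x
g(x) = 2x
h = lazysum(f, g)
h(10)   # 30
```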


Here’s my attempt at implementing your original “sum of functions” idea in AbstractAlgebra:

julia> import AbstractAlgebra

julia> mutable struct MathFunc{D, C} <: AbstractAlgebra.Map{D, C, AbstractAlgebra.FunctionalMap, MathFunc}
           domain::D
           codomain::C
           image_fn::Function
       end

julia> function (M::AbstractAlgebra.Map(MathFunc))(a)
           AbstractAlgebra.parent(a) != AbstractAlgebra.domain(M) && throw(DomainError(M))
           return AbstractAlgebra.image_fn(M)(a)::AbstractAlgebra.elem_type(AbstractAlgebra.codomain(M))
       end

julia> function Base.:+(M1::AbstractAlgebra.Map(MathFunc), M2::AbstractAlgebra.Map(MathFunc))
           AbstractAlgebra.domain(M1) == AbstractAlgebra.domain(M2) || throw(DomainError((M1, M2)))
           AbstractAlgebra.codomain(M1) == AbstractAlgebra.codomain(M2) || throw(DomainError((M1, M2)))
           return MathFunc(AbstractAlgebra.domain(M1), AbstractAlgebra.codomain(M1), x -> AbstractAlgebra.image_fn(M1)(x) + AbstractAlgebra.image_fn(M2)(x))
       end

julia> const R = AbstractAlgebra.RealField

julia> f = MathFunc(R, R, x -> x)
MathFunc{AbstractAlgebra.Floats{BigFloat},AbstractAlgebra.Floats{BigFloat}}(Floats, Floats, var"#5#6"())

julia> g = MathFunc(R, R, x -> 2x)
MathFunc{AbstractAlgebra.Floats{BigFloat},AbstractAlgebra.Floats{BigFloat}}(Floats, Floats, var"#7#8"())

julia> f(R(10))
10.0

julia> g(R(10))
20.0

julia> f + g
MathFunc{AbstractAlgebra.Floats{BigFloat},AbstractAlgebra.Floats{BigFloat}}(Floats, Floats, var"#3#4"{MathFunc{AbstractAlgebra.Floats{BigFloat},AbstractAlgebra.Floats{BigFloat}},MathFunc{AbstractAlgebra.Floats{BigFloat},AbstractAlgebra.Floats{BigFloat}}}(MathFunc{AbstractAlgebra.Floats{BigFloat},AbstractAlgebra.Floats{BigFloat}}(Floats, Floats, var"#5#6"()), MathFunc{AbstractAlgebra.Floats{BigFloat},AbstractAlgebra.Floats{BigFloat}}(Floats, Floats, var"#7#8"())))

julia> (f + g)(R(10))
30.0

It does seem a little unwieldy. Probably @ChrisRackauckas has the better suggestion.


Here’s the “sum of functions” with ApproxFun:

julia> import ApproxFun

julia> const R = ApproxFun.Line()

julia> f = ApproxFun.Fun(x -> x, R)
┌ Warning: Maximum number of coefficients 1048577 reached in constructing Fun.
└ @ ApproxFunBase ~/.julia/packages/ApproxFunBase/Ak6LI/src/constructors.jl:138
Fun(Chebyshev(ℝ),[0.0, 4.1937830580858183e6, 0.0, 4.193779058085821e6, 0.0, 4.193775058085823e6, 0.0, 4.193771058085826e6, 0.0, 4.1937670580858313e6  …  0.0, 17.99671143171054, 0.0, 13.997442224683063, 0.0, 9.998173017662317, 0.0, 5.998903810632585, 0.0, 1.9996346035238175])

julia> g = ApproxFun.Fun(x -> 2x, R)
┌ Warning: Maximum number of coefficients 1048577 reached in constructing Fun.
└ @ ApproxFunBase ~/.julia/packages/ApproxFunBase/Ak6LI/src/constructors.jl:138
Fun(Chebyshev(ℝ),[0.0, 8.387566116171637e6, 0.0, 8.387558116171642e6, 0.0, 8.387550116171646e6, 0.0, 8.387542116171652e6, 0.0, 8.387534116171663e6  …  0.0, 35.99342286342108, 0.0, 27.994884449366126, 0.0, 19.996346035324635, 0.0, 11.99780762126517, 0.0, 3.999269207047635])

julia> f(10)

julia> g(10)

julia> f + g
Fun(Chebyshev(ℝ),[0.0, 1.2581349174257455e7, 0.0, 1.2581337174257463e7, 0.0, 1.2581325174257468e7, 0.0, 1.258131317425748e7, 0.0, 1.2581301174257495e7  …  0.0, 53.990134295131625, 0.0, 41.99232667404919, 0.0, 29.994519052986952, 0.0, 17.996711431897758, 0.0, 5.998903810571452])

julia> (f + g)(10)

Thank you @ChrisRackauckas. I wonder whether handling multivariate functions is among the goals of ApproxFun.jl, and whether it exposes function properties like the domain to the user? I should probably ask @dlfivefifty directly :slight_smile:

To be honest, I am not interested in approximations; I would like exact function sums, products, scalings, etc. I remembered ApproxFun.jl when I asked the question, and it is a really nice package, but maybe its goals differ from what I am looking for. I simply want to manipulate functions without approximation, operating with them as if they were scalars.

That is great @dilumaluthge, thank you for taking the time to write it down :heart: I wonder if we can deal with R^n as well? Skimming the README of ApproxFun.jl, I only saw examples on the real line.

If you skim to the bottom you’ll see an example of a PDE on a square. sum also works there. A POC implementation on disks and triangles is implemented in MultivariateOrthogonalPolynomials.jl.

Higher dimensions are not implemented yet, though 3D on cubes, balls, and tetrahedra would not be too hard.


While approximate, it does work to floating-point accuracy (~1e-16), so it is somewhat “exact”, and in some cases even more numerically exact than evaluating the original functions directly, because the basis expansion techniques are pretty careful to be numerically stable.

But indeed, the other choice is to just use something like Lazy.jl if you were just looking for lazy evaluation.


That is an interesting observation. Thanks for sharing. :100:

Thank you for the package @dlfivefifty, really nice work.