Is it possible to multiply with a derivative operator?

Say that I have:

using SymEngine
x=symbols(:x); y=symbols(:y);
A = [x y^2]

I know that to find the x derivative I can say:

diff.(A, x)

1×2 Array{Basic,2}:
 1  0

My question is then: is it possible in any way to rewrite this (in pseudo-code) like

deriv(x) * A

where “deriv(x)” holds the operator d()/dx, which is then multiplied onto A?

Kind regards


I am currently working on such a differential geometric algebra, although the full operator algebra implementation is not finished yet. The package is Grassmann.jl, and I am expanding its operator algebras to handle the scenarios you are interested in.

The package provides a basis for partial differential operators, which are a work in progress but will do exactly what you are asking.


Awesome, that is great - looking forward to that! I assume you are making this package since you discovered that no package can do that currently?

If it was possible to do it would be very neat, since I could write something like this:

∂ = [∂/∂x 0; 0 ∂/∂y; ∂/∂y ∂/∂x]

And then multiply it on a matrix etc.

Please, do comment if you are able to make the package do it.

Kind regards

It’s already possible with the package; however, I have not published all the code for it yet, since the implementation is not fully complete.

You will be able to work with matrices too. However, I recommend against using matrices, because my package is all about geometric algebra, which uses multivectors instead of matrices.

The philosophy of my paper is encoded in “differential geometric algebra”, which is a generalization of linear algebra and multilinear algebra that also includes differential and geometric algebra. That is why I am making my package: this has not been done before in that way.

The whole point of my package is to use the multiplication operator * for this purpose.

You can express this with broadcasting and function application, rather than with multiplication, e.g. you could define:

deriv(x) = y -> diff(y, x)

and then call:

deriv(x).(A)
You could even get a bit fancier and define a custom struct which represents a derivative and implement Base.:* for your struct so that you can express differentiation using the * operator.
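To make that custom-struct suggestion concrete, here is a minimal self-contained sketch, written in Python only for portability (the Julia packages discussed here are still prototypes). All names in it (Poly, Deriv) are invented for illustration and are not from SymEngine or any other package. A Deriv object overloads * so that dx * p applies d/dx, which is analogous to defining Base.:* on a custom struct in Julia:

```python
# Illustrative sketch only: a tiny polynomial type in x and y, stored as a
# dict mapping (i, j) exponent pairs to coefficients, plus a "derivative
# operator" object whose * applies differentiation.

class Poly:
    def __init__(self, terms):
        # drop zero coefficients so comparisons are canonical
        self.terms = {k: v for k, v in terms.items() if v != 0}
    def __eq__(self, other):
        return self.terms == other.terms

class Deriv:
    """d/dx if var == 'x', d/dy if var == 'y'."""
    def __init__(self, var):
        self.axis = 0 if var == 'x' else 1
    def __mul__(self, p):
        # Deriv * Poly applies the operator: c*x^i*y^j -> c*i*x^(i-1)*y^j etc.
        out = {}
        for (i, j), c in p.terms.items():
            e = (i, j)[self.axis]
            if e > 0:
                key = (i - 1, j) if self.axis == 0 else (i, j - 1)
                out[key] = out.get(key, 0) + c * e
        return Poly(out)

x = Poly({(1, 0): 1})    # the expression x
y2 = Poly({(0, 2): 1})   # the expression y^2
dx = Deriv('x')
print((dx * x).terms)    # {(0, 0): 1}  i.e. the constant 1
print((dx * y2).terms)   # {}           i.e. 0
```

The same pattern carries over directly to Julia: define a struct holding the differentiation variable and a method Base.:*(d::Deriv, r::Basic) = diff(r, d.var).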


Multivectors/Clifford algebras aren’t a replacement for matrices though. They are complementary and useful in different scenarios.

Taking a simple example, you may be interested in matrices of octonions to study the exceptional Lie groups. Octonions are non-associative, so you can’t embed them in a Clifford algebra.

Clifford algebras are isomorphic to matrix algebras, so you could represent all matrices with multivectors and vice versa.

Octonions can be obtained by nesting quaternions, so yes, it is possible.

Also, octonions can be found as a sub-algebra of a (6+1)D geometric algebra with point at infinity.

If it is already possible with the code you have uploaded, I would really like it if you could show a simple example where you multiply an x-derivative operator onto some symbolic function. For me, matrices are what I mostly use, though multivectors seem exciting as well.

Kind regards

Thanks for showcasing that. If it were possible to multiply things together in matrix form, with a matrix of functions, that would be what I want. Imagine you have:

A = [x y^2; y^2 x*y]

And you would like to apply d/dx to each element in the first column and d/dy to each element in the second column. That is why I would like to multiply the operator on.

Kind regards
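The column-wise semantics asked for above (d/dx on column 1, d/dy on column 2, summed row by row like a matrix-vector product) can be sketched in plain Python with numerical derivatives. Everything here (ddx, ddy, the matrix of lambdas) is invented for illustration and is not the API of any package in this thread:

```python
# Illustrative sketch: "multiply" a 2x2 matrix of expressions by a vector of
# derivative operators, where scalar multiplication is replaced by applying
# the operator to the entry. Derivatives are central finite differences.

def ddx(f):
    h = 1e-6
    return lambda x, y: (f(x + h, y) - f(x - h, y)) / (2 * h)

def ddy(f):
    h = 1e-6
    return lambda x, y: (f(x, y + h) - f(x, y - h)) / (2 * h)

A = [[lambda x, y: x,     lambda x, y: y**2],
     [lambda x, y: y**2,  lambda x, y: x * y]]
b = [ddx, ddy]  # d/dx hits column 1, d/dy hits column 2

# "A * b": row i becomes sum_j b[j](A[i][j]), an operator-valued dot product
result = [lambda x, y, row=row: sum(b[j](row[j])(x, y) for j in range(2))
          for row in A]

print(result[0](1.0, 2.0))  # ≈ 5.0: d/dx(x) + d/dy(y^2) = 1 + 2y at y = 2
print(result[1](1.0, 2.0))  # ≈ 1.0: d/dx(y^2) + d/dy(x*y) = 0 + x at x = 1
```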

Alright, well, currently you can use my prototype implementation from Leibniz.jl (which will not be needed in the future, as the feature will be built into Grassmann.jl itself):

using Leibniz, DirectSum, SymEngine
import Leibniz: Monomial

# print the basis index of a partial derivative as a string, e.g. "1" for ∂₁
printindices(V,D) = (io=IOBuffer(); DirectSum.printindices(io,V,D,true); String(take!(io)))

# make ∂ᵢ * expr apply SymEngine's diff with respect to the matching symbol vᵢ
Base.:*(d::Monomial{V,G,D,0} where G, r::Basic) where {V,D} = diff(r, symbols(Symbol(printindices(V,D))))
Base.:*(d::Monomial{V,1,D,1}, r::Basic) where {V,D} = diff(r, symbols(Symbol(printindices(V,D))))

x,y = symbols(:v1), symbols(:v2)  # symbols must match the v1, v2 index names
A = [x y^2; y^2 x*y]
b = [∂(ℝ^2,1), ∂(ℝ^2,2)]          # the operators ∂/∂v1 and ∂/∂v2

something like this should work with SymEngine at least

julia> A*b
2-element Array{Any,1}:
 1 + 2*v2
 v1

Is that what you wanted? As I said, Leibniz is a prototype for what will be available in Grassmann.


Yes, this is very close to what I want, thank you!

Can you explain why “ℝ^2”?

I hope that in your more polished version in Grassmann, it will be possible to use x and y directly, instead of renaming them to v1 and v2, but thank you very much, now I can try playing around with it a bit.

Kind regards

This code is not how it will look when finally implemented, but ℝ^2 is used for specifying the 2-dimensional vector space that you are working with in your calculation.

If this is a feature you want, then you will have to pay me a large sum of money to implement it, because it is much better and more robust for my design to stick with v1 and v2: that indexing scheme scales easily to arbitrary dimensions and other bases, unlike x and y.


Thanks, it makes sense now!

And haha, I think I will survive with v1 and v2 then.

But again thank you so much, I’ve been thinking about this the whole day and finally I am able to do it like this:

∂total  = [∂(ℝ,1) 0; 0 ∂(ℝ,2); ∂(ℝ,2) ∂(ℝ,1)]

Bmat = Array{typeof(a)}(∂total*ℕℕ)

I changed the notation a bit by removing the ^2, but it works and it looks cool now, so I am pretty ecstatic! :smiley:

Kind regards


Just an update, regarding using x and y. You can just redefine as:

∂ₓ = ∂(ℝ,1)
∂ᵧ = ∂(ℝ,2) #\_gamma

Which makes it look a little bit prettier:

𝛛(N)  = [∂ₓ 0; 0 ∂ᵧ; ∂ᵧ ∂ₓ]*N

If you really want x and y, you have to go up to a 35-dimensional vector space:

julia> ∂(ℝ^35,34),∂(ℝ^35,35)
(∂x, ∂y)

Then you will work with vx and vy, with x and y as the indices. However, I don’t recommend this unless you actually need 35 dimensions (it is also nicer than using the subscripts x and gamma).

I only work in 2 dimensions right now (starting simple), so it seems more feasible to stick with your first solution. Could you explain why 34 and 35 are x and y, respectively? Does it have to do with the packages we are using, or is it some kind of “unicode” definition?

Kind regards

The indexing scheme is specified by DirectSum.jl; it is intended as a universal index-printing scheme for vector spaces of up to 64 dimensions, using byte encoding (so the entire index is encoded in 64-bit data):

julia> DirectSum.printindices(stdout,DirectSum.indices(UInt(2^62-1)))
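For readers puzzling over the scheme, here is a hedged reconstruction in Python of how such a character indexing could work, inferred only from the facts in this thread (indices 1 to 9 print as digits, 10 as 0, 11 to 36 as a to z, later ones as capitals, so that index 34 prints as x and 35 as y). This is my guess at the convention, not the DirectSum.jl source:

```python
# Reconstruction (assumed, not authoritative): one printable character per
# basis index, and a 64-bit mask whose set bits select the indices.

import string

CHARS = "123456789" + "0" + string.ascii_lowercase + string.ascii_uppercase

def print_index(i):
    # 1-based basis index -> its printed character
    return CHARS[i - 1]

def printindices(mask):
    # low bit of the mask corresponds to basis index 1
    return "".join(print_index(i + 1)
                   for i in range(mask.bit_length()) if mask >> i & 1)

print(print_index(34), print_index(35))  # x y
print(printindices(0b110))               # indices 2 and 3 -> "23"
```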

Thank you! I am really learning a lot from this. So is the reason you advise against starting from 34/35 that it reduces the number of variables I have left, or is there some other reason? I guess that if I only care about 4 variables, say x, y, a, and b, then there would be no harm in starting from 34/35?

Thanks for your time.

The reason I advise against it is that when the implementation is finished and you use it from Grassmann, you will actually be working with a 35-dimensional vector space basis, which requires a huge amount of memory and an entirely different, less efficient algebraic caching scheme. In that case, the Grassmann package generates a full TensorAlgebra basis of 2^35 dimensions, with up to 35-dimensional tensors. The Leibniz package on its own does not generate the full 2^35-dimensional TensorAlgebra.
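The arithmetic behind that warning is easy to check: a geometric algebra over an n-dimensional generating space has 2^n basis blades, so going from 2 generators to 35 jumps from 4 basis elements to tens of billions. A quick check:

```python
# 2^n basis blades for an n-dimensional generating space; the jump from the
# 2D case in this thread to 35D is what makes eager caching infeasible.
for n in (2, 35, 64):
    print(n, 2**n)
# 2 4
# 35 34359738368
# 64 18446744073709551616
```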

However, it is likely that I will keep the Leibniz package (the way it is) around for people like you.

Thanks for the clarification - I will keep that in mind for the future then.

Kind regards