Equivalent of numpy.gradient in Julia

Pull requests are welcome! It’s probably only a couple lines of code, so it could easily go into ImageFiltering.KernelFactors. Basically all that needs doing is generalizing the line above to n-dimensions and picking a name for it.

Hello, here is my attempt at a NumPy-style gradient function which works with multidimensional arrays. It uses central finite differences, except at both ends of the differentiated dimension.

It’s based on the Multidimensional algorithms and iteration blog post, in particular the last example, Filtering along a specified dimension (exploiting multiple indexes). It includes a function barrier to avoid type instability (I’m not 100% sure the barrier is necessary, because I’m not super familiar with this issue).
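For anyone curious why the barrier helps, here is my understanding (an illustration, not something from the blog post): the dimensionality of `Rpre` and `Rpost` depends on the runtime value of `dim`, so their concrete types cannot be inferred in the outer function; passing them to an inner function lets Julia specialize the hot loop on the concrete types.

```julia
# The number of dimensions of Rpre/Rpost depends on the *value* of `dim`,
# which is only known at runtime, so the outer function is type-unstable.
A = rand(3, 4, 5)
dim = 2                                       # runtime value
Rpre  = CartesianIndices(size(A)[1:dim-1])    # CartesianIndices{1} when dim == 2
Rpost = CartesianIndices(size(A)[dim+1:end])  # CartesianIndices{1} here, {2} if dim == 1
```

Calling a separate `_gradient!(g, A, h, Rpre, n, Rpost)` compiles a method specialized for whatever types `Rpre` and `Rpost` turn out to have.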

"""
    gradient(A::AbstractArray, h; dim::Integer)

Compute gradient of array `A` along dimension `dim`,
assuming an equidistant grid step `h` along this dimension.

Gradient is computed using central finite difference, except at both ends
of dimension `dim` where forward or backward differentiation is used instead.

# Examples
```julia-repl
julia> a = [1 2 4; 4 6 6]
2×3 Matrix{Int64}:
 1  2  4
 4  6  6

julia> gradient(a, 1.0; dim=1)
2×3 Matrix{Float64}:
 3.0  4.0  2.0
 3.0  4.0  2.0

julia> gradient(a, 1.0; dim=2)
2×3 Matrix{Float64}:
 1.0  1.5  2.0
 2.0  1.0  0.0
```
"""
function gradient(A::AbstractArray, h; dim::Integer)
    # Check inputs
    Base.require_one_based_indexing(A) # necessary?
    n = size(A, dim)
    n >= 2 || throw(ArgumentError("size of dimension $dim should be ≥ 2"))
    
    # Init output array
    g_type = typeof(first(A) * inv(h))
    g = similar(A, g_type, axes(A))

    # Iterators
    Rpre = CartesianIndices(size(A)[1:dim-1])
    Rpost = CartesianIndices(size(A)[dim+1:end])
    
    _gradient!(g, A, h, Rpre, n, Rpost)
end

function _gradient!(g, A, h, Rpre, n, Rpost)
    for Ipost in Rpost
        for i = 1:n # dimension to be differentiated
            if i==1 # forward derivative
                i_left = 1
                i_right = 2
                denom = inv(h)
            elseif i == n # backward derivative
                i_left = n-1
                i_right = n
                denom = inv(h)
            else # 1<i<n: central derivative
                i_left = i-1
                i_right = i+1
                denom = inv(2*h)
            end
            for Ipre in Rpre
                g[Ipre, i, Ipost] = (A[Ipre, i_right, Ipost] - A[Ipre, i_left, Ipost])*denom
            end
        end
    end
    return g
end

Compared to the NumPy version, two features are not supported:

  • compute several gradients in one call (returning a tuple of gradient arrays)
  • compute the gradient on an irregular grid, because the central difference formula gets messier than in the regular case
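For the record, the irregular-grid case is still tractable in 1-D. Below is a hedged sketch of the second-order nonuniform central difference (the same stencil NumPy documents for `numpy.gradient` with a coordinate array); `gradient1d` is a name I made up, not part of the code above:

```julia
# 1-D gradient on a possibly irregular grid x, one-sided at both ends.
# Interior stencil: fit a parabola through (x[i-1], x[i], x[i+1]) and
# differentiate it at x[i]; exact for quadratics, second-order accurate.
function gradient1d(f::AbstractVector, x::AbstractVector)
    n = length(f)
    n == length(x) || throw(ArgumentError("f and x must have equal length"))
    n >= 2 || throw(ArgumentError("need at least 2 points"))
    g = similar(f, float(eltype(f)))
    g[1] = (f[2] - f[1]) / (x[2] - x[1])          # forward difference
    g[n] = (f[n] - f[n-1]) / (x[n] - x[n-1])      # backward difference
    for i in 2:n-1
        hl = x[i] - x[i-1]                        # left spacing
        hr = x[i+1] - x[i]                        # right spacing
        g[i] = (hl^2*f[i+1] - hr^2*f[i-1] + (hr^2 - hl^2)*f[i]) /
               (hl * hr * (hl + hr))
    end
    return g
end
```

With uniform spacing `hl == hr == h` the interior formula reduces to the usual `(f[i+1] - f[i-1]) / 2h`, so this degenerates to the regular-grid version above.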

Actually I have one question about the Filtering example: the blog post says it’s fine to mix CartesianIndex and Integer for indexing, but says nothing about 1-based indexing. In my case, I added the check Base.require_one_based_indexing(A), which I found used in the source code of diff in /base/multidimensional.jl#L1101C1-L1110C4 (another potential source of inspiration, which ended up not being applicable because of the edge cases).


> but says nothing about 1-based indexing

The blog post was probably written before Julia supported arbitrary indexing.

If you want to support arbitrary indexing,

  • in gradient: replace n = size(A, dim) with axdim = axes(A, dim) and check its length in the following line. Pass axdim to _gradient!
  • in _gradient!: replace for i = 1:n with for i in axdim, and use first(axdim) and last(axdim) instead of hard-coding 1 and n.
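Putting those two suggestions together, the modified pair might look like this (a sketch only, lightly tested with ordinary one-based arrays; the hard-coded 1 and n are replaced by first(axdim) and last(axdim)):

```julia
# gradient/_gradient! generalized to arbitrary axes (e.g. OffsetArrays).
function gradient(A::AbstractArray, h; dim::Integer)
    axdim = axes(A, dim)
    length(axdim) >= 2 || throw(ArgumentError("size of dimension $dim should be ≥ 2"))

    # similar(A, T) preserves the axes of A, so g gets the same offsets
    g = similar(A, typeof(first(A) * inv(h)))

    Rpre = CartesianIndices(axes(A)[1:dim-1])
    Rpost = CartesianIndices(axes(A)[dim+1:end])

    _gradient!(g, A, h, Rpre, axdim, Rpost)
end

function _gradient!(g, A, h, Rpre, axdim, Rpost)
    lo, hi = first(axdim), last(axdim)
    for Ipost in Rpost
        for i in axdim # dimension to be differentiated
            if i == lo       # forward derivative
                i_left, i_right, denom = lo, lo + 1, inv(h)
            elseif i == hi   # backward derivative
                i_left, i_right, denom = hi - 1, hi, inv(h)
            else             # lo < i < hi: central derivative
                i_left, i_right, denom = i - 1, i + 1, inv(2h)
            end
            for Ipre in Rpre
                g[Ipre, i, Ipost] = (A[Ipre, i_right, Ipost] - A[Ipre, i_left, Ipost]) * denom
            end
        end
    end
    return g
end
```

For one-based arrays `axdim` is just `Base.OneTo(n)`, so this reproduces the original behavior and the `Base.require_one_based_indexing(A)` check can be dropped.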

Thanks for the tip!

I’ve seen that you already updated the blog post twice, which makes it a great source of consolidated information. So, if time allows, perhaps there is room for a third iteration covering arbitrary indexing?

Thinking about arbitrary indexing and the axes() function, the source code of diff looks odd to me: it uses axes() and last(), but then still hard-codes the 1 instead of using first as you mentioned:

function diff(a::AbstractArray{T,N}; dims::Integer) where {T,N}
    require_one_based_indexing(a)
    1 <= dims <= N || throw(ArgumentError("dimension $dims out of range (1:$N)"))

    r = axes(a)
    r0 = ntuple(i -> i == dims ? UnitRange(1, last(r[i]) - 1) : UnitRange(r[i]), N)
    r1 = ntuple(i -> i == dims ? UnitRange(2, last(r[i])) : UnitRange(r[i]), N)

    return view(a, r1...) .- view(a, r0...)
end