Julia provides excellent capabilities for writing hardware-agnostic code: with KernelAbstractions.jl, the same algorithm can run seamlessly on CPUs or GPUs. For example:
using LinearAlgebra
using GPUArrays             # generic GPU array support (AbstractGPUArray)
using Adapt                 # adapt() for moving data between host and device
using CUDA                  # the GPU backend targeted here; others work as well
using KernelAbstractions    # hardware-agnostic @kernel, @index, get_backend
# One work-item per lag i: phi[i] = dot(y[i+1:n], y[1:n-i]) / sum(y[1:n-i].^2)
@kernel function phi_kernel(y, phi, n, max_iter)
    i = @index(Global)
    if i <= max_iter
        acc_dot  = zero(eltype(y))
        acc_norm = zero(eltype(y))
        for j = 1:(n - i)
            acc_dot  += y[i + j] * y[j]
            acc_norm += y[j]^2
        end
        phi[i] = acc_dot / acc_norm
    end
end
function compute_phi(y::AbstractArray{T}, max_iter::Int=15) where {T<:AbstractFloat}
    # AbstractArray already covers both CPU Arrays and GPU arrays (e.g. CuArray)
    n = length(y)
    phi = similar(y, max_iter)        # allocated on the same device as y
    backend = get_backend(y)          # CPU() or the matching GPU backend
    kernel_inst = phi_kernel(backend)
    kernel_inst(y, phi, n, max_iter; ndrange=(max_iter,))
    KernelAbstractions.synchronize(backend)   # wait for the kernel to finish
    return phi
end
Unfortunately, some standard libraries don’t yet fully support GPU arrays. Otherwise, the same computation could be expressed as simply as:
# Original equation from ARAR model
function compute_phi(; y::AbstractArray, max_iter::Int=15)
    phi = [dot(y[i+1:end], y[1:end-i]) / sum(y[1:end-i].^2) for i in 1:max_iter]
    return phi
end
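As an illustrative sanity check (not part of the original code), the two versions can be compared directly on the CPU, since the keyword-argument reference above and the kernel implementation are separate methods of the same function:

y = randn(500)
@assert compute_phi(; y = y) ≈ compute_phi(y)   # scalar reference vs. kernel, CPU only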
If we make better use of these features, Julia has everything needed to become a truly powerful, modern, and cross-hardware language.
And just to be clear, I don’t mean these qualities don’t already exist; but with even more constructive feedback, community collaboration, and structured code review (similar to R’s CRAN process), Julia’s ecosystem could reach an exceptional level of reliability and performance.