On the surface, the situation in Julia is somewhat similar, though not as pronounced as in other ecosystems. One could even argue that there is a lot of C++ in Julia because of LLVM. However, I don’t think that claim is fully justified, since, at least as far as I know, thanks to this high-level assembly much can be expressed directly in Julia’s syntax. Anyway, for example, the upcoming computational-neuroscience and machine-learning framework built on Enzyme.jl, Reactant.jl, and Lux.jl relies on LLVM IR and C++ at its lower levels, mainly on MLIR at the middle level, and primarily on Julia at the higher level.
To answer your question: honestly, I think it may be more rewarding to focus on innovation than on supporting other ecosystems, especially given that Julia, like Python, is a general-purpose language. That said, although I have recently become more optimistic about Julia again (with developments such as PackageCompiler.jl, JuliaC.jl, BorrowChecker.jl, Enzyme.jl, Reactant.jl, Lux.jl, and ongoing research on the GC), I believe there remains a significant risk, mainly due to the speed at which other ecosystems are advancing.
Personally, I am committed to Julia while also experimenting with q/kdb-x. I believe Julia is particularly well suited for time series analysis. By the way, could you share more information about the Time Series Analysis & Forecasting Society and its activities? And would you consider presenting Durbyn.jl at one of the JuliaDynamics monthly meetings? Thanks again for the package!
@j_u From my perspective, the new compiler packages make Julia a strong candidate to be used as a backend for other languages. At the same time, I feel the bigger risk for Julia doesn’t come so much from R or Python, but rather from Rust, which is advancing quickly in performance-sensitive areas.
In my view, the Julia community could also benefit from engaging more with industry practitioners – people like me who work with large-scale forecasting problems – so that tools stay practical and relevant in production.
On a personal note, I’d be very happy to present in Julia community forums. Since I’ve only recently become involved in open source (most of my development work until now was for my employer), I haven’t had much interaction with the community before. If there’s an opportunity to contribute and share, I’d really welcome it.
Nice to see this effort.
Re custom tables: I worked on some large-scale forecasting problems where we also needed custom table implementations for performance. We kept all core functions generic by adopting the Tables.jl interface, and then exposed the custom table through that same interface, so we could have supported any number of table types without touching any of the model code.
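To illustrate the idea, here is a minimal sketch of what such a generic core function can look like (function and column names are illustrative, not from Durbyn itself): any object satisfying the Tables.jl interface can be passed in, whether a DataFrame, a NamedTuple of vectors, or a custom table type.

```julia
using Tables, Statistics

# A model-side function that stays generic over any Tables.jl source.
function column_mean(table, col::Symbol)
    Tables.istable(table) || throw(ArgumentError("input is not a Tables.jl table"))
    cols = Tables.columns(table)              # column-access view of any table
    return mean(Tables.getcolumn(cols, col))  # works regardless of table type
end

# A NamedTuple of vectors is itself a valid Tables.jl table:
tbl = (y = [1.0, 2.0, 3.0], x = [10.0, 20.0, 30.0])
column_mean(tbl, :y)  # 2.0
```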
Well, yeah, I agree with you, it does look like not only performance-sensitive areas but almost the entire internet is being rewritten in Rust. In general, this kind of strategic analysis is closer to my background than programming itself. From my experience, it usually takes a lot of time and effort to identify all the relevant factors, opportunities, and strengths. It’s a fascinating topic, especially since it feels like the programming industry might currently be undergoing some structural changes.
I have to admit I haven’t done any deep thinking on this subject. I maintain that my general impression is that other ecosystems sometimes seem to be advancing a bit faster than Julia’s. That being said, there’s also the question of quality. What I wanted to convey is that the packages and scientific work I mentioned earlier, as well as many other efforts I probably don’t know about, could put Julia in a much stronger position than it might seem at first glance, hence, our friends working with Rust might end up just spinning their wheels … :- )
I think we actually are engaging. I guess it’s hard to argue otherwise: every post gets answered, there are already forty-three messages in this topic, over 1.3k views, and high-quality advice on package performance and user interface.
I think the best approach would be to contact the organizer of the JuliaDynamics monthly meetings directly with a message. From what I understand, your package is very much related to the kinds of topics usually discussed there. I am also aware that there are other opportunities, like during JuliaCon, the annual conference on the Julia programming language; however, I don’t know any details.
I’m sorry, I wasn’t able to check that discussion. I have “LIn” heavily restricted on my computer, and I also have a long-standing habit of not posting under my real name online. That said, I did send you an invite there. I’m always happy to make connections.
That’s exactly the plan — the goal is to build an internal data infrastructure, tentatively called PanelData, that’s lightweight and efficient while still supporting all widely used Julia tabular formats at the user level.
By keeping the internals minimal but exposing a Tables.jl-compatible interface, we can ensure Durbyn remains performant and scalable while integrating smoothly with the broader Julia data ecosystem – all without introducing unnecessary dependencies.
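As a rough sketch of that design, a minimal internal table only needs a handful of method definitions to participate in the Tables.jl column interface. The field layout below is an assumption for illustration, not the actual PanelData design:

```julia
using Tables

# Hypothetical minimal columnar table; the real PanelData may differ.
struct PanelData{T}
    names::Vector{Symbol}
    columns::Vector{Vector{T}}
end

# Opt in to the Tables.jl column-access interface:
Tables.istable(::Type{<:PanelData}) = true
Tables.columnaccess(::Type{<:PanelData}) = true
Tables.columns(t::PanelData) = t                 # the table is its own columns object
Tables.columnnames(t::PanelData) = t.names
Tables.getcolumn(t::PanelData, nm::Symbol) = t.columns[findfirst(==(nm), t.names)]
Tables.getcolumn(t::PanelData, i::Int) = t.columns[i]

# Any Tables.jl-aware consumer (DataFrames, CSV, model code) can now read it:
pd = PanelData([:y, :x], [[1.0, 2.0], [3.0, 4.0]])
Tables.getcolumn(Tables.columns(pd), :x)  # [3.0, 4.0]
```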
Julia provides excellent capabilities for writing hardware-agnostic code, allowing the same algorithm to run seamlessly on CPUs or GPUs. For example:
```julia
using LinearAlgebra
using GPUArrays
using Adapt
using CUDA
using KernelAbstractions

# Kernel computing, for each lag i:
# phi[i] = Σ_j y[i+j] * y[j] / Σ_j y[j]^2,  with j = 1:(n - i)
@kernel function phi_kernel(y, phi, n, max_iter)
    i = @index(Global)
    if i <= max_iter
        acc_dot = zero(eltype(y))
        acc_norm = zero(eltype(y))
        for j in 1:(n - i)
            acc_dot += y[i + j] * y[j]
            acc_norm += y[j]^2
        end
        phi[i] = acc_dot / acc_norm
    end
end

function compute_phi(y::AbstractArray{T}, max_iter::Int = 15) where {T<:AbstractFloat}
    n = length(y)
    phi = similar(y, max_iter)                  # allocated on the same device as y
    backend = get_backend(y)                    # CPU, CUDABackend, ...
    kernel_inst = phi_kernel(backend)
    kernel_inst(y, phi, n, max_iter; ndrange = (max_iter,))
    KernelAbstractions.synchronize(backend)     # ensure the kernel finished before phi is used
    return phi
end
```
Unfortunately, some standard libraries don’t yet fully support GPU arrays. Otherwise, the same computation could be expressed as simply as:
```julia
# Original equation from the ARAR model
function compute_phi(; y::AbstractArray, max_iter::Int = 15)
    phi = [dot(y[i+1:end], y[1:end-i]) / sum(y[1:end-i].^2) for i in 1:max_iter]
    return phi
end
```
If we make better use of these features, Julia has everything needed to become a truly powerful, modern, and cross-hardware language.
And just to be clear, I don’t mean these qualities don’t already exist; but with even more constructive feedback, community collaboration, and structured code review (similar to the CRAN process in R), Julia’s ecosystem could reach an exceptional level of reliability and performance.