For some applications concatenated ranges would be nice. For example, they would simplify working with rotated indices, obviating the need for `fftshift` in some calculations, such as constructing a suitable Gaussian in Fourier space without subsequently shifting it to the corner. The following line comes close to this:

```julia
f = Iterators.flatten((1:100000, -99999:0))
```
However, `f` is not an `AbstractRange`, and it would be nice if it were. It could support operations such as `size`, `length`, and even `getindex`. Simple additions, subtractions, multiplications, or divisions by scalars could just modify the internally stored ranges without collecting. Ideally CUDA could even work with such ranges (e.g. by launching a separate kernel for each partial range, or by letting the kernel figure out internally which sub-range a thread belongs to).
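The idea above could be sketched roughly like this. Note this is only an illustration: `ConcatRange` is a hypothetical name, and it deliberately subtypes `AbstractVector` rather than `AbstractRange`, since only generic array behavior is implemented here:

```julia
# Hypothetical sketch of a lazy concatenation of two ranges.
struct ConcatRange{T,A<:AbstractRange{T},B<:AbstractRange{T}} <: AbstractVector{T}
    a::A
    b::B
end

Base.size(r::ConcatRange) = (length(r.a) + length(r.b),)

Base.@propagate_inbounds function Base.getindex(r::ConcatRange, i::Int)
    @boundscheck checkbounds(r, i)
    i <= length(r.a) ? r.a[i] : r.b[i - length(r.a)]
end

# Scalar arithmetic just transforms the stored ranges; nothing is collected.
Base.:+(r::ConcatRange, x::Number) = ConcatRange(r.a .+ x, r.b .+ x)
Base.:*(r::ConcatRange, x::Number) = ConcatRange(r.a .* x, r.b .* x)

f = ConcatRange(1:100000, -99999:0)
f[100001]   # -99999, computed on the fly
```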
Would it make sense to write such `ConcatRange` support?
Is there any overview of the range and iterator types, how the abstract type hierarchy is currently set up, and which operations each such type needs to support?
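Short of a written overview, the hierarchy can at least be inspected directly with `supertype`; a tiny helper like the following (the name `type_chain` is just illustrative) walks the chain from a concrete range type up to `Any`:

```julia
# Collect the supertype chain of a type, from T up to Any.
function type_chain(T::Type)
    chain = Type[T]
    while T !== Any
        T = supertype(T)
        push!(chain, T)
    end
    return chain
end

foreach(println, type_chain(typeof(1:10)))
# Walks UnitRange{Int64} -> AbstractUnitRange -> OrdinalRange
#       -> AbstractRange -> AbstractVector -> Any
```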
The `CatView` from CatViews.jl is also vaguely related to this, but it does not handle ranges either (it even fails with a stack overflow if you try).
Interesting. I didn't know that LazyArrays has support for this. As for your "maximum" example, I guess one could specialize these functions for this type of range. Or is there something fundamentally wrong with this?
I would interpret that implementation as a default which can be specialized (i.e. overridden).
Maximum is just the canary in the coal mine here. There are myriad such methods, and all would be incorrect "by default." Some failures would be obvious; others would result in silent data corruption or segfaults. One obvious such place is indexing and views: they all have similar optimizations for ranges that would lead to OOB accesses, accesses in places authors may have (rightly!) marked as `@inbounds`.
You're welcome to break the "rules" of a given supertype or interface, but you're gonna be unleashing dragons. Maybe you can tame some, but they're gonna be lurking everywhere.
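To make the `maximum` hazard concrete, here is a deliberately broken sketch (`FakeRange` is hypothetical). Base computes `maximum` of an `AbstractRange` from its endpoints rather than by scanning, so a non-monotonic subtype silently returns the wrong answer:

```julia
# Deliberately broken: a concatenated "range" pretending to be an AbstractRange.
struct FakeRange <: AbstractRange{Int}
    a::UnitRange{Int}
    b::UnitRange{Int}
end
Base.first(r::FakeRange)  = first(r.a)
Base.last(r::FakeRange)   = last(r.b)
Base.step(r::FakeRange)   = 1
Base.length(r::FakeRange) = length(r.a) + length(r.b)
Base.getindex(r::FakeRange, i::Int) =
    i <= length(r.a) ? r.a[i] : r.b[i - length(r.a)]

r = FakeRange(1:100000, -99999:0)
maximum(r)           # wrong: the range fast path only looks at the endpoints
maximum(collect(r))  # 100000, the true maximum
```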
Yes, I see. I did not quite realize what a rabbit hole this can be. The `Vcat` of ranges is not itself an `AbstractRange`, but it supports most of the operations (`getindex`, `size`, `length`, `end`) in a sensible way. In fact it is an `AbstractArray`, which is probably the more suitable type, and, hey, it can also be used for indexing if you wish. So this may be the right way to go ahead here.
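For reference, a minimal example of what this looks like (assuming LazyArrays.jl is installed):

```julia
using LazyArrays  # assumes the LazyArrays.jl package is available

f = Vcat(1:100000, -99999:0)

f isa AbstractVector   # true: it is an AbstractArray, not an AbstractRange
length(f)              # 200000
f[100001]              # -99999, without materializing either range
```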
Note that instead of concatenation, you can think of what `fftshift` does as a lazy `circshift`. The package ShiftedArrays.jl has wrappers for this, and it appears there was some attempt to make it work with CUDA.jl, see e.g. this thread. Although the earlier discussion is about shifting an arbitrary vector, there may be shortcuts possible if you only handle ranges.
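A small sketch of the lazy shift (assuming ShiftedArrays.jl is installed; `ShiftedArrays.circshift` returns a `CircShiftedArray` wrapper rather than copying):

```julia
using ShiftedArrays  # assumes the ShiftedArrays.jl package is available

v = 0:99999                            # frequency indices as a plain range
s = ShiftedArrays.circshift(v, 50000)  # lazy circular shift, no allocation of v's data

s[1]   # 50000, computed on the fly from the wrapped range
```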
Indeed ShiftedArrays.jl has great wrappers, but it still needs the arrays to exist in memory. The CUDA support would be somewhat helpful, but you would still need a `ShiftedArray(CuArray)` as the type, whereas a `LazyArrays.Vcat` of ranges could potentially interface directly with a `CuArray`, even though the ranges themselves are never converted to a `CuArray`. This would be nice.
Anyway, now that you mentioned ShiftedArrays.jl, I am unsure how to proceed there. I am having difficulty reaching @piever, as we need to decide whether to go ahead with the pull request I made years ago (and updated the other day for new CUDA.jl versions):
Or to release a new package (MutableShiftedArrays.jl) based on ShiftedArrays.jl. The latter could be an option, since it would be great to also support a mutable version of `ShiftedArray` (`CircShiftedArray` is mutable anyway).
Any ideas how to proceed? (I guess this should be in a different thread now)