Julia Programming Language
Warp execution order, and how to avoid divergence when folding shared memory in a reduction kernel
Specific Domains > GPU > cuda

Lian_Yunlong, July 18, 2018, 4:18pm
I just need some keywords and/or some links
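Useful search keywords here are "warp divergence", "sequential addressing", and "parallel reduction". The standard way to avoid divergence while folding shared memory is to halve a contiguous stride each step, so the active threads always form a contiguous block and only the single warp straddling the boundary can diverge. A minimal CUDA.jl sketch of that pattern (kernel name, element type, and launch configuration are illustrative, not from the thread):

```julia
using CUDA

# Tree reduction with sequential addressing: the active threads at each
# step are the contiguous range 1:s, so entire warps take the same branch.
function block_reduce!(out, x)
    tid = threadIdx().x
    i   = (blockIdx().x - 1) * blockDim().x + tid
    shmem = CuDynamicSharedArray(Float32, blockDim().x)
    shmem[tid] = i <= length(x) ? x[i] : 0.0f0
    sync_threads()

    s = blockDim().x ÷ 2
    while s > 0
        if tid <= s
            # Fold the upper half of shared memory onto the lower half.
            shmem[tid] += shmem[tid + s]
        end
        sync_threads()  # all threads must reach this, so keep it outside the branch
        s ÷= 2
    end

    # Thread 1 writes this block's partial sum.
    tid == 1 && (out[blockIdx().x] = shmem[1])
    return
end

# Example launch: one partial sum per block; reduce `out` again or on the CPU.
# n = 2^20; x = CUDA.rand(Float32, n); threads = 256; blocks = cld(n, threads)
# out = CUDA.zeros(Float32, blocks)
# @cuda threads=threads blocks=blocks shmem=threads*sizeof(Float32) block_reduce!(out, x)
```

The contrasting pattern to avoid is interleaved addressing (`if tid % (2s) == 0`), which scatters active threads across every warp and forces divergence at each step.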
Related topics:
- Unexpected coalesced group behaviour in CUDA.jl (GPU, cuda): 3 replies, 71 views, January 25, 2025
- Correct usage of shared memory? (GPU): 5 replies, 811 views, January 20, 2024
- Question about coalesced read and write to the global memory using CUDA.jl 2D grid (GPU, question): 1 reply, 765 views, April 20, 2023
- Base function in Cuda kernels (General Usage, cudanative, cuda): 8 replies, 3192 views, March 15, 2019
- CUDAnative: Using second and third dims in the kernel (GPU, cudanative): 2 replies, 865 views, January 31, 2017