State of deep learning in Julia

This seems like a trend worth noting and keeping in mind for future Julia ML feature development and roadmap planning.

There are 38 results for "ERROR: Out of gpu memory" on this forum >> Search results for 'ERROR: Out of gpu memory' - JuliaLang

Also relevant is @denizyuret’s work on large neural machine translation models based on RNNs and Transformers, described here >> CuArray allocation issue: How often is it a problem?, and memory allocation errors like `ERROR: Out of gpu memory` for `function vgg_m(x0,weights)` described here >> Gpu out of memory.

It is also worth noting that a single human genome is roughly 100 GB of raw data and around 250 GB once analyzed. I think we are starting to see useful, large machine learning data sets that are either completely intractable, or at best cost prohibitive, using anything other than high-performance CPUs paired with relatively inexpensive NVMe SSDs and filesystem compression.

For large-memory models I think it is very helpful that Knet has its own mechanism for file IO, so please continue developing the KnetArray memory allocator so that we can implement machine learning models on CPUs with NVMe SSDs and filesystem compression for data sets larger than the 11 GB GPU memory limit.
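To make the idea concrete, here is a minimal sketch (not Knet's own API; the function name `minibatch_stream` and the file layout are assumptions for illustration) of streaming minibatches from an NVMe-backed file with `Mmap`, so the full data set never has to fit in GPU or even CPU RAM and only one minibatch at a time is copied to the device:

```julia
using Mmap

# Assumption: features were written to disk as a dense Float32 matrix,
# one column per example, e.g. via write(io, X).
function minibatch_stream(path::AbstractString, nfeatures::Int, nexamples::Int;
                          batchsize::Int = 128)
    io = open(path, "r")
    # Memory-map the file: pages are read from the SSD lazily, on demand.
    X = Mmap.mmap(io, Matrix{Float32}, (nfeatures, nexamples))
    # Return a generator of small per-batch copies.
    return (X[:, i:min(i + batchsize - 1, nexamples)]
            for i in 1:batchsize:nexamples)
end

# Usage sketch: inside the training loop, move one batch at a time to the GPU
# (e.g. KnetArray or CuArray), so device memory only ever holds a single minibatch.
# for xbatch in minibatch_stream("features.bin", 4096, 1_000_000)
#     # xgpu = KnetArray(xbatch)   # or CuArray(xbatch)
#     # ... forward/backward pass ...
# end
```

The same pattern would work on top of a compressed filesystem, since the compression is transparent to the reading process.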
