Is there a particular reason you expect double precision (Float64) rather than single precision (Float32) floating point numbers? Float32 provides sufficient precision for typical artificial neural networks, and it allows faster computation on CPUs as well as drastically better performance on most GPUs. Note that having a 64-bit processor or operating system has no bearing on the choice between single and double precision floating point numbers.
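
If you want to see the difference for yourself, here is a minimal sketch in plain Julia (no packages, and assuming Julia from the Float32/Float64 type names) that times the dense matrix multiplication at the heart of a fully connected layer in both precisions; it also shows how to convert to Float64 if you genuinely need it:

```julia
# Compare single vs. double precision for a large matrix multiply.
n = 2_000
A32, B32 = rand(Float32, n, n), rand(Float32, n, n)
A64, B64 = Float64.(A32), Float64.(B32)   # broadcast conversion to double precision

A32 * B32; A64 * B64   # warm-up run so compilation time is not measured
@time A32 * B32        # single precision: half the memory traffic per element
@time A64 * B64        # double precision: typically noticeably slower

# If you really do need Float64 results, converting the output is enough:
y64 = Float64.(A32 * B32)
```

On most machines the Float64 multiply takes roughly twice as long, simply because each number occupies twice as many bytes; on GPUs the gap is usually far larger, since consumer GPUs have much less Float64 hardware than Float32 hardware.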