Yes, that is true.
It doesn’t make a big difference in this case for me, though.
I measured the timings, and the columnwise call turned out to be only about 0.8% faster.
To give you a quick update:
I tried to make my algorithm work using only Arbs, to avoid the new allocations caused by BigFloats.
Unfortunately, this does not work, because Arbs do not seem to be made for this kind of usage.
They lose numerical stability during the matrix operations and give imprecise results (unless the precision is set much higher than for BigFloats, which makes the performance even worse than with the BigFloat allocations).
I would like to know if someone knows another way to make this type of algorithm work.
Maybe there is another multiprecision floating-point type, or a way to preallocate the BigFloat calculations?
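For concreteness, here is a minimal sketch (matrix size and precision are made up, not taken from the actual problem) of the allocation behavior being described: even with a preallocated output matrix and `mul!`, each scalar BigFloat operation in the inner loop still allocates a fresh number.

```julia
using LinearAlgebra

setprecision(BigFloat, 256)        # working precision in bits (illustrative)
n = 50                             # hypothetical matrix size
A = rand(BigFloat, n, n)
B = rand(BigFloat, n, n)
C = similar(A)                     # preallocated output

mul!(C, A, B)                      # warm up / compile
alloc = @allocated mul!(C, A, B)   # still allocates: every elementwise
@show alloc                        # BigFloat op creates a new BigFloat
```

The point is that preallocating the output array does not help here, because the allocations happen per scalar operation, not per array.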
Thank you for the update. Your experience is a matter of record, and I will refer back to this after finishing work on the functional part of ArbNumerics, and before considering whether there may be aspects of the underlying library that could ameliorate some of this. Please recap (a) the dimensions of the matrices of interest, (b) the bit precision of the numeric entries as given when creating the matrices, (c) the bit precision you require of the numeric entries at the conclusion of all processing, and (d) the specific matrix-valued functions, transforms, factorizations, and any other computational work you apply, with the order of operations you adopt. I am not suggesting that there is help available within the Arb C library, nor that, if there were, it would be available to Julia in a manner I could utilize; still, it is not entirely impossible.
The only way I know to preallocate BigFloats is to do it in C [and by ‘know’ read “assume it could be done”]. There is an arbitrary-precision (modern) Fortran library, mpfun2015, if you want to port that.
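One workaround along those lines that stays in Julia is to call MPFR directly and mutate an existing BigFloat in place, so a hot loop reuses one preallocated number instead of allocating a new BigFloat per operation. This is a sketch, not a supported API: the `add!` helper below is hypothetical, and it relies on `Ref{BigFloat}` mapping onto MPFR's `mpfr_t` the way Julia's own `Base.MPFR` wrappers use it.

```julia
# In-place BigFloat addition via a direct MPFR call (experimental sketch).
# z is overwritten with x + y; no new BigFloat is allocated.
function add!(z::BigFloat, x::BigFloat, y::BigFloat)
    ccall((:mpfr_add, :libmpfr), Int32,
          (Ref{BigFloat}, Ref{BigFloat}, Ref{BigFloat}, Int32),
          z, x, y, 0)            # last arg 0 = MPFR_RNDN (round to nearest)
    return z
end

z = BigFloat()                   # preallocate the destination once
add!(z, big"1.5", big"2.25")     # reuse z across many operations
```

Writing a matrix multiply on top of such in-place kernels removes the per-operation allocations, at the cost of bypassing Julia's normal BigFloat arithmetic.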