Can you clarify which pre-made functions you’re comparing against? A full runnable example showing where the output differs from your expectations would be helpful.
One common gotcha is the orientation of your data: if you have 1000 samples of 8-dimensional data, is it stored as a 1000x8 matrix or as an 8x1000 matrix?
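For instance, here is a minimal sketch (with made-up data) of how the orientation decides whether you get one standard deviation per feature or one per sample:

```julia
using Statistics

X = randn(1000, 8)   # hypothetical data: samples as rows (1000x8)

# Per-feature standard deviations: reduce over the sample dimension.
std(X, dims=1)   # 1x8 row vector -- one std per feature
std(X, dims=2)   # 1000x1 column  -- one std per *sample*, probably not what you want

# If the matrix is 8x1000 instead (features as rows), the roles of dims=1 and dims=2 swap.
```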
The algorithm returns an estimator of the generative distribution’s standard deviation under the assumption that each entry of itr is an IID drawn from that generative distribution. For arrays, this computation is equivalent to calculating sqrt(sum((itr .- mean(itr)).^2) / (length(itr) - 1)). If corrected is true, then the sum is scaled with n-1, whereas the sum is scaled with n if corrected is false, where n is the number of elements in itr.
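As a quick check of that formula on an arbitrary example vector:

```julia
using Statistics

x = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]   # arbitrary example data
n = length(x)

# Default (corrected=true): divide the sum of squared deviations by n - 1
std(x) ≈ sqrt(sum((x .- mean(x)).^2) / (n - 1))              # true

# corrected=false: divide by n instead
std(x, corrected=false) ≈ sqrt(sum((x .- mean(x)).^2) / n)   # true
```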
std by default computes a “corrected” standard deviation. This is what you want if the mean of your data is estimated by averaging the data itself. If you know the mean a priori and either subtract it off manually or provide it with the mean argument, then you should pass corrected=false to std.
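A small sketch of those cases, assuming for illustration that the a-priori known mean is 0:

```julia
using Statistics

x = randn(100)   # made-up data

# Mean estimated from the data itself: use the default corrected estimator.
s_estimated_mean = std(x)                          # divides by n - 1

# Mean known a priori (assumed to be 0 here) and passed via the mean keyword:
s_known_mean = std(x, mean=0.0, corrected=false)   # divides by n

# For reference, sklearn's StandardScaler also divides by n (ddof=0),
# i.e. it uses the uncorrected standard deviation:
s_sklearn_like = std(x, corrected=false)
```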
So I think your implementation is doing the right thing and sklearn is not.
Standardization is a simple procedure with two major benefits: easier interpretation and better numerical properties (of the likelihood, etc.); it is essentially a simple preconditioning.
There is no single canonical way to do it: using either the uncorrected or the corrected std is fine, as long as it is applied consistently.
There are also alternative approaches, e.g. scaling by two standard deviations, which are also fine.
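To see why consistency is all that matters: the corrected, uncorrected, and two-standard-deviation versions of the standardized data differ only by constant factors. A rough sketch with made-up data:

```julia
using Statistics

x = randn(50)   # made-up data
n = length(x)

z_corrected   = (x .- mean(x)) ./ std(x)                   # n - 1 in the denominator
z_uncorrected = (x .- mean(x)) ./ std(x, corrected=false)  # n in the denominator
z_two_sd      = (x .- mean(x)) ./ (2 * std(x))             # "two standard deviations" variant

# The versions differ only by constant factors, so any of them is fine
# as long as the same convention is used everywhere:
z_corrected ≈ z_uncorrected .* sqrt((n - 1) / n)   # true
z_corrected ≈ 2 .* z_two_sd                        # true
```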