I’m trying to port an algorithm originally written in Matlab to Julia and I’m getting a weird discrepancy.
For some values and some exponents, the result of raising the value to the exponent differs between Matlab and Julia. In the example below, the difference is around 1e-16, which is not huge in absolute terms but large enough that it affects the output of the algorithm.

```julia
val = 0.93333333333333333
ex = 3
val^ex - mat"$val^double($ex)"
```

Further, for ex = 0, 1, 2, 4, 5, 7, 8, 9 the error is 0, but for whatever reason the (nonzero) error seems to be identical for the powers 3 and 6.
I’m looking for an explanation as to why this happens, and ideally a workaround so I can complete my porting with zero deviation.
I have already tried:
using BigFloat in Julia, and using various methods for computing the power (`val*val*val`, `Float64(val)`) with little or no effect.

Note that 1e-16 is on the order of the machine precision for Float64. If this difference matters for your algorithm, you should not be using Float64 in the first place. The difference between Matlab and Julia comes from different rounding, but I cannot tell you which of the two results is more accurate.
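For context, `eps` in Julia shows the spacing between adjacent Float64 values near the result, so you can see that a 1e-16 discrepancy is about one unit in the last place at this magnitude (a sketch you can run yourself):

```julia
val = 0.93333333333333333
x = val^3                  # ≈ 0.81296..., the value where the two languages disagree
# eps(x) is the gap to the next representable Float64 near x;
# for values in [0.5, 1) that gap is 2^-53 ≈ 1.1e-16
println(eps(x))
# machine epsilon for Float64, ≈ 2.22e-16
println(eps(Float64))
```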

Implementations of special functions, such as pow, tend to trade off speed against accuracy. I don’t know what promises either Julia or Matlab makes about accuracy and exact rounding for this function.

My first approach would be to try to determine whether either of them is consistently giving more correct results, and if so, see whether the other one can use a higher-precision workaround to match up.
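One way to check which side rounds correctly is to compute a high-precision reference with BigFloat, starting from the exact Float64 value, and round it back down (a sketch; I haven't checked what it reports for your particular value):

```julia
val = 0.93333333333333333        # the exact Float64 nearest the decimal literal
exact = big(val)^3               # BigFloat reference (256 bits by default)
correct = Float64(exact)         # the correctly rounded Float64 cube
# true if Julia's built-in power is correctly rounded for this input;
# the same comparison against the Matlab result tells you about its rounding
println(val^3 == correct)
```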

A second option is to see if either of them is calling some specific library for the special function and see if there is a way to get the other one to call the same library.

If you only need a limited range of integer exponents, write your own function that computes the power by repeated multiplication and use it in both Julia and Matlab. Those ought to agree.
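A minimal sketch of such a function in Julia (`powmul` is a name I made up; the Matlab twin would be a for-loop performing the same multiplications in the same order):

```julia
# Power by repeated left-to-right multiplication, so both languages
# perform the identical sequence of floating-point operations.
function powmul(x::Float64, n::Integer)
    n >= 0 || throw(DomainError(n, "this sketch handles nonnegative exponents only"))
    acc = 1.0
    for _ in 1:n
        acc *= x
    end
    return acc
end

powmul(0.93333333333333333, 3)
```

Note that for small literal exponents Julia already lowers `x^3` to multiplications via `Base.literal_pow`, so the remaining difference would come from whatever Matlab's `^` does internally.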

More difficult options would be to port Julia’s power function to Matlab, or use StaticCompiler to make a small library and call it from a Matlab mex function.

But if this is just a small part of a bigger algorithm, you will likely find that it’s extremely difficult to get two implementations to give exactly the same results once floating-point numbers are involved, and you may have to accept small numerical deviations. The key is to design your algorithm so that the deviations don’t grow out of control along the way.

This is a great webinar by Avik Sengupta on floating-point numbers. He defines a useful function called floatbits, which you can use to see how floating-point numbers are actually represented.
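I don’t remember the exact definition from the webinar, but something along these lines (my own sketch, not necessarily Avik’s version) splits the 64-bit pattern into its IEEE 754 fields:

```julia
# Show the IEEE 754 fields of a Float64:
# 1 sign bit, 11 exponent bits, 52 significand bits.
function floatbits(x::Float64)
    b = bitstring(x)
    return (sign = b[1:1], exponent = b[2:12], mantissa = b[13:64])
end

floatbits(0.93333333333333333)
```

Comparing the mantissa fields of the Julia and Matlab results should show them differing in the last bit.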