OS dependency of `Float16(::BigFloat)` on nightly

Is there any OS dependency regarding the BigFloat-to-Float16 conversion? (Edit: Apparently, the core problem may be with the Float64-to-Float16 conversion.)
I’m not sure of the cause, but the CI output (GitHub Actions) for macOS x86_64 says:

julia> versioninfo()
Julia Version 1.7.0-DEV.796
Commit 82dc40264d (2021-04-01 23:25 UTC)
Platform Info:
  OS: macOS (x86_64-apple-darwin18.7.0)
  CPU: Intel(R) Xeon(R) CPU E5-1650 v2 @ 3.50GHz
  WORD_SIZE: 64
  LIBM: libopenlibm
  LLVM: libLLVM-11.0.1 (ORCJIT, ivybridge)

julia> Float16(big"16379" / 16383)
Float16(0.9995)

On Linux x86_64, for example, it returns Float16(1.0) instead.
My concern here is not the failure of correct rounding due to double rounding and the like, but the OS dependency itself.
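
For reference, 16379/16383 lies just barely below the midpoint between the two neighboring Float16 values, so this is a near-tie case. A quick check with (essentially exact) BigFloat arithmetic:

julia> x = big"16379" / 16383;

julia> lo, hi = prevfloat(Float16(1)), Float16(1)
(Float16(0.9995), Float16(1.0))

julia> x - lo < hi - x  # `lo` is marginally nearer
true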

I don’t have a local macOS environment, so I haven’t been able to track down the details, but the change occurred within roughly the last half month, and there has been at least one implementation change in that period:
https://github.com/JuliaLang/julia/pull/40245
However, PR #40245 itself should be OS-independent. :thinking:

This was discovered via a CI failure in FixedPointNumbers.jl: Test failure on nightly (1.7.0-DEV) in macOS (x64) · Issue #246 · JuliaMath/FixedPointNumbers.jl

As the author of that PR, I think it’s plausible that it might be the cause. If so, it would presumably be due to a bug in how macOS converts Float64 to Float16.


I see; Float64(::BigFloat) seems to be fine, so the difference seems to be in Float16(::Float64).

julia> Float64(big"16378" / 16383)
0.9996948055911615

julia> Float64(big"16379" / 16383)
0.9997558444729292

julia> Float16(0.9996948055911615)
Float16(0.9995)

julia> Float16(0.9997558444729292) # `Float16(1.0)` on Linux and Windows
Float16(0.9995)
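
Notably, narrowing via Float32 first lands exactly on the Float16 tie point, and round-to-nearest-even then breaks the tie upward to 1.0, which matches the Linux/Windows result (whether the implementation there actually goes through Float32 is my guess):

julia> Float32(0.9997558444729292)  # exactly halfway between Float16(0.9995) and Float16(1.0)
0.99975586f0

julia> Float16(Float32(0.9997558444729292))  # the tie breaks to even, i.e. to 1.0
Float16(1.0)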

It seems that `__truncdfhf2` is called when converting from Float64.

julia> @code_native Float16(1.0)
	.section	__TEXT,__text,regular,pure_instructions
; ┌ @ float.jl:204 within `Float16'
	subq	$8, %rsp
	movabsq	$__truncdfhf2, %rax
	callq	*%rax
	popq	%rcx
	retq
	nopw	%cs:(%rax,%rax)
; └

julia> @code_native Float16(1.0f0)
	.section	__TEXT,__text,regular,pure_instructions
; ┌ @ float.jl:203 within `Float16'
	vcvtps2ph	$4, %xmm0, %xmm0
	vmovd	%xmm0, %eax
	retq
	nopl	(%rax,%rax)
; └

There is indeed a conditional branch on `_OS_DARWIN_` there.

extern "C" JL_DLLEXPORT uint16_t __truncdfhf2(double param)
{
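    // The double is first narrowed to float (one rounding), and
    // float_to_half then rounds again, so there are two roundings back to back.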
    return float_to_half((float)param);
}

Triple rounding (BigFloat → Float64 → Float32 → Float16, with a rounding at each step)? :sweat_smile:

I think the concept of PR #40245 is very nice, but there are (at least two) problems with the current implementation.

Yeah, it would probably be a good idea to make a version that only uses arithmetic. Shouldn’t be too hard, and could get rid of the double rounding completely.
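
For what it’s worth, one classical arithmetic-only trick (just a sketch, not a proposal for the actual implementation; round_odd and f16_via_round_odd are made-up names) is to narrow Float64 to Float32 with “round to odd” and only then narrow to Float16. Round to odd keeps enough information about ties that the second rounding comes out correct:

# Narrow x to Float32 using "round to odd": take the truncation toward zero
# and force the lowest significand bit on whenever the conversion was inexact.
function round_odd(x::Float64)
    y = Float32(x)                   # default round-to-nearest narrowing
    Float64(y) == x && return y      # exact conversion: nothing to fix
    if abs(Float64(y)) > abs(x)      # rounded away from zero: step one ulp back
        y = reinterpret(Float32, reinterpret(UInt32, y) - 0x1)
    end
    # y is now the truncation of x; set its last bit so it can never sit on
    # a Float16 tie point (those all have even Float32 significands)
    return reinterpret(Float32, reinterpret(UInt32, y) | 0x1)
end

f16_via_round_odd(x::Float64) = Float16(round_odd(x))

julia> f16_via_round_odd(0.9997558444729292)
Float16(0.9995)

This works because Float32 has more than two extra significand bits relative to Float16 over the whole Float16 range, which is exactly the condition under which round-to-odd followed by round-to-nearest is equivalent to a single correct rounding.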
