I think there are two issues at play. First, the LAPACK library that Julia uses doesn't have Float16 routines, which is probably why the input has to get promoted. Second, Julia currently has a compiler pass that promotes Float16 computations to Float32.
One thing that could be done is to truncate the result after the LAPACK routine runs, so the user gets Float16 back.
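A rough sketch of that idea (just an illustration, not how the stdlib actually does it): promote the inputs to Float32, let LAPACK do the work, then truncate the result back.

```julia
using LinearAlgebra

# Hypothetical example: a Float16 linear solve done via Float32.
A = rand(Float16, 4, 4)
b = rand(Float16, 4)

x32 = Float32.(A) \ Float32.(b)  # LAPACK handles the Float32 solve
x = Float16.(x32)                # truncate so the user sees Float16 again
```

Here `x` ends up as a `Vector{Float16}`, even though all the actual arithmetic happened in Float32.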
I think I gave more details in a previous thread you opened: Half precision