How's this?
function bar(a::Int16)
end

function foo()
    local b::Int16 = 35   # accepted: the typed local converts the Int64 literal
    bar(35)               # rejected: MethodError, no method matching bar(::Int64)
end

foo()
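Running this (on a 64-bit machine, where integer literals default to Int64) fails at the bar(35) call with an error along the lines of:

ERROR: MethodError: no method matching bar(::Int64)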
My (naive) view is that in the case of local b::Int16 = 35, the compiler interprets my constant 35 as an Int16 value, whereas for bar(35) the compiler says no, the constant 35 is an Int64. Yes, at their base these are two different operations, assignment vs. method call, but the same constant 35 is interpreted differently depending on which operation it appears in.
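For what it's worth, the literal itself is the same in both cases; it is the typed-local declaration that inserts the conversion. A minimal sketch (REPL results assume a 64-bit machine, where the default integer type is Int64):

typeof(35)                # Int64: a bare integer literal is the machine word size

let
    local b::Int16 = 35   # lowered to roughly b = convert(Int16, 35)::Int16
    typeof(b)             # Int16: the conversion happened at the assignment
end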
My (again naive) guess is that this happens because another method of bar taking a wider numeric type may be defined at some time in the future, in which case the compiler wouldn't know which one to use. Or maybe it's simply that this could happen, so the compiler refuses to guess, even though at this point in time there is no other option.
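To see the dispatch side of that guess, here is a hedged sketch (bar2 and its string return values are names I made up for illustration): once a wider method exists, the same call succeeds without any conversion, because dispatch simply picks the most specific method matching the argument's actual type, Int64 here.

bar2(a::Int16)   = "Int16 method"
bar2(a::Integer) = "generic Integer method"

bar2(35)          # "generic Integer method": 35 is an Int64
bar2(Int16(35))   # "Int16 method": the more specific method wins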
You are correct; however, by forcing me to make the conversion (or at least to look at the function definition), I can realize that the value will be used as an Int16, which means the calling function should also operate on an Int16…and that trickles up the call stack.
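A sketch of that trickling, with hypothetical function names of my own: convert exactly once at the outermost boundary, and every inner signature can then demand Int16 all the way down.

bottom(x::Int16) = x + Int16(1)        # the real work, pinned to Int16
middle(x::Int16) = bottom(x)           # inner layers just pass the Int16 along
top(raw)         = middle(Int16(raw))  # one explicit conversion at the boundary

top(35)       # Int16(36)
top(40_000)   # InexactError: 40000 does not fit in an Int16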
Knowing (or at least being reminded of) the size of a primitive value at the bottom of the call stack can be useful when you are at the top of the call stack; it can simplify (or complicate) your input validation, especially when taking the value from a user, but you can be reasonably confident the value won't be rejected at the end of its journey.
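For user input in particular, here is a hedged sketch of validating at the top (read_small_int is a name I made up): tryparse(Int16, s) returns nothing both for non-integers and for values outside Int16's range, so the rejection happens immediately instead of deep in the call stack.

function read_small_int(s::AbstractString)
    v = tryparse(Int16, s)   # nothing if not an integer or out of Int16 range
    v === nothing && error("expected an integer in $(typemin(Int16)):$(typemax(Int16))")
    return v
end

read_small_int("35")      # Int16(35): safe anywhere an Int16 is required
read_small_int("40000")   # rejected here, at the top of the stack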
I'm not sure whether the CPU pipeline handles math operations on Int32 more efficiently than Int64, but the CPU cache can hold more Int32s than Int64s. So if the value is going to be saved or used as an Int16 or an Int32, why would I manipulate it as an Int64? Granted, I probably shouldn't be worrying about CPU cache size, but I'd prefer my code to be closer to right than wrong from a performance standpoint. So if the value can be treated as the same primitive type over its whole journey, that would be the most efficient path, correct?
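The cache half of this is easy to check; whether narrow arithmetic is faster per instruction depends on the hardware, but the footprint difference is plain:

sizeof(Int32)   # 4 bytes
sizeof(Int64)   # 8 bytes

a32 = zeros(Int32, 1_000_000)
a64 = zeros(Int64, 1_000_000)
sizeof(a32)     # 4000000 bytes of data
sizeof(a64)     # 8000000 bytes: twice the footprint, half as many values per cache line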