While profiling some Julia code, I stumbled upon an unexpected performance bottleneck at a line multiplying a static matrix of integers by a static vector of rationals. Further investigation revealed that in such products both operands are promoted to static arrays of rationals before the multiplication is performed. The same happens for rational scalars:
```julia
julia> @code_typed 2 * (1//2)
CodeInfo(:(begin
return $(Expr(:invoke, MethodInstance for *(::Rational{Int64}, ::Rational{Int64}), :(Base.*), :($(Expr(:invoke, MethodInstance for Rational{Int64}(::Int64, ::Int64), Rational{Int64}, :(x), 1))), :($(Expr(:invoke, MethodInstance for Rational{Int64}(::Int64, ::Int64), Rational{Int64}, :((Core.getfield)(y, :num)::Int64), :((Core.getfield)(y, :den)::Int64))))))
end))=>Rational{Int64}
```
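The same promotion shows up in the array case. A quick check with plain Base arrays (the static-array case behaves analogously, since the elementwise products go through the same scalar method):

```julia
# Multiplying an Int matrix by a Rational vector: each Int entry is
# converted to a Rational on the way to the scalar * call above.
A = [1 2; 3 4]
v = [1//2, 1//3]
w = A * v
# w == [7//6, 17//6], with eltype Rational{Int}
```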
While the cost of this conversion is small for scalars, for matrices it may be quite significant, and the slowdown may be especially visible for Rational{BigInt}, where each conversion allocates. Adding specialized methods for multiplying an integer by a rational seems straightforward. Am I missing a reason for the current design choice?
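For concreteness, here is a minimal sketch of what such a specialized method could look like (the name `int_times_rat` is hypothetical; an actual patch would add a method to `Base.:*`). It cancels the common factor between the integer and the denominator up front, so no intermediate `Rational` is ever built from the integer:

```julia
# Hypothetical sketch: Integer × Rational without promoting the integer.
# Cancel gcd(x, y.den) first; the resulting numerator and denominator
# are then already coprime, so the final // normalization is trivial.
function int_times_rat(x::Integer, y::Rational)
    g = gcd(x, y.den)
    return div(x, g) * y.num // div(y.den, g)
end

int_times_rat(3, 2//9)  # == 2//3
```

The trailing `//` still re-checks a gcd that is known to be 1; skipping even that would require reaching for internal constructors, but the big win (no conversion, one gcd on smaller operands) is already captured here.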