Inner-product syntax is neater than sum(index) in JuMP modeling, but does it trigger a warning?

Given the discussion here, I think we are motivated to work out whether there is a standard (or at least recommended) way to write JuMP code that attains optimal performance. @odow

Some very common cases include:

import JuMP
using LinearAlgebra
model = JuMP.Model();
JuMP.@variable(model, x[1:3]);
JuMP.@variable(model, X[1:3, 1:3]);

# Scalar linear expression: adjoint-vector product vs. dot
JuMP.@expression(model, lin_scalar_1, rand(3)' * x)
JuMP.@expression(model, lin_scalar_2, dot(rand(3), x))

# Scalar quadratic form: chained * vs. three-argument dot
JuMP.@expression(model, quad_scalar_1, x' * rand(3, 3) * x)
JuMP.@expression(model, quad_scalar_2, dot(x, rand(3, 3), x))

# Bilinear form in a matrix of variables: chained * vs. three-argument dot
JuMP.@expression(model, matrix_lin_scalar_1, rand(3)' * X * rand(3))
JuMP.@expression(model, matrix_lin_scalar_2, dot(rand(3), X, rand(3)))

# Frobenius-style inner product: broadcast-and-sum vs. dot
JuMP.@expression(model, mat_lin_scalar_1, sum(rand(3, 3) .* X))
JuMP.@expression(model, mat_lin_scalar_2, dot(rand(3, 3), X))

It appears that each pair hits a different dispatch in MutableArithmetics, and thus possibly has different performance.
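
A rough way to compare them is to benchmark expression construction directly. The sketch below assumes BenchmarkTools is installed; the helper function names are mine, and the timings will of course depend on problem size and package versions. Each pair builds the same scalar expression through a different spelling, so any gap in time or allocations comes from the code path that gets selected.

import JuMP
using LinearAlgebra, BenchmarkTools

model = JuMP.Model()
JuMP.@variable(model, x[1:100])
c = rand(100)
Q = rand(100, 100)

# Hypothetical helpers: same expression, two spellings per pair.
adjoint_lin(model, c, x)  = JuMP.@expression(model, c' * x)
dot_lin(model, c, x)      = JuMP.@expression(model, dot(c, x))
adjoint_quad(model, Q, x) = JuMP.@expression(model, x' * Q * x)
dot_quad(model, Q, x)     = JuMP.@expression(model, dot(x, Q, x))

@btime adjoint_lin($model, $c, $x);
@btime dot_lin($model, $c, $x);
@btime adjoint_quad($model, $Q, $x);
@btime dot_quad($model, $Q, $x);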

For reference, note the word "Unlike" in the following docstring:

help?> *

  *(A, B::AbstractMatrix, C)
  A * B * C * D

  If the last factor is a vector, or the first a transposed vector, then it is efficient to deal with
  these first. In particular x' * B * y means (x' * B) * y for an ordinary column-major B::Matrix.
  Unlike dot(x, B, y), this allocates an intermediate array.
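
The allocation difference that the docstring mentions can be checked outside JuMP with plain Float64 arrays. A small sketch, again assuming BenchmarkTools; the exact numbers are machine-dependent, the point is only the temporary reported for x' * B:

using LinearAlgebra, BenchmarkTools

x, y = rand(1000), rand(1000)
B = rand(1000, 1000)

# x' * B * y first forms the intermediate row vector x' * B, which allocates;
# dot(x, B, y) accumulates the scalar directly with no temporary.
@btime $x' * $B * $y
@btime dot($x, $B, $y)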