Build a JuMP bilinear expression with a function

I’ve been writing some algorithms recently, and I find this bilinear structure particularly useful because its construction steps can be reused in both models.
Is there a recommended way to do this?

out = JuMP.Model() # outer master problem 
inn = JuMP.Model() # inner subproblem
JuMP.@variable(out, π[1:3]) # dual variable (i.e. Lagrangian multiplier)
JuMP.@variable(inn, x[1:3]) # primal decision

# In this topic, bilinear means one factor is Array{VariableRef},
# whereas the other factor is Array{Float64}

function bilinear_expr(π, x) # although this looks concise, it is slow!
    return sum(π[i] * x[i] for i in 1:3)
end
# Therefore, I have to write the following two methods:
function bilinear_expr(π::Array{JuMP.VariableRef}, x)
    model = JuMP.owner_model(π[1])
    return JuMP.@expression(model, sum(π[i] * x[i] for i in 1:3))
end
function bilinear_expr(π, x::Array{JuMP.VariableRef})
    model = JuMP.owner_model(x[1])
    return JuMP.@expression(model, sum(π[i] * x[i] for i in 1:3))
end

# in practice, we will do, e.g. the following
while true # solve the Lagrangian dual problem with a cutting-plane method
    # (solver calls and the termination test are omitted for brevity)
    JuMP.@constraint(out, bilinear_expr(π, JuMP.value.(x)) >= 3)
    JuMP.@objective(inn, Min, bilinear_expr(JuMP.value.(π), x))
end
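
One way to avoid duplicating the two methods is to dispatch only on where the variables live. This is a sketch; `variable_model` is a made-up helper name, not a JuMP API:

```julia
# Hypothetical helper: return the owner model of whichever argument
# holds the JuMP variables.
variable_model(a::AbstractArray{JuMP.VariableRef}, b) = JuMP.owner_model(first(a))
variable_model(a, b::AbstractArray{JuMP.VariableRef}) = JuMP.owner_model(first(b))

function bilinear_expr(π, x)
    model = variable_model(π, x)
    return JuMP.@expression(model, sum(π[i] * x[i] for i in eachindex(π)))
end
```

Both call sites in the loop above then hit the same method, and `@expression` still gets the right model.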

We’d generally suggest using @expression. The usual performance tips apply:

But if they’re both Vector: bilinear_expr(π, x) = π' * x?
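
For instance (a sketch; assumes π is a Vector{VariableRef} and x a Vector{Float64}):

```julia
# The inner product of a variable vector and a coefficient vector
# builds the affine expression directly; no explicit owner model needed.
bilinear_expr(π::Vector{JuMP.VariableRef}, x::Vector{Float64}) = π' * x
```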

Inner-product style, like matrix-vector-product style, reads well only when every decision variable has one or two axes. When variables have more axes, the sum style is more intelligible.
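
For example, with a three-axis variable (illustrative model and data):

```julia
m = JuMP.Model()
JuMP.@variable(m, y[1:2, 1:3, 1:4])
c = rand(2, 3, 4)  # illustrative coefficient array
# The triple sum stays readable; there is no single matrix-product
# notation for this three-axis contraction.
e = JuMP.@expression(m, sum(c[i, j, k] * y[i, j, k]
        for i in 1:2, j in 1:3, k in 1:4))
```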

I define a helper function because it reads more easily at the call site (and it lets me reuse the same code).