I am building a user-defined function to be optimized in an NLP problem. However, NLexpressions cannot be operated on (e.g. added, subtracted, etc.) outside of the @NLexpression macro. Is there anything wrong with, say, extending Base.:+ to add two nonlinear expressions?

using JuMP
using Ipopt
model = Model(Ipopt.Optimizer)
@variable(model,x)
expr1 = @NLexpression(model,sin(x))
expr2 = @NLexpression(model,cos(x))
expr3 = expr1 + expr2 # This doesn't work
import Base: +
+(a::NonlinearExpression, b::NonlinearExpression) = @NLexpression(model, a + b)
expr3 = expr1 + expr2 # Now it works

This would save me from having to wrap every operation in a macro.

Operator overloading causes a large number of intermediate expressions to be generated:

julia> using JuMP
julia> function Base.:+(a::NonlinearExpression, b::NonlinearExpression)
return @NLexpression(model, a + b)
end
julia> model = Model()
A JuMP Model
Feasibility problem with:
Variables: 0
Model mode: AUTOMATIC
CachingOptimizer state: NO_OPTIMIZER
Solver name: No optimizer attached.
julia> @variable(model, x)
x
julia> expr = @NLexpression(model, sin(x))
"Reference to nonlinear expression #1"
julia> expr_2 = sum(expr for _ in 1:10)
"Reference to nonlinear expression #10"

The macros rewrite expressions behind the scenes for efficiency, which is critical for achieving good performance on realistically sized instances.
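To illustrate, here is a hedged sketch of the macro-based alternative: building the whole sum inside a single @NLexpression call registers one expression instead of the chain of nine intermediates created by the overloaded `+` in the transcript above (variable names are illustrative, same legacy nonlinear interface):

```julia
using JuMP

model = Model()
@variable(model, x)

# One macro call: the sum is rewritten internally, so only a single
# nonlinear expression is registered with the model.
expr = @NLexpression(model, sum(sin(x) for _ in 1:10))
```

Compare this with `sum(expr for _ in 1:10)` under the overloaded `+`, where every pairwise addition registers a fresh intermediate expression.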

Is this also the case with linear expressions? Should I avoid explicitly adding them for performance (e.g., only use add_to_expression! or the appropriate macro)?
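For context, a sketch of the styles being compared for affine expressions, assuming the standard JuMP API (the variable names and sizes are made up for illustration):

```julia
using JuMP

model = Model()
@variable(model, x[1:100])

# Style 1: plain operator summation. Each `+` allocates a new AffExpr,
# so a long chain generates many short-lived intermediate objects.
ex1 = x[1] + x[2] + x[3]

# Style 2: in-place accumulation with add_to_expression!.
# The same AffExpr is mutated, avoiding intermediate allocations.
ex2 = AffExpr(0.0)
for i in 1:100
    add_to_expression!(ex2, 2.0, x[i])  # ex2 += 2.0 * x[i], in place
end

# Style 3: the @expression macro, which rewrites the sum into
# in-place updates automatically.
ex3 = @expression(model, sum(2.0 * x[i] for i in 1:100))
```

Styles 2 and 3 should produce equivalent expressions; the macro is usually the most convenient way to get the in-place behavior.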