JuMP & SumOfSquares generate SDPs with ~18x more constraints than YALMIP

After some experimentation, I think the problem is due to the norm^2 term in the objective function. Here are two shorter scripts for testing.
Julia:

using MosekTools
using JuMP
using DynamicPolynomials
using SumOfSquares
##

n = 100
model = SOSModel(Mosek.Optimizer)
@variable(model, H[1:n, 1:n], PSD) # matrix parametrizing V
@polyvar x[1:n] # state
V = x' * H * x
@constraint(model, V - x' * x >= 0)

@objective(model, Min, sum(H[1:end] .^ 2)) # slower solve than YALMIP
# @objective(model, Min, sum(H[1:end])) # (slightly) faster solve than YALMIP
JuMP.optimize!(model)
@show objective_value(model)
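
For reference, the constraint counts on the JuMP side can be inspected with something like the following (the exact breakdown depends on which MOI bridges are applied, so this is only indicative):

for (F, S) in list_of_constraint_types(model)
    # number of constraints of each (function, set) type in the model
    println(F, " in ", S, ": ", num_constraints(model, F, S))
end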

MATLAB:

n = 100;
options = sdpsettings('solver','mosek','sos.newton',0,'sos.congruence',0); % turn off reductions
H = sdpvar(n,n); % Matrix parametrizing V
x = sdpvar(n,1); % State
V = x'*H*x;
con = [ sos(V - x'*x), H>=0 ];

sol = solvesos(con,norm(H(:),2)^2,options,[H(:)]); 
% sol = solvesos(con,sum(H(:)),options,[H(:)]) 

On my machine, Mosek takes 12 s to solve the problem generated by YALMIP and 37 s to solve the one generated by JuMP.

Both scripts include a commented-out line with a linear objective, to show that the solve times (and problem sizes) are almost the same when the squared norm is removed.
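
For what it's worth, one alternative I can think of is an epigraph reformulation of the squared norm through a rotated second-order cone (a sketch; t is a new epigraph variable introduced here, and I am assuming this is not already what the bridges produce internally):

@variable(model, t)
# RotatedSecondOrderCone encodes 2 * t * 0.5 >= ||vec(H)||^2,
# i.e. t >= sum of squared entries of H
@constraint(model, [t; 0.5; vec(H)] in RotatedSecondOrderCone())
@objective(model, Min, t)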

Is there a better way to write the norm^2?