Help with Logit NLP problem


I found this thesis online and wanted to reproduce the section "Robust both Feature and Label for Logistic Regression". The logit model there looks like this:

My problem is that when I add the first constraint to the model, it returns “Unexpected object in nonlinear expression”.

Could someone point out where I went wrong? I used the Wisconsin Breast Cancer dataset from UCI.
My code:

using ScikitLearn
using JuMP
using Ipopt
using PyCall, Pandas
using LinearAlgebra
using ScikitLearn.CrossValidation: train_test_split
@sk_import preprocessing: StandardScaler
@sk_import preprocessing: LabelEncoder

url = "D:/python-machinelearning-datascience/thesis/data/data.csv"
data = read_csv(url)
list = ["Unnamed: 32", "id", "diagnosis"]
x = drop(data, list, axis=1)
y = data.diagnosis

x1 = fit_transform!(StandardScaler(),x)
y1 = fit_transform!(LabelEncoder(),y)
y1 = [(yi == 1.0) ? 1.0 : -1.0 for yi in y1]
X_train, X_test, y_train, y_test = train_test_split(x1, y1, test_size=.3)
n, p = size(X_train)
rho = 0.01
m = Model(solver=IpoptSolver())
@variables m begin
    (v[1:n] <= 0, start = 0.0)
    (μ <= 0, start = 0.0)
    (β[1:p], start = 0.0)
    β0
    (fx[1:n], start = 0.0)
    (w[1:p] >= 0, start = 0.0)
    hs_norm >= 0
end
@constraint(m, fxcon[i=1:n], fx[i] == sum(X_train[i, j]*β[j] for j=1:p) + β0)
#l1norm constraint
@constraint(m, hs_norm == sum(w[j] for j=2:p))
@constraint(m, pos_abs[i=2:p], w[i] >= β[i])
@constraint(m, neg_abs[i=2:p], w[i] >= -β[i])

#first constraint
@NLconstraint(m, firstcon[i=1:n], μ + v[i] <= log(1+exp(-y_train[i]*fx[i] + rho*hs_norm)) - log(1+exp(y_train[i]*fx[i] + rho*hs_norm)))

@NLobjective(m, Max, -sum(log(1+exp(-y_train[i]*fx[i]+rho*hs_norm)) for i=1:n)+ 0.1*n*μ + sum(v[i] for i=1:n))

solve(m)
beta, beta0 = getValue(β)[:], getValue(β0)



From a quick glance, I can’t see anything unusual. What is the type of y_train?

Can you simplify the example? Strip out all the unneeded parts so that the error still occurs. For example, you could use random data for X_train etc. to remove the dependence on ScikitLearn.
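
For reference, a minimal sketch of such a stripped-down reproduction might look like the following. It keeps the model from the original post but replaces the UCI dataset with random data (the sizes `n, p = 50, 5` are arbitrary assumptions), declares `fx` and `β0` as variables, and assumes the same old-style JuMP API (`Model(solver=IpoptSolver())`, `solve`) as the original code:

```julia
using JuMP, Ipopt

# Random stand-ins for the scaled features and ±1 labels
n, p = 50, 5
X_train = randn(n, p)
y_train = rand([-1.0, 1.0], n)
rho = 0.01

m = Model(solver=IpoptSolver())
@variable(m, v[1:n] <= 0)
@variable(m, μ <= 0)
@variable(m, β[1:p])
@variable(m, β0)
@variable(m, fx[1:n])
@variable(m, w[1:p] >= 0)
@variable(m, hs_norm >= 0)

# Linear score for each sample, and the l1-norm linearization
@constraint(m, fxcon[i=1:n], fx[i] == sum(X_train[i, j]*β[j] for j=1:p) + β0)
@constraint(m, hs_norm == sum(w[j] for j=2:p))
@constraint(m, pos_abs[i=2:p], w[i] >=  β[i])
@constraint(m, neg_abs[i=2:p], w[i] >= -β[i])

# The constraint the original post reported the error on
@NLconstraint(m, firstcon[i=1:n],
    μ + v[i] <= log(1 + exp(-y_train[i]*fx[i] + rho*hs_norm)) -
                log(1 + exp( y_train[i]*fx[i] + rho*hs_norm)))

@NLobjective(m, Max,
    -sum(log(1 + exp(-y_train[i]*fx[i] + rho*hs_norm)) for i=1:n) +
    0.1*n*μ + sum(v[i] for i=1:n))

solve(m)
```

If this version runs, the error in the original code most likely comes from something the simplification removed, e.g. `fx`/`β0` never being declared as JuMP variables, or `y_train` not being a plain `Vector{Float64}`.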