# Question about DataDrivenDiffEq / UDE / Symbolic Regression failing for very sparse settings

Hello,

I’m playing around with symbolic regression and UDEs for a larger problem and have run into some difficulties. For various methods, the symbolic regression simply fails with `MethodError`s from `zeros` and assorted array-dimension errors (which are rather inscrutable to me). This seems to happen when the sparsity-related parameters are set too “high”, which defeats the purpose of the exercise: the only solutions I can successfully obtain are really large expressions.

I post a simple example here where it fails with random matrices, but I could post my actual MWE / application, which trains a UDE on an ODE that is scalable by dimension, to check how well interaction terms are recovered at various system sizes. I naively thought you could sparsify the regression as much as you wanted, but unless I’m making a silly mistake, that doesn’t seem to be the case.

Any guidance or resources to check on hyperparameter tuning would be helpful here.

Thanks!

``````
# packages for basis construction, sparse regression, and RNG seeding
using DataDrivenDiffEq, DataDrivenSparse
using ModelingToolkit
using StableRNGs

n = 10
T = 15
@variables u[1:n]
b = polynomial_basis(u, 2)
basis = Basis(b, u);

# random X
X̂ = rand(n,T)

# simple relation to Y
Ŷ = X̂ .+ 0.15 .* rand(n,T)

# λ_sparse = exp10.(-1:1:3)
# opt = STLSQ(λ_sparse)
opt = SR3(1e-1, 100.0)  # <- doesn't work
# opt = SR3(1e-3, 100.0) # <- works

options = DataDrivenCommonOptions(
    normalize = DataNormalization(ZScoreTransform),
    selector = bic, digits = 2,
    data_processing = DataProcessing(split = 0.9,
                                     batchsize = 10,
                                     shuffle = true,
                                     rng = StableRNG(1111)))

nn_problem = DirectDataDrivenProblem(X̂, Ŷ)  # direct X -> Y regression problem
nn_res = solve(nn_problem, basis, opt, options = options)
nn_eqs = get_basis(nn_res)
equations(nn_eqs)
``````
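For what it’s worth, the commented-out `STLSQ` lines in the snippet hint at one tuning strategy: instead of committing to a single sparsity threshold, pass a whole vector of λ values and let the model selector (here `bic`) pick among the resulting sparse fits. A minimal sketch, assuming the same `nn_problem`, `basis`, and `options` as above; the λ range is a hypothetical starting point, not a recommendation:

``````
# Sweep a grid of sparsity thresholds; each λ yields a candidate
# sparse model, and the `selector` in the options (bic above)
# scores them to pick the final equations.
λ_sparse = exp10.(-3:0.25:0)  # hypothetical range; tune per problem
opt_sweep = STLSQ(λ_sparse)

sweep_res = solve(nn_problem, basis, opt_sweep, options = options)
equations(get_basis(sweep_res))
``````

Sweeping from very small thresholds upward also makes it easier to see at which λ the failures start, rather than guessing a single value.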

This might just be an issue with the way the optimization is done. We need to change it to a convex optimization. Can you share this as an issue on DataDrivenDiffEq.jl so we can track it?

Sure, will post it now! Sorry, I lost track of this.
