Numerical simulation changes with different tspan

I am attempting to simulate a network of four neurons, where the ith neuron’s dynamics are described by the following differential equations:

\dot{V}_m = I_{syn} - (V_m - \alpha_f^-\tanh(V_m - \delta_f^-) + \alpha_s^+\tanh(V_s - \delta_s^+) - \alpha_s^-\tanh(V_s - \delta_s^-) + \alpha_{us}^+\tanh(V_{us} - \delta_{us}^+))
\tau_s \dot{V}_s = V_m - V_s
\tau_{us} \dot{V}_{us} = V_m - V_{us}

where \tau_{us} \gg \tau_{s} \gg 1, and

I_{syn} = I_{app} + \sum_{j \neq i} \frac{\alpha_{ij}}{1+\exp(-\beta(V_j^s-\delta_{syn}))}

The network is entirely homogeneous: each neuron has the same parameters, and all of the synaptic weights between neurons are equal.

I am using lsoda (via LSODA.jl) with DifferentialEquations.jl to solve the system, with reltol = abstol = 1e-8. I find that the system either immediately converges to a fixed point or settles onto a limit cycle, depending on the time span over which I integrate. Any suggestions as to why this is happening and how I might fix it would be appreciated.

Can you provide some Julia code?

Sure. Here is the function for the ODE for the entire network:

function NNdynamics!(dV, V, p, t)
    @unpack neuronp, edgep, W = p
    @unpack β, δ = edgep
    for j in eachindex(neuronp)                # loop through each neuron
        idx = (j - 1) * 3
        Vneuron = @view V[idx+1:idx+3]         # (Vm, Vs, Vus) of neuron j
        neuronDS!(view(dV, idx+1:idx+3), Vneuron, neuronp[j], t)   # internal dynamics
        for i in eachindex(W[j])               # again, loop through each neuron
            dV[idx+1] += W[i][j] * sigmoid(β * (V[(i-1)*3 + 2] - δ))   # Isyn
        end
    end
end
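
Here sigmoid is the logistic function from the I_syn expression above; its definition (included for completeness) is just

sigmoid(x) = 1 / (1 + exp(-x))   # 1/(1+exp(-x)), as in the I_syn sum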

Here is the function for the internal dynamics of an individual neuron:

function neuronDS!(dV, V, p, t)
    @unpack αfm, αsp, αsm, δfm, δsp, δsm, αusp, δusp, Iₐ, τs, τus = p
    Vm, Vs, Vus = V
    dV[1] = Iₐ - (Vm - αfm*tanh(Vm - δfm) + αsp*tanh(Vs - δsp) - αsm*tanh(Vs - δsm) + αusp*tanh(Vus - δusp))
    dV[2] = (1/τs) * (Vm - Vs)
    dV[3] = (1/τus) * (Vm - Vus)
    return dV
end
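
The @unpack calls above assume a parameter container roughly like the following (named tuples unpacked with UnPack.jl; the numerical values here are placeholders, not the actual parameters):

using UnPack   # provides @unpack

# Placeholder parameter layout matching the @unpack calls above; values are illustrative only
neuron_pars = (αfm = 1.0, αsp = 1.0, αsm = 1.0, αusp = 1.0,
               δfm = 0.0, δsp = 0.0, δsm = 0.0, δusp = 0.0,
               Iₐ = 0.0, τs = 50.0, τus = 2500.0)               # τus ≫ τs ≫ 1
edge_pars = (β = 1.0, δ = 0.0)                                   # synaptic slope and threshold
W = [[i == j ? 0.0 : 0.1 for j in 1:4] for i in 1:4]             # W[i][j]: equal weights, zero self-coupling (j ≠ i in I_syn)
par = (neuronp = fill(neuron_pars, 4), edgep = edge_pars, W = W) # homogeneous network of 4 neurons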

And here is my function for simulating the system. The thing I'm confused about is that changing T gives qualitatively different results:

function numsim(f, V0, par, T)
    prob = ODEProblem(f, V0, (0.0, T), par)    # float tspan
    sol = solve(prob, lsoda(), reltol = 1e-8, abstol = 1e-8)
    return sol
end
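
A typical call, using the placeholder par from above and an arbitrary initial condition and horizon, would look like

using DifferentialEquations, LSODA

V0 = repeat([-1.0, -1.0, -1.0], 4)            # [Vm, Vs, Vus] for each of the 4 neurons (arbitrary)
sol = numsim(NNdynamics!, V0, par, 10_000.0)  # example horizon; this is the T I vary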

Let me know if there is anything else I should provide.

I’m not an expert, but since nobody else is jumping in, I’ll make a few points.

The first thing that jumps out at me is that when you change T, the initial stepsize changes. It may be, for example, that when T is larger you simply step over an important dynamical feature. I can't find any documentation about how that initial stepsize is chosen automatically, and although you can pass the dt option to set the initial stepsize yourself, it appears that lsoda doesn't actually use that option.
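
For what it's worth, the native Julia solvers do take dt as the initial stepsize, so with one of them you could write something like (Rodas5 here is just an example of a native stiff solver)

sol = solve(prob, Rodas5(), dt = 1e-3, reltol = 1e-8, abstol = 1e-8)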

That leads me to wonder why you’re using lsoda. In the first post, you pointed out that \tau_{us} \gg \tau_{s} \gg 1, which looks like you’re saying that the dynamics work on very different timescales, which suggests a stiff system. It is true that lsoda has long been a workhorse for that kind of problem, but it looks like there are typically better options nowadays — and certainly options that have better integration with the Julia interface. You could just use

sol = solve(prob, alg_hints = [:stiff], reltol = 1e-8, abstol = 1e-8)

to let the library choose an algorithm for you. Or if you want to get into the details, look at the solver recommendations in the DifferentialEquations.jl documentation (the full list of ODE solvers has even more detail). If I'm right in thinking that this is a stiff system, RadauIIA5 might be the algorithm for you, because you're also asking for pretty high accuracy. Note that those pages seem to suggest lsoda rather unenthusiastically; its main use case seems to be when the number of variables is very large, and even then QNDF, FBDF, and CVODE_BDF all come earlier in the recommendations. In any case, you say you only have 4 neurons.
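
If you do go the RadauIIA5 route, the call is just

sol = solve(prob, RadauIIA5(), reltol = 1e-8, abstol = 1e-8)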

So I would experiment with different algorithms, the dt parameter, and the tolerances to see whether the fixed point or limit cycle behaviors are robust; a sketch of such a sweep follows.
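
Something like this quick sweep (a sketch only, reusing your prob and a few of the algorithms mentioned above; it assumes DifferentialEquations and LSODA are loaded) would show whether the qualitative outcome depends on the solver or tolerance:

for alg in (lsoda(), RadauIIA5(), QNDF()), tol in (1e-6, 1e-8, 1e-10)
    s = solve(prob, alg; reltol = tol, abstol = tol)
    println((nameof(typeof(alg)), tol, s.u[end][1]))   # first neuron's Vm at the end of the run
end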


Thank you very much! I was using lsoda in order to reproduce the results of a colleague who had run the simulations in Python with scipy’s odeint function, which uses the lsoda algorithm. I will try messing with the algorithm and parameters as you mentioned to see if I can still reproduce his results and get more robust behavior with the limit cycle and fixed points.