Error using DifferentialEquations.jl to simulate a reasonably common SDE

I’m using DifferentialEquations.jl to try to simulate paths of the following SDE:

dS = 0.0314 dt - 0.576 v dW_1 + sqrt(1 - 0.576^2) v dW_2
d(v^2) = 0.035 (0.636 - v^2) dt + 0.144 v^2 dW_1

Sorry for the formatting, I wasn’t sure if the Discourse equations plugin was activated here. Anyway, this is my first time using DifferentialEquations.jl and I’m getting an error, presumably because I’m doing something wrong. My code is here:

using DifferentialEquations, Plots

# State: u[1] = S (the log-price), u[2] = v^2 (the variance).
function garch_diffusion_mu!(du, u, p, t)
    du[1] = 0.0314
    du[2] = 0.035 * (0.636 - u[2])
end
# Noise matrix: rows correspond to the equations, columns to dW_1 and dW_2.
function garch_diffusion_sigma!(du, u, p, t)
    du[1,1] = -0.576 * sqrt(u[2])
    du[1,2] = sqrt(1 - 0.576^2) * sqrt(u[2])
    du[2,1] = 0.144 * u[2]
    du[2,2] = 0.0
end
garch_diffusion_g_mat = fill(0.1, 2, 2);  # only the size/type of the noise prototype matters
prob = SDEProblem(garch_diffusion_mu!, garch_diffusion_sigma!, [log(30.0), 0.1], (0.0, 500.0), noise_rate_prototype=garch_diffusion_g_mat);
dt = 1 // 1000;
sol = solve(prob, dt=dt)
plot(sol)

If I set the time span to (0.0, 1.0), everything works fine, but when I set it to (0.0, 500.0) (which I need to do as I’m duplicating a simulation from a paper), I get the following error:

Warning: dt <= dtmin. Aborting. There is either an error in your model specification or the true solution is unstable

I’m assuming the problem is in my implementation of the SDE, since the equations themselves are known as the GARCH diffusion and have appeared in a fair few peer-reviewed papers.

Any help or insight that anyone can offer on this would be greatly appreciated.

Thanks,

Colin

UPDATE: I was able to get runs without any errors using either of the following options:

sol = solve(prob, dt=dt, force_dtmin=true)
sol = solve(prob, LambaEulerHeun(), dt=dt)

Presumably the second of these is preferred.

I am still concerned that I’ve made an error somewhere, so I’d still appreciate someone more experienced with the package looking over my code and commenting on whether it matches the equations I’ve provided. The reason for my concern is that the equations have appeared in several Econometrica papers (so they should be good, right?), but when I simulate them, some paths give negative values for S, which is not ideal since it is supposed to be a price…

I’ve analyzed diffusions in Stata and R and have been meaning to learn how to do it with Julia! Do you have a reference to the Econometrica article(s)? Are you copying the equations from the article or adapting from some existing code in another language?

For sure, you can find the SDE I describe in my first post in the simulation/Monte Carlo sections of the following papers:

Andersen, Bollerslev, and Meddahi (2005), “Correcting the Errors: Volatility Forecast Evaluation Using High-Frequency Data and Realized Volatilities”, Econometrica 73(1), pp. 279-296

Goncalves and Meddahi (2009), “Bootstrapping Realized Volatility”, Econometrica 77(1), pp. 283-306

Patton (2011), “Data-based Ranking of Realised Volatility Estimators”, Journal of Econometrics 161, pp. 284-303

I’m sure it would have been in other papers too. These are just the ones I know off the top of my head.

Thanks. Are your equations related to equation (16) in Patton (2011)?

Isn’t your S the log of the price? In which case it can be negative when the price is between 0 and 1.
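
For example, log(0.5) ≈ -0.69, so any price between 0 and 1 has a negative log-price. A minimal sketch (assuming the sol object from your earlier post) of recovering the price path from the simulated log-price:

log_prices = sol[1, :]          # first state component = S, the log-price
prices     = exp.(log_prices)   # convert back to prices; these are always positive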

The problem is interpreted in the Ito sense by default, and LambaEulerHeun is a Stratonovich-sense solver, so the second option is only preferred if you actually want the Stratonovich interpretation.

Did you try the same integrator as them and see if you get the same exact result? Did you try lowering the tolerance of the adaptive methods? Etc.
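
For instance (a sketch only, with tolerance values picked arbitrarily): LambaEM is an adaptive Ito-sense method that handles the non-diagonal noise here, and abstol/reltol override the SDE defaults of 1e-2, while LambaEulerHeun solves the Stratonovich interpretation instead:

# Adaptive Ito-sense solver for the non-diagonal noise problem, with explicit tolerances
sol_ito = solve(prob, LambaEM(), dt=dt, abstol=1e-3, reltol=1e-3)
# Stratonovich-sense counterpart
sol_strat = solve(prob, LambaEulerHeun(), dt=dt)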

Oops, yes, you’re totally right. The point I meant to make is that many of the prices it produces are still insane. On any given run, I’m getting log-prices ranging from -30 to 30, which corresponds to prices ranging from about 1e-13 to 1e13… that’s a bit silly, even for bitcoin.

These papers are sometimes a little light on the detail of exactly how the simulations are performed. For one of them, I even contacted the authors asking for code because the SDE produces prices ranging from 1e-15 to 1e15… they declined to release the code. The SDE I’ve written above is one of the ones that produce these crazy prices.

In one of the papers they state that they just use an Euler discretization scheme, which I’ve implemented myself in the past, and I get numbers similar to the authors’ for the simulations they perform. Getting exactly the same results is not feasible because of random number generation; they certainly don’t go to the level of providing seeds or the algorithms used.
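
For comparison, the same kind of fixed-step Euler scheme can be run directly in DifferentialEquations.jl (a sketch, reusing prob and dt from my first post; the step size is whatever dt is set to, not necessarily the one the papers used):

# Fixed-step Euler-Maruyama, no adaptivity
sol_em = solve(prob, EM(), dt=dt)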

As a general rule I am able to get simulation results similar to those published in the papers. Further, if you look at the numbers generated by the SDE in return space (that is, only at the increments of the SDE), they look reasonably sensible. It is only when you examine them in level space, i.e. the log-price or the price itself, that the numbers make very little sense given the supposed application. I’ve been trying to avoid claiming that the models are not suited to the application because, well, I figure I’m far more likely to be wrong than a sequence of peer-reviewed papers published in A+ journals… that’s why I decided to ask here, in case I was doing something obviously wrong :slight_smile:

BTW - I’m sure you’ve heard it before but amazing package! And the documentation is really good also.

You won’t be able to reproduce that range with “good” error (since that’s 30 orders of magnitude, and Float64 only carries about 16 significant digits), so the adaptive integrator is probably attempting to hit its error tolerance and giving up, effectively saying “I cannot achieve 1e-2 relative error over this entire range, because that’s not possible in Float64”. Hitting reasonable tolerances with SDEs is just hard, and maybe strong error is too strong a quality requirement for your problem.

The SDE is for the log-price, which ranges from -30 to 30, so as far as the SDE is concerned there shouldn’t be an issue with order of magnitude. The huge variation in order of magnitude only occurs when you convert the log-price back to the price by taking the exponential. Prices shouldn’t range from exp(-30) to exp(30) :slight_smile: but that is what you get when you convert the log-price from the SDE to the price. Honestly, I’m starting to suspect that it just isn’t a good SDE for the application, despite the fact that it has appeared so many times in top journals. I don’t want to waste people’s time if it turns out that this is the case.

If I understand correctly, you’re initializing price at 30, and the model says that price increases by approximately 3% every period, ignoring the stochastic part. You are simulating for 500 periods. 3% growth per period for 500 periods would give a final price of 7.86e7, the log of which is about 18. So, the results don’t seem too surprising. Perhaps this model was originally calibrated to shorter length time periods, so it may not be expected to work well for longer simulations?
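
A quick back-of-the-envelope check of the drift-only dynamics (ignoring the stochastic terms entirely):

# Deterministic part only: S grows by 0.0314 per period, i.e. exp(0.0314) - 1 ≈ 3.2% price growth
S0  = log(30.0)            # initial log-price from the SDEProblem above
S_T = S0 + 0.0314 * 500    # ≈ 19.1
exp(S_T)                   # ≈ 2.0e8
# Compounding "about 3%" per period, as above:
30.0 * 1.03^500            # ≈ 7.9e7, whose log is ≈ 18.2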

Another factor is that calibrating the model to replicate returns and the volatility of returns may not lead to a good fit to prices or log prices. The original authors may not have focused on getting a good fit to prices themselves. Small errors in the growth rate would lead to large errors in prices, after enough time has gone by. It might be possible to improve the calibration of this equation by using information on prices. Or, as you suggest, this model may not be a good one to fit prices well in the long run.

The more I mess around, the more I suspect it is a combination of both these things. I traced the parameters of this model back to some papers pre-1998, and it does indeed seem that the original applications involved shorter time periods. It would appear that over time people have continued to use the model without necessarily revisiting the assumptions under which it was originally created. It also appears, based on my reading of those older papers, that it was fitted to returns, not prices.

Thanks for this. I think you’ve hit the nail exactly on the head. It is good to hear someone else saying it as I thought I must have been doing something stupid.

Maybe calibrating to prices at the same time would be worthwhile. However, real data also has jumps from time to time that can reset prices. With a long time series, it would probably be good to allow for jumps, too. This thread has made me wonder if the expected sign of a jump should perhaps depend on the level of log prices…
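
A hypothetical sketch of what adding a jump to the log-price could look like in DifferentialEquations.jl, reusing prob and dt from the first post; the jump rate and jump size distribution here are made-up placeholder values, not a calibration:

# Compound-Poisson jumps on the log-price, bolted onto the existing SDEProblem
jump_rate(u, p, t) = 0.05                                      # placeholder: expected jumps per unit time
jump_affect!(integrator) = (integrator.u[1] += 0.1 * randn())  # placeholder: Gaussian jump in the log-price
jump = ConstantRateJump(jump_rate, jump_affect!)
jump_prob = JumpProblem(prob, Direct(), jump)
sol_jump = solve(jump_prob, LambaEulerHeun(), dt=dt)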

If you’re trying to include market capitalization events in your price series, then definitely. But the usual approach is to remove cap events using a backward-looking adjustment factor before you start any analysis.