It was great to read such an interesting discussion on Dynare, where I can agree with almost all sides to a large extent. I can’t resist adding my 2 cents though.
The main goal of empirically oriented subfields of economics (macro, intl macro, finance, trade) is to understand economic data using structural models. This usually involves solving the model, estimating (or calibrating) its parameters, and then (in the ideal case) model criticism to understand its limitations.
The problem is that the current state of the art for the above requires so much methodology that it stretches the limits of a graduate program, even for students with a macro or related focus. Teaching even the basic foundations of numerical linear algebra, first- and higher-order perturbations, global methods, sparse grids, and automatic differentiation (AD), let alone the various practical tools for Bayesian estimation, could easily take 3–5 years. This is a huge investment for what is a very useful, but at the same time very narrowly focused, set of skills.
So while black-box tools can be misused without deeper understanding, it is great that they exist; otherwise teaching macro would become impossible, it is just too overwhelming. What the Julia ecosystem could strive for is a composable set of tools which can be tweaked and modified on demand as necessary.
Not sure I have good advice on this. I would recommend joining a project with more senior coauthors so that one does not have to learn everything at once. My understanding is that this is quite common in the natural sciences, which have similar methodological demands. But I am aware of the trade-offs regarding career advancement.
Also, I think that demand for numerically trained quantitative macroeconomists outside academia is very countercyclical. When things are going well, no one really cares; when the next crisis hits, it is “we need estimated structural models about X ASAP”… until it is over and we don’t. But maybe I should not complain: quantitative epidemiologists had it worse before COVID hit.
At least from a pedagogical perspective, perhaps there could be an “Assumptions” flag, like the verbose flag we have for warnings, that auto-generated alerts for students/users about the assumptions being made in the calculation.
This would be moving towards Mathematica-type output, where assumptions are usually coupled with answers. This would be useful for audits as well, if these assumptions were assertible conditions.
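For illustration, a minimal sketch of what such a flag could look like. Everything here is hypothetical: `ASSUMPTIONS_VERBOSE`, `assume`, and `real_sqrt` are names invented for this sketch, not an existing API.

```julia
# Hypothetical "assumptions" flag (all names invented for illustration).
# When the flag is on, each assume() call emits an alert describing an
# assumption the calculation is making.
const ASSUMPTIONS_VERBOSE = Ref(false)

function assume(msg::AbstractString)
    ASSUMPTIONS_VERBOSE[] && @info "Assumption: $msg"
    return nothing
end

# A toy computation that declares its assumption explicitly.
function real_sqrt(x)
    assume("x is non-negative; returning the principal (non-negative) root")
    return sqrt(x)
end

ASSUMPTIONS_VERBOSE[] = true
real_sqrt(2.0)   # logs the assumption, then computes as usual
```

With the flag off, the `assume` calls are nearly free; flipping it on couples each answer with its assumptions, roughly in the spirit of Mathematica-style output, and the same messages could double as assertible conditions for audits.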
Since this thread is about the econ use of Julia, I’d like to take the opportunity to introduce my package SFrontiers.jl, which provides commands to estimate stochastic frontier (SF) models. SF models are often used in productivity and efficiency analysis. I just put the package online two days ago and haven’t registered it yet. The web page has a detailed example in a Jupyter notebook for easier exploration. Comments and suggestions are welcome.
There are at least two reasons why Julia is particularly useful for estimating SF models.
The parametric versions of the models are almost always estimated by MLE. Because they are highly nonlinear, the estimation is often numerically challenging. Julia’s Optim.jl coupled with ForwardDiff.jl substantially eases the problem. Coding time is much shorter now and the estimation quality is higher. I used to code in Stata and spent a lot of time deriving the analytic forms of gradients and Hessians to assist the estimation. They really helped, but that was 20 years ago. It’s not a smart thing to do today when you have Julia.
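As a toy illustration of that workflow (a normal-distribution MLE, not SFrontiers.jl’s actual likelihoods, which are more involved), here is how Optim.jl can build the gradient automatically with ForwardDiff.jl via `autodiff = :forward`, so no analytic derivatives are needed:

```julia
# Toy MLE sketch: fit μ and σ of a normal distribution by maximizing the
# likelihood numerically, with gradients supplied by forward-mode AD.
using Optim, Distributions, Random

Random.seed!(1)
y = randn(500) .* 2.0 .+ 1.0          # synthetic data, true μ = 1, σ = 2

# Negative log-likelihood; σ is parameterized as exp(θ[2]) to keep it positive.
negll(θ) = -sum(logpdf.(Normal(θ[1], exp(θ[2])), y))

# autodiff = :forward tells Optim.jl to differentiate negll with ForwardDiff.jl.
res = optimize(negll, [0.0, 0.0], BFGS(); autodiff = :forward)

μ̂ = Optim.minimizer(res)[1]
σ̂ = exp(Optim.minimizer(res)[2])
```

The same pattern (write the negative log-likelihood, let AD handle the derivatives) carries over to much more nonlinear likelihoods, which is exactly where hand-coded gradients used to eat up so much time.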
Simulation-based methods are becoming important in this field, and the speed of Julia makes them much more practical. I haven’t done serious benchmarking, but I have seen a maximum simulated likelihood example where typical (not optimized) Julia code runs more than 10 times faster than typical Stata code, and at least 2–3 times faster than quite optimized Stata code (which uses compiled functions).
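To make the simulated-likelihood idea concrete, here is a deliberately tiny sketch (toy example, not benchmark code): the density of y = v − u, with v normal and u half-normal, is approximated by averaging the conditional density over fixed simulation draws of u.

```julia
# Toy maximum-simulated-likelihood building block: the density of
# y = v - u (v ~ N(0, σv), u ~ |N(0, σu)|) has no need for closed form here;
# we integrate out u by Monte Carlo over a fixed set of half-normal draws.
using Distributions, Random, Statistics

Random.seed!(2)
R = 1_000
zdraws = abs.(randn(R))               # fixed half-normal draws for the latent u

# Simulated density at a point y: average of the conditional normal densities.
simdens(y, σv, σu) = mean(pdf.(Normal.(-σu .* zdraws, σv), y))
```

In a full estimator one would sum `log(simdens(...))` over observations and hand that to Optim.jl exactly as in the MLE case; the inner loop over draws is where Julia’s speed pays off.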
A lot of effort was put into software engineering tooling, workflows, etc. in this revision.
My personal feeling is that changing people’s workflows to support reproducibility is our most important goal, and the tools can help. To me, the eventual nail in the coffin of matlab is not its speed (it is actually pretty fast in practice since so many algorithms are dominated by linear algebra) nor its clunky syntax (which in some cases is far superior to python’s), but rather that it cannot support modern software engineering.
Julia is a great language for reproducibility and collaboration partially because it is new enough that these things (e.g. reproducible environments, CI, unit testing, collaboration workflows, package discovery) were designed in from its inception. So if a student learns these things correctly with julia, they can apply them with R, Python, Stan, etc.
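For instance, the reproducible-environment part of that list is built into the language via the standard Pkg library; a sketch of the usual commands (the project name here is just a placeholder):

```julia
# Standard reproducible-environment workflow with the built-in Pkg.
using Pkg

Pkg.activate("MyProject")   # create/switch to a project-local environment
Pkg.add("Optim")            # records the dependency in MyProject/Project.toml

# Project.toml + Manifest.toml pin the entire dependency graph; a collaborator
# who clones the project just runs:
#   Pkg.activate("MyProject"); Pkg.instantiate()
# and gets byte-for-byte the same package versions.
```

Because this is the default workflow rather than an add-on, students pick up the reproducibility habit from day one, and the habit transfers to virtualenvs, renv, etc. elsewhere.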
I created a double-entry bookkeeping package that I’d like to get to a point where I can register it. If I do the same thing with a relational database, I should be able to tie it to inventory management and track expenses to perform optimization.
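For readers unfamiliar with the idea, a minimal sketch of the double-entry invariant; the types and names here are invented for illustration and are not the package’s actual API:

```julia
# Double-entry bookkeeping in miniature (illustrative types, not a real API):
# every transaction posts debits and credits that must sum to the same total.
struct Posting
    account::String
    debit::Float64
    credit::Float64
end

struct Transaction
    description::String
    postings::Vector{Posting}
end

# The core invariant: total debits == total credits.
balanced(t::Transaction) =
    isapprox(sum(p.debit for p in t.postings),
             sum(p.credit for p in t.postings))

t = Transaction("Buy inventory", [
    Posting("Inventory", 100.0, 0.0),
    Posting("Cash",      0.0, 100.0),
])
```

Enforcing `balanced` at construction time is what would make the relational-database version safe to tie into inventory and expense tracking.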
Is QuantLib.jl under active development? I was trying to see if I could use it for some basic stuff like USD swaps curve fitting, but some bits are either stale (for example, the US business calendars don’t include June 19th) or not ported from upstream (overnight indices like SOFR).
I’m an end user (i.e., I’d run a curve fitter on live data streamed from Bloomberg for my trading needs), but I probably don’t have the skills to contribute to the port itself. Happy to help from that end though.
I’m curious (as the developer of Yields.jl) what you are looking for. I’m not intending Yields.jl to be a trading-desk level tool, but I’m wondering if there are features you are looking for that would be a fit for Yields.jl
Sure. What I eventually want to do is the following.
Set the as-of date (either today or a past date)
Create a list of instruments that will be used to calibrate the yield curve (meaning, retrieve the definitions of, say, the first 8 IMM SOFR futures, plus swap definitions with exact cash-flow dates for a set of maturities)
Calibrate a SOFR discount curve that gives 0 NPV for each of the above instruments, with some external constraints on the shape of the curve
Query that curve for the usual: discount rates, forward rates, etc.
Same as above except I need to jointly calibrate a EURIBOR and an ESTR curve.
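The calibration and query steps above can be sketched in a deliberately simplified whole-year form (annual fixed leg, no settlement lags, i.e. exactly the textbook approximation the exact-date machinery is meant to replace); `bootstrap` and `fwd` are invented names for this sketch:

```julia
# Toy bootstrap of discount factors from par swap rates, one per whole year.
# Par condition for an n-year annual swap:  S_n * Σ_{i=1}^n df_i = 1 - df_n,
# which solves for df_n given the already-bootstrapped shorter df's.
function bootstrap(par_rates::Vector{Float64})
    dfs = Float64[]
    for S in par_rates
        df = (1 - S * sum(dfs)) / (1 + S)
        push!(dfs, df)
    end
    return dfs
end

# Query the curve: simple forward rate over the year from i-1 to i.
fwd(dfs, i) = (i == 1 ? 1.0 : dfs[i-1]) / dfs[i] - 1

dfs = bootstrap([0.02, 0.025, 0.03])   # 1y, 2y, 3y par rates
```

Each instrument reprices to 0 NPV by construction here; the hard part, as noted, is replacing the whole-year cash-flow grid with the instruments’ actual dated schedules.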
The main point is that I need to be instrument-aware. Not generic, textbook (Hull or other) instruments, but actual tradable market instruments, with appropriate precision (often the last cash flow of a 1y swap is not exactly at time 1.0, if only because swaps settle T+1 or T+2).
My understanding is that only QuantLib C++ (not yet .jl) has the facilities to create the instruments according to their exact definitions.
Which is probably what you’d call trading-desk level. I’m a derivatives trader, so that’s what I’m after: trading level. Not market-making level, though (that would be one extra step, and one for which I have no need at this point).