Finance and Economics Use Cases

I am using it for a derivative-pricing “kernel” inside a Python model validation tool. Performance is key here, and the problem is hard to vectorize (for NumPy), so a solution outside of Python was needed.
I am using QuantLib.jl and PyJulia for Python integration.
It is actually my first Julia project and I am quite satisfied so far :slight_smile:

5 Likes

(Now that this old thread has been revived) I would like to offer my own small contribution to the Julia/finance field: https://github.com/PaulSoderlind has Jupyter/Julia notebooks for both MSc- and PhD-level finance.

12 Likes

I am the author of the QuantLib.jl package - let me know if you are having any issues or would like to see something added! I have been meaning to jump back into it (the last updates were getting it working with Julia 1.x), but life has been a bit busy. I should probably try to get it registered as an official package at some point.

4 Likes

Any plans to add multi-threading support?

4 Likes

Great suggestion - this seems like a logical improvement!

1 Like

QuantLib.jl is really a game changer. Installing QuantLib C++ on Windows is very time-consuming.

3 Likes

Thanks, the package was very useful for me!
I was at first a bit reluctant to use an unregistered package, but it turned out that it contains all the functionality I need (calendars, day counts, rollout of payment schedules, curves), so I gave it a try and did not encounter any issues. I added a few small constructors for convenience and will make a PR; maybe they are helpful for others, too.
Having QuantLib.jl registered would be great.
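
For anyone else hesitant about the unregistered status: Pkg can install a package straight from its repository URL, so trying it out is a one-liner (assuming the repository is at pazzo83/QuantLib.jl):

```julia
using Pkg
Pkg.add(url = "https://github.com/pazzo83/QuantLib.jl")  # install the unregistered package from GitHub
using QuantLib
```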

@pazzo83 Thanks for registering your QuantLib package!

3 Likes

GitHub - SciML/NeuralPDE.jl: Physics-Informed Neural Networks (PINN) and Deep BSDE Solvers of Differential Equations for Scientific Machine Learning (SciML) accelerated simulation solves a hundred-dimensional nonlinear Black-Scholes equation in a handful of seconds by mixing neural networks with the SDE solvers. That’s a nice showcase of Julia for SciML and how it can be used in financial applications. Right above that, a hundred-dimensional Hamilton-Jacobi-Bellman equation is solved in a handful of seconds, again by the same technique. The technique is described in [2001.04385] Universal Differential Equations for Scientific Machine Learning

6 Likes

I work in the financial industry. My main proficiency is in Python. I picked up Julia by replicating the notebooks put out by the QuantEcon econometrics course. But I am more comfortable in R and Python in general, though this is just because most of my time goes into working with those languages.

One reason you could be seeing a perceived low activity regarding Julia in finance could be that the industry is notoriously tight-lipped in general. As cultures go, we are not exactly the paragon of open-source-style transparency, and for good reason :slight_smile:

Python is enjoying its day in the sun because of the commoditization of “data science” and “machine learning”, buzzwords that get inserted into almost all arenas now, even where there isn’t really a scope for them. On top of that, Python is indeed a very ubiquitous language: nimble, flexible, and easy to learn. That means it is usually the language of choice for first-time programmers or non-programmers who want to write some basic code. And if you can do some “data science” in the first language you learnt, hey, why would you need to pick up a new one!

But I suspect that as the buzzword-dropping tendency fades, the differentials will matter considerably more. And I can tell you that it is already beginning: there are inherent problems with the way Python handles data that show themselves after a certain point.

I am quite sure Julia will be gaining traction.

5 Likes

I would be curious to hear what you have in mind here – not to trash Python, but to ask what Julia can do (or does?) better for data applications in finance.

The technologies I have encountered so far at banks etc. are (excluding legacy stuff like COBOL):

  • Prototyping / smaller tools: Excel/Access VBA (the bad ones), Python (the better ones)
  • Larger applications: Java (most frequently), C#, C++ (especially for performance-critical parts)

The institutions are usually rather conservative regarding new technologies (somewhat understandable, looking at the costs of introducing new systems), so one would need a very good use case to convince an institution to use Julia (a rather new and exotic language compared to the ones above) on a significant scale. And for prototyping, Python is often deemed “fast enough”.

Other obstacles for the introduction of Julia in large financial institutions are:

  • In most cases, there is no official Julia installation available from internal IT
  • Most decision makers have no experience with Julia
  • There are very few developers available (internal and external) with Julia experience

I would really like to see a more widespread adoption of Julia in the financial industry, and I think it would be superior to other languages for many use cases. But I fear that it will still take a while :frowning:.

3 Likes

I think that’s a pretty good assessment of the issues.

One thing that might be cool to see is a Julia-first finance shop of some kind, in the same way that Jane Street drives so much of OCaml development. I think you would start to see others follow if there were a successful competitor that used the tooling.

For banks and stuff (ignoring trading desks) I think the use case is maybe not as strong – folks like market makers, hedge funds, and prop traders are the sorts of people who might benefit from removing the two-language problem and care enough about speed to make it work.

2 Likes

Working on it :slight_smile:

4 Likes

Jane Street has been using and contributing to OCaml for quite a few years, but I don’t see many others following it…

It’s a fair point. Bad example.

Perhaps the better point to make is that OCaml now has the “Jane Street” mystique to it, and would otherwise largely be an unused language. I have a suspicion that people learn OCaml because they want to work at Jane Street, where everyone is cool and wears hoodies and writes OCaml in Emacs. It’s a nontrivial language benefit, but you’re right that it does not seem to have imbued OCaml with outrageous success in the finance industry. Though apparently Bloomberg uses it for some derivatives risk-management/pricing thing.

Why the hell is Java/C# so popular, is my question? I have written a tremendous amount of C# and actually very much enjoyed the experience, but it was not easy to do anything numeric. I know Java is quite similar. What was the thing that happened early on that made these two (especially Java) so popular in banks and stuff? Was it just a right time, right place kind of thing?

Maybe Oracle was just spectacularly good at being there when banks were talking about computerizing, and banks were inclined to trust Oracle’s support. I dunno.

Edit: Found a good Quora answer about this.

1 Like

A Nobel laureate (Tom Sargent) and co-authors recently used Turing.jl to study economic history.

Abstract:
We enlist Turing.jl, Bayes’ Law, Hamiltonian Monte Carlo, and a parametric statistical model of Hicks-Arrow date-contingency prices to approximate nominal (meaning gold-dollar) yield curves for the US from 1791 to 1930. Posterior probability coverage intervals for yield curves indicate more uncertainty during periods in which data are especially sparse (e.g., during the administration of Andrew Jackson who, unlike his admirer President Donald Trump, paid off all US federal debt). We compare our approximate yield curves with standard historical series on yields on US federal debt and find substantial discrepancies especially during war time surges in government expenditures that were accompanied by units of account ambiguities. We use our approximate yield curves to study how long it took to achieve Alexander Hamilton’s goal of reducing default premia in US yields by building a reputation for paying as promised on time.
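
For readers unfamiliar with the toolchain: the workflow the paper relies on (define a probabilistic model, sample the posterior with Hamiltonian Monte Carlo/NUTS, read off coverage intervals) looks roughly like this in Turing.jl. This is a minimal generic sketch, not the authors’ model:

```julia
using Turing

# toy model: infer the success probability behind Bernoulli data
@model function coinflip(y)
    p ~ Beta(1, 1)              # prior
    for i in eachindex(y)
        y[i] ~ Bernoulli(p)     # likelihood
    end
end

chain = sample(coinflip([1, 0, 1, 1, 1]), NUTS(), 1_000)
quantile(chain)   # posterior quantiles, from which coverage intervals are read off
```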

9 Likes

I had been aware of the SolveDSGE.jl package for a while, but I only recently tried it when I realized that it can simulate after solving. I have been very favorably impressed. It has a very clear syntax, it can make use of nice Julia features like Unicode, and it can solve models using a number of methods and then simulate them. It doesn’t currently have built-in estimation methods, but it does have everything needed for simulated-moments estimation or related methods.

2 Likes

tbeason,

The SparkSQL.jl package gives Julia programmers the full expressive power of advanced SQL. The query results are returned as Julia DataFrames, after which the many wonderful Julia data-science packages can be applied. Here’s how easy it is:

query = sql(sprk, "SELECT * FROM julia_data j LEFT OUTER JOIN spark_data s ON j.ticker = s.ticker AND j.trading_date > '2010-01-01'")
results = toJuliaDF(query)

The project tutorial page has additional sample code and instructions:
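
Once the results are in a DataFrame, the usual DataFrames.jl operations apply. A small sketch (the :close column is hypothetical; substitute whatever columns your query actually returns):

```julia
using DataFrames, Statistics

# hypothetical follow-up: average closing price per ticker
combine(groupby(results, :ticker), :close => mean => :avg_close)
```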

1 Like

I recently added the following simple consumption-savings example to InfiniteOpt.jl.
image

using InfiniteOpt, Ipopt

# finite dimensional parameters
ρ = 0.025  # discount rate
k = 100.0  # utility bliss point
T = 10.0   # life horizon
r = 0.05   # interest rate
B0 = 100.0 # endowment

# infinite dimensional parameters
u(c; k=k) = -(c - k)^2       # utility function
discount(t; ρ=ρ) = exp(-ρ*t) # discount function
BC(B, c; r=r) = r*B - c      # budget constraint

# hyperparameters
opt = Ipopt.Optimizer   # desired solver
ns = 1_000;             # number of points in the time grid

# solve 
m = InfiniteModel(opt)
@infinite_parameter(m, t in [0, T], num_supports = ns) ## time
@variable(m, B, Infinite(t)) ## state variables
@variable(m, c, Infinite(t)) ## control variables
@objective(m, Max, integral(u(c), t, weight_func = discount))
@constraint(m, B == B0, DomainRestrictions(t => 0))
@constraint(m, B == 0, DomainRestrictions(t => T))
@constraint(m, c1, deriv(B, t) == BC(B, c; r=r))
optimize!(m)

c_opt = value(c)
B_opt = value(B)
ts = supports(t)
opt_obj = objective_value(m) # V(B0, 0)

# closed form
λ1 = exp(r*T)
λ2 = exp(-(r-ρ)*T)
den = (λ1-λ2)*r
Ω1 = (k + (r*B0-k)*λ2)/den
Ω2 = (k + (r*B0-k)*λ1)/den
c0 = r*B0 + r*Ω1 + (r-ρ)*Ω2
BB(t; k=k,r=r,ρ=ρ,Ω1=Ω1,Ω2=Ω2) = (k/r) - Ω1*exp(r*t) + Ω2*exp(-(r-ρ)*t)
cc(t; k=k,r=r,ρ=ρ,c0=c0)       = k + (c0-k)*exp(-(r-ρ)*t)

using Plots
ix = 2:(length(ts)-1) # index for plotting
plot(legend=:topright);
plot!(ts[ix], c_opt[ix], color = 1, lab = "c: consumption, InfiniteOpt");
plot!(ts[ix], cc, color = 1, linestyle=:dash, lab = "c: consumption, closed form");
plot!(ts[ix], B_opt[ix], color = 4, lab = "B: wealth balance, InfiniteOpt");
plot!(ts[ix], BB, color = 4, linestyle=:dash, lab = "B: wealth balance, closed form")

image

@pulsipher did a phenomenal job w/ this package!

In the above example, the interest rate is r = 0.05 and the discount rate is ρ = 0.025.
Let’s do comparative statics on ρ: examine how changes in the exogenous parameter ρ affect the endogenous variables (consumption, wealth).

using InfiniteOpt, Ipopt, Plots

ρ = 0.025  # discount rate
k = 100.0  # utility bliss point
T = 10.0   # life horizon
r = 0.05   # interest rate
B0 = 100.0 # endowment
opt = Ipopt.Optimizer   # desired solver
ns = 1_000;             # number of gridpoints
u(c; k=k) = -(c - k)^2       # utility function
#discount(t; ρ=ρ) = exp(-ρ*t) # discount function
BC(B, c; r=r) = r*B - c      # budget constraint

# Setup the model (without the objective)
m = InfiniteModel(opt)
@infinite_parameter(m, t in [0, T], num_supports = ns) 
@variable(m, B, Infinite(t)) ## state variables
@variable(m, c, Infinite(t)) ## control variables
#@objective(m, Max, integral(u(c), t, weight_func = discount))
@constraint(m, B == B0, DomainRestrictions(t => 0))
@constraint(m, B == 0, DomainRestrictions(t => T))
@constraint(m, c1, deriv(B, t) == BC(B, c; r=r))
set_silent(m) #turns off output, warnings etc. 

# Define a grid of the parameter of interest ρ and solve over the grid:
grid_ρ = [.025, .05, .075]
c_data = Dict()
B_data = Dict()
for ρ in grid_ρ
    @objective(m, Max, integral(u(c), t, weight_func = t -> exp(-ρ*t)))
    optimize!(m)
    c_data[ρ]= value(c)
    B_data[ρ]= value(B)
    println("ρ=",ρ," ", termination_status(m))
end 


ts = supports(t)
ix = 2:(ns-1)

plot(legend=:topright);
plot!(ts[ix], c_data[grid_ρ[1]][ix], color=1, lab = "c: r=.05 > ρ="*string(grid_ρ[1]));
plot!(ts[ix], c_data[grid_ρ[2]][ix], color=2, lab = "c: r=.05 = ρ="*string(grid_ρ[2]));
pc = plot!(ts[ix], c_data[grid_ρ[3]][ix], color=3, lab = "c: r=.05 < ρ="*string(grid_ρ[3]));

plot(legend=:topright);
plot!(ts[ix], B_data[grid_ρ[1]][ix], color=1, lab = "B: r=.05 > ρ="*string(grid_ρ[1]));
plot!(ts[ix], B_data[grid_ρ[2]][ix], color=2, lab = "B: r=.05 = ρ="*string(grid_ρ[2]));
pB = plot!(ts[ix], B_data[grid_ρ[3]][ix], color=3, lab = "B: r=.05 < ρ="*string(grid_ρ[3]));

plot(pc, pB, layout = (2, 1), legendfontsize=6)

This gives the expected result: relatively patient households consume less earlier.
image

If anyone knows a smarter (faster) way to do comparative statics, I’m all ears.
@ptoche
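
One option, assuming the solves are independent, is to parallelize over the grid with one model per thread (start Julia with julia -t auto). A rough sketch, where build_model(ρ) is a hypothetical helper wrapping the model setup above:

```julia
c_data = Dict{Float64, Vector{Float64}}()
lk = ReentrantLock()
Threads.@threads for ρ in grid_ρ
    m = build_model(ρ)   # hypothetical: rebuilds the InfiniteModel with discount rate ρ
    optimize!(m)
    lock(lk) do          # Dict is not thread-safe; guard the write
        c_data[ρ] = value(m[:c])
    end
end
```

Whether this beats reusing one model (which warm-starts Ipopt across solves) depends on the model-build time; with only three grid points it may not matter much.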

7 Likes