Excel to Julia

A bit of background - I am an actuary who is trying to learn Julia. In doing so, I am trying to move some of the models that I would usually make in Excel to Julia.

I am stuck on one particular problem which Excel makes quite easy for me. I am looking for ways to do the same in Julia. The problem is as follows:

I know the probabilities of default for various ratings (AAA to CCC) at various maturities (years 1 to 30, discrete). I don’t know the underlying transition matrix that produced this table, and the task is to fit a transition matrix.

It is basically a discrete Markov model.

In Excel I would do the following:

  1. I know the probability of default at various maturities, which looks something like this:

  2. Define a matrix T which gives transition rates for the states AAA, AA, A, BBB, BB, B, CCC and default, where AAA to CCC are ratings and the final column is the default state. The default column is known and is equal to the probability of default in year 1. The diagonal entries of the matrix are set equal to “1 - sum of the other entries” in the row.

  3. Initialize this matrix, say by setting zero everywhere except the diagonal entries (formula) and the default column (known).

  4. Do repeated matrix multiplications to get the probability of default at various years. For example, the last column of T^2 gives the probability of default at year 2.

  5. Calculate the squared distance, at all years, between the actual probability of default (from step 1) and the values calculated in step 4.

  6. Run Excel Solver to minimize the sum of squared differences.

I am not sure if there is an easy way to set this up in Julia. I can experiment if someone can point me to a package that I should be looking at.
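For what it's worth, the repeated matrix multiplications and the squared-distance objective described above are easy to express in plain Julia. Below is a minimal sketch with a made-up 3-state example (one absorbing default state); the names `cum_default` and `sum_sq_diff` are my own, not from any package:

```julia
# Cumulative default probability after t years: last column of T^t,
# assuming the last state is the absorbing default state.
cum_default(T, t) = (T^t)[1:end-1, end]

# Tiny 3-state example: two ratings plus an absorbing default state.
T = [0.90 0.07 0.03;
     0.05 0.85 0.10;
     0.00 0.00 1.00]

# Sum of squared differences against an observed default table, where
# pd_table[t, k] = observed cumulative PD of rating k at year t.
function sum_sq_diff(T, pd_table)
    nyears, nratings = size(pd_table)
    s = 0.0
    for t in 1:nyears
        pd_t = cum_default(T, t)
        for k in 1:nratings
            s += (pd_t[k] - pd_table[t, k])^2
        end
    end
    return s
end
```

This is exactly the objective Excel Solver would be minimizing; the remaining question is how to hand it to a Julia optimizer.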


In your Excel table, there are both “numbers” (which you want to optimize) and calculated values. You need to separate them, i.e. have one matrix of the transition values without the (dependent) diagonal entries, and add those in a separate function.
For the optimization, you may use JuMP.jl.
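To illustrate the separation: keep only the free off-diagonal, non-default entries as optimization inputs and rebuild the full matrix in a helper. A sketch, assuming the 8-state layout from the question; the helper name `build_matrix` is my own:

```julia
# Rebuild a full 8x8 transition matrix from a vector of the free
# off-diagonal, non-default entries (7*6 = 42 of them). The default
# column is fixed from the year-1 PDs, and each diagonal entry is the
# dependent value 1 minus the rest of its row.
function build_matrix(free, pd_year1)
    n = 8
    T = zeros(n, n)
    T[1:n-1, n] .= pd_year1          # known default column
    T[n, n] = 1.0                    # default state is absorbing
    idx = 1
    for i in 1:n-1, j in 1:n-1
        if i != j
            T[i, j] = free[idx]      # free parameter
            idx += 1
        end
    end
    for i in 1:n-1
        T[i, i] = 1 - sum(T[i, :])   # dependent diagonal entry
    end
    return T
end
```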


Sure, I can create a matrix that I am trying to optimize using numbers only, and have the condition on the diagonal entries separately.

I will look at JuMP.jl, thanks

For the specific case of nonlinear least-squares fitting, you might look at https://github.com/JuliaNLSolvers/LsqFit.jl
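To sketch the LsqFit.jl idea: the parameter vector holds the free entries and the model function returns the model-implied PD curve. Below is a deliberately tiny version with a single rating and a default state (my own simplification, not the full 8-state problem); the `curve_fit` call is shown in a comment because it needs your observed data:

```julia
# Toy parametrization: one rating with annual default probability p[1],
# i.e. T = [1-p p; 0 1], so the cumulative PD is PD(t) = 1 - (1 - p)^t
# (the last column of T^t). Broadcasting over t lets this act as the
# model function that curve_fit expects.
pd_model(t, p) = 1 .- (1 .- p[1]) .^ t

# With LsqFit.jl installed, the fit would look something like:
#   using LsqFit
#   years = 1:30
#   fit = curve_fit(pd_model, years, observed_pd, [0.01])
#   coef(fit)   # fitted annual default probability
```

The full problem would use a model function that rebuilds the 8x8 matrix from the parameter vector and returns the stacked PD columns for all years.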


@lungben Thank you for your guidance. I have been looking at JuMP, and this is what I have so far:

using JuMP, Ipopt

function calibrate_trans_rate(filepath_pd)
	
	# Get PD values
	pd_table = zeros(Float64, 30, 7)
	open(filepath_pd, "r") do fp
		for row in 1:30
			line = readline(fp)
			pd_table[row, :] .= parse.(Float64, split(line, ",")) 
		end
	end

	# Set up solver
	model = Model(Ipopt.Optimizer)  # need to decide what solver to use
	
	# Set variable to be transition values
	@variable(model, 0 <= x[1:8, 1:8] <= 1)
	
	@constraints(model, begin
			default_col[i = 1:7], x[i, 8] == pd_table[1, i]
			default_row[i = 1:7], x[8, i] == 0
			default_state, x[8, 8] == 1
			sum_of_rows[i = 1:8], sum(x[i, :]) == 1
		end)
	
	# Define statistic to minimize
	
	# need to do this as can't do vector operations in JuMP for NL?
	function sq_matrix_power(x)

		x_power = x
		x_temp = zeros(size(x))
		sum_distance = 0.0

		for t=1:30
			if t>2
				for row = 1:size(x,1), col = 1:size(x,2)
					for k = 1:size(x,2)
						x_temp[row, col] = x_temp[row, col] + x_power[row, k] * x[k, col]
					end
				end

				x_power = x_temp
				x_temp = zeros(size(x))
			end

			for k = 1:size(x,2)-1
				sum_distance = sum_distance + (x_power[k, 2] - pd_table[1, k])^2
			end
		end

		return sum_distance
	end
	
	register(model, :sq_matrix_power, 1, sq_matrix_power, autodiff=true)
	
	@objective(model, Min, sq_matrix_power(x))

	JuMP.optimize!(model)

	obj_value = JuMP.objective_value(model)
	trans_matrix = JuMP.value.(x)
end

I get the following error when trying to run this:

InexactError: convert(Float64, x[1,1]²)

convert @ quad_expr.jl:330 [inlined]
setindex! @ array.jl:849 [inlined]
(::Main.workspace379.var"#sq_matrix_power#11"{Array{Float64,2}})(::Array{JuMP.VariableRef,2}) @ Other: 37
macro expansion @ rewrite.jl:227 [inlined]
macro expansion @ macros.jl:830 [inlined]
calibrate_trans_rate(::String) @ Other: 55
top-level scope @ Local: 1

When I run this line by line - it seems the error is here:

for row = 1:size(x,1), col = 1:size(x,2)
	for k = 1:size(x,2)
		x_temp[row, col] = x_temp[row, col] + x_power[row, k] * x[k, col]
	end
end

But I am not sure how to correct this.

I have managed to solve the problem. See the solution below for anyone else who might be interested.

using JuMP, Ipopt

function calibrate_trans_rate(filepath_pd)
	
	# Get PD values
	pd_table = zeros(Float64, 30, 7)
	open(filepath_pd, "r") do fp
		for row in 1:30
			line = readline(fp)
			pd_table[row, :] .= parse.(Float64, split(line, ",")) 
		end
	end

	# Set up solver
	model = Model(Ipopt.Optimizer)  # need to decide what solver to use
	
	# Set variable to be transition values
	@variable(model, 0 <= x[1:8, 1:8] <= 1)
	@variable(model, y[1:8, 1:8, 1:31])	# This is needed as JuMP only supports matrix operators up to quadratic terms. It doesn’t support cubic.
	
	# Constraints on transition rates
	@constraints(model, begin
			default_col[i = 1:7], x[i, 8] == pd_table[1, i]
			default_row[i = 1:7], x[8, i] == 0
			default_state, x[8, 8] == 1
			sum_of_rows[i = 1:8], sum(x[i, :]) == 1
		end)
	
	# Constraints on y {this is just a hack to get cubic terms for JuMP}
	@constraints(model, begin
			first_t_cons, y[:, :, 1] .== x
			other_t_cons[t = 1:30], y[:, :, t] * x .== y[:, :, t + 1]
		end)

	# Define statistic to minimize
	@NLexpression(model, tot_sum_of_sq_diff, sum((y[k, 8, t] - pd_table[t, k])^2 for k=1:7, t=1:30))
	
	@NLobjective(model, Min, tot_sum_of_sq_diff)

	JuMP.optimize!(model)

	obj_value = JuMP.objective_value(model)
	trans_matrix = JuMP.value.(x)

	return trans_matrix
end
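One way to sanity-check the result is to recompute the model-implied PDs from the fitted matrix and compare them to the input table. A small sketch; the helper name `check_fit` is mine, with `trans_matrix` and `pd_table` as in the function above:

```julia
# Given a fitted 8x8 row-stochastic matrix, recompute the model-implied
# cumulative PDs (last column of the matrix powers) and return the
# largest absolute deviation from the observed table.
function check_fit(trans_matrix, pd_table)
    max_err = 0.0
    for t in 1:size(pd_table, 1)
        model_pd = (trans_matrix^t)[1:7, 8]
        max_err = max(max_err, maximum(abs.(model_pd .- pd_table[t, :])))
    end
    return max_err
end
```

A large residual here would suggest either that the solver stopped at a poor local optimum or that no single homogeneous transition matrix reproduces the table.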