# Replicate Python code in Julia

Hi everyone,
I want to replicate some Python code in Julia, but I get an error.

```julia
# Neural network construction
using Flux, Plots

model = Chain(Dense(5 => 32), Dense(32 => 32), Dense(32 => 2, bias=false))

function dr(a::Vector, z::Vector, e_r::Vector, R::Vector, C::Vector)
    # create two empty vectors of length n
    n = length(a)
    x = zeros(n)
    y = zeros(n)

    # we normalize exogenous state variables by two standard deviations
    # so that they are typically between -1 and 1
    a = a / sigma_e_a / 2
    z = z / sigma_e_z / 2
    e_r = e_r / sigma_e_r / 2

    # we normalize interest rate and consumption to be between -1 and 1
    R = (R - Rmin) / (Rmax - Rmin) * 2.0 - 1.0
    C = (C - Cmin) / (Cmax - Cmin) * 2.0 - 1.0

    s = cat([_e[:, 1] for _e in [a, z, e_r, R, C]], dims=2)

    x = model(s)  # n x 2 matrix

    # consumption share is always in [0, 1]
    lambda_decision = exp.(x[:, 1])

    # expectation of marginal consumption is always positive
    pie_decision = x[:, 2]

    return (lambda_decision, pie_decision)
end

Rvec = LinRange(Rmin, Rmax, 100)
Cvec = LinRange(Cmin, Cmax, 100)
Rvec = vcat(Rvec)
Cvec = vcat(Cvec)
lambdavec, pievec = dr(Rvec*0, Rvec*0, Rvec*0, Rvec, Cvec)
```

Can anyone, please, tell me what’s wrong with my code?

Hi there! What error messages do you see? The messages often contain the answers.

```
MethodError: no method matching -(::Vector{Float64}, ::Int64)
For element-wise subtraction, use broadcasting with dot syntax: array .- scalar
Closest candidates are:
  -(::T, ::T) where T<:Union{Int128, Int16, Int32, Int64, Int8, UInt128, UInt16, UInt32, UInt64, UInt8} at int.jl:86
  -(::UniformScaling, ::Number) at /Applications/Julia-1.8.app/Contents/Resources/julia/share/julia/stdlib/v1.8/LinearAlgebra/src/uniformscaling.jl:146
  -(::Core.LLVMPtr, ::Integer) at ~/.julia/packages/LLVM/HykgZ/src/interop/pointer.jl:111

Stacktrace:
  dr(a::Vector{Float64}, z::Vector{Float64}, e_r::Vector{Float64}, R::Vector{Float64}, C::Vector{Float64})
    @ Main ./In:14
  top-level scope
    @ In:5
```


Funny, the error message literally contains the answer: “For element-wise subtraction, use broadcasting with dot syntax: array .- scalar”.

So if you have a vector `v` and want to subtract a scalar `s` from every element of `v`, use `v .- s`.
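A minimal illustration, using a toy vector and toy values for the `Rmin`/`Rmax` normalization from the original post:

```julia
v = [1.0, 2.0, 3.0]

# v - 1 throws the MethodError above; the dotted operator broadcasts instead:
w = v .- 1                                         # [0.0, 1.0, 2.0]

# The same dots are needed in every scalar-vector operation in dr(),
# e.g. the interest-rate normalization (toy values for Rmin and Rmax):
R, Rmin, Rmax = [1.0, 1.5, 2.0], 1.0, 2.0
Rn = (R .- Rmin) ./ (Rmax - Rmin) .* 2.0 .- 1.0    # [-1.0, 0.0, 1.0]
```

An alternative is `@. (R - Rmin) / (Rmax - Rmin) * 2.0 - 1.0`, where the `@.` macro dots every operator in the expression at once.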


Thanks, I corrected the code, but now I am getting another error message. It is similar to the first one, but I can't figure it out:

```
MethodError: no method matching +(::Vector{Float64}, ::Float32)
Closest candidates are:
  +(::Any, ::Any, ::Any, ::Any…) at operators.jl:591
  +(::T, ::T) where T<:Union{Float16, Float32, Float64} at float.jl:383
  +(::Union{InitialValues.NonspecificInitialValue, InitialValues.SpecificInitialValue{typeof(+)}}, ::Any) at ~/.julia/packages/InitialValues/OWP8V/src/InitialValues.jl:154

Stacktrace:
  _getindex
  getindex
  copy
  materialize
  (::Dense{typeof(identity), Matrix{Float32}, Vector{Float32}})(x::Matrix{Vector{Float64}})
    @ Flux ~/.julia/packages/Flux/uCLgc/src/layers/basic.jl:174
  macro expansion
    @ ~/.julia/packages/Flux/uCLgc/src/layers/basic.jl:53 [inlined]
  _applychain
    @ ~/.julia/packages/Flux/uCLgc/src/layers/basic.jl:53 [inlined]
  (::Chain{Tuple{Dense{typeof(identity), Matrix{Float32}, Vector{Float32}}, Dense{typeof(identity), Matrix{Float32}, Vector{Float32}}, Dense{typeof(identity), Matrix{Float32}, Bool}}})(x::Matrix{Vector{Float64}})
    @ Flux ~/.julia/packages/Flux/uCLgc/src/layers/basic.jl:51
  dr(a::Vector{Float64}, z::Vector{Float64}, e_r::Vector{Float64}, R::Vector{Float64}, C::Vector{Float64})
    @ Main ./In:19
  top-level scope
    @ In:5
```

The error is in the neural network part of my code, and I don't have any addition there, so I don't know what is wrong.
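One clue is in the stacktrace itself: the model is being called with a `Matrix{Vector{Float64}}`. Applying `cat` to a single vector-of-vectors produces a matrix whose *elements* are whole vectors, and `Dense` then tries to add a `Float32` bias to a `Vector{Float64}`, which is exactly the `+` MethodError shown. A minimal sketch with two toy state vectors and a small stand-in network (not the model from the post):

```julia
using Flux

a = [0.1, 0.2, 0.3]
z = [0.0, 0.1, 0.2]

# cat over ONE vector-of-vectors: a 2×1 matrix whose elements are Vectors,
# so Dense ends up computing Float32 + Vector{Float64} and throws.
s_bad = cat([a, z], dims=2)
size(s_bad)                              # (2, 1), eltype Vector{Float64}

# One way to get a plain numeric matrix instead: hcat the vectors
# themselves, then transpose, since Flux expects features × batch.
s = permutedims(reduce(hcat, [a, z]))    # 2×3 Matrix{Float64}

toy_model = Chain(Dense(2 => 4), Dense(4 => 2, bias=false))
x = toy_model(Float32.(s))               # 2×3 output, one column per sample
```

With the features × batch layout the two outputs are rows, so the slices would become `x[1, :]` and `x[2, :]` rather than `x[:, 1]` and `x[:, 2]`.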

Two more things: try to format your posts so that the code is easier to read.

Also, please tell us if you're using ChatGPT (and try to avoid using its code if you don't understand it).
