The context of this question is an algorithm called iteratively reweighted least squares (IRLS), used to fit generalized linear models (GLMs). At each iteration, vectors of the mean responses, μ, the unit deviances, dev, the working weights, wwt, and the working residuals, wres, are updated from a response vector, y, and a linear predictor, η.
These updates can be expressed as scalar evaluations to be mapped over the vectors. An example for a GLM with a Bernoulli response and the logit link function (for which the inverse link is the logistic function) can be run with
Bernoulliupdate.jl (934 Bytes)
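The attached file is not reproduced here, but for a Bernoulli response with the logit link the scalar update typically computes the following standard IRLS quantities. This is a hedged sketch: the function name, argument order, and return shape are assumptions and may differ from the actual Bernoulliupdate.jl.

```julia
# Hypothetical sketch of the scalar update (names assumed, not taken
# from the attached Bernoulliupdate.jl).
function bernoulliupdate(y, η)
    μ = inv(1 + exp(-η))                            # inverse link: logistic
    dev = -2 * (y * log(μ) + (1 - y) * log1p(-μ))   # unit deviance
    wwt = μ * (1 - μ)                               # working weight
    wres = (y - μ) / wwt                            # working residual
    (μ = μ, dev = dev, wwt = wwt, wres = wres)
end
```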
Is there a way of writing this update using dot-vectorization? I tried expressions like
(μ, dev, wwt, wres) .= Bernoulliupdate.(y, η)
to update four vectors at once, but that failed under Julia v1.8.5.
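For completeness, an explicit loop with destructuring assignment does work for this kind of multi-output update; e.g. with a stand-in scalar function (not the real Bernoulli update):

```julia
update(y, η) = (η, y - η)        # stand-in for the real scalar update
y = rand(3); η = rand(3)
μ = similar(y); wres = similar(y)
for i in eachindex(y)
    μ[i], wres[i] = update(y[i], η[i])   # destructure the returned tuple
end
```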
It’s probably best/cleanest to gather all your vectors into a single array, basically making a table.
Then you get the desired broadcasting syntax:
# your Bernoulliupdate:
julia> f(x) = (a=x, b=x+1)
# your μ, dev, wwt, wres vectors together:
julia> using StructArrays

julia> A = StructArray(a=rand(2), b=rand(2))
2-element StructArray(::Vector{Float64}, ::Vector{Float64}) with eltype NamedTuple{(:a, :b), Tuple{Float64, Float64}}:
(a = 0.7037272421570892, b = 0.9034695137409692)
(a = 0.1435058682650946, b = 0.6068241303562021)
julia> A .= f.([1,2])
2-element StructArray(::Vector{Float64}, ::Vector{Float64}) with eltype NamedTuple{(:a, :b), Tuple{Float64, Float64}}:
(a = 1.0, b = 2.0)
(a = 2.0, b = 3.0)
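One convenience of this approach: the individual vectors remain directly accessible as fields of the StructArray, so code that expects μ, dev, etc. as plain vectors keeps working. A self-contained sketch (StructArrays is an external package, assumed installed):

```julia
using StructArrays  # external package, assumed available

f(x) = (a = x, b = x + 1)
A = StructArray(a = rand(2), b = rand(2))
A .= f.([1, 2])     # broadcast the multi-output update into both columns
A.a                 # the underlying :a column, a plain Vector{Float64}
```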
Alternatively, and potentially even more conveniently, you can put all vectors into a single array, including y and η:
# your Bernoulliupdate:
julia> f(x) = (a=x.c, b=x.c+1)
julia> A = StructArray(a=rand(2), b=rand(2), c=rand(2))
# instead of broadcasting:
julia> map!(x -> merge(x, f(x)), A, A)
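Put together as a runnable sketch with column names matching the GLM setting (the names and the update function are stand-ins, not the real Bernoulli update): merge overwrites only the fields the update returns, leaving y and η untouched.

```julia
using StructArrays  # external package, assumed available

update(r) = (μ = r.η, wres = r.y - r.η)   # stand-in scalar update
A = StructArray(y = rand(3), η = rand(3), μ = zeros(3), wres = zeros(3))
map!(r -> merge(r, update(r)), A, A)      # overwrite μ and wres in place
```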