using Distributions
using FillArrays
using LinearAlgebra
Nz = 5
My = 10
# Reshape a length-(My*Nz) MvNormal into an My × Nz matrix-variate distribution
a = reshape(MvNormal(Fill(0.5, My*Nz), 0.01 * I), (My, Nz))
axes(a)
axes(a,1)
Calling axes(a) works fine, but specifying a dimension raises a MethodError:
julia> axes(a)
(Base.OneTo(10), Base.OneTo(5))
julia> axes(a,1)
ERROR: MethodError: no method matching axes(::MatrixReshaped{Continuous, MvNormal{Float64, PDMats.ScalMat{Float64}, Fill{Float64, 1, Tuple{…}}}}, ::Int64)
Closest candidates are:
axes(::Core.SimpleVector, ::Integer)
@ Base essentials.jl:784
axes(::SparseArrays.ReadOnly, ::Any...)
@ SparseArrays ~/.julia/juliaup/julia-1.10.4+0.x64.linux.gnu/share/julia/stdlib/v1.10/SparseArrays/src/readonly.jl:20
axes(::Base.Broadcast.Broadcasted{<:Any, <:Tuple{Vararg{T, N}} where T}, ::Integer) where N
@ Base broadcast.jl:239
...
Stacktrace:
[1] top-level scope
@ REPL[4]:1
Some type information was truncated. Use `show(err)` to see complete types.
The problem is that the type of a is MatrixReshaped, which, despite the name, is not an ordinary reshaped array: it is defined by Distributions.jl as a subtype of Distribution and, in particular, is not a subtype of AbstractArray.
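A quick way to check this in a session (the results simply restate the subtyping described above):

a isa AbstractArray    # false, so AbstractArray-only methods do not apply
a isa Distribution     # true, MatrixReshaped is a matrix-variate distribution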
axes(a) works because the default axes(a) method in Base works for any argument that defines size(a), whereas the default axes(a, d) method requires a::AbstractArray (presumably because it implements the special AbstractArray behavior where d can be >= ndims(a), in which case it returns Base.OneTo(1) so that trailing indices of 1 are allowed).
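Paraphrasing the relevant definitions from Base (simplified, not the exact source, and not meant to be re-evaluated in your session):

# Generic fallback: only needs size(A), so it also works for MatrixReshaped
axes(A) = map(Base.OneTo, size(A))

# Dimension-specific method: restricted to AbstractArray, hence the MethodError;
# trailing dimensions (d > N) return the singleton axis Base.OneTo(1)
axes(A::AbstractArray{T,N}, d) where {T,N} = d <= N ? axes(A)[d] : Base.OneTo(1)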
There are two possible fixes:
1. have Distributions.jl define the missing axes(::MatrixReshaped, d::Integer) method itself, or
2. add a fallback method axes(A, d::Integer) = axes(A)[d] to Base.
Option (1) will be quicker to get in the short term, and is a good thing anyway. Basically, if Distributions.jl wants to define its own “array-ish” type which is not an AbstractArray, they are responsible for defining the applicable array methods.
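In the meantime, a user-side workaround is easy. The sketch below assumes the MatrixReshaped name shown in the error message is still accessible as Distributions.MatrixReshaped in your version, and note that defining the method outside Distributions.jl is technically type piracy:

# Simplest workaround: index the tuple that axes(a) already returns
axes(a)[1]    # Base.OneTo(10), i.e. what axes(a, 1) would give

# Or supply the missing method yourself (type piracy outside Distributions.jl)
Base.axes(d::Distributions.MatrixReshaped, i::Integer) = axes(d)[i]
axes(a, 1)    # now works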
Thank you for clarifying. My confusion arises from trying to optimize a Turing model using this tip.
In my model I need to calculate a response function like
response(θ, sa) = sum(θ .* sa)
where sa is a Vector.
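For concreteness, a tiny sanity check of response on dummy data (the vectors here are made up; in the model each slice must have the same length as θ_guess, i.e. My*Nz):

θ_test = rand(My * Nz)
sa_test = rand(My * Nz)          # stand-in for one row of sa_matrix
response(θ_test, sa_test)        # a scalar, equivalent to dot(θ_test, sa_test)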
When I define my model as
@model function forward_model(signal, sa_matrix)
    # Elementwise uniform priors on the flattened parameter field
    θ_guess = Array{Float64}(undef, My*Nz)
    for ix ∈ eachindex(θ_guess)
        θ_guess[ix] ~ Uniform(0.0, 1.0)
    end
    # Set variance prior
    σ² ~ truncated(Normal(0.0, 0.01); lower=0)
    # Forward model: one response value per row of the sensitivity matrix
    μ = [response(θ_guess, s) for s ∈ eachslice(sa_matrix, dims=1)]
    signal ~ MvNormal(μ, σ² * I)
end
it works fine. The indexing errors come up when I try to use a multivariate distribution instead: