Strange broadcast behaviour on 0-dimensional `Array`?

I stumbled over a small thing that surprised me. Assume a scenario where we want to be able to use mutating functions, but we are handed plain floats. We could just turn them into 0-dimensional arrays using `fill(0.0)`. I was surprised by the following behaviour:
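For context, a minimal sketch of the motivating pattern (the function name `double!` is hypothetical): a plain `Float64` is immutable, so a mutating function needs a container, and `fill(x)` wraps the scalar in a 0-dimensional `Array` that can be updated in place.

```julia
# A mutating function needs a mutable container; fill(x) provides one
# with zero dimensions, so no "fake" extra axis is introduced.
function double!(p::AbstractArray)
    p .= 2 .* p   # in-place broadcast works for any array shape, including 0-dim
    return p
end

p = fill(1.5)     # 0-dimensional Array{Float64, 0}
double!(p)
p[]               # extract the scalar again: 3.0
```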

What I am used to doing (for arbitrary arrays, but here just for 1-dimensional arrays with one element):

```julia
julia> 1.0 .* [1.0]
1-element Vector{Float64}:
 1.0

julia> 1.0 * [1.0]
1-element Vector{Float64}:
 1.0
```

(or similarly `.+` and `+` when I add two of these).

However, when I use `fill` (0-dimensional arrays) I get

```julia
julia> 1.0 * fill(1.0)
0-dimensional Array{Float64, 0}:
1.0

julia> 1.0 .* fill(1.0) # <-- Why?
1.0
```

The same happens for `+` vs. `.+` on `fill(0.0)`s, where especially

```julia
julia> fill(0.0) .+ fill(1.0)
1.0
```

seems quite surprising to me. I would have expected a 0-dimensional `Array{Float64, 0}` for both broadcast operations here.

Can someone explain this behaviour?

For the larger context, see this PR, where I try to provide an interface for optimization on the real line (where a user might think in `p = 1.0` floats and not `p = [1.0]` arrays).

Possibly relevant: broadcast should not drop zero-dimensional results to scalars · Issue #28866 · JuliaLang/julia · GitHub
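As the linked issue discusses, broadcast currently treats 0-dimensional arrays like scalars, so a broadcast whose arguments are all scalars or 0-dimensional arrays collapses to a plain scalar. If the goal is to keep the 0-dimensional container (e.g. for a mutating interface), one workaround is in-place broadcasting with `.=` into a preallocated destination. A sketch:

```julia
a = fill(1.0)       # 0-dimensional Array{Float64, 0}
b = fill(2.0)

c = a .+ b          # out-of-place broadcast: c is a plain Float64 (3.0)

dest = fill(0.0)    # preallocated 0-dimensional destination
dest .= a .+ b      # in-place broadcast keeps the container
dest[]              # 3.0, still stored inside a 0-dimensional Array
```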


Wow! Super-fast reply, thanks!
So I will probably stick to using `[1.0]` instead of `fill(1.0)`, since those 1-dimensional arrays behave much more the way I expect arrays to behave.
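For completeness, a quick sketch of why the 1-element vector is the less surprising container here: both `*` and `.*` return a `Vector`, so downstream code never has to special-case a scalar result.

```julia
p = [1.0]           # 1-element Vector{Float64}

r1 = 2.0 * p        # [2.0], a Vector
r2 = 2.0 .* p       # [2.0], also a Vector: no collapse to a scalar

p .= 2.0 .* p       # in-place mutation works as for any array
p[1]                # 2.0
```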