Error in mul!(c,a,b) if size(b) = (n,1)

Just to put in context. If I read using DelimitedFiles the data from a file with a single column of data, with, for example,

x = readdlm("./data.txt")

x is returned as an Array{Float64,2} with sizes (n,1). If I try directly to use this array in some LinearAlgebra function, I get an error, which is described by a minimal working example below.

My question is: should this be filed as a bug somewhere? Is it a bug that DelimitedFiles returns an array with dimension 2, or should LinearAlgebra functions be able to deal with that peculiarity? Or, more globally, is there any reason for the existence of “multidimensional arrays” which are not really multidimensional?

The issue:

julia> using LinearAlgebra

julia> a = ones(2) ; c = zeros(1);

julia> b = ones(2);

julia> mul!(c,transpose(a),b)
1-element Array{Float64,1}:
 2.0

julia> b = ones(2,1)
2×1 Array{Float64,2}:
 1.0
 1.0

julia> mul!(c,transpose(a),b)
ERROR: MethodError: no method matching mul!(::Array{Float64,1}, ::Transpose{Float64,Array{Float64,1}}, ::Array{Float64,2}, ::Bool, ::Bool)

The documentation (subtly) mentions that it reads a Matrix, so this is expected. I think this is an edge case, since most delimited files would be 2D.

You can reshape(a,:) to make it 1D before doing anything.

1 Like

Seems obvious that readdlm must return a 2d array, having one column is just a special case.

Use vec(a).
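For what it’s worth, vec only reinterprets the shape and does not copy, so it is essentially free. A quick sketch:

```julia
A = ones(3, 1)   # 3×1 matrix, the shape readdlm returns for one column
v = vec(A)       # 3-element vector view: vec reshapes without copying
v[1] = 7.0       # mutating the view...
A[1, 1] == 7.0   # ...also changes A, confirming no copy was made
```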

I’m a bit surprised that mul! doesn’t work, though.

1 Like

Yes, sure, converting the data is easy, that is not the point. It just took me a while to figure out what was going on, because that call to mul! was inside another package, etc. Once I figured out what was going on, the problem was solved in practice.

I wonder if mul! should take care of that properly. Is there any difference between those two representations of a vector for any computation, or does the method return an error only because of the type definition, such that everything would work if the accepted type were less specific?

My interpretation is that the problem is not in DelimitedFiles, nor in LinearAlgebra's mul!. Rather, it’s in the allocation of the in-place output c.

This works:

julia> a = ones(2)     # 2-vector, transpose(a) yields 1x2
julia> b = ones(2)     # 2-vector, treated like 2x1
julia> c2 = zeros(1,1) # 1x1 array 
julia> mul!(c2, transpose(a), b) # expect 1x1
1×1 Array{Float64,2}:
 2.0

The output of mul!(C,A,B) should be size(A,1) by size(B,2), which here is 1x1. Your first example is a bit tricky: ones(2) is a vector with size (2,), and the output size is indeed (1,), i.e. a 1-vector. In your second example, the output should be (1,1) because b is 2x1. The first b is a vector and the second is a matrix. It just so happens they are treated nearly the same and yield the “same” result, except that one output retains its vector-ness and the other its matrix-ness.

You didn’t mention what you plan to do with DelimitedFiles, but there is no bug in the packages. It’s correct to get back a (n,1) and that means you should allocate a 1x1 and not (1,). Or maybe just use an inner product like dot(a,b) and get back a scalar (no need for transpose here). And if you want to handle more than one column of data, you’ll definitely want to follow mul!'s allocation expectations.
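To illustrate the three shapes involved, a minimal sketch:

```julia
using LinearAlgebra

a = ones(2)
b_vec = ones(2)     # shape (2,): output of mul! must then have shape (1,)
b_mat = ones(2, 1)  # shape (2,1): output must then have shape (1,1)

c1 = mul!(zeros(1), transpose(a), b_vec)     # 1-element vector [2.0]
c2 = mul!(zeros(1, 1), transpose(a), b_mat)  # 1×1 matrix
s  = dot(a, b_vec)                           # plain scalar, no transpose needed
```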

I don’t think there’s a bug in mul!, but one might imagine a version that checks whether it’s been given an (N,) output when it expects (N,1) and fills the output appropriately. But Julia tries to maintain a better distinction and consistency between vectors and matrices than Matlab, where the indistinction is not always welcome.
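For illustration only, such a shape-forgiving wrapper might look like the sketch below. mul_flex! is a hypothetical name, not part of LinearAlgebra:

```julia
using LinearAlgebra

# Hypothetical wrapper: if B is an (n,1) matrix but C is a plain vector,
# view B as a vector (vec is a no-copy reshape) so mul! accepts the pair.
function mul_flex!(C::AbstractVector, A, B::AbstractMatrix)
    size(B, 2) == 1 || throw(DimensionMismatch("B must have one column"))
    mul!(C, A, vec(B))
end
mul_flex!(C, A, B) = mul!(C, A, B)  # everything else: plain mul!

a = ones(2); c = zeros(1)
mul_flex!(c, transpose(a), ones(2, 1))  # works: c == [2.0]
```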

Edit: Fixed some markdown

2 Likes

I am not using directly any of these functions. I (actually a master student) was dealing with data sets which sometimes are multidimensional, sometimes not, and using the LsqFit package, which internally uses mul!, and she was getting this error. We do not have any other solution to the problem except converting the data before calling the LsqFit routines.

Are the actual allocations (the memory layout) of zeros(2) any different from zeros(2,1)? From my ignorant point of view, it seems that if the underlying functions only need to receive the addresses of those arrays, implementing a variant that does not throw an error in these cases should be fairly simple.

Which function in LsqFit? And who is allocating, the package or your student? In general it sounds appropriate for sometimes multidimensional data to be read as 2d as @DNF explains. The most general analysis would not special-case one column of data vs many.

The paradigm in Julia is not to think of pointers and memory as in C. Even if the memory placement is the same, the type info is different, and should not match ambiguously as in Matlab or JavaScript. (Some would call it “weak typing” and would prefer to error out. Some insist the error should occur at compile time…) You could indeed use dispatch to make methods that match two separate cases if you really wanted, but that’s probably not the best or simplest approach.
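As a sketch of what dispatching on the two cases could look like (prepare_data is a hypothetical helper name, not from LsqFit or Base):

```julia
# One method per dimensionality: vectors pass through, single-column
# matrices are viewed as vectors without copying.
prepare_data(y::AbstractVector) = y
function prepare_data(y::AbstractMatrix)
    size(y, 2) == 1 || throw(ArgumentError("expected a single column"))
    vec(y)
end

prepare_data(ones(3))     # 3-element Vector, unchanged
prepare_data(ones(3, 1))  # also a 3-element Vector, via the matrix method
```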

Can you provide more specifics on how LsqFit Is used?

Yes, sure, here is a minimal example (with the only difference that the y data would be the data of my student):

julia> x = collect(1:10);

julia> y = rand(10,1);

julia> using LsqFit

julia> @. model(x, p) = p[1]*exp(-x*p[2])
model (generic function with 1 method)

julia> p0 = rand(2)
2-element Array{Float64,1}:
 0.4058105988552596
 0.5830419304135694

julia> fit = curve_fit(model, x, y, p0)
ERROR: MethodError: no method matching mul!(::Array{Float64,1}, ::LinearAlgebra.Transpose{Float64,Array{Float64,2}}, ::Array{Float64,2}, ::Bool, ::Bool)
Closest candidates are:


1 Like

It looks like LsqFit actually only accepts vectors; I can’t make any examples work with matrices (e.g. even if model(x,p) and y match, or with just reshape(p,:,1) alone).

If so, it should probably call vec for you when possible, or else fail somewhere earlier than mul!.

I agree with that. The problem, however, is that in that case it would either modify the input data structure, which does not seem desirable, or copy the data, which is also sub-optimal.

vec should be almost free, it doesn’t copy. Changing this line seems to work:

julia> @eval LsqFit function curve_fit(model, xdata::AbstractArray, ydata::AbstractArray, p0::AbstractArray; inplace = false, kwargs...)
           check_data_health(xdata, ydata)
           # construct the cost function
           T = eltype(ydata)

           if inplace
               f! = (F,p)  -> (model(F,xdata,p); @. F = F - ydata)
               lmfit(f!, p0, T[], ydata; kwargs...)
           else
               f = (p) -> vec(model(xdata, p) .- ydata)
               lmfit(f, p0, T[]; kwargs...)
           end
       end
curve_fit (generic function with 6 methods)

julia> fit = curve_fit(model, x, y, p0)
LsqFit.LsqFitResult{Array{Float64,1},Array{Float64,1},Array{Float64,2},Array{Float64,1}}([0.21569863842289005, -0.1138581876726169], [-0.1323849313529998, 0.20481817486535325, 0.043805073224580104, 0.155411990397395, -0.29901825872037796, -0.329567866582958, 0.37937477638777567, -0.08988966719591829, 0.19722118793395427, -0.12888003824352467], [1.1205931996875342 -0.2417104274027374; 1.2557291191811646 -0.5417181224843365; … ; 2.786325461960339 -5.409059477823121; 3.1223373647917025 -6.734839186952479], true, Float64[])

julia> fit2 = curve_fit(model, x, y, reshape(p0,:,1)) # this might be harder
ERROR: MethodError: no method matching levenberg_marquardt(::NLSolversBase.OnceDifferentiable{Array{Float64,1},Array{Float64,2},Array{Float64,2}}, ::Array{Float64,2})
1 Like

Yet I am not convinced that there is an important reason for one-dimensional matrices not to be treated in linear algebra operations exactly the same as vectors. I most sincerely accept that there may be such a reason, and that I may not be able to understand it because of my lack of knowledge, in which case the only reasonable answer to me would be “deal with that conversion of dimensions yourself”. But if there is no such profound reason, I think the behavior of LinearAlgebra functions would be less prone to confusion if they treated the two as equivalent.

It appears that curve_fit expects a 1d array, but data are read as 2d. If you’re always reading one column, then it may be appropriate to do curve_fit(model, x, vec(y), p0). But more generally, if you sometimes have multidimensional y, there’s probably a way to do this without special-casing the 1d case. That depends on the specifics, e.g. is x always 1d and y sometimes not? And is the idea to build separate models for each column of y, or a single model and a single cost to be minimized? Maybe something like curve_fit(model[i], x, y[:,i], p0) or some such.

Your student was understandably confused. It could be argued that LsqFit could use additional methods to match 2d representations of 1d data. But I would suggest it just needs clearer documentation that curve_fit expects 1d arrays. (If you feel strongly enough either way, consider filing an issue.) Nevertheless I believe there’s no problem with mul! or readdlm.

1 Like

The reason is in essence that a dimension being size 1 is only known at runtime, while the types are known at compile time. These two perform the same calculation, but the first is compiled to a call to matrix multiplication, because the same method must also handle rand(100,3) * rand(3,100), while the second is a call to dot (I think).

julia> rand(1,3) * rand(3,1)
1×1 Array{Float64,2}:
 0.7250260513729658

julia> rand(3)' * rand(3)
0.8451500972396014
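The type difference is visible directly, a small sketch:

```julia
typeof(rand(1, 3) * rand(3, 1))  # Array{Float64,2}: the result type follows
                                 # from the input types, not the runtime sizes
typeof(rand(3)' * rand(3))       # Float64: adjoint-vector times vector
                                 # is a plain scalar
```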

But on the narrower issue of LsqFit, you should open an issue. It claims to accept arrays, but really only accepts ::AbstractVector; at the very least this should be correctly documented, but ideally it would accept higher-dimensional arrays and just work when it’s easy to do so (as in my patch above).

3 Likes

I might agree with that view, but I think it is somewhat hard to document the fact that an array of size (n,1) is not a 1D array. This will likely lead to confusion anyway.

Specifically about our issue: we have that solved already, by using vec when we need to.

If you really want an in-depth understanding of this behavior, you should read https://github.com/JuliaLang/julia/issues/4774. It is one of the notorious issues, and it’s basically where we realized that no one knows what vectors actually are. Specifically, there is an isomorphism between row matrices and vectors, as well as between column matrices and vectors, but not one between row and column matrices. The result is that if you want a type system to work, you have to force users to be explicit about the isomorphism when they want it.
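Concretely, the two isomorphisms give different result shapes, which is why the user has to pick one explicitly. A small sketch:

```julia
v = [1.0, 2.0, 3.0]
v' * v                 # row-times-column isomorphism: a plain scalar, 14.0
reshape(v, 1, :) * v   # explicit 1×3 matrix times vector: a 1-element Vector
reshape(v, :, 1) * v'  # column matrix times row: a full 3×3 matrix
```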

3 Likes

I feel so much better now :rofl: :rofl:

Are you coming from Matlab? There, it is indeed a great feature that there is no distinction between a scalar, a 1-element vector, and a 1x1 matrix, but it is also a great source of devilish bugs and tricky special cases. In most other languages there is a huge difference; e.g. in C, int a[3][1] is not the same as int a[3].

It was hotly debated whether and how Julia should deal with this, ultimately leading to the present situation. I won’t try to convince you it is correct, only that it was very carefully considered and is consistent with most other languages. It’s a pain point for people coming from Matlab, but one worth getting past.

No, from Fortran, where everything is a column vector :slight_smile:

(i. e, this works:)

real function mysum(n,x)
  real :: x(*)
  mysum = 0.
  do i = 1, n
    mysum = mysum + x(i)
  end do
end

program main
  real :: x(3,1), mysum
  x(1,1) = 1.
  x(2,1) = 1.
  x(3,1) = 1.
  write(*,*) mysum(3,x)
end program main

I am completely fine with understanding that this behavior is a compromise among many different language properties which, unfortunately, ends up with this corner-case strangeness, if that is the case.