# Select elements from each row in a matrix

Hi,

Is there a good way to select/index a set of elements from each row in a matrix, where the selected elements might be different for each row?

Through REPL brute-forcing I managed to get this abomination to do what I want:

```julia
selectcols(mat, filterfun) = mapfoldl(rowtup -> permutedims(filterfun(rowtup)), vcat, enumerate(eachrow(mat)))

A = reshape(collect(1:20), 4, 5)
4×5 Array{Int64,2}:
 1  5   9  13  17
 2  6  10  14  18
 3  7  11  15  19
 4  8  12  16  20

# Just a dummy test function: select all but element 1 from row 1, all but element 2 from row 2, etc.
select(indrow) = indrow[2][setdiff(1:end, indrow[1])]

selectcols(A, select)
4×4 Array{Int64,2}:
 5   9  13  17
 2  10  14  18
 3   7  15  19
 4   8  12  20
```

But it seems to scale far worse than linearly, maybe because the repeated `vcat` creates a lot of temporary arrays.

I would like it if the solution was applicable to higher dimensions as well, although a generic method that handles all cases is not needed. For example, I think that the above method can be rewritten to a 3D version and a 4D version and that would be good enough.
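For reference, one lower-allocation 2-D variant is to probe the first row for the output width, then preallocate and fill row by row. This is a sketch, assuming `filterfun` returns the same number of elements for every row (`selectcols2` is just an illustrative name):

```julia
# Probe row 1 to learn the output width, then preallocate and fill.
function selectcols2(mat, filterfun)
    first_sel = filterfun((1, view(mat, 1, :)))
    out = similar(mat, size(mat, 1), length(first_sel))
    out[1, :] = first_sel
    for i in 2:size(mat, 1)
        out[i, :] = filterfun((i, view(mat, i, :)))
    end
    return out
end
```

This avoids the repeated `vcat`, so it allocates one output array plus whatever `filterfun` returns per row.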

What do you want to do with the selected elements? You could put them into a new array as in your example, and there are certainly more efficient ways to do that, but if your intention is to do some additional processing on this data then you might be able to work in-place … some more context here would be helpful.


I’m interested in putting the result in a new array. Basically the same API as in the example.

Yes, but why? What is the context? Since you’re asking about efficiency, in Julia we usually are also concerned about creating new arrays unnecessarily.

Will each row of the output always contain the same number of elements?

Hi,

Sorry for the lack of context. I just wanted to avoid superfluous information (and I’m also a little bit worried that people will find what I’m trying to do too silly and just ignore me).

I’m writing a little Swiss-army-knife kind of library of mutation operations for neural networks, and I’m thinking that one of the tools should be changing the shape of the filter kernels in convolutional layers.

The weights of a 2D conv layer are typically structured as a 4D array, and in this case (Flux) the first two dimensions are the kernel sizes. So given an array of size `w,h,nout,nin` I want to change it into a new array of size `w',h',nout,nin`, where the mapping from `w,h` to `w',h'` can be decided through some user-defined policy, hopefully with a not-too-clunky API.

I guess this also answers @StefanKarpinski’s question about output having the same number of elements (yes, it is fine to crash if the policy does not always select the same number of elements).

I know it is probably a bad idea to try to do that to an already trained model, but I want to be able to play with evolving a population of models while they are training (which in itself might be a bad idea, of course), possibly even learning which operations “work” and which don’t.

I haven’t given much thought to what would be the least damaging way to shrink the kernel size, but I want to try something like starting from the edges and removing the row/col with the smallest abssum for each kernel. This might very well be worse than just picking the one edge with the lowest abssum across all kernels, as it will mess up the phasing between kernels (which probably means something to the activations), but I want to try it out.
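For what it’s worth, that second variant (picking the one edge with the lowest abssum across all kernels) could be sketched like this; the function name is illustrative, not from any library:

```julia
# Score the four edges of a w×h×nout×nin weight array by summed absolute
# value across all kernels, and drop the weakest edge. Illustrative sketch.
function drop_weakest_edge(w::AbstractArray{<:Real,4})
    scores = [
        sum(abs, view(w, 1, :, :, :)),           # first row
        sum(abs, view(w, size(w, 1), :, :, :)),  # last row
        sum(abs, view(w, :, 1, :, :)),           # first column
        sum(abs, view(w, :, size(w, 2), :, :)),  # last column
    ]
    k = argmin(scores)
    k == 1 && return w[2:end, :, :, :]
    k == 2 && return w[1:end-1, :, :, :]
    k == 3 && return w[:, 2:end, :, :]
    return w[:, 1:end-1, :, :]
end
```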

Anyways, given how many nice convenience things Julia has for working with arrays, I was first hoping that there might be some CartesianIndex magic or similar which could do this for me. If the answer given the above is to preallocate the array and write to it in a for loop I think I can work it out.
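The preallocate-and-loop version for the 4-D case might look roughly like this. It is only a sketch: `shrink_kernels`, `dropmin`, and the policy signature are made up, and it assumes the policy returns index sets of the same lengths for every kernel:

```julia
# Apply a per-kernel (rows, cols) selection policy to a w×h×nout×nin array.
function shrink_kernels(w::AbstractArray{<:Real,4}, policy)
    rows1, cols1 = policy(view(w, :, :, 1, 1))  # probe one kernel for the output size
    out = similar(w, length(rows1), length(cols1), size(w, 3), size(w, 4))
    for n in axes(w, 4), o in axes(w, 3)
        rows, cols = policy(view(w, :, :, o, n))
        out[:, :, o, n] = view(w, rows, cols, o, n)
    end
    return out
end

# Example policy: drop the single row and column with the smallest abssum.
dropmin(k) = (setdiff(1:size(k, 1), argmin(vec(sum(abs, k, dims=2)))),
              setdiff(1:size(k, 2), argmin(vec(sum(abs, k, dims=1)))))
```

If the policy happens to pick the same index sets for every kernel, plain indexing already does the whole job in one step, e.g. `w[rows, cols, :, :]`.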

I’m also aware that conv layers are seldom large enough for an inefficient algorithm to be a significant problem (in the context I’m talking about here). I’m just out to learn something (and maybe to avoid the perhaps unlikely embarrassment of someone pointing out a stupidly inefficient piece of code in my repo).