# Input to Neural Network

Hi,
I am new to both Julia and deep learning.

Suppose we define the following network:

```julia
using Flux

nn_width = 10
m = Chain(Dense(1, nn_width, tanh),
          Dense(nn_width, nn_width, tanh),
          Dense(nn_width, 1))

input1 = Vector(-2:0.01:2)'   # 1×401 row matrix (adjoint)
input2 = Vector(-2:0.01:2)    # 401-element vector

m(input1)  # this works
m(input2)  # this doesn't work
```



Q-1 What’s the reason for this behavior? My initial expectation was that neither would work.

Q-2 How can we do something like `m.(input2)`? `map(m, input2)` doesn’t work.


The model `m` as defined takes a one-dimensional vector as input and outputs a one-dimensional vector:

```julia
julia> m([1.2])
1-element Vector{Float64}:
 -0.7267242113929453
```


Now,

1. Why does `m(input2)` not work?
You are passing a 401-dimensional vector to a model whose first layer, `Dense(1, nn_width, tanh)`, expects one-dimensional input.
2. Why does `m.(input2)` or `map(m, input2)` not work?
Both call the model on each element of `input2`, i.e., on a scalar, yet a Flux model requires an array as input. The following work instead:

```julia
map(x -> m([x]), input2)
m.(eachrow(input2))
```

Passing inputs one at a time in this fashion is inconvenient, though, and the output is a nested vector of vectors. Flux therefore allows passing a matrix of shape input dimension × batch size to compute the outputs on a whole batch of inputs at once.
3. Why does `m(input1)` work?
`input1` has shape (1, 401) and is therefore interpreted as a batch of 401 one-dimensional inputs. Accordingly, it does work: the model computes the outputs for all 401 inputs and collects them into a matrix of shape output dimension × batch size.
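To make the batching concrete, here is a minimal sketch (reusing the definitions from the original post; the name `batch` is introduced here for illustration) showing that reshaping `input2` into a 1×401 matrix yields the same kind of batch that `input1` already is:

```julia
using Flux

nn_width = 10
m = Chain(Dense(1, nn_width, tanh),
          Dense(nn_width, nn_width, tanh),
          Dense(nn_width, 1))

input2 = Vector(-2:0.01:2)      # 401-element vector

# Reshape into a matrix with one row (the input dimension)
# and 401 columns (the batch dimension):
batch = reshape(input2, 1, :)   # 1×401 Matrix

out = m(batch)
size(out)                       # (1, 401): one output per batch element
```

This is equivalent to calling `m(Vector(-2:0.01:2)')`, since the adjoint is also a 1×401 matrix; the reshape just makes the "input dimension × batch size" layout explicit.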