To normalize or not to normalize a Neural or Universal ODE

Hey folks. I have been working on a Neural ODE and a Universal ODE for a 6-compartment model. One common neural network practice is to normalize the inputs to lie between 0 and 1. For example, in a convolutional neural network, a 3-channel 8-bit image with values from 0 to 255 per channel is normalized to the range [0, 1] per channel. In a Neural ODE or Universal ODE context, this would mean normalizing the actual training data to [0, 1] for each compartment. As a result of normalizing, the outputs or predictions would also lie in [0, 1].
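To make the idea concrete, here is a minimal sketch of per-compartment min-max normalization. This is plain Python/NumPy rather than Julia, and the function names (`minmax_normalize`, `minmax_denormalize`) are mine, not from DiffEqFlux; the point is just that each compartment is scaled independently, and the per-compartment min and range are kept so predictions can be mapped back to the original units.

```python
import numpy as np

def minmax_normalize(data):
    """Scale each compartment (row) of `data` to [0, 1] independently.

    `data` is assumed to be shaped (n_compartments, n_timepoints).
    Returns the scaled data plus the per-compartment min and range,
    which are needed to map predictions back to the original scale.
    """
    lo = data.min(axis=1, keepdims=True)
    span = data.max(axis=1, keepdims=True) - lo
    span[span == 0] = 1.0  # guard against constant compartments
    return (data - lo) / span, lo, span

def minmax_denormalize(scaled, lo, span):
    """Invert minmax_normalize so predictions return to original units."""
    return scaled * span + lo

# Example: two compartments with very different magnitudes
u = np.array([[0.0, 50.0, 100.0],
              [1.0,  2.0,   3.0]])
u_scaled, lo, span = minmax_normalize(u)
# both rows are now scaled to [0, 1] on their own range
```

Training the network on `u_scaled` and applying `minmax_denormalize` to its predictions keeps the loss on a comparable scale across compartments, which is the usual motivation for this kind of normalization.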

Now, looking at the Universal ODE GitHub repo and the NeuralODE examples in DiffEqFlux, I have not seen anyone normalize their Neural ODE / Universal ODE training data and predictions. So I was wondering whether anyone knows if normalizing helps the performance of these models, or if it makes no difference. Thanks.

It probably helps. We just haven’t done much of it.


I will give it a try and see what happens. It is worth an experiment.