Hi,
sometimes I want to benchmark parts of my neural networks, especially the forward pass, using TimerOutputs.jl, but I don’t want to comment out all the timing when computing gradients.
Currently it crashes: the AD cannot differentiate through the timing macro.
Is there a way to define an adjoint for this macro so that it is effectively ignored during backprop and the gradient is just delegated to the timed function?
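For context, here is the kind of wrapper I have in mind, as a sketch only: I'm assuming Zygote as the AD, `timeit_fwd` is a name I made up, and `Zygote._pullback`/`__context__` are internal Zygote API, so this may break across versions. The idea is to time only the forward pass and hand the pullback straight through:

```julia
using TimerOutputs, Zygote

const to = TimerOutput()

# Hypothetical helper: behaves like `@timeit to label f(args...)`
# when called normally.
timeit_fwd(to, label, f, args...) = @timeit to label f(args...)

# Custom adjoint: time the forward pass, but make the macro invisible
# to backprop by delegating the pullback to the wrapped function.
Zygote.@adjoint function timeit_fwd(to, label, f, args...)
    y, back = @timeit to label Zygote._pullback(__context__, f, args...)
    # No gradients for the timer or the label; `back` already covers
    # both `f` (e.g. model parameters in a closure) and `args`.
    return y, Δ -> (nothing, nothing, back(Δ)...)
end
```

Then something like `Zygote.gradient(x -> timeit_fwd(to, "fwd", model, x), x)` would record the forward time without the macro appearing in the backward pass. I don't know if this is the idiomatic way, or whether a `ChainRulesCore.rrule` would be preferable.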
Thanks.