I have a problem that I have been solving with NLopt, but I am interested in converting it into a JuMP model to try other solvers and also get some autodifferentiation. The catch is that the objective function is a black box which calls a CUDA kernel. However, I have a pure Julia function for the same calculation that could be autodifferentiated. Is it possible to register the Julia version for differentiation, but solve using the CUDA version?
I’m not sure I understand the question, but you’re free to register different methods to evaluate the function and the gradient. You could generate a gradient method by applying ForwardDiff/ReverseDiff to your pure Julia function. It doesn’t sound like JuMP’s `autodiff = true` is appropriate here.
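
For example, something along these lines might work with JuMP's user-defined function interface. This is just a sketch: `julia_objective` and `cuda_objective` are placeholders for your two implementations, and Ipopt is an arbitrary solver choice.

```julia
using JuMP, Ipopt, ForwardDiff

# Placeholder for the pure-Julia implementation of the objective
julia_objective(x...) = sum(xi^2 for xi in x)

# Placeholder for the CUDA-backed implementation; in practice this would
# launch the GPU kernel and return the same value as julia_objective
cuda_objective(x...) = julia_objective(x...)

# In-place gradient, computed with ForwardDiff on the pure-Julia version.
# JuMP expects a multivariate gradient callback with the signature
# ∇f(g, x...) that fills `g` in place.
function ∇obj(g, x...)
    ForwardDiff.gradient!(g, y -> julia_objective(y...), collect(x))
    return
end

n = 3
model = Model(Ipopt.Optimizer)
@variable(model, x[1:n])

# Register the CUDA version for function evaluations and the
# ForwardDiff-based gradient for derivatives
register(model, :obj, n, cuda_objective, ∇obj)

@NLobjective(model, Min, obj(x[1], x[2], x[3]))
optimize!(model)
```

JuMP will then call `cuda_objective` whenever it needs a function value and `∇obj` whenever it needs a gradient, so the CUDA kernel itself is never differentiated.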