You seem to be showing the optimized pulse now, so I’m assuming that part is solved…
(2) The GRAPE does not achieve the desired outcome
Even though I’m not surprised, based on the implementation, you’ll have to be a bit more specific. In what way does it not achieve the desired outcome?
Here is my code
I would strongly recommend not writing scripts with global variables like that. You will want to put your GRAPE implementation (specifically, the loop for iter …) in a function. Global variables in Julia kill performance. Don’t get into that habit, not even for prototyping.
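As a rough illustration (the names `Ω`, `t_list`, `dt` follow the snippets below; the body is schematic, not a drop-in replacement), pulling everything into a function might look like:

```julia
using LinearAlgebra

# Schematic only: keep the optimization state local to a function instead of
# using `global` variables. Everything the loop needs comes in as an argument.
function run_grape(Ω_guess, t_list, dt; iters=1000)
    Ω = copy(Ω_guess)      # local working copy of the guess pulse
    grad_Ω = zero(Ω)
    for iter in 1:iters
        # ... forward/backward propagation, gradient, and pulse update go here ...
    end
    return Ω
end
```

Inside a function, Julia can infer concrete types for `Ω` and `grad_Ω`, which is what makes the loop fast.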
# Start state
global ρ0 = s0 * s0'
Since this is a closed system, why are you working with density matrices? Not that it really matters for a two-level system, but it’s something to keep in mind when scaling to real systems.
global Ω = rand(length(t_list)) .* (2π) .- π
That’s okay for playing around, but for most systems, the piecewise-constant pulses are an approximation of physically time-continuous pulses. So setting them to random values isn’t going to be physical. Also, note that the choice of guess pulse has a major impact on convergence. So, you generally want to start with something that’s physically motivated.
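For example (the peak amplitude `E0` and the Blackman-like shape here are purely hypothetical choices, not a recommendation for this specific system), a smooth envelope that starts and ends at zero is more physical than random values:

```julia
t_list = range(0, 10, length=100)   # assumed time grid
T = t_list[end]
E0 = π / 2                          # hypothetical peak amplitude
# Blackman-like envelope: smooth, and zero at t = 0 and t = T
Ω = [E0 * (0.42 - 0.5 * cos(2π * t / T) + 0.08 * cos(4π * t / T)) for t in t_list]
```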
# Gradient Calculation
for i in 1:length(t_list)
    grad_Ω[i] = -real(tr(ρf_list[i]' * 1im * dt * ((tr01 + tr10) * ρ0_list[i] - ρ0_list[i] * (tr01 + tr10))))
end
GRAPE is known to require high-precision gradients to converge. This first-order approximation will generally not do. Have a look at J. Chem. Phys. 131, 234108 (2009) and J. Magn. Reson. 212, 412 (2011). You can, of course, play around with the first-order Taylor series you have here, but don’t be surprised if you don’t see convergence. At the very least, use a higher-order Taylor series. The state of the art would be the approach described in J. Chem. Phys. 143, 084113 (2015) and Quantum 6, 871 (2022), also implemented in QuantumGradientGenerators.jl.
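To illustrate the idea of an exact gradient (with a toy drift `H0`, control `H1`, and values for `dt` and `Ω` that are all assumptions, not your system): the derivative of a piecewise-constant propagator can be read off from a single auxiliary block-matrix exponential, with no Taylor truncation. This is the construction underlying the references above:

```julia
using LinearAlgebra

σx = ComplexF64[0 1; 1 0]
σz = ComplexF64[1 0; 0 -1]
H0, H1 = σz, σx        # toy drift and control Hamiltonians (assumptions)
dt, Ω = 0.1, 0.7

H = H0 + Ω * H1
# The exponential of the block-triangular matrix [[A, B]; [0, A]], with
# A = -im*dt*H and B = -im*dt*H1, contains both the propagator and its
# exact derivative with respect to Ω:
M = [-im * dt * H    -im * dt * H1;
     zeros(2, 2)     -im * dt * H]
E = exp(M)
U  = E[1:2, 1:2]       # time-step propagator exp(-im*dt*H)
dU = E[1:2, 3:4]       # exact ∂U/∂Ω
```

You can sanity-check `dU` against a central finite difference of `exp(-im*dt*(H0 + Ω*H1))`; the two agree to high precision.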
# Update Pulses
lr = 0.01 - iter / 100
global Ω = Ω + lr * grad_Ω
Ω = max.(-π, min.(π, Ω))
Even though that’s along the lines of the original GRAPE paper, there is absolutely no guarantee that going in the direction of the gradient by exactly lr will get you to a lower error (or higher fidelity: it looks like you’re maximizing fidelity instead of the usual minimization; in any case, make sure you use the appropriate sign for the direction). You really want to determine the step width with a proper line-search algorithm. Better yet, compute the gradient and then feed it into an existing gradient-based optimization toolbox, preferably LBFGS: the quasi-Hessian that LBFGS gives you “for free” makes a huge difference. Nobody uses GRAPE with a plain gradient in real applications. Again, nothing wrong with playing around with it, but don’t expect it to give very useful results.
Also, truncating the pulses in the last line can break your convergence. Usually it works, but be careful with it. Alternatively, use LBFGS-B (where the “-B” is for “bounds”), which incorporates bounds in a mathematically sounder way.
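To make the line-search point concrete, here is a minimal Armijo backtracking sketch (for minimizing some cost `J`, e.g. `J = 1 - F`; the function name and defaults are illustrative, and in practice you would rather use an existing LBFGS implementation, such as the one in Optim.jl, than hand-roll this):

```julia
# Armijo backtracking: shrink the step until a sufficient-decrease condition
# holds, instead of stepping by a fixed (and possibly negative!) lr.
function backtracking_step(J, Ω, grad; α=1.0, c=1e-4, τ=0.5)
    J0 = J(Ω)
    g2 = sum(abs2, grad)
    while J(Ω .- α .* grad) > J0 - c * α * g2 && α > 1e-12
        α *= τ      # step too long: halve it and try again
    end
    return Ω .- α .* grad
end
```

For a descent direction, each such step is guaranteed not to increase `J`, which is exactly what the fixed-lr update cannot promise.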
Technically true, as a general statement for gradient-based optimization, but for the quantum control problem it has been shown (in Herschel Rabitz’s gazillion papers) that there are no local minima if the controls are not constrained. The PWC approximation is a constraint, so this holds only if the pulses are reasonably good approximations of time-continuous fields. Adding bounds on the amplitude is definitely a constraint, and I’ve seen local traps in that case.
or it might be the case that your Hamiltonian is simply physically incapable of achieving the final state you desire.
That’s very true. You have to check “controllability” first. But for typical quantum control problems like this one, they’re controllable, so that’s probably not something to worry about for this example.
A typical debugging tool in such situations would be to plot your fidelity as a function of the iteration step
That’s always useful, and a proper GRAPE implementation (with a high-precision gradient and a line search) should show monotonic convergence.
If the fidelity plateaus, then probably you have reached a local minimum or your Hamiltonian does not permit the evolution you want.
Even though there are no local minima, there can be very flat regions in the optimization landscape. It’s not uncommon for the optimization to slow down and then pick up speed again, especially if you start with a bad guess pulse. So, always give it a few tens of thousands of iterations.
Another graphical debugging tool I would recommend is plotting out the linesearch like I do with the GRAPELinesearchAnalysis.jl package. That will tell you if your gradient is correct.
Also, consider using pre-packaged quantum control tools like:
JuliaQuantumControl/QuantumControl.jl: Julia Framework for Quantum Dynamics and Control
Not just QuantumControl.jl, but specifically GRAPE.jl. There’s nothing wrong with implementing your own GRAPE as a learning experience, of course. In fact, I would encourage it. But, you probably don’t want to use that implementation “in production”.
These tools are tested by many people
I wish that were true…
To be fair, the Julia implementation evolved from a Fortran implementation, which has been tested by many people.
less likely to contain bugs.
That I will stand by: The GRAPE implementation should be pretty solid. And there are a few subtleties in GRAPE, some of which I’ve hinted at above. So bugs would definitely be expected in a “naive” implementation.
I would point @fqguo to the graphical scheme for GRAPE from Quantum 6, 871 (2022):
There’s also pseudo-code in that paper that might be helpful, and it describes how to calculate exact gradients.
You could do that for a toy problem with just a few time steps, for debugging. But even in this example, there are 100 time steps, and I’m not sure finite differences would be able to handle that in any reasonable amount of time. I do recommend plotting the fidelity against the step size, in the direction of the gradient. If it doesn’t go up for a small enough step size, the gradient is wrong.
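A sketch of that check (with a toy “fidelity” standing in for the real one; substitute your own `F`, pulse, and gradient):

```julia
# Scan the fidelity along the gradient (ascent) direction. If the gradient
# is correct, F must increase for sufficiently small step sizes.
F(Ω) = 1 - sum(abs2, Ω .- 0.5)      # toy fidelity, maximal at Ω .== 0.5
Ω = zeros(4)                        # current (toy) pulse
grad = -2 .* (Ω .- 0.5)             # analytic ∇F for the toy model
steps = [10.0^k for k in -6:-1]
vals = [F(Ω .+ s .* grad) for s in steps]
# plot(steps, vals): every value should lie above F(Ω)
```

With a wrong gradient (or a sign error), the curve drops below `F(Ω)` even for tiny steps, which makes the bug obvious at a glance.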
