You seem to be showing the optimized pulse now, so I'm assuming that part is solved…
(2) The GRAPE does not achieve the desired outcome
Even though I'm not surprised, based on the implementation, you'll have to be a bit more specific. In what way does it not achieve the desired outcome?
Here is my code
I would strongly recommend not writing scripts with global variables like that. You will want to put your GRAPE implementation (specifically, the `for iter …` loop) in a function. Global variables in Julia kill performance. Don't get into that habit, not even for prototyping.
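Something along these lines (just a skeleton; `optimize_pulse` and its arguments are placeholder names, and the loop body is whatever your propagation, gradient, and update code currently does at global scope):

```julia
# Skeleton only: every quantity the script keeps as a global (ρ0, Ω, t_list, ...)
# becomes an argument or a local of a single function.
function optimize_pulse(Ω_guess::Vector{Float64}, t_list, dt; iterations=100)
    Ω = copy(Ω_guess)        # local working copy instead of `global Ω`
    grad_Ω = similar(Ω)      # filled inside the loop by the gradient calculation
    for iter in 1:iterations
        # forward propagation, gradient calculation, and pulse update go here,
        # operating only on local variables
    end
    return Ω
end
```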
# Start state
global ρ0 = s0 * s0'
Since this is a closed system, why are you working with density matrices? Not that it really matters for a two-level system, but it's something to keep in mind when scaling to real systems.
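For reference, a hedged sketch of what closed-system propagation looks like with a state vector (here `build_H` is a stand-in for however you assemble the Hamiltonian for a given pulse amplitude; it is not from the original code):

```julia
using LinearAlgebra

function propagate(ψ0, Ω, dt, build_H)
    ψ = copy(ψ0)                                  # state vector, not ψ0 * ψ0'
    for Ωᵢ in Ω
        U = exp(-1im * dt * Matrix(build_H(Ωᵢ)))  # piecewise-constant step propagator
        ψ = U * ψ
    end
    return ψ
end

# state-to-state figure of merit: F = |⟨ψ_target|ψ(T)⟩|²
# F = abs2(ψ_target' * propagate(s0, Ω, dt, build_H))
```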
global Ω = rand(length(t_list)) .* (2π) .- π
That's okay for playing around, but for most systems, the piecewise-constant pulses are an approximation of physically time-continuous pulses. So setting them to random values isn't going to be physical. Also, note that the choice of guess pulse has a major impact on convergence. So, you generally want to start with something that's physically motivated.
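For example (the sin² shape and the amplitude here are just an assumption to illustrate the idea, not something specific to your problem):

```julia
# Smooth guess pulse: switches on and off continuously instead of jumping randomly.
T  = t_list[end]
Ω0 = 0.5π                                    # guess amplitude; tune for your system
Ω  = [Ω0 * sin(π * t / T)^2 for t in t_list]
```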
# Gradient Calculation
for i in 1:length(t_list)
    grad_Ω[i] = -real(tr(ρf_list[i]' * 1im * dt * ((tr01 + tr10) * ρ0_list[i] - ρ0_list[i] * (tr01 + tr10))))
end
GRAPE is known to require high-precision gradients to converge. This first-order approximation will generally not do. Have a look at J. Chem. Phys. 131, 234108 (2009) and J. Magnet. Res. 212, 412 (2011). You can, of course, play around with the first-order Taylor series you have here, but don't be surprised if you don't see convergence. At the very least, use a higher-order Taylor series. The state of the art would be the approach described in J. Chem. Phys. 143, 084113 (2015) and Quantum 6, 871 (2022), also implemented in QuantumGradientGenerators.jl.
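To give an idea of what an exact gradient involves, here is a minimal sketch of the auxiliary-matrix ("Van Loan") construction those papers build on, for a single time step with H = H0 + Ωᵢ μ (the function name and arguments are mine, not taken from any of those packages):

```julia
using LinearAlgebra

# Exact derivative of one piecewise-constant step propagator U = exp(-i dt (H0 + Ωᵢ μ)).
# Exponentiating the 2N×2N block matrix [A B; 0 A] with A = -i dt H and B = ∂A/∂Ωᵢ
# yields exp(A) in the diagonal blocks and ∂exp(A)/∂Ωᵢ in the upper-right block.
function step_propagator_and_derivative(H0, μ, Ωᵢ, dt)
    A = -1im * dt * (H0 + Ωᵢ * μ)
    B = -1im * dt * μ
    M = exp([A B; zero(A) A])
    N = size(A, 1)
    return M[1:N, 1:N], M[1:N, N+1:2N]   # (U, ∂U/∂Ωᵢ)
end
```

The fidelity gradient then follows by combining these step derivatives with the forward-propagated states and the backward-propagated co-states, which is the scheme those papers (and QuantumGradientGenerators.jl) formalize.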
# Update Pulses
lr = 0.01 - iter / 100
global Ω = Ω + lr * grad_Ω
Ω = max.(-π, min.(π, Ω))
Even though that's along the lines of the original GRAPE paper, there is absolutely no guarantee here that going in the direction of the gradient by exactly lr will get you to a lower error (or higher fidelity: it looks like you're maximizing fidelity instead of the usual minimization; in any case, make sure you use the appropriate sign for the direction). You really want to determine the step width with a proper linesearch algorithm. Actually, what you want to do is get the gradient, and then feed it into any existing gradient-optimization toolbox. Preferably LBFGS: using the quasi-Hessian that LBFGS gives you "for free" makes a huge difference. Nobody uses GRAPE with a plain gradient in real applications. Again, nothing wrong with playing around with that, but don't expect it to give very useful results.
Also, truncating the pulses in the last line can break your convergence. Usually it works, but be careful with it. You can also use LBFGS-B (where the -B is for "bounds"), which incorporates bounds in a mathematically more sound way.
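As a hedged sketch of what "feed the gradient into an existing toolbox" might look like with Optim.jl (`infidelity` and `infidelity_gradient` are placeholders for your own forward/backward propagation):

```julia
using Optim

f(Ω)     = infidelity(Ω)                    # minimize 1 - F rather than maximize F
g!(G, Ω) = (G .= infidelity_gradient(Ω))    # in-place gradient, as Optim expects

result = optimize(f, g!, Ω_guess, LBFGS(), Optim.Options(iterations=1000, g_tol=1e-8))
Ω_opt  = Optim.minimizer(result)
```

The line search and the quasi-Hessian update then come for free from LBFGS().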
Technically true, as a general statement for gradient-based optimization, but for the quantum control problem it has been shown (by Hersh Rabitz's gazillion papers) that there are no local minima if the controls are not constrained. The PWC approximation is a constraint, so this holds only if the pulses are reasonably good approximations of time-continuous fields. Adding bounds on the amplitude is definitely a constraint, and I've seen local traps in that case.
or it might be the case that your Hamiltonian is simply physically incapable of achieving the final state you desire.
That's very true. You have to check "controllability" first. But for typical quantum control problems like this one, they're controllable, so that's probably not something to worry about for this example.
A typical debugging tool in such situations would be to plot your fidelity as a function of the iteration step
That's always useful, and a proper GRAPE implementation (with a high-precision gradient and a line search) should show monotonic convergence.
If the fidelity plateaus, then probably you have reached a local minimum or your Hamiltonian does not permit the evolution you want.
Even though there are no local minima, there can be very flat areas in the optimization landscape. It's not uncommon for the optimization to slow down and then pick up speed again, especially if you start with a bad guess pulse. So, always give it a few tens of thousands of iterations.
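Plotting the fidelity over the iterations is a one-liner, assuming you push the fidelity into some `fid_history` vector inside the optimization loop:

```julia
using Plots

plot(fid_history; xlabel="iteration", ylabel="fidelity", legend=false)
```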
Another graphical debugging tool I would recommend is plotting out the linesearch like I do with the GRAPELinesearchAnalysis.jl package. That will tell you if your gradient is correct.
Also, consider using pre-packaged quantum control tools like:
JuliaQuantumControl/QuantumControl.jl: Julia Framework for Quantum Dynamics and Control
Not just QuantumControl.jl, but specifically GRAPE.jl. There's nothing wrong with implementing your own GRAPE as a learning experience, of course. In fact, I would encourage it. But you probably don't want to use that implementation "in production".
These tools are tested by many people
I wish that was true
To be fair, the Julia implementation evolved from a Fortran implementation, which has been tested by many people.
less likely to contain bugs.
That I will stand by: The GRAPE implementation should be pretty solid. And there are a few subtleties in GRAPE, some of which I've hinted at above. So bugs would definitely be expected in a "naive" implementation.
I would point @fqguo to the graphical scheme for GRAPE from Quantum 6, 871 (2022):
There's also pseudo-code in that paper that might be helpful, and it describes how to calculate exact gradients.
You could do that for a toy problem with just a few time steps, for debugging. But even in this example, there are 100 time steps, and I'm not sure finite differences will be able to handle that in any reasonable amount of time. I do recommend plotting fidelity against step size, in the direction of the gradient. If it doesn't go up for a small enough step size, the gradient is wrong.
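That check can be as simple as the following (assuming a `fidelity(Ω)` function wrapping your propagation, with `Ω` and `grad_Ω` taken from the current iteration):

```julia
using Plots

# Scan the fidelity along the gradient direction; for a correct gradient (and a
# maximization), it should increase for small enough step sizes α.
αs = range(0, 0.01; length=50)
Fs = [fidelity(Ω .+ α .* grad_Ω) for α in αs]
plot(αs, Fs; xlabel="step size α", ylabel="fidelity", legend=false)
```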