Because I want to allow both options: a single-argument form for onsitefield (with one behaviour) and a two-argument form (with a different behaviour).
Wouldn’t the multi-argument one be not an onsite field but an interaction field, though? You could just have another kwarg called interaction_field that takes a two-argument function.
To me they are conceptually the same: both modify the onsite energy. The single-argument form replaces the value of the onsite energy, while the two-argument form transforms the existing onsite energy in some site-dependent way (I’m using the language of a tight-binding electron lattice here). So yes, I could split the API into two kwargs with different names, but I would do that only because of my limited ability to introspect method arguments, not because I would prefer to (it is less convenient).
This is why I preferred to pose the question more as “can Julia do this?” And “can it do it fast?” It seems Base.@pure is the answer, but it is dangerous. I wonder whether having a standard, fast and safe way to do this in Base would be desirable (i.e. GitHub-issue-worthy).
Ohh, I see. I thought when you were talking about a two-argument function that it would be some coupling between sites i and j, but it seems you mean it’s an onsite field with a parameter o and possibly a site dependence i?
If I’m understanding correctly and o is a constant across the lattice, shouldn’t it just be passed as a constant to hamiltonian!? I.e., in one case you have
hamiltonian!(h; onsitefield = o)
where now you just do h += o*I, and in the other case
hamiltonian!(h; onsitefield = i -> cos(i) * o)
and this time you have to do h[i, i] += cos(i)*o. Or am I still missing something?
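As a minimal sketch of what I mean (hamiltonian! here is just a stand-in name, and the diagonal loop is my assumption about the body):

```julia
# Sketch: a single kwarg handles both a constant field and a site-dependent one.
# `hamiltonian!` is a hypothetical stand-in, not an existing API.
function hamiltonian!(h; onsitefield = nothing)
    if onsitefield isa Number
        for i in axes(h, 1)
            h[i, i] += onsitefield          # constant case: h += o*I
        end
    elseif onsitefield !== nothing
        for i in axes(h, 1)
            h[i, i] += onsitefield(i)       # site-dependent case: h[i,i] += f(i)
        end
    end
    return h
end
```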
By the way (this is a bit off topic), since it seems you’re doing an exact diagonalization by looping over the indices, I found a much more performant way to build up lattice Hamiltonians so long as they’re sparse. Here’s an example with the transverse field Ising model (TFIM):
using LinearAlgebra, Arpack, SparseArrays

⊗(A, B) = kron(A, B)

function exact_solve_TFIM(g, L)
    id = [1.0 0.0; 0.0 1.0] |> complex
    σˣ = [0.0 1.0; 1.0 0.0] |> complex
    σᶻ = [1.0 0.0; 0.0 -1.0] |> complex
    σz(i) = foldl(⊗, sparse.([i == j ? σᶻ : id for j in 1:L]))
    σx(i) = foldl(⊗, sparse.([i == j ? σˣ : id for j in 1:L]))
    σzσz(i) = σz(i) * σz(i + 1)
    H = -sum(σzσz, 1:(L-1)) - g * sum(σx, 1:L)
    (λ, ϕ) = eigs(H, nev=1, which=:SR)
    return ϕ, λ, H
end
not sure if that’s helpful at all.
The above is of course a simplified version of the real problem, but I didn’t want to distract with non-essential details. Given your kind interest, though, here it goes.
The most relevant application of this would be wanting to apply a magnetic field or a strain field to a tight-binding Hamiltonian. What is modified in this situation is not the diagonal (onsite energies h[i,i]), but the hoppings between neighboring sites (h[i,j] for linked i,j). The magnetic field multiplies the existing hopping by a Peierls phase (h[i,j] * exp(im * dot(A(r(i), r(j)), r(i)-r(j)))), for some vectors A(r) and r(i). The strain field does a similar thing, multiplying the hoppings by a real factor. This would be achieved with hoppingfield = (i,j,h) -> h*....
However, I also want to be able to replace all hoppings with something else using the same API, not just multiply them. This can of course be done with the above three-argument form, but I wanted to explore whether I could use a two-argument form of hoppingfield to dispatch to a faster replace-only method. In that case the user would write hoppingfield = (i,j) -> ..., and the code could detect this different usage and dispatch accordingly.
methods() is going to have world-age problems with @pure, especially if you are talking about a package where users can define new methods at any time. (I’m not saying I actually understand @pure, but doing things like this has turned out badly for me.)
I think you want to pass in a struct. It just works better with multiple dispatch than passing a function. Use a macro if you have to reduce boilerplate.
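Something like this is what I have in mind (the wrapper types and the applyfield helper are invented names, not an existing API):

```julia
# Hypothetical wrapper structs that encode the call shape in the type,
# so multiple dispatch replaces any arity introspection.
struct OnsiteReplace{F}; f::F; end   # wraps i -> value
struct OnsiteModify{F};  f::F; end   # wraps (o, i) -> value

applyfield(r::OnsiteReplace, o, i) = r.f(i)       # ignore existing energy
applyfield(m::OnsiteModify,  o, i) = m.f(o, i)    # transform existing energy
```

The kwarg then accepts either wrapper, and the right call shape is picked by dispatch at compile time.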
Ah of course. Sorry for dragging this away from your original question. Could you maybe just have a separate kwarg for the Peierls phase? Not the most elegant solution I know.
It really is too bad that we don’t have a way to turn off some of the genericness of Julia functions. Having objects with a definite type signature / number of args could be useful in many circumstances.
FWIW, applicable
seems to fare fairly well and should be safe:
julia> a(f) = applicable(f, 1) ? f(1) : f(1,2);
julia> t(f) = try; f(1); catch; f(1,2); end;
julia> m(f) = first(methods(f)).nargs == 2 ? f(1) : f(1,2);  # nargs counts the function itself
julia> @btime a($h)
156.568 ns (0 allocations: 0 bytes)
3
julia> @btime t($h)
6.479 μs (2 allocations: 48 bytes)
3
julia> @btime m($h)
3.611 μs (14 allocations: 784 bytes)
3
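For instance, applied to the hopping case above, a sketch like this (apply_hopping! is an invented name) would pick the branch at runtime:

```julia
# Sketch: use `applicable` to choose between the replace-only two-argument
# form and the modifying three-argument form of a hopping field.
function apply_hopping!(h, field, i, j)
    if applicable(field, i, j)
        h[i, j] = field(i, j)             # two-arg form: replace the hopping
    else
        h[i, j] = field(i, j, h[i, j])    # three-arg form: modify it
    end
    return h
end
```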
This is precisely what I was looking for! Many thanks @pfitzseb
Thanks for that @Mason. I actually build my sparse d-dimensional lattice Hamiltonians in a rather general way, see GitHub - pablosanjose/Elsa.jl: Efficient lattice simulation algorithms - a Julia library for details if you’re interested. What I describe above is about efficiently modifying a Hamiltonian in-place once it’s created.
You could do
function hamiltonian!(h; onsitefield = nothing, onsitefield_nonautonomous = nothing)
    if onsitefield !== nothing && onsitefield_nonautonomous !== nothing
        error("at most one of onsitefield or onsitefield_nonautonomous is supported")
    elseif onsitefield === nothing && onsitefield_nonautonomous === nothing
        onsitefield = (o, i) -> o
    elseif onsitefield === nothing && onsitefield_nonautonomous !== nothing
        onsitefield = onsitefield_nonautonomous
    else
        f = onsitefield  # capture the original before rebinding, to avoid a self-referential closure
        onsitefield = (o, i) -> f(o)
    end
    # function body
end