Exact diagonalization: tutorial 1
Lanczos algorithm

Power method and Krylov basis

1/ Give an estimate of the number of non-zero matrix elements for a spin chain Hamiltonian of length $L$ with nearest-neighbour interactions. Comment.

We do not want to store the Hamiltonian as a full matrix, in order to take advantage of its sparse structure. There are two solutions: either use a sparse storage scheme (only the non-zero elements are kept), or recompute the matrix elements on-the-fly. In both cases, we need an algorithm that finds the ground-state using only the operation $$ H|{\psi}\rangle $$ and basic linear algebra on vectors (not matrices).
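To fix ideas, here is a minimal sketch of the on-the-fly approach (plain NumPy, independent of the quantumising module used below, and with sign/normalization conventions that may differ from it): $H|\psi\rangle$ is computed for a transverse-field Ising chain without ever building the $2^L \times 2^L$ matrix.

import numpy as np

def apply_H(psi, L, J=1.0, hx=0.5):
    # Hypothetical on-the-fly H|psi> for H = J * sum_i Sz_i Sz_{i+1} - hx * sum_i Sx_i
    # (spin-1/2, periodic chain; a basis state is an integer whose bits store the Sz values).
    out = np.zeros_like(psi)
    for s in range(1 << L):                      # loop over basis states |s>
        for i in range(L):
            j = (i + 1) % L
            # diagonal Ising bond: Sz_i Sz_j = +1/4 if the two spins are aligned, -1/4 otherwise
            aligned = ((s >> i) & 1) == ((s >> j) & 1)
            out[s] += J * (0.25 if aligned else -0.25) * psi[s]
            # off-diagonal field: Sx_i flips spin i with matrix element 1/2
            out[s ^ (1 << i)] += -hx * 0.5 * psi[s]
    return out

Only the vector $|\psi\rangle$ is ever stored; each non-zero matrix element is recomputed whenever it is needed.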

A variational argument

Consider a Hamiltonian $H$ on a finite-size Hilbert space, with $\{|{\psi_n}\rangle,E_n\}$ its eigenstates/energies. The mean energy (Rayleigh quotient) of a given vector $|{x}\rangle$ reads: $$E(x) = \frac{\langle{x}|{H}|{x}\rangle}{\langle{x}|{x}\rangle}$$

2/ Show that $E_0 \leq E(x) \leq E_{\text{max}}$, with $E_0$ the ground-state energy and $E_{\text{max}}$ the maximal energy. By considering the gradient of $E(x)$, prove that equality is attained if and only if $|{x}\rangle$ is one of the corresponding extremal eigenstates.
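As a quick numerical check of this bound (a small NumPy example on a random symmetric matrix, not part of the tutorial module):

import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 50))
H = (A + A.T) / 2                    # a random real symmetric (Hermitian) matrix
E = np.linalg.eigvalsh(H)            # its exact, sorted spectrum

x = rng.normal(size=50)
Ex = (x @ H @ x) / (x @ x)           # Rayleigh quotient E(x)
print(E[0] <= Ex <= E[-1])           # True for any x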

Therefore, given an initial wave-vector $|{x}\rangle$, we may look for the ground-state in the space spanned by the vectors $\{|{x}\rangle, H|{x}\rangle, H^2|{x}\rangle, \ldots, H^p|{x}\rangle \}$, with $p$ the number of iterations. The space spanned by these vectors is called the Krylov space.
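A minimal illustration of the Krylov idea (again plain NumPy on a random symmetric matrix): orthonormalize the vectors $\{|x\rangle, H|x\rangle, \ldots\}$ and diagonalize $H$ in that small subspace. The lowest eigenvalue of the projected matrix quickly approaches $E_0$ as $p$ grows, although this naive orthogonalization becomes numerically fragile, which is precisely what the Lanczos algorithm below addresses.

import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(200, 200))
H = (A + A.T) / 2
E0 = np.linalg.eigvalsh(H)[0]          # exact ground-state energy, for reference

x = rng.normal(size=200)
K = [x / np.linalg.norm(x)]            # Krylov vectors |x>, H|x>, H^2|x>, ...
for p in range(1, 15):
    v = H @ K[-1]
    K.append(v / np.linalg.norm(v))    # normalize to avoid overflow; not yet orthogonal
    Q, _ = np.linalg.qr(np.array(K).T)       # orthonormal basis of the Krylov space
    Heff = Q.T @ H @ Q                       # H projected into the Krylov space
    print(p, np.linalg.eigvalsh(Heff)[0] - E0)   # error on the ground-state energy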

The power method

A first possible algorithm, the power method, amounts to filtering out the ground-state $|{\psi_0}\rangle$ by starting from a random, normalized initial state $|{x}\rangle$ and applying $H$ iteratively.

In [1]:
from quantumising import *

L, hx = 6, 0.5

HamH = GeneralOp([ [ -hx, Sx(i)] for i in range(L) ])
HamIsing = GeneralOp([ [ 1.0, SzSz(i,(i+1)%L)] for i in range(L) ])
Ham = HamIsing + HamH

hilbert = IsingHilbert(L=L)
x = randomWf(hilbert)
for i in range(101):
    x = Ham*x
    x.normalize()
    if not i%10: print(i,x.measure(Ham))

# to cross-check results
Ham.createMatrix(hilbert)
Ham.diagonalize(with_eigvec=False)
E0 = np.min(Ham.En)
print("expected E0=",E0)
0 -0.11297178113
10 -0.0327606755543
20 -0.398843223183
30 -0.679543349923
40 -0.860358866775
50 -0.964445269455
60 -1.02052111546
70 -1.04965632078
80 -1.06450922539
90 -1.07200782007
100 -1.07577494741
expected E0= -1.93185165258

3/ What is the condition on $|{x}\rangle$ for the algorithm to work? Why does the algorithm above generally not work?

4/ Let us introduce a constant $C$ to circumvent this issue. How must $C$ be chosen in the algorithm below to ensure convergence? Is there a better choice for $C$? Is it useful in practice?

In [2]:
E0, Emax = np.min(Ham.En), np.max(Ham.En)
C = -Emax
x = randomWf(hilbert)
#x = basisvectorWf(hilbert,3)
for i in range(201):
    x = Ham*x - C*x
    x.normalize()
    if not i%10: print(i,x.measure(Ham),E0,Emax)
0 0.535627631971 -1.93185165258 1.93185165258
10 1.74939963751 -1.93185165258 1.93185165258
20 1.90579005108 -1.93185165258 1.93185165258
30 1.91954951656 -1.93185165258 1.93185165258
40 1.92309731919 -1.93185165258 1.93185165258
50 1.92542042131 -1.93185165258 1.93185165258
60 1.9271590289 -1.93185165258 1.93185165258
70 1.92845347124 -1.93185165258 1.93185165258
80 1.92940507823 -1.93185165258 1.93185165258
90 1.93009768651 -1.93185165258 1.93185165258
100 1.9305981 -1.93185165258 1.93185165258
110 1.93095773541 -1.93185165258 1.93185165258
120 1.93121521088 -1.93185165258 1.93185165258
130 1.93139904249 -1.93185165258 1.93185165258
140 1.93153003762 -1.93185165258 1.93185165258
150 1.93162325239 -1.93185165258 1.93185165258
160 1.93168951727 -1.93185165258 1.93185165258
170 1.9317365907 -1.93185165258 1.93185165258
180 1.93177001409 -1.93185165258 1.93185165258
190 1.93179373715 -1.93185165258 1.93185165258
200 1.93181057091 -1.93185165258 1.93185165258

The Lanczos algorithm

The Lanczos algorithm builds a tridiagonal representation of the Hamiltonian in an orthonormalized basis of the Krylov space. It possesses remarkable properties, such as exponential convergence to the extremal eigenvalues and numerical stability.

Recursion relations

The Lanczos vectors are denoted by $\{|{i}\rangle\}_{i=0,p-1}$ with $p$ the number of iterations. They satisfy the following recursion relations: \begin{align} |{i+1'}\rangle &= H|{i}\rangle - a_i|{i}\rangle - b_{i-1} |{i-1}\rangle \\ |{i+1}\rangle &= |{i+1'}\rangle/b_{i} \\ \end{align} where $b_i = \||{i+1'}\rangle\|$ and $b_{-1}=0$ by convention.

5/ Give the expression of the $a_i$ coefficient as a function of known data at iteration $i$. Show that the matrix representation of the Hamiltonian in the Lanczos subbasis is tridiagonal and give the expression of its elements.
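For orientation, the matrix assembled by lanczosSpectrum in the code cell below (the $a_i$ on the diagonal, the $b_i$ on the lower off-diagonal, with np.linalg.eigvalsh reading only the lower triangle) corresponds to the symmetric tridiagonal matrix

$$ T_p = \begin{pmatrix} a_0 & b_0 & & & \\ b_0 & a_1 & b_1 & & \\ & b_1 & a_2 & \ddots & \\ & & \ddots & \ddots & b_{p-2} \\ & & & b_{p-2} & a_{p-1} \end{pmatrix}. $$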

6/ Implement the Lanczos algorithm to compute the spectrum of the approximate representation of the Hamiltonian.

In [3]:
def lanczosSpectrum(a,b):
    tridiag = np.diag(a)
    tridiag+= np.diag(b,k=-1)
    return np.linalg.eigvalsh(tridiag)

numiteration = 20
numprint = 3
a, b = [], []
u2 = randomWf(hilbert)

for i in range(numiteration):
    a.append(u2.measure(Ham))
    print(i+1,lanczosSpectrum(a,b)[:numprint],E0)
    u3 = Ham*u2 - a[i]*u2
    if i>0: u3 -= b[i-1]*u1
    b.append(u3.norm())
    u3 /= b[i]
    u1 = u2
    u2 = u3
1 [ 0.17742299] -1.93185165258
2 [-0.63914825  1.05959927] -1.93185165258
3 [-0.94079266  0.1716674   1.35646522] -1.93185165258
4 [-1.36635064 -0.55389478  0.69160474] -1.93185165258
5 [-1.71387542 -0.77747337  0.1912321 ] -1.93185165258
6 [-1.86411827 -0.96286852 -0.43054225] -1.93185165258
7 [-1.91430246 -1.1667302  -0.68060938] -1.93185165258
8 [-1.92751864 -1.2972123  -0.80553195] -1.93185165258
9 [-1.93015384 -1.35282688 -0.88808338] -1.93185165258
10 [-1.93059128 -1.37138436 -0.93637485] -1.93185165258
11 [-1.93078028 -1.40225462 -1.15903404] -1.93185165258
12 [-1.93106745 -1.69928975 -1.35526551] -1.93185165258
13 [-1.93162173 -1.84666749 -1.36603792] -1.93185165258
14 [-1.9318106  -1.86292689 -1.36762915] -1.93185165258
15 [-1.93183969 -1.8650528  -1.36806186] -1.93185165258
16 [-1.93184896 -1.86577959 -1.36864402] -1.93185165258
17 [-1.93185118 -1.86597688 -1.37103902] -1.93185165258
18 [-1.9318516  -1.86601945 -1.39807725] -1.93185165258
19 [-1.93185165 -1.86602528 -1.41337153] -1.93185165258
20 [-1.93185165 -1.86602538 -1.41390976] -1.93185165258

Computing the ground-state

7/ We want to compute the ground-state without keeping all the intermediate Lanczos vectors in memory. How can this be done? Implement the corresponding algorithm.

In [4]:
def lanczosSpectrum(a,b):
    tridiag = np.diag(a)
    tridiag+= np.diag(b,k=-1)
    return np.linalg.eigvalsh(tridiag)

def lanczosGS(a,b):
    tridiag = np.diag(a)
    tridiag+= np.diag(b,k=-1)
    ev, vec = np.linalg.eigh(tridiag)
    return vec[:,0]

numiteration = 20
numprint = 3
a, b = [], []
u2 = randomWf(hilbert)
gs = copy(u2) # save the initial random state

for i in range(numiteration):
    a.append(u2.measure(Ham))
    print(i+1,lanczosSpectrum(a,b)[:numprint],E0)
    u3 = Ham*u2 - a[i]*u2
    if i>0: u3 -= b[i-1]*u1
    b.append(u3.norm())
    u3 /= b[i]
    u1 = u2
    u2 = u3

print("Now computing the ground-state")

# start again from the SAME initial random WF
u2 = copy(gs)
gs[:] = 0.0
gs_krylov = lanczosGS(a,b[:-1])
a,b = [],[]

for i in range(numiteration):
    gs += gs_krylov[i]*u2  # recomputing u2 on the fly
    a.append(u2.measure(Ham))
    u3 = Ham*u2 - a[i]*u2
    if i>0: u3 -= b[i-1]*u1
    b.append(u3.norm())
    u3 /= b[i]
    u1 = u2
    u2 = u3

Egs = gs.measure(Ham)
VarEgs = scalar(gs,Ham*(Ham*gs))-Egs**2
print("Checking energy and variance of GS : ", Egs, VarEgs)
1 [-0.03042997] -1.93185165258
2 [-1.00718322  0.93347613] -1.93185165258
3 [-1.33567918 -0.14474394  1.18384789] -1.93185165258
4 [-1.70177969 -0.83780698  0.57132858] -1.93185165258
5 [-1.84773539 -1.01103186  0.01222477] -1.93185165258
6 [-1.90311878 -1.14820785 -0.56134669] -1.93185165258
7 [-1.92414886 -1.29295373 -0.83882838] -1.93185165258
8 [-1.92884033 -1.36389605 -0.93498392] -1.93185165258
9 [-1.92947554 -1.38634869 -0.98891189] -1.93185165258
10 [-1.92987122 -1.51469194 -1.32174707] -1.93185165258
11 [-1.93076517 -1.80099472 -1.36697323] -1.93185165258
12 [-1.93150941 -1.85254007 -1.37083596] -1.93185165258
13 [-1.93175532 -1.86230181 -1.37212428] -1.93185165258
14 [-1.93181544 -1.86453716 -1.3729446 ] -1.93185165258
15 [-1.93184574 -1.86575866 -1.37786176] -1.93185165258
16 [-1.93185118 -1.866002   -1.40423254] -1.93185165258
17 [-1.93185161 -1.8660233  -1.41226468] -1.93185165258
18 [-1.93185165 -1.86602508 -1.41359694] -1.93185165258
19 [-1.93185165 -1.86602539 -1.41413972] -1.93185165258
20 [-1.93185165 -1.8660254  -1.41420693] -1.93185165258
Now computing the ground-state
Checking energy and variance of GS :  -1.93185165257 2.77955436445e-11

Practical use

Play with the Lanczos algorithm to observe the exponential convergence and the presence of ghost states due to the loss of orthogonality caused by finite numerical precision.
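As a self-contained illustration of the ghost phenomenon (plain NumPy, independent of the tutorial module): running the bare three-term recursion for many iterations without reorthogonalization lets spurious copies of already-converged eigenvalues appear in the tridiagonal spectrum, because the Lanczos vectors progressively lose their mutual orthogonality in finite precision.

import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(200, 200))
H = (A + A.T) / 2
print("exact lowest eigenvalues :", np.linalg.eigvalsh(H)[:3])

v_prev = np.zeros(200)
v = rng.normal(size=200)
v /= np.linalg.norm(v)
a, b = [], []
for i in range(150):                          # many iterations, no reorthogonalization
    w = H @ v - (b[-1] * v_prev if b else 0.0)
    a.append(v @ w)                           # a_i = <i|H|i>
    w -= a[-1] * v
    b.append(np.linalg.norm(w))               # b_i = || |i+1'> ||
    v_prev, v = v, w / b[-1]

T = np.diag(a) + np.diag(b[:-1], k=-1) + np.diag(b[:-1], k=1)
print("Lanczos (150 iterations) :", np.linalg.eigvalsh(T)[:6])
# converged extremal eigenvalues typically show up several times ("ghosts")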

We have written a new module quantumising_sparse.py that implements the Lanczos calculations above, the sparse storage of the Hamiltonian matrix, and also the Arnoldi algorithm, which preserves orthogonality (using scipy).

Check that the Arnoldi algorithm cures the ghost-state issue.

In [5]:
from quantumising_sparse import *

L, h = 10, 0.3

def IsingHam(L=4,h=0.0,J=1.0):
    HamH = GeneralOp([ [ -h, Sx(i)] for i in range(L) ])
    HamIsing = GeneralOp([ [ -J, SzSz(i,(i+1)%L)] for i in range(L) ])
    return HamIsing + HamH

hilbert = IsingHilbert(L=L)

Eigenvalues only

Lanczos
In [6]:
# without storing the matrix
Ham = IsingHam(L,h)
En = Ham.Energies(hilbert,num=2,algorithm="Lanczos")
print(En)
# with storing the matrix
Ham.createMatrix(hilbert,sparse=True)
En = Ham.Energies(hilbert,num=2,algorithm="Lanczos")
print(En)
[-2.73082864 -2.73082864]
[-2.73082864 -2.73082864]
Fulldiag or Arnoldi
In [7]:
Ham = IsingHam(L,h)
En = Ham.Energies(hilbert,num=2,algorithm="Fulldiag")
print(En)
Ham = IsingHam(L,h)
En = Ham.Energies(hilbert,num=2,algorithm="Arnoldi") # recommended, default algorithm
print(En)
[-2.73082864 -2.73036332]
[-2.73082864 -2.73036332]

Get the ground-state

Lanczos

In [8]:
# without storing the matrix
Ham = IsingHam(L,h)
psi0 = Ham.Groundstate(hilbert,algorithm="Lanczos")
print(psi0.measure(Ham))
# with storing the matrix
Ham.createMatrix(hilbert,sparse=True)
psi0 = Ham.Groundstate(hilbert,algorithm="Lanczos")
print(psi0.measure(Ham))
-2.73082864231
-2.7308286423

Fulldiag or Arnoldi

In [9]:
Ham = IsingHam(L,h)
psi0 = Ham.Groundstate(hilbert,algorithm="Fulldiag")
print(psi0.measure(Ham))
-2.73082864231
In [10]:
Ham = IsingHam(L,h)
psi0 = Ham.Groundstate(hilbert,algorithm="Arnoldi") # recommended
print(psi0.measure(Ham))
-2.73082864231