Self-Consistent Iteration

scf!([fun!, ]fock[; ω=0, max_iter=200, tol=1e-8, verbosity=1])

Perform the SCF iterations for the fock operator. One iteration is performed by scf_iteration!. The optional fun!(P̃,c̃) argument allows extra operations to be performed in every SCF cycle (such as plotting, etc.).

ω is a relaxation parameter; the orbitals and mixing coefficients are updated as wᵢ₊₁ = (1-ω)w̃ + ωwᵢ, where w̃ is the solution of the current iteration and wᵢ the previous solution. The default (ω=0) uses only the new solution. A larger value of ω makes it easier to achieve convergence in oscillatory problems, but too large a value decreases the convergence rate. It is therefore desirable to reduce ω when the convergence is deemed monotonic, the criterion being whether the change over the last monotonous_window iterations is steadily decreasing. If this is the case, ω is reduced by multiplying it by ωfactor. Conversely, if the change is non-monotonic, ω is increased, but never beyond ωmax.
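The relaxation update and the adaptive adjustment of ω can be sketched as follows; `relax` and `adapt_ω` are hypothetical helpers for illustration, not part of the package API:

```julia
# One relaxation step: blend the newly computed solution w̃ with the
# previous iterate wᵢ. ω = 0 keeps only the new solution.
relax(w̃, wᵢ, ω) = (1 - ω) .* w̃ .+ ω .* wᵢ

# Adaptive ω: shrink it when convergence is monotonic, grow it (capped
# at ωmax) when the change oscillates. Default values are illustrative.
function adapt_ω(ω, monotonic; ωfactor=0.9, ωmax=0.999)
    monotonic ? ω * ωfactor : min(ω / ωfactor, ωmax)
end
```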

The SCF procedure continues until either the number of iterations equals max_iter or the change in the coefficients drops below tol.

scf_iteration!(fock, P, H, c, okws
               [; verbosity=0, method=:arnoldi, σ=nothing, tol=1e-10,
                update_mixing_coefficients=true])

Perform one step of the SCF iteration. This will

  1. Improve each of the orbitals in turn, using the values of the orbitals P and mixing coefficients c from the previous step as input.

  2. Solve the secular problem to find new values for the mixing coefficients, c, unless update_mixing_coefficients==false.

okws is a vector of OrthogonalKrylovWrappers, one for each orbital.
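The overall fixed-point structure of the SCF loop can be sketched with a generic driver; `improve!` is a hypothetical callback standing in for the orbital-improvement and secular-problem steps performed by the package:

```julia
using LinearAlgebra

# Minimal SCF driver skeleton. improve!(P, c) updates the orbital
# coefficients P (and mixing coefficients c) in place; iteration stops
# when the change in P falls below tol or max_iter is reached.
function scf_sketch!(improve!, P, c; max_iter=200, tol=1e-8)
    for iter in 1:max_iter
        P₀ = copy(P)
        improve!(P, c)                     # one SCF step
        norm(P - P₀) < tol && return iter  # converged
    end
    max_iter
end
```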

solve_orbital_equation!(Pj, okw, method, tol)

Solve the orbital equation for the jth orbital, by solving the eigenproblem of the OrthogonalKrylovWrapper operator okw, and store the result in the radial orbital vector of expansion coefficients Pj. The solution is computed using method; valid choices are

  1. :arnoldi, which uses ArnoldiMethod.jl. Very small grid spacings or large grid extents can be problematic with this method, due to high condition numbers.

  2. :lobpcg, which uses IterativeSolvers.jl. Grids that are too coarse can cause never-ending loops or errors from the Cholesky decomposition, complaining about non-positive-definite subproblems.

Both methods are controlled by the stopping tolerance tol.
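As an illustration of the shift-and-invert idea behind the σ parameter, a minimal inverse iteration (using only LinearAlgebra, not the actual :arnoldi or :lobpcg backends) converges to the eigenvalue nearest the shift:

```julia
using LinearAlgebra

# Inverse iteration: repeatedly applying (A - σI)⁻¹ amplifies the
# eigenvector whose eigenvalue lies closest to the shift σ.
function inverse_iteration(A, σ; iters=100)
    F = lu(A - σ * I)                # factorize once, reuse every step
    x = normalize(ones(size(A, 1)))
    for _ in 1:iters
        x = normalize(F \ x)
    end
    dot(x, A * x), x                 # Rayleigh quotient ≈ eigenvalue nearest σ
end
```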

solve_secular_problem!(H, c, fock)

Form the energy matrix, store it in H, and then solve the secular problem Hc = Ec for the lowest eigenvalue.
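A sketch of the secular step with a dense energy matrix: for Hermitian input, eigen from LinearAlgebra returns eigenvalues in ascending order, so the first eigenvector gives the mixing coefficients of lowest energy. The function name is illustrative, not the package's:

```julia
using LinearAlgebra

# Solve Hc = Ec and keep the eigenvector of the lowest eigenvalue,
# overwriting c with the new mixing coefficients; returns the energy.
function lowest_eigenpair!(c, H)
    E, C = eigen(Hermitian(H))   # eigenvalues in ascending order
    c .= @view C[:, 1]
    E[1]
end
```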



The orbital improvements are performed via diagonalization of the integro-differential equations for each orbital. The diagonalization procedure is the Arnoldi method, which requires the repeated action of the Hamiltonian on a vector. The KrylovWrapper object is used to wrap Hamiltonians which are not simple <:AbstractMatrix objects, but which require a specialized implementation of mul! to act on a <:AbstractVector of coefficients. An example is the kind of Hamiltonians implemented in the Atoms.jl library, which are based on the function space algebra from ContinuumArrays.jl.


Proxy object used in the Krylov iterations, during orbital improvement. This is useful, since hamiltonian may be defined to act on objects such as vectors living in function spaces (as e.g. implemented using ContinuumArrays.jl), whereas the SCF iterations act on the coefficients directly.


Returns the dimension of the KrylovWrapper. For Hamiltonians which are not <:AbstractMatrix, this needs to be overloaded.
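The wrapping pattern can be illustrated with a toy operator: a struct that is not an AbstractMatrix but supports mul! and size, which is all a Krylov iteration needs. The names here are illustrative, not the package's actual implementation:

```julia
using LinearAlgebra

# Toy "Hamiltonian": applies a tridiagonal stencil without ever
# materializing a matrix.
struct StencilOperator
    n::Int
end

Base.size(S::StencilOperator) = (S.n, S.n)
Base.size(S::StencilOperator, i) = size(S)[i]

# The specialized mul! a Krylov method requires: y ← S*x.
function LinearAlgebra.mul!(y::AbstractVector, S::StencilOperator, x::AbstractVector)
    n = S.n
    for i in 1:n
        y[i] = 2x[i] - (i > 1 ? x[i-1] : 0.0) - (i < n ? x[i+1] : 0.0)
    end
    y
end
```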


Gershgorin's circle theorem

Gershgorin's circle theorem can be used to estimate the bounds on the eigenspectrum of a matrix. It is always correct, but only a useful estimate when the matrix is diagonally dominant. Furthermore, gershgorin_bounds is limited to the case of Hermitian matrices, whose eigenvalues lie on the real line. If an experimental energy is known, that is almost always a better lower bound to use for the Shift-and-invert strategy.


gershgorin_bounds(A)

For a Hermitian matrix A, compute the bounds on the real axis of the eigenspectrum. Note that these bounds are only useful for diagonally dominant A.
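A minimal sketch of the underlying computation (the function name differs from the package's gershgorin_bounds to make clear this is illustrative): each eigenvalue lies in at least one disc centered at a diagonal entry, with radius the off-diagonal absolute row sum; for Hermitian matrices the spectrum is real, so the union of the resulting intervals bounds it.

```julia
using LinearAlgebra

# Gershgorin bounds for a Hermitian matrix: every eigenvalue lies in
# some interval [A[i,i] - rᵢ, A[i,i] + rᵢ], rᵢ = Σⱼ≠ᵢ |A[i,j]|.
function gershgorin_interval(A::AbstractMatrix)
    lo, hi = Inf, -Inf
    for i in axes(A, 1)
        r = sum(abs, @view(A[i, :])) - abs(A[i, i])  # off-diagonal radius
        lo = min(lo, real(A[i, i]) - r)
        hi = max(hi, real(A[i, i]) + r)
    end
    (lo, hi)
end
```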