
Geometry Optimization Methods

Types of algorithm

    • Simple energy minimization
    • 1st derivative (gradient) methods
    • 2nd derivative (Hessian) methods


Energy Minimizations

  General characteristics:
    • Simplest approach: energy evaluations only, no derivatives
    • Consistently moves downhill
    • Rapid at first, but can oscillate near the minimum
    • No gradient available to test for convergence

  • Simplex method (e.g. Nelder-Mead; sketched below)
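
A minimal sketch of simplex minimization via SciPy's Nelder-Mead implementation; the two-dimensional quadratic surface is an illustrative stand-in for a real potential energy function.

    import numpy as np
    from scipy.optimize import minimize

    def energy(x):
        # Toy quadratic "valley" with its minimum at (1, 2);
        # stands in for a real molecular energy evaluation.
        return (x[0] - 1.0)**2 + 10.0 * (x[1] - 2.0)**2

    x0 = np.zeros(2)                         # starting "geometry"
    result = minimize(energy, x0, method="Nelder-Mead")
    print(result.x)                          # approaches [1, 2]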

  • Sequential Univariate (sketched below)
    1. Change one coordinate at a time
    2. Fit a parabola to the initial point plus 2 displacements
    3. Adjust the individual coordinate to the minimum of the parabola
    • Problem with long, narrow valleys
    • Cost: 1 + 3N x m energy evaluations (N coordinates, m cycles)
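
A minimal sketch of one implementation, assuming only an energy(x) function is available; the displacement h and the cycle count are illustrative choices.

    import numpy as np

    def sequential_univariate(energy, x0, h=0.1, cycles=20):
        x = np.asarray(x0, dtype=float).copy()
        for _ in range(cycles):                      # m cycles ...
            for i in range(len(x)):                  # ... over N coordinates
                e0 = energy(x)                       # initial point
                x[i] += h;  e_plus = energy(x)       # displacement +h
                x[i] -= 2 * h;  e_minus = energy(x)  # displacement -h
                x[i] += h                            # restore coordinate
                # Parabola through the three points; move to its minimum
                curv = e_plus - 2.0 * e0 + e_minus
                if curv > 0:
                    x[i] += h * (e_minus - e_plus) / (2.0 * curv)
        return x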

1st Derivative/Gradient Methods


    • Order of magnitude faster than energy-only methods
    • Converge to stationary points (minima, saddle points, or maxima)

  • Steepest Descent (sketched below)
    • Consistently moves downhill
    • Can oscillate near the minimum
    • In effect approximates the Hessian with a unit matrix
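
A minimal sketch of steepest descent with a fixed step size (a production optimizer would add a line search); grad(x) is an assumed user-supplied function returning the energy gradient.

    import numpy as np

    def steepest_descent(grad, x0, step=0.01, gtol=1e-6, max_iter=10000):
        x = np.asarray(x0, dtype=float).copy()
        for _ in range(max_iter):
            g = grad(x)
            if np.linalg.norm(g) < gtol:   # gradient ~ 0: stationary point
                break
            x -= step * g                  # move straight downhill
        return x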

  • Conjugate Gradient/Fletcher-Reeves (sketched below)
    • Faster than Steepest Descent (and faster than Block Diagonal Newton-Raphson for PP)
    • Uses successive gradients, in effect updating a Hessian approximation
    • Fast convergence when far from a stationary point
    • Slow to converge for floppy molecules
    • May oscillate near the minimum
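
A minimal sketch of Fletcher-Reeves conjugate gradient, with the line search along each direction delegated to SciPy's minimize_scalar for brevity; energy(x) and grad(x) are assumed user-supplied.

    import numpy as np
    from scipy.optimize import minimize_scalar

    def fletcher_reeves(energy, grad, x0, gtol=1e-6, max_iter=500):
        x = np.asarray(x0, dtype=float).copy()
        g = grad(x)
        d = -g                                  # first step: steepest descent
        for _ in range(max_iter):
            if np.linalg.norm(g) < gtol:
                break
            # 1-D minimization of the energy along the search direction
            alpha = minimize_scalar(lambda a: energy(x + a * d)).x
            x = x + alpha * d
            g_new = grad(x)
            beta = (g_new @ g_new) / (g @ g)    # Fletcher-Reeves coefficient
            d = -g_new + beta * d               # mix in the previous direction
            g = g_new
        return x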

  • NLLSQ (NonLinear Least Squares)
    • Converges to the nearest stationary point
    • Will not converge onto a shoulder

  • Modified Fletcher-Powell (sketched below)
    1. Two displacements along each coordinate
    2. Fit a quadratic surface with interaction terms (in effect calculating g and the elements of H numerically)
    3. Two displacements along the downhill direction
    4. Fit a parabola to the downhill direction
    5. Displace the atoms
    • Cost: 2N + 4 + m x (N + 3) evaluations
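
A much-reduced sketch of the core idea only: a central-difference gradient (two displacements per coordinate), then a three-point parabolic fit along the downhill direction. The full quadratic surface fit with interaction terms is omitted, and the step sizes are illustrative.

    import numpy as np

    def fd_gradient(energy, x, h=1e-4):
        # Central differences: 2N energy evaluations
        g = np.zeros_like(x)
        for i in range(len(x)):
            xp, xm = x.copy(), x.copy()
            xp[i] += h;  xm[i] -= h
            g[i] = (energy(xp) - energy(xm)) / (2.0 * h)
        return g

    def fp_step(energy, x, h=0.1):
        x = np.asarray(x, dtype=float)
        d = -fd_gradient(energy, x)            # downhill direction
        d /= np.linalg.norm(d)
        # Two displacements along the downhill direction
        e0, e1, e2 = energy(x), energy(x + h * d), energy(x + 2.0 * h * d)
        curv = e2 - 2.0 * e1 + e0              # curvature of fitted parabola
        if curv > 0:
            t = h * (3.0 * e0 - 4.0 * e1 + e2) / (2.0 * curv)
        else:
            t = 2.0 * h                        # not convex here: step ahead
        return x + t * d                       # displace the atoms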

Second Derivative/Hessian Methods

    • Assume a locally quadratic energy surface built from E, g, and H:
      E(x + d) ≈ E + gᵀd + ½ dᵀHd

Newton-Raphson
    • Calculates the Hessian matrix directly at each step (sketched below)
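
A minimal sketch of a single Newton-Raphson step, assuming analytic grad(x) and hess(x) functions are available:

    import numpy as np

    def newton_step(grad, hess, x):
        g = grad(x)                            # gradient at current geometry
        H = hess(x)                            # exact Hessian, recomputed here
        return x + np.linalg.solve(H, -g)      # step to the stationary point
                                               # of the local quadratic model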

  • Augmented Hessian for transition-state searches

Quasi-Newton
    1. Estimate the Hessian (or the inverse Hessian/Green’s function)
    2. Calculate E and g
    3. Update the Hessian estimate
    4. Move to the stationary point of the model surface
    • Converges in about N+1 steps, but each step takes longer (sketched below)
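
A minimal sketch of such a loop, maintaining an approximate inverse Hessian and updating it from successive gradients; the BFGS formula is used here as one common choice of update, and the full model step is taken without a line search for brevity.

    import numpy as np

    def quasi_newton(grad, x0, gtol=1e-6, max_iter=200):
        x = np.asarray(x0, dtype=float).copy()
        n = len(x)
        B = np.eye(n)                  # 1. initial inverse-Hessian estimate
        g = grad(x)
        for _ in range(max_iter):
            if np.linalg.norm(g) < gtol:
                break
            s = -B @ g                 # 4. step on the model surface
            x_new = x + s
            g_new = grad(x_new)        # 2. new gradient
            y = g_new - g
            sy = s @ y
            if sy > 1e-12:             # 3. BFGS update of the inverse Hessian
                I = np.eye(n)
                B = (I - np.outer(s, y) / sy) @ B @ (I - np.outer(y, s) / sy) \
                    + np.outer(s, s) / sy
            x, g = x_new, g_new
        return x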

  • Block Diagonal Newton-Raphson
    • Guesses the Hessian rather than computing it
    • Unit matrix for Cartesian coordinates; a better guess is possible in internal coordinates
    • Better convergence near the minimum

  • DFP (Davidon-Fletcher-Powell)
    • Better for transition states

  • Murtagh-Sargent (symmetric rank-one Hessian update)

  • BFGS (Broyden-Fletcher-Goldfarb-Shanno)
    • Faster than Newton-Raphson
    • Better for geometry optimizations (usage sketched below)
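
In practice BFGS is rarely hand-coded; as a usage sketch, SciPy exposes it directly (the toy surface from the simplex example above is reused):

    import numpy as np
    from scipy.optimize import minimize

    def energy(x):
        return (x[0] - 1.0)**2 + 10.0 * (x[1] - 2.0)**2

    def grad(x):
        return np.array([2.0 * (x[0] - 1.0), 20.0 * (x[1] - 2.0)])

    result = minimize(energy, np.zeros(2), jac=grad, method="BFGS")
    print(result.x)                    # approaches [1, 2]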

  • EF (Eigenvector Following; sketched below)
    • Faster than NLLSQ, Sigma, or BFGS
    • May fail with dummy atoms
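
A much-simplified sketch of the eigenvector-following idea for a transition-state search: diagonalize the Hessian, take a Newton step mode by mode, but move uphill along the lowest mode. Production EF adds level shifts to keep steps well behaved near zero eigenvalues; grad(x) and hess(x) are assumed user-supplied.

    import numpy as np

    def ef_step(grad, hess, x):
        g = grad(x)
        w, V = np.linalg.eigh(hess(x))     # eigenvalues (ascending) and modes
        gbar = V.T @ g                     # gradient in the eigenvector basis
        step = -gbar / w                   # Newton step, mode by mode
        step[0] = gbar[0] / abs(w[0])      # ...but follow the lowest mode
                                           # uphill, toward the saddle point
        return x + V @ step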

  • Analytical Gradients
    • Derivatives evaluated analytically rather than by finite differences
    • More accurate

Single Hessian Calculation
    • Hessian not recomputed after the 1st step (sketched below)
    • Rapid for the nearest stationary point
    • Fails if the initial gradient is large
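
A minimal sketch: the Newton-Raphson iteration, but with the Hessian computed once and reused while only the gradient is refreshed each step; grad(x) and hess(x) are assumed user-supplied.

    import numpy as np

    def single_hessian(grad, hess, x0, gtol=1e-6, max_iter=100):
        x = np.asarray(x0, dtype=float).copy()
        H = hess(x)                        # computed once, never recomputed
        for _ in range(max_iter):
            g = grad(x)
            if np.linalg.norm(g) < gtol:
                break
            x += np.linalg.solve(H, -g)    # Newton step with frozen Hessian
        return x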

  • Sigma

  • Gradient Norm Squared/Powell
    • Minimizes the squared norm of the gradient
    • Used to refine a Newton-Raphson geometry
    • Converges to any stationary point, including a shoulder





4/7/99 Ernie Chamot / Chamot Labs / (630) 637-1559 / echamot@chamotlabs.com
Modified: Wed Nov 29 21:06:16 2000 GMT