Optimization Methods
Geometry Optimization Methods
Types of algorithm
Simple Energy Minimization
1st Derivative Methods
2nd Derivative Methods
Energy Minimizations
- Simplest approach: uses only energy evaluations, no derivatives
- Consistently moves downhill
- Rapid at first, but can oscillate near the minimum
- No rigorous test for convergence (no gradient to monitor)
- Simplex method
- Sequential Univariate
- Change one coordinate at a time
- Fit a parabola to the initial point plus 2 displacements
- Adjust the individual coordinate to the minimum of the parabola (see the sketch below)
- Problem with long, narrow valleys
1 + 3N × m evaluations
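
A minimal Python sketch of the sequential univariate scheme, as referenced above. The energy(x) callable and the toy quadratic in the usage lines are hypothetical stand-ins for a real molecular energy function:

    import numpy as np

    def sequential_univariate(energy, x, delta=0.01, cycles=10):
        """One-coordinate-at-a-time minimization by parabolic fitting."""
        x = np.asarray(x, dtype=float)
        for _ in range(cycles):
            for i in range(len(x)):
                e0 = energy(x)                           # initial point
                x[i] += delta;  e_plus = energy(x)       # 1st displacement
                x[i] -= 2 * delta;  e_minus = energy(x)  # 2nd displacement
                x[i] += delta                            # restore coordinate
                # Parabola through the three points; move to its minimum.
                curv = e_plus - 2 * e0 + e_minus
                if curv > 0:                             # only if it opens upward
                    x[i] -= delta * (e_plus - e_minus) / (2 * curv)
        return x

    # Toy usage: a quadratic valley with a cross term (the kind of
    # coupling that makes long, narrow valleys slow for this method).
    print(sequential_univariate(lambda x: x[0]**2 + 25 * x[1]**2 + x[0] * x[1],
                                [1.0, 1.0]))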
1st Derivative/Gradient Methods
- An order of magnitude faster than simple energy minimization
- Converge to stationary points (minima, saddle points, maxima)
- Steepest Descent
- Consistently moves downhill
- Can oscillate near the minimum (see the comparison sketch after this section)
- In effect approximates the Hessian w/ a unit matrix
- Conjugate Gradient/Fletcher-Reeves
- Faster than Steepest Descent (and faster than Block Diagonal Newton-Raphson for PP)
- Uses gradients as if updating the Hessian
- Fast convergence when far from a stationary point
- Slow to converge for floppy molecules
- May oscillate near the minimum
- NLLSQ (Nonlinear Least Squares)
- Converges to the nearest stationary point
- Will not converge onto a shoulder
- Modified Fletcher-Powell
- Two displacements along each coordinate
- Fit quadratic surface w/ interactions (in effect calculating g & elements of H numerically)
- Two displacements along downhill direction
- Fit parabola to downhill direction
- Displace atoms
2N + 4 + m × (N + 3) evaluations
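
A Python sketch comparing Steepest Descent with Fletcher-Reeves Conjugate Gradient, as referenced above. The narrow quadratic valley and the exact line search are illustrative assumptions, not part of the original notes:

    import numpy as np

    A = np.diag([1.0, 25.0])     # Hessian of a long, narrow quadratic valley
    grad = lambda x: A @ x       # gradient of E(x) = 1/2 x^T A x

    def minimize(method, x, steps=20):
        g = grad(x)
        d = -g                                     # first move: straight downhill
        for _ in range(steps):
            alpha = -(g @ d) / (d @ (A @ d))       # exact line search (quadratic E)
            x = x + alpha * d
            g_new = grad(x)
            if np.linalg.norm(g_new) < 1e-10:      # stationary point reached
                break
            if method == "sd":
                d = -g_new                         # Steepest Descent: always downhill
            else:
                beta = (g_new @ g_new) / (g @ g)   # Fletcher-Reeves factor
                d = -g_new + beta * d              # mix in the previous direction
            g = g_new
        return x

    x0 = np.array([10.0, 1.0])
    print(minimize("sd", x0))    # oscillates across the valley; converges slowly
    print(minimize("cg", x0))    # exact in 2 steps for this 2-D quadratic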
Second Derivative/Hessian Methods
- Assume the surface is locally quadratic, so the step is a function of E, g, and H (see the Newton step below)
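
In LaTeX, the local quadratic model these methods assume, and the step to its stationary point (standard textbook relations, added here for reference):

    E(\mathbf{x}_0 + \Delta\mathbf{x}) \approx E(\mathbf{x}_0)
        + \mathbf{g}^{T}\Delta\mathbf{x}
        + \tfrac{1}{2}\,\Delta\mathbf{x}^{T}\mathbf{H}\,\Delta\mathbf{x},
    \qquad
    \Delta\mathbf{x} = -\mathbf{H}^{-1}\mathbf{g}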
Newton-Raphson
- Calculate the Hessian matrix directly at each step (sketched below)
- Augmented Hessian for transition states
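
A minimal Newton-Raphson sketch in Python, assuming caller-supplied grad(x) and hessian(x) functions (hypothetical; in practice these come from the modeling package):

    import numpy as np

    def newton_raphson(grad, hessian, x, steps=20, tol=1e-8):
        for _ in range(steps):
            g = grad(x)
            if np.linalg.norm(g) < tol:       # at a stationary point of any kind
                break
            H = hessian(x)                    # full Hessian, recomputed each step
            x = x - np.linalg.solve(H, g)     # Newton step: dx = -H^{-1} g
        return x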
Quasi-Newton
- Estimate the Hessian (inverse Hessian/Green's function)
- Calculate E, g
- Update the Hessian
- Move to the stationary point of the model surface
N+1 steps, but each step takes longer
- Block Diagonal Newton-Raphson
- Guesses the Hessian
- Unit matrix for Cartesian coordinates; a better guess is possible in internal coordinates
- Better convergence near the minimum
- DFP (Davidon-Fletcher-Powell)
- Better for transition states
- Murtagh-Sargent
- BFGS (Broyden-Fletcher-Goldfarb-Shanno); see the update sketch after this list
- Faster than N-R
- Better for optimizations
- EF (Eigenvector Following)
- Faster than NLLSQ, Sigma, BFGS
- May fail with dummy atoms
- Analytical Gradients
- Analytical rather than finite differences
- More accurate
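
A sketch of the quasi-Newton loop with the BFGS update of the inverse Hessian, as referenced above. The unit-matrix starting guess follows the notes; grad(x) is a caller-supplied gradient function, and a production implementation would add a line search:

    import numpy as np

    def bfgs(grad, x, steps=100, tol=1e-8):
        n = len(x)
        Hinv = np.eye(n)                       # initial guess: unit matrix
        g = grad(x)
        for _ in range(steps):
            if np.linalg.norm(g) < tol:
                break
            x_new = x - Hinv @ g               # step to the model's stationary point
            g_new = grad(x_new)
            s = x_new - x                      # displacement
            y = g_new - g                      # change in gradient
            ys = y @ s
            if abs(ys) > 1e-12:                # skip a degenerate update
                rho = 1.0 / ys
                I = np.eye(n)
                # BFGS update of the inverse Hessian from s and y only
                Hinv = ((I - rho * np.outer(s, y)) @ Hinv
                        @ (I - rho * np.outer(y, s)) + rho * np.outer(s, s))
            x, g = x_new, g_new
        return x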
Single Hessian Calculation
- Hessian not recomputed after 1st step
- Rapid for nearest stationary point
- Fails if initial gradient large
- Sigma
- Gradient Norm Squared/Powell
- Refines an N-R geometry
- Finds any stationary point, including a shoulder (see the note below)
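
A one-line rationale in LaTeX (standard, not from the original notes): Sigma minimizes the squared gradient norm, which is zero wherever the gradient vanishes, regardless of the curvature there, so shoulders qualify along with minima, saddle points, and maxima:

    \sigma(\mathbf{x}) = \mathbf{g}^{T}\mathbf{g} \ge 0,
    \qquad
    \nabla\sigma = 2\,\mathbf{H}\,\mathbf{g}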