Abstract
It is well known that the norm of the gradient may be unreliable
as a stopping test in unconstrained optimization, and that it often
exhibits oscillations in the course of the optimization. In this paper
we present results describing the properties of the gradient norm for the
steepest descent method applied to quadratic objective functions. We
also make some general observations that apply to nonlinear
problems, relating the gradient norm, the objective function value,
and the path generated by the iterates.
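Below is a minimal sketch, not taken from the paper, illustrating the phenomenon the abstract describes: steepest descent with exact line search on a convex quadratic, where the objective decreases at every step while the gradient norm can rise on some iterations. The matrix, starting point, and iteration count are illustrative assumptions.

```python
# Hedged illustration (not the authors' code): exact-line-search steepest
# descent on f(x) = 0.5 * x'Ax - b'x, tracking the gradient norm ||Ax - b||.
import numpy as np

def steepest_descent_quadratic(A, b, x0, iters=12):
    """Return the gradient norms produced by exact-line-search steepest descent."""
    x = x0.copy()
    norms = []
    for _ in range(iters):
        g = A @ x - b                     # gradient of the quadratic
        norms.append(np.linalg.norm(g))
        alpha = (g @ g) / (g @ (A @ g))   # exact minimizer along -g
        x = x - alpha * g
    return norms

# Ill-conditioned 2-D example (eigenvalues 1 and 100); the gradient norm
# typically alternates between large drops and increases, even though the
# objective value is monotonically decreasing.
A = np.diag([1.0, 100.0])
b = np.zeros(2)
x0 = np.array([10.0, 1.0])
for k, n in enumerate(steepest_descent_quadratic(A, b, x0)):
    print(f"iter {k:2d}  ||grad|| = {n:.3e}")
```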
| Original language | English |
| --- | --- |
| Pages (from-to) | 5-35 |
| Number of pages | 31 |
| Journal | Computational Optimization and Applications |
| Volume | 22 |
| Issue number | 1 |
| Publication status | Published - 2002 |
Keywords
- nonlinear optimization
- unconstrained optimization
- behavior of the gradient norm
- steepest descent method