On the complexity of steepest descent, Newton's and regularized Newton's methods for nonconvex unconstrained optimization problems

C. Cartis, Nick Gould, Philippe Toint

Research output: Contribution to journal › Article › peer-review


Abstract

It is shown that the steepest-descent and Newton's methods for unconstrained nonconvex optimization under standard assumptions may each require a number of iterations and function evaluations arbitrarily close to O(ε^{-2}) to drive the norm of the gradient below ε. This shows that the upper bound of O(ε^{-2}) evaluations known for steepest descent is tight, and that Newton's method may be as slow as the steepest-descent method in the worst case. The improved evaluation complexity bound of O(ε^{-3/2}) evaluations known for cubically regularized Newton's methods is also shown to be tight. © 2010 Society for Industrial and Applied Mathematics.
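As a minimal sketch (not taken from the paper) of the setting the abstract describes, the following fixed-step steepest-descent loop terminates once the gradient norm falls below ε and reports the number of gradient evaluations used. The step size, test function, and function names are illustrative assumptions; the qualitative point is that the evaluation count grows rapidly as ε shrinks, which is the behavior bounded by the O(ε^{-2}) worst-case result.

```python
import numpy as np

def steepest_descent(grad, x0, eps, step=0.1, max_iter=10**6):
    """Fixed-step steepest descent; stops when ||grad(x)|| <= eps.

    Returns the final iterate and the number of gradient evaluations
    performed before the stopping test succeeded. Illustrative only.
    """
    x = np.asarray(x0, dtype=float)
    for k in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) <= eps:
            return x, k
        x = x - step * g
    return x, max_iter

# Smooth example with a very flat minimizer: f(x) = x^4 / 4, so
# grad f(x) = x^3. The gradient decays slowly near the minimizer,
# so the evaluation count climbs steeply as eps is tightened.
x, k = steepest_descent(lambda x: x**3, np.array([1.0]), eps=1e-3)
```

Halving ε and rerunning shows the evaluation count growing much faster than linearly, in line with the lower-bound examples constructed in the paper.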
Original language: English
Pages (from-to): 2833-2852
Number of pages: 20
Journal: SIAM Journal on Optimization
Volume: 20
Issue number: 6
DOIs
Publication status: Published - 1 Jan 2010

