Evaluation complexity of adaptive cubic regularization methods for convex unconstrained optimization

Philippe Toint, Coralia Cartis, Nick Gould

Research output: Contribution to journal › Article › peer-review


Abstract

The adaptive cubic regularization algorithms described in Cartis, Gould and Toint [Adaptive cubic regularisation methods for unconstrained optimization Part II: Worst-case function- and derivative-evaluation complexity, Math. Program. (2010), doi:10.1007/s10107-009-0337-y (online)]; [Part I: Motivation, convergence and numerical results, Math. Program. 127(2) (2011), pp. 245-295] for unconstrained (nonconvex) optimization are shown to have improved worst-case efficiency in terms of the function- and gradient-evaluation count when applied to convex and strongly convex objectives. In particular, our complexity upper bounds match in order (as a function of the accuracy of approximation), and sometimes even improve on, those obtained by Nesterov [Introductory Lectures on Convex Optimization, Kluwer Academic Publishers, Dordrecht, 2004; Accelerating the cubic regularization of Newton's method on convex problems, Math. Program. 112(1) (2008), pp. 159-181] and Nesterov and Polyak [Cubic regularization of Newton's method and its global performance, Math. Program. 108(1) (2006), pp. 177-205] for these same problem classes, without requiring exact Hessians or exact or global solution of the subproblem. An additional outcome of our approximate approach is that our complexity results can naturally capture the advantages of both first- and second-order methods. © 2012 Taylor & Francis.
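
As a rough orientation for the abstract above, the sketch below shows the generic shape of one adaptive cubic regularization (ARC) loop in Python. It is a minimal illustration under stated assumptions, not the paper's algorithm: every name and parameter value here (arc_minimize, cauchy_step, eta1, eta2, gamma_inc, gamma_dec) is hypothetical, and the step computation uses only the Cauchy point of the cubic model, in the spirit of the paper's point that exact or global subproblem solutions are not required.

import numpy as np

def cauchy_step(g, H, sigma):
    # Minimizer of the cubic model along -g: the step length alpha solves
    # sigma*||g||^3 * alpha^2 + (g'Hg) * alpha - ||g||^2 = 0, whose unique
    # positive root follows from the quadratic formula.
    gnorm = np.linalg.norm(g)
    a = sigma * gnorm**3
    b = g @ H @ g
    c = gnorm**2
    alpha = (-b + np.sqrt(b * b + 4.0 * a * c)) / (2.0 * a)
    return -alpha * g

def arc_minimize(f, grad, hess, x0, sigma=1.0, eta1=0.1, eta2=0.9,
                 gamma_inc=2.0, gamma_dec=0.5, tol=1e-8, max_iter=500):
    # Each iteration approximately minimizes the cubic model
    #   m(s) = f(x) + g's + 0.5*s'Hs + (sigma/3)*||s||^3,
    # then accepts or rejects the step and adapts sigma by comparing the
    # actual decrease of f with the decrease predicted by the model.
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) <= tol:
            break
        H = hess(x)
        s = cauchy_step(g, H, sigma)   # cheap step; full ARC would improve on it
        model_decrease = -(g @ s + 0.5 * s @ H @ s
                           + (sigma / 3.0) * np.linalg.norm(s)**3)
        rho = (f(x) - f(x + s)) / max(model_decrease, 1e-16)
        if rho >= eta1:                # successful: accept the step
            x = x + s
            if rho >= eta2:            # very successful: relax regularization
                sigma = max(gamma_dec * sigma, 1e-12)
        else:                          # unsuccessful: regularize more strongly
            sigma = gamma_inc * sigma
    return x

For example, on a strongly convex quadratic f(x) = 0.5*x'Ax - b'x the loop reduces to safeguarded gradient steps with cubically regularized step lengths:

A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
x = arc_minimize(lambda x: 0.5 * x @ A @ x - b @ x,
                 lambda x: A @ x - b,
                 lambda x: A,
                 x0=np.zeros(2))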
Original language: English
Pages (from-to): 197-219
Number of pages: 23
Journal: Optimization Methods and Software
Volume: 27
Issue number: 2
Publication status: Published - 1 Apr 2012

Keywords

  • unconstrained optimization
  • worst-case analysis
  • convexity
  • cubic regularization
  • ARC methods

