An adaptive cubic regularization algorithm for nonconvex optimization with convex constraints and its function-evaluation complexity

Coralia Cartis, Nick Gould, Philippe Toint

Research output: Contribution to journal › Article › peer-review


Abstract

The adaptive cubic regularization algorithm described in Cartis et al. (2009, Adaptive cubic regularisation methods for unconstrained optimization. Part I: motivation, convergence and numerical results. Math. Program., 127, 245-295; 2010, Adaptive cubic regularisation methods for unconstrained optimization. Part II: worst-case function- and derivative-evaluation complexity [online]. Math. Program., DOI: 10.1007/s10107-009-0337-y) is adapted to the problem of minimizing a nonlinear, possibly nonconvex, smooth objective function over a convex domain. Convergence to first-order critical points is shown under standard assumptions, without any Lipschitz continuity requirement on the objective's Hessian. A worst-case complexity analysis in terms of evaluations of the problem's function and derivatives is also presented for the Lipschitz continuous case and for a variant of the resulting algorithm. This analysis extends the best-known bound for general unconstrained problems to nonlinear problems with convex constraints. © Crown copyright 2012. Published by Oxford University Press on behalf of the Institute of Mathematics and its Applications. All rights reserved.
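For orientation, the following is a minimal LaTeX sketch of the cubic-regularized subproblem underlying this class of methods; the notation $g_k$, $B_k$, $\sigma_k$ and the feasible set $\mathcal{F}$ follows the unconstrained ARC papers cited above and is an illustrative assumption, not necessarily the notation used in this article.

% Trial step at iterate x_k: approximately minimize a cubic-regularized
% model of the objective over the convex feasible set F (assumed notation).
\[
  s_k \;\approx\; \arg\min_{x_k + s \,\in\, \mathcal{F}}
    \; m_k(s) := f(x_k) + g_k^{\top} s
      + \tfrac{1}{2}\, s^{\top} B_k\, s
      + \tfrac{\sigma_k}{3}\, \|s\|^{3},
\]

where $g_k = \nabla f(x_k)$, $B_k$ approximates the Hessian, and $\sigma_k > 0$ is the regularization weight, adjusted adaptively from the ratio of achieved to predicted decrease. Taking $\mathcal{F} = \mathbb{R}^n$ recovers the unconstrained ARC subproblem of Cartis et al.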
Original language: English
Pages (from-to): 1662-1695
Number of pages: 34
Journal: IMA Journal of Numerical Analysis
Volume: 32
Issue number: 4
DOIs
Publication status: Published - 1 Oct 2012

Keywords

  • global convergence
  • convex constraints
  • numerical algorithms
  • cubic regularisation
  • worst-case complexity
  • nonlinear optimization
