An adaptive cubic regularization algorithm for nonconvex optimization with convex constraints and its function-evaluation complexity

Coralia Cartis, Nick Gould, Philippe Toint

Research output: Contribution to journal › Article



The adaptive cubic regularization algorithm described in Cartis et al. (2009, Adaptive cubic regularisation methods for unconstrained optimization. Part I: motivation, convergence and numerical results. Math. Program., 127, 245-295; 2010, Adaptive cubic regularisation methods for unconstrained optimization. Part II: worst-case function- and derivative-evaluation complexity [online]. Math. Program., DOI: 10.1007/s10107-009-0337-y) is adapted to the problem of minimizing a nonlinear, possibly nonconvex, smooth objective function over a convex domain. Convergence to first-order critical points is shown under standard assumptions, without any Lipschitz continuity requirement on the objective's Hessian. A worst-case complexity analysis in terms of evaluations of the problem's function and derivatives is also presented for the Lipschitz continuous case and for a variant of the resulting algorithm. This analysis extends the best-known bound for general unconstrained problems to nonlinear problems with convex constraints. © Crown copyright 2012. Published by Oxford University Press on behalf of the Institute of Mathematics and its Applications. All rights reserved.
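The abstract summarizes the adaptive cubic regularization (ARC) framework, in which each iteration approximately minimizes a cubic model m(s) = f(x) + gᵀs + ½ sᵀHs + (σ/3)‖s‖³ and adjusts the regularization weight σ according to how well the model predicted the actual decrease. The following is a minimal, hedged sketch of one such unconstrained step (not the authors' algorithm: the model is minimized only along the steepest-descent direction on a crude grid, and the acceptance thresholds 0.1 and the σ update factors are illustrative choices):

```python
import numpy as np

def arc_step(f, grad, hess, x, sigma):
    """One simplified adaptive-cubic-regularization step: approximately
    minimize the cubic model
        m(s) = f(x) + g^T s + 0.5 s^T H s + (sigma/3) ||s||^3
    along the steepest-descent direction (a Cauchy-point-style sketch)."""
    g = grad(x)
    H = hess(x)
    gn = np.linalg.norm(g)
    if gn == 0.0:
        return x, sigma  # first-order critical point
    d = -g / gn  # unit steepest-descent direction
    # 1-D cubic model along d: phi(a) = -a*gn + 0.5*a^2 d^T H d + (sigma/3)*a^3
    c2 = d @ H @ d
    a = np.linspace(1e-8, 10.0, 2000)
    phi = -a * gn + 0.5 * c2 * a**2 + (sigma / 3.0) * a**3
    i = np.argmin(phi)
    s = a[i] * d
    pred = -phi[i]  # predicted decrease of the cubic model
    rho = (f(x) - f(x + s)) / pred if pred > 0 else -1.0
    if rho > 0.1:  # successful step: accept and relax regularization
        return x + s, max(sigma / 2.0, 1e-8)
    return x, 2.0 * sigma  # unsuccessful: reject and increase sigma
```

For instance, iterating `arc_step` on the convex quadratic f(x) = ‖x‖² (with grad 2x and Hessian 2I) drives the gradient toward zero while σ self-adjusts, illustrating why no explicit Lipschitz constant for the Hessian is needed: σ plays that role adaptively.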
Original language: English
Pages (from-to): 1662-1695
Number of pages: 34
Journal: IMA Journal of Numerical Analysis
Issue number: 4
Publication status: Published - 1 Oct 2012



  • global convergence
  • convex constraints
  • numerical algorithms
  • cubic regularisation
  • worst-case complexity
  • nonlinear optimization
