An adaptive cubic regularization algorithm for nonconvex optimization with convex constraints and its function-evaluation complexity

Coralia Cartis, Nick Gould, Philippe Toint

Research output: Contribution to journal › Article


Abstract

The adaptive cubic regularization algorithm described in Cartis et al. (2009, Adaptive cubic regularisation methods for unconstrained optimization. Part I: motivation, convergence and numerical results. Math. Program., 127, 245-295; 2010, Adaptive cubic regularisation methods for unconstrained optimization. Part II: worst-case function- and derivative-evaluation complexity [online]. Math. Program., DOI: 10.1007/s10107-009-0337-y) is adapted to the problem of minimizing a nonlinear, possibly nonconvex, smooth objective function over a convex domain. Convergence to first-order critical points is shown under standard assumptions, without any Lipschitz continuity requirement on the objective's Hessian. A worst-case complexity analysis in terms of evaluations of the problem's function and derivatives is also presented for the Lipschitz continuous case and for a variant of the resulting algorithm. This analysis extends the best-known bound for general unconstrained problems to nonlinear problems with convex constraints. © Crown copyright 2012. Published by Oxford University Press on behalf of the Institute of Mathematics and its Applications. All rights reserved.
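The constrained step sketched in the abstract — minimize a cubic model of the objective over the convex feasible set, then adapt the regularization weight from the ratio of actual to predicted decrease — can be illustrated with a toy implementation. This is not the authors' algorithm: the box constraint, the projected-gradient inner solver, and all parameter values below are illustrative assumptions.

```python
import numpy as np

def arc_box(f, grad, hess, x0, lo, hi, sigma0=1.0, tol=1e-6, max_iter=100):
    """Toy adaptive-cubic-regularization loop for minimizing f over the
    box [lo, hi] (a simple convex set).  Each outer step approximately
    minimizes the cubic model
        m(s) = f(x) + g^T s + 0.5 s^T H s + (sigma/3) ||s||^3
    over feasible shifts s by projected gradient, then updates sigma
    from the ratio rho of actual to predicted decrease."""
    x, sigma = x0.astype(float), sigma0
    for _ in range(max_iter):
        g, H = grad(x), hess(x)
        # First-order criticality for the constrained problem: size of the
        # projected gradient step (zero at a constrained critical point).
        crit = np.linalg.norm(np.clip(x - g, lo, hi) - x)
        if crit < tol:
            break
        # Approximately minimize the cubic model with projected gradient.
        s = np.zeros_like(x)
        for _ in range(50):
            m_grad = g + H @ s + sigma * np.linalg.norm(s) * s
            step = 1.0 / (np.linalg.norm(H) + sigma * np.linalg.norm(s) + 1.0)
            s_new = np.clip(x + s - step * m_grad, lo, hi) - x
            if np.linalg.norm(s_new - s) < 1e-10:
                s = s_new
                break
            s = s_new
        pred = -(g @ s + 0.5 * s @ H @ s
                 + sigma / 3.0 * np.linalg.norm(s) ** 3)
        if pred <= 0:               # model step useless: regularize harder
            sigma *= 2.0
            continue
        rho = (f(x) - f(x + s)) / pred
        if rho > 0.1:               # successful step: accept the point
            x = x + s
            if rho > 0.9:           # very successful: relax regularization
                sigma = max(sigma / 2.0, 1e-8)
        else:                       # unsuccessful: increase regularization
            sigma *= 2.0
    return x
```

On a smooth objective whose unconstrained minimizer lies outside the box, the loop converges to the boundary point where the projected gradient vanishes, which is the first-order criticality notion the paper's analysis uses.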
Original language: English
Pages (from-to): 1662-1695
Number of pages: 34
Journal: IMA Journal of Numerical Analysis
Volume: 32
Issue number: 4
DOI: 10.1093/imanum/drr035
Publication status: Published - 1 Oct 2012


Keywords

  • global convergence
  • convex constraints
  • numerical algorithms
  • cubic regularisation
  • worst-case complexity
  • nonlinear optimization

Cite this

@article{5658c8f618334ccf8b463fd03cba486f,
title = "An adaptive cubic regularization algorithm for nonconvex optimization with convex constraints and its function-evaluation complexity",
abstract = "The adaptive cubic regularization algorithm described in Cartis et al. (2009, Adaptive cubic regularisation methods for unconstrained optimization. Part I: motivation, convergence and numerical results. Math. Program., 127, 245-295; 2010, Adaptive cubic regularisation methods for unconstrained optimization. Part II: worst-case function- and derivative-evaluation complexity [online]. Math. Program., DOI: 10.1007/s10107-009-0337-y) is adapted to the problem of minimizing a nonlinear, possibly nonconvex, smooth objective function over a convex domain. Convergence to first-order critical points is shown under standard assumptions, without any Lipschitz continuity requirement on the objective's Hessian. A worst-case complexity analysis in terms of evaluations of the problem's function and derivatives is also presented for the Lipschitz continuous case and for a variant of the resulting algorithm. This analysis extends the best-known bound for general unconstrained problems to nonlinear problems with convex constraints. {\circledC} Crown copyright 2012. Published by Oxford University Press on behalf of the Institute of Mathematics and its Applications. All rights reserved.",
keywords = "global convergence, convex constraints, numerical algorithms, cubic regularisation, worst-case complexity, nonlinear optimization",
author = "Coralia Cartis and Nick Gould and Philippe Toint",
note = "Copyright 2012 Elsevier B.V., All rights reserved.",
year = "2012",
month = "10",
day = "1",
doi = "10.1093/imanum/drr035",
language = "English",
volume = "32",
pages = "1662--1695",
journal = "IMA Journal of Numerical Analysis",
issn = "0272-4979",
publisher = "Oxford University Press",
number = "4",

}

