Evaluation complexity of adaptive cubic regularization methods for convex unconstrained optimization

Philippe Toint, Coralia Cartis, Nick Gould

Research output: Contribution to journal › Article

Abstract

The adaptive cubic regularization algorithms described in Cartis, Gould and Toint [Adaptive cubic regularisation methods for unconstrained optimization Part II: Worst-case function- and derivative-evaluation complexity, Math. Program. (2010), doi:10.1007/s10107-009-0337-y (online)]; [Part I: Motivation, convergence and numerical results, Math. Program. 127(2) (2011), pp. 245-295] for unconstrained (nonconvex) optimization are shown to have improved worst-case efficiency in terms of the function- and gradient-evaluation count when applied to convex and strongly convex objectives. In particular, our complexity upper bounds match in order (as a function of the accuracy of approximation), and sometimes even improve, those obtained by Nesterov [Introductory Lectures on Convex Optimization, Kluwer Academic Publishers, Dordrecht, 2004; Accelerating the cubic regularization of Newton's method on convex problems, Math. Program. 112(1) (2008), pp. 159-181] and Nesterov and Polyak [Cubic regularization of Newton's method and its global performance, Math. Program. 108(1) (2006), pp. 177-205] for these same problem classes, without requiring exact Hessians or exact or global solution of the subproblem. An additional outcome of our approximate approach is that our complexity results can naturally capture the advantages of both first- and second-order methods. © 2012 Taylor & Francis.
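For readers unfamiliar with the method family, the following is a minimal, illustrative Python sketch of an ARC-style iteration. At each iterate it decreases the cubic model m(s) = f(x) + gᵀs + ½ sᵀBs + (σ/3)‖s‖³ only approximately — here via a closed-form Cauchy step along the steepest-descent direction, which is in the spirit of the paper's point that no exact or global subproblem solve is needed — and adapts σ from the ratio of actual to predicted decrease. The function names, parameter values (eta1, eta2, the gamma factors), and the Cauchy-point subproblem solver are illustrative assumptions, not the specific variants analysed in the paper.

```python
import numpy as np

def cauchy_step(g, B, sigma):
    """Approximate minimizer of the cubic model along s = -alpha * g.

    The 1-D model phi(alpha) = -alpha*||g||^2 + 0.5*alpha^2*(g'Bg)
    + (sigma/3)*alpha^3*||g||^3 has a unique positive stationary point,
    found from the quadratic equation phi'(alpha) = 0.
    """
    gnorm = np.linalg.norm(g)
    a = sigma * gnorm**3       # quadratic coefficient of phi'
    b = g @ B @ g              # linear coefficient
    c = -gnorm**2              # constant term (< 0, so a positive root exists)
    alpha = (-b + np.sqrt(b**2 - 4.0 * a * c)) / (2.0 * a)
    return -alpha * g

def arc(f, grad, hess, x0, sigma0=1.0, eta1=0.1, eta2=0.9,
        gamma_dec=0.5, gamma_inc=2.0, tol=1e-8, max_iter=500):
    """Basic adaptive cubic regularization loop (illustrative sketch only)."""
    x, sigma = np.asarray(x0, dtype=float), sigma0
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) <= tol:
            break
        B = hess(x)
        s = cauchy_step(g, B, sigma)
        # Predicted decrease of the cubic model, m(0) - m(s) > 0.
        pred = -(g @ s + 0.5 * s @ B @ s
                 + (sigma / 3.0) * np.linalg.norm(s)**3)
        rho = (f(x) - f(x + s)) / pred   # actual vs. predicted reduction
        if rho >= eta1:                  # successful step: accept it
            x = x + s
        if rho >= eta2:                  # very successful: relax sigma
            sigma = max(gamma_dec * sigma, 1e-12)
        elif rho < eta1:                 # unsuccessful: regularize more
            sigma = gamma_inc * sigma
    return x

# Usage on a strongly convex quadratic f(x) = 0.5 x'Ax - b'x.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])
f = lambda x: 0.5 * x @ A @ x - b @ x
x_star = arc(f, grad=lambda x: A @ x - b, hess=lambda x: A, x0=np.zeros(2))
print(x_star, np.linalg.solve(A, b))  # the two should agree closely
```

Note that with only Cauchy steps this sketch reduces to a first-order scheme; using better (still approximate) subproblem minimizers recovers the second-order behaviour whose complexity advantages the paper quantifies.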
Original language: English
Pages (from-to): 197-219
Number of pages: 23
Journal: Optimization Methods and Software
Volume: 27
Issue number: 2
DOI: 10.1080/10556788.2011.602076
Publication status: Published - 1 Apr 2012

Cite this

@article{53df4e9ac678454ea957dc6ae3d3c933,
title = "Evaluation complexity of adaptive cubic regularization methods for convex unconstrained optimization",
abstract = "The adaptive cubic regularization algorithms described in Cartis, Gould and Toint [Adaptive cubic regularisation methods for unconstrained optimization Part II: Worst-case function- and derivative-evaluation complexity, Math. Program. (2010), doi:10.1007/s10107-009-0337-y (online)]; [Part I: Motivation, convergence and numerical results, Math. Program. 127(2) (2011), pp. 245-295] for unconstrained (nonconvex) optimization are shown to have improved worst-case efficiency in terms of the function- and gradient-evaluation count when applied to convex and strongly convex objectives. In particular, our complexity upper bounds match in order (as a function of the accuracy of approximation), and sometimes even improve, those obtained by Nesterov [Introductory Lectures on Convex Optimization, Kluwer Academic Publishers, Dordrecht, 2004; Accelerating the cubic regularization of Newton's method on convex problems, Math. Program. 112(1) (2008), pp. 159-181] and Nesterov and Polyak [Cubic regularization of Newton's method and its global performance, Math. Program. 108(1) (2006), pp. 177-205] for these same problem classes, without requiring exact Hessians or exact or global solution of the subproblem. An additional outcome of our approximate approach is that our complexity results can naturally capture the advantages of both first- and second-order methods. {\circledC} 2012 Taylor & Francis.",
keywords = "unconstrained optimization, worst-case analysis, convexity, cubic regularization, ARC methods",
author = "Philippe Toint and Coralia Cartis and Nick Gould",
note = "Publication code : FP SB092/2010/05 ; SB04977/2010/05",
year = "2012",
month = "4",
day = "1",
doi = "10.1080/10556788.2011.602076",
language = "English",
volume = "27",
pages = "197--219",
journal = "Optimization Methods and Software",
issn = "1055-6788",
publisher = "Taylor & Francis",
number = "2",

}

Evaluation complexity of adaptive cubic regularization methods for convex unconstrained optimization. / Toint, Philippe; Cartis, Coralia; Gould, Nick.

In: Optimization Methods and Software, Vol. 27, No. 2, 01.04.2012, p. 197-219.

Research output: Contribution to journal › Article

TY - JOUR

T1 - Evaluation complexity of adaptive cubic regularization methods for convex unconstrained optimization

AU - Toint, Philippe

AU - Cartis, Coralia

AU - Gould, Nick

N1 - Publication code : FP SB092/2010/05 ; SB04977/2010/05

PY - 2012/4/1

Y1 - 2012/4/1

AB - The adaptive cubic regularization algorithms described in Cartis, Gould and Toint [Adaptive cubic regularisation methods for unconstrained optimization Part II: Worst-case function- and derivative-evaluation complexity, Math. Program. (2010), doi:10.1007/s10107-009-0337-y (online)]; [Part I: Motivation, convergence and numerical results, Math. Program. 127(2) (2011), pp. 245-295] for unconstrained (nonconvex) optimization are shown to have improved worst-case efficiency in terms of the function- and gradient-evaluation count when applied to convex and strongly convex objectives. In particular, our complexity upper bounds match in order (as a function of the accuracy of approximation), and sometimes even improve, those obtained by Nesterov [Introductory Lectures on Convex Optimization, Kluwer Academic Publishers, Dordrecht, 2004; Accelerating the cubic regularization of Newton's method on convex problems, Math. Program. 112(1) (2008), pp. 159-181] and Nesterov and Polyak [Cubic regularization of Newton's method and its global performance, Math. Program. 108(1) (2006), pp. 177-205] for these same problem classes, without requiring exact Hessians or exact or global solution of the subproblem. An additional outcome of our approximate approach is that our complexity results can naturally capture the advantages of both first- and second-order methods. © 2012 Taylor & Francis.

KW - unconstrained optimization

KW - worst-case analysis

KW - convexity

KW - cubic regularization

KW - ARC methods

UR - http://www.scopus.com/inward/record.url?scp=84859180670&partnerID=8YFLogxK

U2 - 10.1080/10556788.2011.602076

DO - 10.1080/10556788.2011.602076

M3 - Article

VL - 27

SP - 197

EP - 219

JO - Optimization Methods and Software

JF - Optimization Methods and Software

SN - 1055-6788

IS - 2

ER -