TY - JOUR
T1 - Evaluation complexity of adaptive cubic regularization methods for convex unconstrained optimization
AU - Cartis, Coralia
AU - Gould, Nicholas I. M.
AU - Toint, Philippe L.
N1 - Publication code: FP SB092/2010/05; SB04977/2010/05
PY - 2012/4/1
Y1 - 2012/4/1
AB - The adaptive cubic regularization algorithms described in Cartis, Gould and Toint [Adaptive cubic regularisation methods for unconstrained optimization. Part II: Worst-case function- and derivative-evaluation complexity, Math. Program. (2010), doi:10.1007/s10107-009-0337-y (online); Part I: Motivation, convergence and numerical results, Math. Program. 127(2) (2011), pp. 245-295] for unconstrained (nonconvex) optimization are shown to have improved worst-case efficiency in terms of the function- and gradient-evaluation count when applied to convex and strongly convex objectives. In particular, our complexity upper bounds match in order (as a function of the accuracy of approximation), and sometimes even improve on, those obtained by Nesterov [Introductory Lectures on Convex Optimization, Kluwer Academic Publishers, Dordrecht, 2004; Accelerating the cubic regularization of Newton's method on convex problems, Math. Program. 112(1) (2008), pp. 159-181] and Nesterov and Polyak [Cubic regularization of Newton's method and its global performance, Math. Program. 108(1) (2006), pp. 177-205] for these same problem classes, without requiring exact Hessians or exact or global solution of the subproblem. An additional outcome of our approximate approach is that our complexity results can naturally capture the advantages of both first- and second-order methods.
KW - unconstrained optimization
KW - worst-case analysis
KW - convexity
KW - cubic regularization
KW - ARC methods
UR - http://www.scopus.com/inward/record.url?scp=84859180670&partnerID=8YFLogxK
U2 - 10.1080/10556788.2011.602076
DO - 10.1080/10556788.2011.602076
M3 - Article
VL - 27
SP - 197
EP - 219
JO - Optimization Methods and Software
JF - Optimization Methods and Software
IS - 2
ER -