Universal regularization methods: varying the power, the smoothness and the accuracy

Coralia Cartis, Nick I. Gould, Philippe L. Toint

Research output: Contribution to journal › Article › peer-review


Abstract

Adaptive cubic regularization methods have emerged as a credible alternative to linesearch and trust-region methods for smooth nonconvex optimization, with optimal complexity amongst second-order methods. Here we consider a general, new class of adaptive regularization methods that use first- or higher-order local Taylor models of the objective regularized by any power of the step size, applied to convexly constrained optimization problems. We investigate the worst-case evaluation complexity (global rate of convergence) of these algorithms when the level of sufficient smoothness of the objective may be unknown or may even be absent. We find that the methods accurately reflect in their complexity the degree of smoothness of the objective and satisfy increasingly better bounds with improving model accuracy. The bounds vary continuously and robustly with respect to the regularization power, the accuracy of the model, and the degree of smoothness of the objective.
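For intuition, the step computation the abstract refers to can be sketched as follows. This is a hedged illustration in generic notation, not a verbatim reproduction of the paper's algorithm: $f$ denotes the objective, $T_p(x_k,s)$ its $p$th-order Taylor expansion at the iterate $x_k$, $\sigma_k>0$ an adaptive regularization weight, $r$ the regularization power, and $\mathcal{F}$ the convex feasible set (all symbols are assumed names introduced here for illustration).

\[
  m_k(s) \;=\; T_p(x_k, s) \;+\; \frac{\sigma_k}{r}\,\|s\|^{r},
  \qquad
  x_{k+1} \;=\; x_k + s_k
  \quad\text{with}\quad
  s_k \;\approx\; \operatorname*{arg\,min}_{\,x_k + s \,\in\, \mathcal{F}} \; m_k(s).
\]

After the trial step, $\sigma_k$ is typically increased or decreased depending on whether the achieved decrease in $f$ matches the decrease predicted by $m_k$; varying the model order $p$ (accuracy) and the power $r$ gives the family of methods whose worst-case complexity is analysed in the paper.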

Original language: English
Pages (from-to): 595-615
Number of pages: 21
Journal: SIAM Journal on Optimization
Volume: 29
Issue number: 1
DOIs
Publication status: Published - 1 Jan 2019

Keywords

  • Evaluation complexity
  • Regularization methods
  • Worst-case analysis

