Universal regularization methods: varying the power, the smoothness and the accuracy

Coralia Cartis, Nick I. Gould, Philippe L. Toint

Research output: Contribution to journal › Article

Abstract

Adaptive cubic regularization methods have emerged as a credible alternative to linesearch and trust-region for smooth nonconvex optimization, with optimal complexity amongst second-order methods. Here we consider a general/new class of adaptive regularization methods that use first- or higher-order local Taylor models of the objective regularized by a(ny) power of the step size and applied to convexly constrained optimization problems. We investigate the worst-case evaluation complexity/global rate of convergence of these algorithms, when the level of sufficient smoothness of the objective may be unknown or may even be absent. We find that the methods accurately reflect in their complexity the degree of smoothness of the objective and satisfy increasingly better bounds with improving model accuracy. The bounds vary continuously and robustly with respect to the regularization power and accuracy of the model and the degree of smoothness of the objective.

Original language: English
Pages (from-to): 595-615
Number of pages: 21
Journal: SIAM Journal on Optimization
Volume: 29
Issue number: 1
DOI: 10.1137/16M1106316
Publication status: Published - 1 Jan 2019

Cite this

@article{4b090166246c4d3f81867fe3275103ba,
title = "Universal regularization methods: varying the power, the smoothness and the accuracy",
abstract = "Adaptive cubic regularization methods have emerged as a credible alternative to linesearch and trust-region for smooth nonconvex optimization, with optimal complexity amongst second-order methods. Here we consider a general/new class of adaptive regularization methods that use first- or higher-order local Taylor models of the objective regularized by a(ny) power of the step size and applied to convexly constrained optimization problems. We investigate the worst-case evaluation complexity/global rate of convergence of these algorithms, when the level of sufficient smoothness of the objective may be unknown or may even be absent. We find that the methods accurately reflect in their complexity the degree of smoothness of the objective and satisfy increasingly better bounds with improving model accuracy. The bounds vary continuously and robustly with respect to the regularization power and accuracy of the model and the degree of smoothness of the objective.",
keywords = "Evaluation complexity, Regularization methods, Worst-case analysis",
author = "Coralia Cartis and Gould, {Nick I.} and Toint, {Philippe L.}",
year = "2019",
month = "1",
day = "1",
doi = "10.1137/16M1106316",
language = "English",
volume = "29",
pages = "595--615",
journal = "SIAM Journal on Optimization",
issn = "1052-6234",
publisher = "Society for Industrial and Applied Mathematics",
number = "1",

}

Universal regularization methods: varying the power, the smoothness and the accuracy. / Cartis, Coralia; Gould, Nick I.; Toint, Philippe L.

In: SIAM Journal on Optimization, Vol. 29, No. 1, 01.01.2019, p. 595-615.

TY - JOUR

T1 - Universal regularization methods

T2 - varying the power, the smoothness and the accuracy

AU - Cartis, Coralia

AU - Gould, Nick I.

AU - Toint, Philippe L.

PY - 2019/1/1

Y1 - 2019/1/1

N2 - Adaptive cubic regularization methods have emerged as a credible alternative to linesearch and trust-region for smooth nonconvex optimization, with optimal complexity amongst second-order methods. Here we consider a general/new class of adaptive regularization methods that use first- or higher-order local Taylor models of the objective regularized by a(ny) power of the step size and applied to convexly constrained optimization problems. We investigate the worst-case evaluation complexity/global rate of convergence of these algorithms, when the level of sufficient smoothness of the objective may be unknown or may even be absent. We find that the methods accurately reflect in their complexity the degree of smoothness of the objective and satisfy increasingly better bounds with improving model accuracy. The bounds vary continuously and robustly with respect to the regularization power and accuracy of the model and the degree of smoothness of the objective.

AB - Adaptive cubic regularization methods have emerged as a credible alternative to linesearch and trust-region for smooth nonconvex optimization, with optimal complexity amongst second-order methods. Here we consider a general/new class of adaptive regularization methods that use first- or higher-order local Taylor models of the objective regularized by a(ny) power of the step size and applied to convexly constrained optimization problems. We investigate the worst-case evaluation complexity/global rate of convergence of these algorithms, when the level of sufficient smoothness of the objective may be unknown or may even be absent. We find that the methods accurately reflect in their complexity the degree of smoothness of the objective and satisfy increasingly better bounds with improving model accuracy. The bounds vary continuously and robustly with respect to the regularization power and accuracy of the model and the degree of smoothness of the objective.

KW - Evaluation complexity

KW - Regularization methods

KW - Worst-case analysis

UR - http://www.scopus.com/inward/record.url?scp=85063291604&partnerID=8YFLogxK

U2 - 10.1137/16M1106316

DO - 10.1137/16M1106316

M3 - Article

AN - SCOPUS:85063291604

VL - 29

SP - 595

EP - 615

JO - SIAM Journal on Optimization

JF - SIAM Journal on Optimization

SN - 1052-6234

IS - 1

ER -