TY - JOUR
T1 - Adaptive regularization algorithms with inexact evaluations for nonconvex optimization
AU - Bellavia, Stefania
AU - Gurioli, Gianmarco
AU - Morini, Benedetta
AU - Toint, Philippe
N1 - Funding Information:
The first, second, and third authors are members of the INdAM Research Group GNCS. INdAM-GNCS partially supported the first and third authors under Progetti di Ricerca 2018. The second author was partially supported by INdAM through a GNCS grant. The fourth author gratefully acknowledges the support and friendly environment provided by the Department of Industrial Engineering at the Università degli Studi, Florence, Italy, during his visit in fall 2018. The authors are also indebted to two careful referees, whose comments and perceptive questions have resulted in a significant improvement of the manuscript.
Publisher Copyright:
© 2019 Society for Industrial and Applied Mathematics.
PY - 2020/1/2
Y1 - 2020/1/2
N2 - A regularization algorithm using inexact function values and inexact derivatives is proposed and its evaluation complexity analyzed. This algorithm is applicable to unconstrained problems and to problems with inexpensive constraints (that is, constraints whose evaluation and enforcement has negligible cost) under the assumption that the derivative of highest degree is β-Hölder continuous. It features a very flexible adaptive mechanism for determining the inexactness which is allowed, at each iteration, when computing objective function values and derivatives. The complexity analysis covers arbitrary optimality order and arbitrary degree of available approximate derivatives. It extends results of Cartis, Gould, and Toint [SIAM J. Optim., to appear] on the evaluation complexity to the inexact case: if a qth-order minimizer is sought using approximations to the first p derivatives, it is proved that a suitable approximate minimizer within ε is computed by the proposed algorithm in at most O(ε^{-(p+β)/(p-q+β)}) iterations and at most O(|log(ε)| ε^{-(p+β)/(p-q+β)}) approximate evaluations. An algorithmic variant, although more rigid in practice, can be proved to find such an approximate minimizer in O(|log(ε)| + ε^{-(p+β)/(p-q+β)}) evaluations. While the proposed framework remains so far conceptual for high degrees and orders, it is shown to yield simple and computationally realistic inexact methods when specialized to the unconstrained and bound-constrained first- and second-order cases. The deterministic complexity results are finally extended to the stochastic context, yielding adaptive sample-size rules for subsampling methods typical of machine learning.
AB - A regularization algorithm using inexact function values and inexact derivatives is proposed and its evaluation complexity analyzed. This algorithm is applicable to unconstrained problems and to problems with inexpensive constraints (that is, constraints whose evaluation and enforcement has negligible cost) under the assumption that the derivative of highest degree is β-Hölder continuous. It features a very flexible adaptive mechanism for determining the inexactness which is allowed, at each iteration, when computing objective function values and derivatives. The complexity analysis covers arbitrary optimality order and arbitrary degree of available approximate derivatives. It extends results of Cartis, Gould, and Toint [SIAM J. Optim., to appear] on the evaluation complexity to the inexact case: if a qth-order minimizer is sought using approximations to the first p derivatives, it is proved that a suitable approximate minimizer within ε is computed by the proposed algorithm in at most O(ε^{-(p+β)/(p-q+β)}) iterations and at most O(|log(ε)| ε^{-(p+β)/(p-q+β)}) approximate evaluations. An algorithmic variant, although more rigid in practice, can be proved to find such an approximate minimizer in O(|log(ε)| + ε^{-(p+β)/(p-q+β)}) evaluations. While the proposed framework remains so far conceptual for high degrees and orders, it is shown to yield simple and computationally realistic inexact methods when specialized to the unconstrained and bound-constrained first- and second-order cases. The deterministic complexity results are finally extended to the stochastic context, yielding adaptive sample-size rules for subsampling methods typical of machine learning.
KW - evaluation complexity
KW - regularization methods
KW - inexact functions and derivatives
KW - subsampling methods
KW - machine learning
UR - http://www.scopus.com/inward/record.url?scp=85076364479&partnerID=8YFLogxK
U2 - 10.1137/18M1226282
DO - 10.1137/18M1226282
M3 - Article
SN - 1052-6234
VL - 29
SP - 2881
EP - 2915
JO - SIAM Journal on Optimization
JF - SIAM Journal on Optimization
IS - 4
ER -