OFFO minimization algorithms for second-order optimality and their complexity

Serge Gratton, Philippe Toint

Research output: Contribution to journal › Article › Peer-reviewed


Abstract

An Adagrad-inspired class of algorithms for smooth unconstrained optimization
is presented in which the objective function is never evaluated, yet the
gradient norms decrease at least as fast as O(1/\sqrt{k+1}) while
second-order optimality measures converge to zero at least as fast as
O(1/(k+1)^{1/3}). This latter rate of convergence is shown to be
essentially sharp and is identical to that known for more standard algorithms
(like trust-region or adaptive-regularization methods) using both function and
derivative evaluations. A related "divergent stepsize" method is also
described, whose essentially sharp rate of convergence is slightly inferior.
It is finally discussed how to obtain weaker second-order optimality
guarantees at a (much) reduced computational cost.
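
For illustration only, the following is a minimal Python sketch of a generic Adagrad-style, objective-function-free (OFFO) iteration of the kind the abstract refers to: only gradients are used and the objective value is never evaluated. It is not the paper's second-order algorithm; the function name `adagrad_offo`, the step parameters, and the test gradient are illustrative assumptions.

```python
import numpy as np

def adagrad_offo(grad, x0, iters=1000, eta=1.0, eps=1e-8):
    """Objective-function-free Adagrad-style iteration (illustrative sketch).

    Only gradient evaluations are used; the objective value is never
    computed, mirroring the OFFO setting described in the abstract.
    """
    x = np.asarray(x0, dtype=float)
    v = np.zeros_like(x)                      # running sum of squared gradient entries
    grad_norms = []
    for k in range(iters):
        g = grad(x)
        grad_norms.append(np.linalg.norm(g))
        v += g * g                            # Adagrad accumulator
        x = x - eta * g / (eps + np.sqrt(v))  # coordinate-wise scaled step
    return x, grad_norms

# Illustrative use on a simple smooth function f(x, y) = x^2 + x*cos(y)
# (not an example from the paper).
if __name__ == "__main__":
    grad = lambda z: np.array([2 * z[0] + np.cos(z[1]), -z[0] * np.sin(z[1])])
    x_final, norms = adagrad_offo(grad, x0=[2.0, 1.0], iters=500)
    print(x_final, norms[-1])
```

In such a sketch, the gradient norms along the iterates are the only available measure of progress, which is why the complexity guarantees quoted above are stated in terms of gradient norms and second-order optimality measures rather than objective decrease.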
Original language: English
Pages (from-to): 573-607
Number of pages: 35
Journal: Computational Optimization and Applications
Volume: 84
Issue number: 2
DOIs:
Publication status: Published - 15 Feb 2023

