Convergence properties of an Objective-Function-Free Optimization regularization algorithm, including an O(epsilon^{-3/2}) complexity bound

Serge Gratton, Sadok Jerad, Philippe Toint

Research output: Contribution to journal › Article › Peer review


Abstract

An adaptive regularization algorithm for unconstrained nonconvex optimization is presented in which the objective function is never evaluated; only its derivatives are used. This algorithm belongs to the class of adaptive regularization methods, for which optimal worst-case complexity results are known in the standard framework where the objective function is evaluated. It is shown in this paper that these excellent complexity bounds are also valid for the new algorithm, despite the fact that significantly less information is used. In particular, it is shown that, if derivatives of degree one to $p$ are used, the algorithm will find an $\epsilon_1$-approximate first-order minimizer in at most $\mathcal{O}(\epsilon_1^{-(p+1)/p})$ iterations, and an $(\epsilon_1,\epsilon_2)$-approximate second-order minimizer in at most $\mathcal{O}\big(\max(\epsilon_1^{-(p+1)/p},\epsilon_2^{-(p+1)/(p-1)})\big)$ iterations. As a special case, the new algorithm using first and second derivatives, when applied to functions with Lipschitz continuous Hessian, will find an iterate $x_k$ that is an $\epsilon_1$-approximate first-order minimizer in at most $\mathcal{O}(\epsilon_1^{-3/2})$ iterations.
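To make the flavour of such a scheme concrete, the sketch below shows one possible objective-function-free adaptive-regularization loop for the case p = 2: each step minimizes the second-order Taylor model plus a cubic regularization term, and the regularization parameter is adapted from derivative information only, so the objective f is never evaluated. The sigma update shown (an Adagrad-style accumulation of squared gradient norms), the names offo_ar2 and cubic_step, and the test function are illustrative assumptions, not the update rule analysed in the paper.

```python
# A minimal illustrative sketch (not the paper's algorithm) of an
# objective-function-free adaptive-regularization iteration with p = 2.
import numpy as np

def cubic_step(g, H, sigma):
    """Global minimizer of g.s + 0.5 s.H.s + (sigma/3)||s||^3 (easy case only).

    Solves the secular equation lam = sigma * ||(H + lam*I)^{-1} g|| by
    bisection; the rare 'hard case' is not handled in this sketch.
    """
    n = len(g)
    lam_lo = max(0.0, -np.linalg.eigvalsh(H)[0]) + 1e-12
    phi = lambda lam: np.linalg.norm(np.linalg.solve(H + lam * np.eye(n), -g))
    lam_hi = max(lam_lo, sigma * np.linalg.norm(g)) + 1.0
    while lam_hi - sigma * phi(lam_hi) < 0.0:      # bracket the root
        lam_hi *= 2.0
    for _ in range(80):                            # bisection on lam - sigma*phi(lam)
        lam = 0.5 * (lam_lo + lam_hi)
        if lam - sigma * phi(lam) < 0.0:
            lam_lo = lam
        else:
            lam_hi = lam
    return np.linalg.solve(H + lam_hi * np.eye(n), -g)

def offo_ar2(grad, hess, x0, eps1=1e-6, sigma0=1.0, max_iter=200):
    """Iterate until ||grad(x)|| <= eps1; the objective itself is never called."""
    x, acc = np.asarray(x0, dtype=float), 0.0
    for k in range(max_iter):
        g, H = grad(x), hess(x)
        if np.linalg.norm(g) <= eps1:
            return x, k
        acc += np.linalg.norm(g) ** 2              # derivative-only history
        sigma = sigma0 * np.sqrt(1.0 + acc)        # hypothetical OFFO-style update
        x = x + cubic_step(g, H, sigma)
    return x, max_iter

if __name__ == "__main__":
    # Small nonconvex test: f(x, y) = x^4/4 + y^2/2 - x*y, minimizers at (1, 1) and (-1, -1).
    grad = lambda x: np.array([x[0] ** 3 - x[1], x[1] - x[0]])
    hess = lambda x: np.array([[3.0 * x[0] ** 2, -1.0], [-1.0, 1.0]])
    x_star, iters = offo_ar2(grad, hess, np.array([2.0, 2.0]))
    print(x_star, iters)
```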
Original language: English
Pages (from - to): 1621-1646
Journal: SIAM Journal on Optimization
Volume: 33
Issue number: 3
Publication status: Published - Feb. 2023
