Convergence properties of an Objective-Function-Free Optimization regularization algorithm, including an O(epsilon^{-3/2}) complexity bound

Serge Gratton, Sadok Jerad, Philippe Toint

Research output: Contribution to journal › Article › peer-review


Abstract

An adaptive regularization algorithm for unconstrained nonconvex
optimization is presented in which the objective function is never
evaluated, but only derivatives are used. This algorithm belongs to
the class of adaptive regularization methods, for which optimal
worst-case complexity results are known for the standard framework
where the objective function is evaluated. It is shown in this paper
that these excellent complexity bounds are also valid for the new
algorithm, despite the fact that significantly less information is
used. In particular, it is shown that, if derivatives of degree one to
p are used, the algorithm will find an epsilon_1-approximate
first-order minimizer in at most O(epsilon_1^{-(p+1)/p})
iterations, and an (epsilon_1, epsilon_2)-approximate second-order
minimizer in at most O(max(epsilon_1^{-(p+1)/p}, epsilon_2^{-(p+1)/(p-1)}))
iterations. As a special case, the new algorithm using first and second
derivatives, when applied to functions with Lipschitz continuous Hessian,
will find an epsilon_1-approximate first-order minimizer in at most
O(epsilon_1^{-3/2}) iterations.
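To give the flavor of the p = 2 case, the sketch below takes adaptive-regularization-style (AR2-type) steps using only gradients and Hessians, never evaluating the objective. It is a minimal illustration, not the paper's algorithm: the regularization weight sigma is held fixed (the paper's objective-function-free adaptive sigma update is not reproduced here), the cubic subproblem is solved by bisection on the secular equation, and the so-called "hard case" of that subproblem is ignored.

```python
import numpy as np

def cubic_step(g, H, sigma):
    """Solve min_s g's + 0.5 s'Hs + (sigma/3)||s||^3 via the secular
    equation (H + lam*I)s = -g with lam = sigma*||s||, by bisection on
    lam.  The "hard case" (g orthogonal to the minimal eigenspace of a
    indefinite H) is not handled in this sketch."""
    n = len(g)
    lam_min = np.linalg.eigvalsh(H)[0]
    lo = max(0.0, -lam_min) + 1e-12       # keep H + lam*I positive definite
    def phi(lam):
        s = np.linalg.solve(H + lam * np.eye(n), -g)
        return np.linalg.norm(s) - lam / sigma
    hi = lo + 1.0
    while phi(hi) > 0:                    # grow bracket until sign changes
        hi *= 2.0
    for _ in range(200):                  # bisect to the root of phi
        mid = 0.5 * (lo + hi)
        if phi(mid) > 0:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    return np.linalg.solve(H + lam * np.eye(n), -g)

def offo_ar2_sketch(grad, hess, x0, sigma=1.0, eps=1e-8, max_iter=100):
    """Regularized Newton iteration driven only by derivative values:
    the objective function itself is never evaluated."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) <= eps:
            break
        x = x + cubic_step(g, hess(x), sigma)
    return x

# Illustrative nonconvex problem f(x, y) = (x^2 - 1)^2 + y^2,
# with minimizers at (+1, 0) and (-1, 0).
grad = lambda v: np.array([4 * v[0] * (v[0] ** 2 - 1), 2 * v[1]])
hess = lambda v: np.array([[12 * v[0] ** 2 - 4, 0.0], [0.0, 2.0]])
x_star = offo_ar2_sketch(grad, hess, [0.5, 1.0])
```

The starting point (0.5, 1) lies in a region of negative curvature, so the regularized step (rather than a plain Newton step) is what drives the iterates toward a minimizer.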
Original language: English
Pages (from-to): 1621-1646
Journal: SIAM Journal on Optimization
Volume: 33
Issue number: 3
Publication status: Published - Feb 2023
