# OFFO minimization algorithms for second-order optimality and their complexity

Philippe TOINT, Serge Gratton

Research output: Working paper

## Abstract

A class of OFFO (objective-function-free optimization) minimization algorithms
is presented in which the objective function is never evaluated and yet the
gradient norms decrease at least as fast as O(1/\sqrt{k+1}), while
second-order optimality measures converge to zero at least as fast as
O(1/(k+1)^{1/3}). This latter rate of convergence is shown to be
essentially sharp and is identical to that known for more standard algorithms
(like trust-region or adaptive-regularization methods) using both function and
derivative evaluations. A related "divergent stepsize" method is also
described, whose essentially sharp rate of convergence is slightly inferior. It
is finally discussed how to obtain weaker second-order optimality guarantees
at a (much) reduced computational cost.
Original language: English
Published: 7 Mar 2022
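
To give a flavour of the objective-function-free idea described in the abstract, here is a minimal sketch of an Adagrad-style first-order OFFO iteration. This is an illustrative assumption, not the paper's algorithm: the function name `offo_adagrad` and the parameters `zeta` and `eps` are hypothetical choices, and the paper's methods additionally exploit second-order information to reach the second-order optimality guarantees stated above.

```python
import numpy as np

# Hypothetical OFFO (objective-function-free) sketch in the Adagrad spirit:
# only gradients are ever queried, f(x) itself is never evaluated, and no
# line search or acceptance test is performed.

def offo_adagrad(grad, x0, iters=1000, zeta=1.0, eps=1e-8):
    x = np.asarray(x0, dtype=float)
    acc = np.zeros_like(x)  # running sum of squared gradient entries
    for _ in range(iters):
        g = grad(x)
        acc += g * g
        # Per-coordinate stepsize shrinks as gradient history accumulates,
        # so each step is taken without ever computing f(x).
        x -= zeta * g / np.sqrt(acc + eps)
    return x

# Toy usage: for f(x) = 0.5 * ||x||^2 the gradient is x itself.
x_star = offo_adagrad(lambda x: x, x0=np.array([3.0, -2.0]))
print(np.round(x_star, 4))  # close to the minimizer at the origin
```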

## Related research outputs and projects
• ### Evaluation complexity of algorithms for nonconvex optimization

Cartis, C., Gould, N. I. M. & TOINT, P., Jul 2022, SIAM. 600 p. (MOS-SIAM Series on Optimization)

Research output: Book

• ### Parametric complexity analysis for a class of first-order Adagrad-like algorithms

Gratton, S., Jerad, S. & TOINT, P., Jan 2022, arXiv.

Research output: Working paper

• ### Complexity in nonlinear optimization

TOINT, P., Gould, N. I. M. & Cartis, C.

1/11/08 → …

Project: Research