A Stochastic Objective-Function-Free Adaptive Regularization Method with Optimal Complexity

Serge Gratton, Sadok Jerad, Philippe Toint

Research output: Working paper


Abstract

A fully stochastic second-order adaptive-regularization method for unconstrained nonconvex optimization is presented which never computes the objective-function value, yet achieves the optimal $\mathcal{O}(\epsilon^{-3/2})$ complexity bound for finding first-order critical points. The method is noise-tolerant, and the inexactness conditions required for convergence depend on the history of past steps. Applications to cases where derivative evaluation is inexact and to the minimization of finite sums by sampling are discussed. Numerical experiments on large binary classification problems illustrate the potential of the new method.
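To make the idea of an objective-function-free adaptive-regularization iteration concrete, the following is a minimal one-dimensional sketch. It is an illustration only, not the paper's algorithm: it assumes a standard cubic-regularized model $m(s) = gs + \tfrac12 hs^2 + \tfrac{\sigma}{3}|s|^3$ (solved exactly in 1-D), accepts every step without testing the objective, and grows the regularization parameter with an Adagrad-flavoured rule driven by the history of past step norms. The function names (`cubic_step_1d`, `offo_ar2_sketch`) and the specific $\sigma$ update are hypothetical choices for this sketch.

```python
import math

def cubic_step_1d(g, h, sigma):
    """Exact minimizer of the 1-D cubic model
    m(s) = g*s + 0.5*h*s**2 + (sigma/3)*|s|**3.
    Valid for any sign of h, since sqrt(h^2 + 4*sigma*|g|) >= |h|."""
    if g == 0.0:
        return 0.0
    t = (-h + math.sqrt(h * h + 4.0 * sigma * abs(g))) / (2.0 * sigma)
    return -math.copysign(t, g)  # step opposes the gradient's sign

def offo_ar2_sketch(grad, hess, x0, sigma0=1.0, iters=50):
    """Objective-function-free loop: only gradients and Hessians are
    evaluated, never f(x).  sigma grows with the accumulated squared
    step norms (an Adagrad-like rule assumed for illustration, NOT the
    paper's exact update)."""
    x, acc = x0, 0.0
    for _ in range(iters):
        g, h = grad(x), hess(x)
        sigma = sigma0 * math.sqrt(1.0 + acc)
        s = cubic_step_1d(g, h, sigma)
        acc += s * s  # history of past steps drives the regularization
        x += s        # step accepted unconditionally: no f(x) test
    return x

# Toy problem: f(x) = x**2/2 + x**4/4, unique critical point at x = 0.
x_star = offo_ar2_sketch(lambda x: x + x**3, lambda x: 1.0 + 3.0 * x**2, x0=2.0)
```

Because the step is accepted without a decrease test, noise in the objective value is irrelevant by construction; adaptivity enters solely through the history-dependent growth of $\sigma$.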
Original language: English
Publisher: arXiv
Number of pages: 32
Volume: 2407.08018
Publication status: Submitted - 10 Jul 2024

