A Stochastic Objective-Function-Free Adaptive Regularization Method with Optimal Complexity

Serge Gratton, Sadok Jerad, Philippe Toint

Research output: Working paper


Abstract

A fully stochastic second-order adaptive-regularization method for unconstrained nonconvex optimization is presented which never computes the objective-function value, yet achieves the optimal $\mathcal{O}(\epsilon^{-3/2})$ complexity bound for finding first-order critical points. The method is noise-tolerant, and the inexactness conditions required for convergence depend on the history of past steps. Applications to cases where derivative evaluation is inexact and to the minimization of finite sums by sampling are discussed. Numerical experiments on large binary classification problems illustrate the potential of the new method.
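The abstract does not spell out the paper's update rules, so the following Python sketch only illustrates the general shape of an objective-function-free AR2-type iteration: derivatives drive a cubically regularized model, every step is accepted (no function values are available to reject one), and the regularization weight grows with the history of past iterations. The names (`offo_ar2_sketch`, `cubic_model_min`) and the gradient-history rule for the weight are illustrative assumptions, not the method analysed in the paper.

```python
# Illustrative sketch only (not the paper's algorithm): an objective-function-free
# AR2-type loop that uses gradients and Hessians but never evaluates f itself.
import numpy as np

def cubic_model_min(g, H, sigma, inner_iters=100, lr=0.01):
    """Crudely minimize the regularized model
    m(s) = g.s + 0.5 s.H.s + (sigma/3)||s||^3 by gradient descent on m."""
    s = np.zeros_like(g)
    for _ in range(inner_iters):
        grad_m = g + H @ s + sigma * np.linalg.norm(s) * s
        s -= lr * grad_m
    return s

def offo_ar2_sketch(grad, hess, x0, sigma0=1.0, max_iter=200, tol=1e-6):
    """grad(x), hess(x): (possibly inexact) first- and second-derivative oracles.
    Every step is accepted, since no objective values are available; the
    regularization weight sigma is built from past gradient norms (an assumed,
    Adagrad-like rule used here purely for illustration)."""
    x, past = np.asarray(x0, dtype=float).copy(), 0.0
    for _ in range(max_iter):
        g, H = grad(x), hess(x)
        if np.linalg.norm(g) <= tol:
            break
        past += np.linalg.norm(g) ** 2
        sigma = sigma0 * np.sqrt(1.0 + past)   # nondecreasing, history-based weight
        x = x + cubic_model_min(g, H, sigma)   # step always accepted
    return x

# Toy usage on a strongly convex quadratic (for illustration only).
A = np.diag([1.0, 10.0])
x_final = offo_ar2_sketch(lambda x: A @ x, lambda x: A, np.array([3.0, -2.0]))
```

Always accepting the step and keeping the regularization weight nondecreasing are natural design choices when no function values can be compared; the precise conditions under which such a scheme retains the optimal complexity bound are the subject of the paper itself.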
Original language: English
Publisher: Arxiv
Number of pages: 32
Volume: 2407.08018
Publication status: Submitted - 10 Jul 2024
