Adaptive regularization for nonconvex optimization using inexact function values and randomly perturbed derivatives

S. Bellavia, G. Gurioli, B. Morini, Ph L. Toint

    Research output: Contribution to journal › Article › peer-review

    Abstract

    A regularization algorithm allowing random noise in derivatives and inexact function values is proposed for computing approximate local critical points of any order for smooth unconstrained optimization problems. For an objective function with Lipschitz continuous p-th derivative and an arbitrary optimality order q ≤ p, an upper bound on the number of function and derivative evaluations is established for this algorithm. This bound holds in expectation and is expressed as a power of the required accuracy tolerances, the power depending on whether q ≤ 2 or q > 2. Moreover, these bounds are sharp in the order of the accuracy tolerances. An extension to convexly constrained problems is also outlined.
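The entry above is an abstract only, so the algorithmic details live in the paper itself. As a rough illustration of the flavor of adaptive regularization with randomly perturbed derivatives (not the authors' method), the sketch below implements a minimal first-order cubic-regularization loop in Python; the function name `arc_minimize` and all parameter choices are hypothetical, assumed for this example.

```python
import numpy as np

def arc_minimize(f, grad, x0, sigma0=1.0, eta=0.1, tol=1e-6, max_iter=2000,
                 noise=0.0, seed=0):
    """Minimal first-order adaptive cubic-regularization sketch.

    At iterate x_k, the cubic model
        m(s) = f(x_k) + g_k . s + (sigma_k / 3) * ||s||^3
    is minimized along -g_k, where g_k is the (possibly randomly
    perturbed) gradient. The step is accepted when the actual decrease
    is at least a fraction eta of the model decrease, and the
    regularization weight sigma_k is adapted accordingly.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    sigma = sigma0
    for _ in range(max_iter):
        g = grad(x) + noise * rng.standard_normal(x.shape)  # inexact gradient
        gnorm = np.linalg.norm(g)
        if gnorm < tol:          # approximate first-order critical point
            break
        # Minimizing  -gnorm*t + (sigma/3)*t**3  over t > 0
        # gives the step length t = sqrt(gnorm / sigma).
        s = -np.sqrt(gnorm / sigma) * (g / gnorm)
        pred = -(g @ s) - (sigma / 3.0) * np.linalg.norm(s) ** 3
        ared = f(x) - f(x + s)
        if ared >= eta * pred:   # successful step: accept, relax sigma
            x = x + s
            sigma = max(sigma / 2.0, 1e-10)
        else:                    # unsuccessful: reject, strengthen sigma
            sigma *= 2.0
    return x
```

The paper's algorithm uses a degree-p Taylor model with a (p+1)-st order regularization term and targets optimality of arbitrary order q; the sketch keeps only the accept/reject and sigma-update mechanism that characterizes this family of methods.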

    Original language: English
    Article number: 101591
    Journal: Journal of Complexity
    Volume: 68
    Early online date: 21 Jul 2021
    DOIs
    Publication status: Published - Feb 2022

    Keywords

    • Evaluation complexity
    • Inexact functions and derivatives
    • Regularization methods
    • Stochastic analysis
