Universal regularization methods -- varying the power, the smoothness and the accuracy

Coralia Cartis, N. I. M. Gould, Ph Toint

Research output: Contribution to journal › Article


Abstract

The worst-case behaviour of a general class of regularization algorithms is considered in the case where only objective function values and associated gradient vectors are evaluated. Upper bounds are derived on the number of such evaluations that are needed for the algorithm to produce an approximate first-order critical point whose accuracy is within a user-defined threshold. The analysis covers the entire range of meaningful powers in the regularization term as well as in the Hölder exponent for the gradient. The resulting complexity bounds vary according to the regularization power and the assumed Hölder exponent, recovering known results when available.
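
As a purely illustrative aside, the sketch below (Python, with hypothetical names and parameter defaults, not the specific algorithm analysed in the paper) shows the general shape of such a first-order regularization method: each iteration minimizes a simple model of the objective augmented by a regularization term (sigma_k / r) * ||s||^r with adjustable power r, uses only objective-value and gradient evaluations, and stops as soon as the gradient norm drops below a user-defined threshold eps.

    import numpy as np

    def power_regularized_gradient(f, grad, x0, r=2.0, eps=1e-6,
                                   sigma0=1.0, eta=0.1, gamma=2.0,
                                   max_iter=10_000):
        """Illustrative sketch of a first-order method with power-r regularization.

        Each iteration minimizes the model
            m_k(s) = f(x_k) + grad f(x_k)^T s + (sigma_k / r) * ||s||^r
        exactly (its minimizer lies along -grad f(x_k)), accepts the step if it
        achieves a fraction eta of the predicted decrease, and adapts sigma_k.
        Terminates at an approximate first-order critical point, i.e. when
        ||grad f(x_k)|| <= eps.  All names and defaults here are illustrative.
        """
        x, sigma = np.asarray(x0, dtype=float), float(sigma0)
        for k in range(max_iter):
            g = grad(x)
            gnorm = np.linalg.norm(g)
            if gnorm <= eps:                             # approximate first-order criticality
                return x, k
            t = (gnorm / sigma) ** (1.0 / (r - 1.0))     # step length along -g (requires r > 1)
            s = -t * g / gnorm
            predicted = t * gnorm * (1.0 - 1.0 / r)      # model decrease f(x_k) - m_k(s)
            if f(x) - f(x + s) >= eta * predicted:       # successful step: accept, relax sigma
                x = x + s
                sigma = max(sigma / gamma, 1e-12)
            else:                                        # unsuccessful: strengthen regularization
                sigma *= gamma
        return x, max_iter

    # Tiny usage example on a convex quadratic
    f = lambda x: 0.5 * float(np.dot(x, x))
    g = lambda x: x
    x_star, iters = power_regularized_gradient(f, g, np.ones(5), r=3.0)
    print(iters, np.linalg.norm(g(x_star)))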
Original language: English
Pages: 595-615
Volume: 29
Issue: 1
Journal: SIAM Journal on Optimization
Publisher: SIAM
Status: Published - 6 Mar 2019


Keywords

    Nonlinear optimization, complexity theory, regularization methods

    Cite this

    Cartis, Coralia ; Gould, N. I. M. ; Toint, Ph. / Universal regularization methods -- varying the power, the smoothness and the accuracy. In: SIAM Journal on Optimization. 2019 ; Vol. 29, No. 1. p. 595-615.
    @article{1b6c55716b1a4707bcd43e594076e236,
    title = "Universal regularization methods -- varying the power, the smoothness and the accuracy",
    abstract = "The worst-case behaviour of a general class of regularization algorithms is considered in the case where only objective function values and associated gradient vectors are evaluated. Upper bounds are derived on the number of such evaluations that are needed for the algorithm to produce an approximate first-order critical point whose accuracy is within a user-defined threshold. The analysis covers the entire range of meaningful powers in the regularization term as well as in the H{\"o}lder exponent for the gradient. The resulting complexity bounds vary according to the regularization power and the assumed H{\"o}lder exponent, recovering known results when available.",
    keywords = "Nonlinear optimization, complexity theory, regularization methods",
    author = "Coralia Cartis and Gould, {N. I. M.} and Ph Toint",
    year = "2019",
    month = "3",
    day = "6",
    language = "English",
    volume = "29",
    number = "1",
    pages = "595--615",
    journal = "SIAM Journal on Optimization",
    issn = "1052-6234",
    publisher = "SIAM",

    }

    Universal regularization methods -- varying the power, the smoothness and the accuracy. / Cartis, Coralia; Gould, N. I. M.; Toint, Ph.

    In: SIAM Journal on Optimization, Vol. 29, No. 1, 06.03.2019, p. 595-615.


    TY - JOUR

    T1 - Universal regularization methods -- varying the power, the smoothness and the accuracy

    AU - Cartis, Coralia

    AU - Gould, N. I. M.

    AU - Toint, Ph

    PY - 2019/3/6

    Y1 - 2019/3/6

    N2 - The worst-case behaviour of a general class of regularization algorithms is considered in the case where only objective function values and associated gradient vectors are evaluated. Upper bounds are derived on the number of such evaluations that are needed for the algorithm to produce an approximate first-order critical point whose accuracy is within a user-defined threshold. The analysis covers the entire range of meaningful powers in the regularization term as well as in the Hölder exponent for the gradient. The resulting complexity bounds vary according to the regularization power and the assumed Hölder exponent, recovering known results when available.

    AB - The worst-case behaviour of a general class of regularization algorithms is considered in the case where only objective function values and associated gradient vectors are evaluated. Upper bounds are derived on the number of such evaluations that are needed for the algorithm to produce an approximate first-order critical point whose accuracy is within a user-defined threshold. The analysis covers the entire range of meaningful powers in the regularization term as well as in the Hölder exponent for the gradient. The resulting complexity bounds vary according to the regularization power and the assumed Hölder exponent, recovering known results when available.

    KW - Nonlinear optimization

    KW - complexity theory

    KW - regularization methods

    M3 - Article

    VL - 29

    SP - 595

    EP - 615

    JO - SIAM Journal on Optimization

    JF - SIAM Journal on Optimization

    SN - 1052-6234

    PB - SIAM

    ER -