Minimizing convex quadratics with variable precision Krylov methods

Activity: Talk or presentation: Invited talk


Iterative algorithms for the solution of convex quadratic optimization problems that exploit inaccurate matrix-vector products are investigated. Theoretical bounds on the performance of a Conjugate Gradient method are derived, the quantities occurring in these bounds are estimated, and a new practical algorithm is obtained. Numerical experiments suggest that the new method has significant potential, including in the increasingly important context of multi-precision computation.
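To illustrate the setting the abstract describes, the sketch below runs standard Conjugate Gradients for the convex quadratic q(x) = ½ xᵀAx − bᵀx, but routes every matrix-vector product through a user-supplied callback so that the product can be computed inexactly (e.g. in reduced precision). This is a minimal illustration under stated assumptions, not the algorithm or bounds from the talk; the name `inexact_cg` and the single-precision truncation in the usage example are illustrative choices.

```python
import numpy as np

def inexact_cg(matvec, b, tol=1e-8, max_iter=200):
    """Conjugate Gradients where matvec(p) may return an inexact A @ p.

    Minimizes q(x) = 0.5 x^T A x - b^T x for symmetric positive definite A;
    the matvec callback stands in for a variable-precision product.
    """
    x = np.zeros_like(b)
    r = b.copy()                 # residual b - A x (x = 0 initially)
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = matvec(p)           # possibly inexact matrix-vector product
        alpha = rs / (p @ Ap)    # exact line search along p
        x += alpha * p
        r -= alpha * Ap          # recurred (not recomputed) residual
        rs_new = r @ r
        if np.sqrt(rs_new) < tol * np.linalg.norm(b):
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Usage: a well-conditioned SPD system solved with products truncated to
# single precision, mimicking a lower-precision matrix-vector kernel.
rng = np.random.default_rng(0)
M = rng.standard_normal((50, 50))
A = M @ M.T + 50 * np.eye(50)    # SPD by construction
b = rng.standard_normal(50)
x = inexact_cg(lambda p: (A.astype(np.float32)
                          @ p.astype(np.float32)).astype(np.float64), b)
```

With single-precision products the recurred residual stagnates near the level of the product errors, so the attainable accuracy is limited by the precision of the matvec; the bounds discussed in the talk quantify how much inexactness can be tolerated at each step.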
Period: 28 Nov 2019
Held at: University of Oxford, United Kingdom
Degree of recognition: Regional


  • linear algebra
  • optimization
  • inexact computations