From: Michael Orlitzky
Date: Tue, 19 Mar 2013 13:27:56 +0000 (-0400)
Subject: Add performance notes to steepest descent stuff.
X-Git-Url: https://gitweb.michael.orlitzky.com/?a=commitdiff_plain;h=13374cc93fd93143fb57601c8251bd07a8a53031;p=octave.git

Add performance notes to steepest descent stuff.
---

diff --git a/optimization/steepest_descent.m b/optimization/steepest_descent.m
index 32aad71..c5bf0bc 100644
--- a/optimization/steepest_descent.m
+++ b/optimization/steepest_descent.m
@@ -35,6 +35,12 @@ function [x, k] = steepest_descent(g, x0, step_size, tolerance, max_iterations)
   %
   %   * ``k`` - the value of k when we stop; i.e. the number of
   %     iterations.
+  %
+  % NOTES:
+  %
+  % A specialized implementation for solving e.g. Qx=b can avoid one
+  % matrix-vector multiplication below.
+  %

   % The initial gradient at x_{0} is not supplied, so we compute it
   % here and begin the loop at k=1.
diff --git a/optimization/step_size_positive_definite.m b/optimization/step_size_positive_definite.m
index 3e991a3..e32229c 100644
--- a/optimization/step_size_positive_definite.m
+++ b/optimization/step_size_positive_definite.m
@@ -21,6 +21,13 @@ function alpha = step_size_positive_definite(Q, b, x)
   %   - ``alpha`` -- the optimal step size in the negative gradient
   %     direction.
+  %
+  % NOTES:
+  %
+  % It is possible to save one matrix-vector multiplication here, by
+  % taking d_k as a parameter. In fact, if the caller is specialized to
+  % our problem (1), we can avoid both matrix-vector multiplications here
+  % at the expense of some added roundoff error.
+  %

   % The gradient of f(x) is Qx - b, and d_k is the negative gradient
   % direction.
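
The second NOTES block suggests taking d_k as a parameter. A minimal sketch of
what that variant might look like, assuming the generic routine computes
d_k = -(Q*x - b) itself before forming d_k'*Q*d_k; the function name
step_size_positive_definite_dk is hypothetical and not part of the commit:

function alpha = step_size_positive_definite_dk(Q, d)
  % Hypothetical variant: the caller already has d = -(Q*x - b) from its
  % own gradient computation, so only one matrix-vector product (Q*d)
  % remains here, instead of two (Q*x and Q*d).
  alpha = (d' * d) / (d' * (Q * d));
end

Note that once d is passed in, b and x are not needed at all.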
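The first NOTES block goes further and hints at a solver specialized to Qx=b.
A sketch of that specialization, assuming "problem (1)" is the quadratic
minimization f(x) = x'*Q*x/2 - b'*x whose gradient is Qx - b; the function
name and signature below are hypothetical. Updating the residual recursively
lets one matrix-vector product per iteration serve both the step size and the
next gradient:

function [x, k] = steepest_descent_qb(Q, b, x0, tolerance, max_iterations)
  x = x0;
  r = b - Q*x0;                    % initial residual = negative gradient;
                                   % one matvec, as at k=0 generically
  for k = 1 : max_iterations
    if (norm(r) < tolerance)
      return;
    end
    Qr = Q*r;                      % the only matvec in the loop
    alpha = (r' * r) / (r' * Qr);  % optimal step size, reusing Qr
    x = x + alpha*r;
    r = r - alpha*Qr;              % recursive residual update instead of
                                   % recomputing b - Q*x; this is where the
                                   % extra roundoff error creeps in
  end
end

In exact arithmetic r stays equal to b - Q*x, but in floating point the two
drift apart slowly, hence the roundoff caveat in the commit.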