function [x, k] = steepest_descent(g, x0, step_size, ...
                                   tolerance, max_iterations)
% An implementation of the steepest-descent algorithm, with the
% search direction d_{k} = -\nabla f(x_{k}).
% We terminate when either,
%
% a) The infinity-norm of the gradient at x_{k} falls to or below
%    ``tolerance``, or
%
% b) We have performed ``max_iterations`` iterations.
% INPUT:
%
% * ``g`` - the gradient of the function ``f`` we're minimizing.
%
% * ``x0`` - the starting point for the search.
% * ``step_size`` - a function of x which returns the optimal step
%   size alpha_{k}. This will generally require more information
%   than just x; for example it might need the function ``f`` or the
%   gradient ``g``. However, our caller has this information, so it
%   should be incorporated into the step-size algorithm that we
%   are given.
% * ``tolerance`` - the stopping tolerance. We stop when the norm
%   of the gradient falls below this value.
% * ``max_iterations`` - a safety net; we return x_{k}
%   unconditionally if we exceed this number of iterations.
% OUTPUT:
%
% * ``x`` - the solution at the final value of x_{k}.
%
% * ``k`` - the value of k when we stop; i.e. the number of
%   iterations performed.
% A specialized implementation for solving e.g. Qx=b can avoid one
% matrix-vector multiplication below.
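%
% For example (a sketch only; the quadratic ``f``, the matrix ``Q``,
% and the vector ``b`` below are illustrative assumptions, not part
% of this function's interface), an exact line search for the
% quadratic f(x) = 0.5*x'*Q*x - b'*x has the closed form
% alpha_{k} = (g'*g) / (g'*Q*g):
%
%   Q = [2, 0; 0, 4];
%   b = [1; 1];
%   g = @(x) (Q*x - b);
%   step = @(x) (g(x)' * g(x)) / (g(x)' * Q * g(x));
%   [x, k] = steepest_descent(g, [0; 0], step, 1e-10, 1000);
%
% Here the true minimizer is Q \ b = [0.5; 0.25], so ``x`` should
% approximate that vector.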
% The initial gradient at x_{0} is not supplied, so we compute it
% here and begin the loop at k=0.
k = 0;
xk = x0;
gk = g(xk);
dk = -gk;
while (k <= max_iterations && norm(gk, inf) > tolerance)
  % Loop until either of our stopping conditions is met. This
  % also works when x0 is already a satisfactory point.
  alpha_k = step_size(xk);
  x_next = xk + (alpha_k * dk);
  % We potentially just performed one more iteration than necessary
  % in order to simplify the loop. Note that due to the structure of
  % our loop, we will have k > max_iterations when we fail to
  % converge.
  xk = x_next;
  gk = g(xk);
  dk = -gk;
  k = k + 1;
end

x = xk;
end