-function [x, k] = steepest_descent(g, x0, step_size, tolerance, max_iterations)
+function [x, k] = steepest_descent(g, ...
+ x0, ...
+ step_size, ...
+ tolerance, ...
+ max_iterations)
%
% An implementation of the steepest-descent algorithm, with the
% search direction d_{k} = -\nabla f(x_{k}).
%
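+ % In other words, each iteration takes a step of the form
+ % x_{k+1} = x_{k} + \alpha_{k} d_{k}, where the step size
+ % \alpha_{k} is supplied by the ``step_size`` argument.
+ %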
% We should terminate when either:
%
- % a) The 2-norm of the gradient at x_{k} is greater than
- % ``tolerance``.
+ % a) The infinity-norm of the gradient at x_{k} is at most
+ % ``tolerance``.
%
% b) We have performed ``max_iterations`` iterations.
%
% receive.
%
% * ``tolerance`` - the stopping tolerance. We stop when the norm
- % of the gradient falls below this value.
+ % of the gradient falls below this value.
%
% * ``max_iterations`` - a safety net; we return x_{k}
- % unconditionally if we exceed this number of iterations.
+ % unconditionally if we exceed this number of iterations.
%
% OUTPUT:
%
% * ``x`` - the solution at the final value of x_{k}.
%
% * ``k`` - the value of k when we stop; i.e. the number of
- % iterations.
+ % iterations.
%
% NOTES:
%
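+ % As an illustration only (nothing here is part of the routine
+ % itself): a hypothetical quadratic objective f(x) = x'*A*x/2 - b'*x
+ % has gradient A*x - b, so one possible call with a made-up matrix,
+ % vector, fixed step size, and tolerance would be
+ %
+ %   A = [2, 0; 0, 4];
+ %   b = [1; 1];
+ %   g = @(x) A*x - b;
+ %   step = @(x) 0.1;
+ %   [x, k] = steepest_descent(g, [0; 0], step, 1e-8, 1000);
+ %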
xk = x0;
gk = g(xk);
- while (k <= max_iterations)
- % Loop until either of our stopping conditions are met. If the
- % loop finishes, we have implicitly met the second stopping
- % condition (number of iterations).
-
- if (norm(gk) < tolerance)
- % This catches the k=0 case, too.
- x = xk;
- return;
- end
+ k = 0;
+ while (k <= max_iterations && norm(gk, 'inf') > tolerance)
+ % Loop until either of our stopping conditions is met. The norm
+ % check also handles the case where x0 is already a satisfactory
+ % point.
dk = -gk;
alpha_k = step_size(xk);