function [x, k] = steepest_descent(g, x0, step_size, tolerance, max_iterations)
  %
  % An implementation of the steepest-descent algorithm, with the
  % search direction d_{k} = -\nabla f(x_{k}).
  %
  % We terminate when either:
  %
  %   a) The 2-norm of the gradient at x_{k} is less than
  %      ``tolerance``.
  %
  %   b) We have performed ``max_iterations`` iterations.
  %
  % INPUT:
  %
  %   * ``g`` - the gradient of the function ``f`` we're minimizing.
  %
  %   * ``x0`` - the starting point for the search.
  %
  %   * ``step_size`` - a function of x which returns the optimal step
  %     size alpha_{k}. This will generally require more information
  %     than just x; for example, it might need the function ``f`` or
  %     the gradient ``g``. However, our caller has this information,
  %     so it should be incorporated into the step size algorithm that
  %     we receive. (One possible choice is sketched after this list.)
  %
  %   * ``tolerance`` - the stopping tolerance. We stop when the norm
  %     of the gradient falls below this value.
  %
  %   * ``max_iterations`` - a safety net; we return x_{k}
  %     unconditionally if we exceed this number of iterations.
  %
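  % As one possible ``step_size`` (a sketch, not part of this file's
  % interface): when minimizing the quadratic f(x) = (1/2)*x'*Q*x - b'*x
  % with ``Q`` symmetric positive-definite, the exact line search along
  % d = -g(x) has the closed form,
  %
  %   step_size = @(x) (g(x)' * g(x)) / (g(x)' * Q * g(x));
  %
  % where ``Q`` is assumed to be in scope at the call site.
  %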
  % OUTPUT:
  %
  %   * ``x`` - the solution at the final value of x_{k}.
  %
  %   * ``k`` - the value of k when we stop; i.e., the number of
  %     iterations performed.
  %
  % NOTES:
  %
  % A specialized implementation for solving e.g. Qx=b can avoid one
  % matrix-vector multiplication below.
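  %
  % Concretely (a sketch; ``Q`` and ``b`` are assumptions, not part of
  % this file): for f(x) = (1/2)*x'*Q*x - b'*x we have g(x) = Q*x - b,
  % so the next gradient satisfies
  %
  %   g_{k+1} = g_{k} + alpha_{k}*(Q*d_{k}),
  %
  % and the product Q*d_{k} is the same one required by the exact step
  % size above; computing it once per iteration avoids re-evaluating
  % g(x_{k+1}) from scratch.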
43 %

  % The initial gradient at x_{0} is not supplied, so we compute it
  % here and begin the loop at k=0.
  k = 0;
  xk = x0;
  gk = g(xk);

  while (k <= max_iterations)
    % Loop until either of our stopping conditions is met. If the
    % loop finishes, we have implicitly met the second stopping
    % condition (number of iterations).

    if (norm(gk) < tolerance)
      % This catches the k=0 case, too.
      x = xk;
      return;
    end

    dk = -gk;
    alpha_k = step_size(xk);
    xk = xk + (alpha_k * dk);
    gk = g(xk);

    % We potentially just performed one more iteration than necessary
    % in order to simplify the loop. Note that, due to the structure of
    % our loop, we will have k > max_iterations when we fail to
    % converge.
    k = k + 1;
  end

  x = xk;
end