function alpha = step_size_positive_definite(Q, b, x)
  % Let,
  %
  %   f(x) = (1/2)<Qx,x> - <b,x> + a                                (1)
  %
  % where Q is symmetric and positive definite.
  %
  % If we seek to minimize f, that is, to solve Qx = b, then we can
  % do so using the method of steepest descent. This function
  % computes the optimal step size alpha for steepest descent, in
  % the negative-gradient direction, at x.
  %
  % INPUT:
  %
  %   - ``Q`` -- the positive-definite matrix in the definition of f(x).
  %
  %   - ``b`` -- the known vector in the definition of f(x).
  %
  %   - ``x`` -- the point at which the step is to be taken.
  %
  % OUTPUT:
  %
  %   - ``alpha`` -- the optimal step size in the negative-gradient
  %     direction.
  %
  % NOTES:
  %
  % It is possible to save one matrix-vector multiplication here by
  % taking d_k as a parameter. In fact, if the caller is specialized
  % to our problem (1), both matrix-vector multiplications here can
  % be avoided, at the expense of some added roundoff error; see the
  % sketch below.
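  %
  % A sketch of such a specialized caller (hypothetical: the loop
  % and variable names below are illustrative, not part of this
  % interface). The caller updates the residual d_k = b - Q*x_k
  % incrementally and reuses the product Q*d_k:
  %
  %   d_k = b - Q*x_k;              % once, before the loop
  %   while (norm(d_k) > tolerance)
  %     Qd_k  = Q*d_k;              % the only product per iteration
  %     alpha = (d_k' * d_k) / (d_k' * Qd_k);
  %     x_k   = x_k + alpha*d_k;
  %     d_k   = d_k - alpha*Qd_k;   % reuse Qd_k; no second product
  %   end
  %
  % Reusing Qd_k this way avoids both products in this function, but
  % lets roundoff accumulate in d_k over many iterations.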

  % The gradient of f(x) is Qx - b, and d_k is the negative-gradient
  % direction.
  d_k = b - Q*x;
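  % Setting the derivative of phi(alpha) = f(x + alpha*d_k) to zero
  % gives the exact line-search minimizer along d_k:
  %
  %   phi'(alpha) = -<d_k,d_k> + alpha*<d_k,Q*d_k> = 0.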
  alpha = (d_k' * d_k) / (d_k' * Q * d_k);
end
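
% A minimal usage sketch (the matrix and vectors below are made-up
% demo values, not taken from the text above): take one exact
% steepest descent step on a small SPD system. With Q equal to the
% identity, the optimal step is alpha = 1, which the test checks.

%!demo
%! Q = [5, 1; 1, 3];                  % symmetric positive definite
%! b = [1; 2];
%! x = [0; 0];                        % initial guess
%! alpha = step_size_positive_definite(Q, b, x);
%! x = x + alpha*(b - Q*x)            % one steepest descent step

%!test
%! b = [1; 2];
%! alpha = step_size_positive_definite(eye(2), b, [0; 0]);
%! assert(alpha, 1, eps);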