X-Git-Url: http://gitweb.michael.orlitzky.com/?p=octave.git;a=blobdiff_plain;f=optimization%2Fstep_length_cgm.m;h=87062f3e2d0bdbb10804851c0a814a68e8cb6894;hp=60f7cd491431b94cf183cb7caa375ebff19e929f;hb=2607d0ce1289b34ecce0a82b11e9935e239fe708;hpb=69d4adf22828f26858e89de560078e1b5ed1714a

diff --git a/optimization/step_length_cgm.m b/optimization/step_length_cgm.m
index 60f7cd4..87062f3 100644
--- a/optimization/step_length_cgm.m
+++ b/optimization/step_length_cgm.m
@@ -1,40 +1,40 @@
 function alpha = step_length_cgm(r, A, p)
-  ##
-  ## Compute the step length for the conjugate gradient method (CGM).
-  ## The CGM attempts to solve,
-  ##
-  ##   Ax = b
-  ##
-  ## or equivalently,
-  ##
-  ##   min[phi(x) = (1/2)<x,Ax> - <b,x>]
-  ##
-  ## where ``A`` is positive-definite. In the process, we need to
-  ## compute a number of search directions ``p`` and optimal step
-  ## lengths ``alpha``; i.e.,
-  ##
-  ##   x_{k+1} = x_{k} + alpha_{k}*p_{k}
-  ##
-  ## This function computes alpha_{k} in the formula above.
-  ##
-  ## INPUT:
-  ##
-  ##   - ``r`` -- The residual, Ax - b, at the current step.
-  ##
-  ##   - ``A`` -- The matrix ``A`` in the formulation above.
-  ##
-  ##   - ``p`` -- The current search direction.
-  ##
-  ## OUTPUT:
-  ##
-  ##   - ``alpha`` -- The minimizer of ``phi(x + alpha*p)`` along ``p``.
-  ##
-  ## NOTES:
-  ##
-  ## All vectors are assumed to be *column* vectors.
-  ##
+  %
+  % Compute the step length for the conjugate gradient method (CGM).
+  % The CGM attempts to solve,
+  %
+  %   Ax = b
+  %
+  % or equivalently,
+  %
+  %   min[phi(x) = (1/2)<x,Ax> - <b,x>]
+  %
+  % where ``A`` is positive-definite. In the process, we need to
+  % compute a number of search directions ``p`` and optimal step
+  % lengths ``alpha``; i.e.,
+  %
+  %   x_{k+1} = x_{k} + alpha_{k}*p_{k}
+  %
+  % This function computes alpha_{k} in the formula above.
+  %
+  % INPUT:
+  %
+  %   - ``r`` -- The residual, Ax - b, at the current step.
+  %
+  %   - ``A`` -- The matrix ``A`` in the formulation above.
+  %
+  %   - ``p`` -- The current search direction.
+  %
+  % OUTPUT:
+  %
+  %   - ``alpha`` -- The minimizer of ``phi(x + alpha*p)`` along ``p``.
+  %
+  % NOTES:
+  %
+  % All vectors are assumed to be *column* vectors.
+  %
 
-  ## A simple calculation should convince you that the gradient of
-  ## phi(x) above is Ax - b == r.
+  % A simple calculation should convince you that the gradient of
+  % phi(x) above is Ax - b == r.
   alpha = step_length_positive_definite(r, A, p);
 end
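
For context on what the unchanged last statement computes: with r = Ax - b and
phi as defined in the comments, d/dalpha phi(x + alpha*p) = p'*r + alpha*(p'*A*p),
so the exact minimizer along p is alpha = -(p'*r)/(p'*A*p). The body of
step_length_positive_definite is not part of this diff, so the following is only
a minimal sketch of that closed form under the assumption that the helper returns
the exact minimizer; the name step_length_cgm_sketch and the example data are
hypothetical, for illustration only.

    % step_length_cgm_sketch.m -- illustrative sketch, not from the repository.
    % Exact minimizer of phi(x + alpha*p) with respect to alpha, assuming A is
    % symmetric positive-definite and r = A*x - b as in the comments above.
    function alpha = step_length_cgm_sketch(r, A, p)
      alpha = -(p' * r) / (p' * A * p);
    end

A quick check at the Octave prompt, using a small symmetric positive-definite
system with arbitrarily chosen values:

    A = [4, 1; 1, 3];  b = [1; 2];  x = [0; 0];
    r = A*x - b;                              % residual, Ax - b
    p = -r;                                   % first search direction (steepest descent)
    alpha = step_length_cgm_sketch(r, A, p)   % gives 0.25 for this data
    x = x + alpha*p;                          % x_{k+1} = x_{k} + alpha_{k}*p_{k}
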