From: Michael Orlitzky
Date: Fri, 22 Mar 2013 01:50:23 +0000 (-0400)
Subject: Remove the step_size_positive_definite() function; it looks like it was added by...
X-Git-Url: http://gitweb.michael.orlitzky.com/?p=octave.git;a=commitdiff_plain;h=18d145078e12710b195793fffb93afb9efe31a38;ds=sidebyside

Remove the step_size_positive_definite() function; it looks like it was
added by accident and is superseded by step_length_positive_definite().
---

diff --git a/optimization/step_size_positive_definite.m b/optimization/step_size_positive_definite.m
deleted file mode 100644
index e32229c..0000000
--- a/optimization/step_size_positive_definite.m
+++ /dev/null
@@ -1,36 +0,0 @@
-function alpha = step_size_positive_definite(Q, b, x)
-  % Let,
-  %
-  %   f(x) = (1/2)<Qx,x> - <b,x> + a                                 (1)
-  %
-  % where Q is symmetric and positive definite.
-  %
-  % If we seek to minimize f; that is, to solve Qx = b, then we can do
-  % so using the method of steepest-descent. This function computes
-  % the optimal step size alpha for the steepest descent method, in
-  % the negative-gradient direction, at x.
-  %
-  % INPUT:
-  %
-  %   - ``Q`` -- the positive-definite matrix in the definition of f(x).
-  %
-  %   - ``b`` -- the known vector in the definition of f(x).
-  %
-  % OUTPUT:
-  %
-  %   - ``alpha`` -- the optimal step size in the negative gradient
-  %     direction.
-  %
-  % NOTES:
-  %
-  % It is possible to save one matrix-vector multiplication here, by
-  % taking d_k as a parameter. In fact, if the caller is specialized to
-  % our problem (1), we can avoid both matrix-vector multiplications here
-  % at the expense of some added roundoff error.
-  %
-
-  % The gradient of f(x) is Qx - b, and d_k is the negative gradient
-  % direction.
-  d_k = b - Q*x;
-  alpha = (d_k' * d_k) / (d_k' * Q * d_k);
-end
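
For context, the removed function implemented the exact line-search step for
steepest descent on the quadratic (1): with d = b - Q*x, the alpha minimizing
f(x + alpha*d) is alpha = (d'*d)/(d'*Q*d). Below is a minimal Octave sketch of
that iteration; the matrix Q, vector b, starting point, iteration cap, and
tolerance are illustrative assumptions, not values from this repository, and
the commit points to step_length_positive_definite() as the replacement to use
in the actual codebase.

  % Steepest descent on f(x) = (1/2)<Qx,x> - <b,x>, using the step size
  % alpha = (d'*d)/(d'*Q*d) that the removed function computed. All of
  % the concrete values below are illustrative, not from the repository.
  Q = [4, 1; 1, 3];                    % symmetric positive-definite
  b = [1; 2];
  x = [0; 0];                          % arbitrary starting point
  for k = 1:100
    d = b - Q*x;                       % negative gradient at x
    if (norm(d) < 1e-10)
      break;                           % gradient is (nearly) zero; stop
    end
    alpha = (d' * d) / (d' * Q * d);   % exact line search along d
    x = x + alpha*d;
  end
  disp(x);                             % approximates the solution of Qx = b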