function [x, k] = preconditioned_conjugate_gradient_method(Q, ...
                                                           M, ...
                                                           b, ...
                                                           x0, ...
                                                           tolerance, ...
                                                           max_iterations)
  %
  % Solve,
  %
  %   Qx = b,
  %
  % or equivalently,
  %
  %   min [phi(x) = (1/2)*<Qx,x> - <b,x>]
  %
  % using the preconditioned conjugate gradient method (14.56 in
  % Guler). (The two are equivalent because the gradient of phi is
  % Qx - b, which vanishes exactly when Qx = b.) If ``M`` is the
  % identity matrix, we use the slightly faster implementation in
  % conjugate_gradient_method.m.
  %
  % INPUT:
  %
  % - ``Q`` -- The coefficient matrix of the system to solve. Must
  %   be positive definite.
  %
  % - ``M`` -- The preconditioning matrix. If the actual matrix used
  %   to precondition ``Q`` is called ``C``, i.e. ``C^(-1) * Q *
  %   C^(-T) == \bar{Q}``, then ``M == C * C^T``. However, the
  %   matrix ``C`` itself is never needed. This is explained in
  %   Guler, section 14.9.
  %
  % - ``b`` -- The right-hand side of the system to solve.
  %
  % - ``x0`` -- The starting point for the search.
  %
  % - ``tolerance`` -- How close ``Qx`` has to be to ``b`` (in
  %   magnitude) before we stop.
  %
  % - ``max_iterations`` -- The maximum number of iterations to
  %   perform.
  %
  % OUTPUT:
  %
  % - ``x`` -- The computed solution to Qx = b.
  %
  % - ``k`` -- The ending value of k; that is, the number of
  %   iterations that were performed.
  %
  % NOTES:
  %
  % All vectors are assumed to be *column* vectors.
  %
  % The cited algorithm contains a typo; in "The Preconditioned
  % Conjugate-Gradient Method", we are supposed to define
  % d_{0} = -z_{0}, not -r_{0} as written.
  %
  % The rather verbose name of this function was chosen to avoid
  % conflicts with other implementations.
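  %
  % EXAMPLE:
  %
  % The following sketch is illustrative rather than canonical: the
  % matrix, right-hand side, and tolerance are made up, and we use
  % the Jacobi preconditioner M = diag(diag(Q)) as a simple valid
  % choice (it factors as M == C*C^T with C = sqrt(M)).
  %
  %   Q = [4, 1; 1, 3];
  %   b = [1; 2];
  %   M = diag(diag(Q));
  %   x0 = zeros(2, 1);
  %   [x, k] = preconditioned_conjugate_gradient_method(Q, M, b, x0, 1e-11, 100);
  %   norm(Q*x - b) % expected to be below the tolerance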
  %
  % REFERENCES:
  %
  %   1. Guler, Osman. Foundations of Optimization. New York, Springer,
  %      2010.
  %
  if (isequal(M, eye(size(M))))
    % M does no preconditioning, so fall back to the plain
    % implementation (assumed here to take the same arguments,
    % minus M).
    [x, k] = conjugate_gradient_method(Q, b, x0, tolerance, max_iterations);
    return;
  end

  % Set k=0 first, that way the references to xk,rk,zk,dk which
  % immediately follow correspond (semantically) to x0,r0,z0,d0.
  k = 0;
  xk = x0;
  rk = (Q * xk) - b;
  zk = M \ rk;
  dk = -zk;
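
  % Each loop pass performs one exact line search along dk; by
  % construction the directions d0, d1, ... are Q-conjugate, which
  % is what gives the method its finite-termination property.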
  for k = [ 0 : max_iterations ]
    if (norm(rk) < tolerance)
      % Check our stopping condition. This should catch the k=0 case.
      x = xk;
      return;
    end

    % Used twice, avoid recomputation.
    rkzk = rk' * zk;

    % The term alpha_k*dk appears twice, but so does Q*dk. We can
    % only precompute one of the two, so we pick the more expensive
    % operation, the matrix-vector product.
    Qdk = Q * dk;

    % After substituting the two previously-created variables, the
    % following algorithm occurs verbatim in the reference.
    alpha_k = rkzk/(dk' * Qdk);
    x_next = xk + (alpha_k * dk);
    r_next = rk + (alpha_k * Qdk);
    z_next = M \ r_next;
    beta_next = (r_next' * z_next)/rkzk;
    d_next = -z_next + beta_next*dk;

    % Shift the "next" values into the "current" variables for the
    % next iteration.
    xk = x_next;
    rk = r_next;
    zk = z_next;
    dk = d_next;
  end

  % The algorithm didn't converge, but we still want to return the
  % terminal value of xk.
  x = xk;
end
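
%!test
%! % A minimal smoke test (not from the reference): the system below
%! % is made up, and we compare against Octave's backslash operator.
%! Q = [5, 1, 2; 1, 6, 1; 2, 1, 4];
%! b = [1; 2; 3];
%! M = diag(diag(Q));
%! x0 = zeros(3, 1);
%! [x, k] = preconditioned_conjugate_gradient_method(Q, M, b, x0, 1e-10, 100);
%! assert(norm(Q*x - b) < 1e-10);
%! assert(norm(x - (Q \ b)) < 1e-8);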