function [x, k] = simple_preconditioned_cgm(Q, M, b, x0, tolerance, max_iterations)
% min [phi(x) = (1/2)*<Qx,x> - <b,x>]
%
% (equivalently, solve Qx == b) using the preconditioned conjugate
% gradient method (14.54 in Guler [1]).
% INPUT:
%
% - ``Q`` -- The coefficient matrix of the system to solve. Must
%   be symmetric positive-definite.
% - ``M`` -- The preconditioning matrix. If the actual matrix used
%   to precondition ``Q`` is called ``C``, i.e. ``C^(-1) * Q *
%   C^(-T) == Q_bar``, then M=CC^T. Must be symmetric
%   positive-definite. See for example Golub and Van Loan, and the
%   EXAMPLE below for one concrete choice.
% - ``b`` -- The right-hand side of the system to solve.
%
% - ``x0`` -- The starting point for the search.
%
% - ``tolerance`` -- How close ``Qx`` has to be to ``b`` (in
%   magnitude) before we stop.
%
% - ``max_iterations`` -- The maximum number of iterations to
%   perform.
% OUTPUT:
%
% - ``x`` -- The solution to Qx == b.
%
% - ``k`` -- The ending value of k; that is, the number of
%   iterations that were performed.
% NOTES:
%
% All vectors are assumed to be *column* vectors.
% REFERENCES:
%
% 1. Guler, Osman. Foundations of Optimization. New York, Springer,
%    2010.
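%
% EXAMPLE:
%
% A minimal sketch of one way to call this function, assuming a
% Jacobi (diagonal) preconditioner; ``M = diag(diag(Q))`` is just an
% illustrative choice here, and any symmetric positive-definite
% ``M`` would do:
%
%   Q = [4, 1; 1, 3];
%   b = [1; 2];
%   x0 = zeros(2, 1);
%   M = diag(diag(Q));
%   [x, k] = simple_preconditioned_cgm(Q, M, b, x0, 1e-10, 100);
%   % x should now approximate Q \ b.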
% This isn't great in practice, since the CGM is usually used on
% huge sparse systems: forming C_inv and the dense Q_bar explicitly
% throws that sparsity away. A serious implementation would instead
% solve M*z == r inside each iteration.
% Since M = C*C^T is symmetric positive-definite, C can be recovered
% from a Cholesky factorization: chol(M) returns the upper-triangular
% R with R'*R == M, so C = R'.
Ct = chol(M);
C_inv = inv(Ct');
Ct_inv = inv(Ct);

Q_bar = C_inv * Q * Ct_inv;
b_bar = C_inv * b;
% But it sure is easy.
% Start from the transformed initial point: x_bar == Ct*x.
[x_bar, k] = vanilla_cgm(Q_bar, b_bar, Ct*x0, tolerance, max_iterations);
% The solution to Q_bar*x_bar == b_bar is x_bar = Ct*x, so we change
% variables back before returning.
x = Ct_inv * x_bar;
end
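
% ``vanilla_cgm`` is called above but not shown in this file. What
% follows is only an illustrative sketch of the standard
% (unpreconditioned) conjugate gradient method it is presumed to
% implement; the real helper may differ. It iterates along
% Q-conjugate directions until norm(Q*x - b) falls below
% ``tolerance`` or ``max_iterations`` is reached.
function [x, k] = vanilla_cgm(Q, b, x0, tolerance, max_iterations)
x = x0;
r = Q*x - b; % the residual, which is also the gradient of phi at x
d = -r;      % first search direction: steepest descent
k = 0;

while (norm(r) > tolerance && k < max_iterations)
  alpha = (r' * r) / (d' * Q * d); % exact line search along d
  x = x + alpha*d;
  r_new = r + alpha*(Q*d);         % cheap residual update
  beta = (r_new' * r_new) / (r' * r);
  d = -r_new + beta*d;             % next Q-conjugate direction
  r = r_new;
  k = k + 1;
end
end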