Abstract:
Solving systems of linear equations of the form Ax=b has always been a challenge for scientists because the input matrix A is very large and sparse. These systems arise mainly from the discretization of Partial Differential Equations (PDEs), which are essential in most scientific fields; this is why they are usually solved using Krylov subspace methods such as Conjugate Gradient (CG), Generalized Minimal Residual (GMRES), Bi-Conjugate Gradient (Bi-CG), and Bi-Conjugate Gradient Stabilized (Bi-CGStab). Even though these methods are efficient, they are dominated by BLAS 1 and BLAS 2 operations that are communication-bound when parallelized. To reduce communication, a new approach was introduced that enlarges the Krylov subspace by a maximum of t vectors per iteration, based on the domain decomposition of the graph of A. Since the enlarged Krylov subspace is a superset of the Krylov subspace, the solution of the system Ax=b can be sought in it. Several variants of enlarged CG were introduced along with their s-step versions, and it is shown that an approximation to x is obtained in fewer iterations than with classical CG. However, increasing t also increases the memory requirements and the possibility that some of the basis vectors become linearly dependent. Thus, t has to be relatively small, yet large enough that the number of iterations is still reduced.
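For reference, the subspaces involved can be sketched as follows, assuming the standard definitions used in the enlarged Krylov subspace literature (the precise construction employed in this work is given in the body of the thesis). Given an initial guess x_0 with residual r_0 = b - A x_0, the classical Krylov subspace of dimension k is
\[ \mathcal{K}_k(A, r_0) = \operatorname{span}\{r_0, A r_0, \ldots, A^{k-1} r_0\}. \]
If the graph of A is partitioned into t subdomains, the residual can be split accordingly into t vectors T(r_0) = \{r_0^{(1)}, \ldots, r_0^{(t)}\}, where r_0^{(i)} coincides with r_0 on the i-th subdomain and is zero elsewhere. The enlarged Krylov subspace is then
\[ \mathcal{K}_{t,k}(A, r_0) = \operatorname{span}\{T(r_0), A\,T(r_0), \ldots, A^{k-1} T(r_0)\}, \]
and since r_0 = \sum_{i=1}^{t} r_0^{(i)}, it contains \mathcal{K}_k(A, r_0) as a subset, which is why searching for the solution in it can only improve the approximation per iteration.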
In this thesis, we mainly study the possibility of flexibly varying the number of vectors added per iteration to the enlarged Krylov subspace, and its effect on the convergence of the enlarged CG methods MSDO-CG and Modified MSDO-CG.