X-Git-Url: https://bilbo.iut-bm.univ-fcomte.fr/and/gitweb/book_gpu.git/blobdiff_plain/5d35abe2b7e45bef96c118ab97504d14cf0620f3..631c915c07ad1bc1816d27185eb17b38092aeded:/BookGPU/Chapters/chapter12/ch12.tex?ds=sidebyside

diff --git a/BookGPU/Chapters/chapter12/ch12.tex b/BookGPU/Chapters/chapter12/ch12.tex
index 63eae8c..3843597 100755
--- a/BookGPU/Chapters/chapter12/ch12.tex
+++ b/BookGPU/Chapters/chapter12/ch12.tex
@@ -1,15 +1,15 @@
-1%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
+%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
 %%                          %%
 %%       CHAPTER 12         %%
 %%                          %%
 %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
 
 %\chapterauthor{}{}
-\chapterauthor{Lilia Ziane Khodja}{Femto-ST Institute, University of Franche-Comte, France}
-\chapterauthor{Raphaël Couturier}{Femto-ST Institute, University of Franche-Comte, France}
-\chapterauthor{Jacques Bahi}{Femto-ST Institute, University of Franche-Comte, France}
+\chapterauthor{Lilia Ziane Khodja, Raphaël Couturier, and Jacques Bahi}{Femto-ST Institute, University of Franche-Comte, France}
+%\chapterauthor{Raphaël Couturier}{Femto-ST Institute, University of Franche-Comte, France}
+%\chapterauthor{Jacques Bahi}{Femto-ST Institute, University of Franche-Comte, France}
 
-\chapter{Solving sparse linear systems with GMRES and CG methods on GPU clusters}
+\chapter[Solving linear systems with GMRES and CG methods on GPU clusters]{Solving sparse linear systems with GMRES and CG methods on GPU clusters}
 \label{ch12}
 
 %%--------------------------%%
@@ -20,15 +20,15 @@ Sparse linear systems are used to model many scientific and industrial
 problems, such as the environmental simulations or the industrial processing of the complex
 or non-Newtonian fluids. Moreover, the resolution of these problems often involves the
-solving of such linear systems which are considered as the most expensive process in
+solving of such linear systems that are considered the most expensive process in
 terms of execution time and memory space. Therefore, solving sparse linear systems must
 be as efficient as possible in order to deal with problems of ever increasing size.
 
 There are, in the jargon of numerical analysis, different methods of solving sparse
-linear systems that can be classified in two classes: the direct and iterative methods.
-However, the iterative methods are often more suitable than their counterpart, direct
-methods, to solve these systems. Indeed, they are less memory consuming and easier
+linear systems that can be classified in two classes: direct and iterative methods.
+However, the iterative methods are often more suitable than their counterparts, direct
+methods, to solve these systems. Indeed, they are less memory-consuming and easier
 to parallelize on parallel computers than direct methods. Different computing platforms,
 sequential and parallel computers, are used to solve sparse linear systems with iterative
 solutions. Nowadays, graphics processing units (GPUs) have become attractive to solve
@@ -38,8 +38,8 @@ traditional CPUs.
 In Section~\ref{ch12:sec:02}, we describe the general principle of two well-known
 iterative methods: the conjugate gradient method and the generalized minimal residual
 method. In Section~\ref{ch12:sec:03}, we give the main key points of the parallel
 implementation of both methods on a cluster of
-GPUs. Finally, in Section~\ref{ch12:sec:04}, we present the experimental results obtained on a
-CPU cluster and on a GPU cluster, to solve large sparse linear systems.
+GPUs. Finally, in Section~\ref{ch12:sec:04}, we present the experimental results, obtained on a
+CPU cluster and on a GPU cluster for solving large sparse linear systems.
 
 
 %%--------------------------%%
@@ -54,12 +54,12 @@ Ax=b,
 \label{ch12:eq:01}
 \end{equation}
 where $A\in\mathbb{R}^{n\times n}$ is a sparse nonsingular square matrix, $x\in\mathbb{R}^{n}$
-is the solution vector, $b\in\mathbb{R}^{n}$ is the right-hand side and $n\in\mathbb{N}$ is a
+is the solution vector, $b\in\mathbb{R}^{n}$ is the right-hand side, and $n\in\mathbb{N}$ is a
 large integer number.
 
 The iterative methods\index{Iterative~method} for solving the large sparse linear system~(\ref{ch12:eq:01})
 proceed by successive iterations of a same block of elementary operations, during which an
-infinite number of approximate solutions $\{x_k\}_{k\geq 0}$ are computed. Indeed, from an
+infinite number of approximate solutions $\{x_k\}_{k\geq 0}$ is computed. Indeed, from an
 initial guess $x_0$, an iterative method determines at each iteration $k>0$ an approximate
 solution $x_k$ which, gradually, converges to the exact solution $x^{*}$ as follows:
 \begin{equation}
@@ -78,9 +78,9 @@ where $\varepsilon<1$ is the required convergence tolerance threshold\index{Conv
 Some of the most iterative methods that have proven their efficiency for solving large sparse
 linear systems are those called \textit{Krylov subspace methods}~\cite{ch12:ref1}\index{Iterative~method!Krylov~subspace}.
-In the present chapter, we describe two Krylov methods which are widely used: the conjugate
-gradient method (CG) and the generalized minimal residual method (GMRES). In practice, the
-Krylov subspace methods are usually used with preconditioners that allow to improve their
+In the present chapter, we describe two Krylov methods which are widely used: the CG method (conjugate
+gradient method) and the GMRES method (generalized minimal residual method). In practice, the
+Krylov subspace methods are usually used with preconditioners that allow the improvement of their
 convergence. So, in what follows, the CG and GMRES methods are used to solve the
 left-preconditioned\index{Sparse~linear~system!Preconditioned} sparse linear system:
 \begin{equation}
@@ -95,7 +95,7 @@ where $M$ is the preconditioning matrix.
 \subsection{CG method}
 \label{ch12:sec:02.01}
 The conjugate gradient method was initially developed by Hestenes and Stiefel in 1952~\cite{ch12:ref2}.
-It is one of the well known iterative method to solve large sparse linear systems. In addition, it
+It is one of the well-known iterative methods to solve large sparse linear systems. In addition, it
 can be adapted to solve nonlinear equations and optimization problems. However, it can only be applied
 to problems with positive definite symmetric matrices.
 
@@ -111,7 +111,7 @@ such that the Galerkin condition\index{Galerkin~condition} must be satisfied:
 r_k \bot \mathcal{K}_k(A,r_0),
 \label{ch12:eq:05}
 \end{equation}
-where $x_0$ is the initial guess, $r_k=b-Ax_k$ is the residual of the computed solution $x_k$ and $\mathcal{K}_k$
+where $x_0$ is the initial guess, $r_k=b-Ax_k$ is the residual of the computed solution $x_k$, and $\mathcal{K}_k$
 the Krylov subspace of order $k$: \[\mathcal{K}_k(A,r_0) \equiv\text{span}\{r_0, Ar_0, A^2r_0,\ldots, A^{k-1}r_0\}.\]
 In fact, CG is based on the construction of a sequence $\{p_k\}_{k\in\mathbb{N}}$ of direction vectors in
 $\mathcal{K}_k$ which are pairwise $A$-conjugate ($A$-orthogonal):
@@ -142,9 +142,9 @@ p_0=r_0, & p_k=r_k+\beta_k p_{k-1}, & \beta_k\in\mathbb{R}.
 \label{ch12:eq:09}
 \end{equation}
 Moreover, the scalars $\{\alpha_k\}_{k>0}$ are chosen so as to minimize the $A$-norm error $\|x^{*}-x_k\|_A$
-over the Krylov subspace $\mathcal{K}_{k}$ and the scalars $\{\beta_k\}_{k>0}$ are chosen so as to ensure
+over the Krylov subspace $\mathcal{K}_{k}$, and the scalars $\{\beta_k\}_{k>0}$ are chosen so as to ensure
 that the direction vectors are pairwise $A$-conjugate. So, the assumption that matrix $A$ is symmetric and
-the recurrences~(\ref{ch12:eq:08}) and~(\ref{ch12:eq:09}) allow to deduce that:
+the recurrences~(\ref{ch12:eq:08}) and~(\ref{ch12:eq:09}) allow the deduction that:
 \begin{equation}
 \begin{array}{ll}
 \alpha_{k}=\frac{r^{T}_{k-1}r_{k-1}}{p_{k}^{T}Ap_{k}}, & \beta_{k}=\frac{r_{k}^{T}r_{k}}{r_{k-1}^{T}r_{k-1}}.
@@ -176,21 +176,21 @@ the recurrences~(\ref{ch12:eq:08}) and~(\ref{ch12:eq:09}) allow to deduce that:
     $k = k + 1$\;
   }
 }
-\caption{Left-preconditioned CG method}
+\caption{left-preconditioned CG method}
 \label{ch12:alg:01}
 \end{algorithm}
 
 Algorithm~\ref{ch12:alg:01} shows the main key points of the preconditioned CG method. It allows
-to solve the left-preconditioned\index{Sparse~linear~system!Preconditioned} sparse linear system~(\ref{ch12:eq:11}).
+the solving of the left-preconditioned\index{Sparse~linear~system!Preconditioned} sparse linear system~(\ref{ch12:eq:11}).
 In this algorithm, $\varepsilon$ is the convergence tolerance threshold, $maxiter$ is the maximum
-number of iterations and $(\cdot,\cdot)$ defines the dot product between two vectors in $\mathbb{R}^{n}$.
+number of iterations, and $(\cdot,\cdot)$ defines the dot product between two vectors in $\mathbb{R}^{n}$.
 At every iteration, a direction vector $p_k$ is determined, so that it is orthogonal to the preconditioned
 residual $z_k$ and to the direction vectors $\{p_i\}_{i
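To make the recurrences and the left-preconditioned CG algorithm of this hunk concrete, the following is a minimal, sequential C sketch of one possible implementation. It is illustrative only and not the chapter's code: it assumes a dense row-major matrix and a Jacobi (diagonal) preconditioner M = diag(A), whereas the chapter deals with general sparse matrices and preconditioners on a GPU cluster; the helper names (dot, matvec, pcg) are hypothetical.

/* Minimal sequential sketch of left-preconditioned CG (cf. Algorithm 1, ch12:alg:01).
 * Assumptions for illustration: dense row-major A, Jacobi preconditioner M = diag(A). */
#include <math.h>
#include <stdlib.h>

static double dot(const double *u, const double *v, int n) {
    double s = 0.0;
    for (int i = 0; i < n; i++) s += u[i] * v[i];
    return s;
}

static void matvec(const double *A, const double *x, double *y, int n) {
    for (int i = 0; i < n; i++) {
        y[i] = 0.0;
        for (int j = 0; j < n; j++) y[i] += A[i * n + j] * x[j];
    }
}

/* Solves A x = b starting from the initial guess in x; returns the number of iterations. */
int pcg(const double *A, const double *b, double *x, int n, double eps, int maxiter) {
    double *r = malloc(n * sizeof *r), *z = malloc(n * sizeof *z);
    double *p = malloc(n * sizeof *p), *q = malloc(n * sizeof *q);
    int k = 0;

    matvec(A, x, q, n);                                       /* r_0 = b - A x_0        */
    for (int i = 0; i < n; i++) r[i] = b[i] - q[i];
    for (int i = 0; i < n; i++) z[i] = r[i] / A[i * n + i];   /* z_0 = M^{-1} r_0       */
    for (int i = 0; i < n; i++) p[i] = z[i];                  /* p_0 = z_0              */

    double rz_old = dot(r, z, n);
    while (sqrt(dot(r, r, n)) > eps && k < maxiter) {
        matvec(A, p, q, n);                                   /* q = A p_k              */
        double alpha = rz_old / dot(p, q, n);                 /* alpha_k                */
        for (int i = 0; i < n; i++) x[i] += alpha * p[i];     /* x_k = x_{k-1}+alpha p  */
        for (int i = 0; i < n; i++) r[i] -= alpha * q[i];     /* r_k = r_{k-1}-alpha q  */
        for (int i = 0; i < n; i++) z[i] = r[i] / A[i * n + i];
        double rz_new = dot(r, z, n);
        double beta = rz_new / rz_old;                        /* beta_k                 */
        for (int i = 0; i < n; i++) p[i] = z[i] + beta * p[i];
        rz_old = rz_new;
        k++;
    }
    free(r); free(z); free(p); free(q);
    return k;
}

The loop mirrors the recurrences of the section, with the preconditioned residual z_k replacing r_k in the scalar formulas for alpha_k and beta_k, and stops when the residual norm falls below eps or maxiter iterations are reached.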