X-Git-Url: https://bilbo.iut-bm.univ-fcomte.fr/and/gitweb/GMRES2stage.git/blobdiff_plain/6fda575b88fae086f925cd4dcf960fc2af0d3b4f..d0f18e8e70d6e9c4404a5c5b5300f736ac4257c1:/paper.tex

diff --git a/paper.tex b/paper.tex
index 04ad634..e3d19ec 100644
--- a/paper.tex
+++ b/paper.tex
@@ -369,7 +369,7 @@
 %
 % paper title
 % can use linebreaks \\ within to get better formatting as desired
-\title{TSIRM: A Two-Stage Iteration with least-square Residual Minimization algorithm to solve large sparse linear systems}
+\title{TSIRM: A Two-Stage Iteration with least-squares Residual Minimization algorithm to solve large sparse linear systems}
@@ -581,7 +581,7 @@ performances.
 The present article is organized as follows. Related work is presented in
 Section~\ref{sec:02}. Section~\ref{sec:03} details the two-stage algorithm using
-a least-square residual minimization, while Section~\ref{sec:04} provides
+a least-squares residual minimization, while Section~\ref{sec:04} provides
 convergence results regarding this method. Section~\ref{sec:05} shows some
 experimental results obtained on large clusters using routines of the PETSc
 toolkit. This research work ends with a conclusion section, in which the proposal
@@ -604,7 +604,7 @@ is summarized while intended perspectives are provided.
 %%%*********************************************************
 %%%*********************************************************

-\section{Two-stage iteration with least-square residuals minimization algorithm}
+\section{Two-stage iteration with least-squares residual minimization algorithm}
 \label{sec:03}
 A two-stage algorithm is proposed to solve large sparse linear systems of the
 form $Ax=b$, where $A\in\mathbb{R}^{n\times n}$ is a sparse and square
@@ -615,7 +615,7 @@ inner-outer iteration solver based on iterative Krylov methods.
 The main key points of the proposed solver are given in
 Algorithm~\ref{algo:01}. It can be summarized as follows: the inner solver is
 a Krylov-based one. In order to accelerate its convergence, the
-outer solver periodically applies a least-square minimization on the residuals computed by the inner one. %Tsolver which does not required to be changed.
+outer solver periodically applies a least-squares minimization to the residuals computed by the inner one.
 At each outer iteration, the sparse linear system
 $Ax=b$ is partially solved using only $m$
@@ -636,7 +636,7 @@ with $R=AS$. Then the new solution $x$ is computed with $x=S\alpha$.
 In practice, $R$ is a dense rectangular matrix belonging to $\mathbb{R}^{n\times s}$,
-with $s\ll n$. In order to minimize~\eqref{eq:01}, a least-square method such as
+with $s\ll n$. In order to minimize~\eqref{eq:01}, a least-squares method such as
 CGLS~\cite{Hestenes52} or LSQR~\cite{Paige82} is used. Note that these
 methods are more appropriate than a direct method in a parallel context.
@@ -680,19 +680,19 @@ Let us summarize the most important parameters of TSIRM:
 \item $\epsilon_{tsirm}$: the threshold to stop the TSIRM method;
 \item $max\_iter_{kryl}$: the maximum number of iterations for the Krylov method;
 \item $s$: the number of outer iterations before applying the minimization step;
-\item $max\_iter_{ls}$: the maximum number of iterations for the iterative least-square method;
-\item $\epsilon_{ls}$: the threshold used to stop the least-square method.
+\item $max\_iter_{ls}$: the maximum number of iterations for the iterative least-squares method;
+\item $\epsilon_{ls}$: the threshold used to stop the least-squares method.
 \end{itemize}

 The parallelization of TSIRM relies on the parallelization of all its
-parts. More precisely, except the least-square step, all the other parts are
+parts. More precisely, except the least-squares step, all the other parts are
 straightforward to carry out in parallel. In order to develop a parallel
 version of our code, we have chosen to use PETSc~\cite{petsc-web-page}. For
 line~\ref{algo:matrix_mul} the matrix-matrix multiplication is implemented
 efficiently since the matrix $A$ is sparse and since the matrix $S$ contains
 few columns in practice. As explained previously, at least two methods seem to be
-interesting to solve the least-square minimization, CGLS and LSQR.
+suitable for solving the least-squares minimization: CGLS and LSQR.
 In the following we recall the CGLS algorithm. The LSQR method follows more or
 less the same principle, but it would take more space to present, so we briefly
 explain the parallelization of CGLS, which is similar to that of LSQR.
@@ -897,7 +897,7 @@ corresponds to 30*12, there are $max\_iter_{ls}$ which corresponds to 15.
 \begin{figure}[htbp]
 \centering
 \includegraphics[width=0.45\textwidth]{nb_iter_sec_ex15_juqueen}
-\caption{Number of iterations per second with ex15 and the same parameters than in Table~\ref{tab:03}}
+\caption{Number of iterations per second with ex15 and the same parameters as in Table~\ref{tab:03} (weak scaling)}
 \label{fig:01}
 \end{figure}
@@ -968,7 +968,7 @@ In Table~\ref{tab:04}, some experiments with example ex54 on the Curie architect
 \begin{figure}[htbp]
 \centering
 \includegraphics[width=0.45\textwidth]{nb_iter_sec_ex54_curie}
-\caption{Number of iterations per second with ex54 and the same parameters than in Table~\ref{tab:05}}
+\caption{Number of iterations per second with ex54 and the same parameters as in Table~\ref{tab:05} (strong scaling)}
 \label{fig:02}
 \end{figure}
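
Editor's note: since the patched section describes TSIRM only in prose, a minimal sequential sketch of the scheme may help the reader: an inner Krylov solver is run for a bounded number of iterations, each iterate is stored as a column of $S$, and every $s$ outer iterations the least-squares problem $\min_\alpha \|b - AS\alpha\|_2$ of~\eqref{eq:01} is solved, with $R = AS$ and the update $x = S\alpha$. The sketch below uses Python/SciPy and is not the authors' parallel PETSc implementation: the function name tsirm, the default parameter values, the max_outer safeguard, and the use of SciPy's gmres and lsqr in place of PETSc's KSP solvers and a parallel CGLS are illustrative assumptions.

    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import gmres, lsqr

    def tsirm(A, b, s=8, max_iter_kryl=30, eps_tsirm=1e-8,
              max_iter_ls=15, eps_ls=1e-10, max_outer=500):
        """Sequential sketch of the two-stage iteration (not the PETSc code).

        Inner stage: at most max_iter_kryl GMRES iterations from the current
        iterate.  Outer stage: every s inner solves, minimize
        ||b - A S alpha||_2 over the s stored iterates (the columns of S)
        and set x = S alpha, as in the least-squares step of the paper.
        """
        n = b.shape[0]
        x = np.zeros(n)
        S = np.zeros((n, s))
        norm_b = np.linalg.norm(b)
        for k in range(max_outer):
            # Inner Krylov solve: one GMRES cycle of max_iter_kryl iterations.
            x, _ = gmres(A, b, x0=x, restart=max_iter_kryl, maxiter=1)
            S[:, k % s] = x  # keep the iterate as a column of S
            if (k + 1) % s == 0:
                # Least-squares minimization of ||b - R alpha|| with R = A S,
                # here via LSQR (the paper also considers CGLS).
                R = A @ S  # dense n-by-s matrix, s << n
                alpha = lsqr(R, b, atol=eps_ls, btol=eps_ls,
                             iter_lim=max_iter_ls)[0]
                x = S @ alpha
            if np.linalg.norm(b - A @ x) / norm_b < eps_tsirm:
                break
        return x

    # Toy usage on a 1-D Poisson system.
    n = 1000
    A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
    b = np.ones(n)
    x = tsirm(A, b)
    print(np.linalg.norm(b - A @ x) / np.linalg.norm(b))

In the paper's parallel setting, the same least-squares step is the only non-trivial part to distribute, which is why the patched text singles out CGLS and LSQR as the methods of choice.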