The present paper is organized as follows. Section~\ref{sec:02} presents some
related works. Section~\ref{sec:03} describes our two-stage algorithm based on
a least-squares residual minimization. Section~\ref{sec:04} gives some
convergence results for this method. Section~\ref{sec:05} shows experimental
results obtained on large clusters with our algorithm, implemented with
routines of the PETSc toolkit. Finally, Section~\ref{sec:06} concludes and
gives some perspectives.
%%%*********************************************************
%%%*********************************************************
inner-outer iteration solver based on iterative Krylov methods. The key points
of our solver are given in Algorithm~\ref{algo:01}.
In order to accelerate the convergence, the outer iteration periodically applies
a least-squares minimization to the residuals computed by the inner solver. The
inner solver is a Krylov-based method that is used as a black box and does not
need to be modified.

At each outer iteration, the sparse linear system $Ax=b$ is solved iteratively,
but only for $m$ iterations, restarting from the previous solution. For example,
the GMRES method~\cite{Saad86} or one of its variants can be used as the inner
solver. The current solution of the Krylov method is saved in a matrix $S$
composed of the successive solutions computed by the inner iterations.

Periodically, every $s$ iterations, a minimization step is applied in order to
compute a new solution $x$ over the span of the stored solutions. For that, the
following least-squares problem is solved:
\begin{equation}
 \underset{\alpha\in\mathbb{R}^{s}}{\min}\|b-R\alpha\|_2
\label{eq:01}
\end{equation}
with $R=AS$. Then the new solution $x$ is computed as $x=S\alpha$.

In practice, $R$ is a dense rectangular matrix in $\mathbb{R}^{n\times s}$, with
$s\ll n$. In order to solve the least-squares problem~(\ref{eq:01}), an
iterative method such as CGLS~\cite{Hestenes52} or LSQR~\cite{Paige82} is used.
Those methods are more appropriate than a direct method in a parallel context.
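For illustration only, the following sketch shows how this minimization step
could be written with the LSQR routine of SciPy (our actual implementation
relies on PETSc routines); the variables \texttt{A}, \texttt{b} and \texttt{S}
are assumed to be already built:
\begin{verbatim}
# Illustrative sketch: SciPy's LSQR stands in for the PETSc least-squares
# routines; A (sparse), b and S are assumed to be already available.
from scipy.sparse.linalg import lsqr

R = A @ S              # dense n x s matrix
alpha = lsqr(R, b)[0]  # iteratively minimizes ||b - R*alpha||_2
x = S @ alpha          # new solution in the span of the columns of S
\end{verbatim}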
\begin{algorithm}[t]
\caption{TSARM}
\begin{algorithmic}[1]
\Input $A$ (sparse matrix), $b$ (right-hand side)
\Output $x$ (solution vector)\vspace{0.2cm}
\State Set the initial guess $x^0$
\For {$k=1,2,3,\ldots$ until convergence} \label{algo:conv}
  \State $x^k = \textrm{Solve}(A, b, x^{k-1}, m)$ \label{algo:solve}
  \State $S_{k \bmod s} = x^k$ \label{algo:store}
  \If {$k \bmod s = 0$ {\bf and} not convergence}
    \State $R=AS$ \Comment{compute the dense matrix}
    \State Solve the least-squares problem $\underset{\alpha\in\mathbb{R}^{s}}{\min}\|b-R\alpha\|_2$
    \State $x^k=S\alpha$ \Comment{compute the new solution}
\EndIf
\EndFor
\end{algorithmic}
\label{algo:01}
\end{algorithm}
Algorithm~\ref{algo:01} summarizes the principle of our method. The outer
iteration corresponds to the for loop. At line~\ref{algo:solve}, the Krylov
method is called for a maximum of $m$ iterations. In practice, we suggest
choosing $m$ equal to the restart parameter of the GMRES-like method. At
line~\ref{algo:store}, the operation $S_{k \bmod s}=x^k$ consists in copying the
solution $x^k$ into column $k \bmod s$ of the matrix $S$. After the
minimization, the matrix $S$ is reused with the subsequent solutions computed by
the inner solver.
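To make the overall procedure concrete, a compact sketch of
Algorithm~\ref{algo:01} in Python with SciPy is given below. It is only an
illustration and not the PETSc-based implementation used in our experiments;
the function name \texttt{tsarm}, the default values of $m$ and $s$, and the
stopping test on the relative residual are choices made for this sketch.
\begin{verbatim}
# Sketch of Algorithm 1 with SciPy (illustration only). The stopping
# criterion on the relative residual is an assumption of this sketch.
import numpy as np
from scipy.sparse.linalg import gmres, lsqr

def tsarm(A, b, m=30, s=10, max_outer=1000, tol=1e-8):
    n = b.shape[0]
    x = np.zeros(n)                      # initial guess x^0
    S = np.zeros((n, s))                 # successive inner solutions
    for k in range(1, max_outer + 1):
        # inner solver: at most m GMRES iterations from the previous x
        x, _ = gmres(A, b, x0=x, restart=m, maxiter=1)
        S[:, k % s] = x                  # store the current solution
        if k % s == 0:
            R = A @ S                    # dense n-by-s matrix
            alpha = lsqr(R, b)[0]        # min ||b - R alpha||_2
            x = S @ alpha                # new (minimized) solution
        if np.linalg.norm(b - A @ x) <= tol * np.linalg.norm(b):
            break
    return x
\end{verbatim}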
%%%*********************************************************
%%%*********************************************************
characteristics. For all the matrices, the name, the field, the number of rows
and the number of nonzero elements are given.
\begin{table*}
\begin{center}
\begin{tabular}{|c|c|r|r|r|}
\hline
\caption{Main characteristics of the sparse matrices chosen from the Davis collection}
\label{tab:01}
\end{center}
\end{table*}
The following parameters have been chosen for our experiments. Since, by
default, the restart of GMRES is performed every 30 iterations, we have chosen to stop
%%%*********************************************************
%%%*********************************************************
\section{Conclusion}
\label{sec:06}
%%%*********************************************************
%%%*********************************************************
% http://www.ctan.org/tex-archive/biblio/bibtex/contrib/doc/
% The IEEEtran BibTeX style support page is at:
% http://www.michaelshell.org/tex/ieeetran/bibtex/
\bibliographystyle{IEEEtran}
% argument is your BibTeX string definitions and bibliography database(s)
\bibliography{biblio}
%
% <OR> manually copy in the resultant .bbl file
% set second argument of \begin to the number of references
% (used to reserve space for the reference number labels box)