X-Git-Url: https://bilbo.iut-bm.univ-fcomte.fr/and/gitweb/GMRES2stage.git/blobdiff_plain/0152824d3e001a7084c17325a1171e9efe4c51ec..8aeef74d04b37c2601749676e053a538ba3785cd:/paper.tex

diff --git a/paper.tex b/paper.tex
index fe7fa39..f1becbd 100644
--- a/paper.tex
+++ b/paper.tex
@@ -648,15 +648,15 @@ appropriate than a single direct method in a parallel context.
 \begin{algorithmic}[1]
   \Input $A$ (sparse matrix), $b$ (right-hand side)
   \Output $x$ (solution vector)\vspace{0.2cm}
-  \State Set the initial guess $x^0$
+  \State Set the initial guess $x_0$
   \For {$k=1,2,3,\ldots$ until convergence (error$<\epsilon_{tsirm}$)} \label{algo:conv}
-    \State $x^k=Solve(A,b,x^{k-1},max\_iter_{kryl})$ \label{algo:solve}
+    \State $x_k=Solve(A,b,x_{k-1},max\_iter_{kryl})$ \label{algo:solve}
     \State retrieve error
-    \State $S_{k \mod s}=x^k$ \label{algo:store}
+    \State $S_{k \mod s}=x_k$ \label{algo:store}
     \If {$k \mod s=0$ {\bf and} error$>\epsilon_{kryl}$}
       \State $R=AS$ \Comment{compute dense matrix} \label{algo:matrix_mul}
-      \State $\alpha=Solve\_Least\_Squares(R,b,max\_iter_{ls})$ \label{algo:}
-      \State $x^k=S\alpha$ \Comment{compute new solution}
+      \State $\alpha=Least\_Squares(R,b,max\_iter_{ls})$ \label{algo:least_squares}
+      \State $x_k=S\alpha$ \Comment{compute new solution}
     \EndIf
   \EndFor
 \end{algorithmic}
@@ -703,19 +703,21 @@ less the same principle but it takes more place, so we briefly explain the paral
 \begin{algorithmic}[1]
   \Input $A$ (matrix), $b$ (right-hand side)
   \Output $x$ (solution vector)\vspace{0.2cm}
-  \State $r=b-Ax$
-  \State $p=A'r$
-  \State $s=p$
-  \State $g=||s||^2_2$
-  \For {$k=1,2,3,\ldots$ until convergence (g$<\epsilon_{ls}$)} \label{algo2:conv}
-    \State $q=Ap$
-    \State $\alpha=g/||q||^2_2$
-    \State $x=x+alpha*p$
-    \State $r=r-alpha*q$
-    \State $s=A'*r$
-    \State $g_{old}=g$
-    \State $g=||s||^2_2$
-    \State $\beta=g/g_{old}$
+  \State Let $x_0$ be an initial approximation
+  \State $r_0=b-Ax_0$
+  \State $p_1=A^Tr_0$
+  \State $s_0=p_1$
+  \State $\gamma=||s_0||^2_2$
+  \For {$k=1,2,3,\ldots$ until convergence ($\gamma<\epsilon_{ls}$)} \label{algo2:conv}
+    \State $q_k=Ap_k$
+    \State $\alpha_k=\gamma/||q_k||^2_2$
+    \State $x_k=x_{k-1}+\alpha_kp_k$
+    \State $r_k=r_{k-1}-\alpha_kq_k$
+    \State $s_k=A^Tr_k$
+    \State $\gamma_{old}=\gamma$
+    \State $\gamma=||s_k||^2_2$
+    \State $\beta_k=\gamma/\gamma_{old}$
+    \State $p_{k+1}=s_k+\beta_kp_k$
   \EndFor
 \end{algorithmic}
 \label{algo:02}
@@ -743,7 +745,22 @@
 where $\alpha = \lambda_{\min}(M)^2$ and $\beta = \lambda_{\max}(A^T A)$, which proves
 the convergence of GMRES($m$) for all $m$ under that assumption regarding $A$.
 \end{proposition}
 
+We can now claim that:
+\begin{proposition}
+If $A$ is a positive real matrix and GMRES($m$) is used as the inner solver, then the TSIRM algorithm is convergent.
+\end{proposition}
+
+\begin{proof}
+Let $r_k = b-Ax_k$, where $x_k$ is the approximation of the solution after the
+$k$-th iterate of TSIRM.
+We will prove that $r_k \rightarrow 0$ when $k \rightarrow +\infty$.
+Each step of the TSIRM algorithm either performs iterations of GMRES($m$),
+whose residual norm decreases according to the previous proposition, or
+replaces the current iterate by $x_k=S\alpha$, where $\alpha$ minimizes
+$\|b-AS\alpha\|_2$. As the last GMRES($m$) iterate is one of the columns of $S$,
+this minimization cannot increase the residual norm. The sequence
+$\left(\|r_k\|_2\right)_{k\in\mathbb{N}}$ is thus non-increasing, and the bound
+of the previous proposition forces it to tend to $0$.
+\end{proof}
 
 %%%*********************************************************
 %%%*********************************************************
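To make the control flow of TSIRM concrete, here is a minimal sequential sketch in Python/NumPy. It is an illustration only, not the paper's implementation: the paper relies on a parallel PETSc code with FGMRES as the inner solver, whereas this sketch uses scipy.sparse.linalg.gmres for the Solve() step and numpy.linalg.lstsq as a stand-in for the iterative least-squares stage (CGLS or LSQR in the paper). The function name tsirm, the stopping logic, and the default parameter values (mirroring restart=30 and s=12 from the experiments) are our assumptions.

    # Minimal sequential sketch of TSIRM (illustration only; the paper's
    # implementation is a parallel PETSc code with FGMRES as inner solver).
    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    def tsirm(A, b, s=12, max_iter_kryl=30,
              eps_tsirm=1e-10, eps_kryl=1e-10, max_outer=1000):
        n = b.shape[0]
        x = np.zeros(n)                    # initial guess x_0
        S = np.zeros((n, s))               # columns store the last s iterates
        for k in range(1, max_outer + 1):
            # Inner stage: one restart cycle of GMRES(max_iter_kryl),
            # playing the role of Solve(A, b, x_{k-1}, max_iter_kryl).
            x, _ = spla.gmres(A, b, x0=x, restart=max_iter_kryl, maxiter=1)
            error = np.linalg.norm(b - A @ x)   # retrieve error
            S[:, k % s] = x                # S_{k mod s} = x_k
            if error < eps_tsirm:          # outer convergence test
                break
            if k % s == 0 and error > eps_kryl:
                R = A @ S                  # dense n x s matrix
                # Minimize ||b - R alpha||_2 (stand-in for CGLS/LSQR).
                alpha, *_ = np.linalg.lstsq(R, b, rcond=None)
                x = S @ alpha              # new solution x_k = S alpha
        return x

    # Toy usage on a small diagonally dominant tridiagonal system
    # (not the paper's ex15 operator):
    A = sp.diags([-1.0, 4.0, -1.0], [-1, 0, 1], shape=(1000, 1000), format="csr")
    b = np.ones(1000)
    print(np.linalg.norm(b - A @ tsirm(A, b)))

On such a toy system the outer loop converges in a handful of restarts; the point of the sketch is the two-stage structure, not performance.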
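The least-squares stage can be made just as concrete. The sketch below is a direct, self-contained NumPy transcription of the CGLS pseudocode above: it solves the minimization of $\|b-Ax\|_2$ using only products with A and its transpose. Variable names follow the pseudocode, while the function name cgls and the default values of eps_ls and max_iter_ls are our choices, not the paper's. In the TSIRM sketch above, alpha = cgls(R, b) could replace the numpy.linalg.lstsq stand-in.

    # Direct NumPy transcription of the CGLS pseudocode (sequential sketch;
    # the paper uses a parallel PETSc version).
    import numpy as np

    def cgls(A, b, eps_ls=1e-12, max_iter_ls=200):
        x = np.zeros(A.shape[1])      # x_0: initial approximation
        r = b - A @ x                 # r_0 = b - A x_0
        p = A.T @ r                   # p_1 = A^T r_0
        s = p.copy()                  # s_0 = p_1
        gamma = s @ s                 # gamma = ||s_0||_2^2
        for _ in range(max_iter_ls):  # k = 1, 2, 3, ...
            if gamma < eps_ls:        # convergence: ||A^T r_k||_2^2 small
                break
            q = A @ p                 # q_k = A p_k
            alpha = gamma / (q @ q)   # alpha_k = gamma / ||q_k||_2^2
            x = x + alpha * p         # x_k = x_{k-1} + alpha_k p_k
            r = r - alpha * q         # r_k = r_{k-1} - alpha_k q_k
            s = A.T @ r               # s_k = A^T r_k
            gamma_old = gamma
            gamma = s @ s             # gamma = ||s_k||_2^2
            beta = gamma / gamma_old  # beta_k = gamma / gamma_old
            p = s + beta * p          # p_{k+1} = s_k + beta_k p_k
        return x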
@@ -830,22 +847,21 @@ scalable linear equations solvers:
 \begin{itemize}
 \item ex15 is an example which solves in parallel an operator using a finite
   difference scheme. The diagonal is equal to 4 and the 4 extra-diagonals
-  representing the neighbors in each directions is equal to -1. This example is
+  representing the neighbors in each direction are equal to -1. This example is
   used in many physical phenomena, for example, heat and fluid flow, wave
-  propagation...
+  propagation, etc.
 \item ex54 is another example based on a 2D problem discretized with quadrilateral
   finite elements. For this example, the user can define the scaling of the material
-  coefficient in embedded circle, it is called $\alpha$.
+  coefficient in an embedded circle, called $\alpha$.
 \end{itemize}
-For more technical details on these applications, interested reader are invited
-to read the codes available in the PETSc sources. Those problem have been
-chosen because they are scalable with many cores. We have tested other problem
-but they are not scalable with many cores.
+For more technical details on these applications, interested readers are invited
+to read the codes available in the PETSc sources. Those problems have been
+chosen because they are scalable with many cores, which is not the case of other
+problems that we have tested.
 
 In the following, larger experiments on two large-scale architectures, Curie and Juqueen, are described... {\bf description...}\\
 
-{\bf Description of preconditioners}
+{\bf Description of preconditioners}\\
 
 \begin{table*}[htbp]
 \begin{center}
@@ -866,15 +882,15 @@ In the following larger experiments are described on two large scale architectur
 \hline
 \end{tabular}
-\caption{Comparison of FGMRES and TSIRM with FGMRES for example ex15 of PETSc with two preconditioner (mg and sor) with 25,000 components per core on Juqueen (threshold 1e-3, restart=30, s=12), time is expressed in seconds.}
+\caption{Comparison of FGMRES and TSIRM (with FGMRES as inner solver) for example ex15 of PETSc with two preconditioners (mg and sor) and 25,000 components per core on Juqueen (threshold 1e-3, restart=30, s=12); times are expressed in seconds.}
 \label{tab:03}
 \end{center}
 \end{table*}
 
 Table~\ref{tab:03} shows the execution times and the number of iterations of
-example ex15 of PETSc on the Juqueen architecture. Differents number of cores
-are studied rangin from 2,048 upto 16,383. Two preconditioners have been
-tested. For those experiments, the number of components (or unknown of the
+example ex15 of PETSc on the Juqueen architecture. Different numbers of cores
+are studied, ranging from 2,048 up to 16,383. Two preconditioners have been
+tested: {\it mg} and {\it sor}. For those experiments, the number of components (or unknowns of the
 problems) per processor is fixed to 25,000, a setting also called weak scaling. This
 number can seem relatively small. In fact, for some applications that need a lot
 of memory, the number of components per processor sometimes needs to be
@@ -882,11 +898,11 @@ small.
 
 
-In this Table, we can notice that TSIRM is always faster than FGMRES. The last
+In Table~\ref{tab:03}, we can notice that TSIRM is always faster than FGMRES. The last
 column shows the ratio between FGMRES and the best version of TSIRM according to
 the minimization procedure: CGLS or LSQR. Even though we report the worst
-case between CGLS and LSQR, it is clear that TSIRM is alsways faster than
-FGMRES. For this example, the multigrid preconditionner is faster than SOR. The
+case between CGLS and LSQR, it is clear that TSIRM is always faster than
+FGMRES. For this example, the multigrid preconditioner is faster than SOR. The
 gain between TSIRM and FGMRES is more or less similar for the two
 preconditioners.
 Looking at the number of iterations required to reach convergence, it is
 clear that TSIRM reduces the number of iterations. It
@@ -905,9 +921,9 @@ corresponds to 30*12, there are $max\_iter_{ls}$ which corresponds to 15.
 
 In Figure~\ref{fig:01}, the number of iterations per second corresponding to
 Table~\ref{tab:01} is displayed. It can be noticed that the number of
-iterations per second of FMGRES is constant whereas it decrease with TSIRM with
-both preconditioner. This can be explained by the fact that when the number of
-core increases the time for the minimization step also increases but, generally,
+iterations per second of FGMRES is constant, whereas it decreases with TSIRM for
+both preconditioners. This can be explained by the fact that when the number of
+cores increases, the time for the least-squares minimization step also increases. However, generally,
 when the number of cores increases, the number of iterations to reach the
 threshold also increases, and, in that case, TSIRM is more efficient at reducing
 the number of iterations. So, the overall benefit of using TSIRM is clear.