From 6857ecf2cdc87f1eec0618e8f148dae094224933 Mon Sep 17 00:00:00 2001
From: raphael couturier
Date: Thu, 9 Oct 2014 15:45:23 +0200
Subject: [PATCH 1/1] no more parallelization part

---
 paper.tex | 30 ++++++++++++++----------------
 1 file changed, 14 insertions(+), 16 deletions(-)

diff --git a/paper.tex b/paper.tex
index 8ffd387..8bb4dc3 100644
--- a/paper.tex
+++ b/paper.tex
@@ -583,8 +583,7 @@ performances.
 The present paper is organized as follows. In Section~\ref{sec:02} some related works are presented.
 Section~\ref{sec:03} presents our two-stage algorithm using a least-square residual minimization. Section~\ref{sec:04} describes some
-convergence results on this method. In Section~\ref{sec:05}, parallization
-details of TSARM are given. Section~\ref{sec:06} shows some experimental
+convergence results on this method. Section~\ref{sec:05} shows some experimental
 results obtained on large clusters of our algorithm using routines of the PETSc
 toolkit. Finally Section~\ref{sec:06} concludes and gives some perspectives.
 %%%*********************************************************
 %%%*********************************************************
@@ -680,18 +679,6 @@ To summarize, the important parameters of TSARM are:
 \item $\epsilon_{ls}$ the threshold to stop the least-square method
 \end{itemize}
 
-%%%*********************************************************
-%%%*********************************************************
-
-\section{Convergence results}
-\label{sec:04}
-
-
-
-%%%*********************************************************
-%%%*********************************************************
-\section{Parallelization}
-\label{sec:05}
 The parallelization of TSARM relies on the
 parallelization of all its parts. More precisely, except the least-square
 step, all the other parts are
@@ -733,10 +720,21 @@
 In each iteration of CGLS, there are two matrix-vector multiplications and
 some classical operations: dots, norm, multiplication and addition on
 vectors. All these operations are easy to implement in PETSc or a similar
 environment.
+
+
+%%%*********************************************************
+%%%*********************************************************
+
+\section{Convergence results}
+\label{sec:04}
+
+
+
+
 %%%*********************************************************
 %%%*********************************************************
 \section{Experiments using petsc}
-\label{sec:06}
+\label{sec:05}
 
 In order to see the influence of our algorithm with only one processor, we first
@@ -913,7 +911,7 @@ but they are not scalable with many cores.
 %%%*********************************************************
 %%%*********************************************************
 \section{Conclusion}
-\label{sec:07}
+\label{sec:06}
 %The conclusion goes here. this is more of the conclusion
 %%%*********************************************************
 %%%*********************************************************
-- 
2.39.5
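
The context of the third hunk quotes the paper's claim that each CGLS iteration costs two matrix-vector multiplications plus a few dot products, norms, and vector updates. As an illustration outside the patch itself, here is a minimal pure-Python sketch of CGLS (conjugate gradient applied to the least-squares problem); the helper names (`cgls`, `matvec`, `matvec_transpose`) are ours, not taken from the paper or from PETSc, and a real implementation would map each step onto distributed PETSc vector and matrix routines.

```python
def matvec(A, x):
    # y = A x, with A a list of rows
    return [sum(aij * xj for aij, xj in zip(row, x)) for row in A]

def matvec_transpose(A, y):
    # z = A^T y
    return [sum(A[i][j] * y[i] for i in range(len(A))) for j in range(len(A[0]))]

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def cgls(A, b, tol=1e-12, max_iter=200):
    """Minimize ||b - A x||_2 by conjugate gradient on the normal equations.

    Each iteration performs exactly two matrix-vector products (A p and
    A^T r) plus dots, norms, and vector updates, as stated in the text.
    """
    n = len(A[0])
    x = [0.0] * n
    r = list(b)                      # residual r = b - A x, with x = 0
    s = matvec_transpose(A, r)       # s = A^T r
    p = list(s)
    gamma = dot(s, s)
    if gamma ** 0.5 < tol:           # b already (numerically) orthogonal to range(A)
        return x
    for _ in range(max_iter):
        q = matvec(A, p)             # matrix-vector product no. 1: A p
        alpha = gamma / dot(q, q)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * qi for ri, qi in zip(r, q)]
        s = matvec_transpose(A, r)   # matrix-vector product no. 2: A^T r
        gamma_new = dot(s, s)
        if gamma_new ** 0.5 < tol:   # ||A^T r|| small enough: converged
            break
        p = [si + (gamma_new / gamma) * pi for si, pi in zip(s, p)]
        gamma = gamma_new
    return x
```

In exact arithmetic the loop terminates in at most `n` iterations; in the paper's setting the same recurrence would be expressed with PETSc's parallel vector operations (dots, norms, axpy-style updates), which is why only the matrix-vector products dominate the per-iteration cost.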