\documentclass{article}
+\usepackage[utf8]{inputenc}
+\usepackage{amsfonts,amssymb}
+\usepackage{amsmath}
+\usepackage{graphicx}
+
+\title{A scalable multisplitting algorithm for solving large sparse linear systems}
+
+
\begin{document}
+\author{Raphaël Couturier \and Lilia Ziane Khodja}
+
+\maketitle
+
+
+\begin{abstract}
+In this paper we revisit the Krylov multisplitting algorithm presented in
+\cite{huang1993krylov}, which uses a sequential method to minimize the Krylov
+iterations computed by a multisplitting algorithm. Our new algorithm is based
+on a parallel multisplitting algorithm with few blocks of large size, using a
+parallel GMRES method inside each block, and on a parallel Krylov minimization
+in order to improve the convergence. Some large-scale experiments with a 3D
+Poisson problem are presented. They show the improvements obtained over a
+classical GMRES, both in terms of number of iterations and execution time.
+\end{abstract}
+
+\section{Introduction}
+
+Iterative methods are used to solve large sparse linear systems of equations of
+the form $Ax=b$ because they are easier to parallelize than direct methods.
+Many iterative methods have been proposed and adapted by different researchers.
+However, when solving large linear systems with many cores, iterative methods
+often suffer from scalability problems, because they require collective
+communications to perform matrix-vector products and reduction operations.
+Preconditioners can be used to accelerate the convergence of iterative solvers.
+However, most of the good preconditioners are not scalable when thousands of
+cores are used.
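+
+To illustrate this point, the following minimal sketch (given for illustration
+only, assuming an MPI environment accessed through the mpi4py Python package)
+shows the global reduction hidden in every dot product of a Krylov method:
+each core computes a partial dot product on its slice of the vectors, and a
+collective reduction involving all the cores is then required.
+
+\begin{verbatim}
+# illustrative sketch: the global reduction behind a distributed dot product
+from mpi4py import MPI
+import numpy as np
+
+comm = MPI.COMM_WORLD
+
+# each core holds a slice of the vectors x and y
+n_local = 1000
+x_local = np.random.rand(n_local)
+y_local = np.random.rand(n_local)
+
+# local partial dot product, then a collective reduction over all the cores;
+# this collective is what limits scalability when many cores are used
+partial = float(np.dot(x_local, y_local))
+dot = comm.allreduce(partial, op=MPI.SUM)
+
+if comm.rank == 0:
+    print("global dot product:", dot)
+\end{verbatim}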
+
+
+To be completed...
+We cannot discuss everything here...
+
+\section{Related works}
+
+
+A general framework for studying parallel multisplitting was presented by
+O'Leary and White in \cite{o1985multi}, where convergence conditions are given
+for the most general case. Many authors have since improved multisplitting
+algorithms, for example by proposing an asynchronous version
+\cite{bru1995parallel} together with convergence conditions for that case
+\cite{bai1999block,bahi2000asynchronous}, or other two-stage
+algorithms~\cite{frommer1992h,bru1995parallel}.
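+
+For completeness, we briefly recall the general formulation of
+\cite{o1985multi}. A multisplitting of $A$ is a collection of triples
+$(M_\ell,N_\ell,E_\ell)$, $\ell=1,\dots,L$, such that $A=M_\ell-N_\ell$ with
+$M_\ell$ nonsingular, and the weighting matrices $E_\ell$ are nonnegative
+diagonal matrices satisfying $\sum_{\ell=1}^{L}E_\ell=I$. The associated
+iteration is
+\begin{equation*}
+x^{k+1}=\sum_{\ell=1}^{L}E_\ell\,M_\ell^{-1}\left(N_\ell x^{k}+b\right),
+\qquad k=0,1,\dots
+\end{equation*}
+where the $L$ local solves with the matrices $M_\ell$ can be carried out in
+parallel.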
+
+In \cite{huang1993krylov}, the authors proposed a parallel multisplitting
+algorithm in which all the tasks except one are devoted to solving a sub-block
+of the splitting and to sending their local solution to the first task, which
+is in charge of combining the vectors at each iteration. These vectors form a
+Krylov basis, over which the first task minimizes the error function in order
+to accelerate the convergence; the other tasks then receive the updated
+solution, and the process is repeated until convergence of the global system.
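+
+Written in a generic form (the notations below are ours, not those of
+\cite{huang1993krylov}), if $S=[s^{1},\dots,s^{q}]$ denotes the matrix whose
+columns are the $q$ vectors collected by the first task, the combination step
+amounts to solving the small least-squares problem
+\begin{equation*}
+\alpha^{*}=\arg\min_{\alpha\in\mathbb{R}^{q}}\left\|b-AS\alpha\right\|_{2},
+\end{equation*}
+and to broadcasting the updated solution $\tilde{x}=S\alpha^{*}$ to the other
+tasks.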
+
+
+
+In \cite{couturier2008gremlins}, the authors proposed practical implementations
+of multisplitting algorithms to solve large-scale linear systems. The inner
+solvers can be based on a sequential direct method, such as LU, or on a
+sequential iterative one, such as GMRES.
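+
+As an illustration only, and not as a description of the implementations of
+\cite{couturier2008gremlins}, the following hypothetical sequential sketch
+(in Python, using scipy) shows the inner-outer structure of such a two-stage
+scheme: a block-Jacobi multisplitting outer loop whose diagonal blocks are
+solved by an inner solver, here a direct sparse LU solve.
+
+\begin{verbatim}
+# hypothetical sketch of a two-stage scheme: a block-Jacobi multisplitting
+# outer iteration with a sparse LU inner solver on each diagonal block
+import numpy as np
+import scipy.sparse as sp
+import scipy.sparse.linalg as spla
+
+def multisplitting_solve(A, b, n_blocks=4, tol=1e-8, max_outer=200):
+    n = A.shape[0]
+    bounds = np.linspace(0, n, n_blocks + 1, dtype=int)
+    x = np.zeros(n)
+    for _ in range(max_outer):
+        x_new = x.copy()
+        for lo, hi in zip(bounds[:-1], bounds[1:]):
+            # local right-hand side: b restricted to the block, minus the
+            # coupling with the current values of the other blocks
+            rhs = b[lo:hi] - A[lo:hi, :] @ x + A[lo:hi, lo:hi] @ x[lo:hi]
+            # inner solve on the diagonal block (a direct LU solve here; an
+            # iterative solver such as GMRES could be used instead)
+            x_new[lo:hi] = spla.spsolve(A[lo:hi, lo:hi].tocsc(), rhs)
+        x = x_new
+        if np.linalg.norm(b - A @ x) <= tol * np.linalg.norm(b):
+            break
+    return x
+
+# small check on a diagonally dominant (shifted 1D Poisson) test matrix
+n = 400
+A = sp.diags([-1.0, 3.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
+b = np.ones(n)
+x = multisplitting_solve(A, b)
+print("relative residual:", np.linalg.norm(b - A @ x) / np.linalg.norm(b))
+\end{verbatim}
+
+In a parallel setting, each block would be handled by a group of cores and the
+coupling terms would be exchanged between neighboring groups instead of being
+computed from a global vector.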
-This paper presents ....
+\bibliographystyle{plain}
+\bibliography{biblio}
\end{document}