%\documentclass[a4paper,11pt]{article}
\usepackage[T1]{fontenc}
\usepackage[utf8]{inputenc}
\usepackage{graphicx,subfigure,graphics}
%\usepackage[usenames]{color}
%\usepackage{latexsym,stmaryrd}
%\usepackage{amsfonts,amssymb}
\usepackage{verbatim,theorem,moreverb}
%\usepackage{float,floatflt}
\usepackage{boxedminipage}
\usepackage{algorithm}
\usepackage{algorithmic}
%\usepackage{floatfig}
\def\sfixme#1{\fbox{\textbf{FIXME: }#1}}
\newcommand{\fixme}[1]{%
\begin{boxedminipage}{.8\linewidth}
\textbf{FIXME: }#1
\end{boxedminipage}}
\newcommand{\FIXME}[1]{\marginpar[\null\hspace{2cm} FIXME]{FIXME} \fixme{#1}}
%\psfigurepath{.:fig:IMAGES}
\graphicspath{{.}{fig/}{IMAGES/}}
\title{Gridification of a Radiotherapy Dose Computation Application with the XtremWeb-CH Environment}
\author{Nabil Abdennhader\inst{1} \and Raphaël Couturier\inst{1} \and
David Laiymani\inst{1} \and Julien Henriet\inst{2} \and
Sébastien Miquée\inst{1} \and Marc Sauget\inst{2}}
\institute{Laboratoire d'Informatique de l'universit\'{e}
de Franche-Comt\'{e} \\
IUT Belfort-Montbéliard, Rue Engel Gros, 90016 Belfort - France \\
\email{raphael.couturier, david.laiymani, sebastien.miquee@univ-fcomte.fr}
\and
FEMTO-ST, ENISYS/IRMA, F-25210 Montb\'{e}liard, FRANCE}
%-------------INTRODUCTION--------------------
\section{Introduction}
The use of distributed architectures for solving large scientific problems
seems to have become mandatory in many cases. For example, in the domain of
radiotherapy dose computation the problem is crucial. The main goal of external
beam radiotherapy is the treatment of tumours while minimizing exposure to
healthy tissue. Dosimetric planning has to be carried out in order to optimize
the dose distribution within the patient. Thus, to determine the most accurate
dose distribution during treatment planning, a compromise must be found between
the precision and the speed of calculation. Current techniques, using analytic
methods, models and databases, are fast but lack precision. Enhanced precision
can be achieved by using calculation codes based, for example, on Monte Carlo
methods. In [] the authors proposed a novel approach based on the use of neural
networks. This approach relies on the collaboration of computation codes and
multi-layer neural networks used as universal approximators. It provides a fast
and accurate evaluation of radiation doses in any given environment for given
irradiation parameters. As the learning step is often very time consuming, in
\cite{bcvsv08:ip} the authors proposed a parallel algorithm that decomposes the
learning domain into subdomains. This decomposition significantly reduces the
complexity of the target functions to approximate.
Now, as there exist several classes of distributed/parallel architectures
(supercomputers, clusters, global computing, etc.), we have to choose the one
best suited to the parallel Neurad application. The global (or volunteer)
computing model seems to be an interesting approach. Here, the computing power
is obtained by aggregating unused (or volunteered) public resources connected
to the Internet. In our case, we can imagine, for example, that a part of the
architecture will be composed of some of the hospital's computers. This
approach has the advantage of being clearly cheaper than a more dedicated
approach such as the use of supercomputers or clusters.
The aim of this paper is to propose and evaluate a gridification of the Neurad
application (more precisely, of its most time consuming part, the learning
step) using a global computing approach. For this, we focus on the XtremWeb-CH
environment []. We chose this environment because it tackles the centralized
aspect of other global computing environments such as XtremWeb [] or SETI [].
It tends towards a peer-to-peer approach by distributing some components of the
architecture. For instance, the computing nodes are allowed to communicate
directly. Experiments were conducted on a real global computing testbed. The
results are very encouraging. They exhibit an interesting speed-up and show
that the overhead induced by the use of XtremWeb-CH is very acceptable.
The paper is organized as follows. In Section 2 we present the Neurad
application and particularly its most time consuming part, i.e. the learning
step. Section 3 details the XtremWeb-CH environment, while in Section 4 we
expose the gridification of the Neurad application. Experimental results are
presented in Section 5 and we end in Section 6 with some concluding remarks
and future work.
\section{The Neurad application}
\begin{figure}[ht]
\centering
\includegraphics[width=0.7\columnwidth]{figures/neurad.pdf}
\caption{The Neurad project}
\label{f_neurad}
\end{figure}
The \emph{Neurad}~\cite{Neurad} project presented in this paper takes place in
a multi-disciplinary project involving medical physicists and computer
scientists, whose goal is to enhance the treatment planning of cancerous tumors
by external radiotherapy. In our previous
works~\cite{RADIO09,ICANN10,NIMB2008}, we have proposed an original approach to
solve scientific problems whose accurate modeling and/or analytical description
are difficult. This method is based on the collaboration of computational codes
and neural networks used as universal interpolators. Thanks to this method, the
\emph{Neurad} software provides a fast and accurate evaluation of radiation
doses in any given environment (possibly inhomogeneous) for given irradiation
parameters. We have shown in a previous work~\cite{AES2009} the interest of
using a distributed algorithm for the neural network learning. We use a
classical RPROP algorithm with an HPU topology to train our neural network.
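For the reader unfamiliar with RPROP, the classical update rule can be sketched as follows. This is a minimal NumPy sketch of the standard RPROP- variant only; the function name, hyper-parameter defaults and calling convention are illustrative assumptions, not the actual Neurad implementation.

```python
import numpy as np

def rprop_update(w, grad, prev_grad, step,
                 eta_plus=1.2, eta_minus=0.5,
                 step_min=1e-6, step_max=50.0):
    """One RPROP- update on a weight array `w`.

    Per-weight step sizes grow when the gradient keeps its sign
    and shrink when it flips; the update uses only the sign of
    the gradient, never its magnitude.
    """
    sign_change = grad * prev_grad
    step = np.where(sign_change > 0,
                    np.minimum(step * eta_plus, step_max), step)
    step = np.where(sign_change < 0,
                    np.maximum(step * eta_minus, step_min), step)
    # On a sign flip (RPROP-), the stored gradient is zeroed so the
    # next iteration neither grows nor shrinks this step again.
    grad = np.where(sign_change < 0, 0.0, grad)
    w = w - np.sign(grad) * step
    return w, grad, step
```

Because only gradient signs are used, RPROP is insensitive to the scale of the error function, which is one reason it remains a popular choice for batch training of multi-layer networks.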
Figure~\ref{f_neurad} presents the \emph{Neurad} scheme. Three parts are
clearly independent: the initial data production, the learning process and the
dose deposit evaluation. The first step, the data production, is outside the
\emph{Neurad} project. There are many ways to obtain data about radiotherapy
treatments, such as measurements or simulations. The only essential criterion
is that the result must be obtained in a homogeneous environment. We have
chosen to use only Monte Carlo simulations because these tools are the
reference in the radiotherapy domain. The advantages of using data obtained
with a Monte Carlo simulator are the following: accuracy, profusion, quantified
error and regularity of the measure points. However, there are also drawbacks,
the most important being the statistical noise, which forces a post-treatment
of the data. Figure~\ref{f_tray} presents the general behavior of a dose
deposit in water.
\begin{figure}[ht]
\centering
\includegraphics[width=0.7\columnwidth]{figures/testC.pdf}
\caption{Dose deposit by a photon beam of 24 mm of width in water (normalized values).}
\label{f_tray}
\end{figure}
The second stage of the \emph{Neurad} project is the learning step, and it is
the most time consuming one. This step is performed off-line, but it is
important to reduce the learning time in order to keep a workable tool. Indeed,
if the learning time is too long (at the moment, it can reach one week for a
limited work domain), this process can only be used for a major modification of
the use context. However, it is interesting to update the learning process when
the bounds of the learning domain evolve (evolution of the materials used for
the prostheses or of the beam (size, shape or energy)). The learning time is
linked to the volume of data, which can be very large in a real medical
context. We have worked on reducing this learning time with a parallel version
of the learning process, using a partitioning of the global dataset. The goal
of this method is to train many neural networks on sub-domains of the global
dataset. After this training, using these neural networks together allows us to
obtain a response for the global domain of study.
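The partition-and-train idea above can be sketched as follows. This is a deliberately simplified 1-D sketch: plain least-squares polynomial fits stand in for the per-sub-domain neural networks, and the toy dataset, sub-domain count and routing helper are illustrative assumptions, not the Neurad code.

```python
import numpy as np

# Toy 1-D dataset: a smooth function plus noise plays the role of
# the dose data; one simple model per sub-domain stands in for the
# per-sub-domain neural networks of the real project.
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0.0, 10.0, 2000))
y = np.sin(x) + 0.01 * rng.normal(size=x.size)

n_parts = 4
edges = np.linspace(x.min(), x.max(), n_parts + 1)

# Train one model per sub-domain of the partition.
models = []
for lo, hi in zip(edges[:-1], edges[1:]):
    mask = (x >= lo) & (x <= hi)
    models.append(np.polyfit(x[mask], y[mask], deg=5))

def predict(q):
    """Route a query point to the model owning its sub-domain,
    so the set of sub-models answers for the global domain."""
    i = min(np.searchsorted(edges, q, side="right") - 1, n_parts - 1)
    return np.polyval(models[i], q)
```

Each sub-model only has to approximate its own, much simpler, restriction of the target function, which is exactly the complexity reduction the decomposition aims at; the trainings are independent and can run in parallel.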
\begin{figure}[ht]
\centering
\includegraphics[width=0.5\columnwidth]{figures/overlap.pdf}
\caption{Overlapping for a sub-network in a two-dimensional domain with ratio
$\alpha$.}
\label{fig:overlap}
\end{figure}
However, performing the learnings on sub-domains constituting a partition of
the initial domain is not satisfactory with regard to the quality of the
results. This comes from the fact that the accuracy of the approximation
performed by a neural network is not constant over the learned domain. Thus, it
is necessary to use an overlapping of the sub-domains. The overall principle is
depicted in Figure~\ref{fig:overlap}. In this way, each sub-network has an
exploitation domain smaller than its training domain and the differences
observed at the borders are no longer relevant. Nonetheless, in order to
preserve the performance of the parallel algorithm, it is important to
carefully set the overlapping ratio $\alpha$. It must be large enough to avoid
border errors, and as small as possible to limit the increase in size of the
data subsets.
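The enlargement of the training domains by the ratio $\alpha$ can be sketched as follows for a 1-D domain. The function name and the exact clipping convention at the outer boundary are illustrative assumptions; the real Neurad overlapping operates on multi-dimensional domains.

```python
import numpy as np

def overlapped_subdomains(lo, hi, n_parts, alpha):
    """Split [lo, hi] into n_parts exploitation cells and return,
    for each cell, the enlarged training interval: the cell is
    extended by alpha times its width on both sides (clipped to the
    global domain), so every sub-network trains on a domain larger
    than the one on which it is exploited."""
    edges = np.linspace(lo, hi, n_parts + 1)
    cells = list(zip(edges[:-1], edges[1:]))
    training = []
    for a, b in cells:
        pad = alpha * (b - a)
        training.append((max(lo, a - pad), min(hi, b + pad)))
    return cells, training
```

With $\alpha = 0.2$ and four cells on $[0, 10]$, for example, the cell $[2.5, 5]$ is trained on $[2, 5.5]$: the borders of the exploitation cell now lie strictly inside the training interval, which is where the trade-off on $\alpha$ (border accuracy versus data-subset growth) plays out.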
\section{The XtremWeb-CH environment}
\section{Neurad gridification with XtremWeb-CH}
\section{Experimental results}
\section{Conclusion and future works}
\bibliographystyle{plain}
\bibliography{biblio}