2 \documentclass[10pt, conference, compsocconf]{IEEEtran}
5 \usepackage[utf8]{inputenc}
6 %\usepackage[cyr]{aeguill}
7 %\usepackage{pstricks,pst-node,pst-text,pst-3d}
16 \usepackage{subfigure}
21 \usepackage[ruled,lined,linesnumbered]{algorithm2e}
23 %%%%%%%%%%%%%%%%%%%%%%%%%%%% LyX specific LaTeX commands.
24 \newcommand{\noun}[1]{\textsc{#1}}
26 \newcommand{\tab}{\ \ \ }
33 %% \author{\IEEEauthorblockN{Authors Name/s per 1st Affiliation (Author)}
34 %% \IEEEauthorblockA{line 1 (of Affiliation): dept. name of organization\\
35 %% line 2: name of organization, acronyms acceptable\\
36 %% line 3: City, Country\\
37 %% line 4: Email: name@xyz.com}
39 %% \IEEEauthorblockN{Authors Name/s per 2nd Affiliation (Author)}
40 %% \IEEEauthorblockA{line 1 (of Affiliation): dept. name of organization\\
41 %% line 2: name of organization, acronyms acceptable\\
42 %% line 3: City, Country\\
43 %% line 4: Email: name@xyz.com}
48 \title{Using FPGAs for high speed and real time cantilever deflection estimation}
49 \author{\IEEEauthorblockN{Raphaël Couturier\IEEEauthorrefmark{1}, Stéphane Domas\IEEEauthorrefmark{1}, Gwenhaël Goavec-Merou\IEEEauthorrefmark{2} and Michel Lenczner\IEEEauthorrefmark{2}}
50 \IEEEauthorblockA{\IEEEauthorrefmark{1}FEMTO-ST, DISC, University of Franche-Comte, Belfort, France\\
51 \{raphael.couturier,stephane.domas\}@univ-fcomte.fr}
52 \IEEEauthorblockA{\IEEEauthorrefmark{2}FEMTO-ST, Time-Frequency, University of Franche-Comte, Besançon, France\\
\{michel.lenczner@utbm.fr, gwenhael.goavec@trabucayre.com\}}
69 {\it keywords}: FPGA, cantilever, interferometry.
72 \section{Introduction}
Cantilevers are used inside atomic force microscopes (AFM), which provide high
resolution images of surfaces. Several techniques have been proposed in the
literature to accurately measure the displacement of cantilevers.
In~\cite{CantiPiezzo01}, the authors use a piezoresistor integrated into the
cantilever. Nevertheless, this approach suffers from the complexity of the
microfabrication process needed to embed the sensor in the cantilever.
In~\cite{CantiCapacitive03}, the authors present a cantilever mechanism
based on capacitive sensing. This kind of technique also requires instrumenting
the cantilever, which results in a complex fabrication process.
In this paper, our attention is focused on a method based on interferometry to
measure the displacement of cantilevers. In this method, cantilevers are
illuminated by an optical source. The interferometry produces fringes on each
cantilever, which makes it possible to compute the cantilever displacement. In
order to analyze the fringes, a high speed camera is used. Images need to be
processed quickly, and an estimation method is then required to determine the
displacement of each cantilever. In~\cite{AFMCSEM11}, the authors use an
algorithm based on splines to estimate the cantilevers' positions.
The overall process gives accurate results, but all the computations are
performed on a standard computer using LabVIEW. Consequently, the main drawback
of this implementation is that the computer is a bottleneck in the overall
process. In this paper, we propose to use a method based on least squares and
to implement all the computations on an FPGA.
The remainder of the paper is organized as follows. Section~\ref{sec:measure}
describes the measurement process in more detail. Our solution, based on the
least squares method, and its implementation on the FPGA are presented in
Section~\ref{sec:solus}. Experiments are described in
Section~\ref{sec:results}. Finally, a conclusion and some perspectives are
given.
%% a few commented references on interferometry-based computations
112 \section{Measurement principles}
122 \subsection{Architecture}
%% description of the general architecture of the image acquisition,
%% with, in the middle, a processing unit whose details are not specified
In order to develop simple, cost effective and user-friendly cantilever arrays,
the authors of~\cite{AFMCSEM11} have developed a system based on
interferometry. In contrast with other optical systems, which use a laser beam
deflection scheme and are sensitive to the angular displacement of the
cantilever, interferometry is sensitive to the optical path difference induced
by the vertical displacement of the cantilever.
The system built by the authors of~\cite{AFMCSEM11} is based on a
Linnik interferometer~\cite{Sinclair:05}. It is illustrated in
Figure~\ref{fig:AFM}. The beam of a laser diode is first split (by the
splitter) into a reference beam and a sample beam that reaches the cantilever
array. In order to be able to move the cantilever array, it is mounted on a
translation and rotational hexapod stage with five degrees of freedom. The
optical system is also fixed to the stage. Thus, the cantilever array is
centered in the optical system, which can be adjusted accurately. The beam
illuminates the array through a microscope objective and the light reflects on
the cantilevers. Likewise, the reference beam reflects on a movable mirror. A
CMOS camera chip records the reference and sample beams, which are recombined
in the beam splitter, and the resulting interferogram. At the beginning of each
experiment, the movable mirror is adjusted manually in order to align the
interferometric fringes approximately parallel to the cantilevers. When the
cantilevers are deflected by the surface, their bending produces movements of
the fringes that can be detected with the CMOS camera. Finally, the fringes
need to be analyzed. In~\cite{AFMCSEM11}, the authors use a LabVIEW program to
compute the cantilevers' movements from the fringes.
156 \includegraphics[width=\columnwidth]{AFM}
\caption{Schema of the AFM.}
%% image taken from the experiments.
165 \subsection{Cantilever deflection estimation}
As shown in Figure~\ref{img:img-xp}, each cantilever is covered by
interferometric fringes. The fringes distort when the cantilevers are
deflected. Estimating the deflection is done by computing this
distortion. For that, the authors of~\cite{AFMCSEM11} proposed a method
based on computing the phase of the fringes at the base of each
cantilever, near the tip, and on the base of the array. They assume
that a linear relation binds these phases, which can be used to
``unwrap'' the phase at the tip and to determine the deflection.\\
More precisely, segments of pixels are extracted from the images taken by a
high-speed camera. These segments are large enough to cover several
interferometric fringes and are placed at the base and near the tip of
the cantilevers. They are called base profile and tip profile in the
following. Furthermore, a reference profile is taken on the base of
the cantilever array.
The pixel intensity $I$ (in gray levels) of each profile is modeled by:
I(x) = ax + b + A\cos(2\pi f x + \theta)
191 where $x$ is the position of a pixel in its associated segment.
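For illustration, a synthetic profile following this model can be generated by
the following C helper (an illustrative sketch, not part of the original code;
the names mirror the symbols above):

\begin{verbatim}
#include <math.h>

/* Illustrative helper: sample the intensity model
   I(x) = a.x + b + A.cos(2.pi.f.x + theta)
   for the M pixels of a profile. */
void make_profile(double I[], int M, double a,
                  double b, double A, double f,
                  double theta)
{
  const double PI = 3.14159265358979;
  for (int x = 0; x < M; x++)
    I[x] = a * x + b
           + A * cos(2.0 * PI * f * x + theta);
}
\end{verbatim}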
The global method consists of two main steps. The first one aims
to determine the frequency $f$ of each profile with an algorithm based
on spline interpolation (see Section~\ref{sec:algo-spline}). It also
computes the coefficients used for unwrapping the phase. The second one
is the acquisition loop, during which images are taken at regular time
steps. For each image, the phase $\theta$ of all profiles is computed
to obtain, after unwrapping, the deflection of the
cantilevers. Originally, this computation was also done with an
algorithm based on splines. This article proposes a new version based
on a least squares method.
204 \subsection{Design goals}
The main goal is to implement a computing unit able to estimate the
deflection of about $10\times10$ cantilevers, faster than the stream of
images coming from the camera. The accuracy of the results must be close
to the best precision ever obtained experimentally on this
architecture, i.e. 0.3nm. Finally, the latency between an image
entering the unit and the output of the deflections must be as small as
possible (NB: future work plans to add some control of the cantilevers).\\
If we put aside some hardware issues like the speed of the link
between the camera and the computation unit, the time to deserialize
pixels and to store them in memory, etc., the phase computation is
obviously the bottleneck of the whole process. For example, if we
consider the camera currently in use, an exposure time of 2.5ms for
$1024\times 1204$ pixels seems to be the minimum that can be reached. For
100 cantilevers, if we neglect the time to extract pixels, it implies
that computing the deflection of a single
cantilever should take less than 25$\mu$s, hence 12.5$\mu$s per phase.\\
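Since the deflection of each cantilever is estimated from two profiles (one at
its base and one near its tip), this budget follows directly from the camera
figures:
\[
\frac{2.5\,\mathrm{ms}}{100} = 25\,\mu\mathrm{s} \mbox{ per cantilever},
\qquad
\frac{25\,\mu\mathrm{s}}{2} = 12.5\,\mu\mathrm{s} \mbox{ per phase}.
\]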
In fact, this timing is a very hard constraint. Let us consider a very
small program that initializes twenty million doubles in memory
and then computes 1,000,000 cumulated sums over 20 contiguous values
(experimental profiles have about this size). On an Intel Core 2 Duo
E6650 at 2.33GHz, this program reaches an average of 155Mflops. It
implies that the phase computation algorithm should not take more than
$155\times 12.5 = 1937$ floating point operations. For integers, it gives
$3000$ operations. Obviously, cache effects and optimizations on
huge amounts of computations can drastically increase these
performances: the peak efficiency is about 2.5Gflops for the considered
CPU. But this is not the case for the phase computation, which works on
a very small amount of data.
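A minimal sketch of such a micro-benchmark is given below (for illustration
only: the initialization values and offsets are arbitrary, and the exact
program we used is not reproduced here):

\begin{verbatim}
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define NB_DOUBLES 20000000 /* 20 million doubles */
#define NB_SUMS    1000000  /* cumulated sums     */
#define PROF_SIZE  20       /* profile size       */

int main(void)
{
  double *buf = malloc(NB_DOUBLES * sizeof(double));
  double sum = 0.0;
  for (long i = 0; i < NB_DOUBLES; i++)
    buf[i] = (double)i;        /* initialization */
  clock_t t0 = clock();
  for (long s = 0; s < NB_SUMS; s++) {
    /* sum 20 contiguous values, at scattered
       offsets to limit cache reuse */
    long off = (s * PROF_SIZE)
               % (NB_DOUBLES - PROF_SIZE);
    for (int i = 0; i < PROF_SIZE; i++)
      sum += buf[off + i];
  }
  double t = (double)(clock() - t0) / CLOCKS_PER_SEC;
  printf("%.1f Mflops (sum=%g)\n",
         NB_SUMS * PROF_SIZE / (t * 1e6), sum);
  free(buf);
  return 0;
}
\end{verbatim}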
In order to evaluate the original algorithm, we translated it into C
language. Profiles are read from a 1MB file, as if it was an image
stored in a device file representing the camera. The file contains 100
profiles of 21 pixels, equally scattered in the file. We obtained an
average of 10.5$\mu$s per profile (including I/O accesses). This meets
our requirements but is close to the limit: in case of an occasional load
on the system, this time could be largely exceeded. A solution would be to
use a real-time operating system; another one is to look for a more
suitable computing architecture.
But the main drawback is the latency of such a solution: since each
profile must be treated one after another, computing the deflection of 100
cantilevers takes about $200\times 10.5 = 2.1$ms, which is inadequate
for an efficient control. An obvious solution is to parallelize the
computations, for example on a GPU. Nevertheless, the cost of transferring
the profiles into GPU memory and of fetching back the results would be
prohibitive compared to the computation time. It is certainly more efficient
to pipeline the computation. For example, supposing that 200 profiles of
20 pixels can be pushed sequentially into a pipelined unit clocked at
100MHz (i.e. a pixel enters the unit every 10ns), all profiles
would be treated in $200\times 20\times 10\,\mathrm{ns} = 40~\mu$s plus
the latency of the pipeline. This is about 50 times faster than
the sequential execution on the CPU.
For these reasons, an FPGA is the best choice for the computation unit
in order to achieve the required performance. Nevertheless, passing from
C code to a pipelined version in VHDL is not obvious at all. As
explained in the next section, it can even be impossible because of
some hardware constraints specific to FPGAs.
269 \section{Proposed solution}
The Oscar project aims to provide a hardware and software architecture to
estimate and control the deflection of cantilevers. The hardware part
consists of a high-speed camera linked to an embedded board hosting
FPGAs. In this way, the camera output stream can be pushed directly
into the FPGA. The software part is mostly the VHDL code that
deserializes the camera stream, extracts the profiles and computes the
deflection. Before focusing on our work on the phase
computation, we give some general information about FPGAs and the
board we use.
A field-programmable gate array (FPGA) is an integrated circuit designed to be
configured by the customer. A hardware description language (HDL) is used to
configure an FPGA. FPGAs are composed of programmable logic components, called
logic blocks. These blocks can be configured to perform simple (AND, XOR, ...)
or complex combinational functions. Logic blocks are interconnected by
reconfigurable links. Modern FPGAs contain memory elements and multipliers,
which simplify the design and increase the speed. As the most complex
built-in operation on FPGAs is the multiplication, FPGA designs should avoid
more complex operations. For example, a divider is not an available operation
and it must be built from simple components.
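For instance, a divider can be described with only shifts, comparisons and
subtractions. The following C sketch of a classical restoring division
illustrates the principle (given for illustration only; an actual VHDL divider
would unroll this loop into pipeline stages or iterate over several clock
cycles):

\begin{verbatim}
#include <stdint.h>

/* Restoring division of unsigned 32-bit integers,
   built only from shifts, comparisons and
   subtractions (den must be non-zero).
   Illustrative sketch only. */
uint32_t div_u32(uint32_t num, uint32_t den,
                 uint32_t *rem)
{
  uint32_t q = 0, r = 0;
  for (int i = 31; i >= 0; i--) {
    r = (r << 1) | ((num >> i) & 1u);
    if (r >= den) {      /* trial subtraction */
      r -= den;
      q |= 1u << i;
    }
  }
  *rem = r;
  return q;
}
\end{verbatim}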
FPGA programming is very different from programming classic processors. When
logic blocks are programmed and linked to perform an operation, they cannot be
reused anymore. FPGAs are clocked more slowly than classic processors but they
can perform pipelined as well as parallel operations. A pipeline provides a way
to process data quickly, since it handles a new data item at each clock
tick. However, a pipeline consumes more logic blocks since they are not
reusable; nevertheless it is probably the most efficient technique on FPGAs.
Parallel operations can be used in order to manipulate several data items
simultaneously. When possible, a good design uses a pipeline to accept a new
data item at each clock tick and parallelism to handle
several data streams simultaneously.
307 \subsection{The board}
The board we use is designed by the Armadeus company, under the name
SP Vision. It consists of a development board hosting an i.MX27 ARM
processor (from Freescale). The board includes all the classical
connectors: USB, Ethernet, etc. A Flash memory contains a Linux kernel
that can be launched after booting the board via U-Boot.
315 The processor is directly connected to a Spartan3A FPGA (from Xilinx)
316 via its special interface called WEIM. The Spartan3A is itself
317 connected to a Spartan6 FPGA. Thus, it is possible to develop programs
318 that communicate between i.MX and Spartan6, using Spartan3 as a
319 tunnel. By default, the WEIM interface provides a clock signal at
320 100MHz that is connected to dedicated FPGA pins.
The Spartan6 is an LX100 version. It has 15822 slices, equivalent to
101261 logic cells. There are 268 internal 18Kbit block RAMs, and
180 dedicated multiply-adders (named DSP48), which is largely enough
for our application.
Some I/O pins of the Spartan6 are connected to two $2\times 17$ headers
that can be used as the user wants. For the project, they will be
connected to the interface card of the camera.
331 \subsection{Considered algorithms}
Two solutions have been studied to achieve the phase computation. The
original one, proposed by A. Meister and M. Favre, is based on
interpolation by splines. It allows computing both the frequency and the
phase. The second one, detailed in this article, is based on a
classical least squares method but supposes that the frequency is already
known.
340 \subsubsection{Spline algorithm}
341 \label{sec:algo-spline}
Let us consider a profile $P$, that is, a segment of $M$ pixels with
intensities in gray levels. Let $I(x)$ be the intensity of the profile at
position $x$.

At first, only $M$ values of $I$ are known, for $x = 0, 1,
\ldots, M-1$. A normalization scales the known intensities into
$[-1,1]$. We compute the splines that best fit these normalized
intensities. The splines are then used to interpolate $N = k\times M$ points
(typically $k=4$ is sufficient) within $[0,M[$. Let $x^s$ be the
coordinates of these $N$ points and $I^s$ their intensities.
In order to obtain the frequency, the mean line $ax+b$ (see
equation~\ref{equ:profile}) of $I^s$ is computed. Finding the intersections of
$I^s$ and this line gives the period and thus the frequency.
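The following C sketch illustrates the principle of this frequency extraction
(it is only an illustration of the idea: the original algorithm works on the
spline-interpolated points $x^s$, $I^s$, and the names below are hypothetical):

\begin{verbatim}
#include <math.h>

/* Estimate the frequency of a fringe profile from
   the crossings between the interpolated
   intensities Is[] (at positions xs[]) and their
   mean line a.x + b. Two crossings occur per
   period. Illustrative sketch only. */
double estimate_freq(const double xs[],
                     const double Is[], int N,
                     double a, double b)
{
  double first = 0.0, last = 0.0;
  int nb = 0;
  for (int i = 1; i < N; i++) {
    double d0 = Is[i-1] - (a * xs[i-1] + b);
    double d1 = Is[i]   - (a * xs[i]   + b);
    if (d0 * d1 < 0.0) {   /* sign change */
      /* linear interpolation of the crossing */
      double xc = xs[i-1]
        + (xs[i] - xs[i-1]) * d0 / (d0 - d1);
      if (nb == 0) first = xc;
      last = xc;
      nb++;
    }
  }
  if (nb < 2) return 0.0;  /* not enough crossings */
  double period = 2.0 * (last - first) / (nb - 1);
  return 1.0 / period;
}
\end{verbatim}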
The phase is computed via the equation:
\theta = \arctan \left[ \frac{\sum_{i=0}^{N-1} \sin(2\pi f x^s_i) \times I^s(x^s_i)}{\sum_{i=0}^{N-1} \cos(2\pi f x^s_i) \times I^s(x^s_i)} \right]
Two things can be noticed:
\item the frequency could also be obtained using the derivatives of the
spline equations, which only requires solving quadratic equations;
\item the frequency of each profile is computed a single time, before the
acquisition loop. Thus, $\sin(2\pi f x^s_i)$ and $\cos(2\pi f x^s_i)$
can also be computed before the loop, which leads to a much faster
computation of $\theta$.
372 \subsubsection{Least square algorithm}
Assuming that we compute the phase during the acquisition loop,
equation~\ref{equ:profile} has only four parameters: $a$, $b$, $A$ and
$\theta$, since $f$ and $x$ are already known. As $I$ is non-linear in
$\theta$, a least squares method based on a Gauss-Newton algorithm would
normally have to be used to determine these four parameters. Since it is an
iterative process ending with a convergence criterion, it is obviously not
particularly adapted to our design goals.
Fortunately, it is quite simple to reduce the number of parameters to
only $\theta$. Let $x^p$ be the coordinates of the pixels in a segment of
size $M$, that is, $x^p = 0, 1, \ldots, M-1$, and let $I(x^p)$ be their
intensities. Firstly, we ``remove'' the slope by computing:
387 \[I^{corr}(x^p) = I(x^p) - a.x^p - b\]
Since $a$ and $b$ are the coefficients of a linear equation, a classical least
squares fit can be used to determine them:
\[a = \frac{\mathrm{covar}(x^p,I(x^p))}{\mathrm{var}(x^p)} \]
Denoting averages with an overline, we have:
\[b = \overline{I(x^p)} - a\,\overline{x^p}\]
398 Let $A$ be the amplitude of $I^{corr}$, i.e.
\[A = \frac{\max(I^{corr}) - \min(I^{corr})}{2}\]
Then, the least squares method to find $\theta$ reduces to searching for the minimum of:
\[\sum_{i=0}^{M-1} \left[ \cos(2\pi f i + \theta) - \frac{I^{corr}(i)}{A} \right]^2\]
It is equivalent to differentiating this expression and solving the following equation:
2\left[ \cos\theta \sum_{i=0}^{M-1} I^{corr}(i)\sin(2\pi f i) + \sin\theta \sum_{i=0}^{M-1} I^{corr}(i)\cos(2\pi f i)\right] \\
- A\left[ \cos 2\theta \sum_{i=0}^{M-1} \sin(4\pi f i) + \sin 2\theta \sum_{i=0}^{M-1} \cos(4\pi f i)\right] = 0
Several points can be noticed:
\item As in the spline method, some parts of this equation can be
computed before the acquisition loop. This is the case for the sums that
do not depend on $\theta$:
\[ \sum_{i=0}^{M-1} \sin(4\pi f i), \quad \sum_{i=0}^{M-1} \cos(4\pi f i) \]
\item Lookup tables for $\sin(2\pi f i)$ and $\cos(2\pi f i)$ can also be
computed before the loop.
\item The simplest method to find the right $\theta$ is to discretize
$[-\pi,\pi]$ into $nb_s$ steps, and to search for the step that leads to the
result closest to zero. In this way, three other lookup tables can
also be computed before the loop:
\[ \sin \theta, \quad \cos \theta, \]
\[ \left[ \cos 2\theta \sum_{i=0}^{M-1} \sin(4\pi f i) + \sin 2\theta \sum_{i=0}^{M-1} \cos(4\pi f i)\right] \]
\item This search can be made very fast using a dichotomous process in
$\log_2(nb_s)$ steps.
Finally, the whole method can be summarized in an algorithm (called LSQ in the following) in two parts, one before and one during the acquisition loop:
439 \caption{LSQ algorithm - before acquisition loop.}
440 \label{alg:lsq-before}
442 $M \leftarrow $ number of pixels of the profile\\
443 I[] $\leftarrow $ intensities of pixels\\
444 $f \leftarrow $ frequency of the profile\\
$s4i \leftarrow \sum_{i=0}^{M-1} \sin(4\pi f i)$\\
$c4i \leftarrow \sum_{i=0}^{M-1} \cos(4\pi f i)$\\
447 $nb_s \leftarrow $ number of discretization steps of $[-\pi,\pi]$\\
449 \For{$i=0$ to $nb_s $}{
450 $\theta \leftarrow -\pi + 2\pi\times \frac{i}{nb_s}$\\
lut$_s$[$i$] $\leftarrow \sin \theta$\\
lut$_c$[$i$] $\leftarrow \cos \theta$\\
lut$_A$[$i$] $\leftarrow \cos 2 \theta \times s4i + \sin 2 \theta \times c4i$\\
lut$_{sfi}$[$i$] $\leftarrow \sin (2\pi f i)$\\
lut$_{cfi}$[$i$] $\leftarrow \cos (2\pi f i)$\\
459 \begin{algorithm}[ht]
460 \caption{LSQ algorithm - during acquisition loop.}
461 \label{alg:lsq-during}
463 $\bar{x} \leftarrow \frac{M-1}{2}$\\
464 $\bar{y} \leftarrow 0$, $x_{var} \leftarrow 0$, $xy_{covar} \leftarrow 0$\\
465 \For{$i=0$ to $M-1$}{
466 $\bar{y} \leftarrow \bar{y} + $ I[$i$]\\
467 $x_{var} \leftarrow x_{var} + (i-\bar{x})^2$\\
469 $\bar{y} \leftarrow \frac{\bar{y}}{M}$\\
470 \For{$i=0$ to $M-1$}{
471 $xy_{covar} \leftarrow xy_{covar} + (i-\bar{x}) \times (I[i]-\bar{y})$\\
473 $slope \leftarrow \frac{xy_{covar}}{x_{var}}$\\
$start \leftarrow \bar{y} - slope\times \bar{x}$\\
475 \For{$i=0$ to $M-1$}{
476 $I[i] \leftarrow I[i] - start - slope\times i$\\
479 $I_{max} \leftarrow max_i(I[i])$, $I_{min} \leftarrow min_i(I[i])$\\
480 $amp \leftarrow \frac{I_{max}-I_{min}}{2}$\\
482 $Is \leftarrow 0$, $Ic \leftarrow 0$\\
483 \For{$i=0$ to $M-1$}{
484 $Is \leftarrow Is + I[i]\times $ lut$_{sfi}$[$i$]\\
485 $Ic \leftarrow Ic + I[i]\times $ lut$_{cfi}$[$i$]\\
488 $\delta \leftarrow \frac{nb_s}{2}$, $b_l \leftarrow 0$, $b_r \leftarrow \delta$\\
$v_l \leftarrow -2.Is - amp.$lut$_A$[$b_l$]\\
491 \While{$\delta >= 1$}{
493 $v_r \leftarrow 2.[ Is.$lut$_c$[$b_r$]$ + Ic.$lut$_s$[$b_r$]$ ] - amp.$lut$_A$[$b_r$]\\
495 \If{$!(v_l < 0$ and $v_r >= 0)$}{
496 $v_l \leftarrow v_r$ \\
497 $b_l \leftarrow b_r$ \\
499 $\delta \leftarrow \frac{\delta}{2}$\\
500 $b_r \leftarrow b_l + \delta$\\
502 \uIf{$!(v_l < 0$ and $v_r >= 0)$}{
503 $v_l \leftarrow v_r$ \\
504 $b_l \leftarrow b_r$ \\
505 $b_r \leftarrow b_l + 1$\\
506 $v_r \leftarrow 2.[ Is.$lut$_c$[$b_r$]$ + Ic.$lut$_s$[$b_r$]$ ] - amp.$lut$_A$[$b_r$]\\
509 $b_r \leftarrow b_l + 1$\\
512 \uIf{$ abs(v_l) < v_r$}{
513 $b_{\theta} \leftarrow b_l$ \\
516 $b_{\theta} \leftarrow b_r$ \\
$\theta \leftarrow \pi\times \left[\frac{2.b_{\theta}}{nb_s}-1\right]$\\
522 \subsubsection{Comparison}
We compared the two algorithms on the basis of three criteria:
\item precision of the results on a cosine profile distorted with noise,
\item number of operations,
\item complexity of implementing an FPGA version.
For the first item, we produced a MATLAB version of each algorithm,
running with double precision values. The profile was generated for
about 34000 different combinations of period ($\in [3.1, 6.1]$, step = 0.1),
phase ($\in [-3.1 , 3.1]$, step = 0.062) and slope ($\in [-2 , 2]$,
step = 0.4). For LSQ, $nb_s = 1024$, which leads to a maximal error of
$\frac{\pi}{1024}$ on the phase computation. Current experiments by A. Meister
and M. Favre show a ratio of 50 between the variation of the phase and
the deflection of a lever. Thus, the maximal error due to
discretization corresponds to an error of 0.15nm on the lever
deflection, which is smaller than the best precision they achieved,
i.e. 0.3nm.
For each test, we add some noise to the profile: each group of two
pixels has a random value picked in $[-N,N]$ added to its intensity
(NB: picking a new value for each pixel does not distort the profile
enough). The absolute error on the result is evaluated as the difference
between the reference and the computed phase, relative to $2\pi$ and
expressed in percent, that is: $err =
100\times \frac{|\theta_{ref} - \theta_{comp}|}{2\pi}$.
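A minimal C sketch of this noise model and of the error metric, assuming the
standard \texttt{rand()} generator (names are illustrative), could be:

\begin{verbatim}
#include <stdlib.h>
#include <math.h>

/* Noise model of the tests: each group of two
   consecutive pixels receives the same random
   offset picked uniformly in [-N, N].
   Illustrative sketch. */
void add_noise(double I[], int M, double N)
{
  for (int i = 0; i < M; i += 2) {
    double r = ((double)rand() / RAND_MAX)
               * 2.0 * N - N;
    I[i] += r;
    if (i + 1 < M) I[i + 1] += r;
  }
}

/* Error metric: difference between reference and
   computed phases, relative to 2*pi, in percent. */
double phase_error(double t_ref, double t_comp)
{
  const double PI = 3.14159265358979;
  return 100.0 * fabs(t_ref - t_comp) / (2.0 * PI);
}
\end{verbatim}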
Table~\ref{tab:algo_prec} gives the maximum and average errors for the two algorithms and for increasing values of $N$.
555 \begin{tabular}{|c|c|c|c|c|}
557 & \multicolumn{2}{c|}{SPL} & \multicolumn{2}{c|}{LSQ} \\ \cline{2-5}
558 noise & max. err. & aver. err. & max. err. & aver. err. \\ \hline
559 0 & 2.46 & 0.58 & 0.49 & 0.1 \\ \hline
560 2.5 & 2.75 & 0.62 & 1.16 & 0.22 \\ \hline
561 5 & 3.77 & 0.72 & 2.47 & 0.41 \\ \hline
562 7.5 & 4.72 & 0.86 & 3.33 & 0.62 \\ \hline
563 10 & 5.62 & 1.03 & 4.29 & 0.81 \\ \hline
564 15 & 7.96 & 1.38 & 6.35 & 1.21 \\ \hline
565 30 & 17.06 & 2.6 & 13.94 & 2.45 \\ \hline
\caption{Error (in \%) for cosine profiles with noise.}
569 \label{tab:algo_prec}
These results show that the two algorithms are very close, with a
slight advantage for LSQ. Furthermore, both behave very well against
noise. Assuming the experimental ratio of 50 (see above), an error of
one percent on the phase corresponds to an error of 0.5nm on the lever
deflection, which is very close to the best precision.
Obviously, it is very hard to predict which level of noise will be
present in real experiments and how it will distort the
profiles. Nevertheless, Figure~\ref{fig:noise20} shows the
profile with $N=10$ that leads to the biggest error. It is a bit
distorted, with peaks and straight/rounded portions, and relatively
close to most of the profiles that come from experiments. Figure~\ref{fig:noise60}
shows a sample of the worst profile for $N=30$. It is completely distorted,
largely beyond the worst experimental ones.
590 \includegraphics[width=9cm]{intens-noise20-spl}
\caption{Sample of the worst profile for $N=10$.}
598 \includegraphics[width=9cm]{intens-noise60-lsq}
\caption{Sample of the worst profile for $N=30$.}
The second criterion is relatively easy to estimate for LSQ and harder
for SPL because of the $\arctan$ operation. In both cases, it is proportional
to the number of pixels $M$. For LSQ, it also depends on $nb_s$ and for
SPL on $N = k\times M$, i.e. the number of interpolated points.

We assume that $M=20$, $nb_s=1024$, $k=4$, that all possible parts are
already in lookup tables and that a limited set of operations (+, -, *, /,
<, >) is taken into account. Translating the two algorithms into C code, we
obtain about 430 operations for LSQ and 1550 (plus a few tens for
$\arctan$) for SPL. This result is largely in favor of LSQ. Nevertheless,
considering the total number of operations is not really relevant for
an FPGA implementation: it mainly depends on the type of operations and their
ordering. The final decision is thus driven by the third criterion.\\
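To fix ideas, the core of a C version of LSQ could look like the following
sketch. The lookup tables are assumed to be precomputed as in
Algorithm~\ref{alg:lsq-before}; for simplicity, the zero of the derivative is
found by a linear scan instead of the dichotomous search of
Algorithm~\ref{alg:lsq-during}. This sketch is given for illustration only and
is not the exact code used for the operation count above.

\begin{verbatim}
#include <math.h>

#define M    20    /* pixels per profile        */
#define NB_S 1024  /* discretization of theta   */

/* Lookup tables precomputed before the
   acquisition loop, as in Algorithm 1. */
extern double lut_s[NB_S], lut_c[NB_S], lut_A[NB_S];
extern double lut_sfi[M], lut_cfi[M];

/* Simplified sketch of the LSQ phase computation
   for one profile. Illustration only. */
double lsq_phase(const double Iin[M])
{
  const double PI = 3.14159265358979;
  double I[M];
  /* 1. remove the mean line (least squares fit) */
  double xb = (M - 1) / 2.0, yb = 0.0;
  double xvar = 0.0, xycovar = 0.0;
  for (int i = 0; i < M; i++) yb += Iin[i];
  yb /= M;
  for (int i = 0; i < M; i++) {
    xvar    += (i - xb) * (i - xb);
    xycovar += (i - xb) * (Iin[i] - yb);
  }
  double slope = xycovar / xvar;
  double start = yb - slope * xb;
  for (int i = 0; i < M; i++)
    I[i] = Iin[i] - start - slope * i;
  /* 2. amplitude of the corrected profile */
  double Imax = I[0], Imin = I[0];
  for (int i = 1; i < M; i++) {
    if (I[i] > Imax) Imax = I[i];
    if (I[i] < Imin) Imin = I[i];
  }
  double amp = (Imax - Imin) / 2.0;
  /* 3. projections on sin/cos of the frequency */
  double Is = 0.0, Ic = 0.0;
  for (int i = 0; i < M; i++) {
    Is += I[i] * lut_sfi[i];
    Ic += I[i] * lut_cfi[i];
  }
  /* 4. find where the derivative of the cost
        crosses zero from below (a minimum) */
  int best = 0;
  double vl = 2.0 * (Is * lut_c[0] + Ic * lut_s[0])
              - amp * lut_A[0];
  for (int k = 1; k < NB_S; k++) {
    double vr = 2.0 * (Is * lut_c[k] + Ic * lut_s[k])
                - amp * lut_A[k];
    if (vl < 0.0 && vr >= 0.0) {
      best = (fabs(vl) < vr) ? k - 1 : k;
      break;
    }
    vl = vr;
  }
  return -PI + 2.0 * PI * best / NB_S;
}
\end{verbatim}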
The Spartan6 used in our architecture has a hard constraint: it has no
built-in floating point unit. Obviously, it is possible to use some
existing ``black-boxes'' for double precision operations, but they have
a quite long latency. It is much simpler to exclusively use integers,
with a quantization of all double precision values. Obviously, this
quantization should not decrease the precision of the
results too much. Furthermore, it should not lead to a design with a huge
latency because of operations that cannot complete within a single
clock cycle or a few of them. Divisions fall into this category and,
moreover, they need a varying number of clock cycles to complete. Even
multiplications can be a problem: a DSP48 takes inputs of 18 bits
at most. For larger multiplications, several DSPs must be combined,
increasing the latency.
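As an illustration, the following C sketch shows how a 32-bit multiplication
can be decomposed into four 16-bit partial products, each of which fits into
one 18-bit DSP48 multiplier; the shifts and additions that recombine them
represent the extra logic and latency (this is an illustrative model, not our
VHDL code):

\begin{verbatim}
#include <stdint.h>

/* Multiply two unsigned 32-bit values using only
   16x16-bit partial products, each small enough for
   one 18-bit DSP48 multiplier. The shifts and adds
   model the extra logic (and latency) needed to
   recombine the partial products. Illustration only. */
uint64_t mul32_dsp48(uint32_t a, uint32_t b)
{
  uint32_t ah = a >> 16, al = a & 0xFFFFu;
  uint32_t bh = b >> 16, bl = b & 0xFFFFu;
  uint64_t pll = (uint64_t)al * bl;  /* 4 partial  */
  uint64_t plh = (uint64_t)al * bh;  /* products,  */
  uint64_t phl = (uint64_t)ah * bl;  /* one DSP48  */
  uint64_t phh = (uint64_t)ah * bh;  /* each       */
  return pll + ((plh + phl) << 16) + (phh << 32);
}
\end{verbatim}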
Nevertheless, the hardest constraint does not come from the FPGA
characteristics but from the algorithms. Their VHDL implementation will
be efficient only if they can be fully (or nearly fully) pipelined. In this
respect, the choice is quickly made: only a small part of SPL can be.
Indeed, the computation of the spline coefficients requires solving a
tridiagonal system $A.m = b$. The values in $A$ and $b$ can be computed
from the incoming pixel intensities, but afterwards the back-substitution
starts with the latest values, which breaks the pipeline. Moreover, SPL relies
on interpolating far more points than the profile size. Thus, the end
of SPL works on a larger amount of data than the beginning, which
also breaks the pipeline.
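The dependency that breaks the pipeline appears clearly in the
back-substitution of a tridiagonal solver, sketched below in C for a generic
Thomas algorithm (given for illustration; it is not the exact SPL code): each
value depends on the one computed just after it, so the results must be
produced in reverse order with respect to the incoming pixel stream.

\begin{verbatim}
/* Back-substitution of the Thomas algorithm for a
   tridiagonal system, after the forward sweep has
   produced the modified coefficients cp[] and dp[].
   Each m[i] depends on m[i+1]: the values must be
   produced in reverse order with respect to the
   incoming stream, which prevents pipelining.
   Generic sketch, not the exact SPL code. */
void back_substitute(const double cp[],
                     const double dp[],
                     double m[], int n)
{
  m[n - 1] = dp[n - 1];
  for (int i = n - 2; i >= 0; i--)
    m[i] = dp[i] - cp[i] * m[i + 1];
}
\end{verbatim}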
LSQ does not have this problem: all its parts except the dichotomous search
work on the same amount of data, i.e. the profile size. Furthermore,
LSQ needs fewer operations than SPL, implying a smaller output
latency. Consequently, it is the best candidate for the phase
computation. Nevertheless, obtaining a fully pipelined version
assumes that the operations of the different parts complete in a single clock
cycle. This is the case in simulations but it completely fails when
mapping and routing the design on the Spartan6. Consequently,
extra latency is generated and there must be idle times between two
profiles entering the pipeline.
Before obtaining the final bitstream, the crucial question is: how to
translate the C code of LSQ into VHDL?
660 \subsection{VHDL design paradigms}
662 \subsection{VHDL implementation}
664 \section{Experimental results}
670 \section{Conclusion and perspectives}
673 \bibliographystyle{plain}
674 \bibliography{biblio}