2 \documentclass[10pt, peerreview, compsocconf]{IEEEtran}
5 \usepackage[utf8]{inputenc}
6 %\usepackage[cyr]{aeguill}
7 %\usepackage{pstricks,pst-node,pst-text,pst-3d}
16 \usepackage{subfigure}
21 \usepackage[ruled,lined,linesnumbered]{algorithm2e}
23 %%%%%%%%%%%%%%%%%%%%%%%%%%%% LyX specific LaTeX commands.
24 \newcommand{\noun}[1]{\textsc{#1}}
26 \newcommand{\tab}{\ \ \ }
32 %% \author{\IEEEauthorblockN{Authors Name/s per 1st Affiliation (Author)}
33 %% \IEEEauthorblockA{line 1 (of Affiliation): dept. name of organization\\
34 %% line 2: name of organization, acronyms acceptable\\
35 %% line 3: City, Country\\
36 %% line 4: Email: name@xyz.com}
38 %% \IEEEauthorblockN{Authors Name/s per 2nd Affiliation (Author)}
39 %% \IEEEauthorblockA{line 1 (of Affiliation): dept. name of organization\\
40 %% line 2: name of organization, acronyms acceptable\\
41 %% line 3: City, Country\\
42 %% line 4: Email: name@xyz.com}
\title{Using FPGAs for high-speed and real-time cantilever deflection estimation}
48 \author{\IEEEauthorblockN{Raphaël Couturier\IEEEauthorrefmark{1}, Stéphane Domas\IEEEauthorrefmark{1}, Gwenhaël Goavec-Merou\IEEEauthorrefmark{2} and Michel Lenczner\IEEEauthorrefmark{2}}
49 \IEEEauthorblockA{\IEEEauthorrefmark{1}FEMTO-ST, DISC, University of Franche-Comte, Belfort, France\\
50 \{raphael.couturier,stephane.domas\}@univ-fcomte.fr}
51 \IEEEauthorblockA{\IEEEauthorrefmark{2}FEMTO-ST, Time-Frequency, University of Franche-Comte, Besançon, France\\
\{michel.lenczner@utbm.fr,gwenhael.goavec@trabucayre.com\}}
72 FPGA, cantilever, interferometry.
76 \IEEEpeerreviewmaketitle
78 \section{Introduction}
Cantilevers are used inside atomic force microscopes (AFM), which provide high
resolution images of surfaces. Several techniques have been used in the
literature to measure the displacement of cantilevers. For example, it is
possible to determine the deflection accurately with different mechanisms.
In~\cite{CantiPiezzo01}, the authors used a piezoresistor integrated into the
cantilever. Nevertheless this approach suffers from the complexity of the
microfabrication process needed to implement the sensor in the cantilever.
In~\cite{CantiCapacitive03}, the authors have presented a cantilever mechanism
based on capacitive sensing. This kind of technique also requires instrumenting
the cantilever, which results in a complex fabrication process.
In this paper our attention is focused on a method based on interferometry to
measure the displacement of cantilevers. In this method cantilevers are
illuminated by an optical source. The interferometry produces fringes on each
cantilever, which enables the computation of the cantilever displacement. In
order to analyze the fringes a high-speed camera is used. Images need to be
processed quickly and then an estimation method is required to determine the
displacement of each cantilever. In~\cite{AFMCSEM11}, the authors have used an
algorithm based on splines to estimate the cantilevers' positions.
The overall process gives
accurate results but all the computations are performed on a standard computer
using LabVIEW. Consequently, the main drawback of this implementation is that
the computer is a bottleneck in the overall process. In this paper we propose to
use a method based on least squares and to implement all the computations on a
The remainder of the paper is organized as follows. Section~\ref{sec:measure}
describes the measurement process more precisely. Our solution based on the
least squares method and its implementation on an FPGA is presented in
Section~\ref{sec:solus}. Experiments are described in
Section~\ref{sec:results}. Finally a conclusion and some perspectives are
%% a few commented references on computations based on interferometry
118 \section{Measurement principles}
128 \subsection{Architecture}
130 %% description de l'architecture générale de l'acquisition d'images
131 %% avec au milieu une unité de traitement dont on ne précise pas ce
In order to develop simple, cost-effective and user-friendly cantilever arrays,
the authors of~\cite{AFMCSEM11} have developed a system based on
interferometry. In contrast to other optical systems based on a laser beam
deflection scheme, which are sensitive to the angular displacement of the
cantilever, interferometry is sensitive to the optical path difference induced
by the vertical displacement of the cantilever.
The system built by the authors of~\cite{AFMCSEM11} is based on a Linnik
interferometer~\cite{Sinclair:05}. It is illustrated in
Figure~\ref{fig:AFM}. The beam of a laser diode is first split (by the splitter)
into a reference beam and a sample beam that reaches the cantilever array. In
order to be able to move the cantilever array, it is mounted on a translation
and rotational hexapod stage with five degrees of freedom. The optical system is
also fixed to the stage. Thus, the cantilever array is centered in the optical
system, which can be adjusted accurately. The beam illuminates the array through
a microscope objective and the light reflects on the cantilevers. Likewise the
reference beam reflects on a movable mirror. A CMOS camera chip records the
reference and sample beams, which are recombined in the beam splitter, as an
interferogram. At the beginning of each experiment, the movable mirror is
adjusted manually in order to align the interferometric fringes approximately
parallel to the cantilevers. When cantilevers move due to the surface, the
bending of the cantilevers produces movements of the fringes that can be
detected with the CMOS camera. Finally the fringes need to be
analyzed. In~\cite{AFMCSEM11}, the authors used a LabVIEW program to compute the
cantilevers' movements from the fringes.
162 \includegraphics[width=\columnwidth]{AFM}
\caption{Schematic of the AFM.}
%% image taken from the experiments.
171 \subsection{Cantilever deflection estimation}
As shown in Figure~\ref{img:img-xp}, each cantilever is covered by
interferometric fringes. The fringes distort when the cantilevers are
deflected. Estimating the deflection is done by computing this
distortion. For that purpose, A. Meister and M. Favre~\cite{AFMCSEM11} proposed
a method based on computing the phase of the fringes at the base of each
cantilever, near the tip, and at the base of the array. They assume
that a linear relation binds these phases, which can be used to
``unwrap'' the phase at the tip and to determine the deflection.\\
More precisely, segments of pixels are extracted from images taken by a
high-speed camera. These segments are large enough to cover several
interferometric fringes and are placed at the base and near the tip of
the cantilevers. They are called base profile and tip profile in the
following. Furthermore, a reference profile is taken at the base of
the cantilever array.
The pixel intensity $I$ (in gray levels) of each profile is modeled by:
I(x) = ax + b + A\cos(2\pi f x + \theta)
197 where $x$ is the position of a pixel in its associated segment.
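As a point of reference, a direct C transcription of this model (function and
variable names are ours, for illustration only) could be:
\begin{verbatim}
#include <math.h>

/* Fill a profile of M pixels following the model
   above: I(x) = a.x + b + A.cos(2.pi.f.x + theta). */
void make_profile(double *I, int M, double a, double b,
                  double A, double f, double theta)
{
    for (int x = 0; x < M; x++)
        I[x] = a * x + b
             + A * cos(2.0 * M_PI * f * x + theta);
}
\end{verbatim}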
The global method consists of two main steps. The first one aims
to determine the frequency $f$ of each profile with an algorithm based
on spline interpolation (see Section~\ref{sec:algo-spline}). It also
computes the coefficients used for unwrapping the phase. The second one
is the acquisition loop, during which images are taken at regular time
steps. For each image, the phase $\theta$ of all profiles is computed
to obtain, after unwrapping, the deflection of the
cantilevers. Originally, this computation was also done with an
algorithm based on splines. This article proposes a new version based
on a least squares method.
210 \subsection{Design goals}
The main goal is to implement a computing unit able to estimate the
deflection of about $10\times10$ cantilevers, faster than the stream of
images coming from the camera. The accuracy of the results must be close
to the best precision ever obtained experimentally on this
architecture, i.e. 0.3nm. Finally, the latency between an image
entering the unit and the output of the deflections must be as small as possible
(NB: future works plan to add some control on the cantilevers).\\
If we put aside some hardware issues like the speed of the link
between the camera and the computation unit, the time to deserialize
pixels and to store them in memory, and so on, the phase computation is
obviously the bottleneck of the whole process. For example, if we
consider the camera currently in use, an exposure time of 2.5ms for
$1024\times 1204$ pixels seems to be the minimum that can be reached. For
100 cantilevers, if we neglect the time to extract pixels, it implies
that computing the deflection of a single
cantilever should take less than 25$\mu$s, thus 12.5$\mu$s per phase.\\
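This budget can be written out explicitly (counting, as in the rest of the
paper, one base profile and one tip profile per cantilever):
\[
\frac{2.5\,\mbox{ms}}{100\ \mbox{cantilevers}} = 25\,\mu\mbox{s per cantilever},
\qquad
\frac{25\,\mu\mbox{s}}{2\ \mbox{profiles}} = 12.5\,\mu\mbox{s per phase}.
\]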
In fact, this timing is a very hard constraint. Let us consider a very
small program that initializes twenty million doubles in memory
and then performs 1,000,000 cumulated sums over 20 contiguous values
(experimental profiles have about this size). On an Intel Core 2 Duo
E6650 at 2.33GHz, this program reaches an average of 155Mflops.
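As an illustration of how such a figure can be obtained, a minimal C sketch of
this kind of micro-benchmark could be the following (array sizes and the timing
code are our own choices, not the exact program used for the measurement):
\begin{verbatim}
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define NB_DOUBLES 20000000L /* twenty million doubles   */
#define NB_SUMS    1000000L  /* number of cumulated sums */
#define PROF_SIZE  20        /* size of a profile        */

int main(void)
{
    double *data = malloc(NB_DOUBLES * sizeof(double));
    double acc = 0.0;
    long i, j, start;

    for (i = 0; i < NB_DOUBLES; i++)   /* initialization */
        data[i] = (double)i;

    clock_t t0 = clock();
    for (j = 0; j < NB_SUMS; j++) {
        start = (j * PROF_SIZE) % (NB_DOUBLES - PROF_SIZE);
        for (i = 0; i < PROF_SIZE; i++) /* 20 additions  */
            acc += data[start + i];
    }
    double secs = (double)(clock() - t0) / CLOCKS_PER_SEC;

    /* NB_SUMS * PROF_SIZE additions performed in secs   */
    printf("%.1f Mflops (checksum %g)\n",
           NB_SUMS * PROF_SIZE / secs / 1e6, acc);
    free(data);
    return 0;
}
\end{verbatim}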
237 %%Itimplies that the phase computation algorithm should not take more than
238 %%$155\times 12.5 = 1937$ floating operations. For integers, it gives $3000$ operations.
Obviously, cache effects and optimizations on
huge amounts of computation can drastically increase these
performances: the peak efficiency is about 2.5Gflops for the considered
CPU. But this is not the case for phase computation, which uses only few
In order to evaluate the original algorithm, we translated it into C.
As stated below, for 20 pixels, it performs about 1550
operations, thus an estimated execution time of $1550/155
=$ 10$\mu$s. For a more realistic evaluation, we constructed a file of
1MB containing 200 profiles of 20 pixels, evenly scattered. This file
is equivalent to an image stored in a device file representing the
camera. We obtained an average of 10.5$\mu$s per profile (including I/O
accesses). It is within our requirements but close to the limit. In
case of an occasional load on the system, this limit could be largely
exceeded. A solution would be to use a real-time operating system;
another would be to look for a more efficient algorithm.
But the main drawback of such a solution is its latency: since each
profile must be processed one after the other, the deflection of 100
cantilevers takes about $200\times 10.5 = 2.1$ms, which is inadequate
for an efficient control. An obvious solution is to parallelize the
computations, for example on a GPU. Nevertheless, the cost of transferring
the profiles into GPU memory and of retrieving the results would be prohibitive
compared to the computation time. It is certainly more efficient to
pipeline the computation. For example, supposing that 200 profiles of
20 pixels can be pushed sequentially into a pipelined unit clocked at
100MHz (i.e. a pixel enters the unit every 10ns), all profiles
would be processed in $200\times 20\times 10\times10^{-9}$s $=$ 40$\mu$s plus
the latency of the pipeline. This is about 500 times faster than
For these reasons, using an FPGA as the computation unit is the best choice
to achieve the required performance. Nevertheless, passing from
C code to a pipelined version in VHDL is not obvious at all. As
explained in the next section, it can even be impossible because of
some hardware constraints specific to FPGAs.
279 \section{Proposed solution}
Project Oscar aims to provide a hardware and software architecture to estimate
and control the deflection of cantilevers. The hardware part consists of a
high-speed camera connected to an embedded board hosting FPGAs. In this way, the
camera output stream can be pushed directly into the FPGA. The software part is
mostly the VHDL code that deserializes the camera stream, extracts the profiles
and computes the deflection. Before focusing on our work to implement the phase
computation, we give some general information about FPGAs and the board we use.
A field-programmable gate array (FPGA) is an integrated circuit
designed to be configured by the customer. FPGAs are composed of
programmable logic components, called configurable logic blocks
(CLB). These blocks mainly contain look-up tables (LUT), flip-flops
(F/F) and latches, organized in one or more slices connected
together. Each CLB can be configured to perform simple (AND, XOR, ...)
or complex combinational functions. They are interconnected by
reconfigurable links. Modern FPGAs contain memory elements and
multipliers which simplify the design and increase the
performance. Nevertheless, all other complex operations, like
division, trigonometric functions, $\ldots$ are not available and must
be done by configuring a set of CLBs. Since this configuration is not
obvious at all, it is usually done via a framework, like ISE. Such
software can synthesize a design written in a hardware description
language (HDL), map it onto CLBs, place and route them for a specific
FPGA, and finally produce a bitstream that is used to configure the
FPGA. Thus, from the developer's point of view, the main difficulty is
to translate an algorithm into HDL code, taking into account FPGA resources
and constraints like clock signals and I/O values that drive the FPGA.
Indeed, HDL programming is very different from classic languages like
C. A program can be seen as a state machine, manipulating signals that
evolve from state to state. Moreover, HDL instructions can execute
concurrently. Basic logic operations are used to aggregate signals to
produce new states and assign them to other signals. States are mainly
expressed as arrays of bits. Fortunately, libraries propose some
higher-level representations like signed integers, and arithmetic
Furthermore, even if FPGAs are clocked more slowly than classic
processors, they can perform pipelined as well as parallel
operations. A pipeline consists of cutting a process into a sequence of
small tasks, each taking the same execution time. It accepts new data at
each clock tick and thus, after a known latency, it also provides a result
at each clock tick. However, using a pipeline consumes more logic
since the components of a task are not reusable by another
one. Nevertheless it is probably the most efficient technique on an
FPGA. Because of its architecture, it is also very easy to process
several data concurrently. When it is possible, the best performance
is reached by running several pipelines in parallel in order to handle
multiple data streams.
334 \subsection{The board}
The board we use is designed by the Armadeus company, under the name
SP Vision. It consists of a development board hosting an i.MX27 ARM
processor (from Freescale). The board includes all the classical
connectors: USB, Ethernet, ... A Flash memory contains a Linux kernel
that can be launched after booting the board via U-Boot.
The processor is directly connected to a Spartan3A FPGA (from Xilinx)
via its special interface called WEIM. The Spartan3A is itself
connected to a Spartan6 FPGA. Thus, it is possible to develop programs
that communicate between the i.MX and the Spartan6, using the Spartan3A as a
tunnel. By default, the WEIM interface provides a clock signal at
100MHz that is connected to dedicated FPGA pins.
The Spartan6 is an LX100 version. It has 15822 slices, equivalent to
101261 logic cells. There are 268 internal block RAMs of 18Kbits, and
180 dedicated multiply-adders (named DSP48), which is largely enough
Some I/O pins of the Spartan6 are connected to two $2\times 17$ headers
that can be used as the user wants. For the project, they will be
connected to the interface card of the camera.
358 \subsection{Considered algorithms}
Two solutions have been studied to achieve the phase computation. The
original one, proposed by A. Meister and M. Favre, is based on
interpolation by splines. It allows the computation of both the frequency and
the phase. The second one, detailed in this article, is based on a
classical least squares method but supposes that the frequency is already
367 \subsubsection{Spline algorithm}
368 \label{sec:algo-spline}
Let us consider a profile $P$, that is a segment of $M$ pixels with an
intensity in gray levels. Let us call $I(x)$ the intensity of the profile at $x
At first, only $M$ values of $I$ are known, for $x = 0, 1, \ldots,M-1$. A
normalization allows us to scale the known intensities into $[-1,1]$. We compute
splines that best fit these normalized intensities. Splines (SPL in the
following) are used to interpolate $N = k\times M$ points (typically $k=4$ is
sufficient) within $[0,M[$. Let us call $x^s$ the coordinates of these $N$ points
and $I^s$ their intensities.
In order to obtain the frequency, the mean line $ax+b$ (see
equation~\ref{equ:profile}) of $I^s$ is computed. Finding the intersections of
$I^s$ and this line allows us to obtain the period and thus the frequency.
384 The phase is computed via the equation:
\theta = \arctan \left[ \frac{\sum_{i=0}^{N-1} \sin(2\pi f x^s_i) \times I^s(x^s_i)}{\sum_{i=0}^{N-1} \cos(2\pi f x^s_i) \times I^s(x^s_i)} \right]
389 Two things can be noticed:
\item the frequency could also be obtained using the derivatives of the
spline equations, which only implies solving quadratic equations.
\item the frequency of each profile is computed a single time, before the
acquisition loop. Thus, $\sin(2\pi f x^s_i)$ and $\cos(2\pi f x^s_i)$
could also be computed before the loop, which leads to a much faster
computation of $\theta$.
399 \subsubsection{Least square algorithm}
Assuming that we compute the phase during the acquisition loop,
equation~\ref{equ:profile} has only 4 parameters: $a, b, A$, and
$\theta$, $f$ and $x$ being already known. Since $I$ is non-linear, a
least squares method based on a Gauss-Newton algorithm can be used to
determine these four parameters. Since it is an iterative process
ending with a convergence criterion, it is obviously not
particularly adapted to our design goals.
Fortunately, it is quite simple to reduce the number of parameters to
only $\theta$. Let $x^p$ be the coordinates of the pixels in a segment of
size $M$. Thus, $x^p = 0, 1, \ldots, M-1$. Let $I(x^p)$ be their
intensity. Firstly, we ``remove'' the slope by computing:
\[I^{corr}(x^p) = I(x^p) - a x^p - b\]
Since the coefficients of a linear equation are searched, a classical least
squares method can be used to determine $a$ and $b$:
\[a = \frac{\mathrm{covar}(x^p,I(x^p))}{\mathrm{var}(x^p)} \]
421 Assuming an overlined symbol means an average, then:
\[b = \overline{I(x^p)} - a\,\overline{x^p}\]
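For illustration, this fit can be written in C as follows (a minimal sketch
with our own naming, mirroring the first loops of Algorithm~\ref{alg:lsq-during}
below):
\begin{verbatim}
/* Least squares fit of the mean line a.x + b of a
   profile of M pixels stored in I[0..M-1], with
   x = 0, 1, ..., M-1.                              */
void fit_mean_line(const double *I, int M,
                   double *a, double *b)
{
    double x_mean = (M - 1) / 2.0; /* mean of 0..M-1 */
    double y_mean = 0.0, x_var = 0.0, xy_covar = 0.0;
    int i;

    for (i = 0; i < M; i++)
        y_mean += I[i];
    y_mean /= M;

    for (i = 0; i < M; i++) {
        x_var    += (i - x_mean) * (i - x_mean);
        xy_covar += (i - x_mean) * (I[i] - y_mean);
    }

    *a = xy_covar / x_var;       /* slope  */
    *b = y_mean - (*a) * x_mean; /* offset */
}
\end{verbatim}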
425 Let $A$ be the amplitude of $I^{corr}$, i.e.
\[A = \frac{\max(I^{corr}) - \min(I^{corr})}{2}\]
Then, the least squares method to find $\theta$ reduces to searching for the minimum of:
\[\sum_{i=0}^{M-1} \left[ \cos(2\pi f i + \theta) - \frac{I^{corr}(i)}{A} \right]^2\]
It is equivalent to differentiating this expression and solving the following equation:
2\left[ \cos\theta \sum_{i=0}^{M-1} I^{corr}(i)\sin(2\pi f i) + \sin\theta \sum_{i=0}^{M-1} I^{corr}(i)\cos(2\pi f i)\right] \\
- A\left[ \cos 2\theta \sum_{i=0}^{M-1} \sin(4\pi f i) + \sin 2\theta \sum_{i=0}^{M-1} \cos(4\pi f i)\right] = 0
440 Several points can be noticed:
\item As in the spline method, some parts of this equation can be
computed before the acquisition loop. This is the case for the sums that do
not depend on $\theta$:
\[ \sum_{i=0}^{M-1} \sin(4\pi f i), \quad \sum_{i=0}^{M-1} \cos(4\pi f i) \]
\item Lookup tables for $\sin(2\pi f i)$ and $\cos(2\pi f i)$ can also be
\item The simplest method to find the best $\theta$ is to discretize
$[-\pi,\pi]$ in $nb_s$ steps, and to search which step leads to the
result closest to zero. In this way, three other lookup tables can
also be computed before the loop:
\[ \sin \theta, \quad \cos \theta, \]
\[ \left[ \cos 2\theta \sum_{i=0}^{M-1} \sin(4\pi f i) + \sin 2\theta \sum_{i=0}^{M-1} \cos(4\pi f i)\right] \]
\item This search can be very fast using a dichotomous process in $\log_2(nb_s)$
Finally, the whole process can be summarized in an algorithm (called LSQ in the following) in two parts, one executed before and one during the acquisition loop:
466 \caption{LSQ algorithm - before acquisition loop.}
467 \label{alg:lsq-before}
469 $M \leftarrow $ number of pixels of the profile\\
470 I[] $\leftarrow $ intensities of pixels\\
471 $f \leftarrow $ frequency of the profile\\
$s4i \leftarrow \sum_{i=0}^{M-1} \sin(4\pi f.i)$\\
$c4i \leftarrow \sum_{i=0}^{M-1} \cos(4\pi f.i)$\\
474 $nb_s \leftarrow $ number of discretization steps of $[-\pi,\pi]$\\
476 \For{$i=0$ to $nb_s $}{
477 $\theta \leftarrow -\pi + 2\pi\times \frac{i}{nb_s}$\\
lut$_s$[$i$] $\leftarrow \sin \theta$\\
lut$_c$[$i$] $\leftarrow \cos \theta$\\
lut$_A$[$i$] $\leftarrow \cos 2 \theta \times s4i + \sin 2 \theta \times c4i$\\
lut$_{sfi}$[$i$] $\leftarrow \sin (2\pi f.i)$\\
lut$_{cfi}$[$i$] $\leftarrow \cos (2\pi f.i)$\\
486 \begin{algorithm}[ht]
487 \caption{LSQ algorithm - during acquisition loop.}
488 \label{alg:lsq-during}
490 $\bar{x} \leftarrow \frac{M-1}{2}$\\
491 $\bar{y} \leftarrow 0$, $x_{var} \leftarrow 0$, $xy_{covar} \leftarrow 0$\\
492 \For{$i=0$ to $M-1$}{
493 $\bar{y} \leftarrow \bar{y} + $ I[$i$]\\
494 $x_{var} \leftarrow x_{var} + (i-\bar{x})^2$\\
496 $\bar{y} \leftarrow \frac{\bar{y}}{M}$\\
497 \For{$i=0$ to $M-1$}{
498 $xy_{covar} \leftarrow xy_{covar} + (i-\bar{x}) \times (I[i]-\bar{y})$\\
500 $slope \leftarrow \frac{xy_{covar}}{x_{var}}$\\
$start \leftarrow \bar{y} - slope\times \bar{x}$\\
502 \For{$i=0$ to $M-1$}{
503 $I[i] \leftarrow I[i] - start - slope\times i$\\
506 $I_{max} \leftarrow max_i(I[i])$, $I_{min} \leftarrow min_i(I[i])$\\
507 $amp \leftarrow \frac{I_{max}-I_{min}}{2}$\\
509 $Is \leftarrow 0$, $Ic \leftarrow 0$\\
510 \For{$i=0$ to $M-1$}{
511 $Is \leftarrow Is + I[i]\times $ lut$_{sfi}$[$i$]\\
512 $Ic \leftarrow Ic + I[i]\times $ lut$_{cfi}$[$i$]\\
515 $\delta \leftarrow \frac{nb_s}{2}$, $b_l \leftarrow 0$, $b_r \leftarrow \delta$\\
$v_l \leftarrow -2.Is - amp.$lut$_A$[$b_l$]\\
518 \While{$\delta >= 1$}{
520 $v_r \leftarrow 2.[ Is.$lut$_c$[$b_r$]$ + Ic.$lut$_s$[$b_r$]$ ] - amp.$lut$_A$[$b_r$]\\
522 \If{$!(v_l < 0$ and $v_r >= 0)$}{
523 $v_l \leftarrow v_r$ \\
524 $b_l \leftarrow b_r$ \\
526 $\delta \leftarrow \frac{\delta}{2}$\\
527 $b_r \leftarrow b_l + \delta$\\
529 \uIf{$!(v_l < 0$ and $v_r >= 0)$}{
530 $v_l \leftarrow v_r$ \\
531 $b_l \leftarrow b_r$ \\
532 $b_r \leftarrow b_l + 1$\\
533 $v_r \leftarrow 2.[ Is.$lut$_c$[$b_r$]$ + Ic.$lut$_s$[$b_r$]$ ] - amp.$lut$_A$[$b_r$]\\
536 $b_r \leftarrow b_l + 1$\\
539 \uIf{$ abs(v_l) < v_r$}{
540 $b_{\theta} \leftarrow b_l$ \\
543 $b_{\theta} \leftarrow b_r$ \\
$\theta \leftarrow \pi\times \left[\frac{2.b_{\theta}}{nb_s}-1\right]$\\
549 \subsubsection{Comparison}
We compared the two algorithms on the basis of three criteria:
\item precision of the results on a cosine profile, distorted with noise,
\item number of operations,
\item complexity of implementing an FPGA version.
For the first item, we produced a MATLAB version of each algorithm,
running with double precision values. The profile was generated for
about 34000 different values of period ($\in [3.1, 6.1]$, step = 0.1),
phase ($\in [-3.1 , 3.1]$, step = 0.062) and slope ($\in [-2 , 2]$,
step = 0.4). For LSQ, $nb_s = 1024$, which leads to a maximal error of
$\frac{\pi}{1024}$ on the phase computation. Current experiments by A. Meister
and M. Favre show a ratio of 50 between the variation of phase and
the deflection of a lever. Thus, the maximal error due to
discretization corresponds to an error of 0.15nm on the lever
deflection, which is smaller than the best precision they achieved,
For each test, we add some noise to the profile: each group of two
pixels has a random number picked in $[-N,N]$ added to its intensity
(NB: it should be noticed that picking a new value for each pixel does
not distort the profile enough). The absolute error on the result is
evaluated as the difference between the reference and the computed
phase, relative to $2\pi$ and expressed in percent. That is: $err =
100\times \frac{|\theta_{ref} - \theta_{comp}|}{2\pi}$.
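This noise injection can be sketched in C as follows (this is only our reading
of the protocol described above; the rand()-based generator is an arbitrary
choice):
\begin{verbatim}
#include <stdlib.h>

/* Add the same random offset, picked uniformly in
   [-N, N], to each group of two adjacent pixels.   */
void add_noise(double *I, int M, double N)
{
    for (int i = 0; i < M; i += 2) {
        double r = ((double)rand() / RAND_MAX)
                   * 2.0 * N - N;
        I[i] += r;
        if (i + 1 < M)
            I[i + 1] += r;
    }
}
\end{verbatim}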
578 Table \ref{tab:algo_prec} gives the maximum and average error for the two algorithms and increasing values of $N$.
582 \begin{tabular}{|c|c|c|c|c|}
584 & \multicolumn{2}{c|}{SPL} & \multicolumn{2}{c|}{LSQ} \\ \cline{2-5}
noise $N$ & max. err. & aver. err. & max. err. & aver. err. \\ \hline
586 0 & 2.46 & 0.58 & 0.49 & 0.1 \\ \hline
587 2.5 & 2.75 & 0.62 & 1.16 & 0.22 \\ \hline
588 5 & 3.77 & 0.72 & 2.47 & 0.41 \\ \hline
589 7.5 & 4.72 & 0.86 & 3.33 & 0.62 \\ \hline
590 10 & 5.62 & 1.03 & 4.29 & 0.81 \\ \hline
591 15 & 7.96 & 1.38 & 6.35 & 1.21 \\ \hline
592 30 & 17.06 & 2.6 & 13.94 & 2.45 \\ \hline
\caption{Error (in \%) for cosine profiles, with noise.}
596 \label{tab:algo_prec}
These results show that the two algorithms are very close, with a
slight advantage for LSQ. Furthermore, both behave very well against
noise. Assuming the experimental ratio of 50 (see above), an error of
1 percent on the phase corresponds to an error of 0.5nm on the lever
deflection, which is very close to the best precision.
Obviously, it is very hard to predict which level of noise will be
present in real experiments and how it will distort the
profiles. Nevertheless, we can see in Figure~\ref{fig:noise20} the
profile with $N=10$ that leads to the biggest error. It is a bit
distorted, with peaks and straight/rounded portions, and relatively
close to most of those that come from experiments. Figure~\ref{fig:noise60}
shows a sample of the worst profile for $N=30$. It is completely distorted,
largely beyond the worst experimental ones.
617 \includegraphics[width=\columnwidth]{intens-noise20}
\caption{Sample of the worst profile for $N=10$.}
625 \includegraphics[width=\columnwidth]{intens-noise60}
\caption{Sample of the worst profile for $N=30$.}
The second criterion is relatively easy to estimate for LSQ and harder
for SPL because of the $\arctan$ operation. In both cases, it is proportional
to the number of pixels $M$. For LSQ, it also depends on $nb_s$ and for
SPL on $N = k\times M$, i.e. the number of interpolated points.
We assume that $M=20$, $nb_s=1024$, $k=4$, that all possible parts are
already in lookup tables and that a limited set of operations (+, -, *, /,
$<$, $>$) is taken into account. Translating the two algorithms into C code, we
obtain about 430 operations for LSQ and 1550 (plus a few tens for
$\arctan$) for SPL. This result is largely in favor of LSQ. Nevertheless,
considering the total number of operations is not really pertinent for
an FPGA implementation: it mainly depends on the type of operations
644 ordering. The final decision is thus driven by the third criterion.\\
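To give a more concrete idea of the operations counted here, a simplified C
sketch of the LSQ phase search is given below (floating point, with a plain
scan of the $nb_s$ discretized values instead of the dichotomous search of
Algorithm~\ref{alg:lsq-during}; all names are ours):
\begin{verbatim}
#include <math.h>

/* Simplified LSQ phase search: scan the nb_s values
   of theta and keep the one minimizing |v(theta)|.
   I[]       : profile, mean line already removed
   amp       : amplitude of I[]
   lut_sfi[] : sin(2.pi.f.i), lut_cfi[] : cos(2.pi.f.i)
   lut_s[k]  : sin(theta_k),  lut_c[k]  : cos(theta_k)
   lut_A[k]  : cos(2.theta_k).s4i + sin(2.theta_k).c4i */
double lsq_phase(const double *I, int M, double amp,
                 const double *lut_sfi,
                 const double *lut_cfi,
                 const double *lut_s,
                 const double *lut_c,
                 const double *lut_A, int nb_s)
{
    double Is = 0.0, Ic = 0.0, best_v = HUGE_VAL;
    int i, best = 0;

    for (i = 0; i < M; i++) {     /* weighted sums  */
        Is += I[i] * lut_sfi[i];
        Ic += I[i] * lut_cfi[i];
    }
    for (i = 0; i < nb_s; i++) {  /* brute force    */
        double v = 2.0 * (Is * lut_c[i] + Ic * lut_s[i])
                   - amp * lut_A[i];
        if (fabs(v) < best_v) {
            best_v = fabs(v);
            best = i;
        }
    }
    return M_PI * (2.0 * best / nb_s - 1.0);
}
\end{verbatim}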
The Spartan6 used in our architecture has a hard constraint: it has no
built-in floating-point units. Obviously, it is possible to use some
existing ``black boxes'' for double precision operations. But they have
quite a long latency. It is much simpler to exclusively use integers,
with a quantization of all double precision values. Obviously, this
quantization should not decrease the precision of the results too
much. Furthermore, it should not lead to a design with a huge
latency because of operations that could not complete during a single
or a few clock cycles. Divisions are in this case and, moreover, they
need a varying number of clock cycles to complete. Even
multiplications can be a problem: DSP48 blocks take inputs of 18 bits
at most. For larger multiplications, several DSPs must be combined,
increasing the latency.
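As an example, the sine lookup table can be quantized on 16-bit signed integers
as sketched below (the $2^{14}$ scale factor is an arbitrary choice for
illustration, not necessarily the width retained in the actual design):
\begin{verbatim}
#include <math.h>
#include <stdint.h>

#define QUANT_SHIFT 14           /* scale = 2^14    */

/* Quantize sin(2.pi.f.i) on 16-bit signed integers
   so that LUT values fit within the 18-bit inputs
   of the DSP48 multipliers.                        */
void build_sin_lut(int16_t *lut, int M, double f)
{
    for (int i = 0; i < M; i++)
        lut[i] = (int16_t)lround(sin(2.0 * M_PI * f * i)
                                 * (1 << QUANT_SHIFT));
}
\end{verbatim}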
Nevertheless, the hardest constraint does not come from the FPGA
characteristics but from the algorithms. Their VHDL implementation will
be efficient only if they can be fully (or nearly fully) pipelined. In this
respect, the choice is quickly made: only a small part of SPL can be.
Indeed, the computation of the spline coefficients implies solving a
tridiagonal system $A.m = b$. The values in $A$ and $b$ can be computed
from the incoming pixel intensities, but afterwards the back-substitution
starts with the last values, which breaks the pipeline. Moreover, SPL relies on
interpolating far more points than the profile size. Thus, the end
of SPL works on a larger amount of data than the beginning, which
also breaks the pipeline.
LSQ does not have this problem: all parts except the dichotomous search
work on the same amount of data, i.e. the profile size. Furthermore,
LSQ needs fewer operations than SPL, implying a smaller output
latency. Consequently, it is the best candidate for the phase
computation. Nevertheless, obtaining a fully pipelined version
supposes that the operations of the different parts complete in a single clock
cycle. This is the case in simulations but it completely fails when
mapping and routing the design on the Spartan6. As a consequence,
extra latency is generated and there must be idle time between two
profiles entering the pipeline.
683 %%Before obtaining the least bitstream, the crucial question is: how to
684 %%translate the C code the LSQ into VHDL ?
687 %\subsection{VHDL design paradigms}
689 \section{Experimental tests}
691 \subsection{VHDL implementation}
% - writing of a C code with integers
% - computation of the maximum size in bits of each variable depending on the quantization.
% - quantization tests: trade-off between precision and FPGA constraints
% - in parallel: Simulink and hand-written VHDL
698 \subsection{Simulation}
% at best: one phase every 33 cycles, latency of 95 cycles.
% but place and route impossible.
703 \subsection{Bitstream creation}
% not done yet, but an output is expected every 480ns with a latency of 1120
712 \section{Conclusion and perspectives}
715 \bibliographystyle{plain}
716 \bibliography{biblio}