\begin{abstract}
Atomic force microscopes (AFM) provide high resolution images of surfaces. In
this paper, we focus our attention on an interferometry method to estimate the
deflection of cantilevers. The initial method was based on splines to determine
the phase of interference fringes, and thus the deflection. Computations were
performed on a PC with LabView. Here, we propose a new approach based on the
least squares method and its implementation, which we have developed on an FPGA
using the pipelining technique. Simulations and real tests show that this
implementation is very efficient and should allow us to control a cantilever
array in real time.
\end{abstract}
\section{Introduction}
Cantilevers are used inside atomic force microscopes (AFM), which provide high
resolution images of surfaces. Several techniques have been used in the
literature to measure the displacement of cantilevers. For example, it is
possible to determine the deflection accurately with different mechanisms.
In~\cite{CantiPiezzo01}, the authors used a piezoresistor integrated into the
cantilever. Nevertheless, this approach suffers from the complexity of the
microfabrication process needed to implement the sensor in the cantilever.
In~\cite{CantiCapacitive03}, the authors presented a cantilever mechanism
based on capacitive sensing. This kind of technique also requires instrumenting
the cantilever, which results in a complex fabrication process.
In this paper, our attention is focused on a method based on interferometry to
measure the cantilevers' displacements. In this method, cantilevers are illuminated
by an optical source. The interferometry produces fringes on each cantilever,
which enables us to compute the cantilever displacement. In order to analyze the
fringes, a high-speed camera is used. Images need to be processed quickly and
then an estimation method is required to determine the displacement of each
cantilever. In~\cite{AFMCSEM11}, the authors used an algorithm based on splines
to estimate the cantilevers' positions.
The overall process gives accurate results but all the computations are
performed on a standard computer using LabView. Consequently, the main drawback
of this implementation is that the computer is a bottleneck. In this paper, we
propose to use a method based on least squares and to implement all the
computation on an FPGA.
The remainder of the paper is organized as follows. Section~\ref{sec:measure}
describes more precisely the measurement process. Our solution based on the
\section{Measurement principles}
\label{sec:measure}
\subsection{Architecture}
\label{sec:archi}
In order to develop simple, cost-effective and user-friendly cantilever arrays,
the authors of~\cite{AFMCSEM11} have designed a system based on
interferometry. In contrast to other optics-based systems, which use a laser beam
deflection scheme and are sensitive to the angular displacement of the cantilever,
interferometry is sensitive to the optical path difference induced by the
vertical displacement of the cantilever.
The system built by these authors is based on a Linnick
interferometer~\cite{Sinclair:05}. It is illustrated in Figure~\ref{fig:AFM}.
The beam of a laser diode is first split (by the splitter) into a reference
beam and a sample beam that reaches the cantilever array. In order to be able
to move the cantilever array, it is mounted on a translation and rotational
hexapod stage with five degrees of freedom. The optical system is also fixed to
the stage. Thus, the cantilever array is centered in the optical system, which
can be adjusted accurately. The beam illuminates the array through a microscope
objective and the light reflects on the cantilevers. Likewise, the reference
beam reflects on a movable mirror. A CMOS camera chip records the reference and
sample beams, which are recombined in the beam splitter, and the resulting
interferogram. At the beginning of each experiment, the movable mirror is
adjusted manually in order to align the interferometric fringes approximately
parallel to the cantilevers. When cantilevers move due to the surface, their
bending produces movements in the fringes that can be detected by the CMOS
camera. Finally, the fringes need to be analyzed. In~\cite{AFMCSEM11}, the
authors used a LabView program to compute the cantilevers' deflections from the
fringes.
\begin{figure}
\begin{center}
proposed a method based on computing the phase of the fringes, at the
base of each cantilever, near the tip, and on the base of the
array. They assume that a linear relation binds these phases, which
can be used to ``unwrap'' the phase at the tip and to determine the deflection.\\
More precisely, segments of pixels are extracted from images taken by
the camera. These segments are large enough to cover several
interferometric fringes. As mentioned above, they are placed at the base
and near the tip of the cantilevers. They are called base profile and
where $x$ is the position of a pixel in its associated segment.
The global method consists of two main sequences. The first one aims
to determine the frequency $f$ of each profile with an algorithm based
on spline interpolation (see Section~\ref{sec:algo-spline}). It also
computes the coefficient used for unwrapping the phase. The second one
is the acquisition loop, during which images are taken at regular time
steps. For each image, the phase $\theta$ of all profiles is computed
to obtain, after unwrapping, the deflection of
cantilevers. Originally, this computation was also done with an
that computing the deflection of a single
cantilever should take less than 25$\mu$s, thus 12.5$\mu$s per phase.\\
In fact, this timing is a very hard constraint. Let us consider a very
small program that initializes twenty million doubles in memory
and then does 1,000,000 cumulative sums over 20 contiguous values
(experimental profiles have about this size). On an Intel Core 2 Duo
E6650 at 2.33GHz, this program reaches an average of 155Mflops.
Obviously, cache effects and optimizations on
huge amounts of computations can drastically increase this
performance: peak efficiency is about 2.5Gflops for the considered
CPU. But this is not the case for the phase computation, which only uses a few
tens of values.\\
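Such a micro-benchmark could look like the following C sketch; the timing
method, the way the 20-value windows are chosen and the output format are our
assumptions, not the exact program that was used.
\begin{verbatim}
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N_DOUBLES   20000000L  /* twenty million doubles        */
#define N_SUMS      1000000L   /* number of cumulative sums     */
#define PROFILE_LEN 20         /* size of a profile, in pixels  */

int main(void)
{
    double *data = malloc(N_DOUBLES * sizeof(double));
    if (data == NULL) return 1;
    for (long i = 0; i < N_DOUBLES; i++)
        data[i] = (double)i;

    volatile double sink = 0.0;  /* avoids over-optimization */
    clock_t start = clock();
    for (long k = 0; k < N_SUMS; k++) {
        long base = (k * PROFILE_LEN) % (N_DOUBLES - PROFILE_LEN);
        double sum = 0.0;
        for (int j = 0; j < PROFILE_LEN; j++)
            sum += data[base + j];
        sink = sum;
    }
    double elapsed = (double)(clock() - start) / CLOCKS_PER_SEC;
    /* about PROFILE_LEN additions per sum */
    printf("%.1f Mflops\n",
           (double)N_SUMS * PROFILE_LEN / elapsed / 1e6);
    free(data);
    return 0;
}
\end{verbatim}
Compiled with the usual optimization levels, such a loop measures sustained
additions on small windows rather than the peak floating-point throughput of
the processor.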
In order to evaluate the original algorithm, we translated it into C
language. As detailed below, for 20 pixels, it does about 1,550
operations, thus an estimated execution time of $1550/155 =
10\mu$s. For a more realistic evaluation, we constructed a file of
1MB containing 200 profiles of 20 pixels, equally scattered. This file
is equivalent to an image stored in a device file representing the
camera. We obtained an average of 10.5$\mu$s per profile (including I/O
accesses). It meets our requirement but is close to the limit. In
case of an occasional load on the system, it could be largely
exceeded. One solution would be to use a real-time operating system;
another is to search for a more efficient algorithm.
profile must be treated one after another, the deflection of 100
cantilevers takes about $200\times 10.5 = 2.1$ms, which is inadequate
for efficient control. An obvious solution is to parallelize the
computations, for example on a GPU. Nevertheless, the cost of transferring
profiles to GPU memory and of fetching back the results would be prohibitive
compared to the computation time. It is certainly more efficient to
pipeline the computation. For example, supposing that 200 profiles of
20 pixels can be pushed sequentially into the pipelined unit clocked at
the latency of the pipeline. This is about 500 times faster than
the current results.\\
For these reasons, an FPGA as the computation unit is the best choice
to achieve the required performance. Nevertheless, passing from
C code to a pipelined version in VHDL is not obvious at all. As
explained in the next section, it can even be impossible because of
Project Oscar aims to provide a hardware and software architecture to estimate
and control the deflection of cantilevers. The hardware part consists of a
high-speed camera linked to an embedded board hosting FPGAs. In this way, the
camera output stream can be pushed directly into the FPGA. The software part is
mostly the VHDL code that deserializes the camera stream, extracts profiles and
computes the deflection. Before focusing on our work to implement the phase
A field-programmable gate array (FPGA) is an integrated circuit designed to be
configured by the customer. FPGAs are composed of programmable logic components,
called configurable logic blocks (CLB). These blocks mainly contain look-up
tables (LUT), flip/flops (F/F) and latches, organized in one or more slices
connected together. Each CLB can be configured to perform simple (AND, XOR, ...)
or complex combinational functions. They are interconnected by reconfigurable
links. Modern FPGAs contain memory elements and multipliers, which make it possible to
simplify the design and to increase the performance. Nevertheless, all other
complex operations, like division, trigonometric functions, $\ldots$ are not
available and must be implemented by configuring a set of CLBs. Since this
configuration is not obvious at all, it can be done via a framework, like
ISE~\cite{ISE}. Such a tool can synthesize a design written in a hardware
description language (HDL), map it onto CLBs, place/route them for a specific
FPGA, and finally produce a bitstream that is used to configure the FPGA. Thus,
from the developer's point of view, the main difficulty is to translate an
algorithm into HDL code, taking into account FPGA resources and constraints like
clock signals and I/O values that drive the FPGA.
Indeed, HDL programming is very different from classic languages like
C. A program can be seen as a state-machine, manipulating signals that
evolve from state to state. Moreover, HDL instructions can execute
concurrently. Basic logic operations are used to aggregate signals to
produce new states and assign them to other signals. States are mainly
expressed as arrays of bits. Fortunately, libraries provide some
higher-level representations like signed integers, and arithmetic
operations.
Furthermore, even if FPGAs are clocked more slowly than classic
processors, they can perform pipelined as well as parallel
operations. A pipeline consists in splitting a process into a sequence of
small tasks, each taking the same execution time. It accepts new data at
each clock tick and thus, after a known latency, it also provides a result
at each clock tick. However, using a pipeline consumes more logic
since the components of a task are not reusable by another
one. Nevertheless, it is probably the most efficient technique on
FPGAs. Because of their architecture, it is also very easy to process
several data items concurrently. Whenever possible, the best performance
is reached by using parallelism to run several pipelines simultaneously
in order to handle multiple data streams.
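To fix ideas, the time needed to push $n$ data items through such a pipeline
can be modeled as
\[ T = (L + n \times d) \times T_{clk}, \]
where $L$ is the latency of the pipeline in cycles, $d$ the number of cycles
between two accepted inputs (ideally $d=1$) and $T_{clk}$ the clock period.
This simple model is consistent with the timing estimates given later in the
paper.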
\subsection{The board}
The board we use is designed by the Armadeus company, under the name
SP Vision. It consists of a development board hosting an i.MX27 ARM
processor (from Freescale). The board includes all classical
connectors: USB, Ethernet, ... A Flash memory contains a Linux kernel
original one, proposed by A. Meister and M. Favre, is based on
interpolation by splines. It allows computing both the frequency and the
phase. The second one, detailed in this article, is based on a
classical least squares method, which supposes that the frequency is already
known.
\subsubsection{Spline algorithm (SPL)}
\label{sec:algo-spline}
Let us consider a profile $P$, that is, a segment of $M$ pixels with an
intensity in gray levels. Let us call $I(x)$ the intensity of the profile at $x
\in [0,M[$.
At first, only $M$ values of $I$ are known, for $x = 0, 1,
\ldots,M-1$. A normalization scales the known intensities into
$[-1,1]$. We compute splines that best fit these normalized
intensities. Splines are used to interpolate $N = k\times M$ points
(typically $k=4$ is sufficient), within $[0,M[$. Let $x^s$ be the
coordinates of these $N$ points and $I^s$ their intensities.
In order to obtain the frequency, the mean line $a.x+b$ (see equation \ref{equ:profile}) of $I^s$ is
computed. Finding the intersections of $I^s$ and this line allows us to obtain
the period and thus the frequency.
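As an illustration, here is a minimal C sketch of this intersection search;
the function name, the linear interpolation of the crossing abscissae and the
handling of exact zeros are our assumptions, not the authors' exact
implementation.
\begin{verbatim}
/* Sketch: estimate the period of the interpolated intensities
   Is[0..N-1], taken at abscissae xs[0..N-1], from their crossings
   with the mean line a*x + b. Names and details are illustrative. */
double estimate_period(const double *Is, const double *xs, int N,
                       double a, double b)
{
    double first = 0.0, last = 0.0;
    int crossings = 0;
    for (int i = 1; i < N; i++) {
        double d0 = Is[i-1] - (a * xs[i-1] + b);
        double d1 = Is[i]   - (a * xs[i]   + b);
        if (d0 * d1 < 0.0) {  /* sign change: one crossing         */
            /* linear interpolation of the crossing abscissa       */
            double x = xs[i-1] + (xs[i] - xs[i-1]) * d0 / (d0 - d1);
            if (crossings == 0) first = x;
            last = x;
            crossings++;
        }
    }
    if (crossings < 2) return -1.0;  /* not enough crossings       */
    /* the mean line is crossed twice per period                   */
    return 2.0 * (last - first) / (crossings - 1);
}
\end{verbatim}
The frequency is then simply the inverse of the returned period.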
The phase is computed via the equation:
\begin{equation}
\begin{itemize}
\item the frequency could also be obtained using the derivatives of the
  spline equations, which only requires solving quadratic equations.
\item the frequency of each profile is computed only once, before the
acquisition loop. Thus, $sin(2\pi f x^s_i)$ and $cos(2\pi f x^s_i)$
  could also be computed before the loop, which would lead to a much faster
computation of $\theta$.
\end{itemize}
ending with a convergence criterion, it is obvious that it is not
particularly adapted to our design goals.
Fortunately, it is quite simple to reduce the number of parameters to
only $\theta$. Let $x^p$ be the coordinates of pixels in a segment of
size $M$. Thus, $x^p = 0, 1, \ldots, M-1$. Let $I(x^p)$ be their
intensity. Firstly, we ``remove'' the slope by computing:
\[\sum_{i=0}^{M-1} \left[ cos(2\pi f.i + \theta) - \frac{I^{corr}(i)}{A} \right]^2\]
This is equivalent to differentiating this expression and solving the following equation:
\begin{eqnarray*}
& & 2\left[ cos\theta \sum_{i=0}^{M-1} I^{corr}(i).sin(2\pi f.i) \right. \\
& & \quad \left. +\; sin\theta \sum_{i=0}^{M-1} I^{corr}(i).cos(2\pi f.i)\right] \\
& & -\; A\left[ cos2\theta \sum_{i=0}^{M-1} sin(4\pi f.i) + sin2\theta \sum_{i=0}^{M-1} cos(4\pi f.i)\right] = 0
\end{eqnarray*}
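Indeed, differentiating the previous sum with respect to $\theta$ gives
\[
\frac{2}{A}\sum_{i=0}^{M-1} I^{corr}(i).sin(2\pi f.i+\theta)
- \sum_{i=0}^{M-1} sin(4\pi f.i+2\theta),
\]
and expanding both sines with the angle-addition formulas, then multiplying by
$A$, yields the equation above.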
Several points can be noticed:
\begin{itemize}
\item The simplest method to find the right $\theta$ is to discretize
  $[-\pi,\pi]$ in $nb_s$ steps, and to search which step leads to the
  result closest to zero (see the sketch after this list). Hence, three other lookup tables can
also be computed before the loop:
\[ sin \theta, cos \theta, \]
\end{itemize}
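To make the structure of this search concrete, here is a minimal C sketch of
the phase computation performed for one profile. The identifiers (lut\_sin,
lut\_cos, $s4i$, $c4i$, NB\_S) follow the text, but the exact data layout is an
assumption, and the sketch uses the simple exhaustive scan described above
rather than the dichotomic search mentioned later.
\begin{verbatim}
#include <math.h>
#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define NB_S 1024   /* number of discretization steps for theta */

/* Sketch of the LSQ phase search for one profile (illustrative).
   Icorr[] : profile with slope removed, M pixels
   A, f    : amplitude and frequency of the profile
   lut_sin, lut_cos : sin/cos of the NB_S candidate angles
   s4i, c4i: precomputed sums of sin(4*pi*f*i) and cos(4*pi*f*i) */
double lsq_phase(const double *Icorr, int M, double A, double f,
                 const double *lut_sin, const double *lut_cos,
                 double s4i, double c4i)
{
    /* sums over the profile (in the real design, sin/cos of
       2*pi*f*i would also come from tables filled before the loop) */
    double Is = 0.0, Ic = 0.0;
    for (int i = 0; i < M; i++) {
        Is += Icorr[i] * sin(2.0 * M_PI * f * i);
        Ic += Icorr[i] * cos(2.0 * M_PI * f * i);
    }

    /* keep the discretized theta bringing the equation closest to 0 */
    double best_theta = -M_PI, best_val = INFINITY;
    for (int k = 0; k < NB_S; k++) {
        double theta = -M_PI + 2.0 * M_PI * k / NB_S;
        double s = lut_sin[k], c = lut_cos[k];
        double s2 = 2.0 * s * c;      /* sin(2*theta) */
        double c2 = c * c - s * s;    /* cos(2*theta) */
        double val = 2.0 * (c * Is + s * Ic)
                   - A * (c2 * s4i + s2 * c4i);
        if (fabs(val) < best_val) {
            best_val   = fabs(val);
            best_theta = theta;
        }
    }
    return best_theta;
}
\end{verbatim}
The sums $s4i$ and $c4i$ and the lookup tables are precisely the values
computed once before the acquisition loop, as in the algorithm below.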
Finally, the whole process is summarized in an algorithm (called LSQ in the following) made of two parts, one before and one during the acquisition loop:
\begin{algorithm}[htbp]
\caption{LSQ algorithm - before acquisition loop.}
\label{alg:lsq-before}
$M \leftarrow $ number of pixels of the profile\\
  I[] $\leftarrow $ intensities of the pixels\\
$f \leftarrow $ frequency of the profile\\
$s4i \leftarrow \sum_{i=0}^{M-1} sin(4\pi f.i)$\\
$c4i \leftarrow \sum_{i=0}^{M-1} cos(4\pi f.i)$\\
We compared the two algorithms on the basis of three criteria:
\begin{itemize}
\item precision of the results on a cosine profile distorted by noise,
\item number of operations,
\item complexity of implementing an FPGA version.
\end{itemize}
For the first item, we produced a Matlab version of each algorithm,
phase ($\in [-3.1 , 3.1]$, step = 0.062) and slope ($\in [-2 , 2]$,
step = 0.4). For LSQ, $nb_s = 1024$, which leads to a maximal error of
$\frac{\pi}{1024}$ on the phase computation. Current experiments by A. Meister and
M. Favre show a ratio of 50 between the variation of the phase and
the deflection of a lever. Thus, the maximal error due to
discretization corresponds to an error of 0.15nm on the lever
deflection, which is smaller than the best precision they achieved,
i.e. 0.3nm.
(NB: it should be noticed that picking a new value for each pixel does
not distort the profile enough). The absolute error on the result is
evaluated as the difference between the reference and
computed phase, relative to $2\pi$ and expressed as a percentage. That is: $err =
100\times \frac{|\theta_{ref} - \theta_{comp}|}{2\pi}$.
Table \ref{tab:algo_prec} gives the maximum and average error for both
algorithms and increasing values of $N$.
\begin{table}[ht]
\begin{center}
30 & 17.06 & 2.6 & 13.94 & 2.45 \\ \hline
\end{tabular}
\caption{Error (in \%) for cosine profiles, with noise.}
\label{tab:algo_prec}
\end{center}
\end{table}
These results show that the two algorithms are very close, with a
slight advantage for LSQ. Furthermore, both behave very well against
noise. Assuming the experimental ratio of 50 (see above), an error of
1 percent on the phase corresponds to an error of 0.5nm on the lever
deflection, which is very close to the best precision.
Obviously, it is very hard to predict which level of noise will be
profiles. Nevertheless, we can see in Figure \ref{fig:noise20} the
profile with $N=10$ that leads to the biggest error. It is a bit
distorted, with peaks and straight/rounded portions, and relatively
close to most of those coming from experiments. Figure \ref{fig:noise60}
shows a sample of the worst profile for $N=30$. It is completely distorted,
largely beyond the worst experimental ones.
\begin{figure}[ht]
The second criterion is relatively easy to estimate for LSQ and harder
for SPL because of the $atan$ operation. In both cases, it is proportional
to the number of pixels $M$. For LSQ, it also depends on $nb_s$ and for
SPL on $N = k\times M$, i.e. the number of interpolated points.
We assume that $M=20$, $nb_s=1024$, $k=4$, all possible parts are
already in lookup tables and a limited set of operations (+, -, *, /,
$<$, $>$) is taken into account. Translating both algorithms into C code, we
obtain about 430 operations for LSQ and 1,550 (plus a few tens for
$atan$) for SPL. This result is largely in favor of LSQ. Nevertheless,
considering the total number of operations is not really pertinent for
an FPGA implementation: it mainly depends on the type of operations
ordering. The final decision is thus driven by the third criterion.\\
The Spartan 6 used in our architecture has a hard constraint: it has no built-in
floating-point units. Obviously, it is possible to use some existing
``black-boxes'' for double precision operations. But they have quite a long
latency. It is much simpler to exclusively use integers, with a quantization of
all double precision values. Obviously, this quantization should not decrease
the precision of the results too much. Furthermore, it should not lead to a design
with a huge latency because of operations that could not complete during a
single or a few clock cycles. Divisions fall into that category and, moreover,
they need a varying number of clock cycles to complete. Even multiplications can
be a problem: a DSP48 takes inputs of 18 bits maximum. For larger multiplications,
several DSPs must be combined, increasing the latency.
Nevertheless, the hardest constraint does not come from the FPGA characteristics
but from the algorithms. Their VHDL implementation will be efficient only if
they can be fully (or nearly fully) pipelined. Thus, the choice is quickly made:
only a small part of SPL can be pipelined. Indeed, the computation of spline
coefficients implies solving a tridiagonal system $A.m = b$. The values in $A$
and $b$ can be computed from the incoming pixel intensities but, after that,
the back-solve starts with the latest values, which breaks the pipeline.
Moreover, SPL relies on interpolating far more points than the profile size.
Thus, the end of SPL works on a larger amount of data than the beginning, which
also breaks the pipeline.
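To illustrate the problem, here is a C sketch of a standard tridiagonal solver
(Thomas algorithm); it is not the authors' code, but it shows that the backward
substitution starts from the last unknown and therefore cannot begin before the
whole profile has been received.
\begin{verbatim}
/* Thomas algorithm for a tridiagonal system A.m = b (illustrative).
   a[] : sub-diagonal (a[0] unused), d[] : diagonal,
   c[] : super-diagonal, b[] : right-hand side, m[] : solution.
   d[] and b[] are modified in place.                              */
void tridiag_solve(const double *a, double *d, const double *c,
                   double *b, double *m, int n)
{
    /* forward sweep: can proceed while the data arrive            */
    for (int i = 1; i < n; i++) {
        double w = a[i] / d[i-1];
        d[i] -= w * c[i-1];
        b[i] -= w * b[i-1];
    }
    /* backward substitution: starts from the LAST equation, so it
       can only begin once all values are known -> pipeline break  */
    m[n-1] = b[n-1] / d[n-1];
    for (int i = n - 2; i >= 0; i--)
        m[i] = (b[i] - c[i] * m[i+1]) / d[i];
}
\end{verbatim}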
LSQ does not have this problem: all parts except the dichotomic search work on the
same amount of data, i.e. the profile size. Furthermore, LSQ needs less
best candidate for phase computation. Nevertheless, obtaining a fully pipelined
version supposes that operations of different parts complete in a single clock
cycle. This is the case in simulations but it completely fails when mapping and
routing the design on the Spartan6. Thus, extra latency is generated and
there must be idle times between two profiles entering the pipeline.
\section{Experimental tests}
\subsection{VHDL implementation}
integer values. We use a very simple quantization by multiplying
double precision values by a power of two, keeping the integer
part. For example, all values stored in lut$_s$, lut$_c$, $\ldots$ are
scaled by 1,024. Since LSQ also computes the average, the variance, ... to
remove the slope, the result of the implied Euclidean divisions may be
quite inaccurate. To avoid that, we also scale the pixel intensities
by a power of two. Furthermore, assuming $nb_s$ is fixed, these
divisions have a known denominator. Thus, they can be replaced by
their multiplication/shift counterpart. Finally, all other
multiplications or divisions by a power of two have been replaced by
left or right bit shifts. Thus, the code only contains
additions, subtractions and multiplications of signed integers, which
are perfectly adapted to FPGAs.

As mentioned above, hardware constraints have a great influence on the VHDL
implementation. Consequently, we searched for the maximum value of each variable
as a function of the different scale factors and of the size of the profiles,
which gives their maximum size in bits. That size determines the maximum scale
factors that allow the design to use the fewest possible RAMs and DSPs. We
implemented our algorithm with this maximum size, but ongoing work studies the
impact of quantization on the precision of the results and on the design
complexity. We have compared the results of the LSQ version using integers and
doubles and observed that the precision of both was similar.
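The following C fragment sketches this quantization scheme. Apart from the
factor 1,024 mentioned above, the scale factors, the constant $K$ and the
helper names are our assumptions; in the VHDL design, the same operations map
to bit shifts and DSP multiplications.
\begin{verbatim}
#include <stdint.h>

#define LUT_SHIFT 10  /* lookup tables scaled by 2^10 = 1024       */
#define PIX_SHIFT 4   /* pixel intensities: illustrative scale     */
#define K         16  /* precision of the division-by-constant     */

/* quantize a double into a scaled signed integer                  */
static inline int32_t quantize(double v, int shift)
{
    return (int32_t)(v * (double)(1 << shift));
}

/* division by a constant M known at design time, replaced by a
   multiplication and a shift: x / M ~= (x * ((1 << K) / M)) >> K  */
static inline int32_t div_by_const(int32_t x, int32_t M)
{
    int32_t inv = (1 << K) / M;  /* computed once, offline         */
    return (int32_t)(((int64_t)x * inv) >> K);
}

/* multiplications/divisions by powers of two become bit shifts    */
static inline int32_t mul_pow2(int32_t x, int s) { return x << s; }
static inline int32_t div_pow2(int32_t x, int s) { return x >> s; }
\end{verbatim}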
Then we built two versions of the VHDL code: one directly by hand coding
and the other with Matlab using the Simulink HDL Coder
feature~\cite{HDLCoder}. Although the approaches are completely different,
we obtained VHDL codes that are quite comparable. Each approach has
advantages and drawbacks. Roughly speaking, hand coding provides
beautiful and much better structured code while Simulink enables us to
produce code faster. In terms of throughput and latency,
simulations show that the two approaches are close with a slight
advantage for hand coding. We hope that real experiments will confirm
that.
Before experimental tests on the board, we simulated our two VHDL
codes with GHDL and GTKWave (two free tools under Linux). For that, we
built a testbench based on profiles taken from experiments and
compared the results to the values given by the SPL algorithm. Both
versions lead to correct results.
Our first codes were highly optimized: the pipeline could compute a
new phase every 33 cycles and its latency was equal to 95 cycles. Since
the Spartan6 is clocked at 100MHz, it implies that estimating the
deflection of 100 cantilevers would take about $(95 + 200\times 33).10
= 66.95\mu$s, i.e. nearly 15,000 estimations per second.
\subsection{Bitstream creation}
extended with a component that keeps profiles in RAM, flushes them into
the phase computation component and stores its output in another
RAM. We also added a wishbone: a component that can ``drive'' signals
to communicate between the i.MX and the other components. It is mainly used
to start flushing the profiles and to retrieve the computed phases from RAM.
Unfortunately, the first designs could not be placed and routed with ISE on the
Spartan6 with a 100MHz clock. The main problems came from routing values from
the RAMs to the DSPs and obtaining a result under 10ns. So, we needed to
decompose some parts of the pipeline, which adds some cycles. For example, some
delays have been introduced between the RAM outputs and the DSPs. Finally, we
obtained a bitstream that has a latency of 112 cycles and computes a new phase
every 40 cycles. For 100 cantilevers, it takes $(112 + 200\times 40).10 =
81.12\mu$s to compute their deflection.
This bitstream has been successfully tested on the board.
profile image very quickly. So far, we have performed simulations
and real tests on a Spartan6 FPGA.
In future work, we plan to study the quantization. Then we want to couple our
algorithm with a high-speed camera and to control the whole AFM system.
\bibliographystyle{plain}
\bibliography{biblio}