X-Git-Url: https://bilbo.iut-bm.univ-fcomte.fr/and/gitweb/dmems12.git/blobdiff_plain/378804e7fc26018b70fc8beafbb03b16a0e536f2..e73cda1143a6ba97c0904b1811130d22ba7daaa8:/dmems12.tex?ds=sidebyside
diff --git a/dmems12.tex b/dmems12.tex
index 806e509..f559661 100644
--- a/dmems12.tex
+++ b/dmems12.tex
@@ -1,7 +1,9 @@
-\documentclass[12pt]{article}
+
+\documentclass[10pt, conference, compsocconf]{IEEEtran}
 %\usepackage{latex8}
 %\usepackage{times}
-\usepackage[latin1]{inputenc}
+\usepackage[utf8]{inputenc}
+%\usepackage[cyr]{aeguill}
 %\usepackage{pstricks,pst-node,pst-text,pst-3d}
 %\usepackage{babel}
 \usepackage{amsmath}
@@ -16,34 +18,45 @@
 \usepackage{fullpage}
 \usepackage{fancybox}
 
+\usepackage[ruled,lined,linesnumbered]{algorithm2e}
+
 %%%%%%%%%%%%%%%%%%%%%%%%%%%% LyX specific LaTeX commands.
 \newcommand{\noun}[1]{\textsc{#1}}
 
 \newcommand{\tab}{\ \ \ }
 
-%%%%%%%%%%%%%%%%%%%%%%%%%%%% my bib path.
+
+
+\begin{document}
+
+
+%% \author{\IEEEauthorblockN{Authors Name/s per 1st Affiliation (Author)}
+%% \IEEEauthorblockA{line 1 (of Affiliation): dept. name of organization\\
+%% line 2: name of organization, acronyms acceptable\\
+%% line 3: City, Country\\
+%% line 4: Email: name@xyz.com}
+%% \and
+%% \IEEEauthorblockN{Authors Name/s per 2nd Affiliation (Author)}
+%% \IEEEauthorblockA{line 1 (of Affiliation): dept. name of organization\\
+%% line 2: name of organization, acronyms acceptable\\
+%% line 3: City, Country\\
+%% line 4: Email: name@xyz.com}
+%% }
+
 \title{Using FPGAs for high speed and real time cantilever deflection estimation}
+\author{\IEEEauthorblockN{Raphaël Couturier\IEEEauthorrefmark{1}, Stéphane Domas\IEEEauthorrefmark{1}, Gwenhaël Goavec-Merou\IEEEauthorrefmark{2} and Michel Lenczner\IEEEauthorrefmark{2}}
+\IEEEauthorblockA{\IEEEauthorrefmark{1}FEMTO-ST, DISC, University of Franche-Comte, Belfort, France\\
+\{raphael.couturier,stephane.domas\}@univ-fcomte.fr}
+\IEEEauthorblockA{\IEEEauthorrefmark{2}FEMTO-ST, Time-Frequency, University of Franche-Comte, Besançon, France\\
+\{michel.lenczner@utbm.fr,gwenhael.goavec@trabucayre.com\}}
+}
+
+
 
-\author{ Raphaël COUTURIER\\
-Laboratoire d'Informatique
-de l'Universit\'e de Franche-Comt\'e, \\
-BP 527, \\
-90016~Belfort CEDEX, France\\
- \and Stéphane Domas\\
-Laboratoire d'Informatique
-de l'Universit\'e de Franche-Comt\'e, \\
-BP 527, \\
-90016~Belfort CEDEX, France\\
- \and Gwenhaël Goavec\\
-??
-?? \\
-??, \\
-??\\}
-\begin{document}
 \maketitle
 
 
@@ -58,34 +71,51 @@ BP 527, \\
 \section{Introduction}
 
-%% blabla
+
+Cantilevers are used inside atomic force microscopes (AFM), which provide
+high resolution images of surfaces. Several techniques have been reported
+in the literature to measure the displacement of cantilevers. For example,
+it is possible to accurately determine the deflection with an optical
+interferometer~\cite{CantiOptic89}, a piezoresistor~\cite{CantiPiezzo01} or
+capacitive sensing~\cite{CantiCapacitive03}. In this paper, our attention is
+focused on a method based on interferometry to measure the cantilevers'
+displacements. In this method, cantilevers are illuminated by an optical
+source. Interferometry produces fringes on each cantilever, which enables
+the cantilever displacement to be computed. In order to analyze the
+fringes, a high speed camera is used. Images need to be processed quickly
+and an estimation method is then required to determine the displacement of
+each cantilever. In~\cite{AFMCSEM11} {\bf check ref}, the authors used an
+algorithm based on splines to estimate the cantilevers' positions.
+The overall process gives accurate results, but all the computations are
+performed on a standard computer running LabVIEW. Consequently, the main
+drawback of this implementation is that the computer is a bottleneck in the
+overall process. In this paper, we propose to use a method based on least
+squares and to implement all the computations on an FPGA.
+
+The remainder of the paper is organized as follows. Section~\ref{sec:measure}
+describes more precisely the measurement process. Our solution, based on the
+least squares method, and its implementation on the FPGA are presented in
+Section~\ref{sec:solus}. Experiments are described in
+Section~\ref{sec:results}. Finally, a conclusion and some perspectives are
+presented.
+
+
+
 %% quelques ref commentées sur les calculs basés sur l'interférométrie
 
-\section{Measurement architecture}
-\label{sec:measure-archi}
+\section{Measurement principles}
+\label{sec:measure}
 
+\subsection{Architecture}
+\label{sec:archi}
 %% description de l'architecture générale de l'acquisition d'images
 %% avec au milieu une unité de traitement dont on ne précise pas ce
 %% qu'elle est.
 
 %% image tirée des expériences.
 
-\section{Design goals}
-\label{sec:goals}
-
-%% objectifs en terme de rapidité et de précision, avec en vue l'ajout
-%% du contrôle => l'unité de traitement qui s'impose est un FPGA =>
-%% algo adapté au FPGA.
-
-%% peut etre que cette section peut être déplacée en intro ... à voir.
-
-\section{Proposed solution}
-\label{sec:solus}
-
 \subsection{Cantilever deflection estimation}
+\label{sec:deflest}
 
-%% => faire de l'interpolation de signal sinusoidal
-%% descriptif rapide des deux méthodes : splines et moindres carrés
 
 As shown on image \ref{img:img-xp}, each cantilever is covered by
 interferometric fringes. The fringes will distort when cantilevers are
 deflected. Estimating the deflection is done by computing this
@@ -113,19 +143,49 @@ where $x$ is the position of a pixel in its associated segment.
 
 The global method consists in two main sequences. The first one aims
 to determin the frequency $f$ of each profile with an algorithm based
-on spline interpolation (see below). It also computes the coefficient
-used for unwrapping the phase. The second one is the acquisition loop,
-while which images are taken at regular time steps. For each image,
-the phase $\theta$ of all profiles is computed to obtain, after
-unwrapping, the deflection of cantilevers.
-
-This phase computation is obviously the bottle-neck of the whole
-process. For example, if we consider the camera actually in use, an
-exposition time of 2.5ms for $1024\times 1204$ pixels seems the
-minimum that can be reached. For a $10\times 10$ cantilever array, if
-we neglect the time to extract pixels, it implies that computing the
-deflection of a single cantilever should take less than 25$µ$s, which is
-quite small.
+on spline interpolation (see section \ref{sec:algo-spline}). It also
+computes the coefficient used for unwrapping the phase. The second one
+is the acquisition loop, during which images are taken at regular time
+steps. For each image, the phase $\theta$ of all profiles is computed
+to obtain, after unwrapping, the deflection of the cantilevers.
+
+\subsection{Design goals}
+\label{sec:goals}
+
+If we put aside some hardware issues, such as the speed of the link
+between the camera and the computation unit or the time to deserialize
+pixels and to store them in memory, the phase computation is clearly the
+bottleneck of the whole process. For example, if we consider the camera
+currently in use, an exposure time of 2.5ms for $1024\times 1204$ pixels
+seems the minimum that can be reached. For a $10\times 10$ cantilever
+array, if we neglect the time to extract pixels, it implies that
+computing the deflection of a single cantilever should take less than
+25$\mu$s, i.e. 12.5$\mu$s per phase.\\
+
+In fact, this timing is a very hard constraint. Let us consider a very
+small program that initializes twenty million doubles in memory and then
+computes 1000000 cumulative sums over 20 contiguous values (experimental
+profiles have about this size). On an Intel Core 2 Duo E6650 at 2.33GHz,
+this program reaches an average of 155Mflops. This implies that the phase
+computation algorithm should not take more than $155\times 12.5 \approx
+1937$ floating point operations. For integers, it gives $3000$
+operations.
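+
+As an illustration, the following small C program is a possible
+reconstruction of the micro-benchmark described above (the sizes and the
+initialization pattern are our assumptions, not the exact code used):
+
+\begin{verbatim}
+/* Rough reconstruction of the micro-benchmark: initialize
+ * 20 million doubles, then perform 1,000,000 cumulative sums
+ * over 20 contiguous values (about the size of a profile). */
+#include <stdio.h>
+#include <stdlib.h>
+
+#define SIZE    20000000L
+#define NB_SUMS 1000000L
+#define PROFILE 20
+
+int main(void)
+{
+    double *data = malloc(SIZE * sizeof(double));
+    double sum = 0.0;
+    for (long i = 0; i < SIZE; i++)
+        data[i] = (double)(i % 256);
+    for (long k = 0; k < NB_SUMS; k++) {
+        long off = (k * PROFILE) % (SIZE - PROFILE);
+        for (int i = 0; i < PROFILE; i++)
+            sum += data[off + i];
+    }
+    printf("%g\n", sum);  /* prevent dead-code elimination */
+    free(data);
+    return 0;
+}
+\end{verbatim}
+
+Timing the two nested loops and dividing the number of additions by the
+elapsed time gives the sustained Mflops figure used above.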
+
+%% to be continued ...
+
+%% TODO: timing of the spline algorithm in C, with atan and all the rest.
+
+
+
+
+\section{Proposed solution}
+\label{sec:solus}
+
+
+\subsection{FPGA constraints}
+
+%% constraints imposed by the FPGA: pipelined/parallel algorithm, no complex math operators, ...
+
 
 \subsection{Considered algorithms}
 
@@ -137,7 +197,7 @@ classical least square method but suppose that frequency is already
 known.
 
 \subsubsection{Spline algorithm}
-
+\label{sec:algo-spline}
 
 Let consider a profile $P$, that is a segment of $M$ pixels with an
 intensity in gray levels. Let call $I(x)$ the intensity of profile in
 $x \in [0,M[$.
 
@@ -146,7 +206,7 @@ At first, only $M$ values of $I$ are known, for $x = 0, 1,
 \ldots,M-1$. A normalisation allows to scale known intensities into
 $[-1,1]$. We compute splines that fit at best these normalised
 intensities. Splines are used to interpolate $N = k\times M$ points
-(typically $k=3$ is sufficient), within $[0,M[$. Let call $x^s$ the
+(typically $k=4$ is sufficient), within $[0,M[$. Let call $x^s$ the
 coordinates of these $N$ points and $I^s$ their intensities. In order
 to have the frequency, the mean line $a.x+b$ (see equation \ref{equ:profile}) of $I^s$ is
 
@@ -158,12 +218,15 @@ The phase is computed via the equation :
 \theta = atan \left[ \frac{\sum_{i=0}^{N-1} sin(2\pi f x^s_i) \times I^s(x^s_i)}{\sum_{i=0}^{N-1} cos(2\pi f x^s_i) \times I^s(x^s_i)} \right]
 \end{equation}
 
-Two things can be noticed. Firstly, the frequency could also be
-obtained using the derivates of spline equations, which only implies
-to solve quadratic equations. Secondly, frequency of each profile is
-computed a single time, before the acquisition loop. Thus, $sin(2\pi f
-x^s_i)$ and $cos(2\pi f x^s_i)$ could also be computed before the loop, which leads to a
-much faster computation of $\theta$.
+Two things can be noticed:
+\begin{itemize}
+\item the frequency could also be obtained using the derivatives of the
+  spline equations, which only requires solving quadratic equations;
+\item the frequency of each profile is computed a single time, before
+  the acquisition loop. Thus, $sin(2\pi f x^s_i)$ and $cos(2\pi f
+  x^s_i)$ can also be computed before the loop, which leads to a much
+  faster computation of $\theta$.
+\end{itemize}
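+
+As an illustration, the phase computation above can be sketched in a few
+lines of C (the function name and the fact that the interpolated points
+and the frequency are passed as arguments are our assumptions; we use
+atan2 to recover the full $[-\pi,\pi]$ range):
+
+\begin{verbatim}
+/* Projection of the interpolated profile on sin/cos at the
+ * known frequency f, as in the equation above. */
+#include <math.h>
+
+double spl_phase(const double xs[], const double Is[],
+                 int N, double f)
+{
+    double s = 0.0, c = 0.0;
+    for (int i = 0; i < N; i++) {
+        s += sin(2.0 * M_PI * f * xs[i]) * Is[i];
+        c += cos(2.0 * M_PI * f * xs[i]) * Is[i];
+    }
+    /* the paper writes atan(s/c); atan2 also gives the quadrant */
+    return atan2(s, c);
+}
+\end{verbatim}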
 
 \subsubsection{Least square algorithm}
 
@@ -218,30 +281,208 @@ Several points can be noticed :
   computed.
 
 \item The simplest method to find the good $\theta$ is to discretize
-  $[-\pi,\pi]$ in $N$ steps, and to search which step leads to the
+  $[-\pi,\pi]$ in $nb_s$ steps, and to search which step leads to the
   result closest to zero. By the way, three other lookup tables can also be
   computed before the loop :
 
-\[ sin \theta, cos \theta, \left[ cos 2\theta \sum_{i=0}^{M-1} sin(4\pi f.i) + sin 2\theta \sum_{i=0}^{M-1} cos(4\pi f.i)\right] \]
+\[ sin \theta, cos \theta, \]
 
-\item This search can be very fast using a dichotomous process in $log_2(N)$
+\[ \left[ cos 2\theta \sum_{i=0}^{M-1} sin(4\pi f.i) + sin 2\theta \sum_{i=0}^{M-1} cos(4\pi f.i)\right] \]
 
-\end{itemize}
+\item This search can be done very quickly with a dichotomous process, in
+  $log_2(nb_s)$ steps.
 
-\subsubsection{Comparison}
-
-\subsection{FPGA constraints}
-
-%% contraintes imposées par le FPGA : algo pipeline/parallele, pas d'op math complexe, ...
+\end{itemize}
 
-\subsection{Least square algorithm}
+Finally, the whole method can be summarized as an algorithm (called LSQ in
+the following) in two parts, one executed before and one during the
+acquisition loop:
+
+\begin{algorithm}[h]
+\caption{LSQ algorithm - before acquisition loop.}
+\label{alg:lsq-before}
+
+  $M \leftarrow $ number of pixels of the profile\\
+  I[] $\leftarrow $ intensities of pixels\\
+  $f \leftarrow $ frequency of the profile\\
+  $s4i \leftarrow \sum_{i=0}^{M-1} sin(4\pi f.i)$\\
+  $c4i \leftarrow \sum_{i=0}^{M-1} cos(4\pi f.i)$\\
+  $nb_s \leftarrow $ number of discretization steps of $[-\pi,\pi]$\\
+
+  \For{$i=0$ to $nb_s - 1$}{
+    $\theta \leftarrow -\pi + 2\pi\times \frac{i}{nb_s}$\\
+    lut$_s$[$i$] $\leftarrow sin \theta$\\
+    lut$_c$[$i$] $\leftarrow cos \theta$\\
+    lut$_A$[$i$] $\leftarrow cos 2 \theta \times s4i + sin 2 \theta \times c4i$\\
+    lut$_{sfi}$[$i$] $\leftarrow sin (2\pi f.i)$\\
+    lut$_{cfi}$[$i$] $\leftarrow cos (2\pi f.i)$\\
+  }
+\end{algorithm}
+
+\begin{algorithm}[ht]
+\caption{LSQ algorithm - during acquisition loop.}
+\label{alg:lsq-during}
+
+  $\bar{x} \leftarrow \frac{M-1}{2}$\\
+  $\bar{y} \leftarrow 0$, $x_{var} \leftarrow 0$, $xy_{covar} \leftarrow 0$\\
+  \For{$i=0$ to $M-1$}{
+    $\bar{y} \leftarrow \bar{y} + $ I[$i$]\\
+    $x_{var} \leftarrow x_{var} + (i-\bar{x})^2$\\
+  }
+  $\bar{y} \leftarrow \frac{\bar{y}}{M}$\\
+  \For{$i=0$ to $M-1$}{
+    $xy_{covar} \leftarrow xy_{covar} + (i-\bar{x}) \times (I[i]-\bar{y})$\\
+  }
+  $slope \leftarrow \frac{xy_{covar}}{x_{var}}$\\
+  $start \leftarrow \bar{y} - slope\times \bar{x}$\\
+  \For{$i=0$ to $M-1$}{
+    $I[i] \leftarrow I[i] - start - slope\times i$\\
+  }
+
+  $I_{max} \leftarrow max_i(I[i])$, $I_{min} \leftarrow min_i(I[i])$\\
+  $amp \leftarrow \frac{I_{max}-I_{min}}{2}$\\
+
+  $Is \leftarrow 0$, $Ic \leftarrow 0$\\
+  \For{$i=0$ to $M-1$}{
+    $Is \leftarrow Is + I[i]\times $ lut$_{sfi}$[$i$]\\
+    $Ic \leftarrow Ic + I[i]\times $ lut$_{cfi}$[$i$]\\
+  }
+
+  $\delta \leftarrow \frac{nb_s}{2}$, $b_l \leftarrow 0$, $b_r \leftarrow \delta$\\
+  $v_l \leftarrow -2.Is - amp.$lut$_A$[$b_l$]\\
+
+  \While{$\delta >= 1$}{
+
+    $v_r \leftarrow 2.[ Is.$lut$_c$[$b_r$]$ + Ic.$lut$_s$[$b_r$]$ ] - amp.$lut$_A$[$b_r$]\\
+
+    \If{$!(v_l < 0$ and $v_r >= 0)$}{
+      $v_l \leftarrow v_r$ \\
+      $b_l \leftarrow b_r$ \\
+    }
+    $\delta \leftarrow \frac{\delta}{2}$\\
+    $b_r \leftarrow b_l + \delta$\\
+  }
+  \uIf{$!(v_l < 0$ and $v_r >= 0)$}{
+    $v_l \leftarrow v_r$ \\
+    $b_l \leftarrow b_r$ \\
+    $b_r \leftarrow b_l + 1$\\
+    $v_r \leftarrow 2.[ Is.$lut$_c$[$b_r$]$ + Ic.$lut$_s$[$b_r$]$ ] - amp.$lut$_A$[$b_r$]\\
+  }
+  \Else {
+    $b_r \leftarrow b_l + 1$\\
+  }
+
+  \uIf{$ abs(v_l) < v_r$}{
+    $b_{\theta} \leftarrow b_l$ \\
+  }
+  \Else {
+    $b_{\theta} \leftarrow b_r$ \\
+  }
+  $\theta \leftarrow \pi\times \left[\frac{2.b_{\theta}}{nb_s}-1\right]$\\
+
+\end{algorithm}
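+
+For illustration, algorithm \ref{alg:lsq-during} translates almost
+directly into the following C sketch. It uses floating point arithmetic
+for readability (the FPGA version works with integers) and replaces the
+dichotomous search by a linear scan over the $nb_s$ discretized phases;
+the function and table names are ours:
+
+\begin{verbatim}
+/* Illustrative translation of the LSQ processing of one
+ * profile. The lookup tables are assumed to be filled before
+ * the acquisition loop, as in the previous algorithm. */
+#include <math.h>
+
+#define M    20      /* pixels per profile (assumption)      */
+#define NB_S 1024    /* discretization steps of [-pi,pi]     */
+
+double lut_sfi[M], lut_cfi[M];   /* sin/cos(2*pi*f*i)         */
+double lut_s[NB_S], lut_c[NB_S]; /* sin/cos(theta_b)          */
+double lut_A[NB_S];              /* cos(2t)*s4i + sin(2t)*c4i */
+
+double lsq_phase(double I[M])
+{
+    /* remove the mean line start + slope*i of the profile */
+    double xb = (M - 1) / 2.0, yb = 0.0, xvar = 0.0, xycov = 0.0;
+    for (int i = 0; i < M; i++) {
+        yb += I[i];
+        xvar += (i - xb) * (i - xb);
+    }
+    yb /= M;
+    for (int i = 0; i < M; i++)
+        xycov += (i - xb) * (I[i] - yb);
+    double slope = xycov / xvar, start = yb - slope * xb;
+    for (int i = 0; i < M; i++)
+        I[i] -= start + slope * i;
+
+    /* amplitude and projections on sin/cos at the frequency f */
+    double imax = I[0], imin = I[0], Is = 0.0, Ic = 0.0;
+    for (int i = 0; i < M; i++) {
+        if (I[i] > imax) imax = I[i];
+        if (I[i] < imin) imin = I[i];
+        Is += I[i] * lut_sfi[i];
+        Ic += I[i] * lut_cfi[i];
+    }
+    double amp = (imax - imin) / 2.0;
+
+    /* keep the discretized theta whose criterion is closest
+     * to zero (LSQ uses a dichotomous search instead) */
+    int best = 0;
+    double vbest = INFINITY;
+    for (int b = 0; b < NB_S; b++) {
+        double v = 2.0 * (Is * lut_c[b] + Ic * lut_s[b])
+                   - amp * lut_A[b];
+        if (fabs(v) < vbest) { vbest = fabs(v); best = b; }
+    }
+    return M_PI * (2.0 * best / NB_S - 1.0);
+}
+\end{verbatim}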
 
-%% description précise
-%% avantage sur FPGA 
+
-\subsection{VDHL design paradigms}
 
-\subsection{VDHL implementation}
+\subsubsection{Comparison}
 
+We compared the two algorithms according to three criteria:
+\begin{itemize}
+\item precision of the results on a cosine profile distorted with noise,
+\item number of operations,
+\item complexity of implementing an FPGA version.
+\end{itemize}
+
+For the first item, we produced a Matlab version of each algorithm,
+running with double precision values. The profile was generated for
+about 34000 different combinations of period ($\in [3.1, 6.1]$, step = 0.1),
+phase ($\in [-3.1 , 3.1]$, step = 0.062) and slope ($\in [-2 , 2]$,
+step = 0.4). For LSQ, $nb_s = 1024$, which leads to a maximal error of
+$\frac{\pi}{1024}$ on the phase computation. Current experiments by
+A. Meister and M. Favre show a ratio of 50 between the variation of phase
+and the deflection of a lever. Thus, the maximal error due to
+discretization corresponds to an error of 0.15nm on the lever
+deflection, which is smaller than the best precision they achieved,
+i.e. 0.3nm.
+
+For each test, we added some noise to the profile: each group of two
+pixels has its intensity increased by a random number picked in $[-N,N]$
+(note that picking a new value for each pixel does not distort the
+profile enough). The absolute error on the result is evaluated as the
+difference between the reference and the computed phase, relative to
+$2\pi$ and expressed as a percentage, that is: $err =
+100\times \frac{|\theta_{ref} - \theta_{comp}|}{2\pi}$.
+
+Table \ref{tab:algo_prec} gives the maximum and average error for the two
+algorithms and increasing values of $N$.
+
+\begin{table}[ht]
+  \begin{center}
+    \begin{tabular}{|c|c|c|c|c|}
+      \hline
+      & \multicolumn{2}{c|}{SPL} & \multicolumn{2}{c|}{LSQ} \\ \cline{2-5}
+      noise ($N$) & max. err. & aver. err. & max. err. & aver. err. \\ \hline
+      0 & 2.46 & 0.58 & 0.49 & 0.1 \\ \hline
+      2.5 & 2.75 & 0.62 & 1.16 & 0.22 \\ \hline
+      5 & 3.77 & 0.72 & 2.47 & 0.41 \\ \hline
+      7.5 & 4.72 & 0.86 & 3.33 & 0.62 \\ \hline
+      10 & 5.62 & 1.03 & 4.29 & 0.81 \\ \hline
+      15 & 7.96 & 1.38 & 6.35 & 1.21 \\ \hline
+      30 & 17.06 & 2.6 & 13.94 & 2.45 \\ \hline
+    \end{tabular}
+    \caption{Error (in \%) for cosine profiles with noise.}
+    \label{tab:algo_prec}
+  \end{center}
+\end{table}
+
+These results show that the two algorithms are very close, with a
+slight advantage for LSQ. Furthermore, both behave very well against
+noise. Assuming the experimental ratio of 50 (see above), an error of
+1 percent on the phase corresponds to an error of 0.5nm on the lever
+deflection, which is very close to the best achieved precision.
+
+Obviously, it is very hard to predict which level of noise will be
+present in real experiments and how it will distort the
+profiles. Nevertheless, we can see in figure \ref{fig:noise20} the
+profile with $N=10$ that leads to the biggest error. It is somewhat
+distorted, with peaks and straight/rounded portions, and relatively
+close to most profiles obtained in experiments. Figure \ref{fig:noise60}
+shows a sample worst-case profile for $N=30$. It is completely distorted,
+far beyond the worst experimental ones.
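+
+To make the test protocol above concrete, the following C sketch shows
+how a noisy test profile and the error metric can be produced (the exact
+profile model, the unit amplitude and all names are our assumptions based
+on the description above):
+
+\begin{verbatim}
+/* Synthetic profile: mean line slope*i + offset plus a cosine
+ * of period T and phase theta; one noise draw per pixel pair. */
+#include <stdlib.h>
+#include <math.h>
+
+void make_profile(double I[], int M, double slope, double offset,
+                  double T, double theta, double noise)
+{
+    for (int i = 0; i < M; i++)
+        I[i] = slope * i + offset
+               + cos(2.0 * M_PI * i / T + theta);
+    for (int i = 0; i < M; i += 2) {
+        /* random value in [-N,N], shared by two pixels */
+        double n = noise * (2.0 * rand() / RAND_MAX - 1.0);
+        I[i] += n;
+        if (i + 1 < M)
+            I[i + 1] += n;
+    }
+}
+
+/* error in percent of a period: 100*|t_ref - t_comp|/(2*pi) */
+double phase_err(double t_ref, double t_comp)
+{
+    return 100.0 * fabs(t_ref - t_comp) / (2.0 * M_PI);
+}
+\end{verbatim}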
+
+\begin{figure}[ht]
+\begin{center}
+  \includegraphics[width=9cm]{intens-noise20-spl}
+\end{center}
+\caption{Sample worst-case profile for $N=10$.}
+\label{fig:noise20}
+\end{figure}
+
+\begin{figure}[ht]
+\begin{center}
+  \includegraphics[width=9cm]{intens-noise60-lsq}
+\end{center}
+\caption{Sample worst-case profile for $N=30$.}
+\label{fig:noise60}
+\end{figure}
+
+The second criterion is relatively easy to estimate for LSQ and harder
+for SPL because of the $atan$ operation. In both cases, the number of
+operations is proportional to the number of pixels $M$. For LSQ, it also
+depends on $nb_s$, and for SPL on $N = k\times M$, i.e. the number of
+interpolated points.
+
+We assume that $M=20$, $nb_s=1024$, $k=4$, that all precomputable parts
+are already in lookup tables and that only arithmetic operations (+, -,
+*, /) are taken into account. Translating the two algorithms into C code,
+we obtain about 400 operations for LSQ and 1340 (plus the unknown cost of
+$atan$) for SPL. Even if the result is largely in favor of LSQ, we can
+notice that executing the C code of SPL (compiled with \texttt{-O3}) on a
+2.33GHz Core 2 Duo takes only 6.5$\mu$s on average, which is within our
+design goals. The final decision is thus driven by the third criterion.\\
+
+The Spartan 6 FPGA used in our architecture has a hard constraint: it has
+no floating point unit. Thus, all computations have to be done with
+integers.
+
+
+
+\subsection{VHDL design paradigms}
+
+\subsection{VHDL implementation}
 
 \section{Experimental results}
 \label{sec:results}