\chapterauthor{Gleb Beliakov and Shaowu Liu}{School of Information Technology, Deakin University, Burwood 3125, Australia}
\chapter{Parallel monotone spline interpolation and approximation on GPUs}

\section{Introduction} \label{ch11:Introduction}
Monotonicity preserving interpolation and approximation have received substantial attention in the last thirty years because of their numerous applications in computer-aided design, statistics, and machine learning \cite{Dierckx1995_book,Kvasov2000_book,deboor2001_book}. Constrained splines \index{spline}\index{constrained splines}\index{monotonicity} are particularly popular because of their flexibility in modeling different geometrical shapes, sound theoretical properties, and availability of numerically stable algorithms \cite{Dierckx1995_book,Schumaker1981_book,deboor2001_book}.
In this work we examine parallelization and adaptation for GPUs of several algorithms for monotone spline interpolation and data smoothing, which arose in the context of estimating probability distributions.

Estimating cumulative probability distribution functions (CDFs) from data is quite common in data analysis. In our particular case we faced this problem in the context of partitioning univariate data with the purpose of efficient sorting. It was necessary to partition large data sets into chunks of approximately equal size, so that these chunks could be sorted independently and subsequently concatenated. To do that, the empirical CDF of the data was used to find the quantiles, which served to partition the data. The CDF was estimated from a number of pairs $(x_i,y_i)$, $i=1,\ldots,n$, where $y_i$ was the proportion of data no larger than $x_i$. As the data could come from a variety of distributions, a distribution-free nonparametric fitting procedure was required to interpolate these pairs. Needless to say, the whole process was aimed at the GPU, and hence the use of the CPU for invoking serial algorithms had to be minimized.
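The partitioning idea above can be sketched in a few lines of host code. The following plain C++ fragment is an illustration of ours (the function names are not part of the chapter's package): it builds the pairs $(x_i,y_i)$ from a sample and reads off an approximate quantile to use as a partition boundary.

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <utility>
#include <vector>

// Build empirical CDF pairs (x_i, y_i): y_i is the proportion of data
// no larger than x_i.  Here we simply take every step-th order statistic.
std::vector<std::pair<double,double>>
empirical_cdf(std::vector<double> data, std::size_t step) {
    std::sort(data.begin(), data.end());
    std::vector<std::pair<double,double>> cdf;
    for (std::size_t i = step - 1; i < data.size(); i += step)
        cdf.push_back({data[i], double(i + 1) / data.size()});
    return cdf;
}

// Given the CDF pairs, return the abscissa whose CDF value is closest to q:
// an approximate q-quantile, usable as a partition boundary.
double approx_quantile(const std::vector<std::pair<double,double>>& cdf,
                       double q) {
    double best = cdf.front().first, gap = 2.0;
    for (const auto& p : cdf)
        if (std::fabs(p.second - q) < gap) {
            gap = std::fabs(p.second - q);
            best = p.first;
        }
    return best;
}
```

In the chapter's setting the pairs are interpolated by a monotone spline rather than searched directly, but the sketch shows where the quantiles come from.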
\begin{figure}[h]
\centering
\includegraphics[angle=0,width=9cm]{Chapters/chapter11/gregory1_plot1.pdf}
\caption[Cubic spline (solid) and monotone quadratic spline (dashed) interpolating monotone data]{Cubic spline (solid) and monotone quadratic spline (dashed) interpolating monotone data from \cite{Gregory1982}. The cubic spline fails to preserve monotonicity of the data.}
\label{ch11:fig1}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[angle=0,width=9cm]{Chapters/chapter11/gregory1_plot2_b.pdf}
\caption[Hermite cubic spline (solid) and Hermite rational spline interpolating monotone data]{Hermite cubic spline (solid) and Hermite rational spline interpolating monotone data from \cite{Gregory1982} with nonnegative prescribed slopes. Despite the nonnegative slopes, the Hermite cubic spline is not monotone.}
\label{ch11:fig2}
\end{figure}
The above-mentioned application is one of many examples (e.g., mass spectrography \cite{Kearsley_2006} and global warming data \cite{Yohai}) where univariate data need to be fitted by monotonicity-preserving interpolants. The CDF is a monotone increasing function, whose inverse, called the quantile function, can be used to calculate the quantiles. Spline interpolation would be the most suitable nonparametric method to fit the CDF, except that polynomial splines do not preserve monotonicity of the data, as illustrated in Figure \ref{ch11:fig1}.

The failure of splines to preserve monotonicity has prompted fundamental research in this area since the 1960s. One of the first remedies was the splines in tension of Schweikert \cite{Sch}, where a tension parameter controls the shape of exponential splines \cite{Spath1969}. Later, several monotonicity-preserving polynomial spline algorithms were proposed \cite{Schumaker1983,PasRoul1977,AndElf1987,Andersson1991_JAT,McAllister1981_ACM}. These algorithms typically rely on introducing additional spline knots between the abscissae of the data. Algorithmic developments remain active to this day; see, for example, \cite{Kvasov2000_book,Abbas2011}.

When, in addition to the pairs $(x_i, y_i)$, the slopes of the function are available, i.e., the data come in triples $(x_i, y_i, p_i)$, the interpolation problem is called Hermite, and Hermite splines are used. However, even when the sequence $y_i$ is increasing and the slopes $p_i$ are nonnegative, cubic Hermite splines may still fail to be monotone, as illustrated in Figure \ref{ch11:fig2}. Thus, monotone Hermite splines are needed \cite{Gregory1982}. \index{Hermite splines}

Another issue with monotone approximation is noisy data, where inaccuracies make the input sequence $y_i$ itself nonmonotone, so that monotone spline interpolation algorithms fail. Monotone spline smoothing algorithms are available, e.g., \cite{Andersson1991_JAT,Elfving1989_NM}. Such algorithms are based on numerically solving a quadratic (or another convex) programming problem, and have not yet been adapted to parallel processing.

In this work we examine several monotone spline fitting algorithms and select the ones we believe are most suitable for parallelization on GPUs. We pay attention to numerical efficiency, in terms of both computations and memory access patterns, and favor one-pass algorithms. We also address smoothing of noisy data and develop a parallel version of the Minimum Lower Sets (MLS) algorithm for the isotonic regression problem \cite{Best1990, Robertson_book}.
\index{isotone regression}

The rest of the chapter is organized as follows. Section \ref{ch11:splines} discusses monotone spline interpolation methods and presents two parallel algorithms. Section \ref{ch11:smoothing} deals with the smoothing problem: it presents the isotonic regression problem and discusses the Pool Adjacent Violators (PAV) and MLS algorithms. Combined with monotone spline interpolation, the parallel MLS method makes it possible to build a monotone spline approximation to noisy data entirely on the GPU. Section \ref{ch11:conc} concludes.
\section{Monotone splines} \label{ch11:splines}

\index{constrained splines} \index{monotonicity}
Splines are piecewise continuous functions, very popular in numerical approximation and computer-aided design \cite{deboor2001_book,Dierckx1995_book}. An example of a spline is the broken-line interpolant. Typically, polynomial splines are used, and the first (and often second) derivatives of the polynomial pieces are required to match at the knots. The knots of the spline are usually the abscissae of the input data, although this condition is not always required (e.g., splines with free knots \cite{Jupp_1978,Dierckx1995_book,Beliakov2003_amc}).

Polynomial splines are often represented in the B-spline basis, in which case their coefficients are computed from the input data by solving a banded system of linear equations \cite{Lyche1973, Dierckx1995_book, deboor2001_book}. Tridiagonal systems arise in cubic spline interpolation, while pentadiagonal systems arise in cubic spline smoothing \cite{Lyche1973}. Splines possess important extremal properties \cite{Holladay1957,Lyche1973}; in particular, splines of degree $2m-1$ are the ``smoothest'' functions that interpolate (or approximate, in the least squares sense) the data. The smoothness term is the Tikhonov regularization functional, the $L_2$ norm of the $m$th derivative of the interpolant \cite{Lyche1973}.

When the data are known to come from a monotone function, the interpolant needs to be monotone as well. Even if the sequence of data ordinates $y_i$, $i=1,\ldots,n$, is nondecreasing, cubic (and higher degree) interpolating splines are not necessarily monotone; an example is shown in Figure \ref{ch11:fig1}. To deal with the problem of extraneous inflection points, Schweikert \cite{Sch} proposed splines in tension, which are piecewise exponential functions. Splines in tension have been further explored in \cite{Spath1969, SapKak1988, SapKakLouk1988} and many subsequent works.
\subsection{Monotone quadratic splines}

For polynomial splines, monotone and otherwise constrained splines were developed in \cite{Schumaker1983,AndElf1987,Andersson1991_JAT,McAllister1981_ACM,PasRoul1977}. Two monotone quadratic spline algorithms were published in the early 1980s \cite{McAllister1981_ACM, Schumaker1983}. Both are based on introducing additional interpolation knots under certain conditions, to facilitate preservation of monotonicity of the data. McAllister and Roulier's algorithm \cite{McAllister1981_ACM} introduces at most two extra knots between two neighbouring data, while Schumaker's algorithm \cite{Schumaker1983} introduces only one extra knot. In addition, Schumaker's algorithm is one-pass, which makes it particularly suited for parallelization, as no system of equations needs to be solved. While parallel tridiagonal linear system solvers have been developed for GPUs \cite{tridiag_GPU}, the obvious advantage of a one-pass algorithm is speed. Because of this, we chose Schumaker's algorithm for GPU parallelization.

Let us formally describe Schumaker's algorithm, with Butland's slopes \cite{Butland1980}.
The spline is a piecewise quadratic polynomial of the form
\begin{equation*}
s(x)=\alpha_i+\beta_i(x-t_i)+\gamma_i(x-t_i)^2, \quad x \in [t_i,t_{i+1}],\ i=1,\ldots,T,
\end{equation*}
where the knot sequence $\{t_i\}$ is obtained from the data $\{x_i\}$ by inserting at most one additional knot per subinterval. Let $\delta_i=(y_{i+1}-y_i)/(x_{i+1}-x_i)$, $i=1,\ldots,n-1$. We define Butland's slopes for $i=2,\ldots,n-1$ as
\begin{equation*}
d_i=\left\{\begin{array}{ll}
\frac{2\delta_{i-1}\delta_i}{\delta_{i-1}+\delta_i}, & \mbox{if } \delta_{i-1}\delta_i>0, \\
0, & \mbox{otherwise}.
\end{array}\right.
\end{equation*}
The first and the last Butland's slopes are
\begin{equation*}
d_1=\left\{\begin{array}{ll}
2\delta_{1}-d_2, & \mbox{if } \delta_{1}(2\delta_1-d_2)>0, \\
0, & \mbox{otherwise},
\end{array}\right.
\end{equation*}
and
\begin{equation*}
d_n=\left\{\begin{array}{ll}
2\delta_{n-1}-d_{n-1}, & \mbox{if } \delta_{n-1}(2\delta_{n-1}-d_{n-1})>0,\\
0, & \mbox{otherwise}.
\end{array}\right.
\end{equation*}
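The slope formulas translate directly into code. The following serial C++ sketch is our own illustration (the GPU kernels in the listings compute the same quantities in parallel across threads).

```cpp
#include <vector>

// Butland's slopes d_i for data (x_i, y_i); delta_i are the chord slopes.
// Interior slopes use the harmonic mean of adjacent chord slopes when the
// chords agree in sign, and 0 otherwise; the endpoints use the one-sided
// formulas, clipped to 0 when the sign condition fails.
std::vector<double> butland_slopes(const std::vector<double>& x,
                                   const std::vector<double>& y) {
    const int n = (int)x.size();
    std::vector<double> delta(n - 1), d(n, 0.0);
    for (int i = 0; i < n - 1; ++i)
        delta[i] = (y[i + 1] - y[i]) / (x[i + 1] - x[i]);
    for (int i = 1; i < n - 1; ++i)          // interior slopes
        d[i] = (delta[i - 1] * delta[i] > 0)
                 ? 2 * delta[i - 1] * delta[i] / (delta[i - 1] + delta[i])
                 : 0.0;
    double e1 = 2 * delta[0] - d[1];          // first slope
    d[0] = (delta[0] * e1 > 0) ? e1 : 0.0;
    double en = 2 * delta[n - 2] - d[n - 2];  // last slope
    d[n - 1] = (delta[n - 2] * en > 0) ? en : 0.0;
    return d;
}
```

For strictly increasing data on a straight line all slopes come out equal to the line's slope, and whenever two adjacent chords disagree in sign the interior slope is zeroed, which is what enforces monotonicity of the final spline.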
When $d_i+d_{i+1}=2\delta_i$, a single quadratic polynomial interpolates the data on $[x_i,x_{i+1}]$, and $t_i=x_i$, $\alpha_i=y_i$, $\beta_i=d_i$, and $\gamma_i=\frac{d_{i+1}-d_i}{2(x_{i+1}-x_i)}$. Otherwise an additional knot $t_i$ is required, and
\begin{eqnarray*}
\alpha_{i}&=&y_{i}, \quad \beta_{i}=d_{i}, \quad \gamma_{i}=\frac{\bar{d}_{i}-d_{i}}{2(t_{i}-x_{i})}, \quad x\in\left [ x_{i},t_{i} \right ],\\
\bar{\alpha}_{i}&=&y_{i}+d_{i}(t_{i}-x_{i})+\frac{(\bar{d}_{i}-d_{i})(t_{i}-x_{i})}{2}, \quad \bar{\beta}_{i}=\bar{d}_{i}, \quad \bar{\gamma}_{i}=\frac{d_{i+1}-\bar{d}_{i}}{2(x_{i+1}-t_{i})}, \quad x\in\left [ t_{i},x_{i+1} \right ],
\end{eqnarray*}
where
\begin{equation*}
\bar{d}_{i}=(2{\delta}_{i}-d_{i+1})+\frac{(d_{i+1}-d_{i})(t_{i}-x_{i})}{x_{i+1}-x_{i}}.
\end{equation*}
The position of the additional knot is selected as follows:
\begin{equation*}
t_i=\left\{\begin{array}{ll}
x_{i+1}+\frac{(d_{i}-{\delta}_{i})(x_{i+1}-x_{i})}{(d_{i+1}-d_{i})},& \text{ if } (d_{i+1}-{\delta}_{i})(d_{i}-{\delta}_{i})< 0, \\
\frac{1}{2}(x_{i+1}+x_{i}), & \text{ otherwise. }
\end{array}\right.
\end{equation*}
It is almost straightforward to parallelize this scheme for GPUs, by processing each subinterval $[x_i,x_{i+1}]$ independently in a separate thread. However, it is not known in advance whether an extra knot $t_i$ needs to be inserted, and therefore the calculation of the position of the knot in the output sequence of knots $\{t_i\}$ is problematic for a parallel implementation (for a sequential algorithm no such issue arises). To avoid serialization, we decided to insert an additional knot in every interval $[x_i,x_{i+1}]$, but to set $t_i=x_i$ when the extra knot is not actually needed. This way the positions of the output knots are known in advance, the length of the knot sequence is $2(n-1)$, and all calculations can be performed independently. The price we pay is that some of the spline knots may coincide. However, this does not affect spline evaluation, as one of the coinciding knots is simply disregarded and the spline coefficients are replicated (so for a double knot $t_i=t_{i+1}$, we have $\alpha_i=\alpha_{i+1}$, $\beta_i=\beta_{i+1}$, $\gamma_i=\gamma_{i+1}$). Our implementation is presented in Listings \ref{ch11:algcoef1}--\ref{ch11:algcoef}.
\lstinputlisting[label=ch11:algcoef1,caption=calculation of monotone spline knots and coefficients.]{Chapters/chapter11/code2.cu}
At the spline evaluation stage we need to compute $s(z_k)$ for a sequence of query values $\{z_k\}$, $k=1,\ldots,K$. For each $z_k$ we locate the interval $[t_i,t_{i+1}]$ containing $z_k$ using the bisection algorithm presented in Listing \ref{ch11:algeval}, and then apply the corresponding coefficients of the quadratic piece. This is also done in parallel.
The bisection algorithm could be implemented using texture memory (to cache the array \texttt{z}), but this is not shown in Listing \ref{ch11:algeval}.
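A host-side equivalent of the device bisection routine can be sketched as follows (our illustration, not the code of Listing \ref{ch11:algeval}): each query halves the search range over the knot array until a single interval remains.

```cpp
#include <vector>

// Return index i such that t[i] <= z < t[i+1] (clamped to the valid range),
// by bisection over the nondecreasing knot sequence t.
int locate_interval(const std::vector<double>& t, double z) {
    int mi = 0, ma = (int)t.size() - 1;
    while (ma - mi > 1) {
        int i = (mi + ma) / 2;
        if (z >= t[i]) mi = i;   // the containing interval lies in the upper half
        else           ma = i;   // the containing interval lies in the lower half
    }
    return mi;
}
```

Each query takes $O(\log T)$ steps, and different queries are independent, so on the GPU each thread simply runs this loop for its own $z_k$.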
\lstinputlisting[label=ch11:algcoef,caption=implementation of the kernel for calculating spline knots and coefficients; function fmax is used to avoid division by zero for data with coinciding abscissae.]{Chapters/chapter11/code1.cu}
\lstinputlisting[label=ch11:algeval,caption=implementation of the spline evaluation algorithm for GPU.]{Chapters/chapter11/code3.cu}
\subsection{Monotone Hermite splines}
\index{Hermite splines} \index{monotonicity}
In this section, in addition to the points $(x_i,y_i)$ we also have the slopes $p_i$, and hence we consider monotone Hermite interpolation. In our motivating application of CDF estimation, the values $p_i$ are easily obtained together with $y_i$, and their use may help to build a more accurate interpolant.
Of course, for monotone nondecreasing functions we must have $p_i\geq 0$. However, this does not guarantee that the spline interpolant is monotone, as can be seen in Figure \ref{ch11:fig2}. Fritsch and Carlson \cite{Fritsch1980} showed that nonnegativity of the $p_i$ is not sufficient to guarantee monotonicity, and designed a process for modifying the derivatives so that the necessary and sufficient conditions for monotonicity of a piecewise cubic are met. Hence, the values $p_i$ are not matched exactly. In contrast, Gregory and Delbourgo \cite{Gregory1982} designed a piecewise rational quadratic spline for which nonnegativity of the $p_i$ is both necessary and sufficient for monotonicity.
The rational quadratic spline in \cite{Gregory1982} is constructed as
\begin{equation*}
s(x)=\left\{ \begin{array}{ll}
\frac{P_i(\theta)}{Q_i(\theta)}, & \mbox{if } \Delta_{i}\neq 0, \\
y_i, & \mbox{otherwise},
\end{array}\right.
\end{equation*}
where
\begin{equation*}
\theta=(x-x_i)/(x_{i+1}-x_i), \quad \Delta_i=(y_{i+1}-y_i)/(x_{i+1}-x_i),
\end{equation*}
and
\begin{equation*}
P_i(\theta)/Q_i(\theta)=y_i+\frac{(y_{i+1}-y_i)[\Delta_i \theta^2+p_i \theta (1-\theta)]}{\Delta_i+(p_{i+1}+p_i-2\Delta_i)\theta(1-\theta)}.
\end{equation*}
This rational spline is continuously differentiable and interpolates both the values $y_i$ and the derivatives $p_i$. Its derivative is given by
\begin{equation*}
s'(x)=\Delta_i^2[p_{i+1}\theta^2+2\Delta_i \theta(1-\theta)+p_i (1-\theta)^2]/Q_i(\theta)^2,
\end{equation*}
where
\begin{equation*}
Q_i(\theta)= \Delta_i+(p_{i+1}+p_i-2\Delta_i)\theta(1-\theta),
\end{equation*}
provided $\Delta_i \neq 0$ ($s'(x)=0$ otherwise); this expression is clearly nonnegative.
It is clear that Gregory and Delbourgo's Hermite interpolant \cite{Gregory1982} is trivially parallel, and the parameters $h_i=x_{i+1}-x_i$ and $\Delta_i$ are easily computed in a simple kernel. Evaluation of the spline and its derivative is accomplished by locating the interval containing the query point $x$ using bisection, as in Listing \ref{ch11:algeval}, and applying the above formulas.
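For illustration, a host-side C++ sketch of the per-interval evaluation is given below (our own code; in the chapter's setting this would run inside a kernel, one query per thread).

```cpp
#include <vector>

// Evaluate Gregory and Delbourgo's rational quadratic on [x[i], x[i+1]],
// given data values y and prescribed slopes p, at the query point xq.
double rational_hermite(const std::vector<double>& x,
                        const std::vector<double>& y,
                        const std::vector<double>& p,
                        int i, double xq) {
    double h     = x[i + 1] - x[i];
    double Delta = (y[i + 1] - y[i]) / h;    // chord slope
    if (Delta == 0.0) return y[i];           // flat piece: s(x) = y_i
    double th  = (xq - x[i]) / h;            // theta in [0, 1]
    double num = Delta * th * th + p[i] * th * (1 - th);
    double Q   = Delta + (p[i + 1] + p[i] - 2 * Delta) * th * (1 - th);
    return y[i] + (y[i + 1] - y[i]) * num / Q;
}
```

At $\theta=0$ and $\theta=1$ the formula reduces to $y_i$ and $y_{i+1}$ respectively, and for nonnegative slopes the value stays between the two data ordinates.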
\section{Smoothing noisy data via parallel isotone regression} \label{ch11:smoothing}
Inaccuracies in the data are common in practice and need to be accounted for during the spline approximation process. Smoothing polynomial splines were presented in \cite{Lyche1973}, where the data were fitted in the least squares sense while also minimizing the $L_2$ norm of the $m$th derivative of the spline. Monotone smoothing splines have been dealt with in several works; in particular we mention \cite{Andersson1991_JAT,Elfving1989_NM}. The presented algorithms rely on solving quadratic programming problems. Monotone approximating splines with fixed knots distinct from the data have been presented in \cite{Beliakov2000_ata}, where an instance of a quadratic programming problem is solved as well.
\index{isotone regression} \index{monotonicity}
Another approach consists of monotonizing the data, so that the sequence $y_i$ becomes monotone. This approach is known as isotone regression \cite{Best1990, Robertson_book}. It differs from monotone spline smoothing in that the regularization term controlling the $L_2$ norm of the $m$th derivative is not taken into account. Usually the data are monotonized by minimizing the squared differences to the inputs, which is a quadratic programming problem, typically solved by active set methods \cite{Best1990}. The popular PAV algorithm (PAVA) provides an efficient numerical solution.
\index{PAV algorithm}
PAVA consists of the following steps. The sequence $\{y_i\}$ is processed from the start. If a violation of monotonicity $y_i>y_{i+1}$ is found, both values $y_i$ and $y_{i+1}$ are replaced with their average $y'_i$, and both data form a block. Since the new value $y'_i$ is smaller than $y_i$, monotonicity may become violated with respect to $y_{i-1}$. If this is the case, the $(i-1)$st, $i$th, and $(i+1)$st data are merged into a block and their values are replaced with their average. We continue this back-averaging as needed until monotonicity is restored.
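The steps above can be sketched as a serial routine. The following plain C++ version is our own illustration (it stores each block as a mean and a size, which makes the back-averaging explicit).

```cpp
#include <cstddef>
#include <vector>

// Serial PAVA sketch: scan the data once; whenever the last two blocks
// violate monotonicity, merge them and replace both with their weighted
// average.  Finally expand each block back to element level.
std::vector<double> pava(const std::vector<double>& y) {
    std::vector<double> mean;   // block averages
    std::vector<int>    count;  // block sizes
    for (double v : y) {
        mean.push_back(v);
        count.push_back(1);
        // back-average while the last two blocks violate monotonicity
        while (mean.size() > 1 && mean[mean.size() - 2] > mean.back()) {
            double m = mean[mean.size() - 2] * count[count.size() - 2]
                     + mean.back() * count.back();
            int c = count[count.size() - 2] + count.back();
            mean.pop_back();
            count.pop_back();
            mean.back()  = m / c;
            count.back() = c;
        }
    }
    std::vector<double> out;
    for (std::size_t b = 0; b < mean.size(); ++b)
        out.insert(out.end(), count[b], mean[b]);
    return out;
}
```

Each datum enters and leaves the block stack at most once, which is what gives PAVA its $O(n)$ complexity.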
Various serial implementations of PAVA exist. It is noted in \cite{Kearsley_2006} that for PAVA, which is based on ideas from convex analysis, a decomposition theorem holds: performing PAVA separately on two contiguous subsets of the data, and then performing PAVA on the result, produces the isotonic regression of the whole data set. Thus isotonic regression is parallelizable, and the divide-and-conquer approach, decomposing the original problem into two smaller subproblems, can be implemented on multiple processors. However, to our knowledge, no parallel PAVA for many-core systems such as GPUs exists.
\index{MLS (minimum lower sets) algorithm}
Another approach to isotonic regression is the MLS algorithm \cite{Best1990, Robertson_book}. It provides the same solution as PAVA, but works differently. For each datum (or block), MLS selects the largest contiguous block of subsequent data with the smallest average. If this average is smaller than that of the preceding block, the blocks are merged and the data in the block are replaced with their average. MLS is also an active set method \cite{Best1990}, but its complexity is $O(n^2)$, as opposed to the $O(n)$ of PAVA and of another active set algorithm proposed in \cite{Best1990} under the name Algorithm A.

In terms of GPU parallelization, neither PAVA nor Algorithm A appears suitable, as the techniques that achieve $O(n)$ complexity are inherently serial.
In this work we focus on parallelizing MLS. First, we precompute the values
\begin{equation*}
z_i=\sum_{k=i}^{n} y_k, \quad i=1,\ldots,n, \qquad z_{n+1}=0,
\end{equation*}
using the parallel partial sum algorithm (the \texttt{scan} algorithm from the Thrust \cite{Thrust} library).
From these values we can compute the averages of the blocks of data with indices $\{i,i+1,\ldots,j\}$:
\begin{equation} \label{ch11:eq1}
P_{ij}=\frac{1}{j-i+1}\sum_{k=i}^j y_k = \frac{1}{j-i+1} (z_i-z_{j+1}).
\end{equation}
As per the MLS algorithm, for each fixed $i$ from 1 to $n$, we compute the smallest $P_{ij}$ starting from $j=i+1$ and fix the minimizing index $j^*$. If $y_i>P_{ij^*}$, we replace the values $y_i,\ldots,y_{j^*}$ with their average $P_{ij^*}$; otherwise we keep the value $y_i$. In case of replacement, we advance $i$ to position $j^*+1$. To form a block we check the condition $y_i>P_{i j^*}$, which is equivalent to $y_i>P_{i+1,j^*}$, since $P_{ij}=\frac{1}{j-i+1}((j-i) P_{i+1,j}+y_i)$ implies that both inequalities hold simultaneously.
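The precomputation and the block averages (\ref{ch11:eq1}) can be modelled serially as follows (our C++ sketch; on the GPU the suffix sums are produced by a reverse \texttt{scan}).

```cpp
#include <vector>

// Suffix sums z_i = y_i + ... + y_{n-1}, with z_n = 0: the serial analogue
// of the reverse inclusive scan used on the GPU.
std::vector<double> suffix_sums(const std::vector<double>& y) {
    std::vector<double> z(y.size() + 1, 0.0);
    for (int i = (int)y.size() - 1; i >= 0; --i)
        z[i] = z[i + 1] + y[i];
    return z;
}

// Block average P_{ij} = (z_i - z_{j+1}) / (j - i + 1), in O(1) per query.
double block_average(const std::vector<double>& z, int i, int j) {
    return (z[i] - z[j + 1]) / (j - i + 1);
}
```

With the suffix sums in place, every thread can evaluate any $P_{ij}$ in constant time, which is what makes the $O(n)$ inner loop of each thread feasible.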
Now the presented algorithm can be parallelized for GPUs: each datum $y_i$ is treated in its own thread. The calculation of the smallest $P_{ij}$ is performed serially within the $i$th thread, or in parallel by starting child threads.
Replacing the values $y_i,\ldots,y_{j^*}$ with $P_{ij^*}$ leads to potential clashes, as several threads can perform this operation on the same elements $y_k$. This can be circumvented by using the max operation, i.e., $y_k\leftarrow \max(y_k,P_{ij})$. Note that thread $i$ replaces the value $y_k$, $k\geq i$, if $P_{ij}<y_i$. Now, if two threads $i_1$ and $i_2$ need to replace $y_k$ and $i_1<i_2$, we must have $P_{i_1 j_1}\geq P_{i_2 j_2}$, as formalized in the following proposition.
\begin{proposition} If the partial averages $P_{ij}$ are defined by (\ref{ch11:eq1}), $i_1<i_2$, and $j_1, j_2$ denote the minimizers of $P_{i_1 j}$ over $j\geq i_1$ (respectively, of $P_{i_2 j}$ over $j\geq i_2$), then $P_{i_1 j_1}\geq P_{i_2 j_2}$.
\end{proposition}

\begin{proof}
Because the average satisfies
\begin{equation*}
\frac{1}{k} \sum_{i=1}^k y_i \geq \frac{1}{n-k} \sum_{i=k+1}^{n} y_i \ \Rightarrow \
\frac{1}{k} \sum_{i=1}^k y_i \geq \frac{1}{n} \sum_{i=1}^{n} y_i \geq \frac{1}{n-k} \sum_{i=k+1}^{n} y_i,
\end{equation*}
we must have $P_{i_1, i_2-1} \geq P_{i_1 j_1} \geq P_{i_2 j_1}$. At the same time $P_{i_2 j_1} \geq P_{i_2 j_2}$, which implies $P_{i_1 j_1}\geq P_{i_2 j_2}$.
\end{proof}
The order in which the steps are performed is not guaranteed in parallel computations. By the proposition above, whenever the value $y_{i_1}$ needs replacement by the average of its block, we have $P_{i_2 j_2}\leq P_{i_1 j_1}<y_{i_1}$, so all $y_k$, $i_1 \leq k \leq j_1$, are overridden with $P_{i_1 j_1}$, which is no smaller than $P_{i_2 j_2}$. Thus, in the serial algorithm $y_k$ may only be replaced with a larger value as the algorithm progresses. Therefore, the max operation in the parallel algorithm ensures that $y_k$ is replaced with the same value as in the serial algorithm, regardless of the order of the steps.
We present the source code of the parallel MLS in Listing \ref{ch11:algMLS}. Here we reduce the number of writes to global memory by using an indexing array \texttt{keys\_d} to encode the blocks, and subsequently performing a \texttt{scan} operation with the maximum operator, indexed by \texttt{keys\_d}, so that the maximum is taken within each block.
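The semantics of this keyed max-scan can be modelled serially as follows (our C++ sketch of what the Thrust \texttt{inclusive\_scan\_by\_key} call with the maximum operator achieves).

```cpp
#include <cstddef>
#include <vector>

// Serial model of an inclusive scan-by-key with the maximum operator:
// within each run of equal keys, u[k] becomes the running maximum of the
// values seen so far in that run.  Keys mark the blocks; the scan spreads
// each block's average over the whole block.
std::vector<double> max_scan_by_key(const std::vector<int>& keys,
                                    std::vector<double> u) {
    for (std::size_t k = 1; k < u.size(); ++k)
        if (keys[k] == keys[k - 1] && u[k - 1] > u[k])
            u[k] = u[k - 1];
    return u;
}
```

Because the scan never crosses a key boundary, averages belonging to one block cannot leak into the next, while within a block the largest written value wins, matching the max-replacement argument above.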
\lstinputlisting[label=ch11:algMLS,caption=fragments of implementation of a parallel version of the MLS algorithm using Thrust library.]{Chapters/chapter11/code4.cu}
As we mentioned, the complexity of the MLS algorithm is $O(n^2)$, because for each datum the smallest average $P_{ij}$ of the blocks of subsequent data is needed. Thus, each thread needs to perform $O(n)$ comparisons (the averages themselves are precomputed in $O(n)$ operations using the partial sum algorithm). It is interesting to compare the runtime of the PAVA algorithm on CPU with that of parallel MLS on GPU, to establish for which $n$ parallel MLS is preferable. We performed such experiments on a Tesla 2050 device connected to a four-core Intel i7 CPU with 4 GB RAM clocked at 2.8 GHz, running Linux (Fedora 16).

First we compared the serial versions of the PAV and MLS algorithms. For this we used two packages in the R environment, \texttt{stats} and \texttt{fdrtool}. The package \texttt{stats} offers the function \texttt{isoreg}, which implements the MLS algorithm in C, whereas the package \texttt{fdrtool} offers PAVA, also implemented in C. The overhead of the R environment can be neglected, as the input data are simply passed to the C code, so we can compare the running times of both algorithms head to head. We generated random input data of varying lengths $n$ from $10^4$ to $5 \times 10^7$, using $y_i=f(x_i)+\varepsilon_i$, where $f$ is a monotone test function and $\varepsilon_i$ is random noise. We also tried completely ordered isotone data, and antitone data, to check the performance for adversarial inputs. Subsequently, we measured the runtime of our parallel version of the MLS algorithm on the Tesla 2050 GPU. The results are presented in Table \ref{ch11:table1}.
\begin{table}[!h]
\begin{center}
\begin{tabular}{|r|r|r|r|}
\hline
Data & PAVA & MLS & GPU MLS \\ \hline
monotone increasing $f$ & & & \\
$n=5\times 10^4$ &0.01&5& 0.092\\
$n=10^5$ &0.03&40& 0.35\\
$n=5\times 10^5$ &0.4&1001&8.6 \\
$n=10^6$ &0.8& 5000& 38 \\
$n=2 \times 10^6$ & 1.6 &-- &152 \\
$n=10 \times 10^6$ & 2 &-- & 3500 \\
$n=20 \times 10^6$ & 4.5&-- & --\\
$n=50 \times 10^6$ & 12 &-- & --\\
\hline
constant or decreasing $f$ & & & \\
$n=10^6$ &0.2&0.1& 38\\
$n=10 \times 10^6$ &1.9& 1.9& 3500 \\
$n=20 \times 10^6$ &3.5& 4.0&-- \\
$n=50 \times 10^6$ &11& 11& -- \\
\hline
\end{tabular}
\end{center}
\caption{The average CPU time (sec) of the serial PAVA, serial MLS, and parallel MLS algorithms.}
\label{ch11:table1}
\end{table}
As expected, the runtimes of the two serial methods differed significantly, as shown in Table \ref{ch11:table1}: the linear PAVA was clearly superior to the serial MLS algorithm. For some special cases, e.g., the constant test function, both serial methods gave the same running time; this can be explained by the fact that large blocks of data allowed MLS to skip the majority of tests. This did not happen in the parallel version of MLS, where for each datum the smallest value $P_{ij^*}$ was computed (in parallel), so the average times were the same for all types of data.

From the results in Table \ref{ch11:table1} we conclude that serial PAVA is superior to serial MLS for $n>10^4$. While it is possible to transfer data from the GPU to the CPU and run PAVA there, this is warranted only for sufficiently large data, $n\geq 5 \times 10^5$, for otherwise the data transfer overheads will dominate the CPU time. For smaller $n$, isotone regression is best performed on the GPU.

We also see that the use of the GPU accelerated MLS by a factor of at least 100, except for antitone data. The cost of serial MLS is prohibitive for $n>10^6$.
We should mention that not all isotone regression problems admit a PAV-like algorithm linear in time. When the data may contain large outliers, monotonizing the data is better done not in the least squares sense, but using other cost functionals, such as minimizing the sum of absolute deviations \cite{Wang} or using M-estimators \cite{Yohai}, which are less sensitive to outliers. It is interesting that in all such cases the solution to the isotone regression problem can be found by solving the maximin problem
\begin{equation*}
u_i=\max_{k\leq i} \min_{l \geq i} \hat y(k,l),
\end{equation*}
with $\hat y(k,l)$ being the unrestricted maximum likelihood estimator for the data $y_k,\ldots,y_l$. For the quadratic cost function $\hat y(k,l)$ is the mean of these data (as in the PAV and MLS algorithms), for absolute deviations it is the median, and for other cost functions it is an M-estimator of location. The MLS algorithm can be applied to such isotone regression problems with very little modification; however, we are unaware of other algorithms for solving the modified problem that are linear in time. Our parallel MLS algorithm will be valuable in such cases.
\section{Conclusion} \label{ch11:conc}

We presented three GPU-based parallel algorithms for approximating monotone data: monotone quadratic spline interpolation, monotone Hermite rational spline interpolation, and the minimum lower sets algorithm for monotonizing noisy data. These tools are valuable in a number of applications that involve large data sets modeled by monotone nonlinear functions.
The source code of the package \texttt{monospline} is available from \texttt{www.deakin.edu.au/$\sim$gleb/monospline.html}.
\putbib[Chapters/chapter11/biblio11]