\section{Introduction} \label{ch11:Introduction}
Monotonicity preserving interpolation and approximation have received substantial attention in the last thirty years because of their numerous applications in computer aided design, statistics and machine learning \cite{Dierckx1995_book,Kvasov2000_book,deboor2001_book}. Constrained splines are particularly popular because of their flexibility in modelling different geometrical shapes, sound theoretical properties and availability of numerically stable algorithms \cite{Dierckx1995_book,Schumaker1981_book,deboor2001_book}.
% It is surprising though that few parallel spline algorithms are available.
In this work we examine the parallelisation and adaptation for GPUs of several algorithms for monotone spline interpolation and data smoothing, which arose in the context of estimating probability distributions.
$$
d_1=\left\{\begin{array}{ll}
2\delta_{1}-d_2, & \mbox{if } \delta_{1}(2\delta_{1}-d_2)>0, \\
0 & \mbox{otherwise},
\end{array}
\right.
$$
$$
d_n=\left\{\begin{array}{ll}
2\delta_{n-1}-d_{n-1}, & \mbox{if } \delta_{n-1}(2\delta_{n-1}-d_{n-1})>0,\\
0 & \mbox{otherwise}.
\end{array}
\right.
$$
From the results in Table \ref{ch11:table1} we conclude that serial PAVA is superior to MLS for $n>10^4$. While it is possible to transfer data from the GPU to the CPU and run PAVA there, this is warranted only for sufficiently large data, $n\geq 5 \times 10^5$, as otherwise the data transfer overheads dominate the CPU time. For smaller $n$, isotone regression is best performed on the GPU.
We also see that the use of the GPU accelerated MLS by a factor of at least 100. The cost of serial MLS is prohibitive for $n>10^6$.
We should mention that not all isotone regression problems admit a PAV-like algorithm that is linear in time. When the data may contain large outliers, monotonizing the data is better done not in the least squares sense but using other cost functionals, such as minimizing the sum of absolute deviations \cite{Wang} or using M-estimators \cite{Yohai}, which are less sensitive to outliers. It is interesting that in all such cases the solution to the isotone regression problem can be found by solving the maximin problem
$$
u_i=\max_{k\leq i} \min_{l \geq i} \hat y(k,l),
$$
with $\hat y(k,l)$ being the unrestricted maximum likelihood estimator of $y_k,\ldots,y_l$. For the quadratic cost function $\hat y(k,l)$ is the mean, as in the PAV and MLS algorithms; for absolute deviations it becomes the median, and for other cost functions an M-estimator of location. The MLS algorithm can be applied to such isotone regression problems with very little modification, while a linear-time algorithm may not be available. Our parallel MLS algorithm will be valuable in such cases.
We presented three GPU-based parallel algorithms for approximating monotone data: monotone quadratic spline, monotone Hermite rational spline and minimum lower sets algorithm for monotonizing noisy data. These tools are valuable in a number of applications that involve large data sets modeled by monotone nonlinear functions.
The source code of the package monospline is available from \texttt{www.deakin.edu.au/$\sim$gleb/monospline.html}.
\putbib[Chapters/chapter11/biblio11]
else
h[s+1]=0.5*(u[tid+1] + u[tid]);
//calculate coefficients
    double dtemp = (2*b[tid] - c[tid+1])+((c[tid+1] - c[tid])*
                   (h[s+1] - u[tid]))/fmax(1e-10,(u[tid+1] - u[tid]));
alpha[s]=v[tid]; beta[s]=c[tid];
gamma[s]=(dtemp - c[tid])/(2*fmax(1e-10,(h[s+1] - u[tid])));
alpha[s+1]=v[tid] + c[tid]*(h[s+1] - u[tid]) +
(dtemp - c[tid])*(h[s+1] - u[tid])/2;
    gamma[s+1]=(c[tid+1]-dtemp)/(2*fmax(1e-10,(u[tid+1]-h[s+1])));
beta[s+1]=dtemp;
}
tid += blockDim.x * gridDim.x; s = tid*2;
template<typename Tx>
__device__ Tx Aver(Tx zi, int i, int j, Tx *z)  // mean of y_i..y_j from the cumulative sums z
  { return (zi - z[j+1])/(j-i+1); }
template<typename Tx>
__global__ void monotonizekernel(Tx *y, Tx *z, Tx *u, int *key, int n)
{
    int i = threadIdx.x + blockIdx.x * blockDim.x;
if(i<n) {
int smallestJ = i;
Tx curP, smallestP, curz=z[i];
}
template< typename Tx >
void MonotonizeData(Tx *y, int n, Tx *u)
{
thrust::less_equal<int> binary_pred;
thrust::maximum<Tx> binary_op2;
thrust::device_vector<Tx> z_d(n+1);
thrust::fill(u_d, u_d+n, -1e100);
thrust::fill(keys_d.begin(), keys_d.end(), 0);
    thrust::reverse_iterator< typename thrust::device_vector<Tx>::iterator> y_reverse_b(y_d+n), y_reverse_end(y_d), z_reverse_b(z_d.end());
    thrust::inclusive_scan(y_reverse_b, y_reverse_end, z_reverse_b+1);
monotonizekernel<<<grid, block>>>(y, thrust::raw_pointer_cast(&z_d[0]), u, thrust::raw_pointer_cast(&keys_d[0]), n );