X-Git-Url: https://bilbo.iut-bm.univ-fcomte.fr/and/gitweb/ThesisAli.git/blobdiff_plain/0a4831505b7eaf36f78bc512ac5c62f033fe0d59..refs/heads/master:/CHAPITRE_03.tex?ds=inline

diff --git a/CHAPITRE_03.tex b/CHAPITRE_03.tex
index 8d7ebc2..ab74629 100644
--- a/CHAPITRE_03.tex
+++ b/CHAPITRE_03.tex
@@ -23,7 +23,7 @@ Optimization solvers dedicated to specific resolution methods (meta-heuristics,
 %In this dissertation, we use linear programming because we used integer programs to optimize the coverage and the lifetime in WSNs.
 In this dissertation, we optimize the coverage and the lifetime in WSNs and the problem is formulated through linear programming. These kind of models allow to obtain optimal solutions satisfying an objective (for instance, maximize coverage) under constraints (for instance, available remaining energy).
-The remainder of this chapter is organized as follows. The next section is devoted to the evaluation tools for evaluating and validating the performance of proposed algorithms and protocols. Section \ref{ch3:3} gives the most popular free and commercial linear optimization solvers to solve the linear programming problems. Finally, we give concluding remarks in section \ref{ch3:4}.
+The remainder of this chapter is organized as follows. The next section is devoted to the tools used to evaluate and validate the performance of the proposed algorithms and protocols. Section~\ref{ch3:3} presents the most popular free and commercial solvers for linear programming problems. Finally, we give concluding remarks in Section~\ref{ch3:4}.

 \section{Evaluation Tools}
 \label{ch3:2}
@@ -49,7 +49,8 @@ MoteLab~\cite{ref181,ref182} is a WSN testbed developed at the electrical and co

 \item \textbf{WISEBED:}

-The WISEBED~\cite{ref183} is a large-scale WSN testbed with a hierarchical architecture that consists of four major parts: wireless sensor nodes, gateways, portal server, and overlay network. The lowest level of the hierarchy corresponds to the WSN in which a set of these sensor nodes is connected to the gateway to provide access to the other remaining sensor nodes. The gateways are connected to a portal server, which not only supervises the WSN, but also allows user interactions with the testbed. Each WISBED site includes separate portal server. The major goal of this testbed is to provide a multi-level infrastructure of interconnected testbeds of large scale wireless sensor networks for research purposes. It provides an interdisciplinary approach which integrates the hardware, software, algorithms, and data. WISEBED provides heterogeneous large-scale structure because it brings several heterogeneous small-scale devices together to form a large-scale network structure. It provides the research with different quality networks due to the heterogeneous structure of the network.
+WISEBED~\cite{ref183} is a large-scale WSN testbed with a hierarchical architecture that consists of four major parts: wireless sensor nodes, gateways, a portal server, and an overlay network. The lowest level of the hierarchy corresponds to the WSN, in which a set of sensor nodes is connected to a gateway that provides access to the remaining sensor nodes. The gateways are connected to a portal server, which not only supervises the WSN, but also allows user interactions with the testbed. Each WISEBED site includes a separate portal server. The major goal of this testbed is to provide a multi-level infrastructure of interconnected testbeds of large-scale wireless sensor networks for research purposes. It provides an interdisciplinary approach which integrates hardware, software, algorithms, and data. WISEBED provides a heterogeneous large-scale structure because it brings several heterogeneous small-scale devices together to form a large-scale network structure.
+%It provides the research with different quality networks due to the heterogeneous structure of the network.
@@ -81,13 +82,13 @@ Realistic WSN context & Deployment in target environments \\ \hline
 Scalability and performance in WSNs & simulations and hybrid testbeds, large network size \\ \hline
 Sensor node design & Development of sensor nodes, monitoring, measurement, debugging \\ \hline
 Interoperability and platform support & Support for different platforms, heterogeneity \\ \hline
-Application and protocol design & Tool support for development,debugging and deployment \\ \hline
+Application and protocol design & Tool support for development, debugging, and deployment \\ \hline
 \end{tabular}
 }
 \end{table}

-Several sensor nodes testbeds are available to support the research efforts in WSNs, but only a few of them provide common evaluation goals for a large number of users~\cite{ref187,ref181}. However, all the WSN testbeds have usually many common properties, such as a typical number of sensors of the order of hundreds and sometimes only tens of nodes; the deployment of the sensor nodes in a static grid topology; metrics and debug information are obtained via wired connections. Therefore, the WSN testbeds impose strong limitations on the WSNs in terms of size and topology. Moreover, the cost of performing an experiment on a testbed is much higher than through a simulation because setting up the experiments, instrumenting the nodes, gathering the metrics on the performance, and so on is so expensive. For evaluating the systems and protocols on a large sensor networks, the simulation tools are the better choice due to the costs for hardware and maintenance~\cite{ref186}. Hence, the simulation tools stay the most practical tools to obtain a feedback on the performance of a new solution~\cite{ref180}.
+Several sensor node testbeds are available to support research efforts in WSNs, but only a few of them provide common evaluation goals for a large number of users~\cite{ref187,ref181}. However, WSN testbeds usually share many common properties: a typical number of sensors of the order of hundreds, and sometimes only tens of nodes; the deployment of the sensor nodes in a static grid topology; and metrics and debug information obtained via wired connections. Therefore, WSN testbeds impose strong limitations on the WSNs in terms of size and topology. Moreover, the cost of performing an experiment on a testbed is much higher than through a simulation, because setting up the experiments, instrumenting the nodes, gathering performance metrics, and so on is expensive. For evaluating systems and protocols on large sensor networks, simulation tools are the better choice given the costs of hardware and maintenance~\cite{ref186}. Hence, simulation tools remain the most practical way to obtain feedback on the performance of a new solution~\cite{ref180}.

 \subsection{Simulation Tools}
@@ -273,9 +274,12 @@ The Gurobi Optimizer~\cite{ref219,ref220,ref211} is a commercial optimization so
 \end{enumerate}

-B. Meindl and M. Templ~\cite{ref212} studied the efficiency of above optimization solvers. They used a set of instances of difficult optimization problems called the attacker problems in order to achieve a performance comparison of GLPK, lp$\_$solve, CLP, GUROBI, and CPLEX optimization solvers. They considered a total of 200 problem instances for this study, 100 of these problem instances are based on problems with two dimensions, and 100 problem instances are three-dimensional.
+B. Meindl and M. Templ~\cite{ref212} studied the efficiency of the above optimization solvers. They used a set of instances of a difficult optimization problem called the attacker problem (formulated as an LP)~\cite{ref240} in order to achieve a performance comparison of the GLPK, lp$\_$solve, CLP, GUROBI, and CPLEX optimization solvers. %They considered a total of 200 problem instances for this study, 100 of these problem instances are based on problems with two dimensions, and 100 problem instances are three-dimensional.

-In tables~\ref{my-label1}, \ref{my-label2}, and \ref{my-label3} we report the result of their comparisons the running times of the five linear program solvers to find solutions to the 200 two-dimensional, 200 three-dimensional, and all 400 problem instances. In order to solve the attacker’s problem for a given problem instance, it is needed to both minimize and maximize any given problem. Therefore, a total of 400 problem instances had been solved when only 200 problem instances have been generated. The running time of the fastest solver has been scaled to one and the running times of the other linear solvers were scaled to reflect this scaling.
+In Tables~\ref{my-label1} and \ref{my-label2}, we report the results of their comparison of the running times of the five linear programming solvers when finding solutions for instances of the two-dimensional problem and for instances of the three-dimensional problem (with more variables and constraints).
+%to the 200 two-dimensional, 200 three-dimensional, and all 400 problem instances.
+%In order to solve the attacker’s problem for a given problem instance, it is needed to both minimize and maximize any given problem. Therefore, a total of 400 problem instances had been solved when only 200 problem instances have been generated.
+The running time of the fastest solver has been scaled to one, and the running times of the other solvers are expressed relative to it.

 \begin{table}[h]
@@ -306,21 +310,21 @@ In tables~\ref{my-label1}, \ref{my-label2}, and \ref{my-label3} we report the re
 \end{table}

-\begin{table}[h]
-\caption{Total (in seconds) and scaled running times for all problems (results of B. Meindl and M. Templ~\cite{ref212})}
-\label{my-label3}
-\resizebox{\textwidth}{!}{%
-\begin{tabular}{|c|c|c|c|c|c|}
-\hline
-\textbf{Optimization Solvers} & \textbf{GLPK} & \textbf{lp\_solve} & \textbf{CLP} & \textbf{Gurobi} & \textbf{CPLEX} \\ \hline
-\textbf{Total Running Time} & 48585 & 982737 & 667066 & 38708 & 257 \\ \hline
-\textbf{Scaled Running Time} & 189 & 3822 & 2594 & 151 & 1 \\ \hline
-\end{tabular}
-}
-\end{table}
+%\begin{table}[h]
+%\caption{Total (in seconds) and scaled running times for all problems (results of B. Meindl and M. Templ~\cite{ref212})}
+%\label{my-label3}
+%\resizebox{\textwidth}{!}{%
+%\begin{tabular}{|c|c|c|c|c|c|}
+%\hline
+%\textbf{Optimization Solvers} & \textbf{GLPK} & \textbf{lp\_solve} & \textbf{CLP} & \textbf{Gurobi} & \textbf{CPLEX} \\ \hline
+%\textbf{Total Running Time} & 48585 & 982737 & 667066 & 38708 & 257 \\ \hline
+%\textbf{Scaled Running Time} & 189 & 3822 & 2594 & 151 & 1 \\ \hline
+%\end{tabular}
+%}
+%\end{table}

-The results in tables~\ref{my-label1}, \ref{my-label2}, and \ref{my-label3} indicate that open source solvers perform worse than standard commercial solvers when applied to instances of the attacker’s problem. The GLPK outperforms the other free and open source solvers, but is still slower than CPLEX and GUROBI. We have decided to use the GLPK as an optimization solver in this dissertation to solve the proposed integer programs during the decision phase of the nodes. We motivate the use of the GLPK optimization solver for many reasons, including:
+The results in Tables~\ref{my-label1} and \ref{my-label2} indicate that open-source solvers perform worse than standard commercial solvers when applied to instances of the attacker problem. GLPK outperforms the other free and open-source solvers, but is still slower than CPLEX and GUROBI. We have decided to use GLPK as the optimization solver in this dissertation to solve the proposed integer programs during the decision phase of the nodes. We motivate the use of GLPK for several reasons, including:

 \begin{enumerate} [(i)]
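The integer programs solved with GLPK during the decision phase follow the generic structure sketched at the beginning of this chapter: an objective (for instance, maximizing coverage) optimized under constraints (for instance, the available remaining energy). As a purely illustrative sketch, with symbols that are assumptions for this example rather than the dissertation's actual formulation, a binary coverage program over a set of sensors $S$ and a set of points $P$ may be written as
\begin{align*}
\text{maximize} \quad & \sum_{p \in P} c_p\\
\text{subject to} \quad & c_p \le \sum_{s \in S_p} x_s, && \forall p \in P,\\
& e_s\, x_s \le E_s, && \forall s \in S,\\
& x_s, c_p \in \{0,1\},
\end{align*}
where $x_s$ indicates whether sensor $s$ is active, $c_p$ indicates whether point $p$ is covered, $S_p$ is the set of sensors able to cover point $p$, $e_s$ is the energy consumed by sensor $s$ when active, and $E_s$ is its remaining energy.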
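To make the role of GLPK in the decision phase concrete, the following minimal sketch builds and solves a tiny binary coverage instance through GLPK's C API. The variables, constraints, and data are hypothetical and chosen only for illustration; they are not the programs used in the following chapters.
\begin{verbatim}
/* Illustrative sketch only: a tiny, hypothetical binary coverage program
 * solved with the GLPK C API.  Columns 1-2 are sensor activity variables
 * (x1, x2); columns 3-4 are point-coverage indicators (c1, c2).          */
#include <stdio.h>
#include <glpk.h>

int main(void)
{
    glp_prob *mip = glp_create_prob();
    glp_set_obj_dir(mip, GLP_MAX);            /* maximize covered points  */

    glp_add_cols(mip, 4);                     /* x1, x2, c1, c2           */
    for (int j = 1; j <= 4; j++)
        glp_set_col_kind(mip, j, GLP_BV);     /* all variables are binary */
    glp_set_obj_coef(mip, 3, 1.0);            /* objective: c1 + c2       */
    glp_set_obj_coef(mip, 4, 1.0);

    /* Constraints:
     *  r1: c1 - x1 - x2 <= 0  (point 1 covered only if sensor 1 or 2 is active)
     *  r2: c2 - x2      <= 0  (point 2 covered only if sensor 2 is active)
     *  r3: x1 + x2      <= 1  (hypothetical energy/activity budget)           */
    glp_add_rows(mip, 3);
    glp_set_row_bnds(mip, 1, GLP_UP, 0.0, 0.0);
    glp_set_row_bnds(mip, 2, GLP_UP, 0.0, 0.0);
    glp_set_row_bnds(mip, 3, GLP_UP, 0.0, 1.0);

    /* Sparse constraint matrix in coordinate form (1-based, index 0 unused). */
    int    ia[1+7] = {0,  1,  1,  1,  2,  2,  3,  3};
    int    ja[1+7] = {0,  1,  2,  3,  2,  4,  1,  2};
    double ar[1+7] = {0, -1, -1,  1, -1,  1,  1,  1};
    glp_load_matrix(mip, 7, ia, ja, ar);

    /* Solve as an integer program; the MIP presolver handles the LP relaxation. */
    glp_iocp parm;
    glp_init_iocp(&parm);
    parm.presolve = GLP_ON;
    glp_intopt(mip, &parm);

    printf("covered points = %g, x1 = %g, x2 = %g\n",
           glp_mip_obj_val(mip),
           glp_mip_col_val(mip, 1),
           glp_mip_col_val(mip, 2));

    glp_delete_prob(mip);
    return 0;
}
\end{verbatim}
Such a program is linked against the GLPK library (for example, \texttt{gcc cover.c -lglpk}); the same model could equally be written in the GNU MathProg modeling language and solved with the \texttt{glpsol} command-line tool shipped with GLPK.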