rounds during the sensing phase, to ensure coverage so as to maximize the
lifetime of the WSN. The decision process is carried out by a leader node, which
solves an integer program to produce the best representative sets to be used
-during the rounds of the sensing phase. Compared with some existing protocols,
+during the rounds of the sensing phase. \textcolor{red}{The integer program is solved either by the GLPK solver or by a Genetic Algorithm (GA)}. Compared with some existing protocols,
simulation results based on multiple criteria (energy consumption, coverage
ratio, and so on) show that the proposed protocol can efficiently prolong the
network lifetime and improve the coverage performance.
possibility of dividing the sensing phase into multiple rounds and we also add
an improved model of energy consumption to assess the efficiency of our
approach. In fact, in this paper we perform a multiround optimization, while it was
-a single round optimization in our previous work.
+a single-round optimization in our previous work. \textcolor{red}{In addition, a GA-based metaheuristic is proposed to solve our multiround optimization problem}.
\iffalse
\textcolor{red}{The modeling language for mathematical programming AMPL~\cite{AMPL} is employed to generate the integer program instance in a standard format, which is then read and solved by the optimization solver GLPK (the GNU Linear Programming Kit, available in the public domain) \cite{glpk} through a Branch-and-Bound method. We name the protocol which is based on the GLPK solver in the decision phase MuDiLCO.}
-%\textcolor{red}{\textbf{\textsc{Answer:} ali }}
-\subsection{\textcolor{red}{Genetic Algorithm (GA) for Multiround Lifetime Coverage Optimization}}
+\subsection{\textcolor{red}{Genetic Algorithm for Multiround Lifetime Coverage Optimization}}
\label{GA}
-\textcolor{red}{Metaheuristics are a generic search strategies for exploring search spaces for solving the complex problems. These strategies have to dynamically balance between the exploitation of the accumulated search experience and the exploration of the search space. On one hand, this balance can find regions in the search space with high-quality solutions. On the other hand, it prevents waste too much time in regions of the search space which are either already explored or don’t provide high-quality solutions. Therefore, metaheuristic provides an enough good solution to an optimization problem, especially with incomplete information or limited computation capacity \cite{bianchi2009survey}. Genetic Algorithm (GA) is one of the population-based metaheuristic methods that simulates the process of natural selection \cite{hassanien2015applications}. GA starts with a population of random candidate solutions (called individuals or phenotypes) . GA uses genetic operators inspired by natural evolution, such as selection, mutation, evaluation, crossover, and replacement so as to improve the initial population of candidate solutions. This process repeated until a stopping criterion is satisfied. Compared to GLPK optimization solver, GA provides a near optimal solution with acceptible execution time, while GLPK provides optimal solution but it requires high execution time for large problem.}
+\textcolor{red}{Metaheuristics are generic search strategies for exploring search spaces in order to solve complex problems. These strategies have to dynamically balance the exploitation of the accumulated search experience and the exploration of the search space. On the one hand, this balance allows finding regions of the search space with high-quality solutions. On the other hand, it prevents wasting too much time in regions of the search space which are either already explored or do not provide high-quality solutions. Therefore, a metaheuristic provides a good enough solution to an optimization problem, especially with incomplete information or limited computation capacity \cite{bianchi2009survey}. The Genetic Algorithm (GA) is a population-based metaheuristic that simulates the process of natural selection \cite{hassanien2015applications}. GA starts with a population of random candidate solutions (called individuals or phenotypes). GA uses genetic operators inspired by natural evolution, such as selection, mutation, evaluation, crossover, and replacement, so as to improve the initial population of candidate solutions. This process is repeated until a stopping criterion is satisfied. In comparison with the GLPK optimization solver, GA provides a near-optimal solution within an acceptable execution time and requires less memory, especially for large problems, whereas GLPK provides the optimal solution but requires a higher execution time and more memory for large problems.}
\textcolor{red}{In this section, we present a GA-based metaheuristic to solve our multiround lifetime coverage optimization problem. The proposed GA provides a near-optimal schedule for multiround sensing per period. The proposed GA is based on the mathematical model presented in Section \ref{oa}. Algorithm \ref{alg:GA} shows the proposed GA to solve the coverage lifetime optimization problem. We name the new protocol which is based on GA in the decision phase GA-MuDiLCO. The main components of the proposed GA are detailed below.}
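\textcolor{red}{As a concrete illustration, the following minimal Python sketch shows one possible way to encode a chromosome and to evaluate the fitness function of Eq.~\eqref{eqf}. It is only a sketch, not the actual GA-MuDiLCO implementation: the names \texttt{random\_chromosome}, \texttt{fitness}, \texttt{cov} (a precomputed sensor/primary-point coverage matrix), \texttt{energy}, \texttt{e\_round} and the weight values are hypothetical, and the over-/under-coverage terms follow one common definition that may differ in detail from the exact model of Section~\ref{oa}.}
\begin{verbatim}
import random

def random_chromosome(T, J, energy, e_round):
    """Binary schedule x[t][j] over T rounds and J alive sensors.
    A sensor may be activated in a round only if its remaining energy
    still allows one more active round (energy constraint)."""
    x = [[0] * J for _ in range(T)]
    remaining = list(energy)          # remaining energy per sensor
    for t in range(T):
        for j in range(J):
            if random.random() < 0.5 and remaining[j] >= e_round:
                x[t][j] = 1
                remaining[j] -= e_round
    return x

def fitness(x, cov, w_theta=1.0, w_u=10.0):
    """F = sum_t sum_p (W_theta*Theta_{t,p} + W_U*U_{t,p}).
    cov[j][p] = 1 if sensor j covers primary point p."""
    T, P = len(x), len(cov[0])
    f = 0.0
    for t in range(T):
        for p in range(P):
            covering = sum(x[t][j] * cov[j][p] for j in range(len(cov)))
            theta = covering - 1 if covering > 0 else 0   # overcoverage
            u = 1 if covering == 0 else 0                 # undercoverage
            f += w_theta * theta + w_u * u
    return f
\end{verbatim}
\textcolor{red}{Minimizing this fitness value discourages redundant active sensors (through $\Theta_{t,p}$) while penalizing uncovered primary points (through $U_{t,p}$), e.g.\ by choosing $W_{U}$ larger than $W_{\theta}$; the main components of the algorithm are detailed in the list below.}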
\begin{enumerate} [I)]
-\item \textcolor{red}{\textbf{Representation:} Since the proposed GA's goal is to find the optimal schedule of the sensor nodes which take the responsibility of monitoring the subregion for $T$ rounds in the next phase, the chromosome is defined as a schedule for alive sensors and each chromosome contains $T$ rounds. The proposed GA uses binary representation, where each round in the schedule includes J genes, the total alive sensors in the subregion. Therefore, the gene of such a chromosome is a schedule of a sensor. In other words, The genes corresponding to active nodes have the value of one, the others are zero. Figure \ref{chromo} shows solution representation in the proposed GA.}
+\item \textcolor{red}{\textbf{Representation:} Since the proposed GA's goal is to find the optimal schedule of the sensor nodes which take the responsibility of monitoring the subregion for $T$ rounds in the sensing phase, the chromosome is defined as a schedule for the alive sensors, and each chromosome contains $T$ rounds. The proposed GA uses a binary representation, where each round in the schedule includes $J$ genes, $J$ being the number of alive sensors in the subregion. Therefore, a gene of such a chromosome is the schedule entry of one sensor. In other words, the genes corresponding to active nodes have the value one, the others are zero. Figure \ref{chromo} shows the solution representation in the proposed GA.}
%[scale=0.3]
\begin{figure}[h!]
\centering
-\item \textcolor{red}{\textbf{Initialize Population:} The initial population is randomly generated and each chromosome in the GA population represents a possible sensors schedule solution to cover the entire subregion for $T$ rounds during current period. Each sensor in the chromosome is given a random value (0 or 1) for all rounds. If the random value is 1, the remaining energy of this sensor should be adequate to activate this sensor during current round. Otherwise, the value is set to 0. The energy constraint is applied for each sensor during all rounds. }
+\item \textcolor{red}{\textbf{Initialize Population:} The initial population is randomly generated, and each chromosome in the GA population represents a possible sensor schedule covering the entire subregion for $T$ rounds during the current period. Each sensor in the chromosome is given a random value (0 or 1) for all rounds. If the random value is 1, the remaining energy of this sensor should be adequate to activate it during the current round; otherwise, the value is set to 0. The energy constraint is applied for each sensor during all rounds. }
\item \textcolor{red}{\textbf{Update O-U-Coverage:} After the population is initialized, Algorithm \ref{OU} is called for each individual to compute its undercoverage $U$ and overcoverage $\Theta$ parameters.}
\item \textcolor{red}{\textbf{Evaluate Population:}
-After creating the initial population, each individual is evaluated and assigned a fitness value according to the fitness function is illustrated in Eq. \eqref{eqf}. In the proposed GA, the optimal (or near optimal) candidate solution, is the one with the minimum value for the fitness function. The lower the fitness values been assigned to an individual, the better opportunity it get survived. In our works, the function rewards the decrease in the sensor nodes which cover the same primary point and penalizes the decrease to zero in the sensor nodes which cover the primary point. }
+After creating the initial population, each individual is evaluated and assigned a fitness value according to the fitness function illustrated in Eq. \eqref{eqf}. In the proposed GA, the optimal (or near-optimal) candidate solution is the one with the minimum value of the fitness function. The lower the fitness value assigned to an individual, the better its chance of survival. In our work, the function rewards a decrease in the number of sensor nodes covering the same primary point and penalizes letting the number of sensor nodes covering a primary point drop to zero. }
\begin{equation}
F^{ind} \leftarrow \sum_{t=1}^{T} \sum_{p=1}^{P} \left(W_{\theta}* \Theta_{t,p} + W_{U} * U_{t,p} \right) \label{eqf}
\end{equation}
-\item \textcolor{red}{\textbf{Selection:} In order to generate a new generation, a portion of the existing population is elected based on a fitness function that ranks the fitness of each candidate solution and preferentially select the best solutions. Two parents should be selected to the mating pool. In the proposed GA-MuDiLCO algorithm, the first parent is selected by using binary tournament selection to select one of the parents \cite{goldberg1991comparative}. In this method, two individuals are chosen at random from population and the better of the two
+\item \textcolor{red}{\textbf{Selection:} In order to generate a new generation, a portion of the existing population is selected based on a fitness function that ranks the fitness of each candidate solution and preferentially selects the best solutions. Two parents should be selected for the mating pool. In the proposed GA-MuDiLCO algorithm, the first parent is selected by binary tournament selection \cite{goldberg1991comparative}. In this method, two individuals are chosen at random from the population and the better of the two
individuals is selected. If they have similar fitness values, one of them is selected randomly. The best individual in the population is selected as the second parent.}
\item \textcolor{red}{\textbf{Mutation:}
-Mutation is a divergence operation which introduces random modifications. The purpose of the mutation is to maintain diversity within the population and prevent premature convergence. Mutation is used to add new genetic information (divergence) in order to achieve a global search over the solution search space and avoid to fall in local optima. The mutation oprator in the proposed GA-MuDiLCO works as follow: If mutation probability $P_m$ is 100$\%$, then the mutation operation takes place on the the new individual. The round number is selected randomly within (1..T) in the schedule solution. After that one sensor within this round is selected randomly within (1..J). If the sensor is scheduled as active "1", it should be rescheduled to sleep "0". If the sensor is scheduled as sleep, it rescheduled to active only if it has adequate remaining energy.}
+Mutation is a divergence operation which introduces random modifications. The purpose of mutation is to maintain diversity within the population and to prevent premature convergence. Mutation adds new genetic information (divergence) in order to achieve a global search over the solution space and to avoid falling into local optima. The mutation operator in the proposed GA-MuDiLCO works as follows: if the mutation probability $P_m$ is 100$\%$, then the mutation operation takes place on the new individual. A round number is selected randomly within (1..T) in the schedule solution. After that, one sensor within this round is selected randomly within (1..J). If the sensor is scheduled as active "1", it is rescheduled to sleep "0". If the sensor is scheduled as sleep, it is rescheduled to active only if it has adequate remaining energy.}
\item \textcolor{red}{\textbf{Update O-U-Coverage for children:}
-Before evalute each new individual, Algorithm \ref{OU} is called for each new individual to compute the new undercoverage $Ch.U$ and overcoverage $Ch.\Theta$ parameters. }
+Before evaluating a new individual, Algorithm \ref{OU} is called for it to compute its new undercoverage $Ch.U$ and overcoverage $Ch.\Theta$ parameters. }
\item \textcolor{red}{\textbf{Evaluate New Individuals:}
Each new individual is evaluated using Eq. \eqref{eqf}, but with its new undercoverage $Ch.U$ and overcoverage $Ch.\Theta$ parameters.}
\item \textcolor{red}{\textbf{Replacement:}
-After evaluatation of new children, Triple Tournament Replacement (TTR) will be applied for each new individual. In TTR strategy, three individuals are selected
+After the evaluation of the new children, Triple Tournament Replacement (TTR) is applied for each new individual. In the TTR strategy, three individuals are selected
randomly from the population and the worst of them is identified. If the fitness of the new individual is better than the fitness of this worst individual, the worst individual is replaced by the new individual; otherwise, no replacement is done. }
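\textcolor{red}{For illustration, the genetic operators described above (binary tournament selection, the energy-aware mutation, and the Triple Tournament Replacement) could be sketched in Python as follows. This is only an illustrative sketch under the same hypothetical helper names as before (\texttt{fitness}, \texttt{energy}, \texttt{e\_round}); the actual operators of GA-MuDiLCO are those of Algorithm~\ref{alg:GA}.}
\begin{verbatim}
import copy
import random

def binary_tournament(pop, fit):
    """Pick two random individuals and keep the fitter one (lower F);
    ties are broken at random."""
    a, b = random.sample(range(len(pop)), 2)
    if fit[a] == fit[b]:
        return pop[random.choice((a, b))]
    return pop[a] if fit[a] < fit[b] else pop[b]

def mutate(ind, energy, e_round, p_m=1.0):
    """Flip one randomly chosen (round, sensor) gene. A sleeping sensor
    is activated only if its remaining energy allows one more round."""
    child = copy.deepcopy(ind)
    if random.random() >= p_m:
        return child
    t = random.randrange(len(child))       # round index
    j = random.randrange(len(child[0]))    # sensor index
    if child[t][j] == 1:
        child[t][j] = 0                    # active -> sleep
    else:
        spent = sum(child[r][j] for r in range(len(child))) * e_round
        if energy[j] - spent >= e_round:
            child[t][j] = 1                # sleep -> active, if affordable
    return child

def ttr_replace(pop, fit, child, child_fit):
    """Triple Tournament Replacement: the worst of three randomly chosen
    individuals is replaced by the child if the child is fitter."""
    trio = random.sample(range(len(pop)), 3)
    worst = max(trio, key=lambda i: fit[i])
    if child_fit < fit[worst]:
        pop[worst], fit[worst] = child, child_fit
\end{verbatim}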