+ \item \textbf{Topological entropy}. The desire to formulate an equivalent of thermodynamic entropy
+has emerged in both the topological and statistical fields. Once again, a similar objective has led to two different
+formulations of entropy-based disorder: the statistical approach approximates the famous Shannon entropy,
+whereas topological entropy is defined as follows.
+Two points $x,y \in \mathcal{X}$ are $\varepsilon$-\emph{separated in time $n$} if there exists $k \leqslant n$ such that $d\left(f^{(k)}(x),f^{(k)}(y)\right)>\varepsilon$. An $(n,\varepsilon)$-separated set is then a set of points that are pairwise $\varepsilon$-separated in time $n$, and,
+for $Y \subseteq \mathcal{X}$, $s_n(\varepsilon,Y)$ denotes the maximal cardinality of an $(n,\varepsilon)$-separated subset of $Y$. Using these notations,
+the topological entropy is defined as follows: $$h_{top}(\mathcal{X},f) = \displaystyle{\lim_{\varepsilon \rightarrow 0} \Big[ \limsup_{n \rightarrow +\infty} \dfrac{1}{n} \log s_n(\varepsilon,\mathcal{X})\Big]}.$$
+This value measures the average exponential growth of the number of distinguishable orbit segments.
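+To make this definition concrete, $s_n(\varepsilon,\mathcal{X})$ can be estimated numerically. The following Python sketch, given only as an illustration and not part of the cited test suite, greedily builds an $(n,\varepsilon)$-separated set from sampled orbit segments of the doubling map $x \mapsto 2x \bmod 1$ on the circle; the choice of map, metric, sample size, and the values of $n$ and $\varepsilon$ are arbitrary choices made for this example.
+
+\begin{verbatim}
+import numpy as np
+
+rng = np.random.default_rng(0)
+
+def orbit_segments(f, points, n):
+    """Stack the orbit segments (x, f(x), ..., f^n(x)) of the sampled points."""
+    segs = [points]
+    for _ in range(n):
+        points = f(points)
+        segs.append(points)
+    return np.column_stack(segs)
+
+def greedy_separated_count(segs, eps):
+    """Greedy lower bound on s_n(eps, X): keep an orbit segment only if it is
+    eps-separated in time n from every segment already kept."""
+    kept = segs[:1]
+    for seg in segs[1:]:
+        d = np.abs(kept - seg)
+        d = np.minimum(d, 1.0 - d)       # the metric d on the circle R/Z
+        if np.all(d.max(axis=1) > eps):  # some k <= n with d(f^k(x), f^k(y)) > eps
+            kept = np.vstack([kept, seg])
+    return len(kept)
+
+doubling = lambda x: (2.0 * x) % 1.0     # the doubling map f(x) = 2x mod 1
+samples = rng.random(4000)               # random initial points in X = [0, 1)
+eps = 0.1
+for n in (4, 6, 8):
+    s_n = greedy_separated_count(orbit_segments(doubling, samples, n), eps)
+    print(f"n={n}: s_n >= {s_n}, (1/n) log s_n ~ {np.log(s_n) / n:.3f}")
+\end{verbatim}
+
+The greedy construction only certifies a lower bound on $s_n(\varepsilon,\mathcal{X})$, and the resulting estimate of $h_{top}$ approaches the exact value $\log 2$ of the doubling map only as $n$ grows, $\varepsilon$ shrinks, and the sampling gets denser.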
+Topological entropy thus measures the complexity of the topological dynamical system, whereas
+the Shannon approach is the one underlying the following test~\cite{Nist10}: