X-Git-Url: https://bilbo.iut-bm.univ-fcomte.fr/and/gitweb/chaos1.git/blobdiff_plain/a1b71af0bb07232355e939a1e1a6a7aedd40b2d2..a1c7d633b750e7417e0e331d9f2df61936c76de7:/main.tex?ds=sidebyside

diff --git a/main.tex b/main.tex
index b72eb6d..8e7f34b 100644
--- a/main.tex
+++ b/main.tex
@@ -58,6 +58,11 @@ IUT de Belfort-Montb\'eliard, BP 527, \\
 \date{\today}
 
 \newcommand{\CG}[1]{\begin{color}{red}\textit{#1}\end{color}}
+\newcommand{\JFC}[1]{\begin{color}{blue}\textit{#1}\end{color}}
+
+
+
+
 
 \begin{abstract}
 %% Text of abstract
@@ -134,7 +139,9 @@ which is usually assessed through the computation of the Lyapunov exponent.
 An alternative approach is to consider a well-known neural network
 architecture: the MultiLayer Perceptron (MLP). These networks are
 suitable to model nonlinear relationships between data, due to
-their universal approximator capacity. Thus, this kind of network can
+their universal approximator capacity.
+\JFC{Michel, can you give a reference for this?}
+Thus, this kind of network can
 be trained to model a physical phenomenon known to be chaotic, such as
 Chua's circuit \cite{dalkiran10}. Sometimes, a neural network which is
 built by combining transfer functions and initial conditions that are both
@@ -508,7 +515,9 @@ Fig.~\ref{Fig:perceptron}).
 
 The behavior of the neural network is such that when the initial
 state is $x^0~\in~\mathds{B}^n$ and a sequence $(S^t)^{t \in \Nats}$ is
-given as outside input, then the sequence of successive published
+given as outside input,
+\JFC{say more about the outside world}
+ then the sequence of successive published
 output vectors $\left(x^t\right)^{t \in \mathds{N}^{\ast}}$ is exactly
 the one produced by the chaotic iterations formally described in
 Eq.~(\ref{eq:CIs}). It means that, mathematically, if we use similar
@@ -539,8 +548,8 @@ without any convincing mathematical proof.
 We propose an approach to overcome this drawback for a particular
 category of multilayer perceptrons defined below, and for Devaney's
 formulation of chaos. In spite of this restriction, we think that
 this approach can be
-extended to a large variety of neural networks. We plan to study a
-generalization of this approach in a future work.
+extended to a large variety of neural networks.
+
 
 We consider a multilayer perceptron of the following form: inputs are
 $n$ binary digits and one integer value, while outputs are $n$
@@ -556,6 +565,7 @@ connection to an input one.
 compute the new output one $\left(x^{t+1}_1,\dots,x^{t+1}_n\right)$.
 While the remaining input receives a new integer value $S^t \in
 \llbracket1;n\rrbracket$, which is provided by the outside world.
+\JFC{say more about the outside world}
 \end{itemize}
 
 The topological behavior of these particular neural networks can be
@@ -563,11 +573,15 @@ proven to be chaotic through the following process. Firstly, we
 denote by $F: \llbracket 1;n \rrbracket \times \mathds{B}^n
 \rightarrow \mathds{B}^n$ the function that maps the value
 $\left(s,\left(x_1,\dots,x_n\right)\right) \in \llbracket 1;n
-\rrbracket \times \mathds{B}^n$ into the value
+\rrbracket \times \mathds{B}^n$
+\JFC{here, this should be $S^t$ and not $s$, no?}
+ into the value
 $\left(y_1,\dots,y_n\right) \in \mathds{B}^n$, where
 $\left(y_1,\dots,y_n\right)$ is the response of the neural network
 after the initialization of its input layer with
-$\left(s,\left(x_1,\dots, x_n\right)\right)$. Secondly, we define $f:
+$\left(s,\left(x_1,\dots, x_n\right)\right)$.
+\JFC{here, this should be $S^t$ and not $s$, no?}
+Secondly, we define $f:
 \mathds{B}^n \rightarrow \mathds{B}^n$ such that
 $f\left(x_1,x_2,\dots,x_n\right)$ is equal to
 \begin{equation}
@@ -614,9 +628,12 @@ if and only if, for any pair of disjoint open sets $U$,$V \neq
 \emptyset$, we can find some $n_0 \in \mathds{N}$ such that for any
 $n$, $n\geq n_0$, we have $f^n(U) \cap V \neq \emptyset$.
 \end{definition}
+\JFC{Explain what these definitions mean in our setting}
+
 
-As proven in Ref.~\cite{gfb10:ip}, chaotic iterations are expansive
-and topologically mixing when $f$ is the vectorial negation $f_0$.
+It has been proven in Ref.~\cite{gfb10:ip} that chaotic iterations
+are expansive and topologically mixing when $f$ is the
+vectorial negation $f_0$.
 Consequently, these properties are inherited by the CI-MLP($f_0$)
 recurrent neural network previously presented, which induce a
 greater unpredictability. Any difference in the initial value of the input
@@ -624,8 +641,10 @@ layer is in particular magnified up to be equal to the expansivity
 constant.
 
 Let us then focus on the consequences for a neural network to be chaotic
-according to Devaney's definition. First of all, the topological
-transitivity property implies indecomposability.
+according to Devaney's definition. Intuitively, the topological
+transitivity property implies indecomposability, which is formally defined
+as follows:
+
 \begin{definition} \label{def10}
 A dynamical system $\left( \mathcal{X}, f\right)$ is
@@ -784,7 +803,9 @@ such functions into a model amenable to be learned by an ANN.
 
 This section presents how (not) chaotic iterations of $G_f$ are
 translated into another model more suited to artificial neural
-networks. Formally, input and output vectors are pairs~$((S^t)^{t \in
+networks.
+\JFC{give details on ``more suited''}
+Formally, input and output vectors are pairs~$((S^t)^{t \in
 \Nats},x)$ and $\left(\sigma((S^t)^{t \in
 \Nats}),F_{f}(S^0,x)\right)$ as defined in~Eq.~(\ref{eq:Gf}).
 
@@ -965,7 +986,7 @@ configuration is always expressed as a natural number, whereas in the
 first one the number of inputs follows the increase of the boolean
 vectors coding configurations. In this latter case, the coding gives
 finer information on configuration evolution.
-
+\JFC{I did not understand the previous paragraph. It should be rewritten.}
 \begin{table}[b]
 \caption{Prediction success rates for configurations expressed with Gray code}
 \label{tab2}
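To make the construction questioned in the \JFC notes easier to check, here is a worked example of a single chaotic-iteration step. It is only a sketch under one assumption: Eq.~(\ref{eq:CIs}) is not reproduced in this excerpt, and we read it as replacing only the component designated by the integer input $S^t$ by the corresponding component of $f(x^t)$, while every other component is copied unchanged. The numerical values are purely illustrative, and the snippet assumes the amsmath package for the cases environment.

% Illustrative only: one chaotic-iteration step with the vectorial
% negation $f_0$, under the assumed update rule stated above.
\begin{equation*}
  n = 3, \qquad
  f_0(x_1,x_2,x_3) = (\overline{x_1},\overline{x_2},\overline{x_3}), \qquad
  x^0 = (1,0,1), \qquad S^0 = 2.
\end{equation*}
\begin{equation*}
  x^1_i =
  \begin{cases}
    f_0(x^0)_i & \text{if } i = S^0,\\
    x^0_i      & \text{otherwise,}
  \end{cases}
  \qquad\text{hence}\qquad
  x^1 = (1,\overline{0},1) = (1,1,1).
\end{equation*}

Under this reading, the CI-MLP($f_0$) network fed with $x^0$ on its Boolean inputs and $S^0$ on its integer input would publish exactly $x^1$ on its output layer after the first iteration.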
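In the same illustrative spirit, for the hunk where the \JFC note asks for details on the model "more suited" to artificial neural networks: assuming that $\sigma$ is the shift that discards the first term of the strategy and that $F_f(s,x)$ updates only component $s$ of $x$ (neither $\sigma$ nor Eq.~(\ref{eq:Gf}) appears in this excerpt), one input/output pair $((S^t)^{t \in \Nats},x)$ and $\left(\sigma((S^t)^{t \in \Nats}),F_{f}(S^0,x)\right)$ could look as follows.

% Illustrative training pair only, under the assumptions on $\sigma$ and
% $F_f$ stated above; the strategy and the configuration are arbitrary.
\begin{equation*}
  x = (1,0,1), \qquad (S^t)^{t \in \Nats} = (2,1,3,1,\dots),
\end{equation*}
\begin{equation*}
  G_{f_0}\big((S^t)^{t \in \Nats}, x\big)
  = \left( \sigma\big((S^t)^{t \in \Nats}\big),\, F_{f_0}(S^0, x) \right)
  = \big( (1,3,1,\dots),\, (1,1,1) \big).
\end{equation*}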