From: couchot
Date: Wed, 21 Sep 2011 13:21:26 +0000 (+0200)
Subject: maj
X-Git-Url: https://bilbo.iut-bm.univ-fcomte.fr/and/gitweb/chaos1.git/commitdiff_plain/10496d652add2d6fa789a21237cb3cdeb972d257?ds=sidebyside

maj
---

diff --git a/main.tex b/main.tex
index cee8840..78454a7 100644
--- a/main.tex
+++ b/main.tex
@@ -75,7 +75,7 @@ Devaney and a particular class of neural networks.
 On the one hand we show how to build such a network, on the other hand
 we provide a method to check if a neural network is a chaotic one.
 Finally, the ability of classical feedforward multilayer
-perceptrons to learn sets of data obtained from a chaotic dynamical
+perceptrons to learn sets of data obtained from a dynamical
 system is regarded. Various Boolean functions are iterated on finite
 states. Iterations of some of them are proven to be chaotic as it is
 defined by
@@ -97,7 +97,7 @@ appealing properties of deterministic chaos
 (unpredictability, sensitivity, and so on). However, such networks are
 often claimed chaotic without any rigorous mathematical proof.
 Therefore, in this work a theoretical framework based on the Devaney's definition of
-chaos is introduced. Starting with a relationship between chaotic
+chaos is introduced. Starting with a relationship between discrete
 iterations and Devaney's chaos, we firstly show how to build a
 recurrent neural networks that is equivalent to a chaotic map and
 secondly a way to check whether an already available network, is
@@ -112,9 +112,6 @@ is far more difficult than non chaotic behaviors.
 
 \section{Introduction}
 \label{S1}
-REVOIR TOUT L'INTRO et l'ABSTRACT en fonction d'asynchrone, chaotic
-
-
 Several research works have proposed or run chaotic neural networks
 these last years. The complex dynamics of such a networks leads
 to various potential application areas: associative
@@ -156,13 +153,13 @@ using a theoretical framework based on the Devaney's definition of chaos
 qualitative and quantitative tools to evaluate the complex behavior of
 a dynamical system: ergodicity, expansivity, and so on. More
 precisely, in this paper, which is an extension of a previous work
-\cite{bgs11:ip}, we establish the equivalence between asynchronous
+\cite{bgs11:ip}, we establish the equivalence between chaotic
 iterations and a class of globally recurrent MLP. The investigation
 the converse problem is the second contribution: we indeed study the
 ability for classical MultiLayer Perceptrons to learn a particular family of
-discrete chaotic dynamical systems. This family, called chaotic
-iterations, is defined by a Boolean vector, an update function, and a
+discrete chaotic dynamical systems. This family
+is defined by a Boolean vector, an update function, and a
 sequence giving which component to update at each iteration. It has
 been previously established that such dynamical systems is chaotically
 iterated (as it is defined by Devaney) when the chosen function has
@@ -574,7 +571,10 @@ $f\left(x_1,x_2,\dots,x_n\right)$ is equal to
 \left(F\left(1,\left(x_1,x_2,\dots,x_n\right)\right),\dots,
 F\left(n,\left(x_1,x_2,\dots,x_n\right)\right)\right) \enspace .
 \end{equation}
-Then $F=F_f$. If this recurrent neural network is seeded with
+Thus, for any $j$, $1 \le j \le n$, we have
+$f_j\left(x_1,x_2,\dots,x_n\right) =
+F\left(j,\left(x_1,x_2,\dots,x_n\right)\right)$.
+If this recurrent neural network is seeded with
 $\left(x_1^0,\dots,x_n^0\right)$ and
 $S \in \llbracket 1;n \rrbracket^{\mathds{N}}$, it produces exactly the
 same output vectors than the
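The diff above edits text describing the family of chaotic iterations studied in the paper: a Boolean state vector, an update function $f$, and a sequence (strategy) $S$ naming which single component to update at each step, with $F(j,x) = f_j(x)$ as in the last hunk. As a minimal sketch of that iteration scheme (the `negation` function, the initial vector, and the strategy values are illustrative assumptions, not taken from the commit):

```python
# Sketch of "chaotic iterations": a Boolean vector x in {0,1}^n, an update
# function f: {0,1}^n -> {0,1}^n, and a strategy S giving which single
# component of x is updated at each step (0-indexed here, not 1-indexed
# as in the paper's llbracket 1;n rrbracket notation).

def chaotic_iterations(f, x0, strategy):
    """Yield successive states: at step t, only component strategy[t]
    of x is replaced by the corresponding component of f(x)."""
    x = list(x0)
    for j in strategy:
        x[j] = f(tuple(x))[j]  # update only component j; others unchanged
        yield tuple(x)

def negation(x):
    # Vectorial negation (flips every bit) -- a standard example of a map
    # whose iterations are chaotic in this framework.
    return tuple(1 - b for b in x)

# Illustrative run with an assumed seed and strategy.
states = list(chaotic_iterations(negation, (0, 0, 1), [0, 2, 1, 0]))
# -> [(1, 0, 1), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
```

Here `f(tuple(x))[j]` plays the role of $F(j, (x_1,\dots,x_n)) = f_j(x_1,\dots,x_n)$ from the last hunk of the diff.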