networks. On the one hand, we show how to build such a network; on the
other hand, we provide a method to check whether a given neural network
is chaotic. Finally, the ability of classical feedforward multilayer
perceptrons to learn sets of data obtained from a dynamical
system is investigated. Various Boolean functions are iterated on finite
states. Iterations of some of them are proven to be chaotic,
as defined by
sensitivity, and so on). However, such networks are often claimed to be
chaotic without any rigorous mathematical proof. Therefore, in this
work a theoretical framework based on Devaney's definition of
chaos is introduced. Starting with a relationship between discrete
iterations and Devaney's chaos, we first show how to build a
recurrent neural network that is equivalent to a chaotic map, and
second how to check whether an already available network is
\section{Introduction}
\label{S1}
Several research works have proposed or implemented chaotic neural
networks in recent years. The complex dynamics of such networks leads to
various potential application areas: associative
qualitative and quantitative tools to evaluate the complex behavior of
a dynamical system: ergodicity, expansivity, and so on. More
precisely, in this paper, which is an extension of a previous work
\cite{bgs11:ip}, we establish the equivalence between chaotic
iterations and a class of globally recurrent MLPs.
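To give a concrete picture of the discrete systems involved in this
equivalence, the short Python sketch below (purely illustrative; the
function and variable names are ours) iterates a Boolean map
asynchronously, updating at each time step only the component designated
by a strategy sequence:
\begin{verbatim}
from typing import Callable, List, Sequence

def chaotic_iterations(f: Callable[[List[int]], List[int]],
                       x0: List[int],
                       strategy: Sequence[int]) -> List[List[int]]:
    """At step t, only the component designated by strategy[t]
    (numbered from 0 here, from 1 in the text) is updated."""
    x, orbit = list(x0), [list(x0)]
    for s in strategy:
        x[s] = f(x)[s]          # update the chosen component only
        orbit.append(list(x))
    return orbit

# Toy example with n = 3 and the vectorial negation as update function.
print(chaotic_iterations(lambda x: [1 - v for v in x],
                         [1, 0, 1], [0, 2, 1, 0]))
\end{verbatim}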
The investigation of the converse problem is the second contribution:
we indeed study the ability of
classical multilayer perceptrons to learn a particular family of
discrete chaotic dynamical systems. This family
is defined by a Boolean vector, an update function, and a
sequence giving which component to update at each iteration. It has
been previously established that such a dynamical system is
chaotic (as defined by Devaney) when the chosen function has
\left(F\left(1,\left(x_1,x_2,\dots,x_n\right)\right),\dots,
F\left(n,\left(x_1,x_2,\dots,x_n\right)\right)\right) \enspace .
\end{equation}
Thus, for any $j$, $1 \le j \le n$, we have
$f_j\left(x_1,x_2,\dots,x_n\right) =
F\left(j,\left(x_1,x_2,\dots,x_n\right)\right)$.
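To illustrate this construction, the following Python sketch (purely
illustrative; the helper names are ours, and $F$ stands for the Boolean
map computed by the network) builds $f$ from $F$ and checks, on a short
strategy, the coincidence of produced state vectors stated below:
\begin{verbatim}
from typing import Callable, List, Sequence

def induced_map(F: Callable[[int, List[int]], int], n: int):
    """Build f with f_j(x) = F(j, x), components numbered from 1."""
    return lambda x: [F(j, x) for j in range(1, n + 1)]

def iterate(update: Callable[[int, List[int]], int],
            x0: List[int], strategy: Sequence[int]) -> List[List[int]]:
    """Replace, at each step, the component chosen by the strategy."""
    x, orbit = list(x0), [list(x0)]
    for s in strategy:
        x[s - 1] = update(s, x)
        orbit.append(list(x))
    return orbit

# Toy stand-in for the map computed by a trained network.
F = lambda j, x: 1 - x[j - 1]
f = induced_map(F, 3)
x0, S = [1, 0, 1], [2, 1, 3, 3, 2]
# Iterating the network map F, or the induced map f, gives the same orbit.
assert iterate(F, x0, S) == iterate(lambda j, x: f(x)[j - 1], x0, S)
\end{verbatim}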
If this recurrent neural network is seeded with
$\left(x_1^0,\dots,x_n^0\right)$ and $S \in \llbracket 1;n
\rrbracket^{\mathds{N}}$, it produces exactly the
same output vectors as the