\date{\today}
\newcommand{\CG}[1]{\textcolor{red}{\textit{#1}}}
\newcommand{\JFC}[1]{\textcolor{blue}{\textit{#1}}}

\begin{abstract}
%% Text of abstract
exponent. An alternative approach is to consider a well-known neural
network architecture: the MultiLayer Perceptron (MLP). These networks
are well suited to modeling nonlinear relationships in data, thanks to
their universal approximation capability.
\JFC{Michel, could you provide a reference for this?}
Thus, such networks can
be trained to model a physical phenomenon known to be chaotic, such as
Chua's circuit \cite{dalkiran10}. Sometimes, a neural network that
is built by combining transfer functions and initial conditions that are both
The behavior of the neural network is such that when the initial state
is $x^0~\in~\mathds{B}^n$ and a sequence $(S^t)^{t \in \Nats}$ is
given as input from the outside world,
\JFC{say more about the outside world}
then the sequence of successively published
output vectors $\left(x^t\right)^{t \in \mathds{N}^{\ast}}$ is exactly
the one produced by the chaotic iterations formally described in
Eq.~(\ref{eq:CIs}). This means that, mathematically, if we use similar
overcome this drawback for a particular category of multilayer
perceptrons defined below, and for Devaney's formulation of chaos.
In spite of this restriction, we think that this approach can be
extended to a large variety of neural networks.

We consider a multilayer perceptron of the following form: inputs
are $n$ binary digits and one integer value, while outputs are $n$
compute the new output vector $\left(x^{t+1}_1,\dots,x^{t+1}_n\right)$.
While the remaining input receives a new integer value $S^t \in
\llbracket1;n\rrbracket$, which is provided by the outside world (a sketch of
the resulting recurrence is given just after this list).
\JFC{say more about the outside world}
\end{itemize}
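To fix ideas, and writing $\mathcal{N}$ for the input--output map of this
perceptron ($\mathcal{N}$ is a notation introduced here only for this
sketch), the recurrent scheme described above amounts to the iteration
\[
\left(x^{t+1}_1,\dots,x^{t+1}_n\right) \;=\;
\mathcal{N}\left(S^t, x^t_1,\dots,x^t_n\right), \qquad t \in \mathds{N},
\]
where the $n$ Boolean outputs are fed back as inputs at the next time step,
while the integer input $S^t \in \llbracket1;n\rrbracket$ is refreshed by the
outside world.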
The topological behavior of these particular neural networks can be
by $F: \llbracket 1;n \rrbracket \times \mathds{B}^n \rightarrow
\mathds{B}^n$ the function that maps the value
$\left(s,\left(x_1,\dots,x_n\right)\right) \in \llbracket 1;n
\rrbracket \times \mathds{B}^n$
\JFC{here, shouldn't this be $S^t$ rather than $s$?}
into the value
$\left(y_1,\dots,y_n\right) \in \mathds{B}^n$, where
$\left(y_1,\dots,y_n\right)$ is the response of the neural network
after the initialization of its input layer with
$\left(s,\left(x_1,\dots, x_n\right)\right)$.
\JFC{here, shouldn't this be $S^t$ rather than $s$?}
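For instance, with $n=3$, the value $F\left(2,(1,0,1)\right)$ is the Boolean
vector read on the output layer once the input layer has been initialized
with the integer $2$ and the bits $(1,0,1)$ (the pair chosen here is an
arbitrary illustration).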
Secondly, we define $f:
\mathds{B}^n \rightarrow \mathds{B}^n$ such that
$f\left(x_1,x_2,\dots,x_n\right)$ is equal to
\begin{equation}
\emptyset$, we can find some $n_0 \in \mathds{N}$ such that for any $n$,
$n\geq n_0$, we have $f^n(U) \cap V \neq \emptyset$.
\end{definition}
\JFC{Explain the meaning of these definitions}

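Informally, the topological mixing property stated above means that the
iterates of any nonempty open set $U$ eventually meet, and then keep
meeting, any other nonempty open set $V$. With the usual quantification over
nonempty open sets (recalled here only for readability), it can be written
compactly as
\[
\forall U, V \neq \emptyset \mbox{ open},\ \exists n_0 \in \mathds{N},\
\forall n \geq n_0,\ f^n(U) \cap V \neq \emptyset .
\]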
It has been proven in Ref.~\cite{gfb10:ip} that chaotic iterations
are expansive and topologically mixing when $f$ is the
vectorial negation $f_0$.
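Recall that the vectorial negation acts componentwise: denoting by
$\overline{x}$ the negation of a bit $x$ (a notation used only in this
reminder),
\[
f_0\left(x_1,\dots,x_n\right) = \left(\overline{x_1},\dots,\overline{x_n}\right).
\]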
Consequently, these properties are inherited by the CI-MLP($f_0$)
recurrent neural network previously presented, thereby inducing a greater
unpredictability. Any difference in the initial value of the input
constant.
Let us then focus on the consequences of a neural network being chaotic
according to Devaney's definition. Intuitively, the topological
transitivity property implies indecomposability, which is formally defined
as follows:

\begin{definition} \label{def10}
A dynamical system $\left( \mathcal{X}, f\right)$ is
This section presents how (not) chaotic iterations of $G_f$ are
translated into another model more suited to artificial neural
networks.
\JFC{elaborate on what ``more suited'' means}
Formally, input and output vectors are pairs~$((S^t)^{t \in
\Nats},x)$ and $\left(\sigma((S^t)^{t \in \Nats}),F_{f}(S^0,x)\right)$
as defined in~Eq.~(\ref{eq:Gf}).
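In other words (assuming, as is standard in this framework, that $\sigma$
denotes the shift on strategy sequences and that $F_{f}(S^0,x)$ is the
configuration obtained after one chaotic iteration), the network is asked to
learn the mapping
\[
\left((S^t)^{t \in \Nats}, x\right) \;\longmapsto\;
\left(\sigma\left((S^t)^{t \in \Nats}\right), F_{f}(S^0,x)\right),
\]
that is, to shift the strategy by one step while updating the configuration
accordingly.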
first one the number of inputs follows the increase of the Boolean
vectors coding configurations. In this latter case, the coding gives
finer information on the evolution of configurations.
\JFC{I did not understand the previous paragraph; it should be rewritten.}
\begin{table}[b]
\caption{Prediction success rates for configurations expressed with Gray code}
\label{tab2}