chaos is introduced. Starting from a relationship between discrete
iterations and Devaney's chaos, we first show how to build a
recurrent neural network that is equivalent to a chaotic map and
chaotic or not. We also study different topological properties of
these truly chaotic neural networks. Finally, we show that the
learning, with neural networks having a classical feedforward
various potential application areas: associative
memories~\cite{Crook2007267} and digital security tools like hash
functions~\cite{Xiao10}, digital
their universal approximation capacity
\cite{Cybenko89,DBLP:journals/nn/HornikSW89}. Thus, such networks
can be trained to model a physical phenomenon known to be
chaotic, such as Chua's circuit \cite{dalkiran10}. Sometimes, a neural
network that is built by combining transfer functions and initial
conditions that are both chaotic is itself claimed to be chaotic
\cite{springerlink:10.1007/s00521-010-0432-2}.
precisely, in this paper, which is an extension of a previous work
\cite{bgs11:ip}, we establish the equivalence between chaotic
iterations and a class of globally recurrent MLP. The second
ability of classical multilayer perceptrons to learn a particular
family of discrete chaotic dynamical systems. This family is defined
by a Boolean vector, an update function, and a sequence defining which
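To make this family concrete, the dynamics just described can be sketched as follows. This is an illustrative sketch only (the function and variable names are ours, not from the cited works): the state is a Boolean vector $x$, $F$ is an update function on Boolean vectors, and the strategy sequence selects which single component is refreshed at each step while the others are kept unchanged.

```python
# Sketch of so-called "chaotic iterations" on Boolean vectors.
# Hypothetical names: a state is a Boolean vector x of length n,
# F maps {0,1}^n to {0,1}^n, and `strategy` is the sequence telling
# which component is updated at each discrete time step.

def chaotic_iterations(x, F, strategy):
    """Yield successive states; only component strategy[t] is refreshed at step t."""
    x = list(x)
    for i in strategy:
        x[i] = F(x)[i]  # update the selected component only
        yield tuple(x)

# Example update function (for illustration): componentwise Boolean negation.
def negation(x):
    return [1 - b for b in x]

# Initial vector (0, 0, 1); the strategy updates components 0, 2, 1, 0 in turn.
states = list(chaotic_iterations((0, 0, 1), negation, [0, 2, 1, 0]))
# -> [(1, 0, 1), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
```

Each iterate differs from its predecessor in exactly one coordinate, which is what lets such a system be matched, state for state, against a recurrent network whose inputs encode the current vector and the strategy term.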