\date{\today}
\newcommand{\CG}[1]{\begin{color}{red}\textit{#1}\end{color}}
\newcommand{\JFC}[1]{\begin{color}{blue}\textit{#1}\end{color}}

\begin{abstract}
%% Text of abstract
exponent. An alternative approach is to consider a well-known neural
network architecture: the MultiLayer Perceptron (MLP). These networks
are well suited to modeling nonlinear relationships between data, due to
their universal approximator capacity.
\JFC{Michel, can you give a reference for this?}
Thus, this kind of network can
be trained to model a physical phenomenon known to be chaotic, such as
Chua's circuit \cite{dalkiran10}. Sometimes, a neural network that
is built by combining transfer functions and initial conditions that are both
The behavior of the neural network is such that when the initial state
is $x^0~\in~\mathds{B}^n$ and a sequence $(S^t)^{t \in \Nats}$ is
given as outside input,
\JFC{say more about the outside world}
then the sequence of successively published
output vectors $\left(x^t\right)^{t \in \mathds{N}^{\ast}}$ is exactly
the one produced by the chaotic iterations formally described in
Eq.~(\ref{eq:CIs}). This means that, mathematically, if we use similar
overcome this drawback for a particular category of multilayer
perceptrons defined below, and for Devaney's formulation of chaos.
In spite of this restriction, we think that this approach can be
extended to a large variety of neural networks.

We consider a multilayer perceptron of the following form: inputs
are $n$ binary digits and one integer value, while outputs are $n$
compute the new output $\left(x^{t+1}_1,\dots,x^{t+1}_n\right)$.
While the remaining input receives a new integer value $S^t \in
\llbracket1;n\rrbracket$, which is provided by the outside world
(a minimal sketch of this update scheme is given after the list).
\JFC{say more about the outside world}
\end{itemize}
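
To make this input/output contract concrete, here is a minimal Python
sketch (ours, not part of the model itself; the function names are
illustrative): at each time step, only the component designated by the
outside world is updated, here with the vectorial negation $f_0$.
\begin{verbatim}
# Minimal sketch of the update rule such a network has to reproduce.
# x is the current Boolean state; s is the component chosen by the
# outside world (0-based here, whereas the text uses 1..n).

def f0(x):
    """Vectorial negation: flip every component of x."""
    return [1 - xi for xi in x]

def step(x, s, f=f0):
    """Chaotic iterations: only component s is updated by f."""
    y = list(x)
    y[s] = f(x)[s]
    return y

# Example run: initial state x^0 and an outside-world strategy (S^t).
x = [0, 1, 0]
for s in [2, 0, 2, 1]:
    x = step(x, s)
print(x)   # [1, 0, 0]
\end{verbatim}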
The topological behavior of these particular neural networks can be
by $F: \llbracket 1;n \rrbracket \times \mathds{B}^n \rightarrow
\mathds{B}^n$ the function that maps the value
$\left(s,\left(x_1,\dots,x_n\right)\right) \in \llbracket 1;n
\rrbracket \times \mathds{B}^n$
\JFC{here, this should be $S^t$ rather than $s$, no?}
into the value
$\left(y_1,\dots,y_n\right) \in \mathds{B}^n$, where
$\left(y_1,\dots,y_n\right)$ is the response of the neural network
after the initialization of its input layer with
$\left(s,\left(x_1,\dots, x_n\right)\right)$.
\JFC{here, this should be $S^t$ rather than $s$, no?}
Secondly, we define $f:
\mathds{B}^n \rightarrow \mathds{B}^n$ such that
$f\left(x_1,x_2,\dots,x_n\right)$ is equal to
\begin{equation}
\emptyset$, we can find some $n_0 \in \mathds{N}$ such that for any $n$,
$n\geq n_0$, we have $f^n(U) \cap V \neq \emptyset$.
\end{definition}
\JFC{Give these definitions an intuitive meaning}
Intuitively, topological mixing means that, under iteration, any open
region of the phase space eventually intersects every other open
region, and keeps doing so from some time on.

It has been proven in Ref.~\cite{gfb10:ip} that chaotic iterations
are expansive and topologically mixing when $f$ is the
vectorial negation $f_0$.
Consequently, these properties are inherited by the CI-MLP($f_0$)
recurrent neural network previously presented, inducing greater
unpredictability. Any difference in the initial value of the input
constant.
Let us then focus on the consequences, for a neural network, of being
chaotic according to Devaney's definition. Intuitively, the topological
transitivity property implies indecomposability, which is formally defined
as follows:

\begin{definition} \label{def10}
A dynamical system $\left( \mathcal{X}, f\right)$ is
This section presents how (not) chaotic iterations of $G_f$ are
translated into another model more suited to artificial neural
networks.
\JFC{elaborate on what ``more suited'' means}
Formally, input and output vectors are pairs~$((S^t)^{t \in
\Nats},x)$ and $\left(\sigma((S^t)^{t \in \Nats}),F_{f}(S^0,x)\right)$
as defined in~Eq.~(\ref{eq:Gf}).
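
To illustrate this reformulation, the following Python sketch (ours;
truncating strategies to finite prefixes is an assumption made for the
illustration) builds such an input/output pair:
\begin{verbatim}
# Build the pair ((S, x), (sigma(S), F_f(S^0, x))), where sigma is
# the shift on strategies and S a finite prefix of the strategy.

def f0(x):
    return [1 - xi for xi in x]

def F(s, x, f=f0):
    """F_f: update only component s of x (0-based indexing)."""
    y = list(x)
    y[s] = f(x)[s]
    return y

def io_pair(S, x, f=f0):
    return ((S, x), (S[1:], F(S[0], x, f)))

print(io_pair([2, 0, 1], [0, 1, 0]))
# (([2, 0, 1], [0, 1, 0]), ([0, 1], [0, 1, 1]))
\end{verbatim}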
first one, the number of inputs follows the increase of the Boolean
vectors coding configurations. In the latter case, the coding gives
finer information on configuration evolution.
\JFC{I did not understand the previous paragraph; it should be reworked.}
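
For reference, the Gray code mentioned in Table~\ref{tab2} can be
computed with the standard binary-reflected construction, sketched
below (a generic sketch; the exact encoding pipeline used in the
experiments is not reproduced here):
\begin{verbatim}
# Binary-reflected Gray code: consecutive integers have Gray codes
# that differ in exactly one bit.

def to_int(x):
    """Read a Boolean vector as an integer, most significant bit first."""
    n = 0
    for b in x:
        n = (n << 1) | b
    return n

def gray(n):
    """Gray code of the integer n."""
    return n ^ (n >> 1)

print(gray(to_int([0, 1, 1])))   # 3 -> 2, i.e. binary 010
\end{verbatim}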
\begin{table}[b]
\caption{Prediction success rates for configurations expressed with Gray code}
\label{tab2}