From: guyeux
Date: Mon, 10 Oct 2011 13:33:01 +0000 (+0200)
Subject: End of the proofreading pass
X-Git-Url: https://bilbo.iut-bm.univ-fcomte.fr/and/gitweb/chaos1.git/commitdiff_plain/3df586d673bc4f3b32fa0dd1cb46796256744772?ds=inline;hp=486652fa99fdbcf0992d32f910b729e648e33644

End of the proofreading pass
---

diff --git a/main.tex b/main.tex
index 016243a..5bd42bc 100644
--- a/main.tex
+++ b/main.tex
@@ -526,7 +526,6 @@ to an input one.
 compute the new output one $\left(x^{t+1}_1,\dots,x^{t+1}_n\right)$.
 While the remaining input receives a new integer value $S^t \in
 \llbracket1;n\rrbracket$, which is provided by the outside world.
-\JFC{en dire davantage sur l'outside world}
 \end{itemize}
 
 The topological behavior of these particular neural networks can be
@@ -557,7 +556,7 @@ condition
 $\left(S,(x_1^0,\dots, x_n^0)\right) \in
 \llbracket 1;n \rrbracket^{\mathds{N}} \times \mathds{B}^n$. Theoretically speaking, such
 iterations of $F_f$ are thus a formal model of these kind of recurrent neural networks. In the rest of this
-paper, we will call such multilayer perceptrons CI-MLP($f$), which
+paper, we will call such multilayer perceptrons ``CI-MLP($f$)'', which
 stands for ``Chaotic Iterations based MultiLayer Perceptron''.
 
 Checking if CI-MLP($f$) behaves chaotically according to Devaney's
@@ -639,7 +638,7 @@
 of the output space can be discarded when studying CI-MLPs: this
 space is intrinsically complicated and it cannot be decomposed or
 simplified.
-Furthermore, those recurrent neural networks exhibit the instability
+Furthermore, these recurrent neural networks exhibit the instability
 property:
 \begin{definition}
 A dynamical system $\left( \mathcal{X}, f\right)$ is {\bf unstable}
@@ -860,8 +859,8 @@
 trainings of two data sets, one of them describing chaotic
 iterations, are compared.
 Thereafter we give, for the different learning setups and data sets,
-the mean prediction success rate obtained for each output. A such rate
-represent the percentage of input-output pairs belonging to the test
+the mean prediction success rate obtained for each output. Such a rate
+represents the percentage of input-output pairs belonging to the test
 subset for which the corresponding output value was correctly
 predicted. These values are computed considering 10~trainings with
 random subsets construction, weights and biases initialization.
@@ -879,7 +878,7 @@ hidden layer up to 40~neurons and we consider larger number of epochs.
 \centering
 {\small \begin{tabular}{|c|c||c|c|c|}
 \hline
-\multicolumn{5}{|c|}{Networks topology: 6~inputs, 5~outputs and one hidden layer} \\
+\multicolumn{5}{|c|}{Networks topology: 6~inputs, 5~outputs, and one hidden layer} \\
 \hline
 \hline
 \multicolumn{2}{|c||}{Hidden neurons} & \multicolumn{3}{c|}{10 neurons} \\
@@ -931,7 +930,7 @@
 is observed (from 36.10\% for 10~neurons and 125~epochs to 70.97\% for 25~neurons and 500~epochs).
 We also notice that the learning of outputs~(2) and~(3) is more difficult.
 Conversely, for the non-chaotic case the simplest training setup is enough to predict
-configurations. For all those feedforward network topologies and all
+configurations. For all these feedforward network topologies and all
 outputs the obtained results for the non-chaotic case outperform the
 chaotic ones. Finally, the rates for the strategies show that the
 different networks are unable to learn them.
@@ -949,14 +948,14 @@
 configuration is always expressed as a natural number, whereas in the
 first one the number of inputs follows the increase of the Boolean vectors coding configurations.
 In this latter case, the coding gives a finer information on configuration evolution.
-\JFC{Je n'ai pas compris le paragraphe precedent. Devrait être repris}
+
 \begin{table}[b]
 \caption{Prediction success rates for configurations expressed with Gray code}
 \label{tab2}
 \centering
 \begin{tabular}{|c|c||c|c|c|}
 \hline
-\multicolumn{5}{|c|}{Networks topology: 3~inputs, 2~outputs and one hidden layer} \\
+\multicolumn{5}{|c|}{Networks topology: 3~inputs, 2~outputs, and one hidden layer} \\
 \hline
 \hline
 & Hidden neurons & \multicolumn{3}{c|}{10 neurons} \\
@@ -988,7 +987,7 @@
 usually unknown. Hence, the first coding scheme cannot be used
 systematically. Therefore, we provide a refinement of the second scheme: each output is learned by a different ANN.
 Table~\ref{tab3} presents the results for this approach. In any case, whatever the
-considered feedforward network topologies, the maximum epoch number
+considered feedforward network topologies, the maximum epoch number,
 and the kind of iterations, the configuration success rate is
 slightly improved. Moreover, the strategies predictions rates reach
 almost 12\%, whereas in Table~\ref{tab2} they never exceed 1.5\%. Despite of
@@ -1001,7 +1000,7 @@ appear to be an open issue.
 \centering
 \begin{tabular}{|c||c|c|c|}
 \hline
-\multicolumn{4}{|c|}{Networks topology: 3~inputs, 1~output and one hidden layer} \\
+\multicolumn{4}{|c|}{Networks topology: 3~inputs, 1~output, and one hidden layer} \\
 \hline
 \hline
 Epochs & 125 & 250 & 500 \\
@@ -1100,7 +1099,7 @@
 be investigated too, to discover which tools are the most relevant
 when facing a truly chaotic phenomenon. A comparison between learning rate success and prediction quality will be realized.
 Concrete consequences in biology, physics, and computer science security fields
-will be stated.
+will then be stated.
 
 %
 \appendix{}