From: Michel Salomon
Date: Mon, 10 Oct 2011 14:38:46 +0000 (+0200)
Subject: Merge branch 'master' of ssh://bilbo.iut-bm.univ-fcomte.fr/chaos1
X-Git-Url: https://bilbo.iut-bm.univ-fcomte.fr/and/gitweb/chaos1.git/commitdiff_plain/c8f45a13d8b4d730baf96ab75e62c6b9b8575ea0

Merge branch 'master' of ssh://bilbo.iut-bm.univ-fcomte.fr/chaos1

Conflicts:
	main.tex

Groumpf
---

diff --cc main.tex
index 49564db,5bd42bc..79209de
--- a/main.tex
+++ b/main.tex
@@@ -556,7 -556,7 +556,7 @@@ condition $\left(S,(x_1^0,\dots, x_n^
 \rrbracket^{\mathds{N}} \times \mathds{B}^n$. Theoretically speaking,
 such iterations of $F_f$ are thus a formal model of this kind of
 recurrent neural network. In the rest of this
+paper, we will call such multilayer perceptrons ``CI-MLP($f$)'', which
 stands for ``Chaotic Iterations based MultiLayer Perceptron''.

 Checking if CI-MLP($f$) behaves chaotically according to Devaney's
@@@ -860,9 -860,9 +864,9 @@@ are compared
 Thereafter we give, for the different learning setups and data sets,
 the mean prediction success rate obtained for each output. Such a rate
+represents the percentage of input-output pairs belonging to the test
 subset for which the corresponding output value was correctly
 predicted. These values are averaged over 10~trainings, each with a
 random construction of the subsets and a random initialization of the
 weights and biases.
 Firstly, neural networks having 10 and 25~hidden neurons are trained,
 with a maximum number of epochs that takes its value in
@@@ -956,6 -948,7 +964,7 @@@ configuration is always expressed a
 first one the number of inputs follows the increase of the Boolean
 vectors coding configurations. In this latter case, the coding gives
 finer information on the configuration evolution.
+
 \begin{table}[b]
 \caption{Prediction success rates for configurations expressed with Gray code}
 \label{tab2}
@@@ -1008,7 -987,7 +1020,7 @@@ usually unknown. Hence, the firs
 systematically. Therefore, we provide a refinement of the second
 scheme: each output is learned by a different ANN. Table~\ref{tab3}
 presents the results for this approach. In any case, whatever the
+considered feedforward network topology, the maximum epoch number,
 and the kind of iterations, the configuration success rate is slightly
 improved. Moreover, the strategy prediction rates reach almost 12\%,
 whereas in Table~\ref{tab2} they never exceed 1.5\%. Despite
@@@ -1116,7 -1099,7 +1132,7 @@@ be investigated too, to discover whic
 when facing a truly chaotic phenomenon. A comparison between learning
 rate success and prediction quality will be carried out. Concrete
 consequences in biology, physics, and computer science security fields
+will then be stated.

 % \appendix{}
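The success-rate metric evaluated in the hunks above (the percentage of input-output pairs whose output is correctly predicted) can be sketched as follows. This is an illustrative Python sketch, not code from the paper: the system size `n`, the choice of the negation function as $f$, and the trivially correct predictor are all assumptions made here only to show how the metric is computed; in the paper the predictor would be a trained multilayer perceptron.

```python
import random

def negation(state):
    """Boolean map f: flips every component of the state (illustrative choice)."""
    return tuple(1 - b for b in state)

def chaotic_iteration(state, i, f):
    """One chaotic iteration: only component i of the state is updated by f."""
    s = list(state)
    s[i] = f(state)[i]
    return tuple(s)

def success_rate(pairs, predict):
    """Fraction of input-output pairs whose output is correctly predicted."""
    return sum(1 for x, y in pairs if predict(x) == y) / len(pairs)

random.seed(0)
n = 4  # number of Boolean components (hypothetical small system)

# Build input-output pairs ((state, strategy term), next state) by iterating.
pairs = []
state = tuple(random.randint(0, 1) for _ in range(n))
for _ in range(1000):
    i = random.randrange(n)  # strategy term: which component to update
    nxt = chaotic_iteration(state, i, negation)
    pairs.append(((state, i), nxt))
    state = nxt

# A predictor that applies f itself scores 1.0 by construction; a trained
# ANN would be evaluated with the same success_rate function.
def predict(inp):
    st, i = inp
    return chaotic_iteration(st, i, negation)

print(success_rate(pairs, predict))  # → 1.0
```

Averaging this rate over several trainings with re-drawn subsets and re-initialized weights, as the text describes, would simply repeat this evaluation and take the mean.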