From: Michel Salomon <salomon@caseb.iut-bm.univ-fcomte.fr>
Date: Mon, 10 Oct 2011 14:40:41 +0000 (+0200)
Subject: This git is crap
X-Git-Url: https://bilbo.iut-bm.univ-fcomte.fr/and/gitweb/chaos1.git/commitdiff_plain/refs/heads/master?ds=sidebyside

This git is crap
---

diff --git a/main.tex b/main.tex
index 79209de..49564db 100644
--- a/main.tex
+++ b/main.tex
@@ -556,11 +556,7 @@ condition  $\left(S,(x_1^0,\dots,  x_n^0)\right)  \in  \llbracket  1;n
 \rrbracket^{\mathds{N}}  \times \mathds{B}^n$.
 Theoretically speaking, such iterations of $F_f$ are thus a formal
 model of this kind of recurrent neural network. In the rest of this
-<<<<<<< HEAD
 paper, we will call such multilayer perceptrons ``CI-MLP($f$)'', which
-=======
-paper,  we will  call such  multilayer perceptrons  ``CI-MLP($f$)'', which
->>>>>>> 3df586d673bc4f3b32fa0dd1cb46796256744772
 stands for ``Chaotic Iterations based MultiLayer Perceptron''.
 
 Checking  if CI-MLP($f$)  behaves chaotically  according  to Devaney's
@@ -864,11 +860,7 @@ are compared.
 
 Thereafter we give,  for the different learning setups  and data sets,
 the mean prediction success rate obtained for each output. Such a rate
-<<<<<<< HEAD
 represents the percentage of  input-output pairs belonging to the test
-=======
-represents the  percentage of input-output pairs belonging  to the test
->>>>>>> 3df586d673bc4f3b32fa0dd1cb46796256744772
 subset  for  which  the   corresponding  output  value  was  correctly
 predicted. These values are computed over 10~trainings, each with
 random subset construction and random initialization of weights and biases.
@@ -964,10 +956,6 @@ configuration is always expressed as  a natural number, whereas in the
 first one the number of inputs grows with the size of the Boolean
 vectors coding the configurations. In this latter case, the coding
 gives finer information on configuration evolution.
-<<<<<<< HEAD
-=======
-
->>>>>>> 3df586d673bc4f3b32fa0dd1cb46796256744772
 \begin{table}[b]
 \caption{Prediction success rates for configurations expressed with Gray code}
 \label{tab2}
@@ -1020,11 +1008,7 @@ usually  unknown.   Hence, the  first  coding  scheme  cannot be  used
 systematically.   Therefore, we  provide  a refinement  of the  second
 scheme: each  output is learned  by a different  ANN. Table~\ref{tab3}
 presents the results for this approach. In all cases, whatever the
-<<<<<<< HEAD
 considered feedforward network topology, the maximum epoch number,
-=======
-considered  feedforward network topologies,  the maximum  epoch number,
->>>>>>> 3df586d673bc4f3b32fa0dd1cb46796256744772
 and the kind of iterations, the configuration success rate is slightly
 improved. Moreover, the strategy prediction rates reach almost
 12\%, whereas in Table~\ref{tab2} they never exceed 1.5\%. Despite
@@ -1132,11 +1116,7 @@ be investigated  too, to  discover which tools  are the  most relevant
 when facing a truly chaotic phenomenon. A comparison between learning
 success rate and prediction quality will be carried out. Concrete
 consequences in the biology, physics, and computer science security fields
-<<<<<<< HEAD
 will then be stated.
-=======
-will then be  stated.
->>>>>>> 3df586d673bc4f3b32fa0dd1cb46796256744772
 
 % \appendix{}