From: Raphaël Couturier
Date: Fri, 6 Sep 2019 15:08:50 +0000 (+0200)
Subject: new
X-Git-Url: https://bilbo.iut-bm.univ-fcomte.fr/and/gitweb/book_chic.git/commitdiff_plain/ca2470ce26187c7be2c722bc2ff54011525c94f4?hp=5cbdb1c4043f8d808549175ef543cf67ab8ca12a

new
---

diff --git a/chap2fig8.png b/chap2fig8.png
new file mode 100644
index 0000000..3238265
Binary files /dev/null and b/chap2fig8.png differ
diff --git a/chapter2.tex b/chapter2.tex
index 3ffdbe5..8c52f22 100644
--- a/chapter2.tex
+++ b/chapter2.tex
@@ -1217,10 +1217,94 @@ See, for example~\cite{Lahanierc} and the two implicit cones below (i.e. Figures
 
 \subsection{Motivation}
 
-As soon as the number of variables becomes excessive, most of the available techniques become impractical.
+As soon as the number of variables becomes excessive, most of the available techniques become impractical\footnote{This paragraph is strongly inspired by the paper~\cite{Grask}.}. In particular, when an implicative analysis is carried out by computing association rules~\cite{Agrawal}, the number of rules discovered undergoes a combinatorial explosion with the number of variables and quickly becomes intractable for a decision-maker, all the more so when conjunctions of variables are requested. In this context, a preliminary reduction in the number of variables is necessary. Thus,~\cite{Ritschard} proposed an efficient heuristic to reduce both the number of rows and columns of a table, using an association measure as a quasi-optimal criterion for controlling the heuristic. However, to our knowledge, the other research studies do not take into account, in their reduction criteria, the type of situation that motivates the grouping of rows or columns, whether the analyst's aim is the search for similarity, dissimilarity, implication, etc., between variables.
+Also, to the extent that there are variables that are very similar in the sense of statistical implication, it may be appropriate to substitute for them a single variable that would act as their leader, representing an equivalence class of variables that are similar for implicative purposes.
+We therefore propose, following the example of what is done to define the notion of quasi-implication, to define a notion of quasi-equivalence between variables, in order to build classes from which we will extract a leader.
+We will illustrate this with an example.
+Then, we will consider the possibility of using a genetic algorithm to optimize the choice of the representative of each quasi-equivalence class.
+
+\subsection{Definition of quasi-equivalence}
+
+Two binary variables $a$ and $b$ are logically equivalent for the SIA when the two quasi-implications $a \Rightarrow b$ and $b \Rightarrow a$ are simultaneously satisfied at a given threshold.
+We have developed two criteria to assess the quality of a quasi-implication: one is the statistical surprise based on the likelihood of the link in the sense of~\cite{Lerman}; the other is the entropic form of quasi-inclusion~\cite{Grash2}, which is presented in this chapter (§7).
+
+According to the first criterion, we could say that two variables $a$ and $b$ are quasi-equivalent when the intensity of implication $\varphi(a,b)$ of $a\Rightarrow b$ differs little from that of $b \Rightarrow a$. However, for large sample sizes (several thousand), this criterion is no longer sufficiently discriminating to validate the inclusion.
+
+According to the second criterion, an entropic measure of the imbalance between the numbers $n_{a \wedge b}$ (individuals who satisfy $a$ and $b$) and $n_{a \wedge \overline{b}}$ (individuals who satisfy $a$ and $\neg b$, the counter-examples to the implication $a\Rightarrow b$) is used to indicate the quality of the implication $a\Rightarrow b$, on the one hand, and the numbers $n_{a \wedge b}$ and $n_{\overline{a} \wedge b}$ to assess the quality of the reciprocal implication $b\Rightarrow a$, on the other.
+
+Here we will use a method comparable to that used in Chapter 3 to define the entropic implication index.
+
+Denoting by $n_a$ and $n_b$ the respective numbers of occurrences of $a$ and $b$, the imbalance of the rule $a\Rightarrow b$ is measured by a conditional entropy $K(b \mid a=1)$, and that of $b\Rightarrow a$ by $K(a \mid b=1)$, with:
+
+\begin{eqnarray*}
+ K(b\mid a=1) = - \left( 1- \frac{n_{a\wedge b}}{n_a}\right) \log_2 \left( 1- \frac{n_{a\wedge b}}{n_a}\right) - \frac{n_{a\wedge b}}{n_a}\log_2 \frac{n_{a\wedge b}}{n_a} & \quad \mbox{if} \quad \frac{n_{a \wedge b}}{n_a} > 0.5\\
+ K(b\mid a=1) = 1 & \quad \mbox{if} \quad \frac{n_{a \wedge b}}{n_a} \leq 0.5\\
+ K(a\mid b=1) = - \left( 1- \frac{n_{a\wedge b}}{n_b}\right) \log_2 \left( 1- \frac{n_{a\wedge b}}{n_b}\right) - \frac{n_{a\wedge b}}{n_b}\log_2 \frac{n_{a\wedge b}}{n_b} & \quad \mbox{if} \quad \frac{n_{a \wedge b}}{n_b} > 0.5\\
+ K(a\mid b=1) = 1 & \quad \mbox{if} \quad \frac{n_{a \wedge b}}{n_b} \leq 0.5
+\end{eqnarray*}
+
+These two entropies must be low enough for it to be possible to bet on $b$ (resp. $a$) with good certainty when $a$ (resp. $b$) is realized. Their respective complements to 1 must therefore be simultaneously large.
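As a minimal illustration, the conditional entropy $K$ defined above could be computed as follows. This is only a sketch: the function name and the explicit handling of the boundary case $\frac{n_{a\wedge b}}{n_a}=1$ (where the entropy tends to 0) are our own choices, not part of the text.

```python
import math

def K(n_examples, n_total):
    """Conditional entropy of a rule, e.g. K(b|a=1) for a => b with
    n_examples = n_{a AND b} and n_total = n_a (symmetrically for b => a).
    Returns 1 (maximal uncertainty) when the success rate is <= 0.5."""
    p = n_examples / n_total
    if p <= 0.5:
        return 1.0
    if p == 1.0:
        # limit of the binary entropy as p -> 1 (assumed convention)
        return 0.0
    # binary entropy of the distribution (p, 1 - p)
    return -(1 - p) * math.log2(1 - p) - p * math.log2(p)
```

For instance, a rule verified by 90 of 100 individuals gives an entropy close to 0.47, while a rule verified by at most half of them saturates at 1.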
+
+\begin{figure}[htbp]
+  \centering
+\includegraphics[scale=0.5]{chap2fig8.png}
+\caption{Illustration of the functions $K$ and $1-K^2$ on $[0; 1]$.}
+\label{chap2fig7}
+\end{figure}
+
+
+\definition A first entropic index of equivalence is given by:
+$$e(a,b) = \left (\left[ 1 - K^2(b \mid a = 1)\right ]\left[ 1 - K^2(a \mid b = 1) \right]\right)^{\frac{1}{4}}$$
+
+When this index takes values in the neighbourhood of $1$, it reflects a good quality of the double implication.
+In addition, in order to better take the examples ($n_{a \wedge b}$) into account, we integrate this quantity through a similarity index $s(a,b)$ of the variables, for example in the sense of I.C. Lerman~\cite{Lermana}.
+The quasi-equivalence index is then constructed by combining these two concepts.
+
+\definition A second entropic equivalence index is given by the formula
+
+$$\sigma(a,b)= \left [ e(a,b) \cdot s(a,b)\right ]^{\frac{1}{2}}$$
+
+From this point of view, we then set out the quasi-equivalence criterion that we use.
+
+\definition The pair of variables $\{a,b\}$ is said to be quasi-equivalent at the selected quality level $\beta$ if $\sigma(a,b) \geq \beta$.
+For example, a value $\beta=0.95$ could be considered as reflecting a good quasi-equivalence between $a$ and $b$.
+
+\subsection{Algorithm for the construction of quasi-equivalence classes}
+
+Let us consider a set $V = \{a,b,c,...\}$ of $v$ variables with a valued relation $R$ induced by the measurement of quasi-equivalence on all pairs of $V$.
+We will assume the pairs of variables are ranked in decreasing order of quasi-equivalence.
+If the quality threshold for quasi-equivalence has been set at $\beta$, only the leading pairs $\{a,b\}$ satisfying the inequality $\sigma(a,b)\ge \beta$ will be retained.
+In general, only a subset $V'$, of cardinality $v'$, of the variables of $V$ will satisfy this inequality.
+If this set $V'$ is empty or too small, the user can lower the threshold value.
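For concreteness, the two indices $e(a,b)$ and $\sigma(a,b)$ defined above might be computed as in the following sketch. The similarity $s(a,b)$ (e.g. in Lerman's sense) is assumed to be supplied by the caller, and the function names are illustrative only.

```python
import math

def K(n_examples, n_total):
    # conditional entropy of a rule, as defined earlier in the section;
    # saturates at 1 when the success rate is <= 0.5
    p = n_examples / n_total
    if p <= 0.5:
        return 1.0
    if p == 1.0:
        return 0.0
    return -(1 - p) * math.log2(1 - p) - p * math.log2(p)

def e(n_ab, n_a, n_b):
    """First entropic equivalence index:
    e(a,b) = ([1 - K(b|a=1)^2][1 - K(a|b=1)^2])^(1/4)."""
    return ((1 - K(n_ab, n_a) ** 2) * (1 - K(n_ab, n_b) ** 2)) ** 0.25

def sigma(n_ab, n_a, n_b, s):
    """Quasi-equivalence index sigma(a,b) = (e(a,b) * s(a,b))^(1/2),
    where s is a similarity index assumed to be computed elsewhere."""
    return (e(n_ab, n_a, n_b) * s) ** 0.5
```

A pair would then be retained for a class whenever `sigma(...) >= beta`, e.g. with `beta = 0.95` as suggested above.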
+The relation being symmetric, we will have at most $\frac{v'(v'-1)}{2}$ pairs to study.
+As for $V-V'$, it contains only non-reducible variables.
+
+We propose to use the following greedy algorithm:
+\begin{enumerate}
+\item A first potential class $C_1^0= \{e,f\}$ is constituted such that $\sigma(e,f)$ represents the largest of the $\beta$-equivalence values.
+  If possible, this class is extended to a new class $C_1$ by taking from $V'$ all the elements $x$ such that every pair of variables within this class has a quasi-equivalence greater than or equal to $\beta$;
+
+\item We continue as follows:
+  \begin{enumerate}
+  \item If $o$ and $k$, forming the pair $(o,k)$ immediately below $(e,f)$ according to the index $\sigma$, both belong to $C_1$, then we move to the pair immediately below $(o,k)$ and proceed as in 1.;
+  \item If neither $o$ nor $k$ belongs to $C_1$, we proceed as in 1. from the pair they constitute, which forms the basis of a new class;
+  \item If only one of $o$ and $k$ belongs to $C_1$, the other variable can either form a singleton class or belong to a future class, on which we will of course proceed as above.
+  \end{enumerate}
+\end{enumerate}
+
+After a finite number of iterations, a partition of $V$ into $r$ classes of $\sigma$-equivalence is available: $\{C_1, C_2,..., C_r\}$.
+The quality of the reduction may be assessed by a gross or proportional index of $\beta^{\frac{p}{k}}$.
+However, we prefer the criterion defined below, which has the advantage of integrating the choice of the representative.
+
+In addition, the $r$ variables representing the $r$ classes of $\sigma$-equivalence could be selected on the basis of the following elementary criterion: the quality of the connection of each variable with those of its class.
+However, this criterion does not optimize the reduction, since the choice of the representative is relatively arbitrary and may be a sign of triviality of the variable.
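One possible reading of the greedy procedure above can be sketched in Python. The dictionary representation of $\sigma$, the deterministic tie-breaking, and the treatment of pairs whose variables are already placed (cases 2(a) and 2(c)) are assumptions of this sketch, not prescriptions of the text.

```python
def build_classes(sigma_pairs, beta):
    """Greedy construction of quasi-equivalence classes.
    sigma_pairs maps frozenset({x, y}) -> sigma(x, y); only pairs with
    sigma >= beta (the set V') are considered."""
    # pairs above the threshold, ranked in decreasing order of sigma
    ranked = sorted(
        (p for p, s in sigma_pairs.items() if s >= beta),
        key=lambda p: sigma_pairs[p],
        reverse=True,
    )
    classes = []
    assigned = set()

    def fits(x, cls):
        # x may join cls only if every pair {x, y}, y in cls, is beta-equivalent
        return all(sigma_pairs.get(frozenset({x, y}), 0.0) >= beta for y in cls)

    for pair in ranked:
        x, y = tuple(pair)
        if x in assigned or y in assigned:
            # at least one variable is already placed (cases 2a/2c):
            # this pair cannot seed a new class
            continue
        cls = {x, y}           # seed class, as in step 1
        assigned |= cls
        # extend the class with every free variable compatible with all members
        for v in sorted({v for p in ranked for v in p} - assigned):
            if fits(v, cls):
                cls.add(v)
                assigned.add(v)
        classes.append(cls)
    return classes
```

Variables of $V-V'$ never appear in a retained pair and therefore remain outside every class, matching the remark above that they are non-reducible.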
+
diff --git a/references.tex b/references.tex
index c98a3d5..d8cae26 100644
--- a/references.tex
+++ b/references.tex
@@ -138,7 +138,8 @@
 \bibitem{Grasg} Gras R., Briand H., Peter P. (1996) Structuration sets with implication intensity, Proceedings of the International Conference on Ordinal and Symbolic Data Analysis - OSDA 95, E. Diday, Y. Chevallier, Otto Opitz, Eds., Springer, Paris, 147-156.
 
 \bibitem{Grash} Gras R., Briand H., Peter P., Philippé J. (1997) Implicative statistical analysis, Proceedings of International Congress I.F.C.S. 96, Kobé, Springer-Verlag, Tokyo, 1997, 412-419.
-Gras R., Kuntz P., Couturier R. et Guillet F., (2001), Une version entropique de l'intensité d'implication pour les corpus volumineux,.Extraction des Connaissances et Apprentissage (ECA), vol. 1, n° 1-2, 69-80. Hermès Science Publication.
+
+\bibitem{Grash2} Gras R., Kuntz P., Couturier R. et Guillet F. (2001) Une version entropique de l'intensité d'implication pour les corpus volumineux, Extraction des Connaissances et Apprentissage (ECA), vol. 1, n° 1-2, 69-80, Hermès Science Publication.
 
 \bibitem{Grasi} Gras R., Kuntz P. et Briand H. (2001) Les fondements de l'analyse statistique implicative et quelques prolongements pour la fouille de