The relationship defined by statistical implication is reflexive and non-symmetric but, like induction and unlike deduction, it is obviously not transitive. However, we want it to model the partial relationship between two variables (the successes in our initial example).

By convention, if $a \Rightarrow b$ and $b \Rightarrow c$, we will accept the transitive closure $a \Rightarrow c$ only if $\varphi(a,c) \geq 0.5$, i.e. if the implicative relationship from $a$ to $c$ is better than neutrality, thereby emphasizing the dependence between $a$ and $c$. \\

{\bf Proposal:} By convention, if $a \Rightarrow b$ and $b \Rightarrow c$, there is a transitive closure $a \Rightarrow c$ if and only if $\varphi(a,c) \geq 0.5$, i.e. if the implicative relationship of $a$ over $c$, which reflects a certain dependence between $a$ and $c$, is better than its refutation. Note that, for any pair of variables $(x,~y)$, the arc $x \rightarrow y$ is weighted by the intensity of implication $\varphi(x,y)$.
\\ Let us take a formal example by assuming that, between the five variables $a$, $b$, $c$, $d$ and $e$, the following rules hold at a threshold above $0.5$: $c \Rightarrow a$, $c \Rightarrow e$, $c \Rightarrow b$, $d \Rightarrow a$, $d \Rightarrow e$, $a \Rightarrow b$ and $a \Rightarrow e$.

\subsection{Motivation}

As soon as the number of variables becomes excessive, most of the available techniques become impractical\footnote{This paragraph is strongly inspired by the paper~\cite{Grask}.}.
In particular, when an implicative analysis is carried out by calculating association rules~\cite{Agrawal}, the number of rules discovered undergoes a combinatorial explosion with the number of variables and quickly becomes intractable for a decision-maker, especially when conjunctions of variables are requested.
In this context, it is necessary to make a preliminary reduction in the number of variables.
Thus, \cite{Ritschard} proposed an efficient heuristic to reduce both the number of rows and columns of a table, using an association measure as a quasi-optimal criterion for controlling the heuristic.
However, to our knowledge, the other studies do not take into account, in their reduction criteria, the type of situation that gives rise to the need to group rows or columns, whether the analyst's problem and aim are the search for similarity, dissimilarity, implication, etc., between variables.
Also, to the extent that some variables are very similar in the sense of statistical implication, it may be appropriate to substitute for them a single variable: a leader representing an equivalence class of similar variables for implicative purposes.
We therefore propose, following the example of what was done to define the notion of quasi-implication, to define a notion of quasi-equivalence between variables, in order to build classes from which we will extract a leader.
We will illustrate this with an example.
Then, we will consider the possibility of using a genetic algorithm to optimize the choice of the representative of each quasi-equivalence class.

\subsection{Definition of quasi-equivalence}

Two binary variables $a$ and $b$ are logically equivalent for the SIA when the two quasi-implications $a \Rightarrow b$ and $b \Rightarrow a$ are simultaneously satisfied at a given threshold.
We have developed two criteria to assess the quality of a quasi-implication: one is the statistical surprise based on the likelihood of the link of~\cite{Lerman}, the other is the entropic form of quasi-inclusion~\cite{Grash2}, which is presented in this chapter (§7).

According to the first criterion, we could say that two variables $a$ and $b$ are quasi-equivalent when the intensity of implication $\varphi(a,b)$ of $a\Rightarrow b$ differs little from that of $b \Rightarrow a$. However, for large samples (several thousand individuals), this criterion is no longer sufficiently discriminating to validate the inclusion.

According to the second criterion, an entropic measure of the imbalance between the numbers $n_{a \wedge b}$ (individuals who satisfy $a$ and $b$) and $n_{a \wedge \overline{b}}$ (individuals who satisfy $a$ and $\neg b$, the counter-examples to the implication $a\Rightarrow b$) is used to indicate the quality of the implication $a\Rightarrow b$, on the one hand, and the imbalance between the numbers $n_{a \wedge b}$ and $n_{\overline{a} \wedge b}$ to assess the quality of the reciprocal implication $b\Rightarrow a$, on the other.

Here we will use a method comparable to that used in Chapter 3 to define the entropic implication index.
Denoting by $n_a$ and $n_b$ the respective numbers of realizations of $a$ and $b$, the imbalance of the rule $a\Rightarrow b$ is measured by a conditional entropy $K(b \mid a=1)$, and that of $b\Rightarrow a$ by $K(a \mid b=1)$, with:


\begin{eqnarray*}
 K(b\mid a=1) = - \left( 1- \frac{n_{a\wedge b}}{n_a}\right) \log_2 \left( 1- \frac{n_{a\wedge b}}{n_a}\right) - \frac{n_{a\wedge b}}{n_a}\log_2 \frac{n_{a\wedge b}}{n_a} & \quad \mbox{if} \quad \frac{n_{a \wedge b}}{n_a} > 0.5\\
 K(b\mid a=1) = 1 & \quad \mbox{if} \quad \frac{n_{a \wedge b}}{n_a} \leq 0.5\\
 K(a\mid b=1) = - \left( 1- \frac{n_{a\wedge b}}{n_b}\right) \log_2 \left( 1- \frac{n_{a\wedge b}}{n_b}\right) - \frac{n_{a\wedge b}}{n_b}\log_2 \frac{n_{a\wedge b}}{n_b} & \quad \mbox{if} \quad \frac{n_{a \wedge b}}{n_b} > 0.5\\
 K(a\mid b=1) = 1 & \quad \mbox{if} \quad \frac{n_{a \wedge b}}{n_b} \leq 0.5
\end{eqnarray*}

These two entropies must be low enough for it to be possible to bet on $b$ (resp. $a$) with good certainty when $a$ (resp. $b$) is realized. Their respective complements to 1 must therefore be simultaneously large.

\begin{figure}[htbp]
  \centering
\includegraphics[scale=0.5]{chap2fig8.png}
\caption{Illustration of the functions $K$ and $1-K^2$ on $[0; 1]$.}

\label{chap2fig8}
\end{figure}


\definition A first entropic index of equivalence is given by:
$$e(a,b) = \left (\left[ 1 - K^2(b \mid a = 1)\right ]\left[ 1 - K^2(a \mid b = 1) \right]\right)^{\frac{1}{4}}$$

When this index takes values in the neighbourhood of $1$, it reflects a good quality of the double implication.
In addition, in order to better take into account the examples $a \wedge b$, we integrate this parameter through a similarity index $s(a,b)$ between the variables, for example in the sense of I.C. Lerman~\cite{Lermana}.
The quasi-equivalence index is then constructed by combining these two concepts.
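As an illustrative sketch only (not the book's software), the conditional entropy $K$ and the first entropic index $e(a,b)$ defined above can be computed directly from the counts; the counts used in the example below are hypothetical.

```python
import math

def K(n_xy: int, n_x: int) -> float:
    """Conditional entropy K(y | x = 1) as defined above.

    n_xy: number of individuals satisfying both x and y
    n_x:  number of individuals satisfying x
    """
    f = n_xy / n_x   # frequency of y among the individuals satisfying x
    if f <= 0.5:     # too many counter-examples: maximal uncertainty
        return 1.0
    if f == 1.0:     # no counter-examples: zero uncertainty (0 * log 0 := 0)
        return 0.0
    return -(1 - f) * math.log2(1 - f) - f * math.log2(f)

def e(n_ab: int, n_a: int, n_b: int) -> float:
    """First entropic index of equivalence e(a, b)."""
    k_ab = K(n_ab, n_a)   # imbalance of the rule a => b
    k_ba = K(n_ab, n_b)   # imbalance of the rule b => a
    return ((1 - k_ab**2) * (1 - k_ba**2)) ** 0.25

# Hypothetical counts: 40 individuals satisfy a, 42 satisfy b,
# 38 satisfy both -- a near-equivalence, so e(a,b) is close to 1.
print(round(e(38, 40, 42), 3))
```

Note that $K$ saturates at $1$ as soon as the conditional frequency drops to $0.5$ or below, so $e(a,b)$ vanishes when either rule admits as many counter-examples as examples.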
\definition A second entropic equivalence index is given by the formula

$$\sigma(a,b)= \left [ e(a,b)\cdot s(a,b)\right ]^{\frac{1}{2}}$$

From this point of view, we then set out the quasi-equivalence criterion that we use.

\definition The pair of variables $\{a,b\}$ is said to be quasi-equivalent at the selected quality level $\beta$ if $\sigma(a,b) \geq \beta$.
For example, a value $\beta=0.95$ could be considered to reflect a good quasi-equivalence between $a$ and $b$.

\subsection{Algorithm for the construction of quasi-equivalence classes}

Let us assume a set $V = \{a,b,c,...\}$ of $v$ variables with a valued relation $R$ induced by the quasi-equivalence measure on all pairs of $V$.
We will assume the pairs of variables are ranked in decreasing order of quasi-equivalence.
If the quality threshold for quasi-equivalence has been set at $\beta$, only the leading pairs $\{a,b\}$ satisfying the inequality $\sigma(a,b)\ge \beta$ will be retained.
In general, only a part $V'$, of cardinal $v'$, of the variables of $V$ will satisfy this inequality.
If this set $V'$ is empty or too small, the user can lower the threshold value.
The relation being symmetric, we will have at most $\frac{v'(v'-1)}{2}$ pairs to study.
As for $V-V'$, it contains only non-reducible variables.

We propose to use the following greedy algorithm:
\begin{enumerate}
\item A first potential class $C_1^0= \{e,f\}$ is constituted such that $\sigma(e,f)$ represents the largest of the $\beta$-equivalence values.
  If possible, this class is extended to a new class $C_1$ by taking from $V'$ all the elements $x$ such that every pair of variables within this class has a quasi-equivalence greater than or equal to $\beta$;

\item We continue with:
  \begin{enumerate}
  \item If $o$ and $k$, forming the pair $(o,k)$ immediately below $(e,f)$ according to the index $\sigma$, both belong to $C_1$, then we move to the pair immediately below $(o,k)$ and proceed as in 1.;
  \item If neither $o$ nor $k$ belongs to $C_1$, we proceed as in 1. from the pair they constitute, which forms the basis of a new class;
  \item If exactly one of $o$ and $k$ does not belong to $C_1$, this variable can either form a singleton class or belong to a future class, to which we will of course apply the same procedure.
  \end{enumerate}
  \end{enumerate}

After a finite number of iterations, a partition of $V$ into $r$ classes of $\sigma$-equivalence is available: $\{C_1, C_2,..., C_r\}$.
The quality of the reduction may be assessed by a gross or proportional index of $\beta^{\frac{r}{k}}$.
However, we prefer the criterion defined below, which has the advantage of integrating the choice of representative.

In addition, the $r$ variables representing the $r$ classes of $\sigma$-equivalence could be selected on the basis of the following elementary criterion: the quality of the connection of each variable with those of its class.
However, this criterion does not optimize the reduction, since the choice of representative is relatively arbitrary and may be a sign of triviality of the variable.

\section{Conclusion}

This overview of the development of statistical implicative analysis shows, if need be, how a data processing theory is built step by step in response to problems presented by experts from various fields and to epistemological requirements that respect common sense and intuition.
It therefore appears not as a mere abstraction, since it is directly applicable to the situations that led to its genesis.
The extensions made to the types of data processed, to the modes of representation of their structures, and to the relationships between subjects, their descriptors and variables are indeed the result of the experts' demanding questions.
Its respective functions as revealer and analyzer seem to operate successfully in multiple application areas.

We will have noticed that the theoretical basis is simple, which could be the reason for its fertility.
Even if the questioning of the initial theoretical choices is not apparent here, this genesis has not been without tensions between the expected answers and the ease of access to them; these answers have therefore been sources of revision or even redesign, often discussed within the research team.
In any case, this method of data analysis will have made it possible, and will, Régis hopes, still make it possible, to highlight living structures thanks to the non-symmetrical approach on which it is based.

Among the current or future work proposed to our team, one topic concerns an extension of the SIA to vector variables in response to problems in proteomics.
Another is more broadly concerned with the relationship between SIA and the treatment of fuzzy sets (see Chapter 7).
The function of the ``implication'' fuzzy logic operator will be illustrated by new applications.
In another line of work, we will revisit our method to allow the SIA to handle missing entries in data tables, and we will continue the ongoing work on reducing redundant rules in SIA.
Finally, it is clear that this work will be conducted interactively with applications and, in particular, with the contribution of SIA to the classification rules in the leaves of classification trees.
\section{Annex 1: Two models of the classical implication intensity}

\subsection{Binomial model}

To examine the quality of the quasi-rule $a \Rightarrow b$, in the case where the variables are binary, is equivalent to measuring the quality of the inclusion of the subset of transactions satisfying $a$ in the subset of transactions satisfying $b$.
The counter-examples relating to the inclusion are indeed the same as those relating to the implication expressed by: ``any transaction satisfying $a$ also satisfies $b$''.
From this overall perspective, as soon as $n_a \leq n_b$, the quality of the quasi-rule $a \Rightarrow b$ can only be semantically better than that of $b \Rightarrow a$.
We will therefore assume, in what follows, that $n_a \leq n_b$ when studying $a \Rightarrow b$. In this case, the main population is finite and $Card~ E = n$.

Binomial modelling was the first to be adopted chronologically (see~\cite{Grasb}, chap. 2).
It was compared to other models in~\cite{Lermana}.
Let us briefly recall what the binomial model consists of.
With the adopted notations, $X$ and $Y$ are two random subsets, independently chosen from all the parts of $E$, respectively of the same cardinals $n_a$ and $n_b$ as the subsets of realizations of $a$ and $b$.
The observed value $n_{a \wedge \overline{b}}$ can be considered as the realization of a random variable $Card(X\cap \overline{Y})$, which represents the random number of counter-examples to the inclusion of $X$ in $Y$, counter-examples observed during $n$ successive independent draws. From there, $Card(X\cap \overline{Y})$ can be considered as a binomial variable of parameters $n$ and $\pi$, where $\pi$ is itself estimated by $p = \frac{n_a}{n}\frac{n_{\overline{b}}}{n}$.
Thus:

$$Pr[Card(X\cap \overline{Y})= k]= C_n^k\left( \frac{n_an_{\overline{b}}}{n^2} \right)^k \left(1-\frac{n_a n_{\overline{b}}}{n^2} \right)^{n-k} $$

The estimated standardized variable $Q(a,~\overline{b})$ then admits as a realization:

$$q(a,\overline{b}) = \frac{n_{a \wedge \overline{b}}-
  \frac{n_a.n_{\overline{b}}}{n}}{\sqrt{\frac{n_a.n_{\overline{b}}}{n}(1-\frac{n_a n_{\overline{b}}}{n^2})} }$$

As before, we obtain the estimated intensity of empirical implication:
$$\varphi(a,b)=1-Pr[Q(a,\overline{b})\leq q(a,\overline{b})] = 1 - \sum _{k=0}^{n_{a \wedge \overline{b}}} C_n^k\left (\frac{n_an_{\overline{b}}}{n^2}\right )^k\left (1-\frac{n_an_{\overline{b}}}{n^2}\right )^{n-k}$$


The probability law of $Q(a,\overline{b})$ can be approximated by the standard Laplace-Gauss law $N(0,1)$. Generally, the intensity calculated in the Poisson model is more ``severe'' than the intensity derived from the binomial model, in the sense that $\varphi(a,b)_{Poisson} \leq \varphi(a,b)_{Binomial}$.

\remark We can note that the implication index is null if and only if the two variables $a$ and $b$ are independent. Indeed, we have
$$ q(a,\overline{b}) = \frac{n_{a \wedge \overline{b}}-
  \frac{n_a.n_{\overline{b}}}{n}}{\sqrt{\frac{n_a.n_{\overline{b}}}{n}(1-\frac{n_a n_{\overline{b}}}{n^2})} } =0 \iff n_{a \wedge \overline{b}}- \frac{n_a.n_{\overline{b}}}{n}=0$$

$$q(a,\overline{b}) =0 \iff n_{a \wedge \overline{b}}=\frac{n_a.n_{\overline{b}}}{n} \iff \frac{n_{a \wedge \overline{b}}}{n}=\frac{n_a}{n}\frac{n_{\overline{b}}}{n}$$

This last relationship reflects the property of statistical independence.

\subsection{Hypergeometric model}
Let us briefly recall the third model, proposed in \cite{Lermana} and \cite{Grasd}. We repeat the same approach: $A$ and $B$ are the parts of $E$ representing the individuals satisfying $a$ and $b$ respectively, with cardinals $card (A)=n_a$ and $card (B)=n_b$.
Then let us consider two independent random parts $X$ and $Y$ such that $card (X)=n_a$ and $card (Y)=n_b$. The random variable $Card(A \cap \overline{Y})$ represents the random number of elements of $E$ which, being in $A$, are not in $Y$. This variable follows a hypergeometric law and we have, for all $k \leq n_a$:

$$Pr[Card(A \cap \overline{Y})=k]=\frac{C_{n_a}^k C_{n-n_a}^{n-n_b-k}}{C_n^{n-n_b}} =\frac{n_a!n_{\overline{a}}! n_b!n_{\overline{b}}! }{k!n!(n_a-k)!(n_{\overline{b}}-k)! (n_b-n_a+k)! }$$

This probability can also be written

$$\frac{C_{n-n_b}^k C_{n_b}^{n_a-k}}{C_n^{n_a}} = Pr[Card(X \cap \overline{B})=k]$$

This shows, by exchanging the roles of $a$ and $b$, that the empirical implication index $Q(a,\overline{b})$ corresponding to the quasi-rule $a \Rightarrow b$ is the same as the one corresponding to its reciprocal, i.e. $Q(b,\overline{a})$. We thus obtain the same intensity for the quasi-rule $a \Rightarrow b$ and for the reciprocal quasi-rule $b \Rightarrow a$.

\subsection{Choice of models to evaluate the intensity of implication}
If binomial modelling remains compatible with the semantics of implication, a non-symmetric binary relation, the same cannot be said for hypergeometric modelling, since it does not distinguish the quality of a quasi-rule from that of its reciprocal and therefore has little pragmatic value.
Consequently, we will only retain the Poisson model and the binomial model as models adapted to the semantics of implication between binary variables.


The legitimate coexistence of three different models for our problem of measuring the quality of a quasi-rule is not inconsistent: it reflects the way in which the drawing of transactions is taken into account, one by one (Poisson's law) or in grouped sets (binomial or hypergeometric law). In addition, we know that when the total number of transactions becomes very large, all three models converge to the same Gaussian model.
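To make the binomial model concrete, here is a minimal sketch (with hypothetical counts, not taken from the text) of the binomial intensity of implication $\varphi(a,b)$ defined above:

```python
from math import comb

def binomial_intensity(n: int, n_a: int, n_b: int, n_a_notb: int) -> float:
    """Binomial intensity of implication: 1 - Pr[N <= n_a_notb],
    where N ~ Binomial(n, p) and p = (n_a / n) * (n_notb / n).

    n:        total number of transactions
    n_a, n_b: numbers of transactions satisfying a and b
    n_a_notb: observed number of counter-examples (a and not b)
    """
    p = (n_a / n) * ((n - n_b) / n)  # estimated counter-example probability
    cdf = sum(comb(n, k) * p**k * (1 - p)**(n - k)
              for k in range(n_a_notb + 1))
    return 1 - cdf

# Hypothetical counts: 100 transactions, 40 satisfy a, 60 satisfy b,
# only 4 counter-examples where 16 would be expected under independence,
# so the intensity is close to 1.
print(round(binomial_intensity(100, 40, 60, 4), 3))
```

The fewer the observed counter-examples relative to the number expected under independence, the closer the intensity is to $1$; at the expected number it falls near $0.5$, the neutrality value used in the transitive-closure convention.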
In~\cite{Lallich}, we find, as a generalization, a parameterization of the three indices obtained by these models, which allows us to evaluate the interest of the rules obtained by comparing them to a given threshold.

\section{Annex 2: Modelling of implication integrating confidence and surprise}

Recently, in~\cite{Grasab}, we have brought together two statistical concepts that we believe are internal to the implicative relationship between two variables $a$ and $b$:
\begin{itemize}
\item on the one hand, the intensity of implication $\varphi(a,b)$, measuring the surprise or astonishment at the low number of counter-examples to the implication between these variables;
\item on the other hand, the confidence $C(b \mid a)$, measuring the conditional frequency of $b$ knowing $a$, which is involved in most of the other implication indices, as we saw in §2.5.4.
\end{itemize}

So, we claim, paraphrasing G. Vergnaud~\cite{Vergnaudd} speaking about aesthetics, that there is no data analysis without {\bf confidence} (psychological level). But there is also no data analysis without {\bf surprise}\footnote{This is also what René Thom says in~\cite{Thoma}, p. 130 (translated into English): ``...the problem is not to describe reality, the problem is much more to identify in it what makes sense to us, what is surprising in all the facts. If the facts do not surprise us, they do not bring any new element to the understanding of the universe: we might as well ignore them'', and further on: ``... which is not possible if we do not already have a theory''.} (statistical level), nor without {\bf scale correction} (pragmatic level). The two concepts (confidence and intensity of implication) therefore respond to relatively distinct but not contradictory principles: confidence is based on the subordination of variable $b$ to variable $a$, while the intensity of implication is based on the counter-examples to the relation of subjection of $b$ by $a$.
It is demonstrated in~\cite{Grasab} that, for any $\alpha$, the ratio

$$ \frac{Pr[C(b\mid a)\geq \alpha]}{Pr[\varphi(a,b)\geq \alpha]}~\mbox{is close to}~ \frac{Pr[C(b \mid a) \geq \alpha]}{1-\alpha}$$


Under these conditions, this ratio is a good indicator of the balance between confidence and intensity of implication: when greater than 1, confidence is better than intensity; when less than 1, intensity is stronger. Further research could be based on this indicator.

Finally, as we did for the entropic intensity, we take the contrapositive into account by associating the two conditional frequencies: that of $b$ knowing $a$, i.e. $C_1(a,b)$ (for the direct implication $a \Rightarrow b$), and that of $\neg a$ knowing $\neg b$, i.e. $C_2(a,b)$ (for the contrapositive implication $\neg b \Rightarrow \neg a$). Finally, we choose the following formula to define a new measure of implication that we call {\bf implifiance} in French (implication + confidence):

$$ \phi(a,b)=\varphi(a,b)\cdot\left [ C_1(a,b)\cdot C_2(a,b) \right ]^{\frac{1}{4}}$$

For example, if we extract a rule whose implifiance is equal to $0.95$, its intensity of implication is at least equal to $0.95$ and each of the confidences $C_1$ and $C_2$ is at least equal to $0.81$. If the implifiance is equal to $0.90$, the respective minima are $0.90$ and $0.66$, which preserves the plausibility of the rule.

The following two figures show the respective variations of the intensity of implication, the entropic intensity and the implifiance (in ordinates) as a function of the number of counter-examples, in the cases $n=100$ and $n=1000$ (respectively Figures~\ref{chap2fig9} and~\ref{chap2fig10}).
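The implifiance formula above combines three quantities that are all computable from the contingency counts, except for the intensity $\varphi(a,b)$, which comes from one of the probabilistic models of Annex 1. The following sketch (hypothetical counts, with $\varphi$ supplied as an input) illustrates the combination:

```python
def implifiance(phi: float, n: int, n_a: int, n_b: int, n_ab: int) -> float:
    """Implifiance phi(a,b) * [C1 * C2]^(1/4), as defined above.

    phi:  intensity of implication of a => b (computed separately)
    C1 = n_ab / n_a              (confidence of a => b)
    C2 = n_notanotb / n_notb     (confidence of the contrapositive not b => not a)
    """
    c1 = n_ab / n_a
    # individuals satisfying neither a nor b: n - n_a - n_b + n_ab
    c2 = (n - n_a - n_b + n_ab) / (n - n_b)
    return phi * (c1 * c2) ** 0.25

# Hypothetical counts: n = 100, n_a = 40, n_b = 60, n_ab = 36
# (4 counter-examples), with an assumed intensity phi = 0.99.
print(round(implifiance(0.99, 100, 40, 60, 36), 3))
```

Because the geometric mean of the confidences is raised to the power $1/4$, an implifiance of $0.95$ indeed forces $C_1 C_2 \geq 0.95^4 \approx 0.81$, matching the minima quoted in the text.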
\begin{figure}[htbp]
  \centering
\includegraphics[scale=1.3]{chap2fig9.png}
\caption{Example of implifiance with $n=100$.}

\label{chap2fig9}
\end{figure}

\begin{figure}[htbp]
  \centering
\includegraphics[scale=1.3]{chap2fig10.png}
\caption{Example of implifiance with $n=1000$.}

\label{chap2fig10}
\end{figure}


\section{Annex 3: SIA and Hempel's paradox}

If we look at the SIA from the point of view of Knowledge Extraction, we find the main objective of the inductive establishment of rules and quasi-rules between variables $a$ and $b$ observed through instances $x$ of a set $E$ of objects or subjects. A strict rule (or theorem in this case) will be expressed in a symbolic form: $\forall x, (a(x)\Rightarrow b(x))$. A quasi-rule will admit counter-examples, i.e. the following statement will be observed: $\exists x, (a(x)\wedge \overline{b(x)})$.


The purpose of the SIA is to provide a measure of such rules in order to estimate their quality when the frequency of the last statement above is low.
First, within the framework of the SIA, a quality index is constructed in order, like other indices, to provide a probabilistic answer to this problem.
But in seeking, among the rules, those that would express a causality, a cause-and-effect relationship, or at least a causal tendency, it seemed absolutely necessary to us, as we said in point 4, to support the satisfaction of the direct rule by a measure of its contrapositive: $\forall x, (\overline{b(x)} \Rightarrow \overline{a(x)})$.
Indeed, while statistically, whether with confidence measured by the conditional frequency or with the intensity of implication, the truth of a strict rule is also obtained with its contrapositive, this is no longer necessarily the case with a quasi-rule.
We have also sought to construct, in a new and original way, a measure that makes it possible to overcome Hempel's paradox~\cite{Hempel}, in order to obtain a measure that confirms the satisfaction of induction in terms of causality.


It should be recalled that, according to Carl G. Hempel, in strict logic, this paradox is linked to the irrelevance of the contrapositive with respect to induction when empirical non-satisfaction (de facto) of premise $a$ is observed.
It is the consequence of the application of Hempel's third principle: ``If an observed object $x$ does not satisfy the antecedent (i.e. $a(x) = false$), it does not count, or it is irrelevant, with respect to the conditional (= the direct proposition)''.
In other words, the confirmation of the contrapositive provides nothing as to the direct version of the proposition, although it is logically equivalent to it.
For example, it is not the confirmatory observation of the contrapositive of ``All crows are black'' by a red cat (i.e. not black) that confirms the validity of ``All crows are black''. Nor, for that matter, does continuing to observe other non-black objects. For, to confirm this statement and thus validate the induction, we would have to review all the non-black objects, which may be infinite in number.

In other words, according to Hempel, in the implication truth table, the cases where $a(x)$ is false are uninteresting for induction; only the lines [$a(x)=true$ and $b(x)=true$], which confirm the rule, and [$a(x)=true$ and $b(x)=false$], which invalidate it, are retained.
\\

\underline{However, in SIA, this paradox does not hold, for two reasons:}


\begin{enumerate}
\item the objects $x$ belong to the same reference set $E$, finite or infinite (countable or even continuous), in which all $x$ are likely, with relevance, to satisfy or not to satisfy the variables at stake.
That is, by assigning them a value (a truth value or a numerical value), the direct proposition and/or its contrapositive are also evaluable (for example, the proposition $a \Rightarrow b$ is true even if $a(x)$ is false while $b(x)$ is true);
\item since we are most often dealing with quasi-rules, the equivalence between a proposition and its contrapositive no longer holds, and it is on the basis of the combination of the respective evaluated qualities of these statements that we induce, or not, a causal character. Moreover, if the rule is strict, the logical equivalence with its contrapositive is strict and the contrapositive rule is satisfied at the same time.
\end{enumerate}