X-Git-Url: https://bilbo.iut-bm.univ-fcomte.fr/and/gitweb/book_chic.git/blobdiff_plain/c1aea4dc782230b123634e9c6f06bfc9d39b973a..c9f45698e9a535650b20a516d197dca86ed70d90:/chapter2.tex?ds=sidebyside diff --git a/chapter2.tex b/chapter2.tex index c99fe46..ed3ec5b 100644 --- a/chapter2.tex +++ b/chapter2.tex @@ -29,7 +29,7 @@ gradually being supplemented by the processing of modal, frequency and, recently, interval and fuzzy variables. } -\section{Preamble} +\section*{Preamble} Human operative knowledge is mainly composed of two components: that of facts and that of rules between facts or between rules themselves. @@ -107,4 +107,835 @@ implication index for binary data~\cite{Lermana} or \cite{Lallich}, on the other hand, this notion is not extended to other types of variables, to extraction and representation according to a rule graph or a hierarchy of meta-rules; structures aiming at access to the -meaning of a whole not reduced to the sum of its parts \footnote{ICI }, i.e. operating as a complex non-linear system. For example, it is well known, through usage, that the meaning of a sentence does not completely depend on the meaning of each of the words in it (see the previous chapter, point 4). +meaning of a whole not reduced to the sum of its +parts~\cite{Seve}\footnote{This is what the philosopher L. Sève + emphasizes :"... in the non-additive, non-linear passage of the + parts to the whole, there are properties that are in no way + precontained in the parts and which cannot therefore be explained by + them" }, i.e. operating as a complex non-linear system. +For example, it is well known, through usage, that the meaning of a +sentence does not completely depend on the meaning of each of the +words in it (see the previous chapter, point 4). + +Let us return to what we believe is fertile in the approach we are +developing. 
+It would seem that, in the literature, the notion of implication index +is also not extended to the search for subjects and categories of +subjects responsible for associations. +Nor that this responsibility is quantified and thus leads to a +reciprocal structuring of all subjects, conditioned by their +relationships to variables. +We propose these extensions here after recalling the founding +paradigm. + + +\section{Implication intensity in the binary case} + +\subsection{Fundamental and founding situation} + +A set of objects or subjects E is crossed with variables +(characters, criteria, successes,...) which are interrogated as +follows: "to what extent can we consider that instantiating variable\footnote{Throughout the book, the word "variable" refers to both an isolated variable in premise (example: "to be blonde") or a conjunction of isolated variables (example: "to be blonde and to be under 30 years old and to live in Paris")} $a$ +implies instantiating variable $b$? +In other words, do the subjects tend to be $b$ if we know that they are +$a$?". +In natural, human or life sciences situations, where theorems (if $a$ +then $b$) in the deductive sense of the term cannot be established +because of the exceptions that taint them, it is important for the +researcher and the practitioner to "mine into his data" in order to +identify sufficiently reliable rules (kinds of "partial theorems", +inductions) to be able to conjecture\footnote{"The exception confirms the rule", as the popular saying goes, in the sense that there would be no exceptions if there were no rule} a possible causal relationship, +a genesis, to describe, structure a population and make the assumption +of a certain stability for descriptive and, if possible, predictive +purposes. +But this excavation requires the development of methods to guide it +and to free it from trial and error and empiricism. + + +\subsection{Mathematization} + +To do this, following the example of the I.C. 
Lerman similarity
+measurement method \cite{Lerman,Lermanb}, following the classic
+approach in non-parametric tests (e.g. Fisher, Wilcoxon, etc.), we
+define~\cite{Grasb,Grasf} the confirmatory quality measure of the
+implicative relationship $a \Rightarrow b$ from the implausibility of
+the occurrence in the data of the number of cases that invalidate it,
+i.e. for which $a$ is verified without $b$ being verified. This
+amounts to comparing the observed number of counter-examples with the
+theoretical number that would be expected if only chance were at
+work\footnote{"...[in agreement with
+  Jung] if the frequency of coincidences does not significantly
+  exceed the probability that they can be calculated by attributing
+  them solely by chance to the exclusion of hidden causal
+  relationships, we certainly have no reason to suppose the existence
+  of such relationships.", H. Atlan~\cite{Atlana}}.
+When analyzing data, it is this gap that we take into account, rather
+than a statement of rejection or acceptance of a null hypothesis.
+This measure is relative to the number of data verifying $a$ but not
+$b$, precisely the circumstance in which the implication is put at
+fault.
+It quantifies the expert's "astonishment" at the unlikely small number
+of counter-examples in view of the supposed independence between the
+variables and the numbers involved.
+
+Let us be clear. A finite set $V$ of $v$ variables is given: $a$, $b$,
+$c$,...
+In the classical paradigmatic situation initially retained, these are
+the performances (success-failure) on the items of a questionnaire.
+To a finite set $E$ of $n$ subjects $x$, functions of the type $x
+\rightarrow a(x)$ are associated, by abuse of notation, where $a(x) =
+1$ (or $a(x) = true$) if $x$ satisfies or has the character $a$, and
+$0$ (or $a(x) = false$) otherwise.
+In artificial intelligence, we will say that $x$ is an example or an
+instance for $a$ if $a(x) = 1$ and a counter-example if not.
+
+
+The $a \Rightarrow b$ rule is logically true if for any $x$ in the
+sample, $b(x)$ is null only if $a(x)$ is also null; in other words, if
+the set $A$ of the $x$ for which $a(x)=1$ is contained in the set $B$
+of the $x$ for which $b(x)=1$.
+However, this strict inclusion is only exceptionally observed in
+practice.
+In the case of a knowledge questionnaire, we could indeed observe a
+few rare students passing an item $a$ and not passing item $b$,
+without contesting the tendency to pass item $b$ when one has passed
+item $a$.
+With regard to the cardinals of $E$ (of size $n$), but also of $A$ (or
+$n_a$) and $B$ (or $n_b$), it is therefore the "weight" of the
+counter-examples (i.e. $card(A\cap \overline{B})$) that must be taken
+into account in order to decide statistically whether or not to keep
+the quasi-implication or quasi-rule $a \Rightarrow b$. Thus, it is
+from the dialectic of examples and counter-examples that the rule
+appears as the overcoming of contradiction.
+
+\subsection{Formalization}
+
+To formalize this quasi-rule, we consider any two parts $X$ and $Y$ of
+$E$, chosen randomly and independently (absence of an a priori link
+between these two parts) and of the same respective cardinals as $A$
+and $B$. Let $\overline{Y}$ and $\overline{B}$ be the respective
+complements of $Y$ and $B$ in $E$, both of cardinal
+$n_{\overline{b}}= n-n_b$.
+
+We will then say:
+
+\definition $a \Rightarrow b$ is acceptable at confidence level
+$1-\alpha$ if and only if
+$$Pr[Card(X\cap \overline{Y})\leq card(A\cap \overline{B})]\leq \alpha$$
+
+\begin{figure}[htbp]
+  \centering
+\includegraphics[scale=0.34]{chap2fig1.png}
+  \caption{The dark grey parts correspond to the counter-examples of the
+    implication $a \Rightarrow b$}
+\label{chap2fig1}
+\end{figure}
+
+It is established \cite{Lermanb} that, for a certain drawing process,
+the random variable $Card(X\cap \overline{Y})$ follows the Poisson law
+of parameter $\frac{n_a n_{\overline{b}}}{n}$.
+We achieve this same result by proceeding differently in the following
+way:
+
+Let $X$ (resp. $Y$) denote the random subset of binary transactions in
+which $a$ (resp. $b$) would appear, independently, with the frequency
+$\frac{n_a}{n}$ (resp. $\frac{n_b}{n}$).
+To specify how the transactions described by the variables $a$ and
+$b$, i.e. the sets $A$ and $B$, are extracted, the following
+semantically permissible assumptions are made regarding the
+observation of the event $[a=1~ and~ b=0]$. $(A\cap
+\overline{B})$\footnote{We then note $\overline{v}$ the variable
+  negation of $v$ (or $not~ v$) and $\overline{P}$ the complementary
+  part of the part $P$ of $E$.} is the subset of transactions,
+counter-examples of the implication $a \Rightarrow b$:
+
+Assumptions:
+\begin{itemize}
+\item h1: the waiting times of an event $[a~ and~ not~ b]$ are independent
+  random variables;
+\item h2: the law of the number of events occurring in the time
+  interval $[t,~ t+T[$ depends only on $T$;
+\item h3: two such events cannot occur simultaneously.
+\end{itemize}
+
+It is then demonstrated (for example in~\cite{Saporta}) that the
+number of events occurring during a period of fixed duration $n$
+follows a Poisson law of parameter $c.n$, where $c$ is called the
+rate of the process of appearances per unit of time.
+
+
+However, for each transaction assumed to be random, the event $[a=1]$
+has as probability the frequency $\frac{n_a}{n}$, and the event
+$[b=0]$ has as probability the frequency
+$\frac{n_{\overline{b}}}{n}$; therefore the joint event $[a=1~
+  and~ b=0]$ has a probability estimated by the frequency
+$\frac{n_a}{n}. \frac{n_{\overline{b}}}{n}$ under the hypothesis of
+absence of an a priori link between $a$ and $b$ (independence).
+
+We can then estimate the rate $c$ of this event by $\frac{n_a}{n}. \frac{n_{\overline{b}}}{n}$.
+
+Thus for a duration of time $n$, the occurrences of the event $[a~ and~ not~b]$ follow a Poisson law of parameter:
+$$\lambda = \frac{n_a.n_{\overline{b}}}{n}$$
+
+As a result, $Pr[Card(X\cap \overline{Y})= s]= e^{-\lambda}\frac{\lambda^s}{s!}$
+
+Consequently, the probability that chance alone would lead, under the
+assumption of the absence of an a priori link between $a$ and $b$, to
+no more counter-examples than those observed is:
+
+$$Pr[Card(X\cap \overline{Y})\leq card(A\cap \overline{B})] =
+\sum^{card(A\cap \overline{B})}_{s=0} e^{-\lambda}\frac{\lambda^s}{s!} $$
+
+ But other legitimate drawing processes lead to a binomial law, or
+ even a hypergeometric law (itself not semantically adapted to the
+ situation because of its symmetry). Under suitable convergence
+ conditions, these two laws finally reduce to the Poisson law
+ above (see the Annex to this chapter).
+
+If $n_{\overline{b}}\neq 0$, we reduce and center this Poisson variable
+into the variable:
+
+$$Q(a,\overline{b})= \frac{Card(X \cap \overline{Y}) - \frac{n_a.n_{\overline{b}}}{n}}{\sqrt{\frac{n_a.n_{\overline{b}}}{n}}} $$
+
+In the experimental realization, the observed value of
+$Q(a,\overline{b})$ is $q(a,\overline{b})$.
+It estimates the gap between the contingency $card(A\cap
+\overline{B})$ and the value it would have taken if there had been
+independence between $a$ and $b$.
+
+\definition
+\begin{equation} q(a,\overline{b}) = \frac{n_{a \wedge \overline{b}}-
+    \frac{n_a.n_{\overline{b}}}{n}}{\sqrt{\frac{n_a.n_{\overline{b}}}{n}}}
+  \label{eq2.1}
+\end{equation}
+is called the implication index, the number used as an indicator of
+the non-implication of $a$ to $b$.
+In cases where the approximation is properly legitimized (for example
+$\frac{n_a.n_{\overline{b}}}{n}\geq 4$), the variable
+$Q(a,\overline{b})$ approximately follows the reduced centered normal
+distribution.
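The index (\ref{eq2.1}) and the Poisson tail above can be computed directly. The sketch below uses the illustrative counts $n=100$, $n_a=20$, $n_b=40$ and $4$ counter-examples, which reappear in the numerical example later in the chapter:

```python
from math import exp, factorial, sqrt

def implication_index(n, n_a, n_b, n_counter):
    """Implication index q(a, not-b) of equation (2.1)."""
    lam = n_a * (n - n_b) / n          # Poisson parameter: n_a * n_not_b / n
    return (n_counter - lam) / sqrt(lam)

def poisson_tail(n, n_a, n_b, n_counter):
    """Pr[Card(X inter not-Y) <= observed counter-examples] under independence."""
    lam = n_a * (n - n_b) / n
    return sum(exp(-lam) * lam ** s / factorial(s) for s in range(n_counter + 1))

q = implication_index(100, 20, 40, 4)   # ≈ -2.309: far below independence
p = poisson_tail(100, 20, 40, 4)        # ≈ 0.0076: so few counter-examples are implausible by chance
```

The small tail probability is exactly the "astonishment" quantified above: under independence ($\lambda = 12$), observing at most 4 counter-examples has probability below $1\%$.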
The intensity of implication, measuring the quality of
+$a\Rightarrow b$, for $n_a\leq n_b$ and $n_b \neq n$, is then defined
+from the index $q(a,\overline{b})$ by:
+
+\definition
+The implication intensity that measures the inductive quality of $a$
+over $b$ is:
+$$\varphi(a,b)=1-Pr[Q(a,\overline{b})\leq q(a,\overline{b})] =
+\frac{1}{\sqrt{2 \pi}} \int^{\infty}_{ q(a,\overline{b})}
+e^{-\frac{t^2}{2}} dt,~ if~ n_b \neq n$$
+$$\varphi(a,b)=0,~ otherwise$$
+As a result, the definition of statistical implication becomes:
+\definition
+Implication $a\Rightarrow b$ is admissible at confidence level
+$1-\alpha $ if and only if:
+$$\varphi(a,b)\geq 1-\alpha$$
+
+
+It should be recalled that this modeling of quasi-implication measures
+the astonishment of observing so few counter-examples in view of the
+number of instances of the implication.
+It is a measure of the inductive and informative quality of the
+implication. Therefore, if the rule is trivial, as in the case where
+$B$ is very large or coincides with $E$, this astonishment becomes
+small.
+We also demonstrate~\cite{Grasf} that this triviality results in a
+very low or even zero intensity of implication: if, $n_a$ being fixed
+and $A$ being included in $B$, $n_b$ tends towards $n$ ($B$ "grows"
+towards $E$), then $\varphi(a,b)$ tends towards $0$. We therefore
+define, by "continuity": $\varphi(a,b) = 0$ if $n_b = n$. Similarly, if
+$A\subset B$, $\varphi(a,b)$ may be less than $1$ in the case where
+the inductive confidence, measured by statistical surprise, is
+insufficient.
+
+{\bf \remark Total correlation, partial correlation}
+
+
+We take here the notion of correlation in a more general sense than
+that used in the domain that develops the linear correlation
+coefficient (linear link measure) or the correlation ratio (functional
+link measure).
+In our perspective, there is a total (or partial) correlation between
+two variables $a$ and $b$ when the respective events they determine
+occur (or almost occur) at the same time, as well as their opposites.
+However, we know from numerical counter-examples that correlation and
+implication do not reduce to each other, that there can be
+correlation without implication and vice versa (\cite{Grasf} and
+below).
+If we compare the implication coefficient and the linear correlation
+coefficient algebraically, it is clear that the two concepts do not
+coincide and therefore do not provide the same
+information\footnote{"More serious is the logical error inferred from
+  a correlation found to the existence of a causality" writes Albert
+  Jacquard in~\cite{Jacquard}, p.159. }.
+
+The quasi-implication of non-symmetric index $q(a,\overline{b})$ does
+not coincide with the correlation coefficient $\rho(a, b)$, which is
+symmetric and which reflects the relationship between the variables
+$a$ and $b$. Indeed, we show~\cite{Grasf} that if $q(a,\overline{b})
+\neq 0$ then
+$$\frac{\rho(a,b)}{q(a,\overline{b})} = -\sqrt{\frac{n}{n_b
+      n_{\overline{a}}}}$$
+With the correlation considered from the point of view of linear
+correlation, even if correlation and implication are rather in the
+same direction, the orientation of the relationship between two
+variables is not transparent because it is symmetrical, which is not
+the bias taken in the SIA.
+From a statistical relationship given by the correlation, two opposing
+empirical propositions can be deduced.
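This relationship between $\rho$ and $q$ can be checked numerically on a $2\times 2$ contingency table. The sketch below uses the counts of the second table of the example that follows (the choice of table is ours):

```python
from math import sqrt

def rho_and_q(n, n_a, n_b, n_ab):
    """Correlation rho(a,b) and implication index q(a, not-b) for a 2x2 table."""
    n_counter = n_a - n_ab                 # n_{a and not-b}
    n_not_a, n_not_b = n - n_a, n - n_b
    rho = (n * n_ab - n_a * n_b) / sqrt(n_a * n_b * n_not_a * n_not_b)
    q = (n_counter - n_a * n_not_b / n) / sqrt(n_a * n_not_b / n)
    return rho, q

# n=200, n_a=100, n_b=146, with 94 joint occurrences of a and b.
rho, q = rho_and_q(200, 100, 146, 94)   # rho ≈ 0.473, q ≈ -4.041
ratio = rho / q                         # equals -sqrt(n / (n_b * n_not_a))
```

The ratio is a negative constant fixed by the margins alone, which is exactly why the symmetric $\rho$ cannot carry the oriented information that $q$ does.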
+
+The following dual numerical situation clearly illustrates this:
+
+
+\begin{table}[htp]
+\center
+\begin{tabular}{|l|c|c|c|}\hline
+\diagbox[width=4em]{$a_1$}{$b_1$}&
+ 1 & 0 & margin\\ \hline
+  1 & 96 & 4& 100 \\ \hline
+  0 & 56 & 44& 100 \\ \hline
+ margin & 152 & 48& 200 \\ \hline
+\end{tabular} ~ ~ ~ ~ ~ ~ ~ \begin{tabular}{|l|c|c|c|}\hline
+\diagbox[width=4em]{$a_2$}{$b_2$}&
+ 1 & 0 & margin\\ \hline
+  1 & 94 & 6& 100 \\ \hline
+  0 & 52 & 48& 100 \\ \hline
+ margin & 146 & 54& 200 \\ \hline
+\end{tabular}
+
+\caption{Numeric example of difference between implication and
+  correlation}
+\label{chap2tab1}
+\end{table}
+
+In Table~\ref{chap2tab1}, the following correlations and implications
+can be computed:\\
+Correlation $\rho(a_1,b_1)=0.468$, Implication
+$q(a_1,\overline{b_1})=-4.082$\\
+Correlation $\rho(a_2,b_2)=0.473$, Implication $q(a_2,\overline{b_2})=-4.041$
+
+
+Thus, we observe that, on the one hand, $a_1$ and $b_1$ are less
+correlated than $a_2$ and $b_2$ while, on the other hand, the
+implication intensity of $a_1$ over $b_1$ is higher than that of $a_2$
+over $b_2$ since $q(a_1,\overline{b_1}) < q(a_2,\overline{b_2})$.
+
+\subsection{Variations of the implication index}
+
+The index $q(a,\overline{b})$ is a function of the four occurrence
+parameters $n$, $n_a$, $n_b$ and $n_{a \wedge \overline{b}}$. For $n$
+and $n_a$ fixed, its variation at first order is given by the total
+differential:
+
+\begin{equation}
+  \Delta q = \frac{\partial q}{\partial n_b} \Delta n_b +
+  \frac{\partial q}{\partial n_{a \wedge \overline{b}}} \Delta n_{a
+    \wedge \overline{b}} + o(\Delta n_b, \Delta n_{a \wedge
+    \overline{b}})
+  \label{eq2.2}
+\end{equation}
+
+where the partial derivatives are:
+
+\begin{equation}
+  \frac{\partial q}{\partial n_b} = \frac{n_a}{2n}\,
+  \frac{n_{a \wedge \overline{b}} + \frac{n_a
+      n_{\overline{b}}}{n}}{\left(\frac{n_a
+      n_{\overline{b}}}{n}\right)^{\frac{3}{2}}}
+  > 0
+  \label{eq2.3}
+\end{equation}
+
+
+\begin{equation}
+  \frac{\partial
+    q}{\partial n_{a \wedge
+      \overline{b}}} = \frac{1}{\sqrt{\frac{n_a n_{\overline{b}}}{n}}}
+  = \frac{1}{\sqrt{\frac{n_a (n-n_b)}{n}}} > 0
+  \label{eq2.4}
+\end{equation}
+
+
+Thus, if the increases $\Delta n_b$ and $\Delta n_{a \wedge
+  \overline{b}}$ are positive, the increase of $q(a,\overline{b})$ is
+also positive. This is interpreted as follows: if the number of
+occurrences of $b$ and the number of counter-examples of the
+implication increase, then the intensity of implication decreases, for
+$n$ and $n_a$ constant. In other words, this intensity of implication
+is maximum at the observed values $n_b$ and $ n_{a \wedge
+  \overline{b}}$ and minimum at the values $n_b+\Delta n_b$ and $n_{a \wedge
+  \overline{b}}+ \Delta n_{a \wedge
+  \overline{b}}$.
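The partial derivatives (\ref{eq2.3}) and (\ref{eq2.4}) can be checked against a finite difference of $q$; the sketch below assumes the analytic forms stated above and uses the occurrence values of the numerical example below:

```python
from math import sqrt

def q(n, n_a, n_b, n_counter):
    """Implication index of equation (2.1)."""
    lam = n_a * (n - n_b) / n
    return (n_counter - lam) / sqrt(lam)

def dq_dnb(n, n_a, n_b, n_counter):
    """Partial derivative of q with respect to n_b: always positive."""
    lam = n_a * (n - n_b) / n
    return (n_a / n) * (n_counter + lam) / (2 * lam ** 1.5)

def dq_dcounter(n, n_a, n_b):
    """Partial derivative of q with respect to n_{a and not-b}: always positive."""
    lam = n_a * (n - n_b) / n
    return 1 / sqrt(lam)

# Centered finite difference in n_b at n=100, n_a=20, n_b=40, 4 counter-examples.
h = 1e-6
fd = (q(100, 20, 40 + h, 4) - q(100, 20, 40 - h, 4)) / (2 * h)
```

At this point the two derivatives evaluate to approximately $0.0385$ and $0.2887$, the values used in the numerical example.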
+
+If we examine the case where $n_a$ varies, we obtain the partial
+derivative of $q$ with respect to $n_a$, which is:
+
+\begin{equation}
+  \frac{\partial q}{\partial n_a} = -\frac{ n_{a \wedge \overline{b}}}{2
+    \sqrt{\frac{n_{\overline{b}}}{n}}}
+  \left(\frac{1}{n_a}\right)^{\frac{3}{2}}
+  -\frac{1}{2}\sqrt{\frac{n_{\overline{b}}}{n\, n_a}}<0
+  \label{eq2.5}
+  \end{equation}
+
+Thus, for variations of $n_a$ on $[0,~ n_b]$, the implication index function is always decreasing with respect to $n_a$ and is therefore minimum for $n_a= n_b$. As a result, the intensity of implication is increasing and maximum for $n_a= n_b$.
+
+Note the partial derivative of $q$ with respect to $n$:
+
+$$\frac{\partial q}{\partial n} = -\frac{n_b}{2\, n\, n_{\overline{b}}}\,
+\frac{ n_{a \wedge \overline{b}}+\frac{n_a
+    n_{\overline{b}}}{n} }{\sqrt{\frac{n_a n_{\overline{b}}}{n}}} < 0$$
+
+Consequently, if the other three parameters are constant, the
+implication index is a decreasing function of $n$.
+The quality of implication is therefore all the better as the sample
+grows, a specific property of the SIA compared to other indicators
+used in the literature~\cite{Grasab}.
+This property is in accordance with statistical and semantic
+expectations regarding the credit given to the frequency of
+observations.
+Since the partial derivatives of $q$ (at least one of them) are
+non-linear according to the variable parameters involved, we are
+dealing with a non-linear dynamic system\footnote{"Non-linear systems
+  are systems that are known to be deterministic but for which, in
+  general, nothing can be predicted because calculations cannot be
+  made"~\cite{Ekeland} p. 265.} with all the epistemological
+consequences that we will consider elsewhere.
+
+
+
+\subsection{Numerical example}
+In a first experiment, we observe the occurrences: $n = 100$, $n_a =
+20$, $n_b = 40$ (hence $n_{\overline{b}}=60$, $ n_{a \wedge \overline{b}} = 4$).
+The application of formula (\ref{eq2.1}) gives $q(a,\overline{b}) = -2.309$.
+In a second experiment, $n$ and $n_a$ are unchanged but the occurrences
+of $b$ and of the counter-examples $n_{a \wedge \overline{b}}$ increase by one unit.
+
+At the initial point of the space of the 4 variables, the partial
+derivatives that interest us here (with respect to $n_b$ and $n_{a
+  \wedge \overline{b}}$) have respectively the following values when
+applying formulas (\ref{eq2.3}) and (\ref{eq2.4}): $\frac{\partial
+  q}{\partial n_b} = 0.0385$ and $\frac{\partial q}{\partial n_{a
+    \wedge \overline{b}}} = 0.2887$.
+
+As $\Delta n_b$, $\Delta n_{\overline{b}}$ and $\Delta n_{a
+  \wedge \overline{b}} $ are equal to 1, -1 and 1, $\Delta q$ is
+equal, at first order, to $0.0385 + 0.2887 = 0.3272$, and
+the approximate value of $q$ in the second experiment is $-2.309 +
+0.3272 = -1.982$, using the first order
+development of $q$ (formula (\ref{eq2.2})).
+However, the exact calculation of the new implication index $q$ at the
+point of the second experiment, by the use of (\ref{eq2.1}), is $-1.9795$, a
+value well approximated by the development of $q$.
+
+
+
+\subsection{A first differential relationship of $\varphi$ as a function of $q$}
+Let us consider the intensity of implication $\varphi$ as a function
+of $q(a,\overline{b})$:
+$$\varphi(q)=\frac{1}{\sqrt{2\pi}}\int_q^{\infty}e^{-\frac{t^2}{2}}dt$$
+We can then examine how $\varphi(q)$ varies when $q$ varies in the neighborhood of a given value $q(a,\overline{b})$, knowing how $q$ itself varies according to the 4 parameters that determine it. By differentiating with respect to the lower integration bound, we obtain:
+\begin{equation}
+  \frac{d\varphi}{dq}=-\frac{1}{\sqrt{2\pi}}e^{-\frac{q^2}{2}} < 0
+  \label{eq2.6}
+\end{equation}
+This confirms that the intensity increases when $q$ decreases, and the rate of growth is specified by the formula, which allows us to study the variations of $\varphi$ more precisely. Since the derivative of $\varphi$ with respect to $q$ is always negative, the function $\varphi$ is decreasing in $q$.
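Evaluating (\ref{eq2.6}) at $q = -2.309$ gives the slope used in the numerical example that follows. One caveat, which is our observation rather than the text's: with these small counts ($\lambda = 12$), the exact Poisson tail gives $\varphi \approx 0.992$, while the normal approximation sketched below gives $\varphi \approx 0.989$.

```python
from math import erfc, exp, pi, sqrt

def phi(q):
    """Intensity of implication via the standard normal survival function."""
    return 0.5 * erfc(q / sqrt(2))

def dphi_dq(q):
    """Derivative of phi with respect to q, equation (2.6): always negative."""
    return -exp(-q * q / 2) / sqrt(2 * pi)

slope = dphi_dq(-2.309)                 # ≈ -0.0278
approx = phi(-2.309) + slope * 0.3272   # first-order estimate of phi at q ≈ -1.982
```

The slope matches the value $-0.02775$ quoted in the numerical example below.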
+
+{\bf Numerical example}\\
+Taking the values of the occurrences observed in the two experiments
+mentioned above, we find for $q = -2.309$ that the intensity
+of implication $\varphi(q)$ is equal to 0.992. Applying formula
+(\ref{eq2.6}), the derivative of $\varphi$ with respect to $q$ is
+$-0.02775$; with $\Delta q = 0.3272$, the first-order variation of the
+intensity is $-0.02775 \times 0.3272 \approx -0.009$. The approximate
+first-order intensity is therefore $0.992-0.009$, i.e. $0.983$. The
+actual calculation of this intensity gives, for $q= -1.9795$,
+$\varphi(q) = 0.976$.
+
+
+
+\subsection{Examination of other indices}
+Unlike the core index $q$ and the intensity of implication, which
+measures quality through a probability (see Definition 2.3), the other
+most common indices are intended to be direct measures of quality.
+We will examine their respective sensitivities to changes in the
+parameters used to define these indices.
+We keep the notations adopted in paragraph 2.2 and select indices that
+are recalled in~\cite{Grasm},~\cite{Lencaa} and~\cite{Grast2}.
+
+\subsubsection{The Loevinger Index}
+
+It is an "ancestor" of the indices of
+implication~\cite{Loevinger}. This index, denoted $H(a,b)$, varies from
+1 to $-\infty$. It is defined by: $H(a,b) =1-\frac{n\, n_{a \wedge
+    \overline{b}}}{n_a n_{\overline{b}}}$. Its partial derivative with
+respect to the variable number of counter-examples is therefore:
+$$\frac{\partial H}{\partial n_{a \wedge \overline{b}}}=-\frac{n}{n_a n_{\overline{b}}}$$
+Thus the Loevinger index is always decreasing with $n_{a \wedge
+  \overline{b}}$. If it is "close" to 1, the implication is "almost"
+satisfied. But this index has the disadvantage, not referring to a
+probability scale, of not providing a probability threshold and of being
+invariant under any dilation of $E$, $A$, $B$ and $A \cap \overline{B}$.
+
+
+\subsubsection{The Lift Index}
+
+It is expressed by: $l =\frac{n n_{a \wedge b}}{n_a n_b}$.
+This expression, linear with respect to the examples, can also be
+written to highlight the number of counter-examples:
+$$l =\frac{n (n_a - n_{a \wedge \overline{b}})}{n_a n_b}$$
+To study the sensitivity of $l$ to parameter variations, we use:
+$$\frac{\partial l}{\partial n_{a \wedge \overline{b}} } =
+-\frac{n}{n_a n_b}$$
+Thus, the rate of variation of the Lift index is independent of the
+number of counter-examples:
+it is a constant that depends only on the occurrences of $a$ and $b$. Therefore, $l$ decreases when the number of counter-examples increases, which semantically is acceptable, but the rate of decrease does not depend on the rate of growth of $n_{a \wedge \overline{b}}$.
+
+\subsubsection{Confidence}
+
+This index is the best known and most widely used, thanks to the
+resonance given to it by a widely circulated Anglo-Saxon
+publication~\cite{Agrawal}.
+It is at the origin of several other commonly used indices which are only variants satisfying this or that semantic requirement... Moreover, it is simple and can be interpreted easily and immediately.
+$$c=\frac{n_{a \wedge b}}{n_a} = 1-\frac{n_{a \wedge \overline{b}}}{n_a}$$
+
+The first form, linear with respect to the examples and independent of
+$n_b$, is interpreted as the conditional frequency of the examples of
+$b$ when $a$ is known.
+The sensitivity of this index to variations in the occurrence of
+counter-examples is read through the partial derivative:
+$$\frac{\partial c}{\partial n_{a \wedge \overline{b}} } =
+-\frac{1}{n_a }$$
+
+
+Consequently, confidence increases when $n_{a \wedge \overline{b}}$
+decreases, which is semantically acceptable, but the rate of variation
+is constant, independent of the rate of decrease of this number and of
+the variations of $n$ and $n_b$.
+This property does not seem to satisfy intuition.
+The gradient of $c$ is expressed only in terms of $n_{a \wedge
+  \overline{b}}$ and $n_a$:
+$$\overrightarrow{grad}~ c = \left(\frac{\partial c}{\partial n_a},~
+\frac{\partial c}{\partial n_{a \wedge \overline{b}}}\right) =
+\left(\frac{n_{a \wedge \overline{b}}}{n_a^2},~ -\frac{1}{n_a}\right)$$
This may also appear to be a restriction on the role of the parameters in
+expressing the sensitivity of the index.
+
+\section{Gradient field, implicative field}
+We highlight here the existence of fields generated by the variables
+of the corpus.
+
+\subsection{Existence of a gradient field}
+Just as in our Newtonian physical space, where a gravitational field
+emitted by each material object acts, we can consider that it is the
+same around each variable.
+For example, the variable $a$ generates a scalar field whose value in
+$b$ is maximal and equal to the intensity of implication or to the
+implication index $q(a,\overline{b})$.
+Its action spreads in $V$ according to differential laws, as J.M. Leblond
+says in~\cite{Leblond}, p.242.
+
+Let us consider the space of dimension 4 in which the coordinates of
+a point $M$ are the parameters relative to the binary variables $a$
+and $b$, i.e. ($n$, $n_a$, $n_b$, $n_{a\wedge \overline{b}}$). $q(a,\overline{b})$ is then the realization of a scalar field, as a map from $\mathbb{R}^4$ to $\mathbb{R}$ (after immersion of $\mathbb{N}^4$ in $\mathbb{R}^4$).
+For the gradient vector of $q$, whose components are the partial derivatives of $q$
+with respect to the variables $n$, $n_a$, $n_b$, $n_{a\wedge
+  \overline{b}}$, to define a gradient field - a particular vector
+field that we will also call the implicative field - it must satisfy the
+Schwarz criterion of an exact total differential, i.e.:
+
+$$\frac{\partial}{\partial n_{a\wedge \overline{b}}}\left(
+\frac{\partial q}{\partial n_b} \right) =\frac{\partial}{\partial n_b}\left(
+\frac{\partial q}{\partial n_{a\wedge \overline{b}}} \right) $$
+and the same for the other variables taken in pairs. However, we have,
+through the formulas (\ref{eq2.3}) and (\ref{eq2.4})