A Revisit to Probability - Possibility Consistency Principles
Author: Mamoni Dhar
Journal: International Journal of Intelligent Systems and Applications (IJISA)
Issue: Vol. 5, No. 4, 2013.
In this article, our main intention is to highlight the fact that the probable links between probability and possibility which were established by different authors at different points of time, on the basis of some well known consistency principles, cannot provide the desired result. That is why the paper discusses some prominent works on transformations between probability and possibility and finally aims to suggest a new principle, because none of the existing principles yields a unique transformation. The new consistency principle which is suggested hereby would in turn replace all others that exist in the literature, by providing a reliable estimate of consistency between the two. Furthermore, some properties of the entropy of fuzzy numbers are also presented in this article.
Keywords: Superimposition of Sets, Glivenko-Cantelli's Theorem, Dubois and Prade Definition of a Normal Fuzzy Number, FMF, PDF
Published Online March 2013 in MECS
I. Introduction

Real world problems typically involve processing uncertainty of two distinct types: one type arises from a lack of knowledge relating to concepts which, in the sense of classical logic, may be well defined, while the other type is due to inherent vagueness in the concepts themselves. Traditionally, these are modelled in terms of probability theory and fuzzy set theory respectively.
Possibility theory was first coined by Zadeh as an extension of fuzzy set theory. Possibility theory began with a measure of events which, in contrast to a probability measure, is not additive. The conversion problem between probability and possibility has its roots in the possibility-probability consistency principle of Zadeh [1], which he introduced in the paper founding possibility theory. Possibility theory is a mathematical theory dealing with certain types of uncertainty and is often considered an alternative to probability theory; it is devoted to the handling of incomplete information. The process of transformation from probability to possibility has received attention in the past. This question is philosophically interesting as a part of the debate between probability and fuzzy sets.
The transformation between probability and possibility has been studied by many researchers, but it can be seen that most of these studies examined principles that must be satisfied by transformations and devised equations satisfying them. Later on, Dubois and Prade further contributed to the development of possibility theory. In Zadeh's view, possibility distributions were meant to provide a graded semantics to natural language statements. When information regarding some phenomenon is given in both probabilistic and possibilistic terms, the two descriptions should be in some sense consistent. This suggests the need for probability-possibility consistency principles.
These transformations are useful in some practical problems, such as constructing a fuzzy membership function from statistical data (Krishnapuram [2]), combining probabilities and possibilities in expert systems (Klir [3, 4]) and reducing complexity (Dubois [5]). In other words, the transformation from possibility to probability, or conversely, is useful in decision making, when the experts need precise information to take a decision. A long standing debate has taken place in the literature on the relationship between probability and possibility. From the very beginning, several links have been established, and arguments have been made for and against them from various points of view. In the following section we would like to draw attention to some of the relationships between probability and possibility established on the basis of the most common and well known consistency principles.
The paper is organized as follows: Section II discusses some papers which link probability with possibility. Section III deals with the Randomness-Fuzziness Consistency Principle. Section IV introduces the new definition of the entropy of fuzzy numbers. Section V recalls Shannon's entropy. Section VI deals with some numerical examples. Finally, Section VII presents our conclusions.
II. Some Papers Linking Probability and Possibility
The link between probability and possibility has been studied by different authors in different ways, and these treatments cannot all be put together here. We would like to mention a few of them as an illustration, to make our point clear and simple.
2.1 Zadeh consistency principle:

Zadeh stated the probability-possibility consistency principle as follows: "a high degree of possibility does not imply a high degree of probability, nor does a low degree of probability imply a low degree of possibility", Zadeh [1]. He defined the degree of consistency between a probability distribution P = (p1, p2, ..., pn) and a possibility distribution Π = (π1, π2, ..., πn) as:

Cz = Σ πi pi, i = 1, 2, ..., n (1)

Zadeh pointed out that the probability-possibility consistency defined in (1) is not a precise law or a relationship that is intrinsic in the concepts of possibility and probability distributions. It is an approximate formalization of the heuristic connection that a lessening of the possibility of an event tends to lessen its probability, but not vice-versa (Zadeh [1]). From the above lines it is clear that Zadeh had some weaker constraints in mind, and so many other researchers felt the need for the development of some other principles.

2.2 Klir consistency principle:

Let X = {w1, w2, ..., wn} be a finite universe of singletons, and let pi = P(wi) and πi = Π(wi). Assume that the elements of X are ordered in such a way that, for i = 1, ..., n, pi > 0 and πi > 0, pi ≥ pi+1 and πi ≥ πi+1, with pn+1 = 0 and πn+1 = 0. According to Klir, the transformation from p to π (or from π to p) must preserve some appropriate scale and the amount of information contained in each distribution, Klir [3, 4]. The information contained in p or π can be expressed by the equality of their uncertainties. Klir has considered the principle of uncertainty preservation under two scales:

– The ratio scale: this is a normalization of the probability distribution. The transformations P → Π and Π → P are named the normalized transformations and they are defined by:

πi = pi/p1 and pi = πi / Σ πj, the sum being taken over j = 1, ..., n

– The log-interval scale: the corresponding transformations P → Π and Π → P are defined by:

πi = (pi/p1)^α and pi = πi^(1/α) / Σ πj^(1/α), the sum being taken over j = 1, ..., n

These transformations, which are named Klir's transformations, satisfy the uncertainty preservation principle defined by Klir [3, 4]; α is a parameter that belongs to the open interval ]0, 1[. In order to satisfy the uncertainty preservation principle, Klir tried to define a probability space, in the measure theoretic sense, from the knowledge of the possibilities concerned. After this transformation procedure, we can see the existence of another principle, which was the brainchild of Dubois and Prade.
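To make these definitions concrete, the following Python sketch (ours; the function names and the example distribution are illustrative assumptions, not part of the cited works) computes Zadeh's consistency degree (1) and Klir's ratio-scale and log-interval transformations.

```python
def zadeh_consistency(p, poss):
    # Eq. (1): Cz = sum_i pi_i * p_i
    return sum(pi * qi for pi, qi in zip(poss, p))

def klir_ratio_scale(p):
    # Normalized transformation: pi_i = p_i / p_1 (p sorted in descending order)
    p1 = p[0]
    return [pi / p1 for pi in p]

def klir_log_interval(p, alpha):
    # pi_i = (p_i / p_1) ** alpha, with 0 < alpha < 1
    p1 = p[0]
    return [(pi / p1) ** alpha for pi in p]

def klir_poss_to_prob(poss, alpha=1.0):
    # Inverse direction: p_i = pi_i^(1/alpha) / sum_j pi_j^(1/alpha)
    # (alpha = 1 gives the ratio-scale case)
    powered = [pi ** (1.0 / alpha) for pi in poss]
    total = sum(powered)
    return [v / total for v in powered]

if __name__ == "__main__":
    p = [0.5, 0.3, 0.2]                      # an example probability distribution
    poss = klir_ratio_scale(p)               # [1.0, 0.6, 0.4]
    print(poss, zadeh_consistency(p, poss))  # Cz = 0.5 + 0.18 + 0.08 = 0.76
    print(klir_log_interval(p, 0.5))
    print(klir_poss_to_prob(poss))           # recovers [0.5, 0.3, 0.2] up to rounding
```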
2.3 Dubois and Prade consistency principle:
The Dubois-Prade consistency principle came into force after those of Zadeh and Klir, because Dubois and Prade did not agree with the way consistency between probability and possibility was defined by their predecessors. Consequently, they developed another consistency principle on the assumption that it would overcome the shortcomings of the existing principles. They stated this principle in the following way:
The possibilistic representation is weaker than the probabilistic one because it explicitly handles imprecision (e.g. incomplete data) and because possibility measures are based on an ordering structure rather than the additive one of probability measures, Dubois [5]. Thus, in going from a probabilistic representation to a possibilistic one, some information is lost because we go from point-valued probabilities to interval-valued ones; the converse transformation adds information to some possibilistic incomplete knowledge.
The transformation P → Π is guided by the principle of maximum specificity, which aims at finding the most informative possibility distribution, while the transformation Π → P is guided by the principle of insufficient reason, which aims at finding the probability distribution that contains as much uncertainty as possible but that retains the features of the possibility distribution (Dubois [5]). This leads to writing the consistency principle of Dubois and Prade as:
∀ A ⊆ X: Π(A) ≥ P(A)
The transformations P → Π and Π → P are defined by:

πi = Σ pj, the sum being taken over j = i, ..., n (2)

and

pi = Σ (πj − πj+1)/j, the sum being taken over j = i, ..., n, with πn+1 = 0 (3)
The two transformations defined by (2) and (3) are not converses of each other because they are not based on the same informational principle. For this reason, we name the transformation Π → P defined by (3) an asymmetric one. Dubois and Prade suggested a symmetric transformation P → Π which is defined by:
πi = Σ min(pi, pj), the sum being taken over j = 1, ..., n (4)
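A minimal Python sketch of the transformations (2)-(4), as reconstructed above, may help fix ideas; the probabilities are assumed to be sorted in descending order, and the function names and example values are ours.

```python
def prob_to_poss_asymmetric(p):
    # Eq. (2): pi_i = sum_{j >= i} p_j, assuming p[0] >= p[1] >= ... >= p[n-1]
    n = len(p)
    return [sum(p[i:]) for i in range(n)]

def poss_to_prob(poss):
    # Eq. (3): p_i = sum_{j >= i} (pi_j - pi_{j+1}) / j  (j counted 1-based), pi_{n+1} = 0
    n = len(poss)
    ext = list(poss) + [0.0]
    return [sum((ext[j] - ext[j + 1]) / (j + 1) for j in range(i, n)) for i in range(n)]

def prob_to_poss_symmetric(p):
    # Eq. (4): pi_i = sum_j min(p_i, p_j)
    return [sum(min(pi, pj) for pj in p) for pi in p]

if __name__ == "__main__":
    p = [0.5, 0.3, 0.2]                     # example, sorted in descending order
    print(prob_to_poss_asymmetric(p))       # [1.0, 0.5, 0.2]
    print(prob_to_poss_symmetric(p))        # [1.0, 0.8, 0.6]
    print(poss_to_prob([1.0, 0.5, 0.2]))    # a probability distribution summing to 1
```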
Dubois and Prade proved that the symmetric transformation P → Π defined by (4) is the most specific transformation which satisfies their condition of consistency stated above (Dubois [5]). Thus here we can see the use of a possibility measure. But the possibility measure is not a measure in the classical sense, and so it can be said that the use of the term possibility measure is not justifiable from our standpoint.
A possible justification of this may be as follows: the measure of a point is zero, while the possibility of occurrence of a point is defined by a membership function, and therefore in this case the possibility of occurrence of the point is not zero. Hence there should not be any such formalism with reference to the membership function. That is to say, the use of the term possibility measure is not justifiable from our standpoint.
Some of the reasons can also be found in the work of Alt and Yovits. In [6], Alt and Yovits countered these arguments in the following way: although possibility theory employs weaker rules than probability theory in manipulating uncertainty, the basic structures of the two theories are not comparable. Hence, even though manipulating uncertainty within possibility theory results in a greater loss of information than manipulating the corresponding uncertainty within probability theory, it is neither necessary nor desirable to lose or gain information solely by transforming uncertainty from one representation to another.
There are other reasons also for which the principle becomes debatable. These can be described by the fact that the authors provided the transformations for the continuous case, namely for unimodal continuous probability density functions with bounded support, and finally arrived at the conclusion that further research is needed in the continuous case. The authors failed to find for which class of probability density functions and possibility distributions the transformation makes sense. In their work the authors pointed out that the transformations they devised are not related to each other, and the converse transformations were also shown to be inadequate. Another thing for which the transformation becomes debatable can be found in the fact that the word measure was used with possibility, which is not acceptable: the measure of a point is zero in the classical sense, but the possibility of a point is determined by a membership function. Further, since a possibility space can be bifurcated into two probability spaces, we can say that with the help of two probability spaces we can study possibility mathematically. So it becomes obvious that we cannot use these principles in all application areas.
These are the three most well-known consistency principles which can be found in the literature of fuzzy set theory since Zadeh's initial conception. These principles might have led many other authors in this field to develop many other principles of this kind, without properly considering that those principles defined probability in the same space in which possibility is defined. This is not the case with the transformation devised here. Another important thing to be noted is that all the well-known consistency principles that can be found in the literature of fuzzy set theory deal with consistency in the discrete case, and nothing is discussed about the continuous case. Continuous cases were discussed a bit in a paper of Dubois, Prade and Sandri, but at the same time it was mentioned that those transformations were not related to each other, and the converse transformations were shown to be inadequate. We shall, however, mention two more papers in which the authors tried to find a relation in their own way; but in the process the authors committed the same mistake by adopting the concepts of the existing principles, and thereby failed to define it properly. Due to these reasons, we would like to draw attention to some of those principles, which are as follows:
In one of these works it was shown that if f: R → [0, +∞] is a bounded density function, then the function μf: R → [0, +∞] defined for each x ∈ R by μf(x) = k f(kx), where

k = 1 / sup{f(x), x ∈ R},

is a density function and a possibility distribution function at the same time. Additionally, if f is continuous, then there exists y0 ∈ R such that μf(y0) = 1.

But a logical interpretation would lead us to defy the result obtained above. The reason behind such a claim can be attributed to the fact that, since a possibility space can be expressed as a combination of two probability spaces which would naturally be associated with some densities, it can be said that possibility spaces are already associated with those densities. Hence it can be claimed that there is no need of converting densities to possibility distributions as specified by the authors. To be more specific, we would like to say that the aforesaid procedure is not appropriately defined to yield an accurate result.

Du, Choi and Youn [8] were of the opinion that, unlike the probability based methods in which the probability density function and the cumulative distribution function of the random variable are well known, the selection of the membership function of the fuzzy variable in possibility based methods is not clear. They introduced a probability-possibility consistency principle to generate the membership function of a fuzzy variable from a temporary probability density function. Moreover, the kernel smoothing method was recommended to generate the temporary probability density function of the fuzzy variable from insufficient data.

Again, there is no need of introducing a temporary probability density function with the help of kernel smoothing methods, because a possibility distribution can be expressed as two distribution functions which are associated with some densities.

In [9], a variable transformation between probability and possibility was proposed. It is defined by

πi = (pi/p1)^(k(1 − pi))

where k is a constant which guarantees the following condition of consistency:

∀w ∈ X: π(w) ≥ p(w)

This condition is a particular case of the Dubois-Prade consistency principle, but there is a condition that the value of k must belong to the following interval:

0 < k < log pn / ((1 − pn) log(pn/p1))

It was mentioned by the authors that this transformation is different from Klir's transformation in the sense that Klir's transformation has a constant power α, which belongs to the open interval ]0, 1[, while the power k(1 − pi) in the variable transformation is a variable, which makes it more specific. Thus it is clear that the authors were not satisfied with the procedure developed by Klir, and consequently they tried to develop another one along that line by replacing the value of α; that is to say, they tried to modify Klir's principle in the process. But one thing worth mentioning here is that the logic behind such a development is rooted in the Dubois-Prade consistency principle, which itself has been criticized for many reasons.
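The following short Python sketch illustrates the variable transformation and the bound on k as reconstructed above; the example distribution and the function names are ours, and a small tolerance is used in the consistency check to absorb floating point error.

```python
import math

def variable_transformation(p, k):
    # pi_i = (p_i / p_1) ** (k * (1 - p_i)), with p sorted so that p[0] is the largest
    p1 = p[0]
    return [(pi / p1) ** (k * (1 - pi)) for pi in p]

def k_upper_bound(p):
    # Largest k for which (p_n/p_1)^(k(1-p_n)) >= p_n, i.e. the bound quoted above
    pn, p1 = p[-1], p[0]
    return math.log(pn) / ((1 - pn) * math.log(pn / p1))

if __name__ == "__main__":
    p = [0.5, 0.3, 0.2]
    k = k_upper_bound(p)
    poss = variable_transformation(p, k)
    # Consistency check pi(w) >= p(w) for every element of the example universe
    print(k, poss, all(pi >= pj - 1e-12 for pi, pj in zip(poss, p)))
```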
Yager [10] introduced a general procedure for transforming a probability distribution into another probability distribution such that the resulting distribution has at least as much entropy as the original one. For developing this procedure, they used an approach based on the possibility-probability transformations initially described by Dubois and Prade. The procedure they introduced for transforming one probability distribution into another can be described in the following manner.
In the process, P was assumed to be a probability distribution on X = {x1, x2, ..., xn} with p1 ≥ p2 ≥ ... ≥ pn, that is, the elements have been indexed in descending order of their probabilities. They then associated with this probability distribution a possibility distribution u on X, such that uj is the possibility of xj, where

un = n pn and uj = j(pj − pj+1) + uj+1

It was mentioned that, using the formula stated above, the transformation from a probability distribution to a possibility distribution was derived. Similarly, assuming u1 ≥ u2 ≥ ... ≥ un to be a normal possibility distribution on X with u1 = 1, they obtained an associated probability distribution on X where

pn = un/n and pj = (uj − uj+1)/j + pj+1
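A brief Python sketch of this pair of recursions is given below; the recursive form pj = (uj − uj+1)/j + pj+1 of the probability side is our reading of the procedure, and the function names and example values are illustrative only.

```python
def prob_to_poss_yager(p):
    # p sorted in descending order; u_n = n*p_n and u_j = j*(p_j - p_{j+1}) + u_{j+1}
    n = len(p)
    u = [0.0] * n
    u[n - 1] = n * p[n - 1]
    for j in range(n - 2, -1, -1):                 # 0-based index; (j+1) is the 1-based position
        u[j] = (j + 1) * (p[j] - p[j + 1]) + u[j + 1]
    return u

def poss_to_prob_yager(u):
    # u sorted in descending order with u_1 = 1; p_n = u_n/n and p_j = (u_j - u_{j+1})/j + p_{j+1}
    n = len(u)
    p = [0.0] * n
    p[n - 1] = u[n - 1] / n
    for j in range(n - 2, -1, -1):
        p[j] = (u[j] - u[j + 1]) / (j + 1) + p[j + 1]
    return p

if __name__ == "__main__":
    p = [0.5, 0.3, 0.2]
    u = prob_to_poss_yager(p)          # [1.0, 0.8, 0.6]
    print(u, poss_to_prob_yager(u))    # the second call recovers [0.5, 0.3, 0.2]
```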
It is to be mentioned here that our intention is only to focus on the ideas underlying the procedures discussed, and not on their technical details. Here also, as in the other cases, we can see that these relationships were the result of the Dubois-Prade consistency principle, which cannot be considered any more as an appropriate method of transformation, as has been discussed in the previous subsection and in more detail in our previous works. That is to say, like the other principles, we cannot consider this method of transformation for further studies either.
Lee and Llinas [11] focused on the fusing process of threat assessment by combining two different approaches which are very common in the literature. They tried to build a hybrid model of threat assessment because the air-to-air battle space requires fast decision making, for which it is essential to develop software for fast computations. To enable hybridization, they employed representative transformation methods between probability and possibility as found in the literature. They applied two transformation methods, of which one was developed by Geer and Klir whereas the other was developed by Dubois, Prade and Sandri. Unfortunately, these theories have not been sufficiently developed as yet, because there remain some controversial properties in the transformation procedures which, if not taken care of, would lead to serious problems, especially in defence related cases.
Geer and Klir proposed the "information preserving transformations (IPT)" concept for transforming between possibility and probability. They found the log-interval transformation to be the most appropriate one because it satisfies the criterion of consistency in both directions. The IPT concept requires that the numbers expressing uncertainty in one theory be transformed into corresponding numbers in the other theory by an appropriate scale, and that the amount of uncertainty and information be preserved under the transformations.
Thus in all of the above mentioned principles we can see the touch of the two common principles, namely Klir's consistency principle and the Dubois-Prade consistency principle. There are innumerable alternative extensions of possibility theory to fuzzy sets consistent with crisp sets. But from the above discussions we can get a glimpse of the fact that these two principles, on the basis of which many other principles were developed, have been criticized in many ways on some reasonable grounds. One thing becomes clear: there are some problems for which such types of criticisms took place. In order to avoid these types of criticisms, we would like to suggest a transformation procedure which is expected to meet all the requirements that are essential in defining the required transformation between probability and possibility.
The ultimate goal of the newly suggested consistency principle is to capture properties formalised within feasible mathematical frameworks. But it is to be noted here that the principle is rooted in the operation of superimposition of sets. Also, one has to look into the matter through the application of Glivenko-Cantelli's theorem of order statistics. There are, of course, alternative ways of linking probability with possibility or vice-versa; however, we shall consider only the following for future courses of study.
III. The Randomness – Fuzziness Consistency Principles
For a normal fuzzy number of the type N = [α, β, γ] with membership function

μN(x) = Ψ1(x), if α ≤ x ≤ β,
      = Ψ2(x), if β ≤ x ≤ γ, and
      = 0, otherwise,

with Ψ1(α) = Ψ2(γ) = 0 and Ψ1(β) = Ψ2(β) = 1, the partial presence of a value x of the variable X in the interval [α, γ] is expressible as

μN(x) = θ Prob [α ≤ X ≤ x] + (1 − θ) {1 − Prob [β ≤ X ≤ x]},

where θ = 1 if α ≤ x ≤ β and θ = 0 if β ≤ x ≤ γ.
This transformation is named "The Randomness-Fuzziness Consistency Principle", and it is expected that the shortcomings which are observed in the existing principles will be reduced to a great extent if this procedure of linking possibility and probability is taken into consideration. It was thus established that two laws of randomness are needed to define one possibility law.
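A minimal Python sketch of this principle, assuming the left reference function is obtained from a probability law on [α, β] and the right reference function from a probability law on [β, γ]; the triangular example and the function names are ours.

```python
def membership_from_two_laws(x, alpha, beta, gamma, F_left, F_right):
    # mu_N(x) = theta*Prob[alpha <= X <= x] + (1 - theta)*(1 - Prob[beta <= X <= x]),
    # with theta = 1 on [alpha, beta] and theta = 0 on (beta, gamma].
    # F_left is a distribution function on [alpha, beta]  (F_left(alpha) = 0, F_left(beta) = 1);
    # F_right is a distribution function on [beta, gamma] (F_right(beta) = 0, F_right(gamma) = 1).
    if alpha <= x <= beta:
        return F_left(x)            # left reference function = a probability law
    if beta < x <= gamma:
        return 1.0 - F_right(x)     # right reference function = a complementary probability law
    return 0.0

if __name__ == "__main__":
    # Illustrative triangular fuzzy number [0, 1, 3] built from two uniform laws (our example)
    alpha, beta, gamma = 0.0, 1.0, 3.0
    F_left = lambda x: (x - alpha) / (beta - alpha)
    F_right = lambda x: (x - beta) / (gamma - beta)
    for x in (0.0, 0.5, 1.0, 2.0, 3.0):
        print(x, membership_from_two_laws(x, alpha, beta, gamma, F_left, F_right))
```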
With the above result, we would like to establish the fact that the spirit of this approach is, in our opinion, better founded than that of the existing ones. If this be the case, then it is obvious that the results of all the transformations which basically depend on the existing links between probability and possibility, or conversely, would be illogical from our standpoint, the reasons for which are discussed in the previous section. That is why those principles cannot be accepted for further studies, and those who depended on these results without in-depth thinking would have to reconsider the procedures developed with the existing principles linking possibility with probability. In other words, we would like to say here that the method of linking probability with possibility which is suggested by us is preferable among the various other existing transformation procedures because of its logical foundations and the appropriate mathematical framework within which it is constructed. While dealing with a subject like mathematics, it is very important to see whether the things which are in use are constructed within proper mathematical frameworks or not. This is necessary because otherwise we would have to be content with some results having no logic at all. Hence it is expected that the above mentioned method of transformation would be workable in all respects, and it is for this reason that this principle of consistency is suggested in this article.
From the above, it can be said that the researchers who tried to link probability with possibility have ignored one most important thing: two probability spaces are required to define a possibility space. That is to say, while developing their principles, possibility was defined in the same space over which probability was defined. Various other principles were developed one after another from time to time without such logical considerations. But one thing can be noticed: none of the researchers who dealt with finding a link between probability and possibility was satisfied with the principles developed by their predecessors. This is clear from their attempts to find new consistency principles, as a result of which we can find a myriad of principles in this respect. But it is important to mention here that newcomers to the field will be in a very difficult situation when choosing, from the various principles, the most appropriate one that suits their studies. So, an effort has been made to solve this problem with the powerful concept of "The Randomness-Fuzziness Consistency Principle" proposed by Baruah [13]. Again, it is worth mentioning the fact that with the help of this suggested principle we can define the entropy of a fuzzy set in a logical manner. Let us have a brief visit to the concept of entropy of a fuzzy set in the following section.
IV. New Definition of Entropy of a Fuzzy Number
Fuzziness, a feature of uncertainty, results from the lack of a sharp distinction between being and not being a member of a set. Probability has traditionally been used in modelling uncertainty. A measure of fuzziness used and cited in the literature is the fuzzy entropy, also first mentioned by Zadeh in 1968. The name entropy was chosen due to an intrinsic similarity with Shannon's entropy. However, the two functions measure fundamentally different types of uncertainty. Basically, Shannon's entropy measures the average uncertainty, in bits, associated with the prediction of outcomes in a random experiment. Fuzzy entropy, on the other hand, is the measurement of the fuzziness of a fuzzy set, and thus has an important position in fuzzy systems such as fuzzy decision making systems, fuzzy control systems, fuzzy neural network systems, and fuzzy management information systems. That is to say, measuring the fuzziness of a fuzzy set is an important step in fuzzy systems. Further, entropy is a well known concept within physics, information theory and fuzzy set theory. Depending on its context, entropy is used for quantifying the amount of disorder, of information, or of fuzziness, usually defined within either a statistical or a fuzzy framework. As with the probability-possibility consistency principles, we can see different formulas for finding the entropy of fuzzy sets. But unfortunately, all these formulas were rooted in the concept that, in the case of a fuzzy set A, neither its union with its complement is the universal set nor its intersection with its complement is the null set. As a consequence of the availability of many such definitions, newcomers to the field will be overwhelmed in deciding which definition to follow. This would lead them to a chaotic state. In a subject like mathematics, these types of situations are not desirable. As with the consistency principles, we do not favour the aforesaid concepts, which have been discussed in detail in our previous works, Dhar [15, 16, 17, 18, 19], and as such we have tried to define entropy in accordance with the consistency principle discussed above. It is expected that this would yield a result within an appropriate mathematical framework. The immediate application of this suggested consistency principle can be found in estimating the entropy of fuzzy numbers. But for this purpose we need the help of Shannon's entropy, and hence for convenience a brief discussion about it is provided in the next section.
V. Shannon’s Entropy:
In probability theory, a well known concept is entropy. If P is a probability distribution on X = {x1, x2, ..., xn}, where pi is the probability of xi, then the entropy of P is expressed as

H(P) = − Σ pi ln pi, i = 1, 2, ..., n
This expression, called Shannon's index or Shannon's entropy measure, is the most widely used. Using this definition of entropy, fuzzy entropy too can be found with the help of the Randomness-Fuzziness Consistency Principle which is suggested in this article.
According to the probability-possibility consistency principle suggested above, the left reference function of a normal fuzzy number, which is nothing but a distribution function, would lead to an entropy E1. In a similar manner, the right reference function of the normal fuzzy number, which is nothing but a complementary distribution function, would lead to an entropy E2. The pair [E1, E2] so found can rightly be called the fuzzy entropy, in the classical sense of defining Shannon's entropy for a discrete law of randomness. Discretizing a law of randomness for a continuous variable should not be much of a problem, and this in turn can be used to define the fuzzy entropy [E1, E2], where E1 and E2 are the entropies obtained from the left and right reference functions respectively.
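As a quick illustration of the formula above, here is a small Python helper (the function name is ours); it also anticipates the uniform case that appears in the tables of the next section.

```python
import math

def shannon_entropy(p):
    # H(P) = -sum_i p_i * ln(p_i); terms with p_i = 0 contribute nothing
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

if __name__ == "__main__":
    print(shannon_entropy([0.2] * 5))   # ln 5 = 1.6094379...
```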
VI. Numerical Examples
Let X = [4, 16, 25] be a fuzzy number with membership function defined as:

μX(x) = (√x − 2)/2, 4 ≤ x ≤ 16,
      = 5 − √x, 16 ≤ x ≤ 25,
      = 0, otherwise.
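A tiny Python check (ours) of this membership function at its end points and at the point of maximum possibility:

```python
import math

def mu_X(x):
    # Membership function of the fuzzy number X = [4, 16, 25] defined above
    if 4 <= x <= 16:
        return (math.sqrt(x) - 2) / 2
    if 16 < x <= 25:
        return 5 - math.sqrt(x)
    return 0.0

if __name__ == "__main__":
    print(mu_X(4), mu_X(16), mu_X(25))   # 0.0 1.0 0.0, so X is a normal fuzzy number
```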
It is important to mention here that the left reference function μX(x) = (√x − 2)/2, 4 ≤ x ≤ 16, would now be considered, according to the Dubois and Prade definition of a normal fuzzy number, as a probability distribution function, while the right reference function μX(x) = 5 − √x, 16 ≤ x ≤ 25, would now be considered as a complementary probability distribution function.
Here we shall find Shannon's entropy for the left reference function and the right reference function of the above mentioned fuzzy number, with the given membership function, and thereby illustrate our proposed definition of entropy, which is better known in the discrete case as Shannon's index. It is expected that the matter will become clear from the process discussed below.
Table 1: Shannon's Entropy for the Left Reference Function

| x    | F(x)     | p        | ln p       | p ln p     |
|------|----------|----------|------------|------------|
| 4.0  | 0        | .2649111 | -1.3283610 | -0.3518957 |
| 6.4  | .2649111 | .2183286 | -1.5217540 | -0.3322424 |
| 8.8  | .4832397 | .1900804 | -1.6603081 | -0.3155920 |
| 11.2 | .6733201 | .1705888 | -1.7684993 | -0.3016862 |
| 13.6 | .8439089 | .1560911 | -1.8573155 | -0.2899104 |
| 16   | 1        |          |            |            |

Entropy of the left reference function: −Σ p ln p = 1.5913267
Table 2: Shannon's Entropy for the Right Reference Function

| x    | G(x)     | 1−G(x)   | p        | ln p        | p ln p     |
|------|----------|----------|----------|-------------|------------|
| 16.0 | 1        | 0        | .2190046 | -1.5186625  | -0.3325940 |
| 17.8 | .7809954 | .2190046 | .2081841 | -1.5693325  | -0.3267100 |
| 19.6 | .5728113 | .4271887 | .1988247 | -1.6153317  | -0.3211678 |
| 21.4 | .3739866 | .6260134 | .1906244 | -1.6574503  | -0.3159504 |
| 23.2 | .1833622 | .8166378 | .1833622 | -1.69629184 | -0.3110358 |
| 25.0 | 0        | 1        |          |             |            |

Entropy of the right reference function: −Σ p ln p = 1.607459
Again, from the above results, we can say that the pair of entropies according to our proposed definition would become (1.5913267, 1.607459).
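Tables 1 and 2 can be reproduced with a short Python sketch (ours), under the assumption that the uniform five-point discretization used above is the intended one; it discretizes the left reference function on [4, 16] and the complementary right reference function on [16, 25].

```python
import math

def discretized_entropy(cdf, a, b, steps=5):
    # Split [a, b] into equal intervals, take p_i = cdf(x_{i+1}) - cdf(x_i),
    # and return the Shannon entropy -sum p_i ln p_i of the discretized law
    xs = [a + (b - a) * i / steps for i in range(steps + 1)]
    ps = [cdf(xs[i + 1]) - cdf(xs[i]) for i in range(steps)]
    return -sum(p * math.log(p) for p in ps if p > 0)

if __name__ == "__main__":
    F_left = lambda x: (math.sqrt(x) - 2) / 2      # distribution function on [4, 16]
    F_right = lambda x: 1 - (5 - math.sqrt(x))     # 1 - G(x), a distribution function on [16, 25]
    E1 = discretized_entropy(F_left, 4, 16)        # ~1.5913267, as in Table 1
    E2 = discretized_entropy(F_right, 16, 25)      # ~1.6074590, as in Table 2
    print(round(E1, 7), round(E2, 7))
```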
Thus we have seen that with the help of "The Randomness-Fuzziness Consistency Principle" and Shannon's entropy index we can find the entropy of a fuzzy number. An important characteristic of the entropy of fuzzy numbers is that the entropy of all triangular fuzzy numbers is the same, whereas it varies in the case of non-triangular fuzzy numbers. It is known that the sum of two triangular fuzzy numbers is again a triangular fuzzy number, and hence we get the same entropy for the resulting triangular fuzzy number as for the individual triangular fuzzy numbers. That is to say, we do not get different entropies for different triangular fuzzy numbers for the same choice of intervals. To make the matter clear and complete, let us have a look at the following example.
6.1 Numerical Example
Let us consider two triangular fuzzy sets A = [-5, -2, 1] and B = [-3, 4, 12] defined with the membership functions

μA(x) = (x + 5)/3, -5 ≤ x ≤ -2,
      = (1 − x)/3, -2 ≤ x ≤ 1,
      = 0, otherwise,

and

μB(x) = (x + 3)/7, -3 ≤ x ≤ 4,
      = (12 − x)/8, 4 ≤ x ≤ 12,
      = 0, otherwise.
Let us calculate the entropy for the fuzzy set A
Table 3: Shannon's Entropy for the Left Reference Function

| x    | F(x) | p  | ln p         | p ln p       |
|------|------|----|--------------|--------------|
| -5   | 0    | .2 | -1.609437912 | -0.321887582 |
| -4.4 | .2   | .2 | -1.609437912 | -0.321887582 |
| -3.8 | .4   | .2 | -1.609437912 | -0.321887582 |
| -3.2 | .6   | .2 | -1.609437912 | -0.321887582 |
| -2.6 | .8   | .2 | -1.609437912 | -0.321887582 |
| -2   | 1    |    |              |              |

−Σ p ln p = 1.609437912
Table 4: Shannon's Entropy for the Right Reference Function

| x    | G(x) | 1−G(x) | p  | ln p         | p ln p       |
|------|------|--------|----|--------------|--------------|
| -2   | 1    | 0      | .2 | -1.609437912 | -0.321887582 |
| -1.4 | .8   | .2     | .2 | -1.609437912 | -0.321887582 |
| -.8  | .6   | .4     | .2 | -1.609437912 | -0.321887582 |
| -.2  | .4   | .6     | .2 | -1.609437912 | -0.321887582 |
| .4   | .2   | .8     | .2 | -1.609437912 | -0.321887582 |
| 1    | 0    | 1      |    |              |              |

−Σ p ln p = 1.609437912
Hence the Shannon's entropy for the fuzzy number A is (1.609437912, 1.609437912). Now we are going to find the entropy for the fuzzy number B.
Table 5: Shannon's Entropy for the Left Reference Function

| x    | F(x) | p  | ln p         | p ln p       |
|------|------|----|--------------|--------------|
| -3   | 0    | .2 | -1.609437912 | -0.321887582 |
| -1.6 | .2   | .2 | -1.609437912 | -0.321887582 |
| -.2  | .4   | .2 | -1.609437912 | -0.321887582 |
| 1.2  | .6   | .2 | -1.609437912 | -0.321887582 |
| 2.6  | .8   | .2 | -1.609437912 | -0.321887582 |
| 4.0  | 1    |    |              |              |

−Σ p ln p = 1.609437912
Table 6: Shannon's Entropy for the Right Reference Function

| x    | G(x) | 1−G(x) | p  | ln p         | p ln p       |
|------|------|--------|----|--------------|--------------|
| 4    | 1    | 0      | .2 | -1.609437912 | -0.321887582 |
| 5.6  | .8   | .2     | .2 | -1.609437912 | -0.321887582 |
| 7.2  | .6   | .4     | .2 | -1.609437912 | -0.321887582 |
| 8.8  | .4   | .6     | .2 | -1.609437912 | -0.321887582 |
| 10.4 | .2   | .8     | .2 | -1.609437912 | -0.321887582 |
| 12   | 0    | 1      |    |              |              |

−Σ p ln p = 1.609437912
Hence the Shannon’s entropy for the fuzzy number B is (1.609437912, 1.609437912)
Now, if the two fuzzy numbers are added, then the membership function of the resulting triangular fuzzy number would become

μA+B(x) = (x + 8)/10, -8 ≤ x ≤ 2,
        = (13 − x)/11, 2 ≤ x ≤ 13,
        = 0, otherwise.
Table 7: Shannon's Entropy for the Left Reference Function

| x  | F(x) | p  | ln p         | p ln p       |
|----|------|----|--------------|--------------|
| -8 | 0    | .2 | -1.609437912 | -0.321887582 |
| -6 | .2   | .2 | -1.609437912 | -0.321887582 |
| -4 | .4   | .2 | -1.609437912 | -0.321887582 |
| -2 | .6   | .2 | -1.609437912 | -0.321887582 |
| 0  | .8   | .2 | -1.609437912 | -0.321887582 |
| 2  | 1    |    |              |              |

−Σ p ln p = 1.609437912
Table 8: Shannon's Entropy for the Right Reference Function

| x    | G(x) | 1−G(x) | p  | ln p         | p ln p       |
|------|------|--------|----|--------------|--------------|
| 2    | 1    | 0      | .2 | -1.609437912 | -0.321887582 |
| 4.2  | .8   | .2     | .2 | -1.609437912 | -0.321887582 |
| 6.4  | .6   | .4     | .2 | -1.609437912 | -0.321887582 |
| 8.6  | .4   | .6     | .2 | -1.609437912 | -0.321887582 |
| 10.8 | .2   | .8     | .2 | -1.609437912 | -0.321887582 |
| 13   | 0    | 1      |    |              |              |

−Σ p ln p = 1.609437912
Hence the Shannon’s entropy for the fuzzy number A+B is (1.609437912, 1.609437912)
From the above example it is clear that the entropies of triangular fuzzy numbers are always equal for the same choice of intervals, whereas they vary if different interval lengths are chosen for the fuzzy numbers under consideration.
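This behaviour can be checked with a few lines of Python (ours): a triangular fuzzy number has linear reference functions, so a discretization into five equal intervals gives uniform increments p = 0.2 on each side, and both entropies equal ln 5 ≈ 1.609437912.

```python
import math

def reference_entropy(a, b, steps=5):
    # A linear reference function on [a, b] discretized into equal intervals gives
    # increments of 1/steps each, so the entropy is ln(steps) regardless of a and b
    cdf = lambda x: (x - a) / (b - a)
    xs = [a + (b - a) * i / steps for i in range(steps + 1)]
    ps = [cdf(xs[i + 1]) - cdf(xs[i]) for i in range(steps)]
    return -sum(p * math.log(p) for p in ps)

if __name__ == "__main__":
    for alpha, beta, gamma in [(-5, -2, 1), (-3, 4, 12), (-8, 2, 13)]:   # A, B and A+B
        print(reference_entropy(alpha, beta), reference_entropy(beta, gamma), math.log(5))
```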
VII. Conclusion
In this article, efforts have been made to show that two laws of randomness are needed to define a normal fuzzy number, with one law of randomness leading to the membership function on the left of the point of maximum possibility and another law of randomness leading to the membership function on the right of the point of maximum possibility. Since the possibility distribution of a normal fuzzy number can be expressed as two distribution functions by using set superimposition, it seems that efforts to find density functions which are possibility distributions and probability distributions at the same time have no logical meaning from our standpoint. Since possibility distributions can be expressed either as a probability or as a complementary probability, they are already associated with some densities. For the same reason we would like to discard the variable transformation as well. The result obtained by us with the help of the operation of set superimposition seems more logical, as it is established in accordance with the definitions of the left reference function and the right reference function which are used to define a normal fuzzy number, as can be found in the literature of fuzzy set theory. Finally, we would like to say that the time has come to rethink the problems already cited, and it is therefore rational to replace all the transformations which are found in the case of probability-possibility consistency with the one proposed by us, because here every effort is made to make it logical and workable. Further, as an application, we have discussed the entropy of fuzzy sets and thereafter showed how this principle is helpful in finding an appropriate value of the entropy of a fuzzy set with the help of an example. Hence our claim.
Acknowledgments
The authors would like to thank the anonymous reviewers for their careful reading of this paper and for their helpful comments.
References
[1] L. A. Zadeh, Fuzzy sets as a basis for a theory of possibility, Fuzzy Sets and Systems, 1 (1978) 3-28.
[2] R. Krishnapuram and J. M. Keller, A possibilistic approach to clustering, IEEE Transactions on Fuzzy Systems, 1(2) (1993) 98-109.
[3] G. J. Klir, Information preserving probability-possibility transformation: recent developments, Fuzzy Logic (1993) 417-428.
[4] G. J. Klir, Probability-possibility transformations: a comparison, International Journal of General Systems, 21 (1992) 291-310.
[5] D. Dubois and H. Prade, On possibility/probability transformations, Fuzzy Logic (1993) 103-112.
[6] F. Alt and Marshall C. Yovits, Advances in Computers, New York (1993) 315-318.
[7] Elina Castineira, Susana Cubillo and Enric Trillas, On the coherence between probability and possibility measures, International Journal of Information Theories and Applications, 14 (2007) 303-310.
[8] Liu Du, Kyung K. Choi and Byeng D. Youn, AIAA Journal, 2005.
[9] Moamar Sayed Mouchaweb, Mohamed Said Bouguelid, Patrice Billaudel and Bernard Riera, Variable probability-possibility transformation, 25th European Annual Conference on Human Decision-Making and Manual Control (EAM'06), September 27-29, Valenciennes, France.
[10] Ronald R. Yager, Entropy conserving probability transform and the entailment principles, Fuzzy Sets and Systems, 158 (2007) 1397-1405.
[11] K. D. Lee and J. Llinas, Hybrid model for intent estimation, Proceedings of the Sixth International Conference on Information Fusion, Cairo, July 2001, 1215-1220.
[12] H. K. Baruah, Set superimposition and its application to the theory of fuzzy sets, Journal of the Assam Science Society, 40(1-2) (1999) 25-31.
[13] H. K. Baruah, The randomness-fuzziness consistency principles, IJEIC, 1(1) (2010) 37-48.
[14] H. K. Baruah, Theory of fuzzy sets: beliefs and realities, IJEIC, 2(2) (2011) 1-22.
[15] Mamoni Dhar, Rituparno Chutia and Supahi Mahanta, A note on existing definition of fuzzy entropy, IJEIC, Korea, 3(1) (2012) 17-21.
[16] Mamoni Dhar, A note on the coherence between probability and possibility measures, IJCA, USA, 43(7) (2012) 28-29.
[17] Mamoni Dhar, Probability-possibility transformation: a brief revisit, accepted for publication in AFMI journal, Korea, 2012.
[18] Mamoni Dhar, A note on variable transformations, accepted for publication in AFMI journal, Korea, 2012.
[19] Mamoni Dhar, Probability-possibility transformations: an overview, accepted for publication in IJMA journal, India, 2012.