Projective Capital Asset Pricing Model


This paper explores the capabilities and limitations of investment decision making under uncertainty through the lens of the quantum probability formalism, taking the Capital Asset Pricing Model as a use case. Our main purpose is to examine the historical and structural foundations of decision-making paradoxes. To ease comprehension for the general reader, we first outline the key cornerstones of investment decision making under the two competing conceptual frameworks, expected utility and mean-variance. We then review the axiomatic justifications of mean-variance and compare it with expected utility more generally. This is where the analogy with quantum probabilities arises, since the decision-making process appears more naturally expressed in terms of amplitudes; here, quantum probability refers to a calculus of quantum states rather than of probabilities. In the final section, we present the Capital Asset Pricing Model to understand the appeal of mean-variance over expected utility in financial theory, and show how this approach can be amended once decisions are depicted in terms of quantum probability amplitudes. Several extensions of rational decision-making theory based on classical probability formulations have emerged in response to empirical findings, trying to explain such paradoxes and improve the existing decision-making framework. These simplifying assumptions sought to generate probabilistic measures without linearity, to make state-independent probabilistic estimates, or to loosen the firm assumptions about agents made in generalized utility theory. While these attempts helped expose the pitfalls of classical probabilities in some decision-making situations, they failed to yield a harmonized expected utility model. An established theory to consider is prospect theory by Kahneman and Tversky, which encompasses human biases and heuristics. Indeed, its attributes make this theory a natural candidate for extension into a general framework of decision-making theory, with quantum probabilities as the mathematical scaffolding.


Decision making, mean–variance, expected utility, decision-making paradoxes, Borch’s paradox, quantum probability, probability mixture, portfolio theory, CAPM

Short address: https://sciup.org/14124345

IDR: 14124345   |   DOI: 10.47813/2782-2818-2022-2-4-0201-0213



The modeling of rational decision making is based on expected utility theory, first introduced through the axiomatization of von Neumann and Morgenstern. However, the mean-variance method remains by far the most used model in the economic and financial literature when it comes to representing investors' preferences [1-5].

The framework presented by Markowitz in the early 1950s uses the ex-ante mean and standard deviation of the anticipated financial return of an available investment opportunity to represent an asset. In particular, this means that each portfolio and asset is represented in only two dimensions, the pair of coordinates (μ, σ). However, this approach has been considered inconsistent. One of the foremost advocates of this view is Borch, who claimed that two-dimensional indifference curves are logically incoherent with the representation of the investor's rational preferences. This set the ground for what became known as the Borch paradox, and a few works of high theoretical importance emerged as a result, establishing the first arguments toward the connection between mean-variance and expected utility.

Nevertheless, even if expected utility seems more logically coherent with the decision-making process, it still shows major divergences in some real-life cases of human thinking. In fact, several paradoxes in psychology and economics, such as Ellsberg's and Allais's [2], have exposed cases where generalized utility theory fails to match human thinking. This is a direct consequence of the probabilistic structure of expected utility theory. Surprisingly, owing to their ability to handle this kind of problem efficiently, quantum probabilities have started to be widely used for decision-making modeling in behavioral economics, finance and cognitive psychology. This might be considered overly original, were it not that Kahneman, Tversky and other researchers were already seeking to extend the normative decision-making framework to explain the decision-making paradoxes that arise from using classical probabilities [6-9]. These experimental investigations were mainly looking to improve probabilities in uncertain environments. Thus, the following questions arose: Does human thinking follow classical probability? If not, should we continue to adopt classical probability as the cornerstone of description and normative prediction in decision making?

Then what other rules can be pursued to formalize human preferences and decision-making? And what are the consequences for portfolio theory, especially the Capital Asset Pricing Model (CAPM)?

MEAN VARIANCE METHOD

Ranking probability distributions using mean and variance can often be considered the main method for decision making under uncertainty. While the modern literature has dedicated an important research area to ranking distributions via moments over a wide range of fields, this method has played a particularly central role in portfolio analysis, especially through Markowitz's framework. In the latter, financial assets are represented as random variables, each with a probability distribution over its possible returns [10-13]. Thus, the two moments, mean and variance, are the two major instruments for ranking distributions: the mean is the average return, while the variance characterizes the risk. The investor is considered to be generally risk averse and therefore prefers portfolios with lower "risk" (standard deviation) σ but higher mean return μ. Consequently, the rationale is to select a portfolio that minimizes the variance and maximizes the mean. In particular, when two portfolios have the same expected return, the one with the smaller variance is picked, while when two portfolios have the same variance, the one with the larger expected return is chosen. The resulting opportunity set then reduces to certain portfolios forming the so-called "efficient frontier".

Constructively, the efficient frontier then dominates all assets and portfolios to its southeast, which jointly have higher σ and lower μ. Then comes the indifference function V(σ, μ), used as a tool to discriminate between two efficient portfolios. In fact, in order to choose between portfolios located on the efficient frontier, the investor has to form equivalued (indifference) curves. Once the indifference function V(σ, μ) and the efficient frontier are set, the MV-optimal investment in risky assets is the point of tangency between V(σ, μ), which belongs to the decision maker, and the efficient frontier.
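
To make the construction concrete, here is a minimal Python sketch tracing a few points of an efficient frontier for three hypothetical assets; the expected returns and covariance matrix are illustrative assumptions, not data from the paper.

```python
# Minimal mean-variance sketch with illustrative (made-up) inputs:
# three assets, their expected returns and return covariance matrix.
import numpy as np

mu = np.array([0.05, 0.08, 0.12])          # expected returns (illustrative)
cov = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.09, 0.02],
                [0.00, 0.02, 0.16]])       # covariance matrix (illustrative)

def min_variance_weights(target_mu):
    """Fully invested portfolio of minimum variance achieving a target mean.

    Solves min w'Cw subject to w'mu = target_mu and w'1 = 1 via Lagrange
    multipliers (short sales allowed), which reduces to a 2x2 linear system.
    """
    inv = np.linalg.inv(cov)
    ones = np.ones_like(mu)
    A = np.array([[mu @ inv @ mu, mu @ inv @ ones],
                  [ones @ inv @ mu, ones @ inv @ ones]])
    lam, gam = np.linalg.solve(A, np.array([target_mu, 1.0]))
    return inv @ (lam * mu + gam * ones)

# Trace a few points of the efficient frontier as (sigma, mu) pairs.
for m in (0.06, 0.08, 0.10):
    w = min_variance_weights(m)
    sigma = np.sqrt(w @ cov @ w)
    print(f"target mu={m:.2f}  sigma={sigma:.4f}  weights={np.round(w, 3)}")
```

Plotting the resulting (σ, μ) pairs and overlaying the investor's indifference curves V(σ, μ) would then locate the tangency portfolio described above.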

THE EXPECTED UTILITY

The expected utility framework manifests greater richness when considering the variety of stochastic environments in which decisions are made. This paradigm usually entails more consistent assumptions about risk preferences and then tries to fit the properties of the von Neumann-Morgenstern utility functions that underlie the general expected utility framework.

In the economic literature, expected utility has served, on the one hand, as a descriptive theory explaining how people make decisions and, on the other hand, as a predictive theory, trying to correctly forecast people's choices while modelling, to some extent, the psychological mechanisms of decision making.

In fact, expected utility theory deals with a decision maker's choice between uncertain or risky prospects. The chosen act is the one resulting in the highest expected utility value. The underlying rule is the comparison of expected utility values: weighted sums obtained by adding the utility values of outcomes multiplied by their respective probabilities.

Utility theory includes two main models. The first is expected utility under risk, characterized by the von Neumann-Morgenstern framework, which evaluates risky prospects represented as lotteries over an arbitrary set of outcomes. Formally, let X = {x_1, ..., x_n} be a set of outcomes and (x_1, p_1; ...; x_n, p_n) a risky prospect, where each p_i denotes the probability of the outcome x_i. The prospect is then evaluated by the formula ∑_{i=1}^{n} u(x_i) p_i, where u is a real-valued function over X representing the decision maker's preferences. The second model is expected utility under uncertainty, which deals with the evaluation of random variables (acts) whose distributions are not part of the data and which represent alternative courses of action. Formally, an act f is evaluated, according to the model of expected utility under uncertainty, by ∑_{i=1}^{n} u(x_i) π(f⁻¹(x_i)),

where π is a probability measure on the state space S representing the decision maker's belief.
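
As a simple illustration of the evaluation rule just described, the sketch below computes ∑_i u(x_i) p_i for two hypothetical prospects; the square-root utility is an arbitrary choice made only for the example.

```python
# Minimal sketch: evaluating a risky prospect (x_1, p_1; ...; x_n, p_n)
# by its expected utility sum_i u(x_i) * p_i, with an illustrative
# concave utility u(x) = sqrt(x).
import math

def expected_utility(prospect, u):
    """prospect is a list of (outcome, probability) pairs."""
    return sum(u(x) * p for x, p in prospect)

u = math.sqrt                          # illustrative utility function
sure_thing = [(100.0, 1.0)]            # 100 for certain
gamble = [(500.0, 0.5), (0.0, 0.5)]    # 500 or 0 with equal probability

print(expected_utility(sure_thing, u))  # 10.0
print(expected_utility(gamble, u))      # ~11.18: the gamble has higher EU here
```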

The origins of expected utility theory are often interpreted in terms of the following generalization mechanism: the maximization of expected financial value is presented as an earlier foundational concept, which is nowadays often generalized in two ways, either by non-probabilistic or by non-additive decision theories.

However, expected utility theory makes faulty predictions about people's decisions in many real-life choice situations (see Kahneman & Tversky 1982). Nevertheless, this does not settle whether people should make decisions on the basis of expected utility considerations. In the following paragraphs, we highlight this with Allais' paradox, a widely known thought problem exposing a counterexample to the expected utility hypothesis.

Allais Paradox

Let's recall the axiomatic foundations of the expected utility theory (Figure 1):

Transitivity: if x ≽ y and y ≽ z, then x ≽ z.

Completeness: x ≽ y or y ≽ x.

Independence: if x ≻ y and 0 < p < 1, then [x, p; z, 1−p] ≻ [y, p; z, 1−p].

Continuity: if x ≻ y and y ≻ z, then there are numbers 0 < p < 1 and 0 < q < 1 such that [x, p; z, 1−p] ≻ y and y ≻ [x, q; z, 1−q].

Figure 1. The axiomatic foundations of expected utility theory.

where:

x ≻ y denotes that x is strictly preferred to y;

x ~ y denotes indifference between x and y;

x ≽ y says that x is at least as preferred as y;

[x, p; y, 1−p] is a lottery assigning probability p to x and 1−p to y.

Now let us turn to the Allais paradox. Historically, it is the following Allais experiment that gave rise to the paradox:

The first choice is between the prospects p_1 and q_1:

p_1: 100 with probability 1;

q_1: 500 with probability 0.10, 100 with probability 0.89, 0 with probability 0.01.

The second choice is between p_2 and q_2:

p_2: 100 with probability 0.11, 0 with probability 0.89;

q_2: 500 with probability 0.10, 0 with probability 0.90.

There are four possible couples of choices, of which two respect the expected utility hypothesis, namely (p_1, p_2) and (q_1, q_2), while the others violate it, namely (p_1, q_2) and (q_1, p_2). These claims are verified by writing down the algebraic inequalities that result from the preference comparisons. For a couple to be compatible with the expected utility hypothesis, it is necessary and sufficient that these algebraic inequalities be mutually compatible. For example, with (p_1, q_2), the preference comparisons are p_1 ≻ q_1 and q_2 ≻ p_2, and they result in two inequalities that contradict each other.

Thus (q_1, p_2) violates the expected utility hypothesis, as does (p_1, q_2), whereas (p_1, p_2) and (q_1, q_2) are compatible with it.

This reasoning assumes that individual choices effectively reflect the individual's preferences as in the aforementioned axiomatization. This hypothesis corresponds to the ordinary semantics of preferences in economics, the so-called revealed preferences, and neither Allais nor his successors consider it problematic, to the point that they do not even mention it. The major problem spotted here concerns the violation of the independence axiom.
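
To make the contradiction explicit, the sketch below searches by brute force, over randomly drawn utility values with u(0) < u(100) < u(500), for an expected utility representation of the modal Allais choices p_1 ≻ q_1 and q_2 ≻ p_2. None exists: after cancellation, both preferences reduce to opposite strict inequalities between 0.11·u(100) and 0.10·u(500) + 0.01·u(0).

```python
# Sketch: no assignment of utilities u(0) < u(100) < u(500) can rationalize
# the modal Allais choices (p1 over q1, and q2 over p2) under expected utility.
import random

def eu_consistent(u0, u100, u500):
    # p1 > q1  <=>  u(100) > 0.10*u(500) + 0.89*u(100) + 0.01*u(0)
    prefers_p1 = u100 > 0.10 * u500 + 0.89 * u100 + 0.01 * u0
    # q2 > p2  <=>  0.10*u(500) + 0.90*u(0) > 0.11*u(100) + 0.89*u(0)
    prefers_q2 = 0.10 * u500 + 0.90 * u0 > 0.11 * u100 + 0.89 * u0
    return prefers_p1 and prefers_q2

hits = 0
for _ in range(100_000):
    u0, u100, u500 = sorted(random.uniform(0, 1) for _ in range(3))
    hits += eu_consistent(u0, u100, u500)
print(hits)  # 0: both conditions reduce to 0.11*u(100) vs 0.10*u(500) + 0.01*u(0)
```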

The probabilistic issue with expected utility

For many years, the expected utility paradigm, relying on the axiomatic foundations of von Neumann and Morgenstern, Savage, and Anscombe and Aumann [10-13], has been considered normative in finance and economics. However, during the last decades it has been severely criticized on both descriptive and normative grounds. Examples of its systematic violations have been raising the discussion about the evidently non-normal distribution of returns, since Fama and French [5].

In fact, violations appear in practice where uncertainty is represented by means of extrinsically specified probabilities: objective probabilities. This was pointed out by several researchers, such as Allais [2], Pennacchi [11] and others, who found that the choices made by the great majority of subjects violate the expected utility hypothesis in different situations. This is because uncertainty rather presents itself as states of nature and rarely in terms of objective probability, which led to the concept of subjective uncertainty, suggesting that each combination of portfolio weights leads to a particular return.

The other point is that decision making appears to be inconsistent with the expected utility model when probabilistic beliefs are formed under relative information deficiencies (Knight [12]), which has motivated the development of non-Bayesian models over the past 40 years, while the mean-variance model continues to serve as a pillar in applied settings, especially in the academic and practical fields of finance.


The double-slit parallel

The double-slit experiment is one of the most fundamental experiments in quantum physics, and probability interference is closely tied to it. The experiment is usually described as follows.

An electron gun produces a beam of electrons, and an electron detector counts the number of electrons hitting a given area. There are two slits of equal width, call them slits A and B, separated by a certain distance from each other. The experiment considers three scenarios:

  • 1. slit A is open and slit B is open;

  • 2. slit A is open and slit B is closed;

  • 3. slit A is closed and slit B is open.

The sum of the probabilities of an electron arriving at the detector when one slit is open and the other closed is not equal to the probability obtained when both slits are open; the experiment is thus paradoxical in terms of classical models that add probabilities of exclusive events. In this parallel, each electron in the double-slit experiment behaves like a decision maker who violates the independence axiom in Allais' experiment [2, 6]. Hence, the superposition principle of quantum mechanics tells us to add amplitudes rather than probabilities, and this results in interference.
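
The following sketch contrasts the two calculations for a single detector position; the complex amplitudes are illustrative numbers, not the output of any physical model.

```python
# Sketch: adding amplitudes versus adding probabilities at one detector point.
import cmath

psi_A = 0.5 * cmath.exp(1j * 0.0)      # amplitude through slit A alone
psi_B = 0.5 * cmath.exp(1j * 2.5)      # amplitude through slit B alone (phase-shifted)

p_A = abs(psi_A) ** 2                  # probability with only slit A open
p_B = abs(psi_B) ** 2                  # probability with only slit B open
p_classical = p_A + p_B                # what adding exclusive events predicts
p_quantum = abs(psi_A + psi_B) ** 2    # what the superposition principle predicts

print(p_classical)   # 0.5
print(p_quantum)     # ~0.0994: shifted by the interference term 2*Re(psi_A * conj(psi_B))
```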

Thus, a new probability framework must be used to define a more accurate Expected utility representation.

PROJECTIVE EXPECTED UTILITY

The following representation owes much to La Mura [6].

Let X be the positive orthant of the unit sphere in R^n, where n is the cardinality of the set of relevant outcomes S := {s_1, s_2, ..., s_n}.

Next, let ⟨·|·⟩ denote the usual inner product in R^n. An orthonormal basis is a set of unit vectors (b_1, ..., b_n) such that ⟨b_i|b_j⟩ = 0 whenever i ≠ j. Then:

(Born’s Rule) There exists an orthonormal basis (z_1, ..., z_n) such that, for all x ∈ X and all s_i ∈ S, any two lotteries are indifferent whenever their risk profiles p_x(z_i) = ⟨x|z_i⟩², i = 1, ..., n, coincide.

(Archimedean) For all x, y, z ∈ X with p_x ≻ p_y ≻ p_z, there exist α, β ∈ (0, 1) such that αp_x + (1−α)p_z ≻ p_y ≻ βp_x + (1−β)p_z.

(Independence) For all x, y, z ∈ X, p_x ≽ p_y if, and only if, αp_x + (1−α)p_z ≽ αp_y + (1−α)p_z for all α ∈ [0, 1].

The previous three axioms are jointly equivalent to the existence of a symmetric matrix U such that u(x) := x′Ux, for all x ∈ X, represents ≽. To see this, note the following.

The Archimedean and independence axioms are jointly equivalent to the existence of a functional u which represents the ordering and is linear in p, i.e.

u(x) = ∑_{i=1}^{n} u(s_i) p_{s_i}(x) = ∑_{i=1}^{n} u(s_i) ⟨x|z_i⟩², where the second equality holds by definition of p as the squared inner product with respect to the preferred basis. Its matrix form is then

u(x)=x′P′DPx=x′Ux,

where P is the projection matrix associated with (z_1, ..., z_n), D is the diagonal matrix with the payoffs on the main diagonal, and U := P′DP is symmetric. Conversely, for any symmetric matrix U there exist a diagonal matrix D and a projection matrix P such that U = P′DP (by the spectral decomposition theorem), and hence x′Ux = x′P′DPx for all x ∈ X.

Thus, the three axioms are jointly equivalent to the existence of a symmetric matrix U such that u(x):=x′Ux represents the preference ordering.
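
A small numerical sketch of this equivalence in R², with an arbitrarily chosen preferred basis and payoffs, is given below: the quadratic form x′Ux and the weighted sum ∑_i u(s_i)⟨x|z_i⟩² coincide.

```python
# Sketch of projective expected utility in R^2: a preferred orthonormal basis
# (z_1, z_2), diagonal payoffs D, and U = P'DP, with P the change-of-basis
# (projection) matrix whose rows are z_1, z_2. All numbers are illustrative.
import numpy as np

theta = 0.3                                    # rotation angle defining the preferred basis
P = np.array([[np.cos(theta), np.sin(theta)],  # row i is the basis vector z_i
              [-np.sin(theta), np.cos(theta)]])
D = np.diag([1.0, 4.0])                        # payoffs u(s_1), u(s_2) of the outcomes
U = P.T @ D @ P                                # symmetric utility matrix

x = np.array([0.6, 0.8])                       # a unit vector, i.e. a "lottery" in X
risk_profile = (P @ x) ** 2                    # p_x(z_i) = <x|z_i>^2, Born's rule
print(risk_profile.sum())                      # 1.0: the squared projections form a distribution
print(x @ U @ x)                               # quadratic form x'Ux
print(np.diag(D) @ risk_profile)               # sum_i u(s_i) <x|z_i>^2 -- same value
```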

APPLICATION: TOWARD A NEW CAPM MODEL

The basic CAPM Model

The capital asset pricing model (CAPM) of Sharpe, Lintner and Mossin was initially a direct consequence of the restatement of expected utility in terms of mean-variance [4]. Here, we recall a simple derivation of the CAPM and briefly highlight what MV brought to this model.

Let there be n risky assets in the market, and let the price of asset j be P_j (j = 1, 2, ..., n). The investor spreads his money between risky assets and risk-free bonds; w_rf and w_M = 1 − w_rf are his respective portfolio weights. His investment in the market (risky assets) is spread across all n risky assets in proportion to their respective prices. Let r_rf denote the return earned from the risk-free asset and r_M the return from the market portfolio of risky assets. Then the return on the market portfolio is r_M = ∑_j P_j r_j / ∑_j P_j, where r_j is the return on asset j.

Suppose that r_rf is less than both expected returns μ(r_M) and μ(r_j), as must be the case to attract risk-averse investors. To increase the expected return of his investment portfolio, a first possibility is for the investor to buy some more of asset j, financed out of his holding of the risk-free asset. The new portfolio weights are then w_M in the market portfolio, δ in security j and w_rf − δ in the risk-free asset. The expected return of this portfolio is w_M μ(r_M) + δμ(r_j) + (w_rf − δ)r_rf and its variance is w_M²σ²(r_M) + δ²σ²(r_j) + 2w_M δ cov(r_j, r_M).

The marginal increase in expected return is therefore δμ(r_j) − δr_rf. Similarly, the marginal increase in portfolio variance is δ²σ²(r_j) + 2w_M δ cov(r_j, r_M), which approaches 2w_M δ cov(r_j, r_M) for small δ. The marginal rate of substitution, i.e. the extra expected return (mean) obtained per unit of added risk (variance), is then [μ(r_j) − r_rf] / [2w_M cov(r_j, r_M)].   (A)

The second way for the investor to increase expected return is to sell weight δ of the risk-free asset and add weight δ to his investment in the market portfolio. By an argument identical to that above, the marginal rate of substitution is then [μ(r_M) − r_rf] / [2w_M σ²(r_M)].   (B)

Setting (A) = (B), as required by the "law of no arbitrage", yields the equation commonly known as the mean–variance CAPM: μ(r_j) = r_rf + [cov(r_j, r_M)/σ²(r_M)] [μ(r_M) − r_rf].   (C)
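
As a quick sanity check of the step from (A) = (B) to (C), the following symbolic sketch (using sympy, with symbol names chosen only for the example) solves the equality for μ(r_j) and confirms that the difference from the right-hand side of (C) is zero.

```python
# Sketch: symbolic check that equating the two marginal rates of substitution
# (A) and (B) recovers the mean-variance CAPM relation (C).
import sympy as sp

mu_j, mu_M, r_rf, w_M, cov_jM, var_M = sp.symbols(
    'mu_j mu_M r_rf w_M cov_jM var_M', positive=True)

A = (mu_j - r_rf) / (2 * w_M * cov_jM)   # MRS from adding weight to asset j
B = (mu_M - r_rf) / (2 * w_M * var_M)    # MRS from adding weight to the market

solution = sp.solve(sp.Eq(A, B), mu_j)[0]
print(sp.simplify(solution - (r_rf + cov_jM / var_M * (mu_M - r_rf))))  # 0
```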

To rewrite equation (C) in terms of asset prices rather than returns, let the return on asset j be defined in terms of its initial price P_j and its period-end price or value V_j by r_j = V_j/P_j − 1.

Hence, by definition:

μ(r_j) = μ(V_j)/P_j − 1, cov(r_j, r_M) = cov(V_j, V_M)/(P_j P_M) and σ(r_M) = σ(V_M)/P_M. Substituting into (C) and rearranging reveals the CAPM as an explicit pricing model:

P_j = {μ(V_j) − β_j[μ(V_M) − P_M(1 + r_rf)]}/(1 + r_rf),

where P_M = ∑_j P_j and β_j = cov(V_j, V_M)/σ²(V_M).
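
The short numerical sketch below applies the pricing form and checks it against the return form (C) on simulated end-of-period values; all inputs (the joint distribution of V_j and V_M, the market price P_M and the risk-free rate) are illustrative assumptions.

```python
# Numeric sketch of the CAPM in price form and in return form (C).
import numpy as np

rng = np.random.default_rng(0)
V_M = rng.normal(110.0, 15.0, 100_000)                    # simulated market end-of-period value
V_j = 20.0 + 0.4 * V_M + rng.normal(0.0, 3.0, 100_000)    # asset j co-moves with the market
r_rf = 0.03                                               # risk-free rate (illustrative)
P_M = 100.0                                               # current market price (given)

# Price form: P_j = {mu(V_j) - beta_j [mu(V_M) - P_M (1 + r_rf)]} / (1 + r_rf)
cV = np.cov(V_j, V_M)
beta_j = cV[0, 1] / cV[1, 1]                              # beta_j = cov(V_j, V_M) / var(V_M)
P_j = (V_j.mean() - beta_j * (V_M.mean() - P_M * (1 + r_rf))) / (1 + r_rf)
print(round(P_j, 2))                                      # implied price of asset j

# Return form (C): mu(r_j) = r_rf + [cov(r_j, r_M) / var(r_M)] * (mu(r_M) - r_rf)
r_j, r_M = V_j / P_j - 1.0, V_M / P_M - 1.0
cr = np.cov(r_j, r_M)
lhs = r_j.mean()
rhs = r_rf + cr[0, 1] / cr[1, 1] * (r_M.mean() - r_rf)
print(round(lhs, 6), round(rhs, 6))                       # the two sides coincide
```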

CAPM model from the projection viewpoint

Given a lottery x ∈ H defined over an n-dimensional Hilbert space and having chosen an (orthonormal) basis {z} = {z_1, ..., z_n} (whose elements will be the random variables of the problem), the projection π_x of x onto a subspace Π ⊂ H satisfies

(x − π_x) ⊥ z_i, for all i ∈ {1, ..., n}.   (1)

In the matrix representation this can be formalized as ⟨x − ω′z | z_i⟩ = 0 for all i  →  Zω = d,   (2)

where [Z]_ij = ⟨z_i|z_j⟩, ω is the vector of coordinates of π_x in the basis {z} (so that π_x = ω′z), [d]_i = ⟨z_i|x⟩, and the inner product of the Hilbert space H is ⟨x_1|x_2⟩ = E[x_1 x_2] (E[·] being the expected value).

The projection π_x of x can therefore be expressed as (assuming Z to be non-singular): π_x = ω′z = d′Z⁻¹z.   (3)

If p is the vector of prices associated with the vectors of the basis {z}, the price of the projection π_x (which we assign by extension to the lottery x) is p_x = p′ω = m′d = E[m′z x] = ⟨µ|x⟩,   (4)

where we have defined the pricing vector m := Z⁻¹p   (5)

and the vector

µ := m′z = p′Z⁻¹z.   (6)

In (4), note the difference from the definition of probability weight in [1].

Introducing, without loss of generality, a single risk-free bond, the pricing problem with price normalized to unity reduces to minimizing the quantity:

(ω_M′E[z] + r_F ω_F)² + ω_M′Cω_M,   (7)

where ω M is the vector of weights of the risky assets,

C := E[(z − E[z])(z − E[z])′]   (8)

is the covariance matrix of the random variables z_i (E[z] denoting the vector of expected values), while ω_F = 1 − ω_M′p and r_F are the weight and the return of the risk-free bond, respectively.

Minimizing (7) gives

ω_M = −r_F (C + yy′)⁻¹ y = −r_F C⁻¹y / (1 + y′C⁻¹y),   (9)

where we have defined y := E[z] − r_F p.   (10)

The pricing vector µ is then proportional to the payoff of this minimizing portfolio:

µ = γ r_F [1 − y′C⁻¹(z − r_F p) / (1 + y′C⁻¹y)].   (11)

The proportionality constant γ can be found by evaluating the pricing vector on the risk-free asset (whose price is normalized to unity), which gives:

⟨µ|1⟩ = E[µ] = 1/r_F = γ r_F / (1 + y′C⁻¹y)  →  γ = (1 + y′C⁻¹y)/r_F².   (12)

The pricing vector µ therefore reduces to

µ = (1/r_F)[1 − y′C⁻¹(z − E[z])].   (13)

From equation (4) it then follows that:

p_x = (1/r_F)(E[x] − cov(y′C⁻¹z, x)).   (14)

Concerning the reduction of (14) to the standard CAPM formula in the MV formulation, see e.g. [8]. For considerations on the reconciliation between the MV and EU frameworks, refer to [4].
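
As a numerical sanity check of this pricing rule, the sketch below prices a lottery both through the second-moment construction (5)-(6) and through the reduced form (13); the simulated payoffs, prices and risk-free payoff are illustrative assumptions, and the two prices agree up to sampling noise.

```python
# Numeric sketch of the projective pricing rule (eqs. 5, 6, 13, 14) on simulated
# data: two risky payoffs z_1, z_2 with given prices, plus a risk-free payoff r_F.
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
z = np.vstack([1.0 + rng.normal(0.05, 0.20, n),      # payoff of asset 1 (illustrative)
               1.0 + rng.normal(0.10, 0.30, n)])     # payoff of asset 2 (illustrative)
p = np.array([0.98, 1.00])                           # their prices (illustrative)
r_F = 1.02                                           # risk-free payoff per unit price

# Pricing via the second-moment matrix: m = Z^{-1} p, mu = m'z (eqs. 5-6).
Z = z @ z.T / n                                      # [Z]_ij = E[z_i z_j]
m = np.linalg.solve(Z, p)
mu = m @ z                                           # realizations of the pricing vector

# Pricing via eq. (13): mu = (1/r_F) * (1 - y' C^{-1} (z - E[z])).
Ez = z.mean(axis=1)
C = np.cov(z)
y = Ez - r_F * p
mu13 = (1.0 - y @ np.linalg.solve(C, z - Ez[:, None])) / r_F

# Price a new lottery x through eq. (4): p_x = E[mu * x].
x = 0.5 * z[0] + 0.5 * z[1] + rng.normal(0.0, 0.1, n)
print(np.mean(mu * x), np.mean(mu13 * x))            # the two prices nearly coincide
```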

DISCUSSION

The obtained results are the same as Luenberger's. One takes the unit ray in the R^n Hilbert space representing the "lottery" (in his terminology, he speaks of random payoffs) and projects it onto the subspace spanned by the chosen basis. Since a weight contributing to the price is associated with every vector of the basis, a price is associated with every projection, which by extension is assigned to the lottery. One then minimizes the norm of the lottery (eq. 7) under the constraints on its projection, and a pricing formula is obtained.

As for the utility function discussed earlier, in that case P identifies a change of basis between an arbitrary basis describing the interests of the "decision maker" and the objective one used by the "modeler". In the latter case, the basis diagonalizes the utility matrix and corresponds to a projection decomposed into orthonormal components.

It is necessary to recall that decision models were initially conceived as tools for seeking logical protocols independently of the representations that agents may give them. Nevertheless, the analysis of the problems raised by agents' decisions could not be reduced to these models, which do not integrate the agents' own representations of information. Hence appeared the necessity of reintroducing the cognitive dimension.

In fact, agents had to be consistent in decision-making, which translated into submission to the constraints of rationality. From then on, the cognitive dimension was at the heart of economists' decision-making models, since it is precisely at the level of the very notions of rationality and coherence, necessary and crucial to any decision, that the two domains, cognitive science and economic analysis, became intertwined.

However, experiments conducted by Kahneman and Tversky showed that the main assumptions on which coherence was based were violated in these empirical tests, which questioned not only the validity of these decision models but also the accepted identity between the logical coherence of a model of choice and the rationality of the agents whose behavior informs this model.

This led them to propose a distinction between two types of operations, "editing" (or "framing") and evaluation, the latter category referring to the rules of logic. Thus, the observed discrepancies between the obtained results and the responses prescribed by the logical protocols can be explained by the distortions that the agents' representations induce on their calculations. Bringing together prospect theory and bounded rationality, the agents' representations then determine the domain and the possibilities of logical calculation available to them, at the same time as they set their limits.

CONCLUSION

Thus, economic agents perform their calculations on the basis of subjective representations of the situation and of the available information. This analysis brings us in a very direct way to the epistemological revision of classical science proposed by quantum physics. Quantum mechanics transcribes an unprecedented relationship between the subject and the nature he seeks to represent. Heisenberg tried to explain it by inviting us to understand quantum theory not as a descriptive theory of elementary particles, but as a theory that restores contextuality to a primordial place in the very definition of a phenomenon.

In our case, we applied projective expected utility to the CAPM model and obtained a more accurate model entailing a richer representation.

Indeed, quantum physics seems to encompass these same operational conditions, "editing" (or framing) and evaluation, within a larger theoretical and mathematical body. The explanation comes from the fact that quantum physics responds structurally to the observation that phenomena are not independent of the order in which contexts are used; in these terms, it is the only theory to take into account the conjunction of contexts, which can hardly be ignored at the microscopic scale, because ignoring it leads to physical inconsistencies.

A closer look at the structure of quantum theory reveals its power and the relevance of its use in the cognitive sciences, and particularly in cognitive economics.
