Optimal solutions for inclusions of geometric Brownian motion type with mean derivatives
Authors: Gliklikh Yu. E., Zheltikova O. O.
Section: Mathematical Modelling
Article in issue: no. 3, vol. 6, 2013.
The idea of mean derivatives of stochastic processes was suggested by E. Nelson in the 1960s. Unlike ordinary derivatives, the mean derivatives are well-posed for a very broad class of stochastic processes, and equations with mean derivatives naturally arise in many mathematical models of physics (in particular, E. Nelson introduced the mean derivatives for the needs of Stochastic Mechanics, a version of quantum mechanics). Inclusions with mean derivatives are a natural generalization of those equations in the case of feedback control or of motion in complicated media. The paper is devoted to a brief introduction into the theory of equations and inclusions with mean derivatives and to the investigation of a special type of such inclusions called inclusions of geometric Brownian motion type. The existence of optimal solutions maximizing a certain cost criterion is proved.
Keywords: mean derivatives, stochastic differential inclusions, optimal solution.
Short address: https://sciup.org/147159223
IDR: 147159223
The notion of mean derivatives (forward, backward, symmetric and antisymmetric) was introduced by Edward Nelson in the 1960s in his construction of the so-called Stochastic Mechanics, a version of Quantum Mechanics ([1, 2, 3]). After that a lot of other applications of equations with mean derivatives to various problems of mathematical physics were found (see, e.g., [4]). Inclusions with mean derivatives are a natural generalization of those equations in the case of feedback control or of motion in complicated media.
It should be pointed out that the classical Nelson mean derivatives give information about the drift of a stochastic process. In [5], as a slight modification of some of Nelson's constructions, a new sort of mean derivative, called quadratic (it is responsible for the diffusion term of a process), was introduced so that, strictly speaking, it became possible to find processes having given mean derivatives.
The paper contains a brief introduction to the general theory of stochastic differential equations and inclusions given in terms of mean derivatives, and new applications. We investigate a special class of inclusions with mean derivatives called inclusions of geometric Brownian motion type, introduced previously in [6]. We show that under some natural conditions, among the solutions of such an inclusion there is an optimal one maximizing (or minimizing) a certain cost criterion. For definiteness we deal with the problem of maximizing the criterion, since the minimizing problem is quite analogous.
Some remarks on notation. In this paper we deal with equations and inclusions in the linear space R^n, for which we always use the coordinate presentation of vectors and linear operators. Vectors in R^n are considered as columns. If X is such a vector, the transposed row vector is denoted by X^*. Linear operators from R^n to R^n are represented as n × n matrices, and the symbol * means transposition of a matrix (passage to the matrix of the conjugate operator). The space of n × n matrices is denoted by L(R^n, R^n).
By S(n) we denote the linear space of symmetric n × n matrices, which is a subspace of L(R^n, R^n). The symbol S_+(n) denotes the set of positive definite symmetric n × n matrices, which is a convex open set in S(n). Its closure, i.e., the set of positive semi-definite symmetric n × n matrices, is denoted by S̄_+(n).
Everywhere below, for a set B in R^n or in L(R^n, R^n) we use the norm introduced by the usual formula ||B|| = sup_{y ∈ B} ||y||.
Everywhere we use Einstein’s summation convention with respect to a shared upper and lower index.
For the sake of simplicity we consider equations, their solutions and other objects on a finite time interval t ∈ [0, T].
We refer the reader to [7, 4] for details about set-valued mappings; to [4, 8, 9] for details about stochastic differential equations and weak convergence of probability measures; to [10] for details about weak convergence in Hilbert spaces; and to [11] for details about conditional expectation.
The research is supported in part by RFBR Grants 12-01-00183 and 13-01-00041.
1. Introduction to equations and inclusions with mean derivatives
Consider a stochastic process ξ(t) in R^n, t ∈ [0, T], given on a certain probability space (Ω, F, P) and such that ξ(t) is an L_1-random element for all t. It is known that such a process determines three families of σ-subalgebras of the σ-algebra F:
(i) «the past» P_t^ξ generated by preimages of Borel sets from R^n under all mappings ξ(s): Ω → R^n for 0 ≤ s ≤ t;
(ii) «the future» F_t^ξ generated by preimages of Borel sets from R^n under all mappings ξ(s): Ω → R^n for t ≤ s ≤ T;
(iii) «the present» («now») N_t^ξ generated by preimages of Borel sets from R^n under the mapping ξ(t): Ω → R^n.
We suppose all the above families to be complete, i.e., to contain all sets of probability zero.
For the sake of convenience we denote by E_t^ξ(·) the conditional expectation E(· | N_t^ξ) with respect to the «present» N_t^ξ for ξ(t).
Following [1, 2, 3], introduce the following notions of forward and backward mean derivatives.
Definition 1. (i) The forward mean derivative D£ ( t ) of £ ( t ) at the time instant t is an 1 1 -random element of the form
D£ (t )= lim E (£ (t + 4*) ~ £(t)),(1)
4t→+0 t 4t where the limit is supposed to exist in Li(Q, F, P) and 4t ^ +0 means that 4t ^ 0 and 4t > 0. (ii) The backward mean derivative D*£(t) of £(t) at t is the Li-random element
D.s (t ) = A li^ E (x V, where (as well as in (i)) the limit is assumed to exist in L 1(Q, F, P) and At ^ +0 means that At ^ 0 and At > 0.
Remark 1. If ξ(t) is a Markov process then evidently E_t^ξ can be replaced by E(· | P_t^ξ) in (1) and by E(· | F_t^ξ) in (2). In Nelson's original works there were two versions of the definition of mean derivatives: as in our Definition 1, and with conditional expectations with respect to the «past» and «future» as above, which coincide for Markov processes. We shall not suppose ξ(t) to be a Markov process and give the definition with conditional expectation with respect to the «present», taking into account the physical principle of locality: the derivative should be determined by the present state of the system, not by its past or future.
Following [5], we introduce the differential operator D_2 that differentiates an L_1-random process ξ(t), t ∈ [0, T], according to the rule
\[
D_2\xi(t) = \lim_{\Delta t \to +0} E_t^{\xi}\Big(\frac{(\xi(t+\Delta t) - \xi(t))(\xi(t+\Delta t) - \xi(t))^*}{\Delta t}\Big), \tag{3}
\]
where (ξ(t+Δt) − ξ(t)) is considered as a column vector (a vector in R^n), (ξ(t+Δt) − ξ(t))^* is a row vector (the transposed, or conjugate, vector) and the limit is supposed to exist in L_1(Ω, F, P). We emphasize that the matrix product of a column on the left and a row on the right is a matrix, so that D_2ξ(t) is a symmetric positive semi-definite matrix function on [0, T] × R^n. We call D_2 the quadratic mean derivative.
It is shown (see, e.g., [5, 4]) that for an Itô diffusion-type process ξ(t) = ξ_0 + ∫_0^t a(s) ds + ∫_0^t A(s) dw(s) the formulae Dξ(t) = E_t^ξ(a(t)) and D_2ξ(t) = E_t^ξ(A(t)A^*(t)) hold (recall that by the definition of a diffusion-type process, see, e.g., [9], here w(t) is adapted to the «past» of ξ(t); such a process is a solution of a diffusion-type equation, see [9]). If ξ(t) is a diffusion process, i.e., a solution of a stochastic differential equation ξ(t) = ξ_0 + ∫_0^t a(s, ξ(s)) ds + ∫_0^t A(s, ξ(s)) dw(s) (a particular case of diffusion-type processes), then Dξ(t) = a(t, ξ(t)) and D_2ξ(t) = A(t, ξ(t)) A^*(t, ξ(t)). Note that the quadratic derivative takes values in S̄_+(n).
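As an illustration of these formulae (an addition to the original text), the forward and quadratic mean derivatives of a diffusion process can be estimated numerically: simulate many trajectories, approximate the conditional expectation E_t^ξ by averaging over the trajectories whose current value falls into a small bin, and form the difference quotients from Definition 1 and (3). A minimal Python sketch, with the one-dimensional coefficients a(t, x) = −x and A(t, x) = 0.5 chosen purely for the example:

```python
import numpy as np

# Illustrative 1-D diffusion d xi = a(t, xi) dt + A(t, xi) dw with
# a(t, x) = -x and A(t, x) = 0.5 (these coefficients are assumptions of the sketch).
def a(t, x):
    return -x

def A(t, x):
    return 0.5

rng = np.random.default_rng(0)
n_paths, dt, t = 200_000, 1e-3, 0.5
xi = np.full(n_paths, 1.0)

# Euler-Maruyama up to time t.
for k in range(int(t / dt)):
    s = k * dt
    xi = xi + a(s, xi) * dt + A(s, xi) * np.sqrt(dt) * rng.standard_normal(n_paths)

# One more step gives the increment xi(t + dt) - xi(t).
incr = a(t, xi) * dt + A(t, xi) * np.sqrt(dt) * rng.standard_normal(n_paths)

# Approximate conditioning on the "present" N_t by binning the current value.
center, width = np.exp(-t), 0.05          # a bin around the mean of xi(t)
mask = np.abs(xi - center) < width

D_estimate  = incr[mask].mean() / dt            # roughly a(t, center) = -center
D2_estimate = (incr[mask] ** 2).mean() / dt     # roughly A(t, center)^2 = 0.25
print(D_estimate, D2_estimate)
```

Shrinking the bin and the time step (at the cost of more trajectories) drives the estimates toward a(t, x) and A(t, x)^2, in agreement with the formulae above.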
Let Borel measurable mappings a(t, x) and b(t, x) from [0, T] × R^n to R^n and to S_+(n), respectively, be given. We call the system of the form
\[
\begin{cases}
D\xi(t) = a(t, \xi(t)),\\
D_2\xi(t) = b(t, \xi(t)),
\end{cases} \tag{4}
\]
a first order differential equation with forward mean derivatives.

Let a(t, x) and b(t, x) be set-valued mappings from [0, T] × R^n to R^n and to S_+(n), respectively. The system of the form
\[
\begin{cases}
D\xi(t) \in a(t, \xi(t)),\\
D_2\xi(t) \in b(t, \xi(t)),
\end{cases} \tag{5}
\]
is called a first order differential inclusion with forward mean derivatives.
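An elementary example (added here for illustration): the Wiener process w(t) itself solves an equation of form (4). Indeed, the increment w(t + Δt) − w(t) is independent of the «present» N_t^w, has zero mean and covariance matrix Δt·I, so
\[
Dw(t) = 0, \qquad D_2 w(t) = I,
\]
and hence w(t) satisfies (4) with a ≡ 0 and b ≡ I.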
Definition 2. We say that (5) has a solution on [0, T] with initial condition ξ_0 ∈ R^n if there exist a probability space (Ω, F, P) and a process ξ(t) given on (Ω, F, P) and taking values in R^n such that ξ(0) = ξ_0 and P-a.s. for almost all t inclusion (5) is satisfied. For equation (4) the notion of a solution is quite analogous.
Note that for simplicity here we consider only deterministic initial conditions, i.e., ξ_0 in Definition 2 is a point in R^n.
Recall that for a mapping F: X → Y of a metric space X to a metric space Y its graph is the set of pairs {(x, F(x)) | x ∈ X} in X × Y. Note that for a set-valued F the value F(x) is a set in Y.
For considering upper semicontinuous mean forward differential inclusions we need to recall the following
Definition 3. Let X and Y be metric spaces. For a given ε > 0, a continuous single-valued mapping f_ε: X → Y is called an ε-approximation of the set-valued mapping F: X → Y if the graph of f_ε belongs to the ε-neighbourhood of the graph of F.
It is known (see, e.g., [7]) that for upper semicontinuous set-valued mappings with convex closed images in normed linear spaces such ε-approximations exist for every ε > 0.
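A simple illustration (not from the original text): the convexified sign mapping F(x) = {−1} for x < 0, F(x) = {1} for x > 0 and F(0) = [−1, 1] is upper semicontinuous with convex closed images, and the continuous function f_ε(x) = max(−1, min(1, x/ε)) is an ε-approximation of it: for |x| ≥ ε the point (x, f_ε(x)) lies on the graph of F, while for |x| < ε it is within distance ε of the point (0, f_ε(x)) of that graph. A small Python sketch of this example:

```python
import numpy as np

def F(x):
    """Set-valued convexified sign: returns the interval [lo, hi] of values at x."""
    if x < 0:
        return (-1.0, -1.0)
    if x > 0:
        return (1.0, 1.0)
    return (-1.0, 1.0)

def f_eps(x, eps):
    """Continuous single-valued eps-approximation of F."""
    return float(np.clip(x / eps, -1.0, 1.0))

eps = 0.1
print(F(-0.5), f_eps(-0.5, eps))   # they coincide for |x| >= eps
print(F(0.0),  f_eps(0.02, eps))   # at x = 0.02 < eps the value 0.2 lies in F(0) = [-1, 1]
```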
Denote by Ω̃ the Banach space C^0([0, T], R^n) of continuous curves in R^n given on [0, T], with the usual uniform norm. Introduce on Ω̃ the σ-algebra F̃ generated by cylinder sets. Everywhere below we use this notation. Recall that F̃ is the Borel σ-algebra on Ω̃. Note that an elementary event in Ω̃ is a curve, which we denote by x(·); its value at t ∈ [0, T] is denoted by x(t).

It is a well-known fact that every stochastic process η(t) in R^n with continuous sample paths, given on a certain probability space (Ω, F, P) for t ∈ [0, T], is a measurable mapping from (Ω, F) to (Ω̃, F̃). Thus it determines a measure µ_η on (Ω̃, F̃) by the standard formula µ_η(A) = P(η^{−1}(A)) for every A ∈ F̃.

There is a standard process c(t, x(·)) in R^n given on (Ω̃, F̃). It is the so-called «coordinate process» defined by the formula c(t, x(·)) = x(t). The coordinate process on the probability space (Ω̃, F̃, µ_η) is the standard description of the process η(t) on this probability space. See details, e.g., in [9, 4].

We shall look for solutions of (5) with continuous sample paths, and mainly a solution will be described as the coordinate process on Ω̃, on which the corresponding measure is to be constructed.
Definition 4. A perfect solution of (5) is a stochastic process with continuous sample paths such that it is a solution in the sense of Definition 2 and the measure corresponding to it on the space of continuous curves is a weak limit of measures generated by solutions of a sequence of diffusion-type Itô equations with continuous coefficients.
Lemma 1. Let b(t, x) be a jointly continuous (measurable, smooth) mapping from [0, T] × R^n to S_+(n). Then there exists a jointly continuous (measurable, smooth, respectively) mapping A(t, x) from [0, T] × R^n to L(R^n, R^n) such that for all t, x the equality A(t, x) A^*(t, x) = b(t, x) holds.
The proof is available in [5, Lemma 2.2].
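For a numerical illustration of Lemma 1 (an addition to the text; the construction in [5] may differ), one concrete choice of A(t, x) is the symmetric positive semi-definite square root of b(t, x), computed pointwise from the eigenvalue decomposition; it clearly satisfies AA^* = b. A minimal numpy sketch:

```python
import numpy as np

def sym_sqrt(b):
    """Symmetric positive semi-definite square root A of a symmetric PSD matrix b,
    so that A @ A.T == b (up to round-off)."""
    eigvals, eigvecs = np.linalg.eigh(b)       # spectral decomposition of the symmetric b
    eigvals = np.clip(eigvals, 0.0, None)      # guard against tiny negative round-off
    return (eigvecs * np.sqrt(eigvals)) @ eigvecs.T

b = np.array([[2.0, 1.0],
              [1.0, 2.0]])
A = sym_sqrt(b)
print(np.allclose(A @ A.T, b))                 # True
```

This particular choice depends continuously on b and smoothly on strictly positive definite b, in line with the continuity and smoothness claims of the Lemma.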
Below we deal with a sequence of processes ξ_i(t) (solutions of a sequence of stochastic differential equations in R^n) such that the estimate E(sup_{0 ≤ t ≤ T} ||ξ_i(t)||²) < C_2 holds for all i with the same constant C_2 > 0 (see [9, Section III.2, Lemma 1]). In the presentation via the coordinate process the latter inequality means that
\[
\int_{\tilde\Omega} \Big( \sup_{0 \le t \le T} \|x(t)\|^2 \Big)\, d\mu_i < C_2 \tag{6}
\]
for all measures µ_i generated by the processes ξ_i(t) as above. For such processes we have to use the following technical statement.

Lemma 2. Consider a sequence of probability measures µ_i on (Ω̃, F̃) such that (6) holds for all i. Let the measures µ_i weakly converge to a certain measure µ as i → ∞. Introduce the measures ν_i by the relations dν_i = (1 + ||x(·)||_{C^0}) dµ_i and the measures ν_i^1 by the relations dν_i^1 = (1 + ||x(·)||²_{C^0}) dµ_i. Then the measures ν_i weakly converge to the measure ν defined by the relation dν = (1 + ||x(·)||_{C^0}) dµ and the measures ν_i^1 weakly converge to the measure ν^1 defined by the relation dν^1 = (1 + ||x(·)||²_{C^0}) dµ.

Indeed, specify an arbitrary bounded continuous function f: Ω̃ → R. The assertion of Lemma 2 follows from the fact that by (6) the random variables f(ξ_k)(1 + ||ξ_k||_{C^0}) are uniformly integrable, as well as f(ξ_k)(1 + ||ξ_k||²_{C^0}) (see, e.g., [12, Lemma 8]).
Corollary 1. Let the measures µ_k and µ be as in Lemma 2. Let b: [0, T] × Ω̃ → R^n be a continuous vector function such that ||b(t, x(·))|| ≤ K(1 + ||x(·)||_{C^0}) and let b^1 be an analogous function such that ||b^1(t, x(·))|| ≤ K(1 + ||x(·)||²_{C^0}). Then
(i) lim_{k→∞} ∫_{Ω̃} b(t, x(·)) dµ_k = ∫_{Ω̃} b(t, x(·)) dµ;
(ii) lim_{k→∞} ∫_{Ω̃} b^1(t, x(·)) dµ_k = ∫_{Ω̃} b^1(t, x(·)) dµ.
2. Equations and inclusions with mean derivatives of geometric Brownian motion type
This section presents a brief description and a slight modification of the material suggested in [6]. We deal with the following generalization of the so-called geometric Brownian motion, namely with a process S(t) that satisfies the system of stochastic differential equations
\[
dS^\alpha(t) = S^\alpha(t)\, a^\alpha(t; S^1(t), \ldots, S^n(t))\, dt + S^\alpha(t)\, A^\alpha_\beta(t; S^1(t), \ldots, S^n(t))\, dw^\beta, \tag{7}
\]
where the w^β are independent Wiener processes in R^1 that together form a Wiener process in R^n, a(t, x) is a vector field on R^n, A(t, x) is a mapping from [0, T] × R^n to the space of linear operators L(R^n, R^n) and (A^α_β) denotes the matrix of the operator A. Note that the (standard) geometric Brownian motion satisfies (7) in the case where a(t) and A(t) depend only on time t (i.e., do not depend on the point x ∈ R^n).
The processes satisfying (7) arise in various stochastic models (e.g., in economics).
Suppose that the coordinates S^α of the solution of (7) are positive for all t. Then by the Itô formula the process ξ(t) = log S(t) = (log S^1(t), ..., log S^n(t)) satisfies the equation
\[
d\xi^\alpha(t) = \Big( a^\alpha - \tfrac{1}{2}\, A^\alpha_\beta \delta^{\beta\gamma} A^\alpha_\gamma \Big)(t, \xi(t))\, dt + A^\alpha_\beta(t, \xi(t))\, dw^\beta(t), \tag{8}
\]
since dw^α dw^β = δ^{αβ} dt (here δ^{αβ} is the Kronecker symbol: δ^{αα} = 1, δ^{αβ} = 0 for α ≠ β).
Analogously, from the Itô formula we derive that if a process ξ(t) satisfies (8), then the process S(t) = exp ξ(t) = (exp ξ^1(t), ..., exp ξ^n(t)) satisfies (7). Note that in this case the coordinates S^α are positive.
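The passage between (7) and (8) can be checked numerically (an illustration added here, with all concrete coefficients chosen only for the example): simulate (7) by the Euler–Maruyama scheme and, driven by the same Wiener increments, simulate (8) with the same coefficients evaluated at S = exp ξ; the exponential of the second trajectory should reproduce the first one up to the discretization error.

```python
import numpy as np

rng = np.random.default_rng(1)
n, T, dt = 2, 1.0, 1e-4
steps = int(T / dt)

def a(t, s):                       # drift coefficients of (7), illustrative choice
    return np.array([0.05, 0.02]) - 0.1 * s

def A(t, s):                       # diffusion operator of (7), illustrative choice
    return np.array([[0.3, 0.1],
                     [0.0, 0.2]]) * (1.0 + 0.1 * np.tanh(s))

S  = np.array([1.0, 2.0])          # Euler-Maruyama scheme for (7)
xi = np.log(S)                     # Euler-Maruyama scheme for the logarithm, equation (8)
t = 0.0
for _ in range(steps):
    dw = np.sqrt(dt) * rng.standard_normal(n)

    aS, AS = a(t, S), A(t, S)                       # coefficients at the point S(t)
    S = S + S * aS * dt + S * (AS @ dw)             # scheme for (7)

    E = np.exp(xi)                                  # the same coefficients at exp(xi(t))
    aE, AE = a(t, E), A(t, E)
    xi = xi + (aE - 0.5 * np.sum(AE**2, axis=1)) * dt + AE @ dw   # scheme for (8)

    t += dt

print(S)            # terminal value from (7)
print(np.exp(xi))   # exp of the terminal value from (8): close to the line above
```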
Denote by B the symmetric positive semi-definite matrix AA^* (where A^* is the operator conjugate to A, as above) and by diag B the vector constructed from the diagonal elements of the matrix B. Note that A^α_β δ^{βγ} A^α_γ is the α-th element B^{αα} of diag B. If a process satisfies (8), it also satisfies the following equation with mean derivatives:
\[
\begin{cases}
D\xi(t) = \big( a - \tfrac{1}{2}\,\mathrm{diag}\, B \big)(t, \xi(t)),\\
D_2\xi(t) = B(t, \xi(t)),
\end{cases} \tag{9}
\]
or, equivalently,
\[
\begin{cases}
D\xi(t) + \tfrac{1}{2}\,\mathrm{diag}\, D_2\xi(t) = a(t, \xi(t)),\\
D_2\xi(t) = B(t, \xi(t)).
\end{cases} \tag{10}
\]
Let ξ(t) be a solution of equation (9) (or (10)). We call it the logarithm of the process S(t) = exp ξ(t) = (e^{ξ^1(t)}, ..., e^{ξ^n(t)}).

Note that if equation (9) (or (10)) is given a priori with some B ∈ S_+(n), the process S(t) = exp(ξ(t)) may not satisfy (7). Thus the models based on equations (9) or (10) cover a broader class of problems than those based on (8).
Consider set-valued mappings a: [0, T] × R^n → R^n and B: [0, T] × R^n → S_+(n) and the following inclusion with mean derivatives:
\[
\begin{cases}
D\xi(t) + \tfrac{1}{2}\,\mathrm{diag}\, D_2\xi(t) \in a(t, \xi(t)),\\
D_2\xi(t) \in B(t, \xi(t)).
\end{cases} \tag{11}
\]
Inclusion (11) is called an inclusion of geometric Brownian motion type. Such an inclusion can be constructed from an equation of the form (10) with control in the usual way. Let the right-hand sides a(t, x, u) and B(t, x, u) of (10) depend on a controlling parameter u and let U(t, x) be the set of possible values of the controlling parameter at (t, x); then, on constructing a(t, x) = cl ⋃_{u ∈ U(t,x)} a(t, x, u) and B(t, x) = cl ⋃_{u ∈ U(t,x)} B(t, x, u), where cl denotes the convex closure, we obtain inclusion (11).
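As a small illustration of this construction (an addition to the text; the controlled drift a(t, x, u) and the control set U below are invented only for the example), when the control set at (t, x) is finite, the convex closure of the attainable drift values is just their convex hull, which can be computed explicitly:

```python
import numpy as np
from scipy.spatial import ConvexHull

def a_controlled(t, x, u):
    """Illustrative controlled drift: the control u rotates a unit push added to -x."""
    return -x + np.array([np.cos(u), np.sin(u)])

# Finite set of admissible control values (an assumption of this sketch).
U = np.linspace(0.0, 2.0 * np.pi, 12, endpoint=False)

def drift_set_vertices(t, x):
    """Vertices of a(t, x) = cl conv { a(t, x, u) : u in U }."""
    pts = np.array([a_controlled(t, x, u) for u in U])
    return pts[ConvexHull(pts).vertices]

print(drift_set_vertices(0.0, np.array([0.5, -0.5])))
```

The same recipe applied to B(t, x, u) yields the set-valued diffusion part; the resulting a and B then enter (11) as above.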
Below we describe conditions under which solutions of (11) do exist, and prove the existence of an optimal solution maximizing a certain cost criterion.

Note that inclusion (11) has a form analogous to equation (10). An inclusion given in a form analogous to (9) would be ill-posed.
3. The main results
Theorem 1. Specify an arbitrary initial value ξ_0 ∈ R^n. Let a(t, x) be an upper semicontinuous set-valued mapping with closed convex images from [0, T] × R^n to R^n and let it satisfy the estimate
\[
\|a(t, x)\|^2 \le K(1 + \|x\|^2) \tag{12}
\]
for some K > 0.

Let B(t, x) be an upper semicontinuous set-valued mapping with closed convex images from [0, T] × R^n to S_+(n) such that for each B(t, x) ∈ B(t, x) the estimate
\[
\|B(t, x)\| \le K(1 + \|x\|) \tag{13}
\]
takes place for some K > 0.
Then for every sequence ε_i → 0, ε_i > 0, each pair of sequences a_i(t, x) and B_i(t, x) of ε_i-approximations of a(t, x) and B(t, x), respectively, generates a perfect solution of (11) with initial condition ξ_0.
Proof. Specify a sequence ε_i → 0 and sequences of ε_i-approximations a_i(t, x) and B_i(t, x) as in the hypothesis of the Theorem. Without loss of generality we may suppose that the B_i(t, x) are (ε_i/2)-approximations of B(t, x).
As the norm in S(n) we take the restriction to S(n) of the Euclidean norm (i.e., the square root of the sum of squares of all elements of a matrix) in the space L(R^n, R^n) isomorphic to R^{n²}. Without loss of generality we suppose that (13) is valid for this norm.
All a_i(t, x) satisfy (12) with a certain constant that is bigger than K (see the hypothesis); nevertheless we keep the notation K for this constant. Since 1 + ||x||² ≤ (1 + ||x||)², for a_i(t, x) the estimate
\[
\|a_i(t, x)\| \le K(1 + \|x\|) \tag{14}
\]
is valid as well.

The approximations B_i(t, x) take values in S̄_+(n). Introduce B̃_i(t, x) = B_i(t, x) + (ε_i/2) I, where I is the unit matrix. Immediately from the construction it follows that B̃_i(t, x) for every i is a continuous ε_i-approximation of B(t, x) and that at each (t, x) it belongs to S_+(n), i.e., it is strictly positive definite. Besides, the B̃_i(t, x) satisfy (13), where the constant K > 0 is bigger than the constant from the hypothesis of the Theorem, but nevertheless we keep the notation K for it. By Lemma 1 there exist continuous fields A_i(t, x) such that B̃_i(t, x) = A_i(t, x) A_i^*(t, x). Directly from the definition of the trace we obtain that tr B̃_i(t, x) is equal to the sum of squares of all elements of A_i(t, x), i.e., it is the square of the Euclidean norm of A_i(t, x) in L(R^n, R^n). Hence from (13) and from the obvious inequality (1 + ||x||) ≤ (1 + ||x||)² it follows that all A_i(t, x) satisfy
\[
\|A_i(t, x)\| \le K(1 + \|x\|). \tag{15}
\]

Without loss of generality we can suppose that the above-mentioned continuous approximations a_i and A_i are smooth. Indeed, if a certain a_i is not smooth, we can approximate it by a sequence of smooth mappings a_{ij} that converges to a_i as j → ∞ with respect to the uniform norm on compacts. Hence for j large enough the graph of a_{ij} belongs to the ε_i-neighborhood of the graph of a; thus a_{ij} is an ε_i-approximation of a. Then we replace the continuous a_i by this a_{ij}, i.e., take it as the new a_i. For A_i the arguments are the same. Note that after this replacement estimates (14) and (15) remain true.

Consider the sequence of stochastic differential equations
\[
d\xi_i(t) = \big(a_i - \tfrac{1}{2}\,\mathrm{diag}\,\tilde B_i\big)(t, \xi_i(t))\, dt + A_i(t, \xi_i(t))\, dw(t), \tag{16}
\]
where w(t) is a Wiener process in R^n. Note that every (a_i − ½ diag B̃_i)(t, x) is smooth as the difference of smooth mappings. Consider ||(a_i − ½ diag B̃_i)(t, x)|| and show that it satisfies an estimate of (14) type with a constant greater than K. Indeed,
\[
\big\| \big(a_i - \tfrac{1}{2}\,\mathrm{diag}\,\tilde B_i\big)(t, x) \big\| \le \|a_i(t, x)\| + \tfrac{1}{2}\,\|\mathrm{diag}\,\tilde B_i(t, x)\| \le K(1 + \|x\|) + K_1\,\mathrm{tr}\,\tilde B_i(t, x) \le K_2(1 + \|x\|). \tag{17}
\]
Thus the coefficients of equations (16) are smooth and satisfy estimates (17) and (15). So every equation of this sequence has a unique strong solution ξ_i(t) well-defined on the entire interval [0, T] (see [9]). In particular, this means that each process ξ_i can be given on every appropriate probability space, where w(t) is adapted to its own «past».

Consider the measure space (Ω̃, F̃) introduced in Section 1. Denote by P_t the σ-subalgebra of F̃ generated by cylinder sets with bases on [0, t], and by N_t the σ-algebra generated by the preimages of Borel sets in R^n under the mapping x(·) → x(t). Since all the solutions ξ_i(t) are strong, they can all be defined on a certain single probability space (Ω, F, P) and so they can all be considered as measurable mappings from (Ω, F) to (Ω̃, F̃) (see Section 1). On the measure space ([0, T], B), where B is the Borel σ-algebra, we denote by λ_1 the Lebesgue measure. As mentioned in Section 1, every process ξ_i(t) determines a measure µ_i on (Ω̃, F̃), and on the probability space (Ω̃, F̃, µ_i) the coordinate process represents ξ_i(t).

Since all (a_i − ½ diag B̃_i)(t, x) satisfy (17) and all A_i(t, x) satisfy (15) with the same K (see above), equations (16) satisfy the hypothesis of [9, Lemma III.2.1] and the remark after it for all i, and so the estimate
\[
E\big( \sup_{0 \le t \le T} \|\xi_i(t)\|^2 \big) < C_2 \tag{18}
\]
is valid for all ξ_i, where C_2 depends only on the interval [0, T] and on K from (17) and (15).

Remark 2. In the proof of [9, Lemma III.2.1] estimate (18) is derived from the relation E(sup_{t ≤ T} ||ξ_i(t)||²) < K(1 + ∫_0^T E(sup_{u ≤ s} ||ξ_i(u)||²) ds). Since the solutions are strong, they can be given on various probability spaces and the latter inequality is true on all such probability spaces. In particular, it is true on the probability space (Ω̃, F̃, µ_i), where the solution is described as the coordinate process. This means that (6) is valid for all i for some C_2 depending only on the interval [0, T] and on K from (17) and (15).

In addition, by the corollary in Section III.2 of [9] the set of measures {µ_i} is weakly compact. Thus for a given sequence of approximations a_i and A_i, from the sequence of corresponding measures µ_i one can select a subsequence that weakly converges to a certain measure µ. For simplicity of presentation we suppose that the sequence µ_i itself weakly converges to µ. Denote by ξ(t) the coordinate process on the probability space (Ω̃, F̃, µ). Note that P_t is the «past» and N_t is the «present» σ-algebra for ξ(t).

Lemma 3. ∫_{Ω̃} ( sup_{0 ≤ t ≤ T} ||x(t)||² ) dµ < C_2, where the constant C_2 > 0 depends only on the interval [0, T] and on K from (17) and (15).

Since the sequence of measures µ_i weakly converges to µ, Lemma 3 follows directly from Remark 2 and Corollary 1 (ii).

Let us continue the proof of the Theorem. From the construction we derive that
\[
\|\tilde B_i(t, x(t))\|^2 = \|A_i(t, x(t)) A_i^*(t, x(t))\|^2 \le \|A_i(t, x(t))\|^2 \,\|A_i^*(t, x(t))\|^2 = \big(\mathrm{tr}\, \tilde B_i(t, x(t))\big)^2 \le K_1 (1 + \|x(t)\|)^2 \le K_2 (1 + \|x(t)\|^2).
\]
Since ||B̃_i(t, x)||² ≤ K_2(1 + ||x||²), taking into account Lemma 3, we see that
\[
\int_{\tilde\Omega \times [0,T]} \|\tilde B_i(t, x(t))\|^2 \, d\mu \times d\lambda_1 < K_3. \tag{19}
\]
Introduce the mappings B̃_i: [0, T] × Ω̃ → S_+(n) by the formula B̃_i(t, x(·)) = B̃_i(t, x(t)). Then it follows from (19) that the set of all B̃_i is uniformly bounded in the Hilbert space L_2([0, T] × Ω̃, S(n)) defined with respect to the measures λ_1 on [0, T] and µ on Ω̃. Hence this set is weakly relatively compact in L_2([0, T] × Ω̃, S(n)) and so it is possible to select a subsequence that weakly converges in L_2([0, T] × Ω̃, S(n)) to a certain B̂: [0, T] × Ω̃ → S̄_+(n). For simplicity, let the sequence B̃_i(t, x(·)) itself converge to B̂. Introduce also B̄(t, x(·)) = E(B̂ | N_t) on the probability space (Ω̃, F̃, µ), x(·) ∈ Ω̃. From the definition of weak convergence and the presentation of a linear functional in L_2 it immediately follows that diag B̃_i(t, x(·)) weakly converges to diag B̂(t, x(·)) in L_2([0, T] × Ω̃, R^n).

Since ||a_i(t, x(t))||² ≤ K(1 + ||x(t)||²) by (12), then, taking into account Lemma 3, we obtain that for some K_1 > 0
\[
\int_{[0,T] \times \tilde\Omega} \|a_i(t, x(t))\|^2 \, d\lambda_1 \times d\mu < K_1. \tag{20}
\]
Introduce the mappings a_i: [0, T] × Ω̃ → R^n by the formula a_i(t, x(·)) = a_i(t, x(t)). Then from (20) it follows that the set of all a_i is uniformly bounded with respect to the norm of the Hilbert space L_2([0, T] × Ω̃, R^n) defined with respect to the measures λ_1 on [0, T] and µ on Ω̃. Hence the set of all a_i is weakly relatively compact in L_2([0, T] × Ω̃, R^n) and so it is possible to select a subsequence that converges weakly in L_2([0, T] × Ω̃, R^n) to a certain â: [0, T] × Ω̃ → R^n. For simplicity, let a_i(t, x(·)) itself be this subsequence. Introduce also ā(t, x(·)) = E(â | N_t) on the probability space (Ω̃, F̃, µ), x(·) ∈ Ω̃.

Immediately from the definition of weak convergence and from the above arguments we obtain that (a_i − ½ diag B̃_i)(t, x(·)) weakly converges to (â − ½ diag B̂)(t, x(·)) in L_2([0, T] × Ω̃, R^n).
By Mazur's lemma (see [13]), for the weakly convergent sequence (a_i − ½ diag B̃_i)(t, x(·)) there exists a sequence of finite convex combinations a_k(t, x(·)) of its elements that converges in the same space strongly (in norm). The convex combinations have the form
\[
a_k(t, x(\cdot)) = \sum_{i=j(k)}^{n(k)} \beta_i \big( (a_i - \tfrac{1}{2}\,\mathrm{diag}\,\tilde B_i)(t, x(\cdot)) \big),
\]
where β_i > 0, i = j(k), ..., n(k), j(k) → ∞ as k → ∞ and Σ_{i=j(k)}^{n(k)} β_i = 1.

Remark 3. Above we have introduced â(t, x(·)) as a weak limit of the a_i(t, x(·)) in L_2([0, T] × Ω̃, R^n) equipped with the measures λ_1 on [0, T] and µ on Ω̃. By Mazur's lemma, as well as above, it is a strong limit of a corresponding sequence of convex combinations of the a_i (different from that for (a_i − ½ diag B̃_i)). But since the images of a are convex, those convex combinations are ε_j-approximations of a for some sequence ε_j → 0. Thus µ-a.s. ā(t, ξ(t)) is a selector of a(t, ξ(t)), measurable with respect to N_t. The same arguments show that µ-a.s. B̄(t, ξ(t)) is a selector of B(t, ξ(t)), measurable with respect to N_t.

Note that by construction and by the properties of conditional expectation the sequence a_k(t, x(·)) converges to (ā − ½ diag B̄)(t, x(·)) strongly in L_2([0, T] × Ω̃, R^n), where Ω̃ is equipped with the σ-algebra N_t. Hence it converges also in probability (in measure µ), and so it is possible to select a subsequence that converges µ-a.s. In order not to change the notation, we suppose that a_k(t, x(·)) converges to (ā − ½ diag B̄)(t, x(·)) µ-a.s.

Choose δ > 0. By Egorov's theorem (see, e.g., [13]) there exists a set K_δ ⊂ Ω̃ such that µ(K_δ) > 1 − δ and on this set the sequence a_k(t, x(·)) converges to (ā − ½ diag B̄)(t, x(·)) uniformly. Let f: Ω̃ → R be an arbitrary bounded continuous function measurable with respect to N_t. Specify an arbitrary ε > 0. From the above uniform convergence on K_δ and the boundedness of f it follows that for all i and all t ∈ [0, T] simultaneously there exists N(ε) > 0 such that for k > N(ε)
\[
\Big\| \int_{K_\delta} f(x(\cdot)) \big( a_k(t, x(\cdot)) - (\bar a - \tfrac{1}{2}\,\mathrm{diag}\,\bar B)(t, x(\cdot)) \big) \, d\mu_i \Big\| < \varepsilon. \tag{21}
\]
Since f is bounded, there exists some E > 0 such that |f(x(·))| < E for all x(·) ∈ Ω̃. Note also that µ(Ω̃ \ K_δ) < δ. On the other hand, ||a_i(t, ξ_i(t))|| ≤ K(1 + ||ξ_i(t)||) by (14) and sup_i ∫_{Ω̃} ||ξ_i||²_{C^0} dµ_i < C_2 by (18). Note also the relation
\[
\int_{\|\xi_i\|_{C^0} > c} \|\xi_i\|_{C^0}\, d\mu_i \le \frac{1}{c} \int_{\|\xi_i\|_{C^0} > c} \|\xi_i\|^2_{C^0}\, d\mu_i
\]
(see [14]). Thus, taking into account Remark 2, we get
\[
\Big\| \int_{\tilde\Omega \setminus K_\delta} f(x(\cdot)) \big( a_k(t, x(\cdot)) - (\bar a - \tfrac{1}{2}\,\mathrm{diag}\,\bar B)(t, x(\cdot)) \big)\, d\mu_i \Big\| \le \delta E K \frac{1 + C_2}{c},
\]
i.e., since δ is an arbitrary positive number, the above norm of the integral becomes smaller than any positive number as δ → 0. Together with (21) this means that
\[
\lim_{k \to \infty} \Big\| \int_{\tilde\Omega} f(x(\cdot)) \big( a_k(t, x(\cdot)) - (\bar a - \tfrac{1}{2}\,\mathrm{diag}\,\bar B)(t, x(\cdot)) \big)\, d\mu_i \Big\| = 0
\]
for all i uniformly.

Note that (ā − ½ diag B̄)(t, x(·)) is continuous on a set of full measure µ in Ω̃. Indeed, it is a uniform limit of continuous functions on K_δ for every δ > 0 and so on every finite union of the sets K_δ; thus it is continuous on the finite unions of the sets K_{δ_j}. Evidently lim_{j→∞} µ(⋃_{i=1}^{j} K_{δ_i}) = 1 for a sequence δ_i → 0. Then by the properties of weak convergence of measures and by (14) we can apply Corollary 1 (i) and obtain that
\[
\lim_{k \to \infty} \int_{\tilde\Omega} f(x(\cdot)) (\bar a - \tfrac{1}{2}\,\mathrm{diag}\,\bar B)(t, x(\cdot))\, d\mu_k = \int_{\tilde\Omega} f(x(\cdot)) (\bar a - \tfrac{1}{2}\,\mathrm{diag}\,\bar B)(t, x(\cdot))\, d\mu
\]
and that
\[
\lim_{k \to \infty} \int_{\tilde\Omega} f(x(\cdot)) \big( x(t + \Delta t) - x(t) \big)\, d\mu_k = \int_{\tilde\Omega} f(x(\cdot)) \big( x(t + \Delta t) - x(t) \big)\, d\mu.
\]
The following relations take place:
\[
\Big\| \sum_{i=j(k)}^{n(k)} \beta_i \int_{\tilde\Omega} f(x(\cdot)) \Big( \frac{x(t+\Delta t) - x(t)}{\Delta t} - (a_i - \tfrac{1}{2}\,\mathrm{diag}\,\tilde B_i)(t, x(t)) \Big)\, d\mu_i - \int_{\tilde\Omega} f(x(\cdot)) \Big( \frac{x(t+\Delta t) - x(t)}{\Delta t} - (\bar a - \tfrac{1}{2}\,\mathrm{diag}\,\bar B)(t, x(\cdot)) \Big)\, d\mu \Big\|
\]
\[
\le \Big\| \sum_{i=j(k)}^{n(k)} \beta_i \int_{\tilde\Omega} f(x(\cdot))\, \frac{x(t+\Delta t) - x(t)}{\Delta t}\, d\mu_i - \int_{\tilde\Omega} f(x(\cdot))\, \frac{x(t+\Delta t) - x(t)}{\Delta t}\, d\mu \Big\|
\]
\[
+ \Big\| \sum_{i=j(k)}^{n(k)} \beta_i \Big( \int_{\tilde\Omega} f(x(\cdot)) (a_i - \tfrac{1}{2}\,\mathrm{diag}\,\tilde B_i)(t, x(t))\, d\mu_i - \int_{\tilde\Omega} f(x(\cdot)) (a_i - \tfrac{1}{2}\,\mathrm{diag}\,\tilde B_i)(t, x(t))\, d\mu \Big) \Big\|
\]
\[
+ \Big\| \int_{\tilde\Omega} f(x(\cdot))\, a_k(t, x(\cdot))\, d\mu - \int_{\tilde\Omega} f(x(\cdot)) (\bar a - \tfrac{1}{2}\,\mathrm{diag}\,\bar B)(t, x(\cdot))\, d\mu \Big\|,
\]
where the right-hand side of this inequality becomes less than every positive number for k large enough. Thus
\[
\lim_{\Delta t \to +0} \int_{\tilde\Omega} f(x(\cdot)) \Big( \frac{x(t+\Delta t) - x(t)}{\Delta t} - (\bar a - \tfrac{1}{2}\,\mathrm{diag}\,\bar B)(t, x(\cdot)) \Big)\, d\mu
= \lim_{\Delta t \to +0} \lim_{k \to \infty} \sum_{i=j(k)}^{n(k)} \beta_i \int_{\tilde\Omega} f(x(\cdot)) \Big( \frac{x(t+\Delta t) - x(t)}{\Delta t} - (a_i - \tfrac{1}{2}\,\mathrm{diag}\,\tilde B_i)(t, x(t)) \Big)\, d\mu_i = 0,
\]
and so Dξ(t) + ½ diag B̄(t, ξ(t)) = ā(t, ξ(t)) ∈ a(t, ξ(t)) µ-a.s. (see Remark 3).

Recall that we have constructed the sequence B̃_i(t, x(·)) that weakly converges to B̂(t, x(·)), and that B̄(t, x(·)) = E(B̂ | N_t) is µ-a.s. an N_t-measurable selector of B(t, ξ(t)) (see Remark 3). Then, applying Mazur's lemma and Egorov's theorem in analogy with the above arguments, we show that for f(·) as above
\[
\lim_{\Delta t \to +0} \int_{\tilde\Omega} f(\cdot) \Big[ \frac{(x(t+\Delta t) - x(t))(x(t+\Delta t) - x(t))^*}{\Delta t} - A(t, x(t)) A^*(t, x(t)) \Big]\, d\mu = 0,
\]
where A(t, x) A^*(t, x) = B̄(t, x). Hence D_2ξ(t) = A(t, ξ(t)) A^*(t, ξ(t)) = B̄(t, ξ(t)) ∈ B(t, ξ(t)) µ-a.s. □

Remark 4. Note that all sequences of ε-approximations for all sequences ε_i → 0 used in the proof of Theorem 1 satisfy (17) and (15) with the same K, so that by the corollary in Section III.2 of [9] the set of measures {µ_i} (corresponding to all such sequences and all i) is weakly compact.

Let f be a continuous bounded real-valued function on [0, T] × R^n. For solutions of (11) consider the cost criterion of the form
\[
J(\xi(\cdot)) = E \int_0^T f(t, \xi(t))\, dt. \tag{23}
\]
We are looking for solutions for which the value of the criterion is maximal.

Theorem 2. Among the perfect solutions of (11) constructed in the proof of Theorem 1 there is a solution ξ(t) on which the value of J is maximal.

Proof. Since all the measures on (Ω̃, F̃) constructed in the proof of Theorem 1 for perfect solutions of (11) are probability measures and the function f in (23) is bounded, the set of values of J on those solutions is bounded. If that set of values has a maximum, then the corresponding measure µ is the one we are looking for: the coordinate process on the space (Ω̃, F̃, µ) is an optimal solution.

Suppose that the above-mentioned set of values has no maximum; then it has a least upper bound ℵ that is a limit point of that set. Let µ_i^* be a sequence of measures such that for the corresponding solutions ξ_i^*(t) the values J(ξ_i^*(·)) converge to ℵ. Every µ_i^* is a weak limit of a sequence of measures µ_{ij} corresponding to some sequence of ε_j-approximations as j → ∞. Select from this sequence a subsequence (for simplicity we denote it by the same symbol µ_{ij}) such that for the corresponding solutions ξ_{ij}(t) and for all i we obtain the uniform convergence of J(ξ_{ij}(·)) to J(ξ_i^*(·)) as j → ∞. Then J(ξ_{ii}(·)) → ℵ as i → ∞. Since the set of all measures corresponding to all approximations is weakly compact (see above), we can select from µ_{ii} a subsequence (denoted by the same symbol µ_{ii}) that weakly converges to a certain measure µ^*. By the construction, for the coordinate process ξ^*(t) on (Ω̃, F̃, µ^*) we get J(ξ^*(·)) = ℵ, i.e., the value is maximal. Since µ^* is a limit of the µ_{ii}, ξ^*(t) is a perfect solution of (11), the one we are looking for. □

The assertion of Theorem 2 deals with the logarithms of generalized geometric Brownian motions satisfying inclusion (11).
Note that it remains true also for the corresponding generalized geometric Brownian motions. Indeed, introduce the cost criterion J̃(ξ(·)) = J(exp ξ(·)) (see Section 3). This criterion satisfies the hypothesis of Theorem 2, and so among the generalized geometric Brownian motions corresponding to the solutions of inclusion (11) there is an optimal process maximizing J.
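For a computational illustration of the criterion (23) (added here; the dynamics, the running cost f and the finite control family are all assumptions of this sketch), J can be estimated for a candidate solution by Monte Carlo: simulate the trajectories of ξ(t) for a fixed control, integrate f(t, ξ(t)) along each path by the trapezoidal rule, average over the paths, and then compare the estimates over the control family.

```python
import numpy as np

rng = np.random.default_rng(2)
T, dt, n_paths = 1.0, 1e-2, 20_000
steps = int(T / dt)
times = np.linspace(0.0, T, steps + 1)

def f(t, x):                       # bounded continuous running cost (illustrative)
    return np.exp(-((x - 0.5) ** 2))

def estimate_J(u):
    """Monte Carlo estimate of J = E int_0^T f(t, xi(t)) dt for a constant control u,
    with illustrative 1-D dynamics d xi = (u - 0.5 * b) dt + sqrt(b) dw, b = 0.2."""
    b = 0.2
    xi = np.zeros(n_paths)
    values = [f(0.0, xi)]
    for k in range(steps):
        xi = xi + (u - 0.5 * b) * dt + np.sqrt(b * dt) * rng.standard_normal(n_paths)
        values.append(f(times[k + 1], xi))
    vals = np.stack(values)                                   # shape (steps + 1, n_paths)
    running = dt * (vals[1:-1].sum(axis=0) + 0.5 * (vals[0] + vals[-1]))  # trapezoidal rule
    return running.mean()

controls = np.linspace(-1.0, 1.0, 9)
estimates = [estimate_J(u) for u in controls]
best = controls[int(np.argmax(estimates))]
print(best, max(estimates))
```

This brute-force comparison over finitely many controls only illustrates what the criterion measures; Theorem 2 asserts the existence of an optimal perfect solution among all solutions of the inclusion, which is a much stronger statement.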
References
- Nelson, E. Derivation of the Schrödinger equation from Newtonian mechanics/E. Nelson//Phys. Reviews. -1966. -V. 150. -P. 1079-1085.
- Nelson, E. Dynamical theory of Brownian motion/E. Nelson. -Princeton: Princeton University Press, 1967. -142 p.
- Nelson, E. Quantum fluctuations/E. Nelson. -Princeton: Princeton University Press, 1985. -147 p.
- Gliklikh, Yu.E. Global and Stochastic Analysis with Applications to Mathematical Physics/Yu.E. Gliklikh. -London: Springer-Verlag, 2011. -460 p.
- Azarina, S.V. Differential inclusions with mean derivatives/S.V. Azarina, Yu.E. Gliklikh//Dynamic systems and applications. -2007. -V. 16. -P. 49-72.
- Азарина, С.В. Включения с производными в среднем для процессов типа геометрического броуновского движения и их приложения/С.В. Азарина, Ю.Е. Гликлих//Семинар по глобальному и стохастическому анализу. -2009. -Вып. 4. -С. 3-8.
- Введение в теорию многозначных отображений и дифференциальных включений/Ю.Г. Борисович, Б.Д. Гельман, А.Д. Мышкис, В.В. Обуховский. -М.: Комкнига, 2005. -213 с.
- Гликлих, Ю.Е. Глобальный и стохастический анализ в задачах математической физики/Ю.Е. Гликлих. -М.: Комкнига, 2005. -416 с.
- Гихман, И.И. Теория случайных процессов/И.И. Гихман, А.В. Скороход. -М.: Наука, 1975. -Т.3. -496 с.
- Канторович Л.В. Функциональный анализ/Л.В. Канторович, Г.П. Акилов. -М.: Наука, 1977. -742 с.
- Партасарати, К. Введение в теорию вероятностей и теорию меры/К. Партасарати. -М.: Мир, 1988. -343 с.
- Gliklikh, Yu.E. Stochastic differential inclusions of Langevin type on Riemannian manifolds/Yu.E. Gliklikh, A.V. Obukhovski//Discussiones Mathematicae DICO. -2001. -V. 21. -P. 173-190.
- Иосида, К. Функциональный анализ/К. Иосида. -М.: Мир, 1967. -624 с.
- Биллингсли П. Сходимость вероятностных мер/П. Биллингсли. -М.: Наука, 1977. -351 с.