Bandini, E., Cosso, A., Confortola, F. (2019). BSDE representation and randomized dynamic programming principle for stochastic control problems of infinite-dimensional jump-diffusions. Electronic Journal of Probability, 24, 1-37. DOI: 10.1214/19-EJP333.
BSDE representation and randomized dynamic programming principle for stochastic control problems of infinite-dimensional jump-diffusions
Bandini, E.; Cosso, A.; Confortola, F.
2019
Abstract
We consider a general class of stochastic optimal control problems in which the state process lives in a real separable Hilbert space and is driven by a cylindrical Brownian motion and a Poisson random measure; no special structure is imposed on the coefficients, which are also allowed to be path-dependent, and the diffusion coefficient may be degenerate. For this class of problems we prove, by purely probabilistic techniques based on the so-called randomization method, that the value of the control problem admits a probabilistic representation formula (a non-linear Feynman-Kac formula) in terms of a suitable backward stochastic differential equation. This representation considerably extends the existing results in the literature on the infinite-dimensional case and is also relevant in finite dimension. In the non-path-dependent (Markovian) case, it allows us to show that the value function satisfies the so-called randomized dynamic programming principle. As a consequence, we prove that the value function is a viscosity solution of the corresponding Hamilton-Jacobi-Bellman equation, which turns out to be a second-order fully non-linear integro-differential equation in Hilbert space.
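For orientation only, here is a minimal sketch of the Markovian version of the problem described above, written in generic notation; the symbols $b$, $\sigma$, $\gamma$, $f$, $g$, $Z$, $\nu$ and the finite horizon $T$ are illustrative assumptions and not taken from the paper, which in addition allows path-dependent coefficients. The controlled state equation has the generic shape
$$
dX_s = b(s,X_s,\alpha_s)\,ds + \sigma(s,X_s,\alpha_s)\,dW_s + \int_Z \gamma(s,X_{s^-},\alpha_s,z)\,\tilde\pi(ds,dz), \qquad X_t = x \in H,
$$
with $H$ a real separable Hilbert space, $W$ a cylindrical Brownian motion, $\tilde\pi$ a compensated Poisson random measure with intensity $\nu$, and $\alpha$ an admissible control. The value function
$$
V(t,x) = \sup_{\alpha}\, \mathbb{E}\Big[\int_t^T f(s,X_s,\alpha_s)\,ds + g(X_T)\Big]
$$
is then expected to solve, in the viscosity sense, a Hamilton-Jacobi-Bellman equation of the generic form
$$
-\partial_t V(t,x) - \sup_{a}\Big\{ \langle b(t,x,a), D_x V(t,x)\rangle_H + \tfrac12\,\mathrm{Tr}\big[\sigma\sigma^*(t,x,a)\,D_x^2 V(t,x)\big] + \int_Z \big( V(t,x+\gamma(t,x,a,z)) - V(t,x) - \langle \gamma(t,x,a,z), D_x V(t,x)\rangle_H \big)\,\nu(dz) + f(t,x,a) \Big\} = 0, \qquad V(T,\cdot) = g,
$$
which is the second-order fully non-linear integro-differential equation in Hilbert space referred to in the abstract.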
| File | Description | Attachment type | Size | Format | Access |
|---|---|---|---|---|---|
| BandiniCossoConfortola_RandomMethodJumpDiff.pdf | arXiv preprint | Submitted Version (Pre-print) | 464.83 kB | Adobe PDF | Open access |
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.