
A Donsker delta functional approach to optimal insider control and applications to finance

Added by Bernt Øksendal
 Publication date 2015
Research language: English





We study \emph{optimal insider control problems}, i.e. optimal control problems of stochastic systems where the controller, at any time $t$, in addition to knowledge of the history of the system up to that time, also has information related to a \emph{future} value of the system. Since this places the associated controlled systems outside the context of semimartingales, we apply anticipative white noise analysis, including forward integration and Hida-Malliavin calculus, to study the problem. Combining this with Donsker delta functionals, we transform the insider control problem into a classical (but parametrised) adapted control system, albeit with a non-classical performance functional. We establish a sufficient and a necessary maximum principle for such systems. We then apply the results to obtain explicit solutions of some optimal insider portfolio problems in financial markets described by Itô-Lévy processes. Finally, in the Appendix we give a brief survey of the concepts and results we need from the theory of white noise, forward integrals and Hida-Malliavin calculus.
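To make the transformation concrete, here is a minimal LaTeX sketch of the defining property of the Donsker delta functional and of the resulting parametrisation; the symbols $Y$ (the inside information) and $x(t,y)$ are illustrative and not taken verbatim from the paper.

$$\int_{\mathbb{R}} g(y)\,\delta_Y(y)\,dy = g(Y) \quad \text{for all bounded measurable } g,$$

where $\delta_Y(\cdot)$ is the Donsker delta functional of $Y$, regarded as an element of the Hida space of stochastic distributions. Writing the insider-controlled state correspondingly as

$$X(t) = \int_{\mathbb{R}} x(t,y)\,\delta_Y(y)\,dy,$$

with $x(t,y)$ adapted to the underlying filtration for each fixed $y$, turns the insider control problem into a family of classical adapted control problems indexed by the parameter $y$.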



Related research

In this effort, a novel operator-theoretic framework is developed for the data-driven solution of optimal control problems. The developed methods focus on the use of trajectories (i.e., time series) as the fundamental unit of data for the resolution of optimal control problems in dynamical systems. Trajectory information in the dynamical systems is embedded in a reproducing kernel Hilbert space (RKHS) through what are called occupation kernels. The occupation kernels are tied to the dynamics of the system through the densely defined Liouville operator. The pairing of Liouville operators and occupation kernels allows nonlinear finite-dimensional optimal control problems to be lifted into the space of infinite-dimensional linear programs over RKHSs.
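As a rough, hedged sketch (the Hilbert space $H$, the symbol $f$ and the trajectory $\gamma$ below are generic, not the paper's notation), the occupation kernel and the Liouville operator are usually paired as follows:

$$\langle g, \Gamma_{\gamma} \rangle_{H} = \int_{0}^{T} g(\gamma(t))\,dt, \qquad (A_{f}\,g)(x) = \nabla g(x) \cdot f(x),$$

so that, whenever $\dot{\gamma}(t) = f(\gamma(t))$,

$$\langle A_{f}\,g, \Gamma_{\gamma} \rangle_{H} = g(\gamma(T)) - g(\gamma(0)).$$

Here $\Gamma_{\gamma} \in H$ is the occupation kernel of the trajectory $\gamma : [0,T] \to \mathbb{R}^{n}$ and $A_{f}$ is the densely defined Liouville operator with symbol $f$.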
This paper is concerned with the distributed linear quadratic optimal control problem. In particular, we consider a suboptimal version of the distributed optimal control problem for undirected multi-agent networks. Given a multi-agent system with identical agent dynamics and an associated global quadratic cost functional, our objective is to design suboptimal distributed control laws that guarantee the controlled network to reach consensus and the associated cost to be smaller than an a priori given upper bound. We first analyze the suboptimality for a given linear system and then apply the results to linear multi-agent systems. Two design methods are then provided to compute such suboptimal distributed controllers, involving the solution of a single Riccati inequality of dimension equal to the dimension of the agent dynamics, and the smallest nonzero and the largest eigenvalue of the graph Laplacian. Furthermore, we relax the requirement of exact knowledge of the smallest nonzero and largest eigenvalue of the graph Laplacian by using only lower and upper bounds on these eigenvalues. Finally, a simulation example is provided to illustrate our design method.
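As a generic illustration (not the paper's exact formulas), suboptimal distributed protocols of this type typically take the form

$$u_{i} = c\,K \sum_{j \in \mathcal{N}_{i}} (x_{j} - x_{i}), \qquad K = -R^{-1}B^{\top}P,$$

where $P = P^{\top} > 0$ solves a Riccati inequality whose dimension equals that of the single-agent dynamics $(A,B)$, and the scalar coupling gain $c$ is chosen using the smallest nonzero eigenvalue $\lambda_{2}$ and the largest eigenvalue $\lambda_{N}$ of the graph Laplacian (or lower and upper bounds on them).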
We study the problem of optimal insider control of an SPDE (a stochastic evolution equation) driven by a Brownian motion and a Poisson random measure. Our optimal control problem is new in two ways: (i) the controller has access to inside information, i.e. to information about a future state of the system, and (ii) the integro-differential operator of the SPDE may depend on the control. In the first part of the paper, we formulate a sufficient and a necessary maximum principle for this type of control problem in two cases: (1) when the control is allowed to depend both on time $t$ and on the space variable $x$; (2) when the control is not allowed to depend on $x$. In the second part of the paper, we apply these results to the problem of optimal control of an SDE system when the insider controller has only noisy observations of the state of the system. Using results from nonlinear filtering, we transform this noisy-observation SDE insider control problem into a full-observation SPDE insider control problem. The results are illustrated by explicit examples.
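The filtering step mentioned above is usually carried out along the following lines (a generic sketch; the paper's precise model may differ). With a noisy observation process and the associated unnormalised conditional density $\Phi$,

$$dZ(t) = h(X(t))\,dt + dV(t), \qquad d\Phi(t,x) = L^{*}\Phi(t,x)\,dt + h(x)\,\Phi(t,x)\,dZ(t),$$

the second equation being of Duncan-Mortensen-Zakai type, the partially observed SDE control problem is replaced by a fully observed control problem for the SPDE satisfied by $\Phi$.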
We combine stochastic control methods, white noise analysis and Hida-Malliavin calculus applied to the Donsker delta functional to obtain new representations of semimartingale decompositions under enlargement of filtrations. The results are illustrated by explicit examples.
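A representation of the flavour obtained there can be sketched as follows (notation illustrative): for an initially enlarged filtration $\mathcal{G}_{t} = \mathcal{F}_{t} \vee \sigma(Y)$, an $\mathcal{F}$-Brownian motion $B$ decomposes as a $\mathcal{G}$-semimartingale,

$$B(t) = \hat{B}(t) + \int_{0}^{t} \alpha(s)\,ds, \qquad \alpha(s) = \frac{\mathbb{E}[D_{s}\,\delta_{Y}(y) \mid \mathcal{F}_{s}]}{\mathbb{E}[\delta_{Y}(y) \mid \mathcal{F}_{s}]}\Big|_{y=Y},$$

with $\hat{B}$ a $\mathcal{G}$-Brownian motion, $\delta_{Y}$ the Donsker delta functional and $D_{s}$ the Hida-Malliavin derivative; the exact formulas in the paper may differ.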
This paper deals with the distributed $\mathcal{H}_2$ optimal control problem for linear multi-agent systems. In particular, we consider a suboptimal version of the distributed $\mathcal{H}_2$ optimal control problem. Given a linear multi-agent system with identical agent dynamics and an associated $\mathcal{H}_2$ cost functional, our aim is to design a distributed diffusive static protocol such that the protocol achieves state synchronization for the controlled network and such that the associated cost is smaller than an a priori given upper bound. We first analyze the $\mathcal{H}_2$ performance of linear systems and then apply the results to linear multi-agent systems. Two design methods are provided to compute such a suboptimal distributed protocol. For each method, the expression for the local control gain involves a solution of a single Riccati inequality of dimension equal to the dimension of the individual agent dynamics, and the smallest nonzero and the largest eigenvalue of the graph Laplacian.
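For reference, the $\mathcal{H}_2$ performance measure underlying such an analysis is typically the squared $\mathcal{H}_2$ norm of the closed-loop transfer matrix from disturbance to performance output, computable from a Lyapunov equation (a generic sketch, not the paper's specific setup):

$$\dot{x} = Ax + Ew, \quad z = Cx, \qquad \|T_{wz}\|_{\mathcal{H}_2}^{2} = \operatorname{tr}\!\left(E^{\top} Y E\right), \quad A^{\top}Y + YA + C^{\top}C = 0,$$

valid when $A$ is Hurwitz, with $Y$ the observability Gramian.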
