Risk control and optimal diversification constitute a major focus in the finance and insurance industries as well as, more or less consciously, in our everyday life. We present a discussion of the characterization of risks and of the optimization of portfolios that starts from a simple illustrative model and ends with a general functional-integral formulation. A major theme is that risk, usually treated as one-dimensional in the conventional mean-variance approach, has to be addressed through the full distribution of losses. Furthermore, the time horizon of the investment is shown to play a major role. We show the importance of accounting for large fluctuations and use Cramér's theory of large deviations in this context. We first treat a simple model with a single risky asset that exemplifies the distinction between the average return and the typical return, the role of large deviations in multiplicative processes, and the different optimal strategies for investors depending on their size. We then analyze the case of assets whose price variations are distributed according to exponential laws, a situation that is found to describe daily price variations reasonably well. Several portfolio optimization strategies are presented that aim at controlling large risks. We end by extending the standard mean-variance portfolio optimization theory, first within the quasi-Gaussian approximation and then using a general formulation for non-Gaussian correlated assets in terms of the formalism of functional integrals developed in the field theory of critical phenomena.
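The distinction drawn above between the average and the typical return of a multiplicative process can be checked numerically. The sketch below uses purely illustrative parameters (an assumption, not values from the text): wealth is multiplied each period by 2.0 or 0.4 with equal probability, so the ensemble-average growth rate log E[factor] is positive while the typical (median) growth rate E[log factor] is negative, and most paths lose money.

```python
import numpy as np

rng = np.random.default_rng(0)
up, down, T, n_paths = 2.0, 0.4, 50, 200_000  # toy parameters (assumptions)

# Each period the wealth is multiplied by `up` or `down` with equal probability.
factors = rng.choice([up, down], size=(n_paths, T))
log_wealth = np.log(factors).sum(axis=1)

avg_rate = np.log(0.5 * (up + down))          # log E[factor]  ~ +0.182 (average)
typ_rate = 0.5 * (np.log(up) + np.log(down))  # E[log factor]  ~ -0.112 (typical)

print(f"average growth rate: {avg_rate:+.3f}")
print(f"typical growth rate: {typ_rate:+.3f}")
print(f"median sampled rate: {np.median(log_wealth) / T:+.3f}")
print(f"fraction of losing paths: {(log_wealth < 0).mean():.2f}")
```

The ensemble average is dominated by exponentially rare lucky paths, which is exactly why the full distribution of outcomes, not just its mean, matters.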
Using a family of modified Weibull distributions, encompassing both sub-exponentials and super-exponentials, to parameterize the marginal distributions of asset returns and their natural multivariate generalizations, we give exact formulas for the tails and for the moments and cumulants of the distribution of returns of a portfolio made of arbitrary compositions of these assets. Using combinatorial and hypergeometric functions, we are in particular able to extend previous results to the case where the exponents of the Weibull distributions differ from asset to asset, and to the presence of dependence between assets. We treat in detail the problem of risk minimization using two different measures of risk (cumulants and value-at-risk) for a portfolio made of two assets and compare the theoretical predictions with direct empirical data. While good agreement is found, the remaining discrepancy between theory and data stems from the deviations from the Weibull parameterization for small returns. Our extended formulas enable us to determine analytically the conditions under which it is possible to ``have your cake and eat it too,'' i.e., to construct a portfolio with both a larger return and smaller ``large risks.''
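The interpolation between sub- and super-exponential tails can be illustrated with a toy sampler. The construction below is an assumption for illustration (one convenient way to build a symmetric modified-Weibull-type variate by mapping a Gaussian), not necessarily the paper's exact parameterization; it reduces to an exact Gaussian at c = 2 and develops fat sub-exponential tails for c < 1, as the growing excess kurtosis shows.

```python
import numpy as np

rng = np.random.default_rng(1)

def modified_weibull_sample(c, chi, size):
    # Map a standard Gaussian W through X = sign(W) * chi * (W**2 / 2)**(1/c).
    # At c = 2 this is exactly Gaussian; for c < 1 the tails exp(-(|x|/chi)**c)
    # are sub-exponential (fatter than exponential).
    w = rng.standard_normal(size)
    return np.sign(w) * chi * (w**2 / 2.0) ** (1.0 / c)

results = {}
for c in (0.7, 1.0, 2.0):  # sub-exponential, exponential, Gaussian
    x = modified_weibull_sample(c, 1.0, 1_000_000)
    results[c] = (x**4).mean() / (x**2).mean() ** 2 - 3.0
    print(f"c = {c}: excess kurtosis ~ {results[c]:.2f}")
```

The monotone growth of the excess kurtosis as c decreases is the one-number signature of the increasingly heavy tails that the portfolio formulas have to handle.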
Mean-variance portfolio optimization problems often involve separable nonconvex terms, including penalties on capital gains, integer share constraints, and minimum position and trade sizes. We propose a heuristic algorithm for this problem based on the alternating direction method of multipliers (ADMM). This method achieves solve times of tens to hundreds of milliseconds for problems with around 1000 securities and 100 risk factors. We also obtain a bound on the achievable performance. Our heuristic and bound are both derived from similar results for other optimization problems with a separable objective and affine equality constraints. We discuss a concrete implementation in the case where the separable terms in the objective are piecewise quadratic, and we demonstrate their effectiveness empirically on realistic tax-aware portfolio construction problems.
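A minimal sketch of such an ADMM heuristic, under assumptions chosen for illustration: a toy covariance, a budget constraint, and a lot-size (minimum trade increment) requirement as the separable nonconvex term. All parameter values are hypothetical. The convex w-update solves a small KKT system, the nonconvex z-update is a projection (rounding onto the lot grid), and the scaled dual variable u accumulates the consensus residual.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10                                   # toy universe size (assumption)
A = rng.standard_normal((n, n))
Sigma = A @ A.T / n + 0.1 * np.eye(n)    # toy covariance matrix
mu = rng.uniform(0.0, 0.1, n)            # toy expected returns
gamma, rho, lot = 5.0, 1.0, 0.05         # risk aversion, ADMM penalty, lot size

# Split: minimize gamma*w'Sigma*w - mu'w (convex, with 1'w = 1)
#        + indicator{z on the lot grid}  (separable, nonconvex),
# with the consensus constraint w = z.
ones = np.ones(n)
K = np.block([[2 * gamma * Sigma + rho * np.eye(n), ones[:, None]],
              [ones[None, :], np.zeros((1, 1))]])   # KKT matrix for the w-update

z = np.full(n, 1.0 / n)
u = np.zeros(n)
for _ in range(200):
    rhs = np.concatenate([mu + rho * (z - u), [1.0]])
    w = np.linalg.solve(K, rhs)[:n]       # exact solve of the convex subproblem
    z = lot * np.round((w + u) / lot)     # projection onto the nonconvex lot grid
    u += w - z                            # scaled dual update

print(np.round(z, 3), z.sum())
```

Because the z-update is a projection onto a nonconvex set, this is a heuristic rather than a globally optimal method, which is why a separate performance bound is useful.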
The dynamic portfolio optimization problem in finance frequently requires learning policies that adhere to various constraints, driven by investor preferences and risk considerations. We motivate this problem of finding an allocation policy within a sequential decision-making framework and study the effects of: (a) using data collected under previously employed policies, which may be sub-optimal and constraint-violating, and (b) imposing desired constraints while computing near-optimal policies with this data. Our framework relies on solving a minimax objective, where one player evaluates policies via off-policy estimators, and the opponent uses an online learning strategy to control constraint violations. We extensively investigate various choices for off-policy estimation and their corresponding optimization sub-routines, and quantify their impact on computing constraint-aware allocation policies. Our study shows promising results for constructing such policies when back-tested on historical equities data, under various regimes of operation, dimensionality, and constraints.
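One building block of such off-policy evaluation is the importance-sampling estimator. The toy one-step (bandit-style) sketch below, with hypothetical policies and rewards chosen for illustration, reweights data collected under a sub-optimal behavior policy to estimate the value of a candidate allocation policy.

```python
import numpy as np

rng = np.random.default_rng(3)
n_samples = 50_000
behavior = np.array([0.5, 0.3, 0.2])     # data-collecting (sub-optimal) policy
target = np.array([0.2, 0.2, 0.6])       # candidate policy to evaluate
true_reward = np.array([0.1, 0.0, 0.3])  # toy expected reward per action

# Log data under the behavior policy only.
actions = rng.choice(3, size=n_samples, p=behavior)
rewards = true_reward[actions] + 0.1 * rng.standard_normal(n_samples)

# Importance-sampling estimate of the target policy's value:
# E_behavior[(target(a) / behavior(a)) * r] = E_target[r]
weights = target[actions] / behavior[actions]
is_estimate = np.mean(weights * rewards)

print(f"IS estimate: {is_estimate:.3f}  (true value: {target @ true_reward:.3f})")
```

In the sequential setting the weights become products of per-step ratios, and their variance is one reason the choice of off-policy estimator matters so much.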
For diffusive many-particle systems such as the SSEP (symmetric simple exclusion process) or independent particles coupled with reservoirs at the boundaries, we analyze the density fluctuations conditioned on the current integrated over a large time. We determine the conditioned large deviation function of the density by a microscopic calculation. We then show that it can be expressed in terms of the solutions of Hamilton-Jacobi equations, which can be written for general diffusive systems using a fluctuating-hydrodynamics description.
We present a systematic analysis of stochastic processes conditioned on an empirical measure $Q_T$ defined over a time interval $[0,T]$ for large $T$. We build our analysis starting from a discrete-time Markov chain; results for a continuous-time Markov process and for Langevin dynamics are derived as limiting cases. We show how conditioning on a value of $Q_T$ modifies the dynamics. For a Langevin dynamics with weak noise, we introduce conditioned large deviation functions and calculate them using either a WKB method or a variational formulation. This allows us, in particular, to calculate the typical trajectory, and the fluctuations around this optimal trajectory, when the process is conditioned on a certain value of $Q_T$.
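For a discrete-time Markov chain, the standard large-deviation machinery behind such conditioning can be sketched numerically: tilt the transition matrix by the observable, read the scaled cumulant generating function off its leading eigenvalue, and obtain the rate function by a Legendre transform. The two-state chain below uses toy numbers (an assumption for illustration).

```python
import numpy as np

# Toy two-state Markov chain (numbers are illustrative assumptions).
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
f = np.array([1.0, 0.0])   # observable: Q_T = fraction of time spent in state 0

def scgf(s):
    # Scaled cumulant generating function: log of the leading eigenvalue of
    # the tilted matrix P~(i, j) = P(i, j) * exp(s * f(j)).
    tilted = P * np.exp(s * f)[None, :]
    return np.log(np.max(np.linalg.eigvals(tilted).real))

def rate(q, s_grid=np.linspace(-5.0, 5.0, 2001)):
    # Rate function by numerical Legendre transform of scgf.
    return np.max(s_grid * q - np.array([scgf(s) for s in s_grid]))

pi0 = 2.0 / 3.0   # stationary probability of state 0 for this P
print(scgf(0.0))                              # ~0 (normalization)
print((scgf(1e-5) - scgf(-1e-5)) / 2e-5)      # ~pi0: typical value of Q_T
print(rate(pi0))                              # ~0: the typical value has no cost
```

Conditioning the chain on an atypical value of $Q_T$ corresponds to working at the tilt $s$ where $\lambda'(s)=q$, which is how the modified (conditioned) dynamics arises.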