
Is probabilistic modeling really useful in financial engineering? (A-t-on vraiment besoin d'un modèle probabiliste en ingénierie financière ?)

Added by: Michel Fliess
Publication date: 2011
Language: English
Authors: Michel Fliess





A new standpoint on financial time series, which relies neither on a mathematical model nor on probabilistic tools, yields not only a rigorous approach to trends and volatility but also efficient calculations that have already been applied successfully in automatic control and signal processing. It rests on a theorem due to P. Cartier and Y. Perrin, published in 1995. These results are employed to sketch dynamical portfolio and strategy management without any global optimization technique. Numerous computer simulations are presented.
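To make the decomposition concrete: the Cartier-Perrin theorem states, roughly, that under mild conditions a time series splits into a slowly varying trend plus a quickly fluctuating term that averages out over any window. The Python sketch below only illustrates this viewpoint with a sliding-window mean and a rolling dispersion of the residuals; it is not the authors' algebraic estimation filters, and the window length is an arbitrary assumption.

```python
# Minimal illustration (not the authors' exact filters) of the Cartier-Perrin
# viewpoint: price = slowly varying trend + quickly fluctuating term, with
# "volatility" measured as the dispersion of fluctuations around the trend.
import numpy as np

def trend_and_volatility(prices: np.ndarray, window: int = 20):
    """Sliding-window mean as the trend; rolling std of residuals as volatility."""
    kernel = np.ones(window) / window
    trend = np.convolve(prices, kernel, mode="same")  # edge effects at the ends
    residuals = prices - trend
    vol = np.array([residuals[max(0, t - window):t + 1].std()
                    for t in range(len(prices))])
    return trend, vol

# Toy usage on a synthetic random walk.
rng = np.random.default_rng(0)
prices = 100 + np.cumsum(rng.normal(0, 1, 500))
trend, vol = trend_and_volatility(prices)
```

Because the fluctuating term averages out over any window in this setting, its rolling dispersion serves as a model-free volatility proxy, which is the sense in which no probabilistic model is needed.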



Related research

Ben Boukai (2021)
Following Boukai (2021), we present the Generalized Gamma (GG) distribution as a possible risk-neutral density (RND) for modeling European option prices under Heston's (1993) stochastic volatility (SV) model. This distribution is especially useful in situations in which the spot price follows a negatively skewed distribution, so that Black-Scholes-based (i.e. log-normal) modeling is largely inapt. We apply the GG distribution as RND to current market option data on three large market-index ETFs, namely SPY, IWM and QQQ, as well as on TLT (an ETF that tracks an index of long-term US Treasury bonds). The current option chain of each of the three market-index ETFs shows a pronounced skew in its volatility 'smile', which indicates a likely distortion in Black-Scholes modeling of such option data. Reflective of entirely different market expectations, this distortion appears not to exist in the TLT option data. We provide a thorough modeling of the available option data on each ETF (with the October 15, 2021 expiration) based on the GG distribution and compare it to the option pricing and RND modeling obtained directly from a well-calibrated Heston (1993) SV model (both theoretically and empirically, using Monte Carlo simulations of the spot price). All three market-index ETFs exhibit negatively skewed distributions which are well matched by those derived under the GG distribution as RND. The inadequacy of Black-Scholes modeling in such instances, which involve negatively skewed distributions, is further illustrated by its impact on the hedging factor, delta, and the immediate implications for the retail trader. In contrast, for the TLT ETF, which exhibits no such distortion in its volatility 'smile', the three pricing models (Heston, Black-Scholes and Generalized Gamma) appear to yield similar results.
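To make the RND mechanics concrete, here is a sketch of pricing a European call as the discounted integral of its payoff against a Generalized Gamma density, via SciPy's gengamma. The parameters a, c and scale, and the SPY-like spot level, are illustrative placeholders rather than calibrated values from the study.

```python
# Hedged sketch: European call price as the discounted risk-neutral
# expectation of max(S_T - K, 0) under a Generalized Gamma RND.
import numpy as np
from scipy.stats import gengamma
from scipy.integrate import quad

def call_price_under_gg(strike, r, T, a, c, scale):
    """Integrate the call payoff against a frozen GG density, then discount."""
    rnd = gengamma(a, c, scale=scale)
    integrand = lambda s: (s - strike) * rnd.pdf(s)
    price, _ = quad(integrand, strike, np.inf)
    return np.exp(-r * T) * price

# Example with made-up parameters whose GG mean sits near a 440-ish spot.
print(call_price_under_gg(strike=440.0, r=0.01, T=30 / 365,
                          a=20.0, c=2.0, scale=100.0))
```

In a real calibration the GG parameters would be fitted to the quoted option chain; skew in the fitted density is what lets it match the pronounced volatility 'smile' the abstract describes.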
Management of systemic risk in financial markets is traditionally associated with setting (higher) capital requirements for market participants. There are indications that while equity ratios have been increased massively since the financial crisis, systemic risk levels might not have fallen and may even have risen. It has been shown that systemic risk is to a large extent related to the underlying network topology of financial exposures. A natural question is how much systemic risk can be eliminated by optimally rearranging these networks, without increasing capital requirements. Overlapping portfolios with minimized systemic risk, providing the same market functionality as empirical ones, have been studied by [pichler2018]. Here we propose a similar method for direct exposure networks and apply it to cross-sectional interbank loan networks, consisting of 10 quarterly observations of the Austrian interbank market. We show that the suggested framework rearranges the network topology such that systemic risk is reduced by a factor of approximately 3.5, while leaving the relevant economic features of the optimized network and its agents unchanged. The presented optimization procedure is not intended to actually re-configure interbank markets, but to demonstrate the huge potential for systemic-risk management through rearranging exposure networks, in contrast to increasing capital requirements, which were shown to have only marginal effects on systemic risk [poledna2017]. Ways to actually incentivize a self-organized move toward optimal network configurations were introduced in [thurner2013] and [poledna2016]. For regulatory policies concerning financial-market stability, knowledge of the minimal systemic risk for a given economic environment can serve as a benchmark for monitoring actual systemic risk in markets.
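The optimization principle can be illustrated with a toy: rearrange an exposure matrix while keeping every bank's total lending and borrowing fixed (a crude stand-in for "market functionality"), accepting only rearrangements that lower a simple concentration-based risk proxy. This sketch is not the DebtRank-based procedure of the paper; the risk measure, swap move, and capital levels are all assumptions.

```python
# Toy network-rearrangement sketch: checkerboard swaps preserve row and
# column sums of the exposure matrix exactly, so each bank's total lending
# and borrowing never changes while the topology does.
import numpy as np

rng = np.random.default_rng(1)
n = 10
L = rng.exponential(1.0, (n, n))       # L[i, j]: loan from bank i to bank j
np.fill_diagonal(L, 0.0)               # no self-loans
equity = np.full(n, 1.8)               # stylized, equal capital buffers

def risk(L):
    """Mean squared equity-loss fraction over single-counterparty defaults:
    a crude proxy that penalizes concentrated exposures."""
    frac = np.minimum(L / equity[:, None], 1.0)
    return (frac ** 2).mean()

def checkerboard_swap(L, eps=0.05):
    """Shift exposure mass among four cells; row/column sums stay fixed."""
    M = L.copy()
    while True:
        i, k = rng.choice(n, 2, replace=False)
        j, l = rng.choice(n, 2, replace=False)
        if not {i, k} & {j, l}:        # keep the diagonal empty
            break
    step = min(eps, M[i, j], M[k, l])
    M[i, j] -= step; M[k, l] -= step
    M[i, l] += step; M[k, j] += step
    return M

base = risk(L)
for _ in range(20000):                 # greedy accept-if-better search
    cand = checkerboard_swap(L)
    if risk(cand) < risk(L):
        L = cand
print(base, "->", risk(L))             # risk falls, market functionality intact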
The history of research in finance and economics has been widely impacted by the field of Agent-based Computational Economics (ACE). While popular among natural-science researchers for its proximity to the successful methods of, for example, physics and chemistry, ACE has also drawn criticism from parts of the social-science community for its lack of empiricism. Yet recent trends have shifted the weight of these general arguments and potentially given ACE a whole new range of realism. At the base of these trends are two present-day scientific breakthroughs: the steady shift of psychology towards a hard science due to the advances of neuropsychology, and the progress of artificial intelligence, and more specifically machine learning, due to increasing computational power and big data. The two have also found common fields of study in computational neuroscience and human-computer interaction, among others. We outline here the main lines of a computational study of collective economic behavior via agent-based models (ABM) or multi-agent systems (MAS), in which each agent is endowed with specific cognitive and behavioral biases known to the field of neuroeconomics while autonomously implementing rational quantitative financial strategies updated by machine learning. We postulate that such ABMs would offer a whole new range of realism.
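As a minimal sketch of the proposed agent design, the toy class below couples a hard-coded behavioural bias (here, an overconfidence knob scaling position size) with a trading rule whose single parameter is updated online from realized returns. The bias, the rule, and the learning update are all illustrative choices, not specifics from the text.

```python
# Hedged sketch: one agent = one neuroeconomics-style bias + one learned rule.
import numpy as np

class BiasedLearningAgent:
    def __init__(self, overconfidence: float, lr: float = 0.01):
        self.overconfidence = overconfidence  # behavioural-bias knob
        self.weight = 0.0                     # learned momentum coefficient
        self.lr = lr

    def decide(self, past_return: float) -> float:
        """Position in [-1, 1], inflated by the overconfidence bias."""
        raw = np.tanh(self.weight * past_return)
        return float(np.clip(raw * (1.0 + self.overconfidence), -1.0, 1.0))

    def learn(self, past_return: float, realized_pnl: float) -> None:
        """Gradient-style update: reinforce the weight when the bet paid off."""
        self.weight += self.lr * past_return * realized_pnl

# Toy loop: heterogeneous agents trading a random-walk asset.
rng = np.random.default_rng(2)
agents = [BiasedLearningAgent(b) for b in (0.0, 0.3, 0.8)]
r_prev = 0.0
for _ in range(1000):
    r = rng.normal(0, 0.01)
    for a in agents:
        pos = a.decide(r_prev)
        a.learn(r_prev, pos * r)
    r_prev = r
```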
Housing markets are inherently spatial, yet many existing models fail to capture this spatial dimension. Here we introduce a new graph-based approach for incorporating a spatial component into a large-scale urban housing agent-based model (ABM). The model explicitly captures several social and economic factors that influence the agents' decision-making behaviour (such as fear of missing out, their trend-following aptitude, and the strength of their submarket outreach), and interprets these factors in spatial terms. The proposed model is calibrated and validated with housing-market data for the Greater Sydney region. The ABM simulation results not only include predictions for the overall market, but also produce area-specific forecasts at the level of local government areas within Sydney, as they arise from individual buy and sell decisions. In addition, the simulation results elucidate agent preferences across submarkets, highlighting differences in behaviour, for example between first-time home buyers and investors, and between local and overseas investors.
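A minimal sketch of the graph-based spatial idea: submarkets are nodes of a graph, and a buyer scores each area by its recent price growth (a trend-following / fear-of-missing-out signal) discounted by graph distance from the buyer's home area (the submarket-outreach strength). The area names, growth figures, and weights are invented for illustration.

```python
# Hedged sketch: graph of submarkets, with distance-discounted attractiveness.
import networkx as nx

G = nx.Graph()
G.add_edges_from([("Inner", "East"), ("Inner", "West"),
                  ("East", "North"), ("West", "North")])
price_growth = {"Inner": 0.08, "East": 0.05, "West": 0.03, "North": 0.10}

def area_score(buyer_area, target, fomo=1.0, outreach=0.5):
    """Higher recent growth attracts; each hop from home discounts it."""
    hops = nx.shortest_path_length(G, buyer_area, target)
    return fomo * price_growth[target] * (outreach ** hops)

best = max(G.nodes, key=lambda a: area_score("Inner", a))
print(best, area_score("Inner", best))
```

Varying the outreach parameter already separates agent types in the spirit of the abstract: an overseas investor (outreach near 1) scores distant areas almost like local ones, while a first-time buyer (small outreach) stays close to home.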
The local volatility model is widely used for pricing and hedging financial derivatives. While its main appeal is its capability of reproducing any given surface of observed option prices (it provides a perfect fit), its essential component is a latent function which can be uniquely determined only in the limit of infinite data. To (re)construct this function, numerous calibration methods have been suggested, involving steps of interpolation and extrapolation, most often of parametric form and with point-estimate representations. We look at the calibration problem in a probabilistic framework with a nonparametric approach based on a Gaussian process prior. This immediately gives a way of encoding prior beliefs about the local volatility function, and a hypothesis model which is highly flexible yet not prone to over-fitting. Besides providing a method for calibrating a (range of) point estimate(s), we draw posterior inference from the distribution over local volatility. This leads to a better understanding of the uncertainty associated with the calibration in particular, and with the model in general. Further, we infer dynamical properties of local volatility by augmenting the hypothesis space with a time dimension. Ideally, this provides predictive distributions not only locally, but also for entire surfaces forward in time. We apply our approach to S&P 500 market data.
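The probabilistic viewpoint can be sketched with an off-the-shelf Gaussian process regression over a synthetic volatility surface on (moneyness, maturity): the posterior gives a mean surface plus pointwise uncertainty, which is the kind of inference the abstract describes. Note the paper calibrates the local volatility function through the pricing map itself; this much simpler regression, with made-up data, only illustrates the GP machinery.

```python
# Hedged sketch: GP posterior over a synthetic volatility "smile" surface.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(3)
K = rng.uniform(0.8, 1.2, 40)                          # moneyness of quotes
T = rng.uniform(0.1, 2.0, 40)                          # maturities in years
iv = 0.2 + 0.3 * (K - 1.0) ** 2 + 0.02 * np.sqrt(T)    # synthetic smile
iv += rng.normal(0, 0.005, 40)                         # quote noise

X = np.column_stack([K, T])
gp = GaussianProcessRegressor(kernel=RBF([0.2, 0.5]) + WhiteKernel(1e-4),
                              normalize_y=True).fit(X, iv)

# Posterior mean and pointwise uncertainty at an unquoted strike/maturity.
mean, std = gp.predict(np.array([[1.05, 0.75]]), return_std=True)
print(mean[0], std[0])
```

The returned standard deviation is the calibration-uncertainty readout: it is small near well-quoted regions of the surface and widens where data are sparse, which is exactly why a distribution over surfaces is more informative than a single point estimate.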