121 - Jiamin Yu 2021
Big data from autonomous vehicles has long been used for perception, prediction, planning, and control of driving. Naturally, the question increasingly arises of why this big data is not also used for risk management and actuarial modeling. This article examines the emerging technical difficulties, new ideas, and methods of risk modeling under autonomous driving scenarios. Compared with the traditional risk model, the novel model is more consistent with real road traffic and driving safety performance. More importantly, it provides technical feasibility for realizing risk assessment and car insurance pricing in a computer simulation environment.
We derive a formula for the adjoint $\overline{A}$ of a square-matrix operation of the form $C=f(A)$, where $f$ is holomorphic in the neighborhood of each eigenvalue. We then apply the formula to derive closed-form expressions in particular cases of interest, such as the case when we have a spectral decomposition $A=UDU^{-1}$, the spectrum cut-off $C=A_+$, and the Nearest Correlation Matrix routine. Finally, we explain how to simplify the computation of adjoints for regularized linear regression coefficients.
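The forward operation this abstract builds on, evaluating $C=f(A)$ through a spectral decomposition $A=UDU^{-1}$, can be sketched as follows. This is a minimal illustration of the decomposition case only, not the paper's adjoint formula, and the test function $f(x)=x^2$ is an assumption chosen so the result can be checked against plain matrix multiplication:

```python
import numpy as np

def matrix_function(A, f):
    """Evaluate C = f(A) via a spectral decomposition A = U D U^{-1}.

    Assumes A is diagonalizable; f is applied elementwise to the
    eigenvalues. For a real A with complex eigenvalue pairs the result
    is real up to floating-point rounding.
    """
    d, U = np.linalg.eig(A)
    return U @ np.diag(f(d)) @ np.linalg.inv(U)

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))

# sanity check: f(x) = x^2 must reproduce A @ A
C = matrix_function(A, lambda x: x**2)
assert np.allclose(C, A @ A)
```

The same routine applies to any $f$ holomorphic near the eigenvalues (e.g. `np.exp` for the matrix exponential), which is exactly the setting in which the adjoint formula of the paper is derived.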
In this paper, we first introduce a simulator of case estimates of incurred losses, called `SPLICE` (Synthetic Paid Loss and Incurred Cost Experience). In three modules, case estimates are simulated in continuous time, and a record is output for each individual claim. Revisions for the case estimates are also simulated as a sequence over the lifetime of the claim, in a number of different situations. Furthermore, some dependencies in relation to case estimates of incurred losses are incorporated, particularly recognizing certain properties of case estimates that are found in practice. For example, the magnitude of revisions depends on ultimate claim size, as does the distribution of the revisions over time. Some of these revisions occur in response to the occurrence of claim payments, and so `SPLICE` requires input of simulated per-claim payment histories. The claim data can be summarized by accident and payment periods whose duration is an arbitrary choice (e.g. month, quarter, etc.) available to the user. `SPLICE` is a fully documented R package that is publicly available and open source (on CRAN). It is built on an existing simulator of individual claim experience called `SynthETIC` (Avanzi et al., 2021a,b), which offers flexible modelling of occurrence, notification, as well as the timing and magnitude of individual partial payments. This is in contrast with the incurred losses, which constitute the additional contribution of `SPLICE`. The inclusion of incurred loss estimates provides a facility that almost no other simulator offers.
295 - Jiamin Yu 2021
Since Claude Shannon founded information theory, it has widely fostered other scientific fields, such as statistics, artificial intelligence, biology, behavioral science, neuroscience, economics, and finance. Unfortunately, actuarial science has hardly benefited from information theory. So far, only one actuarial paper on information theory can be found via academic search engines. Undoubtedly, information and risk, both forms of uncertainty, are constrained by entropy laws. Today's insurance big data era means more data and more information. It is unacceptable for risk management and actuarial science to ignore information theory. Therefore, this paper aims to exploit information theory to discover the performance limits of insurance big data systems and seek guidance for risk modeling and the development of actuarial pricing systems.
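The basic quantity underlying the entropy bounds this abstract invokes is Shannon entropy. A minimal sketch of the definition, purely illustrative and not taken from the paper:

```python
import numpy as np

def shannon_entropy(p):
    """Shannon entropy H(p) = -sum_i p_i * log2(p_i), in bits.

    p is a discrete probability distribution; zero-probability
    outcomes contribute nothing (0 * log 0 := 0).
    """
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# a fair coin carries exactly one bit of information
assert shannon_entropy([0.5, 0.5]) == 1.0
# a certain outcome carries none
assert shannon_entropy([1.0]) == 0.0
```

In the paper's framing, this is the kind of quantity that bounds how much risk information an insurance big data system can extract from its inputs.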
The authors provide a comprehensive overview of flexibility characterization along the dimensions of time, spatiality, resource, and risk in power systems. These dimensions are discussed in relation to flexibility assets, products, and services, as well as new and existing flexibility market designs. The authors argue that flexibility should be evaluated based on the dimensions under discussion. Flexibility products and services can increase the efficiency of power systems and markets if flexibility assets and related services are taken into consideration and used along the time, geography, technology, and risk dimensions. Although it is possible to evaluate flexibility in existing market designs, a local flexibility market may be needed to exploit the value of the flexibility, depending on the dimensions of the flexibility products and services. To locate flexibility in power grids and prevent incorrect valuations, the authors also discuss TSO-DSO coordination along the four dimensions, and they present interrelations between flexibility dimensions, products, services, and related market designs for productive usage of flexible electricity.
In this paper, we construct a decentralized clearing mechanism which endogenously and automatically provides a claims resolution procedure. This mechanism can be used to clear a network of obligations through blockchain. In particular, we investigate default contagion in a network of smart contracts cleared through blockchain. In so doing, we provide an algorithm which constructs the blockchain so as to guarantee that the payments can be verified and the miners earn a fee. Additionally, we consider the special case in which the blocks have unbounded capacity to provide a simple equilibrium clearing condition for the terminal net worths; existence and uniqueness are proven for this system. Finally, we consider the optimal bidding strategies for each firm in the network so that all firms are utility maximizers with respect to their terminal wealths. We first look for mixed Nash equilibrium bidding strategies, and then also consider Pareto optimal bidding strategies. The implications of these strategies, and more broadly of blockchain, for systemic risk are considered.
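The classical benchmark behind such equilibrium clearing conditions is the Eisenberg-Noe clearing vector, computed as a fixed point of "pay what you owe, or all you have". The sketch below is the standard centralized computation, given as background only; the paper's contribution is a decentralized, blockchain-based mechanism, and the example network is an assumption for illustration:

```python
import numpy as np

def clearing_vector(L, e, tol=1e-12, max_iter=10_000):
    """Eisenberg-Noe clearing payments via fixed-point iteration.

    L[i, j] = nominal liability of firm i to firm j;
    e[i]    = external assets of firm i.
    """
    p_bar = L.sum(axis=1)  # total obligations of each firm
    # relative liability matrix (rows of zeros for firms owing nothing)
    Pi = np.divide(L, p_bar[:, None], out=np.zeros_like(L),
                   where=p_bar[:, None] > 0)
    p = p_bar.copy()
    for _ in range(max_iter):
        # each firm pays min(what it owes, assets + payments received)
        p_new = np.minimum(p_bar, e + Pi.T @ p)
        if np.max(np.abs(p_new - p)) < tol:
            break
        p = p_new
    return p

# toy chain: firm 0 owes 10 to firm 1, firm 1 owes 10 to firm 2
L = np.array([[0.0, 10.0, 0.0],
              [0.0, 0.0, 10.0],
              [0.0, 0.0, 0.0]])
e = np.array([5.0, 2.0, 1.0])
p = clearing_vector(L, e)  # -> [5, 7, 0]: firm 0 defaults, contagion hits firm 1
```

Firm 0 can only pay 5 of its 10 obligation; firm 1 then has 2 + 5 = 7 against its own obligation of 10, so the shortfall propagates down the chain, which is the default contagion the abstract refers to.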
Recently, Castagnoli et al. (2021) introduced the class of star-shaped risk measures as a generalization of convex and coherent ones, proving that they admit a representation as the pointwise minimum of some family of convex risk measures. Concomitantly, Jia et al. (2020) proved a similar representation result for monetary risk measures, which are more general than star-shaped ones. This raises the question of how the two classes are connected. In this letter, we provide an answer by casting light on the importance of the acceptability of 0, which is linked to the property of normalization. We then show that, under mild conditions, a monetary risk measure is only a translation away from star-shapedness.
Motivated by the problem of finding dual representations for quasiconvex systemic risk measures in financial mathematics, we study quasiconvex compositions in an abstract infinite-dimensional setting. We calculate an explicit formula for the penalty function of the composition in terms of the penalty functions of the ingredient functions. The proof makes use of a nonstandard minimax inequality (rather than equality as in the standard case) that is available in the literature. In the second part of the paper, we apply our results in concrete probabilistic settings for systemic risk measures, in particular in the context of the Eisenberg-Noe clearing model. We also provide novel economic interpretations of the dual representations in these settings.
212 - HyeonJun Kim 2021
The renowned log-periodic power law (LPPL) method is one of the few ways a financial market crash can be predicted. Alongside LPPL, this paper proposes a novel method of stock market crash prediction using a white-box model derived from simple assumptions about the state of a rational bubble. By applying this model to the Dow Jones Index and Bitcoin market price data, it is shown that the model successfully predicts some major crashes in both markets, implying the high sensitivity and generalization ability of the model.
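For reference, the LPPL baseline this abstract compares against models the log-price ahead of a crash at critical time $t_c$ as a power law decorated with log-periodic oscillations (the Johansen-Ledoit-Sornette form). A minimal sketch; the parameter values are illustrative assumptions, not fits to the Dow Jones or Bitcoin data used in the paper:

```python
import numpy as np

def lppl_log_price(t, tc, m, omega, A, B, C, phi):
    """LPPL form: ln p(t) = A + B*(tc-t)^m + C*(tc-t)^m * cos(omega*ln(tc-t) + phi).

    Valid for t < tc; oscillations accelerate as t approaches the
    critical (crash) time tc.
    """
    dt = tc - t
    return A + B * dt**m + C * dt**m * np.cos(omega * np.log(dt) + phi)

t = np.linspace(0.0, 0.99, 200)  # times approaching tc = 1
y = lppl_log_price(t, tc=1.0, m=0.5, omega=8.0,
                   A=10.0, B=-1.0, C=0.1, phi=0.0)
```

In practice all seven parameters are fitted to observed log-prices, and the estimated $t_c$ is read as the most probable crash time; the paper's white-box model is an alternative to this fitting procedure.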
564 - Rui Wang 2021
We introduce the notion of Point in Time Economic Scenario Generation (PiT ESG) with a clear mathematical problem formulation to unify and compare economic scenario generation approaches conditional on forward looking market data. Such PiT ESGs should provide quicker and more flexible reactions to sudden economic changes than traditional ESGs calibrated solely to long periods of historical data. We specifically take as economic variable the S&P500 Index with the VIX Index as forward looking market data to compare the nonparametric filtered historical simulation, GARCH model with joint likelihood estimation (parametric), Restricted Boltzmann Machine and the conditional Variational Autoencoder (Generative Networks) for their suitability as PiT ESG. Our evaluation consists of statistical tests for model fit and benchmarking the out of sample forecasting quality with a strategy backtest using model output as stop loss criterion. We find that both Generative Networks outperform the nonparametric and classic parametric model in our tests, but that the CVAE seems to be particularly well suited for our purposes: yielding more robust performance and being computationally lighter.
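The parametric benchmark mentioned above is a GARCH-type model of return volatility. A minimal GARCH(1,1) simulation sketch, for orientation only; the paper estimates the model jointly with VIX data, whereas the parameter values here are conventional placeholder choices:

```python
import numpy as np

def simulate_garch11(n, omega, alpha, beta, seed=0):
    """Simulate n returns from a GARCH(1,1):
        r_t = sigma_t * z_t,  z_t ~ N(0, 1)
        sigma_t^2 = omega + alpha * r_{t-1}^2 + beta * sigma_{t-1}^2
    Requires alpha + beta < 1 for a finite unconditional variance.
    """
    rng = np.random.default_rng(seed)
    r = np.empty(n)
    sigma2 = omega / (1.0 - alpha - beta)  # start at unconditional variance
    for t in range(n):
        r[t] = np.sqrt(sigma2) * rng.standard_normal()
        sigma2 = omega + alpha * r[t] ** 2 + beta * sigma2
    return r

returns = simulate_garch11(10_000, omega=1e-5, alpha=0.08, beta=0.90)
```

The generative-network alternatives (RBM, CVAE) replace this fixed recursion with a learned conditional distribution of next-period scenarios given the VIX signal, which is what makes them candidates for PiT ESG.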