
Stochastic loss reserving with mixture density neural networks

Posted by: Benjamin Avanzi
Publication date: 2021
Research field: Mathematical statistics; Finance
Paper language: English





Neural networks offer a versatile, flexible and accurate approach to loss reserving. However, such applications have focused primarily on the (important) problem of fitting accurate central estimates of the outstanding claims. In practice, properties regarding the variability of outstanding claims are equally important (e.g., quantiles for regulatory purposes). In this paper we fill this gap by applying a Mixture Density Network (MDN) to loss reserving. The approach combines a neural network architecture with a Gaussian mixture distribution to achieve, simultaneously, an accurate central estimate and a flexible distributional fit. Model fitting is done using a rolling-origin approach. Our approach consistently outperforms the classical over-dispersed Poisson model, for both central estimates and quantiles of interest, when applied to a wide range of simulated environments of various complexity and specifications. We further propose two extensions of the MDN approach. Firstly, we present a hybrid GLM-MDN approach called ResMDN. This hybrid approach balances the tractability and ease of understanding of a traditional GLM on one hand, with the additional accuracy and distributional flexibility provided by the MDN on the other. We show that it can successfully improve on the errors of the baseline cross-classified over-dispersed Poisson (ccODP) model, although there is generally a loss of performance when compared to the MDN in the examples we considered. Secondly, we allow for explicit projection constraints, so that actuarial judgement can be directly incorporated in the modelling process. Throughout, we focus on aggregate loss triangles, and show that our methodologies are tractable and outperform traditional approaches even with relatively limited amounts of data. We use both simulated data, to validate properties, and real data, to illustrate and ascertain the practicality of the approaches.
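The core idea above is a network that maps each cell of the loss triangle to the parameters of a Gaussian mixture, fitted by maximum likelihood. Below is a minimal sketch of such a mixture density network in PyTorch; the layer sizes, the number of mixture components, the optimiser settings and the synthetic features are illustrative assumptions, not the architecture or data used in the paper.

```python
# Minimal MDN sketch (assumed, for illustration only).
import torch
import torch.nn as nn
import torch.nn.functional as F

class MDN(nn.Module):
    def __init__(self, n_features: int, n_hidden: int = 32, n_components: int = 3):
        super().__init__()
        self.hidden = nn.Sequential(nn.Linear(n_features, n_hidden), nn.Tanh())
        self.pi = nn.Linear(n_hidden, n_components)         # mixture weight logits
        self.mu = nn.Linear(n_hidden, n_components)         # component means
        self.log_sigma = nn.Linear(n_hidden, n_components)  # component log std devs

    def forward(self, x):
        h = self.hidden(x)
        return self.pi(h), self.mu(h), self.log_sigma(h)

def mdn_nll(pi_logits, mu, log_sigma, y):
    """Negative log-likelihood of y under the predicted Gaussian mixture."""
    log_pi = F.log_softmax(pi_logits, dim=-1)
    comp = torch.distributions.Normal(mu, log_sigma.exp())
    log_prob = comp.log_prob(y.unsqueeze(-1))                # shape (batch, K)
    return -torch.logsumexp(log_pi + log_prob, dim=-1).mean()

# Toy usage: x could encode the accident and development period of a triangle
# cell, y the (scaled) incremental claim amount; here both are synthetic.
x = torch.rand(256, 2)
y = torch.randn(256)
model = MDN(n_features=2)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    loss = mdn_nll(*model(x), y)
    loss.backward()
    opt.step()
```

Central estimates then follow from the fitted mixture mean, and quantiles from the mixture distribution itself, which is what gives the approach its distributional flexibility.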


Read also

Carter T. Butts (2017)
Continuous mixtures of distributions are widely employed in the statistical literature as models for phenomena with highly divergent outcomes; in particular, many familiar heavy-tailed distributions arise naturally as mixtures of light-tailed distributions (e.g., Gaussians), and play an important role in applications as diverse as modeling of extreme values and robust inference. In the case of social networks, continuous mixtures of graph distributions can likewise be employed to model social processes with heterogeneous outcomes, or as robust priors for network inference. Here, we introduce some simple families of network models based on continuous mixtures of baseline distributions. While analytically and computationally tractable, these models allow more flexible modeling of cross-graph heterogeneity than is possible with conventional baselines (e.g., the Bernoulli or $U|MAN$ distributions). We illustrate the utility of these baseline mixture models with application to problems of multiple-network ERGMs, network evolution, and efficient network inference. Our results underscore the potential ubiquity of network processes with nontrivial mixture behavior in natural settings, and raise some potentially disturbing questions regarding the adequacy of current network data collection practices.
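As a toy illustration of the continuous-mixture idea (assumed code, not taken from the paper), the sketch below draws the edge probability of a Bernoulli (Erdos-Renyi) graph from a Beta distribution and compares the cross-graph dispersion of edge counts against a homogeneous Bernoulli baseline; the Beta parameters and network size are arbitrary choices for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_nodes, n_graphs = 30, 2000
n_dyads = n_nodes * (n_nodes - 1) // 2

# Continuous mixture: p ~ Beta(a, b), then edges | p ~ Bernoulli(p) over dyads.
a, b = 0.5, 4.5                       # mean edge probability 0.1, highly dispersed
p = rng.beta(a, b, size=n_graphs)
edges_mixture = rng.binomial(n_dyads, p)

# Homogeneous Bernoulli baseline with the same mean edge probability.
edges_bernoulli = rng.binomial(n_dyads, a / (a + b), size=n_graphs)

print("edge-count variance, mixture  :", edges_mixture.var())
print("edge-count variance, Bernoulli:", edges_bernoulli.var())
# The mixture exhibits far larger cross-graph heterogeneity in density.
```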
The aim of this paper is to present a mixture composite regression model for claim severity modelling. Claim severity modelling poses several challenges, such as multimodality, heavy-tailedness and systematic effects in the data. We tackle this modelling problem by studying a mixture composite regression model for the simultaneous modelling of attritional and large claims, and for considering systematic effects in both the mixture components and the mixing probabilities. For model fitting, we present a group-fused regularization approach that allows us to select the explanatory variables which significantly impact the mixing probabilities and the different mixture components, respectively. We develop an asymptotic theory for this regularized estimation approach, and fitting is performed using a novel Generalized Expectation-Maximization algorithm. We exemplify our approach on a real motor insurance data set.
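To make the composite structure concrete, here is a minimal assumed sketch of a two-component severity mixture: a lognormal component for attritional claims, a Pareto component for large claims, and a mixing probability driven by a covariate through a logistic link. All parameter values are made up, and the sketch omits the group-fused regularization and the Generalized EM fitting described in the abstract.

```python
import numpy as np
from scipy import stats

def mixing_prob(x, beta0=-0.5, beta1=1.0):
    """P(claim is attritional | covariate x), logistic in x (assumed coefficients)."""
    return 1.0 / (1.0 + np.exp(-(beta0 + beta1 * x)))

def severity_density(y, x):
    """Mixture density: lognormal for attritional claims, Pareto for large claims."""
    p = mixing_prob(x)
    f_attritional = stats.lognorm.pdf(y, s=1.0, scale=np.exp(7.0))  # small claims
    f_large = stats.pareto.pdf(y, b=2.0, scale=5e4)                 # heavy tail
    return p * f_attritional + (1.0 - p) * f_large

y = np.array([1e3, 1e4, 1e5, 1e6])
print(severity_density(y, x=0.3))
```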
An important problem in the analysis of neural data is to characterize interactions across brain regions from high-dimensional multiple-electrode recordings during a behavioral experiment. Lead-lag effects indicate possible directional flows of neural information, but they are often transient, appearing during short intervals of time. Such non-stationary interactions can be difficult to identify, but they can be found by taking advantage of the replication structure inherent to many neurophysiological experiments. To describe non-stationary interactions between replicated pairs of high-dimensional time series, we developed a method of estimating latent, non-stationary cross-correlation. Our approach begins with an extension of probabilistic CCA to the time series setting, which provides a model-based interpretation of multiset CCA. Because the covariance matrix describing non-stationary dependence is high-dimensional, we assume sparsity of cross-correlations within a range of possible interesting lead-lag effects. We show that the method can perform well in realistic settings and we apply it to 192 simultaneous local field potential (LFP) recordings from prefrontal cortex (PFC) and visual cortex (area V4) during a visual memory task. We find lead-lag relationships that are highly plausible, being consistent with related results in the literature.
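A much simpler illustration of the replication idea (assumed code, not the paper's latent-variable model) is to estimate a time-resolved lagged cross-correlation by averaging over trials; the synthetic data below contain a transient lead-lag effect of the kind the abstract describes.

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_time, lag = 200, 120, 5

# Signal y lags signal x by `lag` samples, but only during t in [60, 90).
x = rng.standard_normal((n_trials, n_time))
y = rng.standard_normal((n_trials, n_time))
y[:, 60:90] += 0.8 * x[:, 60 - lag:90 - lag]

def lagged_corr(x, y, lag):
    """Per-time-point correlation of x(t) with y(t + lag), computed across trials."""
    xs, ys = x[:, :n_time - lag], y[:, lag:]
    xs = (xs - xs.mean(axis=0)) / xs.std(axis=0)
    ys = (ys - ys.mean(axis=0)) / ys.std(axis=0)
    return (xs * ys).mean(axis=0)

corr = lagged_corr(x, y, lag)
print("peak lagged correlation in the transient window:", float(corr[55:90].max()))
```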
In this paper we review Bernstein and grid-type copulas for arbitrary dimensions and general grid resolutions, in connection with discrete random vectors possessing uniform margins. We further suggest a pragmatic way to fit the dependence structure of multivariate data to Bernstein copulas via grid-type copulas and empirical contingency tables. Finally, we discuss a Monte Carlo study for the simulation and PML estimation of aggregate dependent losses from observed windstorm and flooding data.
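The pragmatic fitting step mentioned above starts from an empirical contingency table of rank-transformed data on a regular grid; the sketch below shows only that step, with an arbitrary grid resolution and toy data, and does not implement the Bernstein-copula smoothing or the PML estimation discussed in the abstract.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 1000, 8                       # sample size and grid resolution (assumed)

# Toy dependent losses standing in for, e.g., windstorm and flooding severities.
z = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.6], [0.6, 1.0]], size=n)

# Pseudo-observations: ranks scaled to (0, 1), giving (near-)uniform margins.
u = (np.argsort(np.argsort(z, axis=0), axis=0) + 0.5) / n

# Empirical contingency table on an m x m grid; dividing by n gives the cell
# masses of a grid-type (checkerboard) copula with uniform margins.
counts, _, _ = np.histogram2d(u[:, 0], u[:, 1], bins=m, range=[[0, 1], [0, 1]])
print(counts / n)
```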
Edmondo Trentin (2020)
Albeit worryingly underrated in the recent literature on machine learning in general (and on deep learning in particular), multivariate density estimation is a fundamental task in many applications, at least implicitly, and still an open issue. With a few exceptions, deep neural networks (DNNs) have seldom been applied to density estimation, mostly due to the unsupervised nature of the estimation task, and (especially) due to the need for constrained training algorithms that end up realizing proper probabilistic models satisfying Kolmogorov's axioms. Moreover, in spite of the well-known improvement in modeling capabilities yielded by mixture models over plain single-density statistical estimators, no proper mixtures of multivariate DNN-based component densities have been investigated so far. The paper fills this gap by extending our previous work on neural mixture densities (NMMs) to multivariate DNN mixtures. A maximum-likelihood (ML) algorithm for estimating Deep NMMs (DNMMs) is presented, which numerically satisfies a combination of hard and soft constraints aimed at ensuring that Kolmogorov's axioms hold. The class of probability density functions that can be modeled to any degree of precision via DNMMs is formally defined. A procedure for the automatic selection of the DNMM architecture, as well as of the hyperparameters of its ML training algorithm, is presented (exploiting the probabilistic nature of the DNMM). Experimental results on univariate and multivariate data are reported, corroborating the effectiveness of the approach and its superiority to the most popular statistical estimation techniques.