
Sensitivity indices for output on a Riemannian manifold

Publication date: 2018
Language: English





In the context of computer code experiments, sensitivity analysis of a complicated input-output system is often performed by ranking the so-called Sobol indices. One reason for the popularity of Sobol's approach is the simplicity of the statistical estimation of these indices using the so-called Pick-and-Freeze method. In this work we propose and study sensitivity indices for the case where the output lies on a Riemannian manifold. These indices are based on a Cramér-von-Mises-like criterion that takes into account the geometry of the output support. We propose a Pick-and-Freeze-like estimator of these indices based on a $U$-statistic. The asymptotic properties of these estimators are studied. Further, we provide and discuss some interesting numerical examples.
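To make the estimation scheme concrete, the following is a minimal sketch of the classical Pick-and-Freeze estimator for a scalar output; the paper's manifold-valued indices reuse the same pick-and-freeze design inside a $U$-statistic. The uniform input distribution, all function names, and the Ishigami test function are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def pick_freeze_sobol(f, p, i, n, rng=None):
    """First-order Sobol index of input i via the Pick-and-Freeze scheme.

    Draw two independent input samples; the 'frozen' sample keeps
    column i from the first sample and re-picks all other columns.
    The index is estimated from the empirical covariance of the two
    output samples, since Cov(Y, Y^i) = Var(E[Y | X_i]).
    """
    rng = np.random.default_rng(rng)
    X = rng.uniform(size=(n, p))        # assumes i.i.d. U(0,1) inputs
    Xp = rng.uniform(size=(n, p))
    Xp[:, i] = X[:, i]                  # freeze coordinate i
    y, y1 = f(X), f(Xp)
    m = np.mean((y + y1) / 2)           # pooled mean (variance-reducing variant)
    num = np.mean(y * y1) - m**2        # estimates Var(E[Y | X_i])
    den = np.mean((y**2 + y1**2) / 2) - m**2   # estimates Var(Y)
    return num / den

# Example: the Ishigami function, whose Sobol indices are known in closed form.
def ishigami(X):
    x = 2 * np.pi * X - np.pi           # map U(0,1) inputs to U(-pi, pi)
    return np.sin(x[:, 0]) + 7 * np.sin(x[:, 1])**2 + 0.1 * x[:, 2]**4 * np.sin(x[:, 0])

print([round(pick_freeze_sobol(ishigami, 3, i, 100_000, rng=0), 3) for i in range(3)])
```

The pooled-mean variant in the numerator and denominator is a standard variance-reduction choice; for the Ishigami function the three estimates should come out near 0.31, 0.44, and 0.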

Related research

O. Roustant (2019)
The so-called polynomial chaos expansion is widely used in computer experiments. For example, it is a powerful tool to estimate Sobol sensitivity indices. In this paper, we consider generalized chaos expansions built on general tensor Hilbert bases. In this framework, we revisit the computation of the Sobol indices and give general lower bounds for these indices. The case of the eigenfunction system associated with a Poincaré differential operator leads to lower bounds involving the derivatives of the analyzed function and provides an efficient tool for variable screening. These lower bounds are put into action on both toy and real-life models, demonstrating their accuracy.
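As a concrete illustration of the link exploited above: when the basis is orthonormal, the output variance is the sum of the squared coefficients over nonzero multi-indices, and the first-order Sobol index of variable $i$ collects the terms whose multi-index involves variable $i$ alone. A minimal sketch, with a hypothetical coefficient dictionary:

```python
import numpy as np

def sobol_from_pce(coeffs):
    """First-order Sobol indices from the coefficients of an expansion
    on an orthonormal tensor basis.

    `coeffs` maps multi-indices (tuples of per-variable degrees) to
    coefficients c_alpha.  The total variance is the sum of c_alpha^2
    over alpha != 0; the first-order index of variable i sums the terms
    whose multi-index involves variable i only.
    """
    d = len(next(iter(coeffs)))
    var = sum(c ** 2 for a, c in coeffs.items() if any(a))
    return [
        sum(c ** 2 for a, c in coeffs.items()
            if a[i] > 0 and all(a[j] == 0 for j in range(d) if j != i)) / var
        for i in range(d)
    ]

# Hypothetical 2-variable expansion: f = 1 + 2*H1(x1) + 0.5*H2(x2) + 0.1*H1(x1)*H1(x2)
coeffs = {(0, 0): 1.0, (1, 0): 2.0, (0, 2): 0.5, (1, 1): 0.1}
print(sobol_from_pce(coeffs))   # approx. [0.939, 0.059]
```

Coefficients on interaction multi-indices, such as (1, 1) above, contribute to the total variance but to no first-order index, which is why the two indices do not sum to one.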
Functional data analysis on nonlinear manifolds has drawn recent interest. Sphere-valued functional data, which are encountered for example as movement trajectories on the surface of the earth, are an important special case. We consider an intrinsic principal component analysis for smooth Riemannian manifold-valued functional data and study its asymptotic properties. Riemannian functional principal component analysis (RFPCA) is carried out by first mapping the manifold-valued data through Riemannian logarithm maps to tangent spaces around the time-varying Fréchet mean function, and then performing a classical multivariate functional principal component analysis on the linear tangent spaces. Representations of the Riemannian manifold-valued functions and the eigenfunctions on the original manifold are then obtained with exponential maps. The tangent-space approximation through functional principal component analysis is shown to be well-behaved in terms of controlling the residual variation if the Riemannian manifold has nonnegative curvature. Specifically, we derive a central limit theorem for the mean function, as well as root-$n$ uniform convergence rates for other model components, including the covariance function, eigenfunctions, and functional principal component scores. Our applications include a novel framework for the analysis of longitudinal compositional data, achieved by mapping longitudinal compositional data to trajectories on the sphere, illustrated with longitudinal fruit fly behavior patterns. RFPCA is shown to be superior in terms of trajectory recovery to an unrestricted functional principal component analysis in applications and simulations, and is also found to produce principal component scores that are better predictors for classification than traditional functional principal component scores.
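The core pipeline (log-map the data to the tangent space at the Fréchet mean, run Euclidean PCA there, map the principal directions back with the exp map) can be sketched for static sphere-valued data as follows. The time-varying mean, the functional structure, and the asymptotics of the actual method are omitted, and the data and names are synthetic placeholders.

```python
import numpy as np

def sphere_log(p, q):
    """Log map on the unit sphere: the tangent vector at p pointing to q."""
    theta = np.arccos(np.clip(p @ q, -1.0, 1.0))
    w = q - (p @ q) * p
    n = np.linalg.norm(w)
    return np.zeros_like(p) if n < 1e-12 else theta * w / n

def sphere_exp(p, v):
    """Exp map on the unit sphere: follow the geodesic from p with velocity v."""
    n = np.linalg.norm(v)
    return p if n < 1e-12 else np.cos(n) * p + np.sin(n) * (v / n)

rng = np.random.default_rng(0)
mu = np.array([0.0, 0.0, 1.0])              # stand-in for the estimated Frechet mean
tangents = 0.2 * rng.standard_normal((200, 3))
tangents -= (tangents @ mu)[:, None] * mu   # project onto the tangent space at mu
data = np.array([sphere_exp(mu, v) for v in tangents])

# Tangent-space PCA: log-map the data to T_mu(S^2), run Euclidean PCA there,
# then push the leading principal directions back to the sphere with exp.
V = np.array([sphere_log(mu, q) for q in data])
_, _, Wt = np.linalg.svd(V - V.mean(axis=0), full_matrices=False)
modes_on_sphere = [sphere_exp(mu, w) for w in Wt[:2]]
```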
Clustering methods seek to partition data such that elements are more similar to elements in the same cluster than to elements in different clusters. The main challenge in this task is the lack of a unified definition of a cluster, especially for high-dimensional data. Different methods and approaches have been proposed to address this problem. This paper continues the study originated by Efimov, Adamyan and Spokoiny (2019), in which a novel approach to adaptive nonparametric clustering called Adaptive Weights Clustering (AWC) was proposed. The method allows analyzing high-dimensional data with an unknown number of unbalanced clusters of arbitrary shape under very weak modeling assumptions. The procedure demonstrates state-of-the-art performance and is very efficient even for large data dimension D. However, the theoretical study in Efimov, Adamyan and Spokoiny (2019) is very limited and did not really address the question of efficiency. This paper makes a significant step towards understanding the remarkable performance of the AWC procedure, particularly in high dimension. The approach is based on combining the ideas of adaptive clustering and manifold learning. The manifold hypothesis means that high-dimensional data can be well approximated by a d-dimensional manifold for small d, helping to overcome the curse of dimensionality and to obtain sharp bounds on the cluster separation that depend only on the intrinsic dimension d. We also address the problem of parameter tuning. Our general theoretical results are illustrated by some numerical experiments.
The Riemannian geometry of covariance matrices has been essential to several successful applications, in computer vision, biomedical signal and image processing, and radar data processing. For these applications, an important ongoing challenge is to develop Riemannian-geometric tools which are adapted to structured covariance matrices. The present paper proposes to meet this challenge by introducing a new class of probability distributions, Gaussian distributions of structured covariance matrices. These are Riemannian analogs of Gaussian distributions, which only sample from covariance matrices having a preassigned structure, such as complex, Toeplitz, or block-Toeplitz. The usefulness of these distributions stems from three features: (1) they are completely tractable, analytically or numerically, when dealing with large covariance matrices, (2) they provide a statistical foundation for the concept of the structured Riemannian barycentre (i.e. Fréchet or geometric mean), (3) they lead to efficient statistical learning algorithms, which realise, among other tasks, density estimation and classification of structured covariance matrices. The paper starts from the observation that several spaces of structured covariance matrices, considered from a geometric point of view, are Riemannian symmetric spaces. Accordingly, it develops an original theory of Gaussian distributions on Riemannian symmetric spaces, of their statistical inference, and of their relationship to the concept of the Riemannian barycentre. Then, it uses this original theory to give a detailed description of Gaussian distributions of three kinds of structured covariance matrices: complex, Toeplitz, and block-Toeplitz. Finally, it describes algorithms for density estimation and classification of structured covariance matrices, based on Gaussian distribution mixture models.
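For intuition, on the manifold of symmetric positive-definite (SPD) covariance matrices such a Riemannian Gaussian has density proportional to $\exp(-d(\bar{X}, X)^2 / (2\sigma^2))$, where $d$ is the affine-invariant distance and the normalizing constant depends only on $\sigma$. A minimal numerical sketch, with the normalizing constant omitted and all function names our own:

```python
import numpy as np
from scipy.linalg import eigvalsh

def affine_invariant_distance(A, B):
    """d(A, B) = ||log(A^{-1/2} B A^{-1/2})||_F, computed from the
    generalized eigenvalues of the SPD pair (B, A)."""
    lam = eigvalsh(B, A)                  # all positive when A, B are SPD
    return np.sqrt(np.sum(np.log(lam) ** 2))

def riemannian_gaussian_logpdf(X, Xbar, sigma):
    """Log-density of the Riemannian Gaussian centred at Xbar, up to the
    sigma-dependent normalizing constant, which this sketch omits."""
    return -affine_invariant_distance(X, Xbar) ** 2 / (2 * sigma ** 2)

A = np.eye(3)
B = np.diag([2.0, 1.0, 0.5])
print(affine_invariant_distance(A, B))        # ~0.980
print(riemannian_gaussian_logpdf(B, A, 1.0))  # ~-0.480
```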
Prediction for high-dimensional time series is a challenging task due to the curse of dimensionality. Classical parametric models like ARIMA or VAR require strong modeling assumptions and stationarity in time, and are often overparametrized. This paper offers a new flexible approach using recent ideas from manifold learning. The considered model includes linear models such as the central subspace model and ARIMA as particular cases. The proposed procedure combines manifold denoising techniques with a simple nonparametric prediction by local averaging. The resulting procedure demonstrates very reasonable performance on real-life econometric time series. We also provide a theoretical justification of the manifold estimation procedure.
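The local-averaging half of such a procedure can be sketched as a nearest-neighbour forecast on delay embeddings; the manifold-denoising step and the paper's tuning theory are omitted, and the window and neighbour parameters and the test data below are illustrative.

```python
import numpy as np

def local_average_forecast(series, window, k):
    """One-step-ahead forecast by local averaging.

    Embeds the series into sliding windows of length `window` (a crude
    stand-in for a learned low-dimensional state), finds the k past
    windows closest to the most recent one, and averages their successors.
    """
    X = np.lib.stride_tricks.sliding_window_view(series[:-1], window)
    successors = series[window:]          # value following each window
    query = series[-window:]              # the most recent window
    dists = np.linalg.norm(X - query, axis=1)
    nearest = np.argsort(dists)[:k]
    return successors[nearest].mean()

# Illustrative data: a noisy sine wave.
rng = np.random.default_rng(0)
t = np.arange(500)
y = np.sin(0.1 * t) + 0.1 * rng.standard_normal(500)
print(local_average_forecast(y, window=20, k=10))
```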
