
Intrinsic Non-stationary Covariance Function for Climate Modeling

Added by Chintan Dalal
Publication date: 2015
Language: English





Designing a covariance function that represents the underlying correlation is a crucial step in modeling complex natural systems, such as climate models. Geospatial datasets at a global scale usually suffer from non-stationarity and non-uniformly smooth spatial boundaries. Gaussian process regression with a non-stationary covariance function has shown promise for this task, because such a covariance function adapts to the variable correlation structure of the underlying distribution. In this paper, we generalize the non-stationary covariance function to address these global-scale geospatial issues. We call the generalized covariance function an intrinsic non-stationary covariance function, because it uses intrinsic statistics of symmetric positive definite matrices to represent the characteristic length scale and thereby model the local stochastic process. Experiments on a synthetic dataset and a real dataset of relative sea-level changes across the world demonstrate improved error metrics for the regression estimates obtained with the proposed approach.
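The paper itself gives no code, but its description — a Gaussian-process covariance whose characteristic length scale is a field of symmetric positive definite (SPD) matrices, summarized with intrinsic (manifold) statistics — resembles the Paciorek-Schervish family of non-stationary kernels. The Python sketch below is an illustration under that assumption only; the function names, the log-Euclidean choice of intrinsic mean, and the squared-exponential base correlation are ours, not the authors'.

```python
import numpy as np
from scipy.linalg import expm, logm


def log_euclidean_mean(mats, weights=None):
    """Intrinsic (log-Euclidean) mean of SPD matrices: average in the
    matrix-logarithm domain, then map back with the matrix exponential."""
    if weights is None:
        weights = np.full(len(mats), 1.0 / len(mats))
    L = sum(w * logm(M) for w, M in zip(weights, mats))
    return expm(L)


def nonstationary_kernel(X1, X2, Sigmas1, Sigmas2, sigma_f=1.0):
    """Paciorek-Schervish style non-stationary covariance with a
    location-dependent SPD length-scale matrix attached to every input."""
    K = np.empty((len(X1), len(X2)))
    for i, (x, Sx) in enumerate(zip(X1, Sigmas1)):
        for j, (y, Sy) in enumerate(zip(X2, Sigmas2)):
            S = 0.5 * (Sx + Sy)                    # blended length-scale matrix
            d = x - y
            Q = d @ np.linalg.solve(S, d)          # Mahalanobis-type distance
            pref = (np.linalg.det(Sx) ** 0.25 * np.linalg.det(Sy) ** 0.25
                    / np.sqrt(np.linalg.det(S)))   # keeps K positive definite
            K[i, j] = sigma_f ** 2 * pref * np.exp(-Q)
    return K
```

In this reading, the "intrinsic" part amounts to estimating each local length-scale matrix as an intrinsic mean of nearby SPD matrices rather than an element-wise average, which respects the geometry of the SPD manifold.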



Related research

We introduce a novel covariance estimator that exploits the heteroscedastic nature of financial time series by employing exponentially weighted moving averages and shrinking the in-sample eigenvalues through cross-validation. Our estimator is model-agnostic: we make no assumptions about the distribution of the random entries of the matrix or the structure of the covariance matrix. Additionally, we show how Random Matrix Theory can guide automatic tuning of the hyperparameter that characterizes the time scale of the estimator's dynamics. By attenuating noise in both the cross-sectional and time-series dimensions, we empirically demonstrate the superiority of our estimator over competing estimators based on exponentially weighted and uniformly weighted covariance matrices.
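A minimal sketch of the two ingredients mentioned above, assuming a (T, N) matrix of asset returns: an exponentially weighted covariance followed by a simple eigenvalue-shrinkage step. The fixed half-life and shrinkage constant here stand in for the cross-validated and RMT-tuned choices the abstract refers to.

```python
import numpy as np


def ewma_covariance(returns, halflife=60):
    """Exponentially weighted covariance of a (T, N) matrix of returns,
    so that recent observations carry more weight than old ones."""
    T, _ = returns.shape
    decay = 0.5 ** (1.0 / halflife)
    w = decay ** np.arange(T - 1, -1, -1)   # newest row gets the largest weight
    w /= w.sum()
    X = returns - np.average(returns, axis=0, weights=w)
    return (w[:, None] * X).T @ X


def shrink_eigenvalues(cov, shrinkage=0.3):
    """Pull the in-sample eigenvalues toward their mean; the paper instead
    selects the amount of shrinkage by cross-validation."""
    vals, vecs = np.linalg.eigh(cov)
    vals = (1.0 - shrinkage) * vals + shrinkage * vals.mean()
    return vecs @ np.diag(vals) @ vecs.T
```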
This paper proposes a novel non-oscillatory pattern (NOP) learning scheme for several oscillatory data analysis problems, including signal decomposition, super-resolution, and signal sub-sampling. To the best of our knowledge, NOP is the first algorithm for these problems that handles fully non-stationary oscillatory data with close and crossover frequencies and general oscillatory patterns. NOP copes with complicated situations in which existing algorithms fail; even in simple cases, e.g., stationary data with trigonometric patterns, numerical examples show that NOP achieves competitive or better accuracy and robustness than several state-of-the-art algorithms.
We consider the problem of learning over non-stationary ranking streams. The rankings can be interpreted as the preferences of a population, and non-stationarity means that the distribution of preferences changes over time. Our goal is to learn, in an online manner, the current distribution of rankings. The bottleneck of this process is a rank aggregation problem. We propose a generalization of the Borda algorithm for non-stationary ranking streams, give bounds on the minimum number of samples required to output the ground truth with high probability, and show how to set the optimal parameters. We then generalize the whole family of weighted voting rules (the family to which Borda belongs) to situations in which some rankings are more reliable than others, and show that this generalization solves the problem of rank aggregation over non-stationary data streams.
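As a rough illustration of rank aggregation over a stream, the sketch below implements a Borda count with exponential forgetting so that the aggregate tracks a drifting preference distribution. The forgetting factor and class name are illustrative; the paper's actual generalization, including the reliability weights and sample-complexity bounds, is not reproduced here.

```python
import numpy as np


class ForgettingBorda:
    """Online Borda-style rank aggregation with exponential forgetting,
    so recent rankings dominate the aggregate (illustrative sketch)."""

    def __init__(self, n_items, rho=0.99):
        self.scores = np.zeros(n_items)  # decayed cumulative Borda scores
        self.rho = rho                   # forgetting factor in (0, 1]

    def update(self, ranking):
        # ranking[k] = item placed at position k (0 = most preferred)
        n = len(ranking)
        self.scores *= self.rho
        for pos, item in enumerate(ranking):
            self.scores[item] += n - 1 - pos  # Borda points from this ranking

    def current_ranking(self):
        # items sorted by decayed score, best first
        return list(np.argsort(-self.scores))


# usage: feed rankings as they arrive and query the current estimate
agg = ForgettingBorda(n_items=4)
agg.update([2, 0, 1, 3])
agg.update([2, 1, 0, 3])
print(agg.current_ranking())
```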
Gaussian process regression is a widely applied method for function approximation and uncertainty quantification. The technique has recently gained popularity in the machine learning community due to its robustness and interpretability. The mathematical methods we discuss in this paper extend the Gaussian-process framework. We propose advanced kernel designs that only allow functions with certain desirable characteristics to be elements of the reproducing kernel Hilbert space (RKHS) that underlies all kernel methods and serves as the sample space for Gaussian process regression. These desirable characteristics reflect the underlying physics; two obvious examples are symmetry and periodicity constraints. In addition, non-stationary kernel designs can be defined in the same framework to yield flexible multi-task Gaussian processes. We show the impact of advanced kernel designs on Gaussian processes using several synthetic and two scientific data sets. The results show that including domain knowledge, communicated through advanced kernel designs, has a significant impact on the accuracy and relevance of the function approximation.
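The abstract names symmetry and periodicity as two obvious constraints to encode in a kernel. The toy kernels below (scalar inputs, squared-exponential base, our own parameter names) show how such constraints can be expressed; they illustrate the general idea rather than the authors' specific kernel designs.

```python
import numpy as np


def rbf(x1, x2, ell=1.0):
    """Stationary squared-exponential base kernel on scalar inputs."""
    return np.exp(-0.5 * (x1 - x2) ** 2 / ell ** 2)


def periodic_kernel(x1, x2, period=1.0, ell=1.0):
    """Every function drawn from a GP with this kernel repeats with the given period."""
    return np.exp(-2.0 * np.sin(np.pi * (x1 - x2) / period) ** 2 / ell ** 2)


def symmetric_kernel(x1, x2, ell=1.0):
    """Summing the stationary RBF kernel over the reflection x -> -x constrains
    every sample function f to satisfy f(x) = f(-x)."""
    return rbf(x1, x2, ell) + rbf(-x1, x2, ell)
```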
Classic contextual bandit algorithms for linear models, such as LinUCB, assume that the reward distribution of an arm is modeled by a stationary linear regression. When the linear regression model is non-stationary over time, the regret of LinUCB can scale linearly with time. In this paper, we propose a novel multiscale changepoint detection method for non-stationary linear bandit problems, called Multiscale-LinUCB, which actively adapts to the changing environment. We also provide a theoretical analysis of the regret bound for Multiscale-LinUCB. Experimental results show that Multiscale-LinUCB outperforms other state-of-the-art algorithms in non-stationary contextual environments.
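For context, here is a minimal sketch of the stationary LinUCB baseline the paper builds on, with a reset() hook standing in for the restart that a changepoint detector would trigger. The multiscale detection itself is not reproduced, and the class and method names are ours.

```python
import numpy as np


class LinUCB:
    """Plain LinUCB for linear contextual bandits. A changepoint detector
    (not shown) would call reset() whenever the reward model appears to shift,
    which is the role the multiscale detector plays in the paper."""

    def __init__(self, n_arms, dim, alpha=1.0):
        self.alpha = alpha
        self.A = [np.eye(dim) for _ in range(n_arms)]    # per-arm design matrices
        self.b = [np.zeros(dim) for _ in range(n_arms)]  # per-arm reward sums

    def select(self, context):
        ucbs = []
        for A, b in zip(self.A, self.b):
            A_inv = np.linalg.inv(A)
            theta = A_inv @ b
            # upper confidence bound on this arm's expected reward
            ucbs.append(context @ theta
                        + self.alpha * np.sqrt(context @ A_inv @ context))
        return int(np.argmax(ucbs))

    def update(self, arm, context, reward):
        self.A[arm] += np.outer(context, context)
        self.b[arm] += reward * context

    def reset(self):
        dim = self.A[0].shape[0]
        self.__init__(len(self.A), dim, self.alpha)
```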
