Joint estimation and model order selection for one dimensional ARMA models via convex optimization: a nuclear norm penalization approach

Published by: Stephane Chretien
Publication date: 2015
Research field: Mathematical statistics
Paper language: English

The problem of estimating ARMA models is computationally interesting due to the nonconcavity of the log-likelihood function. Recent approaches have instead been based on convex minimization. Joint model selection using penalization by a convex norm, e.g. the nuclear norm of a certain matrix related to the state-space formulation, has been studied extensively from a computational viewpoint. The goal of the present short note is to provide a theoretical study of a nuclear-norm-penalized variant of the method of Bauer (Automatica, 2005; Econometric Theory, 2005) under the assumption of a Gaussian noise process.
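
As a rough illustration of the kind of convex program involved, the sketch below fits a time series under a nuclear norm penalty on its Hankel matrix, a standard convex surrogate for low-order linear dynamics. This is a minimal stand-in, not the exact estimator analyzed in the note: the simulated signal, the window size m, and the penalty level lam are illustrative assumptions, and it relies on the cvxpy package with its bundled SCS solver.

```python
import numpy as np
import cvxpy as cp

n, m = 200, 20
rng = np.random.default_rng(0)
t = np.arange(n)
signal = np.sin(0.2 * t) + 0.5 * np.cos(0.05 * t)   # stand-in for a low-order process
y = signal + 0.3 * rng.standard_normal(n)           # observed series with Gaussian noise

x = cp.Variable(n)
# Hankel matrix of the fitted series; a low ARMA order corresponds to
# (approximately) low rank of this matrix.
H = cp.vstack([x[i:i + n - m + 1] for i in range(m)])
lam = 1.0   # penalty level: an illustrative choice, not a theoretically tuned value
objective = cp.Minimize(cp.sum_squares(y - x) + lam * cp.norm(H, "nuc"))
cp.Problem(objective).solve(solver=cp.SCS)
x_hat = x.value   # fitted series whose Hankel matrix is pushed toward low rank
```

The nuclear norm acts as a convex relaxation of the rank of the Hankel matrix, which is what ties the penalty level to the selected model order.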

Read also

Zhao Ren, Harrison H. Zhou (2012)
Discussion of "Latent variable graphical model selection via convex optimization" by Venkat Chandrasekaran, Pablo A. Parrilo and Alan S. Willsky [arXiv:1008.1290].
Emilie Devijver (2015)
We study a dimensionality reduction technique for finite mixtures of high-dimensional multivariate response regression models. Both the dimension of the response and the number of predictors are allowed to exceed the sample size. We consider predictor selection and rank reduction to obtain lower-dimensional approximations. A class of estimators with a fast rate of convergence is introduced. We apply this result to a specific procedure, introduced in [11], where the relevant predictors are selected by the Group-Lasso.
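
As a rough sketch of the predictor-selection step only, the following proximal-gradient loop solves a single-component multivariate-response regression with a row-wise Group-Lasso penalty, so each predictor's row of coefficients is kept or zeroed out as a group. The mixture structure and the rank-reduction step are omitted, and lam, the step size, and the iteration count are illustrative assumptions, not the tuned procedure of [11].

```python
import numpy as np

def row_group_lasso(X, Y, lam, iters=500):
    """Proximal gradient for multivariate-response least squares with a
    row-wise Group-Lasso penalty: each row of B (one row per predictor)
    is kept or zeroed out as a group, performing predictor selection."""
    n, p = X.shape
    B = np.zeros((p, Y.shape[1]))
    step = n / np.linalg.norm(X, 2) ** 2        # 1/L for the 1/(2n) squared loss
    for _ in range(iters):
        Z = B - step * (X.T @ (X @ B - Y) / n)  # gradient step on the smooth part
        norms = np.linalg.norm(Z, axis=1, keepdims=True)
        # Row-wise soft-thresholding: the proximal map of the group penalty.
        B = np.maximum(0.0, 1.0 - step * lam / np.maximum(norms, 1e-12)) * Z
    return B   # rows that are exactly zero correspond to dropped predictors
```

Rank reduction would then, for instance, truncate the SVD of a refit on the selected predictors.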
Quan Zhou, Hyunwoong Chang (2021)
We consider MCMC methods for learning equivalence classes of sparse Gaussian DAG models when $p = e^{o(n)}$. The main contribution of this work is a rapid mixing result for a random walk Metropolis-Hastings algorithm, which we prove using a canonical path method. It reveals that the complexity of Bayesian learning of sparse equivalence classes grows only polynomially in $n$ and $p$, under some common high-dimensional assumptions. Further, a series of high-dimensional consistency results is obtained by the path method, including the strong selection consistency of an empirical Bayes model for structure learning and the consistency of a greedy local search on the restricted search space. Rapid mixing and slow mixing results for other structure-learning MCMC methods are also derived. Our path method and mixing time results yield crucial insights into the computational aspects of high-dimensional structure learning, which may be used to develop more efficient MCMC algorithms.
We consider batch size selection for a general class of multivariate batch means variance estimators, which are computationally viable for high-dimensional Markov chain Monte Carlo simulations. We derive the asymptotic mean squared error for this class of estimators. Further, we propose a parametric technique for estimating optimal batch sizes and discuss practical issues regarding the estimation process. Vector auto-regressive, Bayesian logistic regression, and Bayesian dynamic space-time examples illustrate the quality of the estimation procedure, where the proposed optimal batch sizes outperform current batch size selection methods.
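
For reference, a minimal numpy sketch of the standard multivariate batch means estimator is given below; the batch size b is fixed by hand here (a common default such as floor(n^{1/2})), whereas the paper's contribution is a parametric estimate of the MSE-optimal batch size.

```python
import numpy as np

def batch_means_cov(chain, b):
    """Multivariate batch means estimate of the asymptotic covariance
    matrix in the Markov chain CLT; chain has shape (n, p), b is the batch size."""
    n, p = chain.shape
    a = n // b                                   # number of full batches
    means = chain[:a * b].reshape(a, b, p).mean(axis=1)   # (a, p) batch means
    centered = means - chain[:a * b].mean(axis=0)
    return b * centered.T @ centered / (a - 1)   # (p, p) covariance estimate

# Vector AR(1) chain as toy 'MCMC output' (illustrative target, not from the paper).
rng = np.random.default_rng(1)
n, p = 10_000, 3
x = np.zeros((n, p))
for t in range(1, n):
    x[t] = 0.7 * x[t - 1] + rng.standard_normal(p)
sigma_hat = batch_means_cov(x, b=int(n ** 0.5))
```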
Fangzheng Xie, Yanxun Xu (2019)
We propose a one-step procedure to efficiently estimate the latent positions in random dot product graphs. Unlike classical spectral-based methods such as adjacency and Laplacian spectral embedding, the proposed one-step procedure takes advantage of both the low-rank structure of the expected adjacency matrix and the Bernoulli likelihood information of the sampling model simultaneously. We show that for each vertex, the corresponding row of the one-step estimator converges to a multivariate normal distribution after proper scaling and centering, up to an orthogonal transformation, with an efficient covariance matrix. The initial estimator for the one-step procedure needs to satisfy the so-called approximate linearization property. The one-step estimator improves on the commonly adopted spectral embedding methods in the following sense: globally, for all vertices, it yields an asymptotic sum-of-squares error no greater than that of the spectral methods, and locally, for each vertex, the asymptotic covariance matrix of the corresponding row of the one-step estimator dominates those of the spectral embeddings in spectra. The usefulness of the proposed one-step procedure is demonstrated via numerical examples and the analysis of a real-world Wikipedia graph dataset.
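
A schematic numpy sketch of the two ingredients, assuming a symmetric adjacency matrix with zero diagonal: an adjacency spectral embedding as the initial estimator, followed by one Fisher-scoring step per vertex on the Bernoulli likelihood with p_ij = x_i' x_j. The clipping constant and the exact update shown here are simplifying assumptions for the sketch, not the paper's precise construction.

```python
import numpy as np

def one_step_rdpg(A, d, eps=1e-6):
    """Adjacency spectral embedding followed by one Fisher-scoring step
    per vertex on the Bernoulli likelihood p_ij = x_i' x_j (schematic)."""
    n = A.shape[0]
    vals, vecs = np.linalg.eigh(A)
    idx = np.argsort(vals)[-d:]                      # top-d eigenpairs
    X = vecs[:, idx] * np.sqrt(np.abs(vals[idx]))    # initial spectral embedding
    P = np.clip(X @ X.T, eps, 1 - eps)               # keep probabilities in (0, 1)
    W = 1.0 / (P * (1.0 - P))                        # Bernoulli weights
    X_new = np.empty_like(X)
    for i in range(n):
        w = W[i].copy()
        w[i] = 0.0                                   # exclude the diagonal term
        info = (X * w[:, None]).T @ X                # Fisher information for row i
        score = X.T @ (w * (A[i] - P[i]))            # gradient of row log-likelihood
        X_new[i] = X[i] + np.linalg.solve(info, score)
    return X_new
```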