A hybrid estimator of the log-spectral density of a stationary time series is proposed. First, a multiple-taper estimate is computed; the logarithm of this estimate is then kernel smoothed. This procedure reduces the expected mean square error by a factor of $(\pi^2/4)^{4/5}$ relative to simply smoothing the log tapered periodogram. A data-adaptive implementation of a variable-bandwidth kernel smoother is given.
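To make the two-step procedure concrete, here is a minimal sketch in Python (NumPy/SciPy). The function name, the fixed Gaussian kernel, and the default parameters are illustrative; the paper's data-adaptive smoother would instead vary the halfwidth with frequency.

```python
import numpy as np
from scipy.signal.windows import dpss

def hybrid_log_spectrum(x, K=5, NW=3.0, h=0.02):
    """Hybrid estimate: multitaper spectrum, then kernel-smooth its log."""
    n = len(x)
    tapers = dpss(n, NW, Kmax=K)                    # Slepian tapers, shape (K, n)
    eigspec = np.abs(np.fft.rfft(tapers * x, axis=1)) ** 2
    log_mt = np.log(eigspec.mean(axis=0))           # log multitaper estimate
    freqs = np.fft.rfftfreq(n)
    # Nadaraya-Watson smoothing of the log estimate (fixed Gaussian kernel;
    # a variable-bandwidth smoother would let h depend on frequency)
    W = np.exp(-0.5 * ((freqs[:, None] - freqs[None, :]) / h) ** 2)
    return freqs, (W @ log_mt) / W.sum(axis=1)
```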
We determine the expected error of smoothing the data locally and then optimize the shape of the kernel smoother to minimize this error. Because the optimal estimator depends on the unknown function, the scheme automatically adjusts to the unknown function: by self-consistently tuning the kernel smoother, the total estimator adapts to the data. Goodness-of-fit estimators select a kernel halfwidth by minimizing a function of the halfwidth based on the average square residual fit error, $\mathrm{ASR}(h)$. A penalty term is included to adjust for using the same data both to estimate the function and to evaluate the mean square error. Goodness-of-fit estimators are relatively simple to implement, but the minimum of the goodness-of-fit functional tends to be sensitive to small perturbations. To remedy this sensitivity, we fit the goodness-of-fit functional to a two-parameter model prior to determining the optimal halfwidth. Plug-in derivative estimators estimate the second derivative of the unknown function in an initial step and then substitute this estimate into the asymptotic formula for the optimal halfwidth.
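A sketch of the halfwidth selection step, assuming the standard two-parameter risk model $a/h + b h^4$ (variance plus squared bias for a second-order kernel); the abstract does not specify the model form, so this choice, the Gaussian kernel, and the GCV-style penalty are assumptions.

```python
import numpy as np

def select_halfwidth(x, y, halfwidths, sigma2):
    """Pick a kernel halfwidth by fitting the penalized ASR(h) scores to the
    assumed two-parameter risk model a/h + b*h**4, then minimizing the
    fitted curve instead of the (perturbation-sensitive) raw scores."""
    n = len(y)
    scores = []
    for h in halfwidths:
        W = np.exp(-0.5 * ((x[:, None] - x[None, :]) / h) ** 2)
        L = W / W.sum(axis=1, keepdims=True)        # smoother matrix
        resid = y - L @ y
        penalty = 2.0 * sigma2 * np.trace(L) / n    # correct for reusing the data
        scores.append(np.mean(resid ** 2) + penalty)
    # least-squares fit of the variance/bias^2 model to the noise-corrected scores
    H = np.asarray(halfwidths)
    A = np.column_stack([1.0 / H, H ** 4])
    a, b = np.linalg.lstsq(A, np.asarray(scores) - sigma2, rcond=None)[0]
    return (a / (4.0 * b)) ** 0.2                   # minimizer of a/h + b*h^4, needs b > 0
```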
Trajectory reconstruction is the process of inferring the path of a moving object between successive observations. In this paper, we propose a smoothing spline -- which we name the V-spline -- that incorporates position and velocity information together with a penalty term that controls acceleration. We introduce a particular adaptive V-spline designed to control the impact of irregularly sampled observations and noisy velocity measurements. A cross-validation scheme for estimating the V-spline parameters is given, and we evaluate the performance of the V-spline on four particularly challenging test datasets. Finally, we present an application of the V-spline to vehicle trajectory reconstruction in two dimensions, in which the penalty term is allowed to depend further on known operational characteristics of the vehicle.
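The following grid-discretized sketch conveys the idea of a V-spline-type fit in one dimension: a least-squares match to positions and velocities plus a second-difference (acceleration) penalty. It is not the authors' basis construction or cross-validation scheme; the nearest-node mapping and the parameter names are illustrative.

```python
import numpy as np

def v_spline_1d(t, y, v, lam=1.0, gamma=1.0, m=400):
    """Grid-discretized sketch of a V-spline-type fit: least squares on
    positions y and velocities v, plus an acceleration penalty."""
    g = np.linspace(t.min(), t.max(), m)
    dt = g[1] - g[0]
    # map each observation time to its nearest interior grid node (crude)
    idx = np.clip(np.round((t - g[0]) / dt).astype(int), 1, m - 2)
    S = np.zeros((len(t), m))
    S[np.arange(len(t)), idx] = 1.0
    D1 = (np.eye(m, k=1) - np.eye(m, k=-1)) / (2 * dt)       # central differences
    D2 = (np.eye(m, k=1) - 2 * np.eye(m) + np.eye(m, k=-1))[1:-1] / dt ** 2
    SD1 = S @ D1                                             # velocity stencils at obs nodes
    A = S.T @ S + gamma * SD1.T @ SD1 + lam * dt * D2.T @ D2
    b = S.T @ y + gamma * SD1.T @ v
    return g, np.linalg.solve(A, b)                          # fitted trajectory on the grid
```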
Gaussian conditional realizations are routinely used for risk assessment and planning in a variety of Earth sciences applications. Conditional realizations can be obtained by first creating unconditional realizations that are then post-conditioned by kriging. Many efficient algorithms are available for the first step, so the bottleneck resides in the second step. Instead of doing the conditional simulations with the desired covariance (F approach) or with a tapered covariance (T approach), we propose to use the tapered covariance only in the conditioning step (Half-Taper or HT approach). This speeds up the computations and reduces memory requirements for the conditioning step while keeping the right short-scale variations in the realizations. A criterion based on the mean square error of the simulation is derived to help anticipate the similarity of HT to F. Moreover, an index is used to predict the sparsity of the kriging matrix for the conditioning step. Some guidelines for the choice of the taper function are discussed. The distributions of a series of 1D, 2D and 3D scalar response functions are compared for the F, T and HT approaches. The distributions obtained indicate a much better similarity to F with HT than with T.
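A minimal sketch of the HT conditioning step, assuming a spherical taper and simple kriging; `cov` is the target covariance as a function of distance, and all names are illustrative. In practice the tapered kriging matrix is sparse, so a sparse Cholesky solver would replace the dense solve below.

```python
import numpy as np

def spherical_taper(d, r):
    """Compactly supported spherical taper, zero beyond range r."""
    u = np.clip(d / r, 0.0, 1.0)
    return (1.0 - 1.5 * u + 0.5 * u ** 3) * (d < r)

def ht_post_condition(d_oo, d_go, y_obs, z_u_obs, z_u_grid, cov, r):
    """Half-Taper sketch: post-condition an unconditional realization by
    simple kriging whose covariances are tapered only in this step.
    d_oo: obs-obs distance matrix; d_go: grid-obs distance matrix."""
    K = cov(d_oo) * spherical_taper(d_oo, r)   # tapered kriging matrix (sparse in practice)
    k = cov(d_go) * spherical_taper(d_go, r)
    w = np.linalg.solve(K, y_obs - z_u_obs)    # kriging weights applied to residuals
    return z_u_grid + k @ w                    # conditioned realization on the grid
```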
Multistage design has been used in a wide range of scientific fields. By allocating sensing resources adaptively, one can effectively eliminate null locations and localize signals with a smaller study budget. We formulate a decision-theoretic framework for simultaneous multistage adaptive testing and study how to minimize the total number of measurements while meeting pre-specified constraints on both the false positive rate (FPR) and the missed discovery rate (MDR). The new procedure, which effectively pools information across individual tests using a simultaneous multistage adaptive ranking and thresholding (SMART) approach, achieves precise control of the error rates and leads to substantial savings in total study costs. Numerical studies confirm the effectiveness of SMART for FPR and MDR control and show that it achieves substantial power gain over existing methods. The SMART procedure is demonstrated through the analysis of high-throughput screening data and spatial imaging data.
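As a rough illustration of multistage adaptive testing (a generic sequential thresholding scheme, not the exact SMART ranking procedure), the sketch below pools unit-variance measurements across stages, retires clear nulls early, and declares clear signals; the interface `draw(idx, n)` and the boundary choices are assumptions.

```python
import numpy as np
from scipy.stats import norm

def multistage_screen(draw, m, stages=5, n_per_stage=5, fpr=0.05, mdr=0.05):
    """Generic multistage adaptive thresholding sketch for m locations.
    draw(idx, n) must return n unit-variance measurements per active
    location (assumed interface), shape (len(idx), n)."""
    active = np.ones(m, dtype=bool)
    sums = np.zeros(m)
    counts = np.zeros(m)
    rejected = np.zeros(m, dtype=bool)
    hi, lo = norm.ppf(1.0 - fpr / m), norm.ppf(mdr)   # crude stopping boundaries
    for _ in range(stages):
        idx = np.flatnonzero(active)
        if idx.size == 0:
            break
        x = draw(idx, n_per_stage)                    # new measurements this stage
        sums[idx] += x.sum(axis=1)
        counts[idx] += n_per_stage
        z = sums[idx] / np.sqrt(counts[idx])          # running z-scores
        rejected[idx[z > hi]] = True                  # declare clear signals
        active[idx[(z > hi) | (z < lo)]] = False      # retire decided locations
    return rejected
```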
We propose a methodology for filtering, smoothing, and assessing parameter and filtering uncertainty in score-driven models. Our technique is based on a general representation of the Kalman filter and smoother recursions for linear Gaussian models in terms of the score of the conditional log-likelihood. We prove that, when the data are generated by a nonlinear non-Gaussian state-space model, the proposed methodology results from a local expansion of the true filtering density. A formal characterization of the approximation error is provided. As shown in extensive Monte Carlo analyses, our methodology performs very similarly to exact simulation-based methods while remaining computationally extremely simple. We illustrate empirically the advantages of employing score-driven models as approximate filters rather than purely predictive processes.
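As a concrete instance of the score representation for a linear Gaussian model, consider the local level model $y_t = a_t + \varepsilon_t$, $a_{t+1} = a_t + \eta_t$: the Kalman update can be written $a_{t|t} = a_{t|t-1} + P_{t|t-1} s_t$, where $s_t = v_t/F_t$ is the score of the conditional log-likelihood of $y_t$. A minimal sketch, with illustrative parameter values:

```python
import numpy as np

def local_level_score_filter(y, sigma_eps2=1.0, sigma_eta2=0.1, a0=0.0, P0=10.0):
    """Kalman filter for the local level model, written so the update is
    a_pred + P_pred * score, with score = v/F the derivative of the
    conditional log-likelihood with respect to the predicted state."""
    n = len(y)
    a, P = a0, P0
    filt = np.empty(n)
    for t in range(n):
        v = y[t] - a                         # prediction error
        F = P + sigma_eps2                   # prediction-error variance
        score = v / F                        # d log p(y_t | past) / d a
        a_filt = a + P * score               # filtered state via the score
        P_filt = P - P ** 2 / F
        filt[t] = a_filt
        a, P = a_filt, P_filt + sigma_eta2   # predict the next state
    return filt
```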