Trajectory reconstruction is the process of inferring the path of a moving object between successive observations. In this paper, we propose a smoothing spline -- which we name the V-spline -- that incorporates position and velocity information and a penalty term that controls acceleration. We introduce a particular adaptive V-spline designed to control the impact of irregularly sampled observations and noisy velocity measurements. A cross-validation scheme for estimating the V-spline parameters is given and we detail the performance of the V-spline on four particularly challenging test datasets. Finally, an application of the V-spline to vehicle trajectory reconstruction in two dimensions is given, in which the penalty term is allowed to further depend on known operational characteristics of the vehicle.
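As a rough illustration of the kind of criterion involved (the exact weighting scheme and adaptive penalty are defined in the paper; the form below is only a sketch), the fitted curve $f$ can be thought of as minimizing

$$\sum_{i=1}^{n} w_i \big(x_i - f(t_i)\big)^2 \;+\; \sum_{i=1}^{n} \eta_i \big(v_i - f'(t_i)\big)^2 \;+\; \int \lambda(t)\,\big(f''(t)\big)^2\, dt,$$

where $x_i$ and $v_i$ are the observed positions and velocities at times $t_i$, the integral of squared $f''$ penalizes acceleration, and the weights $w_i$, $\eta_i$ and the penalty $\lambda(t)$ (constant in the basic case, varying with the sampling pattern or vehicle characteristics in the adaptive case) are the parameters estimated by cross-validation.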
A hybrid estimator of the log-spectral density of a stationary time series is proposed. First, a multiple taper estimate is computed; the logarithm of this multiple taper estimate is then kernel smoothed. This procedure reduces the expected mean square error by a factor of $(\pi^2/4)^{4/5}$ relative to simply smoothing the log tapered periodogram. A data-adaptive implementation of a variable bandwidth kernel smoother is given.
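The following Python sketch illustrates the two stages of such a hybrid estimator under simple illustrative choices (taper count, Gaussian kernel, and a fixed bandwidth in place of the paper's data-adaptive, variable-bandwidth smoother):

```python
# Hedged sketch of a hybrid log-spectral estimator: average several Slepian
# tapered periodograms, then kernel-smooth the logarithm of that average.
# Taper count, bandwidth, and the Gaussian kernel are illustrative choices.
import numpy as np
from scipy.signal.windows import dpss

def hybrid_log_spectrum(x, n_tapers=5, nw=3.0, bandwidth=0.02):
    n = len(x)
    tapers = dpss(n, nw, Kmax=n_tapers)                 # (K, n) Slepian tapers
    eigenspectra = np.abs(np.fft.rfft(tapers * x, axis=1)) ** 2
    log_mt = np.log(eigenspectra.mean(axis=0))          # log multitaper estimate

    # Fixed-bandwidth Gaussian kernel smoothing of the log multitaper estimate
    freqs = np.fft.rfftfreq(n)
    smoothed = np.empty_like(log_mt)
    for j, f0 in enumerate(freqs):
        w = np.exp(-0.5 * ((freqs - f0) / bandwidth) ** 2)
        smoothed[j] = np.sum(w * log_mt) / np.sum(w)
    return freqs, smoothed
```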
We determine the expected error incurred by smoothing the data locally, and then optimize the shape of the kernel smoother to minimize this error. Because the optimal estimator depends on the unknown function, our scheme adjusts automatically to that function: by self-consistently tuning the kernel smoother, the total estimator adapts to the data. Goodness-of-fit estimators select a kernel halfwidth by minimizing a function of the halfwidth based on the average square residual fit error, $ASR(h)$; a penalty term is included to adjust for using the same data both to estimate the function and to evaluate the mean square error. Goodness-of-fit estimators are relatively simple to implement, but the minimum of the goodness-of-fit functional tends to be sensitive to small perturbations. To remedy this sensitivity, we fit the mean square error to a two-parameter model prior to determining the optimal halfwidth. Plug-in derivative estimators estimate the second derivative of the unknown function in an initial step and then substitute this estimate into the asymptotic formula for the optimal halfwidth.
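A minimal sketch of the goodness-of-fit route, assuming a local constant (Nadaraya-Watson) smoother and a GCV-style penalty as stand-ins for the paper's exact choices:

```python
# Hedged sketch: select a kernel halfwidth by minimizing a penalized average
# square residual. The Epanechnikov kernel and GCV-style penalty are
# illustrative stand-ins, not the paper's specific criterion or its
# two-parameter model fit; halfwidths should be large enough that every
# point has neighbors within the kernel support.
import numpy as np

def penalized_asr(t, y, h):
    n = len(y)
    u = (t[:, None] - t[None, :]) / h
    K = np.clip(1.0 - u ** 2, 0.0, None)       # Epanechnikov weights
    S = K / K.sum(axis=1, keepdims=True)       # smoother matrix
    asr = np.mean((y - S @ y) ** 2)            # average square residual ASR(h)
    return asr / (1.0 - np.trace(S) / n) ** 2  # penalty for reusing the data

def select_halfwidth(t, y, halfwidths):
    scores = [penalized_asr(t, y, h) for h in halfwidths]
    return halfwidths[int(np.argmin(scores))]
```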
We propose modeling raw functional data as a mixture of a smooth function and a high-dimensional factor component. The conventional approach to retrieving the smooth function from the raw data is through various smoothing techniques. However, the smoothing model is not adequate to recover the smooth curve or capture the data variation in some situations, including cases where there is a large amount of measurement error, the smoothing basis functions are incorrectly identified, or step jumps in the functional mean levels are neglected. To address these challenges, a factor-augmented smoothing model is proposed, and an iterative numerical estimation approach is implemented in practice. Including the factor model component in the proposed method solves the aforementioned problems, since a few common factors often drive the variation that cannot be captured by the smoothing model. Asymptotic theorems are also established to demonstrate the effects of including factor structures on the smoothing results. Specifically, we show that the smoothing coefficients projected onto the complement space of the factor loading matrix are asymptotically normal. As a byproduct of independent interest, an estimator for the population covariance matrix of the raw data is presented based on the proposed model. Extensive simulation studies illustrate that these factor adjustments are essential in improving estimation accuracy and avoiding the curse of dimensionality. The superiority of our model is also shown in modeling Canadian weather data and Australian temperature data.
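A minimal sketch of one possible iterative scheme, assuming a fixed spline basis matrix B and a known number of factors r (the paper's estimation details and tuning rules are not reproduced here):

```python
# Hedged sketch of a factor-augmented smoothing fit: alternate a least-squares
# basis fit of the smooth component with a rank-r principal component
# extraction of the factor component from the remaining variation.
import numpy as np

def factor_augmented_smooth(Y, B, r, n_iter=20):
    # Y: (n_curves, n_points) raw functional data; B: (n_points, n_basis) basis
    factor_part = np.zeros_like(Y)
    for _ in range(n_iter):
        # Smoothing step: basis coefficients for the de-factored data
        coef, *_ = np.linalg.lstsq(B, (Y - factor_part).T, rcond=None)
        smooth_part = (B @ coef).T
        # Factor step: rank-r SVD of the residual variation
        U, s, Vt = np.linalg.svd(Y - smooth_part, full_matrices=False)
        factor_part = (U[:, :r] * s[:r]) @ Vt[:r]
    return smooth_part, factor_part
```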
In Path Integral control problems, a representation of an optimally controlled dynamical system can be formally computed and can serve as a guidepost for learning a parametrized policy. The Path Integral Cross-Entropy (PICE) method tries to exploit this, but is hampered by poor sample efficiency. We propose a model-free algorithm called ASPIC (Adaptive Smoothing of Path Integral Control) that applies an inf-convolution to the cost function to speed up convergence of policy optimization. We identify PICE as the infinite smoothing limit of this technique and show that the sample efficiency problems from which PICE suffers disappear for finite levels of smoothing. For zero smoothing the method becomes a greedy optimization of the cost, which is the standard approach in current reinforcement learning. We show analytically and empirically that intermediate levels of smoothing are optimal, which renders the new method superior to both PICE and direct cost optimization.
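As a loose numerical illustration only (not the paper's objective), the sketch below shows the kind of soft-min reweighting of sampled rollout costs that underlies path-integral-style updates; a temperature-like parameter controls how sharply the weights concentrate on low-cost rollouts, playing a role analogous to ASPIC's smoothing level in interpolating between plain cost averaging and PICE's exponential weighting:

```python
# Loose illustration: normalized soft-min weights over sampled rollout costs.
# The temperature parameter here is an analogy for a smoothing level, not
# ASPIC's exact inf-convolution of the cost function.
import numpy as np

def rollout_weights(costs, temperature):
    costs = np.asarray(costs, dtype=float)
    if np.isinf(temperature):
        return np.full(costs.size, 1.0 / costs.size)  # uniform: plain averaging
    z = -(costs - costs.min()) / temperature          # shift for stability
    w = np.exp(z)
    return w / w.sum()                                # concentrates on low cost

print(rollout_weights([4.0, 1.0, 3.0, 0.5], temperature=0.5))
print(rollout_weights([4.0, 1.0, 3.0, 0.5], temperature=np.inf))
```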
Understanding forest fire spread in any region of Canada is critical to promoting forest health and protecting human life and infrastructure. Quantifying fire spread from noisy images, where regions of a fire are separated by change-point boundaries, is critical to faithfully estimating fire spread rates. In this research, we develop a statistically consistent smooth estimator that allows us to denoise fire spread imagery from micro-fire experiments. Specifically, we develop an anisotropic smoothing method for change-point data that uses estimates of the underlying data-generating process to inform smoothing. We show that the anisotropic local constant regression estimator is consistent with convergence rate $O\left(n^{-1/(q+2)}\right)$. We demonstrate its effectiveness on simulated one- and two-dimensional change-point data and on fire spread imagery from micro-fire experiments.
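A minimal sketch of an anisotropic local constant (Nadaraya-Watson) estimate in two dimensions, assuming a given bandwidth matrix H; how H is chosen from estimates of the data-generating process, so that smoothing follows rather than blurs the change-point boundary, is the contribution of the paper and is not reproduced here:

```python
# Hedged sketch: anisotropic local constant regression at a point x0.
# The bandwidth matrix H stretches the kernel along one direction and
# shrinks it across another; here H is supplied by the user.
import numpy as np

def anisotropic_local_constant(x0, X, y, H):
    # X: (n, 2) pixel locations, y: (n,) noisy intensities, H: (2, 2) bandwidths
    U = (X - x0) @ np.linalg.inv(H).T             # rescaled displacements
    w = np.exp(-0.5 * np.sum(U ** 2, axis=1))     # Gaussian kernel weights
    return np.sum(w * y) / np.sum(w)
```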