We characterize the performance of the widely used least-squares estimator in astrometry in terms of a comparison with the Cramér-Rao lower variance bound. In this inference context the performance of the least-squares estimator does not admit a closed-form expression, but a new result is presented (Theorem 1) in which both the bias and the mean-square error of the least-squares estimator are bounded and approximated analytically, in the latter case in terms of a nominal value and an interval around it. From the predicted nominal value we analyze how efficient the least-squares estimator is in comparison with the minimum-variance Cramér-Rao bound. Based on our results, we show that, in the high signal-to-noise regime, the performance of the least-squares estimator is significantly poorer than the Cramér-Rao bound, and we characterize this gap analytically. On the positive side, we show that in the challenging low signal-to-noise regime (attributable to either a weak astronomical signal or a noise-dominated condition) the least-squares estimator is near optimal, as its performance asymptotically approaches the Cramér-Rao bound. However, we also demonstrate that, in general, there is no unbiased estimator of the astrometric position that can precisely reach the Cramér-Rao bound. We validate our theoretical analysis through simulated digital-detector observations under typical observing conditions. We show that the nominal value for the mean-square error of the least-squares estimator (obtained from our theorem) can be used as a benchmark indicator of the expected statistical performance of the least-squares method under a wide range of conditions. Our results are valid for an idealized linear (one-dimensional) array detector in which intra-pixel response changes are neglected and flat-fielding is achieved with very high accuracy.
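To make the comparison concrete, the following minimal Python sketch (our illustration, not the paper's code) estimates the mean-square error of the one-dimensional least-squares position estimator by Monte Carlo and compares it against the Cramér-Rao bound for a Poisson pixel model; the Gaussian PSF, the flux and background levels, and the detector geometry are illustrative assumptions.

```python
# Minimal sketch: LS astrometry on an idealized 1-D pixel array with
# Poisson noise, compared against the Cramer-Rao bound.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
pixels = np.arange(-20, 21)                 # 1-D detector, unit pixel pitch
sigma, flux, bg, x_true = 1.5, 2000.0, 10.0, 0.3   # assumed values

def psf(x):
    # fraction of the source flux falling in each pixel (Gaussian PSF)
    g = np.exp(-0.5 * ((pixels - x) / sigma) ** 2)
    return g / g.sum()

def expected_counts(x):
    return flux * psf(x) + bg

# Cramer-Rao bound for the position under the independent-Poisson model:
# CRB = 1 / sum_i (dlambda_i/dx)^2 / lambda_i
eps = 1e-4
dlam = (expected_counts(x_true + eps) - expected_counts(x_true - eps)) / (2 * eps)
crb = 1.0 / np.sum(dlam ** 2 / expected_counts(x_true))

# Monte Carlo mean-square error of the (unweighted) least-squares estimator
def ls_estimate(counts):
    obj = lambda x: np.sum((counts - expected_counts(x)) ** 2)
    return minimize_scalar(obj, bounds=(-5, 5), method="bounded").x

errs = [ls_estimate(rng.poisson(expected_counts(x_true))) - x_true
        for _ in range(2000)]
print(f"LS MSE = {np.mean(np.square(errs)):.5f},  CRB = {crb:.5f}")
```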
The problem of astrometry is revisited from the perspective of analyzing the attainability of well-known performance limits (the Cramér-Rao bound) for the estimation of the relative position of light-emitting (usually point-like) sources on a CCD-like detector, using commonly adopted estimators such as weighted least squares and maximum likelihood. Novel technical results are presented to determine the performance of an estimator that corresponds to the solution of an optimization problem in the context of astrometry. Using these results we are able to place stringent bounds on the bias and the variance of the estimators in closed form as a function of the data. We confirm these results through comparisons with numerical simulations under a broad range of realistic observing conditions. Both the maximum-likelihood and the weighted least-squares estimators are analyzed. We confirm the sub-optimality of the weighted least-squares scheme at medium to high signal-to-noise ratios, found in an earlier study for the (unweighted) least-squares method. We find that the maximum-likelihood estimator achieves optimal performance limits across a wide range of relevant observational conditions. Furthermore, from our results we provide concrete insights for adopting an adaptive weighted least-squares estimator that can be regarded as a computationally efficient alternative to the optimal maximum-likelihood solution. We provide, for the first time, closed-form analytical expressions that bound the bias and the variance of the weighted least-squares and maximum-likelihood implicit estimators for astrometry with a Poisson-driven detector. These expressions can be used to formally assess the precision attainable by these estimators in comparison with the minimum-variance bound.
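As a hedged illustration of the two estimators discussed above, the sketch below computes maximum-likelihood and weighted least-squares position estimates on one simulated Poisson-driven detector read-out; the PSF shape, the 1/λ weighting, and all source parameters are assumptions made for the example, not the paper's exact configuration.

```python
# Minimal sketch: ML (Poisson negative log-likelihood) vs. WLS position
# estimation on a simulated 1-D Poisson-driven detector.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
pixels = np.arange(-20, 21)
sigma, flux, bg, x_true = 1.5, 500.0, 20.0, -0.2    # assumed values

def lam(x):
    g = np.exp(-0.5 * ((pixels - x) / sigma) ** 2)
    return flux * g / g.sum() + bg

counts = rng.poisson(lam(x_true))

def neg_loglike(x):
    l = lam(x)
    return np.sum(l - counts * np.log(l))   # Poisson NLL up to a constant

def wls_cost(x):
    l = lam(x)
    return np.sum((counts - l) ** 2 / l)    # weights 1/lambda(x)

x_ml = minimize_scalar(neg_loglike, bounds=(-5, 5), method="bounded").x
x_wls = minimize_scalar(wls_cost, bounds=(-5, 5), method="bounded").x
print(f"ML: {x_ml:.4f}  WLS: {x_wls:.4f}  true: {x_true}")
```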
Wireless sensor networks have recently received much attention due to their broad applicability and ease of installation. This paper is concerned with a distributed state-estimation problem in which all sensor nodes are required to reach a consensus estimate. The weighted least-squares (WLS) estimator is an appealing way to handle this problem since it does not need any prior distribution information. To this end, we first exploit the equivalence between the information filter and the WLS estimator. Then, we formulate an optimization problem based on this relation, coupled with a consensus constraint. Finally, the consensus-based distributed WLS problem is tackled by the alternating direction method of multipliers (ADMM). Numerical simulations, together with theoretical analysis, verify the convergence of the method and the consensus of the estimates across nodes.
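A minimal sketch of the scheme, under assumed linear-Gaussian local measurement models (z_i = H_i x + v_i) rather than any specific sensor network: each node solves a local WLS problem with a proximal term, and the ADMM consensus variable approaches the centralized WLS solution.

```python
# Minimal sketch: consensus-based distributed WLS via ADMM.
import numpy as np

rng = np.random.default_rng(2)
n, m, nodes, rho = 4, 3, 6, 1.0
x_true = rng.normal(size=n)
H = [rng.normal(size=(m, n)) for _ in range(nodes)]
R = [np.diag(rng.uniform(0.5, 2.0, m)) for _ in range(nodes)]
z = [Hi @ x_true + rng.multivariate_normal(np.zeros(m), Ri)
     for Hi, Ri in zip(H, R)]
Rinv = [np.linalg.inv(Ri) for Ri in R]

x = [np.zeros(n) for _ in range(nodes)]   # local estimates
u = [np.zeros(n) for _ in range(nodes)]   # scaled dual variables
zc = np.zeros(n)                          # consensus variable

for _ in range(300):
    for i in range(nodes):
        # local WLS cost plus proximal term toward the consensus variable
        A = H[i].T @ Rinv[i] @ H[i] + rho * np.eye(n)
        b = H[i].T @ Rinv[i] @ z[i] + rho * (zc - u[i])
        x[i] = np.linalg.solve(A, b)
    zc = np.mean([xi + ui for xi, ui in zip(x, u)], axis=0)
    for i in range(nodes):
        u[i] += x[i] - zc                 # dual ascent on the consensus gap

# compare against the centralized WLS solution
A = sum(Hi.T @ Ri @ Hi for Hi, Ri in zip(H, Rinv))
b = sum(Hi.T @ Ri @ zi for Hi, Ri, zi in zip(H, Rinv, z))
print("consensus:", np.round(zc, 4))
print("central  :", np.round(np.linalg.solve(A, b), 4))
```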
We investigate theoretically and numerically the use of the least-squares finite-element method (LSFEM) for data-assimilation problems governed by the steady-state, incompressible Navier-Stokes equations. Our LSFEM discretization is based on a stress-velocity-pressure (S-V-P) first-order formulation, using discrete counterparts of the Sobolev spaces $H(\mathrm{div}) \times H^1 \times L^2$, respectively. The system is solved by minimizing a least-squares functional representing the magnitude of the residual of the equations. A simple and immediate way to extend this solver to data assimilation is to add a data-discrepancy term to the functional. Whereas most data-assimilation techniques require a large number of evaluations of the forward simulation and are therefore very expensive, the approach proposed in this work has the same cost as a single forward run. However, the question arises: what is the statistical model implied by this choice? We answer this within the Bayesian framework, establishing the latent background covariance model and the likelihood. Further, we demonstrate that, in the linear case, the method is equivalent to an application of the Kalman filter, and we derive the posterior covariance. We demonstrate the capabilities of our method on a backward-facing-step case. Our LSFEM formulation (without data) is shown to have good approximation quality, even on relatively coarse meshes, in particular with respect to mass conservation and reattachment location. Adding limited velocity measurements from experiment, we show that the method is able to correct for discretization error on very coarse meshes, as well as for the influence of unknown and uncertain boundary conditions.
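The linear-case equivalence can be illustrated with a small sketch (assumed operators A, H and data weight beta, not the paper's discretization): minimizing the residual functional plus a data-discrepancy term yields the same estimate as a Kalman update whose background covariance is implied by the residual term.

```python
# Minimal linear-case sketch: least-squares functional with a data term
# versus the equivalent Kalman (Bayesian MAP) update.
import numpy as np

rng = np.random.default_rng(3)
n, m = 5, 2
A = rng.normal(size=(n, n)) + 3 * np.eye(n)   # assumed forward operator
f = rng.normal(size=n)                        # assumed right-hand side
H = rng.normal(size=(m, n))                   # assumed observation operator
d = rng.normal(size=m)                        # assumed measured data
beta = 4.0                                    # data-term weight

# minimizer of J(x) = ||A x - f||^2 + beta * ||H x - d||^2
x_ls = np.linalg.solve(A.T @ A + beta * H.T @ H, A.T @ f + beta * H.T @ d)

# equivalent Kalman update: background x_b = A^{-1} f with implied
# covariance B = (A^T A)^{-1}, observation noise covariance R = (1/beta) I
x_b = np.linalg.solve(A, f)
B = np.linalg.inv(A.T @ A)
Rm = np.eye(m) / beta
K = B @ H.T @ np.linalg.inv(H @ B @ H.T + Rm)
x_kf = x_b + K @ (d - H @ x_b)

print(np.allclose(x_ls, x_kf))   # True: the two estimates coincide
```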
Partial measurements of relative position are relatively common during the observation of visual binary stars. However, these observations are typically discarded when estimating the orbit of a visual pair. In this article we present a novel framework for characterizing such orbits from a Bayesian standpoint, including partial observations of relative position as an input for the estimation of orbital parameters. Our aim is to formally incorporate the information contained in those partial measurements into the final inference in a systematic way. In the statistical literature, an imputation is defined as the replacement of a missing quantity with a plausible value. To compute posterior distributions of orbital parameters from partial observations, we propose a technique based on Markov chain Monte Carlo with multiple imputation. We present the methodology and test the algorithm on both synthetic and real observations, studying the effect of incorporating partial measurements into the parameter estimation. Our results suggest that including partial measurements in the characterization of visual binaries may reduce the uncertainty associated with each orbital element, in the sense of decreasing dispersion measures (such as the interquartile range) of the posterior distributions of the relevant orbital parameters. The extent to which the uncertainty decreases after the incorporation of new data (either complete or partial) depends on how informative those newly incorporated measurements are. Quantifying the information contained in each measurement remains an open issue.
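The following toy sketch conveys the idea on a deliberately simplified model (a bivariate normal mean in place of the Keplerian orbit model, with unit observation noise): at each MCMC iteration the missing coordinate of a partial observation is imputed from its conditional distribution, so partial measurements still inform the posterior rather than being discarded.

```python
# Toy sketch: Metropolis MCMC with an imputation step for partially
# observed 2-D positions (a stand-in for partial relative-position data).
import numpy as np

rng = np.random.default_rng(4)
mu_true = np.array([1.0, -0.5])
y = mu_true + rng.normal(size=(50, 2))   # 50 two-dimensional observations
y_full = y[10:]                          # fully observed points
x_part = y[:10, 0]                       # first 10: second coordinate missing

mu, chain = np.zeros(2), []
for _ in range(5000):
    # imputation step: draw each missing coordinate from its conditional
    # N(mu[1], 1) given the current parameter value
    y_imp = np.column_stack([x_part, mu[1] + rng.normal(size=x_part.size)])
    data = np.vstack([y_imp, y_full])
    # Metropolis step on mu (flat prior, unit observation noise)
    prop = mu + 0.1 * rng.normal(size=2)
    loglike = lambda m: -0.5 * np.sum((data - m) ** 2)
    if np.log(rng.uniform()) < loglike(prop) - loglike(mu):
        mu = prop
    chain.append(mu)

chain = np.array(chain[1000:])           # discard burn-in
print("posterior mean:", np.round(chain.mean(axis=0), 3), " true:", mu_true)
```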
We consider a nonparametric version of the integer-valued GARCH(1,1) model for time series of counts. The link function in the recursion for the variances is not specified by finite-dimensional parameters; instead, we impose nonparametric smoothness conditions. We propose a least-squares estimator for this function and show that it is consistent, with a rate that we conjecture to be nearly optimal.
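A schematic sketch of such an estimator, under assumptions of our own (a low-order polynomial sieve for the link function and a linear true link), is given below; the paper's actual function class and contrast may differ.

```python
# Schematic sketch: least-squares fit of the link f in an INGARCH(1,1)-type
# recursion lambda_t = f(lambda_{t-1}, Y_{t-1}), Y_t | past ~ Poisson(lambda_t).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
T = 1000
f_true = lambda lam, y: 1.0 + 0.3 * lam + 0.4 * y   # smooth link to recover

lam, ys = 1.0, []
for _ in range(T):                       # simulate the count series
    y = int(rng.poisson(lam))
    ys.append(y)
    lam = f_true(lam, y)
ys = np.array(ys, dtype=float)

def basis(lam, y):                       # low-order polynomial sieve for f
    return np.array([1.0, lam, y, lam * y, lam ** 2, y ** 2])

def ls_objective(theta):
    # least-squares contrast: E[Y_t | past] = lambda_t, with lambda
    # propagated through the candidate link
    lam, sse = 1.0, 0.0
    for t in range(T):
        sse += (ys[t] - lam) ** 2
        lam = float(np.clip(basis(lam, ys[t]) @ theta, 1e-3, 1e6))
    return sse / T

theta0 = np.array([0.5, 0.1, 0.1, 0.0, 0.0, 0.0])
theta_hat = minimize(ls_objective, theta0, method="Nelder-Mead",
                     options={"maxiter": 4000, "xatol": 1e-4}).x
print("fitted sieve coefficients:", np.round(theta_hat, 3))
```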