Radio tomographic imaging (RTI) is an emerging technology for locating physical objects in a geographical area covered by wireless networks. From the attenuation measurements collected at spatially distributed sensors, radio tomography capitalizes on spatial loss fields (SLFs) measuring the absorption of radio frequency waves at each location along the propagation path. These SLFs can be utilized for interference management in wireless communication networks, environmental monitoring, and survivor localization after natural disasters such as earthquakes. Key to the success of RTI is to accurately model the shadowing effects as the two-dimensional integral of the SLF scaled by a weight function; the SLF is then estimated using regularized regression. However, existing approaches are less effective when the propagation environment is heterogeneous. To cope with this, the present work introduces a piecewise homogeneous SLF governed by a hidden Markov random field (MRF) model. Efficient and tractable SLF estimators are developed by leveraging Markov chain Monte Carlo (MCMC) techniques. Furthermore, an uncertainty sampling method is developed to adaptively collect informative measurements for estimating the SLF. Numerical tests using synthetic and real datasets demonstrate the capabilities of the proposed algorithm for radio tomography and channel-gain estimation.
Radio tomographic imaging (RTI) is an emerging technology for localization of physical objects in a geographical area covered by wireless networks. With attenuation measurements collected at spatially distributed sensors, RTI capitalizes on spatial loss fields (SLFs) measuring the absorption of radio frequency waves at spatial locations along the propagation path. These SLFs can be utilized for interference management in wireless communication networks, environmental monitoring, and survivor localization after natural disasters such as earthquakes. Key to the success of RTI is to accurately model shadowing as the weighted line integral of the SLF. To learn the SLF exhibiting statistical heterogeneity induced by spatially diverse environments, the present work develops a Bayesian framework entailing a piecewise homogeneous SLF with an underlying hidden Markov random field model. Utilizing variational Bayes techniques, the novel approach yields efficient field estimators at affordable complexity. A data-adaptive sensor selection strategy is also introduced to collect informative measurements for effective reconstruction of the SLF. Numerical tests using synthetic and real datasets demonstrate the capabilities of the proposed approach to radio tomography and channel-gain estimation.
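Both versions of this abstract rest on the same measurement model: the shadowing on each sensor-to-sensor link is a weighted line integral of the SLF, discretized as s ≈ W f and inverted by regularized regression. The sketch below is only a baseline illustration of that model, not the MRF-based MCMC or variational estimator described above; it builds the classical normalized-ellipse weight matrix and recovers a toy SLF by ridge-regularized least squares. The grid size, ellipse parameter lambda, and regularization weight mu are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Discretize the monitored area into an n x n grid of SLF pixels.
n = 20
xs, ys = np.meshgrid(np.arange(n) + 0.5, np.arange(n) + 0.5)
pix = np.stack([xs.ravel(), ys.ravel()], axis=1)          # pixel centres

# Toy piecewise-constant SLF: one absorbing block in an otherwise free area.
f_true = np.zeros(n * n)
f_true[((xs > 8) & (xs < 13) & (ys > 6) & (ys < 11)).ravel()] = 1.0

# Sensors on the perimeter; every pair of sensors forms one link.
side = np.linspace(0.0, n, 8, endpoint=False)
sensors = np.concatenate([
    np.stack([side, np.zeros_like(side)], 1),
    np.stack([np.full_like(side, n), side], 1),
    np.stack([n - side, np.full_like(side, n)], 1),
    np.stack([np.zeros_like(side), n - side], 1),
])
links = [(i, j) for i in range(len(sensors)) for j in range(i + 1, len(sensors))]

# Normalized-ellipse weight model: a pixel contributes 1/sqrt(d) to a link when it
# lies inside an ellipse with the two sensors as foci (lambda controls its width).
lam = 1.0
W = np.zeros((len(links), n * n))
for li, (i, j) in enumerate(links):
    d = np.linalg.norm(sensors[i] - sensors[j])
    di = np.linalg.norm(pix - sensors[i], axis=1)
    dj = np.linalg.norm(pix - sensors[j], axis=1)
    W[li, di + dj < d + lam] = 1.0 / np.sqrt(d)

# Noisy shadowing measurements: discretized weighted line integral plus noise.
s = W @ f_true + 0.05 * rng.normal(size=len(links))

# Ridge-regularized least-squares SLF estimate (closed form).
mu = 0.5
f_hat = np.linalg.solve(W.T @ W + mu * np.eye(n * n), W.T @ s)
print("relative reconstruction error:",
      np.linalg.norm(f_hat - f_true) / np.linalg.norm(f_true))
```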
We report an experimental realization of an adaptive quantum state tomography protocol. Our method takes advantage of a Bayesian approach to statistical inference and is naturally tailored for adaptive strategies. For pure states we observe close to $1/N$ scaling of the infidelity with the overall number of registered events, whereas the best non-adaptive protocols allow only $1/\sqrt{N}$ scaling. Experiments are performed for polarization qubits, but the approach is readily adapted to any dimension.
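A minimal sketch of the Bayesian ingredient only, not the experimental protocol above: the unknown pure qubit state is represented by weighted particles on the Bloch sphere, the posterior is updated sequentially from simulated projective-measurement outcomes, and the next measurement axis is chosen from a small candidate set to minimize the expected posterior spread. The candidate axes, particle count, and utility function are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def prob_plus(r, a):
    """P(outcome +1) when a state with Bloch vector(s) r is measured along axis a."""
    return 0.5 * (1.0 + r @ a)

# Posterior over the unknown pure state: uniform particles on the Bloch sphere.
n_part = 5000
particles = rng.normal(size=(n_part, 3))
particles /= np.linalg.norm(particles, axis=1, keepdims=True)
logw = np.zeros(n_part)                                  # uniform prior weights

r_true = np.array([0.6, 0.64, 0.48])                     # hidden true state (unit norm)
axes = [np.array(a, float) for a in
        [(1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 1, 0), (1, 0, 1), (0, 1, 1)]]
axes = [a / np.linalg.norm(a) for a in axes]

def weights(logw):
    w = np.exp(logw - logw.max())
    return w / w.sum()

for _ in range(300):
    w = weights(logw)
    # Adaptive step: pick the candidate axis minimizing the expected posterior spread.
    best_axis, best_u = None, np.inf
    for a in axes:
        p = prob_plus(particles, a)
        u = 0.0
        for po in (p, 1.0 - p):                  # the two possible outcomes
            pred = w @ po                        # predictive probability of this outcome
            if pred < 1e-12:
                continue
            wo = w * po / pred                   # hypothetical posterior weights
            mean = wo @ particles
            u += pred * (wo @ np.sum((particles - mean) ** 2, axis=1))
        if u < best_u:
            best_u, best_axis = u, a
    # Simulate the measurement on the true state, then perform the Bayesian update.
    plus = rng.random() < prob_plus(r_true, best_axis)
    p = prob_plus(particles, best_axis)
    logw += np.log(np.clip(p if plus else 1.0 - p, 1e-12, None))

r_hat = weights(logw) @ particles
print("estimate:", np.round(r_hat / np.linalg.norm(r_hat), 3), "true:", r_true)
```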
Modern genomic studies are increasingly focused on discovering more and more genes associated with a health response. Traditional shrinkage priors are primarily designed to detect a handful of signals from tens of thousands of predictors. Under diverse sparsity regimes, the nature of signal detection is tied to the tail behaviour of the prior. A desirable tail behaviour is the tail-adaptive shrinkage property, whereby the tail-heaviness of the prior adaptively becomes larger (or smaller) as the sparsity level increases (or decreases), so as to accommodate more (or fewer) signals. We propose a global-local-tail (GLT) Gaussian mixture distribution that ensures this property and provides accurate inference under diverse sparsity regimes. Incorporating a peaks-over-threshold method from extreme value theory, we develop an automated tail-learning algorithm for the GLT prior. We compare the performance of the GLT prior to that of the Horseshoe prior on two gene expression datasets and in numerical examples. The results suggest that a varying-tail rule is advantageous over a fixed-tail rule under diverse sparsity regimes.
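The peaks-over-threshold idea this abstract leans on is standard extreme value theory: exceedances over a high threshold are approximately generalized-Pareto distributed, and the fitted shape parameter summarizes tail-heaviness. The snippet below only illustrates that building block on synthetic effect sizes; it is not the GLT prior or its tail-learning algorithm, and the threshold quantile and synthetic data are illustrative assumptions.

```python
import numpy as np
from scipy.stats import genpareto, t as student_t

rng = np.random.default_rng(3)

# Synthetic "effect sizes": mostly near-zero noise plus a few heavy-tailed signals.
beta = np.concatenate([rng.normal(0.0, 0.1, size=950),
                       student_t(df=2).rvs(size=50, random_state=rng)])

# Peaks-over-threshold: keep exceedances of |beta| above a high quantile ...
u = np.quantile(np.abs(beta), 0.95)
exceed = np.abs(beta)[np.abs(beta) > u] - u

# ... and fit a generalized Pareto distribution to them (location fixed at 0).
shape, loc, scale = genpareto.fit(exceed, floc=0.0)
print(f"threshold u = {u:.3f}, GPD shape (tail index) = {shape:.3f}, scale = {scale:.3f}")
# A larger positive shape indicates a heavier tail, i.e. relatively more large signals.
```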
A Bayesian approach to quantum process tomography has yet to be fully developed due to the lack of appropriate probability distributions on the space of quantum channels. Here, by associating the Choi matrix form of a completely positive, trace-preserving (CPTP) map with a particular space of matrices with orthonormal columns, called a Stiefel manifold, we present two parametric probability distributions on the space of CPTP maps that enable Bayesian analysis of process tomography. The first is a probability distribution that has an average Choi matrix as a sufficient statistic. The second is a distribution resulting from binomial likelihood data, which provides a simple connection to data gathered in process tomography experiments. To our knowledge, these are the first examples of continuous, non-unitary random CPTP maps that capture meaningful prior information for use in Bayesian estimation. We show how these distributions can be used for point estimation, via either maximum a posteriori or expected a posteriori estimates, as well as for full Bayesian tomography yielding posterior credibility intervals. This approach will enable the full power of Bayesian analysis in all forms of quantum characterization, verification, and validation.
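For intuition about the parametrization, the sketch below draws a random point on a complex Stiefel manifold (a matrix with orthonormal columns), reads off Kraus operators, builds the Choi matrix, and verifies complete positivity and trace preservation. This is a generic random-isometry construction, not the specific parametric distributions (or their sufficient statistics) proposed in the paper; the system and environment dimensions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
d, d_env = 2, 2                       # system and environment dimensions (illustrative)

# Random point on the complex Stiefel manifold St(d*d_env, d): QR of a Gaussian matrix.
G = rng.normal(size=(d * d_env, d)) + 1j * rng.normal(size=(d * d_env, d))
V, _ = np.linalg.qr(G)                # V has orthonormal columns: V^dag V = I_d

# Kraus operators of the induced CPTP map: K_k = (<k|_env (x) I) V.
kraus = [V[k * d:(k + 1) * d, :] for k in range(d_env)]

# Choi matrix J = sum_{ij} |i><j| (x) Phi(|i><j|).
J = np.zeros((d * d, d * d), dtype=complex)
for i in range(d):
    for j in range(d):
        E = np.zeros((d, d), dtype=complex)
        E[i, j] = 1.0
        phi_E = sum(K @ E @ K.conj().T for K in kraus)
        J += np.kron(E, phi_E)

# Complete positivity: J is positive semidefinite (up to numerical error).
print("min Choi eigenvalue:", np.linalg.eigvalsh(J).min())
# Trace preservation: tracing out the output subsystem returns the identity.
Tr_out = J.reshape(d, d, d, d).trace(axis1=1, axis2=3)
print("Tr_out(J) equals I:", np.allclose(Tr_out, np.eye(d)))
```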
Variational Bayes (VB) has been used to facilitate the calculation of the posterior distribution in Bayesian inference of the parameters of nonlinear models from data. Previously, an analytical formulation of VB was derived for nonlinear model inference on data with additive Gaussian noise, as an alternative to nonlinear least squares. Here a stochastic solution is derived that avoids some of the approximations required by the analytical formulation, offering an approach that can be deployed more flexibly for nonlinear model inference problems. The stochastic VB solution was used for inference on a biexponential toy problem, where its algorithmic parameter space was explored, before being deployed on real data from a magnetic resonance imaging study of perfusion. The new method was found to achieve parameter recovery comparable to the analytic solution and to be competitive in computational speed despite relying on sampling.
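A minimal sketch of stochastic VB in the spirit described here, assuming a mean-field Gaussian posterior, a known noise level, and reparameterization-trick Monte Carlo gradients with Adam-style updates on a biexponential toy problem. The model parameters, priors, and step sizes are illustrative assumptions, and this is not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic biexponential data: y = A1*exp(-r1*t) + A2*exp(-r2*t) + Gaussian noise.
t = np.linspace(0.0, 5.0, 50)
theta_true = np.array([1.0, 1.0, 0.5, 4.0])          # (A1, r1, A2, r2), illustrative
sigma_n = 0.05                                       # noise std, assumed known

def model(theta):
    A1, r1, A2, r2 = theta
    return A1 * np.exp(-r1 * t) + A2 * np.exp(-r2 * t)

def jac(theta):
    """Jacobian of the model w.r.t. (A1, r1, A2, r2), shape (len(t), 4)."""
    A1, r1, A2, r2 = theta
    e1, e2 = np.exp(-r1 * t), np.exp(-r2 * t)
    return np.stack([e1, -A1 * t * e1, e2, -A2 * t * e2], axis=1)

y = model(theta_true) + sigma_n * rng.normal(size=t.size)

prior_mu, prior_sd = np.zeros(4), 10.0 * np.ones(4)  # weak Gaussian prior

def grad_log_joint(theta):
    """Gradient of log p(y|theta) + log p(theta)."""
    return jac(theta).T @ (y - model(theta)) / sigma_n**2 - (theta - prior_mu) / prior_sd**2

# Mean-field Gaussian q(theta) = N(mu, diag(exp(log_sd))^2), fitted by maximizing the
# ELBO with reparameterization-trick Monte Carlo gradients and Adam updates.
mu, log_sd = np.array([0.5, 0.5, 0.5, 2.0]), np.log(0.5 * np.ones(4))
n_mc, lr, b1, b2 = 8, 0.01, 0.9, 0.999
m, v = np.zeros(8), np.zeros(8)

for it in range(1, 5001):
    eps = rng.normal(size=(n_mc, 4))
    sd = np.exp(log_sd)
    g = np.array([grad_log_joint(mu + sd * e) for e in eps])
    grad_mu = g.mean(axis=0)
    grad_log_sd = (g * eps).mean(axis=0) * sd + 1.0  # +1 is the Gaussian entropy term
    grad = np.concatenate([grad_mu, grad_log_sd])
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad**2
    step = lr * (m / (1 - b1**it)) / (np.sqrt(v / (1 - b2**it)) + 1e-8)
    mu, log_sd = mu + step[:4], log_sd + step[4:]

print("posterior mean:", mu.round(3))
print("posterior sd  :", np.exp(log_sd).round(4))
print("true values   :", theta_true)
```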