
Improving self-calibration

 Added by Torsten Ensslin
Publication date: 2013
Language: English





Response calibration is the process of inferring how strongly the measured data depend on the signal one is interested in. It is essential for any quantitative signal estimation on the basis of the data. Here, we investigate self-calibration methods for linear signal measurements in which the response depends linearly on the calibration parameters. The common practice is to augment an external calibration solution, obtained with a known reference signal, with an internal calibration on the unknown measurement signal itself. Contemporary self-calibration schemes try to find a self-consistent solution for signal and calibration by exploiting redundancies in the measurements. This can be understood as maximizing the joint probability of signal and calibration. However, these schemes do not take into account the full uncertainty structure of this joint probability around its maximum. Therefore, better schemes -- in the sense of minimal squared error -- can be designed by accounting for asymmetries in the uncertainties of signal and calibration. We argue that at least a systematic correction of the common self-calibration scheme should be applied in many measurement situations in order to properly treat the uncertainty of the signal on which one calibrates. Otherwise the calibration solution suffers from a systematic bias, which consequently distorts the signal reconstruction. Furthermore, we argue that non-parametric, signal-to-noise-filtered calibration should provide more accurate reconstructions than the common bin averages, and we provide a new, improved self-calibration scheme. We illustrate our findings with a simplistic numerical example.
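
The classical self-calibration scheme discussed above can be pictured as alternating maximum-a-posteriori updates of signal and calibration. The following toy sketch (not the paper's code; the model, variances, block structure, and iteration count are all assumptions made here for illustration) alternates Wiener-filter updates for a signal s and two multiplicative gain parameters in d = (1 + gamma) * s + n. Note that the calibration step treats the current signal estimate as exact, ignoring its uncertainty; this is precisely the source of the systematic bias the abstract describes.

```python
# Toy alternating self-calibration for d = (1 + gamma) * s + n with
# Gaussian priors on both s and gamma. All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(0)
npix = 200
S_var, gamma_var, noise_var = 1.0, 0.1, 0.05   # assumed prior/noise variances

s_true = rng.normal(0.0, np.sqrt(S_var), npix)
gamma_true = np.repeat(rng.normal(0.0, np.sqrt(gamma_var), 2), npix // 2)
d = (1 + gamma_true) * s_true + rng.normal(0.0, np.sqrt(noise_var), npix)

s_est = d.copy()                 # crude starting point
gamma_est = np.zeros(npix)
for _ in range(20):
    # Calibration step: Gaussian posterior mean of gamma per block, with the
    # current signal estimate treated as exact (the biased step in question).
    for blk in (slice(0, npix // 2), slice(npix // 2, npix)):
        num = np.sum(s_est[blk] * (d[blk] - s_est[blk])) / noise_var
        den = 1.0 / gamma_var + np.sum(s_est[blk] ** 2) / noise_var
        gamma_est[blk] = num / den
    # Signal step: Wiener filter of s given the current calibration.
    R = 1 + gamma_est
    s_est = (R * d / noise_var) / (1.0 / S_var + R ** 2 / noise_var)

print("block gains, estimated:", gamma_est[0], gamma_est[-1])
print("block gains, true:     ", gamma_true[0], gamma_true[-1])
```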



Related research


Given one or more uses of a classical channel, only a certain number of messages can be transmitted with zero probability of error. The study of this number and its asymptotic behaviour constitutes the field of classical zero-error information theory, the quantum generalisation of which has started to develop recently. We show that, given a single use of certain classical channels, entangled states of a system shared by the sender and receiver can be used to increase the number of (classical) messages which can be sent with no chance of error. In particular, we show how to construct such a channel based on any proof of the Bell-Kochen-Specker theorem. This is a new example of the use of quantum effects to improve the performance of a classical task. We investigate the connection between this phenomenon and that of "pseudo-telepathy" games. The use of generalised non-signalling correlations to assist in this task is also considered. In this case, a particularly elegant theory results and, remarkably, it is sometimes possible to transmit information with zero error using a channel with no unassisted zero-error capacity.
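
For a single channel use, the number of messages transmissible with zero error equals the independence number of the channel's confusability graph, in which two inputs are joined whenever some output can arise from both. The sketch below (with a toy channel matrix invented here, not taken from the paper) computes this unassisted one-shot quantity by brute force; the result described above is that shared entanglement can exceed it for suitably constructed channels.

```python
# One-shot zero-error message count of a classical channel = independence
# number of its confusability graph. The channel below is an arbitrary toy.
from itertools import combinations

import numpy as np

P = np.array([  # P[x, y] = p(y | x) for a 5-input, 4-output toy channel
    [0.5, 0.5, 0.0, 0.0],
    [0.0, 0.5, 0.5, 0.0],
    [0.0, 0.0, 0.5, 0.5],
    [0.5, 0.0, 0.0, 0.5],
    [0.3, 0.0, 0.7, 0.0],
])
n = P.shape[0]

def confusable(x, xp):
    """Inputs x, xp are confusable if some output is possible under both."""
    return bool(np.any((P[x] > 0) & (P[xp] > 0)))

# Brute-force the independence number, trying the largest subsets first.
best = 1
for k in range(n, 0, -1):
    for subset in combinations(range(n), k):
        if all(not confusable(x, xp) for x, xp in combinations(subset, 2)):
            best = k
            break
    else:
        continue
    break
print("zero-error messages in one unassisted use:", best)
```
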
We consider the application of relative self-calibration using overlap regions to spectroscopic galaxy surveys that use slit-less spectroscopy. This method is based on that developed for the SDSS by Padmanabhan et al. (2008) in that we jointly fit and marginalise over calibrator brightness, rather than treating it as a set of free parameters. However, we separate the detector-to-detector calibration from the full-focal-plane exposure-to-exposure calibration. To demonstrate how the calibration procedure will work, we simulate the procedure for a potential implementation of the spectroscopic component of the wide Euclid survey. We study the change of coverage and the determination of relative multiplicative errors in flux measurements for different dithering configurations. We use the new method to study the case where the flat-field across each exposure or detector is measured precisely and only exposure-to-exposure or detector-to-detector variation in the flux error remains. We consider several base dither patterns and find that they strongly influence the ability to calibrate using this methodology. To enable self-calibration, it is important that the survey strategy connects different observations with at least a minimum amount of overlap, and we propose an S-pattern for dithering that fulfils this requirement. The final survey strategy adopted by Euclid will have to optimise for a number of different science goals and requirements. The large-scale calibration of the spectroscopic galaxy survey is clearly cosmologically crucial, but it is not the only such requirement.
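
The joint fit-and-marginalise step can be illustrated with a small least-squares toy (hypothetical sizes and noise levels; this is not Euclid pipeline code): each measured log-flux is modelled as a per-exposure zero-point plus a per-source brightness, both parameter sets are fitted jointly, and the brightnesses are then discarded. A gauge constraint is needed because a constant can be traded freely between zero-points and brightnesses.

```python
# Overlap-based relative calibration: m = z[exposure] + b[source] + noise.
# Sizes, noise levels, and the random overlap pattern are illustrative.
import numpy as np

rng = np.random.default_rng(1)
n_exp, n_src, n_obs = 8, 40, 400
z_true = rng.normal(0.0, 0.05, n_exp)      # per-exposure zero-points
b_true = rng.normal(20.0, 1.0, n_src)      # calibrator brightnesses
exp_id = rng.integers(0, n_exp, n_obs)     # which exposure saw each datum
src_id = rng.integers(0, n_src, n_obs)     # which source each datum is
m = z_true[exp_id] + b_true[src_id] + rng.normal(0.0, 0.02, n_obs)

# Design matrix [exposure block | source block] plus one gauge row sum(z) = 0.
A = np.zeros((n_obs + 1, n_exp + n_src))
A[np.arange(n_obs), exp_id] = 1.0
A[np.arange(n_obs), n_exp + src_id] = 1.0
A[n_obs, :n_exp] = 1.0
y = np.append(m, 0.0)

theta, *_ = np.linalg.lstsq(A, y, rcond=None)
z_est = theta[:n_exp]                      # brightnesses theta[n_exp:] discarded
rms = np.sqrt(np.mean((z_est - (z_true - z_true.mean())) ** 2))
print("rms zero-point error after calibration:", rms)
```
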
This paper is concerned with algorithms for the calibration of direction-dependent effects (DDE) in aperture synthesis radio telescopes (ASRT). After correction of direction-independent effects (DIE) using self-calibration, imaging performance can be limited by imprecise knowledge of the forward gain of the elements in the array. In general, the forward gain pattern is directionally dependent and varies with time for a number of reasons. Some factors, such as the rotation of the primary beam with parallactic angle for azimuth-elevation mount antennas, are known a priori. Others, such as antenna pointing errors and structural deformation/projection effects for aperture-array elements, cannot be measured a priori. Thus, in addition to algorithms that correct for DD effects known a priori, algorithms to solve for DD gains are required for high-dynamic-range imaging. Here, we discuss a mathematical framework for antenna-based DDE calibration algorithms and show that this framework leads to computationally efficient optimal algorithms which scale well in a parallel computing environment. As an example of an antenna-based DD calibration algorithm, we demonstrate the Pointing SelfCal algorithm, which solves for the antenna pointing errors. Our analysis shows that the sensitivity of modern ASRT is sufficient to solve for antenna pointing errors and other DD effects. We also discuss the use of the Pointing SelfCal algorithm in real-time calibration systems, and extensions to a Shape SelfCal algorithm for real-time tracking of and correction for pointing offsets and changes in antenna shape.
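
A minimal caricature of the Pointing SelfCal idea follows (with a 1-D Gaussian beam and a grid search standing in for the actual antenna-based solver; all numbers are assumptions): given a known sky model, the apparent gain toward each source constrains a pointing offset through the shape of the primary beam.

```python
# Toy pointing-error solve: fit one pointing offset from the apparent
# fluxes of known sources seen through a Gaussian primary beam.
import numpy as np

rng = np.random.default_rng(2)
fwhm = 0.5                                   # assumed beam FWHM in degrees
sigma = fwhm / 2.355

def beam(l, dl):
    """Gaussian primary-beam gain at sky offset l for pointing error dl."""
    return np.exp(-0.5 * ((l - dl) / sigma) ** 2)

src_l = np.array([-0.3, -0.1, 0.0, 0.15, 0.25])  # source offsets from centre
flux = np.array([2.0, 1.0, 3.0, 1.5, 0.8])       # known sky model
dl_true = 0.07                                   # pointing error to recover
data = flux * beam(src_l, dl_true) + rng.normal(0.0, 0.02, src_l.size)

# Chi-square grid search over candidate pointing offsets.
grid = np.linspace(-0.2, 0.2, 4001)
chi2 = [np.sum((data - flux * beam(src_l, dl)) ** 2) for dl in grid]
print("recovered pointing offset:", grid[int(np.argmin(chi2))])
```
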
A new method for analyzing the returns of the custom-made micro-LIDAR system operated alongside the two MAGIC telescopes makes it possible to apply atmospheric corrections in the MAGIC data analysis chain. Such corrections extend the effective observation time of MAGIC under adverse atmospheric conditions and reduce the systematic errors of energy and flux in the data analysis. The LIDAR provides a range-resolved atmospheric backscatter profile from which the extinction of Cherenkov light from air-shower events can be estimated. Knowledge of the extinction allows the true image parameters, including energy and flux, to be reconstructed. Our final goal is to recover the source-intrinsic energy spectrum also for data affected by atmospheric extinction from aerosol layers, such as clouds.
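
The correction step can be sketched as follows (toy numbers, not MAGIC analysis code): the LIDAR backscatter profile yields an aerosol transmission as a function of the height at which the Cherenkov light was emitted, and reconstructed energies are scaled up by the fraction of light lost to extinction.

```python
# Toy atmospheric correction: rescale a reconstructed energy by the
# LIDAR-derived Cherenkov-light transmission. All values are illustrative.
import numpy as np

heights = np.array([3.0, 5.0, 7.0, 9.0])          # emission heights, km
transmission = np.array([0.75, 0.85, 0.95, 1.0])  # aerosol transmission profile

def correct_energy(e_est_tev, emission_height_km):
    """Scale a measured energy up by the light lost below the aerosol layer."""
    t = np.interp(emission_height_km, heights, transmission)
    return e_est_tev / t

print(correct_energy(1.0, 4.0))   # event dimmed by a low aerosol layer
```
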
We consider the class of convex minimization problems composed of a self-concordant function, such as the $\log\det$ metric, a convex data-fidelity term $h(\cdot)$, and a regularizing -- possibly non-smooth -- function $g(\cdot)$. Problems of this type have recently attracted a great deal of interest, mainly due to their omnipresence in top-notch applications. Under this locally Lipschitz continuous gradient setting, we analyze the convergence behavior of proximal Newton schemes with the added twist of possibly inexact evaluations. We prove attractive convergence-rate guarantees and enhance state-of-the-art optimization schemes to accommodate such developments. Experimental results on sparse covariance estimation show the merits of our algorithm, both in terms of recovery efficiency and complexity.
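
A standard instance of this problem class is sparse inverse-covariance estimation: minimize $-\log\det X + \langle S, X \rangle + \lambda \|X\|_1$ over positive-definite $X$. The sketch below uses a plain proximal-gradient loop with a positive-definiteness backtrack as a simplified stand-in for the proximal Newton schemes analyzed in the paper; the data, step size, and regularization weight are illustrative.

```python
# Proximal-gradient sketch for min_X -log det X + <S, X> + lam * ||X||_1
# over positive-definite X (a simplified stand-in for proximal Newton).
import numpy as np

rng = np.random.default_rng(3)
p, lam = 5, 0.1
S = np.cov(rng.normal(size=(200, p)), rowvar=False)  # empirical covariance

def soft(A, t):
    """Entrywise soft-thresholding: the proximal map of t * ||.||_1."""
    return np.sign(A) * np.maximum(np.abs(A) - t, 0.0)

X = np.eye(p)
step = 0.5
for _ in range(300):
    grad = S - np.linalg.inv(X)              # gradient of the smooth part
    X_new = soft(X - step * grad, step * lam)
    X_new = (X_new + X_new.T) / 2.0          # keep the iterate symmetric
    if np.linalg.eigvalsh(X_new)[0] <= 1e-8:
        step *= 0.5                          # backtrack to stay positive definite
        continue
    X = X_new

print("estimated sparse precision matrix:\n", np.round(X, 2))
```
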
