
A new approach to the reduction of Carte du Ciel plates

Added by Amelia Ortiz-Gil
Publication date: 1997
Field: Physics
Language: English





A new procedure for the reduction of Carte du Ciel plates is presented. A typical Carte du Ciel plate corresponding to the Bordeaux zone has been taken as an example. Each object appears as a triple exposure, and the data have been modelled by means of a non-linear least-squares fit of the sum of three bivariate Gaussian distributions. A number of solutions to the problems present in this kind of plate (optical aberrations, adjacency photographic effects, presence of grid lines, emulsion saturation) have been investigated. An internal accuracy of 0.1 in x and y was obtained for the position of each of the individual exposures. The external reduction to a catalogue led to results with an accuracy of 0.16 in x and 0.13 in y for the mean position of the three exposures. A photometric calibration has also been performed, and magnitudes were determined with an accuracy of 0.09 mag.
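As an illustration of the modelling step described above, the following is a minimal Python sketch, not the authors' code, of fitting the sum of three bivariate Gaussian profiles plus a constant background to a triple-exposure stellar image with scipy.optimize.least_squares. All names and the synthetic image are hypothetical, and the Gaussians are assumed uncorrelated in x and y for simplicity.

import numpy as np
from scipy.optimize import least_squares

def triple_gaussian(params, x, y):
    # Sum of three uncorrelated bivariate Gaussians plus a constant background.
    model = np.full_like(x, params[-1], dtype=float)
    for i in range(3):
        amp, x0, y0, sx, sy = params[5 * i:5 * i + 5]
        model += amp * np.exp(-0.5 * (((x - x0) / sx) ** 2 + ((y - y0) / sy) ** 2))
    return model

def residuals(params, x, y, data):
    return (triple_gaussian(params, x, y) - data).ravel()

# Synthetic 32x32 stamp standing in for one triple-exposure stellar image.
yy, xx = np.mgrid[0:32, 0:32].astype(float)
true = np.array([1.0, 10, 16, 1.5, 1.5,
                 0.9, 16, 16, 1.5, 1.5,
                 1.1, 22, 16, 1.5, 1.5,
                 0.05])
image = triple_gaussian(true, xx, yy)
image += 0.01 * np.random.default_rng(0).standard_normal(image.shape)

# Initial guesses: (amplitude, x0, y0, sigma_x, sigma_y) per exposure, plus background.
p0 = np.array([1, 9, 15, 2, 2,   1, 15, 15, 2, 2,   1, 21, 15, 2, 2,   0.0])
fit = least_squares(residuals, p0, args=(xx, yy, image))
centres = fit.x[:15].reshape(3, 5)[:, 1:3]   # fitted (x0, y0) of each of the three exposures
print(centres)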


Read More

We want to study whether the astrometric and photometric accuracies obtained for the Carte du Ciel plates digitized with a commercial digital camera are high enough for scientific exploitation of the plates. We use a Canon EOS 5Ds digital camera with a 100 mm macro lens for the digitization. We analyze six single-exposure plates and four triple-exposure plates from the Helsinki zone of Carte du Ciel (+39 deg < delta < +47 deg). Each plate is digitized using four images, with a significant central area being covered twice for quality-control purposes. The astrometric calibration of the digitized images is done with data from the Gaia TGAS (Tycho-Gaia Astrometric Solution) of the first Gaia data release (Gaia DR1), Tycho-2, HSOY (Hot Stuff for One Year), UCAC5 (USNO CCD Astrograph Catalog), and PMA catalogs. The best astrometric accuracy is obtained with the UCAC5 reference stars. The astrometric accuracy for single-exposure plates is sigma(R.A.) = 0.16 and sigma(Dec.) = 0.15, expressed as a Gaussian deviation of the astrometric residuals. For triple-exposure plates the astrometric accuracy is sigma(R.A.) = 0.12 and sigma(Dec.) = 0.13. The 1-sigma uncertainty of the photometric calibration is about 0.28 mag and 0.24 mag for single- and triple-exposure plates, respectively. We detect the photographic adjacency (Kostinsky) effect in the triple-exposure plates. We show that accuracies at least at the level of scanning machines can be achieved with a digital camera, without any corrections for possible distortions caused by our instrumental setup. This method can be used to rapidly and inexpensively digitize and calibrate old photographic plates, enabling their scientific exploitation.
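For context, the astrometric calibration step described above amounts, in its simplest form, to fitting plate constants against a reference catalogue. Below is a minimal Python sketch of a classical six-constant linear solution; it is illustrative only, the function names are hypothetical, and the actual pipeline may use higher-order terms together with the specific catalogues listed above.

import numpy as np

def fit_plate_constants(x, y, xi, eta):
    # Least-squares solution of xi = a*x + b*y + c and eta = d*x + e*y + f,
    # where (x, y) are measured positions on the digitized plate and
    # (xi, eta) are tangent-plane coordinates of the reference-catalogue stars.
    A = np.column_stack([x, y, np.ones_like(x)])
    coef_xi, *_ = np.linalg.lstsq(A, xi, rcond=None)
    coef_eta, *_ = np.linalg.lstsq(A, eta, rcond=None)
    return coef_xi, coef_eta

def apply_plate_constants(coef_xi, coef_eta, x, y):
    A = np.column_stack([x, y, np.ones_like(x)])
    return A @ coef_xi, A @ coef_eta

# The rms of (xi - xi_fit) and (eta - eta_fit) over the reference stars gives
# astrometric residuals analogous to the sigma(R.A.) and sigma(Dec.) values quoted above.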
John Collins, 2019
Lehmann, Symanzik and Zimmermann (LSZ) proved a theorem showing how to obtain the S-matrix from time-ordered Green functions. Their result, the reduction formula, is fundamental to practical calculations of scattering processes. A known problem is that the operators that they use to create asymptotic states create much else besides the intended particles for a scattering process. In the infinite-time limits appropriate to scattering, the extra contributions only disappear in matrix elements with normalizable states, rather than in the created states themselves, i.e., the infinite-time limits of the LSZ creation operators are weak limits. The extra particles that are created are in a different region of space-time than the intended scattering process. To be able to work with particle creation at non-asymptotic times, e.g., to give a transparent and fully deductive treatment for scattering with long-lived unstable particles, it is necessary to have operators for which the infinite-time limits are strong limits. In this paper, I give an improved method of constructing such operators. I use them to give an improved systematic account of scattering theory in relativistic quantum field theories, including a new proof of the reduction formula. I make explicit calculations to illustrate the problems with the LSZ operators and their solution with the new operators. Not only do these verify the existence of the extra particles created by the LSZ operators and indicate a physical interpretation, but they also show that the extra components are so large that their contribution to the norm of the state is ultra-violet divergent in renormalizable theories. Finally, I discuss the relation of this work to the work of Haag and Ruelle on scattering theory.
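For reference, the reduction formula discussed above has the following standard textbook form for a real scalar field of mass m with field-strength renormalization Z; this is the conventional statement rather than the paper's improved construction, and normalization conventions vary. In LaTeX notation:

\langle p_1 \dots p_n \,\mathrm{out} \mid q_1 \dots q_m \,\mathrm{in} \rangle
  = \prod_{i=1}^{n} \left[ \frac{i}{\sqrt{Z}} \int d^4x_i \, e^{i p_i \cdot x_i} \left( \Box_{x_i} + m^2 \right) \right]
    \prod_{j=1}^{m} \left[ \frac{i}{\sqrt{Z}} \int d^4y_j \, e^{-i q_j \cdot y_j} \left( \Box_{y_j} + m^2 \right) \right]
    \langle \Omega \mid T\, \phi(x_1) \cdots \phi(x_n)\, \phi(y_1) \cdots \phi(y_m) \mid \Omega \rangle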
We have developed a new method, close in philosophy to the photometric redshift technique, which can be applied to spectral data of very low signal-to-noise ratio. Using it we intend to measure redshifts while minimising the dangers posed by the usual extraction techniques. GRB afterglows have generally very simple optical spectra over which the separate effects of absorption and reddening in the GRB host, the intergalactic medium, and our own Galaxy are superimposed. We model all these effects over a series of template afterglow spectra to produce a set of clean spectra that reproduce what would reach our telescope. We also model carefully the effects of the telescope-spectrograph combination and the properties of noise in the data, which are then applied to the template spectra. The final templates are compared to the two-dimensional spectral data, and the basic parameters (redshift, spectral index, Hydrogen absorption column) are estimated using statistical tools. We show how our method works by applying it to our data of the NIR afterglow of GRB090423. At z ~ 8.2, this was the most distant object ever observed. We use the spectrum taken by our team with the Telescopio Nazionale Galileo to derive the GRB redshift and its intrinsic neutral Hydrogen column density. Our best fit yields z = 8.4 (+0.05, -0.03) and N(HI) < 5x10^20 cm^-2, but with a highly non-Gaussian uncertainty that includes the redshift range z = [6.7, 8.5] at the 2-sigma confidence level. Our method will be useful to maximise the recovered information from low-quality spectra, particularly when the set of possible spectra is limited or easily parameterisable, while at the same time ensuring an adequate confidence analysis.
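The template-comparison idea described above can be illustrated with a toy chi-square grid search in Python. This is purely schematic: the function and variable names are hypothetical, the sharp Lyman-alpha break is a crude placeholder for the full absorption and reddening modelling, and the real method works on two-dimensional spectral data with a proper noise model.

import numpy as np

def afterglow_template(wave, z, beta):
    # Toy afterglow template: power-law continuum with a sharp Lyman-alpha break.
    # A real treatment would model the damping wing from N(HI), intergalactic
    # absorption, host and Galactic reddening, and the instrument response.
    flux = (wave / 10000.0) ** (-beta)
    return np.where(wave < 1215.67 * (1.0 + z), 1e-3 * flux, flux)

def best_fit(wave, flux, err, z_grid, beta_grid):
    chi2 = np.full((len(z_grid), len(beta_grid)), np.inf)
    for i, z in enumerate(z_grid):
        for j, beta in enumerate(beta_grid):
            model = afterglow_template(wave, z, beta)
            # Analytic best-fit normalization of the template to the data.
            scale = np.sum(model * flux / err**2) / np.sum(model**2 / err**2)
            chi2[i, j] = np.sum(((flux - scale * model) / err) ** 2)
    i_best, j_best = np.unravel_index(np.argmin(chi2), chi2.shape)
    return z_grid[i_best], beta_grid[j_best], chi2   # chi2 surface for the confidence analysis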
Kernel methods have great promise for learning rich statistical representations of large modern datasets. However, compared to neural networks, kernel methods have been perceived as lacking in scalability and flexibility. We introduce a family of fast, flexible, lightly parametrized and general purpose kernel learning methods, derived from Fastfood basis function expansions. We provide mechanisms to learn the properties of groups of spectral frequencies in these expansions, which require only O(m log d) time and O(m) memory, for m basis functions and d input dimensions. We show that the proposed methods can learn a wide class of kernels, outperforming the alternatives in accuracy, speed, and memory consumption.
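To make the idea of spectral-frequency expansions concrete, here is a minimal Python sketch of an ordinary random Fourier feature map for an RBF kernel. It is not the paper's Fastfood construction: Fastfood replaces the dense Gaussian matrix below with products of structured matrices to reach the O(m log d) cost, and the paper additionally learns properties of groups of these frequencies.

import numpy as np

def random_fourier_features(X, m, lengthscale, rng):
    # Map inputs X (n x d) to m features whose inner products approximate an RBF kernel.
    n, d = X.shape
    W = rng.standard_normal((d, m)) / lengthscale   # spectral frequencies drawn from the kernel's spectral density
    b = rng.uniform(0.0, 2.0 * np.pi, size=m)
    return np.sqrt(2.0 / m) * np.cos(X @ W + b)

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 3))
Phi = random_fourier_features(X, m=2048, lengthscale=1.0, rng=rng)
K_approx = Phi @ Phi.T   # approximates exp(-||x - x'||^2 / (2 * lengthscale^2))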
A new approach is given for the implementation of boundary conditions used in solving the Mukhanov-Sasaki equation in the context of inflation. The familiar quantization procedure is reviewed, along with a discussion of where one might expect deviations from the standard approach to arise. The proposed method introduces a (model-dependent) fitting function for the z''/z and a''/a terms in the Mukhanov-Sasaki equation for scalar and tensor modes, and imposes the boundary conditions at a finite conformal time. As an example, we employ a fitting function and compute the spectral index, along with its running, for a specific inflationary model whose background equations are analytically solvable. The observational upper bound on the tensor-to-scalar ratio is used to constrain the parameters of the boundary conditions in the tensor sector as well. An overview of the generalization of this method is also discussed.
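As a schematic illustration of what imposing boundary conditions at a finite conformal time involves, the following Python sketch integrates a single scalar mode of the Mukhanov-Sasaki equation, v'' + (k^2 - z''/z) v = 0, from Bunch-Davies-type initial data at tau_i. This is a generic toy setup using the exact de Sitter form z''/z = 2/tau^2 as a placeholder, not the paper's fitting function or parameter choices.

import numpy as np
from scipy.integrate import solve_ivp

def zpp_over_z(tau):
    # Placeholder for the (model-dependent) fitting function; exact de Sitter form here.
    return 2.0 / tau**2

def ms_rhs(tau, y, k):
    v, dv = y
    return [dv, -(k**2 - zpp_over_z(tau)) * v]

k = 0.01
tau_i, tau_f = -1.0e3, -1.0e-2                      # conformal time, from deep inside the horizon outward
v0 = np.exp(-1j * k * tau_i) / np.sqrt(2.0 * k)     # Bunch-Davies mode function at the finite start time
dv0 = -1j * k * v0

sol = solve_ivp(ms_rhs, (tau_i, tau_f), [v0, dv0], args=(k,), rtol=1e-8, atol=1e-10)
# |v|^2 at the end of integration; dividing by z^2 and multiplying by k^3/(2 pi^2)
# would give the curvature power spectrum contribution of this mode.
print(np.abs(sol.y[0, -1]) ** 2)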
