
Fundamental Limitations of Pixel Based Image Deconvolution in Radio Astronomy

Publication date: 2010
Field: Physics
Language: English





Deconvolution is essential for radio interferometric imaging to produce science-quality data because of the finite sampling of the Fourier plane. Most deconvolution algorithms are based on CLEAN, which uses a grid of image pixels, or clean components. A critical matter in this process is the selection of the pixel size for optimal deconvolution results. As a rule of thumb, the pixel size is chosen to be smaller than the resolution dictated by the interferometer. For images consisting of unresolved (or point-like) sources, this approach yields optimal results. However, for sources that are not point-like, in particular for partially resolved sources, the selection of the right pixel size is still an open issue. In this paper, we investigate the limitations of pixelization in deconvolving extended sources. In particular, we pursue the use of orthonormal basis functions to model extended sources, yielding better results than clean components.
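The pixel-based CLEAN procedure the abstract refers to can be sketched in a few lines: repeatedly locate the brightest residual pixel and subtract a scaled, shifted point spread function, accumulating delta-function clean components. This is an illustrative toy sketch, not the authors' implementation; the PSF subtraction loop, array sizes, and stopping criterion are simplifying assumptions:

```python
import numpy as np

def hogbom_clean(dirty, psf, gain=0.1, niter=100, threshold=1e-3):
    """Toy Hogbom-style CLEAN: subtract scaled, shifted copies of the
    PSF at the brightest residual pixel. Assumes `psf` is peak-normalised
    with its peak at the centre of the array."""
    residual = dirty.copy()
    model = np.zeros_like(dirty)
    half = psf.shape[0] // 2
    for _ in range(niter):
        # find the brightest residual pixel (the next clean component)
        y, x = np.unravel_index(np.argmax(np.abs(residual)), residual.shape)
        peak = residual[y, x]
        if abs(peak) < threshold:
            break
        model[y, x] += gain * peak
        # subtract the PSF centred on (y, x), clipped to the image edges
        for dy in range(psf.shape[0]):
            for dx in range(psf.shape[1]):
                iy, ix = y + dy - half, x + dx - half
                if 0 <= iy < residual.shape[0] and 0 <= ix < residual.shape[1]:
                    residual[iy, ix] -= gain * peak * psf[dy, dx]
    return model, residual
```

With a delta-like PSF and a single point source, the accumulated model converges to the source flux, which is exactly the regime where per-pixel components are optimal; extended emission requires many overlapping components, which is the limitation the paper addresses.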



Related research

Compressive sampling is a new paradigm for sampling, based on the sparseness of signals or signal representations. It is much less restrictive than Nyquist-Shannon sampling theory and thus explains and systematises the widespread experience that methods such as Hogbom CLEAN can violate the Nyquist-Shannon sampling requirements. In this paper, a CS-based deconvolution method for extended sources is introduced. This method can reconstruct both point sources and extended sources, using the isotropic undecimated wavelet transform as the basis for the reconstruction step. We compare this CS-based deconvolution method with two CLEAN-based deconvolution methods: Hogbom CLEAN and multiscale CLEAN. The new method shows the best performance in deconvolving extended sources for both uniform and natural weighting of the sampled visibilities. Both visual and numerical results of the comparison are provided.
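The CS reconstruction step described above amounts to sparsity-regularised deconvolution. A minimal sketch, assuming an identity sparsity basis in place of the isotropic undecimated wavelet transform actually used in the paper, is iterative soft thresholding (ISTA); the step size, penalty, and iteration count are illustrative assumptions:

```python
import numpy as np

def ista_deconvolve(dirty, psf_fft, lam=0.01, step=0.5, niter=200):
    """Illustrative ISTA solver for sparse deconvolution in the image
    domain. `psf_fft` is the 2-D FFT of the PSF, so convolution with the
    PSF is a pointwise product in the Fourier domain."""
    x = np.zeros_like(dirty)
    for _ in range(niter):
        # gradient of 0.5 * ||dirty - psf * x||^2 with respect to x
        resid = dirty - np.real(np.fft.ifft2(np.fft.fft2(x) * psf_fft))
        grad = -np.real(np.fft.ifft2(np.fft.fft2(resid) * np.conj(psf_fft)))
        x = x - step * grad
        # soft thresholding promotes sparsity in the chosen basis
        x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)
    return x
```

Replacing the identity basis with a wavelet transform (thresholding wavelet coefficients instead of pixels) recovers the extended-source behaviour the paper reports.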
Sarod Yatawatta (2011)
In order to produce high dynamic range images in radio interferometry, bright extended sources need to be removed with minimal error. However, this is not a trivial task because the Fourier plane is sampled only at a finite number of points. The ensuing deconvolution problem has been solved in many ways, mainly by algorithms based on CLEAN. However, such algorithms that use image pixels as basis functions have inherent limitations, and by using an orthonormal basis that spans the whole image, we can overcome them. The construction of such an orthonormal basis involves fine tuning of many free parameters that define the basis functions, and the optimal basis for a given problem (or a given extended source) is not guaranteed. In this paper, we discuss the use of generalized prolate spheroidal wave functions as a basis. Given the geometry (or the region of interest) of an extended source and the sampling points on the visibility plane, we can construct the optimal basis to model the source. Not only does this give us the minimum number of basis functions required, but it also minimizes artifacts outside the region of interest.
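The idea of replacing per-pixel clean components with an orthonormal basis spanning the image can be illustrated with a simple projection. Here a separable cosine basis stands in for the generalized prolate spheroidal wave functions discussed in the paper; the basis choice, image size, and mode count are assumptions for illustration only:

```python
import numpy as np

def fit_orthonormal_basis(image, nbasis):
    """Model an image with a small set of orthonormal basis functions
    instead of per-pixel components. Separable cosine modes are built as
    columns and orthonormalised with a QR factorisation; the model is the
    orthogonal projection of the image onto their span."""
    n = image.shape[0]
    u = np.arange(n)
    cols = []
    for k in range(nbasis):
        for l in range(nbasis):
            # separable 2-D cosine mode, flattened to a column vector
            mode = np.cos(np.pi * k * (u[:, None] + 0.5) / n) * \
                   np.cos(np.pi * l * (u[None, :] + 0.5) / n)
            cols.append(mode.ravel())
    basis, _ = np.linalg.qr(np.array(cols).T)   # orthonormal columns
    coeffs = basis.T @ image.ravel()            # projection coefficients
    model = (basis @ coeffs).reshape(image.shape)
    return model, coeffs
```

A smooth extended source is captured by a few dozen coefficients, whereas a pixel grid of the same fidelity would need hundreds of clean components; the paper's contribution is constructing the basis that is optimal for a given source geometry and visibility sampling.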
K. Hagino, K. Negishi, K. Oono (2019)
We have been developing the X-ray silicon-on-insulator (SOI) pixel sensor called XRPIX for future astrophysical satellites. XRPIX is a monolithic active pixel sensor consisting of a high-resistivity Si sensor, a thin SiO$_2$ insulator, and CMOS pixel circuits that utilize SOI technology. Since XRPIX is capable of event-driven readouts, it can achieve a timing resolution better than $\sim 10~\mu\mathrm{s}$, which enables low-background observation by adopting the anti-coincidence technique. One of the major issues in the development of XRPIX is the electrical interference between the sensor layer and the circuit layer, which causes nonuniform detection efficiency at the pixel boundaries. In order to reduce the interference, we introduce a Double-SOI (D-SOI) structure, in which a thin Si layer (middle Si) is added to the insulator layer of the SOI structure. In this structure, the middle Si layer works as an electrical shield to decouple the sensor layer and the circuit layer. We measured the detector response of the XRPIX with the D-SOI structure at KEK. We irradiated the X-ray beam collimated with a $4~\mu\mathrm{m}$-diameter pinhole, and scanned the device with a $6~\mu\mathrm{m}$ pitch, which is 1/6 of the pixel size. In this paper, we present the improvement in the uniformity of the detection efficiency in D-SOI sensors, and discuss the detailed X-ray response and its physical origins.
A substantial amount of important scientific information is contained within astronomical data at the submillimeter and far-infrared (FIR) wavelengths, including information regarding dusty galaxies, galaxy clusters, and star-forming regions; however, these wavelengths are among the least-explored fields in astronomy because of the technological difficulties involved in such research. Over the past 20 years, considerable efforts have been devoted to developing submillimeter- and millimeter-wavelength astronomical instruments and telescopes. The number of detectors is an important property of such instruments and is the subject of the current study. Future telescopes will require as many as hundreds of thousands of detectors to meet the necessary requirements in terms of the field of view, scan speed, and resolution. A large pixel count is one benefit of the development of multiplexable detectors that use kinetic inductance detector (KID) technology. This paper presents the development of all aspects of the readout electronics for a KID-based instrument, which enabled one of the largest detector counts achieved to date in submillimeter-/millimeter-wavelength imaging arrays: a total of 2304 detectors. The work presented in this paper was implemented in the MUltiwavelength Submillimeter Inductance Camera (MUSIC), an instrument operated at the Caltech Submillimeter Observatory (CSO) between 2013 and 2015.
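The principle behind reading out thousands of KIDs on a single line is frequency-domain multiplexing: each resonator is probed with its own tone, and the readout electronics separate the tones digitally. A toy sketch of the channelisation step follows; the real MUSIC electronics use FPGA-based processing, and the sample rate, tone placement, and FFT scaling here are illustrative assumptions:

```python
import numpy as np

def channelize(timestream, fs, tone_freqs, nfft):
    """Recover per-tone amplitudes of a multiplexed tone comb by FFT
    channelisation. Assumes the tones sit exactly on FFT bin centres,
    i.e. each frequency is an integer multiple of fs / nfft."""
    # one FFT frame; scale so a unit-amplitude cosine reads as 1.0
    spec = np.fft.rfft(timestream[:nfft]) / (nfft / 2)
    bins = np.round(np.array(tone_freqs) * nfft / fs).astype(int)
    return np.abs(spec[bins])
```

In a real system each resonator's frequency shift modulates its tone, so tracking the per-bin amplitude and phase over successive frames yields one detector timestream per tone.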
A community meeting on the topic of Radio Astronomy in the LSST Era was hosted by the National Radio Astronomy Observatory in Charlottesville, VA (2013 May 6--8). The focus of the workshop was on time domain radio astronomy and sky surveys. For the time domain, the extent to which radio and visible wavelength observations are required to understand several classes of transients was stressed, but there are also classes of radio transients for which no visible wavelength counterpart is yet known, providing an opportunity for discovery. From the LSST perspective, the LSST is expected to generate as many as 1 million alerts nightly, which will require even more selective specification and identification of the classes and characteristics of transients that can warrant follow-up, at radio or any other wavelength. The LSST will also conduct a deep survey of the sky, producing a catalog expected to contain over 38 billion objects. Deep radio wavelength sky surveys will also be conducted on a comparable time scale, and radio and visible wavelength observations are part of the multi-wavelength approach needed to classify and understand these objects. Radio wavelengths are valuable because they are unaffected by dust obscuration and, for galaxies, contain contributions both from star formation and from active galactic nuclei. The workshop touched on several other topics on which there was consensus, including the placement of other LSST Deep Drilling Fields, the inter-operability of software tools, and the challenge of filtering and exploiting the LSST data stream. There were also topics for which there was insufficient time for full discussion or for which no consensus was reached, including the procedures for following up on LSST observations and the nature of future support for researchers desiring to use LSST data products.
