
Correct estimate of the probability of false detection of the matched filter in weak-signal detection problems. III (Peak distribution method versus the Gumbel distribution method)

Added by Paola Andreani
Publication date: 2019
Field: Physics
Language: English





The matched filter (MF) is one of the main tools for detecting signals from known sources embedded in noise. In the Gaussian case the noise is assumed to be the realization of a Gaussian random field (GRF). The most important property of the MF, the maximization of the probability of detection subject to a constant probability of false detection or false alarm (PFA), makes it one of the most popular techniques. However, the MF technique relies upon a priori knowledge of the number and positions of the searched signals in the GRF, which are usually not available. A typical way out is to assume that the position of a signal coincides with one of the peaks in the matched-filtered data. A detection is claimed when the probability that a given peak is due only to the noise (i.e. the PFA) is smaller than a prefixed threshold. In this case the probability density function (PDF) of the peak amplitudes, which differs from the Gaussian one, has to be used for the computation of the PFA. Moreover, the probability that a detection is false depends on the number of peaks present in the filtered GRF: the greater the number of peaks, the higher the probability that some noise peaks exceed the detection threshold. If this is not taken into account, the PFA can be severely underestimated. Many of the solutions proposed for this problem are non-parametric and hence unable to exploit all the available information. This limitation has been overcome by means of two efficient parametric approaches: one is based on the PDF of the peak amplitudes of a smooth and isotropic GRF, whereas the other uses the Gumbel distribution (the asymptotic PDF of the corresponding extreme). Simulations and ALMA maps show that, although the two methods produce almost identical results, the first is more flexible and allows us to check the reliability of the detection procedure.
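The close agreement between the two methods has a simple origin. If the filtered field contains $N_p$ peaks whose amplitudes are idealised as independent draws with cumulative distribution $F_{\rm pk}$ (an idealisation; the papers work with the exact peak PDF of a smooth isotropic GRF), then, taking the largest peak as the test statistic,

$$\mathrm{PFA}(u) \;=\; 1 - \Pr\Big(\max_i z_i \le u\Big) \;\approx\; 1 - \big[F_{\rm pk}(u)\big]^{N_p},$$

and extreme-value theory states that $[F_{\rm pk}(u)]^{N_p}$ tends to the Gumbel form $\exp\!\big[-e^{-(u-\mu)/\beta}\big]$ as $N_p \to \infty$, with location $\mu$ and scale $\beta$ set by $F_{\rm pk}$ and $N_p$. The Gumbel method is therefore the large-$N_p$ limit of the peak-distribution method.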



Related research

The detection reliability of weak signals is a critical issue in many astronomical contexts and may have severe consequences for determining number counts and luminosity functions, but also for optimising the use of telescope time in follow-up observations. Because of its optimal properties, one of the most popular and widely used detection techniques is the matched filter (MF). This is a linear filter designed to maximise the detectability of a signal of known structure that is buried in additive Gaussian random noise. In this work we show that in the very common situation where the number and positions of the searched signals within a data sequence (e.g. an emission line in a spectrum) or an image (e.g. a point source in an interferometric map) are unknown, this technique, when applied in its standard form, may severely underestimate the probability of false detection. This is because the correct use of the MF relies upon a priori knowledge of the position of the signal of interest. In the absence of this information, the statistical significance of features that are actually noise is overestimated, and detections are claimed that are actually spurious. For this reason, we present an alternative method of computing the probability of false detection that is based on the probability density function (PDF) of the peaks of a random field. It is able to provide a correct estimate of the probability of false detection for the one-, two-, and three-dimensional cases. We apply this technique to a real two-dimensional interferometric map obtained with ALMA.
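The underestimation described above is easy to reproduce numerically. The following minimal Python sketch (an illustration with assumed beam width, field size, and threshold, not the authors' code) matched-filters pure Gaussian noise, collects the peak amplitudes, and compares the naive Gaussian PFA at a threshold u with the per-peak estimate and with the probability that the field maximum exceeds u, the latter also via a Gumbel fit:

```python
# Minimal sketch (illustrative parameters, not the authors' code): why the
# naive Gaussian PFA of a matched-filter peak underestimates the false-alarm
# probability when the number of noise peaks is ignored.
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.stats import norm, gumbel_r

rng = np.random.default_rng(42)
n, sigma_beam, n_sims, u = 4096, 3.0, 2000, 4.0   # assumed field size, beam, threshold

def peak_amplitudes(x):
    """Amplitudes of the local maxima of a 1-D sequence (interior points only)."""
    c = x[1:-1]
    return c[(c > x[:-2]) & (c > x[2:])]

peaks, maxima = [], []
for _ in range(n_sims):
    f = gaussian_filter1d(rng.standard_normal(n), sigma_beam)  # matched-filtered pure noise
    f /= f.std()                                               # normalise to unit variance
    p = peak_amplitudes(f)
    peaks.append(p)
    maxima.append(p.max())
peaks, maxima = np.concatenate(peaks), np.asarray(maxima)

loc, scale = gumbel_r.fit(maxima)   # Gumbel: asymptotic PDF of the extreme
print(f"naive Gaussian PFA, P(z > u)     : {norm.sf(u):.2e}")
print(f"per-peak PFA, P(peak > u)        : {np.mean(peaks > u):.2e}")
print(f"field-level PFA, P(max > u) (MC) : {np.mean(maxima > u):.2e}")
print(f"field-level PFA (Gumbel fit)     : {gumbel_r.sf(u, loc, scale):.2e}")
```

With these settings the per-peak and field-level estimates come out substantially larger than the naive Gaussian value, which is exactly the underestimation the abstract describes.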
The matched filter (MF) is one of the most popular and reliable techniques for detecting signals of known structure whose amplitude is smaller than the level of the contaminating noise. Under the assumption of stationary Gaussian noise, the MF maximizes the probability of detection subject to a constant probability of false detection or false alarm (PFA). This property relies upon a priori knowledge of the positions of the searched signals, which is usually not available. Recently, it has been shown that when applied in its standard form, the MF may severely underestimate the PFA. As a consequence, the statistical significance of features that belong to the noise is overestimated and the resulting detections are actually spurious. For this reason, an alternative method of computing the PFA has been proposed that is based on the probability density function (PDF) of the peaks of an isotropic Gaussian random field. In this paper we further develop this method. In particular, we discuss the statistical meaning of the PFA and show that, although useful as a preliminary step in a detection procedure, it is not able to quantify the actual reliability of a specific detection. For this reason, a new quantity called the specific probability of false alarm (SPFA) is introduced, which is able to carry out this assessment. We show how this method works in targeted simulations and apply it to a few interferometric maps taken with the Atacama Large Millimeter/submillimeter Array (ALMA) and the Australia Telescope Compact Array (ATCA). We select a few potential new point sources and assign an accurate detection reliability to these sources.
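One standard way to build such a per-detection quantity, shown here only as an illustration of the idea and not necessarily the exact definition of the SPFA used in the paper, is through the order statistics of the peak amplitudes: if a map contains $N_p$ noise peaks with (approximately independent) amplitude CDF $F_{\rm pk}$, the probability that the $k$-th brightest peak reaches an amplitude of at least $u$ under the noise-only hypothesis is

$$\Pr\big(z_{(k)} \ge u\big) \;=\; \sum_{j=k}^{N_p} \binom{N_p}{j}\,\big[1-F_{\rm pk}(u)\big]^{j}\,\big[F_{\rm pk}(u)\big]^{N_p-j}.$$

Unlike the global PFA, this quantity is evaluated at the observed amplitude and rank of a specific candidate, so it measures the reliability of that particular detection.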
N. Schneider, 2015
We report the novel detection of complex high-column-density tails in the probability distribution functions (PDFs) of three high-mass star-forming regions (CepOB3, MonR2, NGC6334), obtained from dust emission observed with Herschel. The low column density range can be fit with a lognormal distribution. A first power-law tail starts above an extinction (Av) of ~6-14. It has a slope of alpha = 1.3-2 for the equivalent density profile rho ~ r^-alpha (spherical or cylindrical geometry), and is thus consistent with free-fall gravitational collapse. Above Av ~ 40, 60, and 140, we detect an excess that can be fitted by a flatter power-law tail with alpha > 2. It correlates with the central regions of the cloud (ridges/hubs), of size ~1 pc and densities above 10^4 cm^-3. This excess may be caused by physical processes that slow down collapse and reduce the flow of mass towards higher densities. Possible causes are: (1) rotation, which introduces an angular momentum barrier; (2) increasing optical depth and weaker cooling; (3) magnetic fields; (4) geometrical effects; and (5) protostellar feedback. The excess/second power-law tail is closely linked to high-mass star formation, though it does not imply a universal column density threshold for the formation of (high-mass) stars.
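The two-step fit described above (lognormal body, power-law tail) reduces, for the tail, to a linear fit in log-log space. The sketch below uses synthetic column densities and an assumed tail threshold (both illustrative, not the paper's data) and converts the measured N-PDF slope s to the equivalent radial exponent via the standard spherical-geometry relation alpha = 1 + 2/(s - 1):

```python
# Hedged sketch: fit the power-law slope of a synthetic N-PDF tail and convert
# it to an equivalent radial density exponent (rho ~ r^-alpha, spherical case).
import numpy as np

rng = np.random.default_rng(1)
# Synthetic extinctions: lognormal body plus a Pareto tail with p(Av) ~ Av^-2.5
av = np.concatenate([
    rng.lognormal(mean=np.log(3.0), sigma=0.5, size=200_000),
    6.0 * (1.0 - rng.uniform(size=5_000)) ** (-1.0 / 1.5),
])

bins = np.logspace(np.log10(av.min()), np.log10(av.max()), 80)
hist, edges = np.histogram(av, bins=bins, density=True)
centers = np.sqrt(edges[:-1] * edges[1:])      # geometric bin centres

tail = (centers > 20.0) & (hist > 0)           # assumed tail threshold in Av
s = -np.polyfit(np.log10(centers[tail]), np.log10(hist[tail]), 1)[0]
alpha = 1.0 + 2.0 / (s - 1.0)                  # standard spherical conversion
print(f"N-PDF tail slope s = {s:.2f} -> density exponent alpha = {alpha:.2f}")
```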
In recent years, neural-network-based anomaly detection methods have attracted considerable attention in the hyperspectral remote sensing domain because of their powerful reconstruction ability compared with traditional methods. However, the probability distribution statistics hidden in the latent space are not revealed by the reconstruction error alone, because the probability distribution of the anomalies is not explicitly modeled. To address this issue, this paper proposes a probability distribution representation detector (PDRD) that explores the intrinsic distribution of both the background and the anomalies in the original data for hyperspectral anomaly detection. First, we represent the hyperspectral data with multivariate Gaussian distributions from a probabilistic perspective. Then, we combine the local statistics with the obtained distributions to leverage the spatial information. Finally, the difference between the distribution of the test pixel and the average expectation of the pixels in its Chebyshev neighborhood is measured by computing a modified Wasserstein distance to acquire the detection map. We conduct experiments on four real data sets to evaluate the performance of the proposed method. Experimental results demonstrate its accuracy and efficiency compared to state-of-the-art detection methods.
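For Gaussian models the distributional comparison has a convenient closed form: the 2-Wasserstein distance between two multivariate normals. The sketch below implements this generic formula with toy data; it is not the PDRD code, and the paper's modified distance may differ in detail:

```python
# Hedged sketch: closed-form 2-Wasserstein distance between two multivariate
# Gaussians, the kind of distributional distance PDRD-style detectors compute
# between a test pixel's local model and its neighborhood average.
import numpy as np
from scipy.linalg import sqrtm

def wasserstein2_gaussian(mu1, cov1, mu2, cov2):
    """W2^2 = |mu1-mu2|^2 + Tr(cov1 + cov2 - 2*(cov2^1/2 cov1 cov2^1/2)^1/2)."""
    root2 = sqrtm(cov2)
    cross = np.real(sqrtm(root2 @ cov1 @ root2))   # discard tiny imaginary parts
    d2 = np.sum((mu1 - mu2) ** 2) + np.trace(cov1 + cov2 - 2.0 * cross)
    return float(np.sqrt(max(d2, 0.0)))

# Toy usage: background-like vs. anomalous "spectra" in a 5-band feature space
rng = np.random.default_rng(0)
background = rng.standard_normal((500, 5))
anomaly = 1.5 * rng.standard_normal((500, 5)) + 2.0
print(wasserstein2_gaussian(background.mean(0), np.cov(background.T),
                            anomaly.mean(0), np.cov(anomaly.T)))
```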
In the context of tomographic cosmic shear surveys, there exists a nulling transformation of weak-lensing observations (also called the BNT transform) that simplifies the correlation structure of tomographic cosmic shear observations and builds observables that depend only on a localised range of redshifts, and are thus independent of low-redshift/small-scale modes. This procedure makes possible accurate, from-first-principles predictions of the convergence and aperture-mass one-point distributions (PDFs). Here we explore further consequences of this transformation for the (reduced) numerical complexity of estimating the joint PDF between nulled bins, and demonstrate how to use these results to make theoretical predictions.
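For delta-function source planes the nulling construction reduces to linear algebra: choose weights over three tomographic bins so that the combined lensing efficiency vanishes at every lens distance below the nearest plane. The toy sketch below (flat space, illustrative comoving distances, not the paper's pipeline) solves for such BNT weights:

```python
# Hedged sketch of the nulling (BNT) idea for delta-function source planes:
# weights (p1, p2, p3) over three source distances chi_i chosen so that the
# combined lensing kernel vanishes for all chi below the nearest plane.
import numpy as np

chi = np.array([1000.0, 2000.0, 3000.0])   # comoving distances in Mpc (illustrative)

# Kernel for a single source plane (flat space, up to constants):
#   W_i(chi) = 1 - chi / chi_i   for chi < chi_i.
# Requiring sum_i p_i * W_i(chi) = 0 for all chi gives two linear conditions:
#   sum_i p_i = 0   and   sum_i p_i / chi_i = 0.
# Fix p1 = 1 and solve the remaining 2x2 system for (p2, p3).
A = np.array([[1.0, 1.0],
              [1.0 / chi[1], 1.0 / chi[2]]])
b = -np.array([1.0, 1.0 / chi[0]])
p2, p3 = np.linalg.solve(A, b)
p = np.array([1.0, p2, p3])

# Verify that the nulled kernel vanishes below the first source plane
test_chi = np.linspace(0.0, chi[0], 5)
kernel = sum(p_i * (1.0 - test_chi / c) for p_i, c in zip(p, chi))
print("weights:", p, " nulled kernel:", np.round(kernel, 12))
```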