
Detecting Subhalos in Strong Gravitational Lens Images with Image Segmentation

Posted by Bryan Ostdiek
Publication date: 2020
Research field: Physics
Language: English





We develop a machine learning model to detect dark substructure (subhalos) within simulated images of strongly lensed galaxies. Using the technique of image segmentation, we turn the task of identifying subhalos into a classification problem where we label each pixel in an image as coming from the main lens, a subhalo within a binned mass range, or neither. Our network is only trained on images with a single smooth lens and either zero or one subhalo near the Einstein ring. On a test set of noiseless simulated images with a single subhalo, the network is able to locate subhalos with a mass of $10^{8}\, M_{\odot}$ and place them in the correct or adjacent mass bin, effectively detecting them 97% of the time. For this test set, the network detects subhalos down to masses of $10^{6}\, M_{\odot}$ at 61% accuracy. However, noise limits the sensitivity to light subhalo masses. With 1% noise (with this level of noise, the distribution of signal-to-noise in the image pixels approximates that of images from the Hubble Space Telescope for sources with magnitude $< 20$), a subhalo with mass $10^{8.5}\, M_{\odot}$ is detected 86% of the time, while subhalos with masses of $10^{8}\, M_{\odot}$ are only detected 38% of the time. Furthermore, the model is able to generalize to new contexts it has not been trained on, such as locating multiple subhalos with varying masses, subhalos far from the Einstein ring, or more than one large smooth lens.
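As a rough illustration of the pixel-labeling setup described above, the sketch below implements a tiny encoder-decoder segmentation network in PyTorch whose output assigns each pixel to background, the main lens, or one of several subhalo mass bins. The architecture, image size, and number of mass bins are placeholder assumptions for illustration; this is not the network used in the paper.

```python
# Minimal sketch of per-pixel classification for subhalo detection.
# Assumptions (not from the paper): PyTorch, 80x80 single-band images,
# and a tiny encoder-decoder; the paper's actual architecture,
# image size, and mass binning may differ.
import torch
import torch.nn as nn

N_CLASSES = 1 + 1 + 5  # background, main lens, 5 subhalo mass bins (assumed binning)

class TinySegmenter(nn.Module):
    def __init__(self, n_classes=N_CLASSES):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # downsample
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),    # upsample
            nn.Conv2d(16, n_classes, 1),                           # per-pixel logits
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))  # shape (B, N_CLASSES, H, W)

# Training step: each pixel carries a class label, so an ordinary cross-entropy
# loss over the class axis turns subhalo detection into image segmentation.
model = TinySegmenter()
images = torch.randn(4, 1, 80, 80)                  # toy lensed images
labels = torch.randint(0, N_CLASSES, (4, 80, 80))   # toy per-pixel labels
loss = nn.CrossEntropyLoss()(model(images), labels)
loss.backward()
```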




Read also

Detecting substructure within strongly lensed images is a promising route to shed light on the nature of dark matter. It is a challenging task, which traditionally requires detailed lens modeling and source reconstruction, taking weeks to analyze each system. We use machine learning to circumvent the need for lens and source modeling and develop a method to both locate subhalos in an image and determine their mass using the technique of image segmentation. The network is trained on images with a single subhalo located near the Einstein ring. Training in this way allows the network to learn the gravitational lensing of light, and it is then able to accurately detect entire populations of substructure, even far from the Einstein ring. In images with a single subhalo and without noise, the network detects subhalos of mass $10^6\, M_{\odot}$ 62% of the time, and 78% of these detected subhalos are predicted in the correct mass bin. The detection accuracy increases for heavier masses. When random noise at the level of 1% of the mean brightness of the image is included (a realistic approximation of HST for sources brighter than magnitude 20), the network loses sensitivity to the low-mass subhalos; with noise, the $10^{8.5}\, M_{\odot}$ subhalos are detected 86% of the time, but the $10^8\, M_{\odot}$ subhalos are only detected 38% of the time. The false-positive rate is around 2 false subhalos per 100 images with and without noise, coming mostly from masses $\leq 10^8\, M_{\odot}$. With good accuracy and a low false-positive rate, counting the number of pixels assigned to each subhalo class over multiple images allows for a measurement of the subhalo mass function (SMF). When measured over five mass bins from $10^8\, M_{\odot}$ to $10^{10}\, M_{\odot}$, the SMF slope is recovered with an error of 14.2 (16.3)% for 10 images, and this improves to 2.1 (2.6)% for 1000 images without (with 1%) noise.
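The final step above, turning per-pixel class counts into a subhalo mass function slope, can be sketched as follows. The conversion from pixel counts to subhalo counts (via an assumed detection footprint), the class indexing, and the five log-spaced bins from $10^8\, M_{\odot}$ to $10^{10}\, M_{\odot}$ are illustrative assumptions, not the paper's exact procedure.

```python
# Sketch of recovering a subhalo mass-function slope from segmentation maps.
# Assumptions (illustrative only): pixel counts per mass class are converted to
# subhalo counts by dividing by a typical detection footprint, and the SMF slope
# is the power-law index of dN/dM fit over five mass bins from 1e8 to 1e10 Msun.
import numpy as np

def smf_slope(seg_maps, n_bins=5, first_class=2, pixels_per_subhalo=25.0):
    """seg_maps: integer class maps, shape (n_images, H, W).
    Classes first_class .. first_class+n_bins-1 are the subhalo mass bins."""
    counts = np.array([
        (seg_maps == first_class + b).sum() / pixels_per_subhalo
        for b in range(n_bins)
    ])
    edges = np.logspace(8, 10, n_bins + 1)          # assumed bin edges in Msun
    centers = np.sqrt(edges[:-1] * edges[1:])       # geometric bin centers
    dndm = counts / np.diff(edges)
    keep = dndm > 0
    # Power-law fit: log dN/dM = slope * log M + const.
    slope, _ = np.polyfit(np.log10(centers[keep]), np.log10(dndm[keep]), 1)
    return slope

# Toy usage with random maps (real maps would come from the trained network):
rng = np.random.default_rng(0)
maps = rng.integers(0, 7, size=(10, 80, 80))
print(smf_slope(maps))
```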
We report on the initial results obtained with an image convolution/deconvolution computer code that we developed and used to study the image formation capabilities of the solar gravitational lens (SGL). Although the SGL of a spherical Sun creates a greatly blurred image, knowledge of the SGL's point-spread function (PSF) makes it possible to reconstruct the original image and remove the blur by way of deconvolution. We discuss the deconvolution process, which can be implemented either with direct matrix inversion or with the Fourier quotient method. We observe that the process introduces a "penalty" in the form of a reduction in the signal-to-noise ratio (SNR) of a recovered image, compared to the SNR at which the blurred image data is collected. We estimate the magnitude of this penalty using an analytical approach and confirm the results with a series of numerical simulations. We find that the penalty is substantially reduced when the spacing between image samples is large compared to the telescope aperture. The penalty can be further reduced with suitable noise filtering, which can yield ${\cal O}(10)$ or better improvement for low-quality imaging data. Our results confirm that it is possible to use the SGL for imaging purposes. We offer insights on the data collection and image processing strategies that could yield a detailed image of an exoplanet within image data collection times that are consistent with the duration of a realistic space mission.
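A minimal sketch of the Fourier quotient method mentioned above, using a toy Gaussian PSF and a Wiener-style regularization floor to limit the noise-amplification penalty; the SGL's actual PSF and the paper's filtering scheme are more involved.

```python
# Sketch of PSF deconvolution by the Fourier quotient method.
# Assumptions: a toy Gaussian PSF stands in for the SGL point-spread function,
# and a small Wiener-style floor regularizes the quotient.
import numpy as np

def fourier_quotient_deconvolve(blurred, psf, eps=1e-3):
    """Recover an image from a blurred observation given the PSF.
    eps damps frequencies where the PSF has little power (controls the SNR penalty)."""
    H = np.fft.fft2(np.fft.ifftshift(psf), s=blurred.shape)
    B = np.fft.fft2(blurred)
    recovered = np.fft.ifft2(B * np.conj(H) / (np.abs(H) ** 2 + eps))
    return np.real(recovered)

# Toy demonstration: blur a point source with a Gaussian PSF, then deconvolve.
n = 64
y, x = np.mgrid[:n, :n] - n // 2
psf = np.exp(-(x**2 + y**2) / (2 * 3.0**2))
psf /= psf.sum()
truth = np.zeros((n, n))
truth[n // 2, n // 2] = 1.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(truth) * np.fft.fft2(np.fft.ifftshift(psf))))
recovered = fourier_quotient_deconvolve(blurred, psf)
print(np.unravel_index(recovered.argmax(), recovered.shape))  # peak returns to (32, 32)
```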
We study image formation with the solar gravitational lens (SGL). We consider a point source that is positioned at a large but finite distance from the Sun. We assume that an optical telescope is positioned in the image plane, in the focal region of the SGL. We model the telescope as a convex lens and evaluate the intensity distribution produced by the electromagnetic field that forms the image in the focal plane of the convex lens. We first investigate the case when the telescope is located on the optical axis of the SGL or in its immediate vicinity. This is the region of strong interference where the SGL forms an image of a distant source, which is our primary interest. We derive analytic expressions that describe the progression of the image from an Einstein ring corresponding to an on-axis telescope position, to the case of two bright spots when the telescope is positioned some distance away from the optical axis. At greater distances from the optical axis, in the region of weak interference and that of geometric optics, we recover expressions that are familiar from models of gravitational microlensing, but developed here using a wave-optical treatment. We discuss applications of the results for imaging and spectroscopy of exoplanets with the SGL.
Subhalos at subgalactic scales ($M \lesssim 10^7\, M_\odot$ or $k \gtrsim 10^3\,{\rm Mpc}^{-1}$) are pristine test beds of dark matter (DM). However, they are too small, diffuse, and dark to be visible in any existing observations. In this paper, we develop a complete formalism for weak and strong diffractive lensing, which can be used to probe such subhalos with chirping gravitational waves (GWs). We also show that Navarro-Frenk-White (NFW) subhalos in this mass range can indeed be detected individually, albeit at a rate of ${\cal O}(10)$ or less per year at BBO and other detectors, limited by small merger rates and the large required SNR $\gtrsim 1/\gamma(r_0) \sim 10^3$. Detection becomes possible because NFW scale radii $r_0$ are of the right size, comparable to the GW Fresnel length $r_F$, and, unlike all existing probes, this lensing is more sensitive to lighter subhalos. Remarkably, our formalism further reveals that the frequency dependence of weak lensing (which is actually the detectable effect) is due to the shear $\gamma$ at $r_F$. Not only is this consistent with an approximate scaling invariance, but it also offers a new way to measure the mass profile at the successively smaller scale of the chirping $r_F \propto f^{-1/2}$. Meanwhile, strong diffraction, which produces a blurred Einstein ring, has a universal frequency dependence, allowing only detections. These results are further demonstrated through semianalytic discussions of power-law profiles. Our developments for a single lens can be generalized and will make diffractive lensing a more concrete and promising tool for probing DM and small-scale structures.
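To make the $r_F \propto f^{-1/2}$ scaling concrete, the snippet below evaluates a Fresnel length as a chirping signal sweeps upward in frequency. The normalization uses one common convention, $r_F \approx \sqrt{(c/2\pi f)\, D_L D_{LS}/D_S}$, and the distances are placeholders; the paper's exact definition may differ by order-unity factors.

```python
# Illustration of the chirping Fresnel length r_F ~ f^(-1/2).
# The normalization is one common convention and an assumption here;
# the distances below are arbitrary placeholders, not values from the paper.
import numpy as np

C = 299_792_458.0   # speed of light, m/s
MPC = 3.0857e22     # meters per Mpc

def fresnel_length_mpc(f_hz, d_l_mpc=1000.0, d_ls_mpc=1000.0, d_s_mpc=2000.0):
    d_eff = d_l_mpc * d_ls_mpc / d_s_mpc * MPC      # effective distance in meters
    return np.sqrt(C / (2 * np.pi * f_hz) * d_eff) / MPC

# As the signal chirps to higher frequency, r_F shrinks as f^(-1/2),
# sweeping across a range of lens scale radii r_0.
for f in (0.01, 0.1, 1.0):
    print(f, fresnel_length_mpc(f))
```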
Large-scale imaging surveys will increase the number of galaxy-scale strong lensing candidates by perhaps three orders of magnitude beyond the number known today. Finding these rare objects will require picking them out of at least tens of millions of images, and deriving scientific results from them will require quantifying the efficiency and bias of any search method. To achieve these objectives, automated methods must be developed. Because gravitational lenses are rare objects, reducing false positives will be particularly important. We present a description and results of an open gravitational lens finding challenge. Participants were asked to classify 100,000 candidate objects as to whether they were gravitational lenses or not, with the goal of developing better automated methods for finding lenses in large data sets. A variety of methods were used, including visual inspection, arc and ring finders, support vector machines (SVM), and convolutional neural networks (CNN). We find that many of the methods will easily be fast enough to analyse the anticipated data flow. In test data, several methods were able to identify upwards of half the lenses, after applying some thresholds on lens characteristics such as lensed image brightness, size, or contrast with the lens galaxy, without making a single false-positive identification. This is significantly better than humans were able to do by direct inspection. (abridged)
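For context on the CNN entries in the challenge, the sketch below shows the skeleton of a binary lens/non-lens classifier; the input size, layer choices, and decision threshold are illustrative assumptions rather than any participant's actual model.

```python
# Minimal sketch of a CNN lens/non-lens classifier of the kind entered in the
# challenge. Assumptions: PyTorch, 101x101 single-band cutouts, a toy two-layer
# network; the challenge entries used a variety of larger models.
import torch
import torch.nn as nn

class LensFinder(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)   # single logit: lens vs non-lens

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

# Scoring candidates reduces to thresholding the sigmoid of the logit;
# the threshold sets the trade-off between completeness and false positives.
model = LensFinder()
cutouts = torch.randn(8, 1, 101, 101)          # toy image cutouts
scores = torch.sigmoid(model(cutouts)).squeeze(1)
is_lens = scores > 0.5
```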