Strong gravitationally lensed quasars provide powerful means to study galaxy evolution and cosmology. Current and upcoming imaging surveys will contain thousands of new lensed quasars, augmenting the existing sample by at least two orders of magnitude. To find such lens systems, we built a robot, CHITAH, that hunts for lensed quasars by modeling the configuration of the multiple quasar images. Specifically, given an image of an object that might be a lensed quasar, CHITAH first disentangles the light from the supposed lens galaxy and the light from the multiple quasar images based on color information. A simple rule is designed to categorize the given object as a potential four-image (quad) or two-image (double) lensed quasar system. The configuration of the identified quasar images is subsequently modeled to classify whether the object is a lensed quasar system. We test the performance of CHITAH using simulated lens systems based on the Canada-France-Hawaii Telescope Legacy Survey. For bright quads with large image separations (with Einstein radius $r_{\rm ein}>1.1$) simulated using Gaussian point-spread functions, a high true-positive rate (TPR) of $\sim$90% and a low false-positive rate of $\sim$3% show that this is a promising approach to searching for new lens systems. We obtain a high TPR for lens systems with $r_{\rm ein}\gtrsim0.5$, so the performance of CHITAH is set by the seeing. We further feed a known gravitational lens system, COSMOS 5921$+$0638, to CHITAH, and demonstrate that CHITAH is able to classify this real gravitational lens system successfully. Our newly built CHITAH is omnivorous and can hunt in any ground-based imaging survey.
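The quad/double categorization step can be illustrated with a toy peak count over a color-separated quasar-image map. This is only a minimal sketch: the actual CHITAH rule is not specified in the abstract, and the threshold, grid, and peak definition below are illustrative assumptions.

```python
import numpy as np

def count_image_peaks(img, threshold):
    """Toy point-image counter: local maxima above a flux threshold.
    A stand-in for a quad-vs-double categorization rule (assumed, not CHITAH's)."""
    peaks = 0
    for i in range(1, img.shape[0] - 1):
        for j in range(1, img.shape[1] - 1):
            # A pixel counts as a peak if it exceeds the threshold and
            # dominates its 3x3 neighborhood.
            if img[i, j] > threshold and img[i, j] == img[i-1:i+2, j-1:j+2].max():
                peaks += 1
    return peaks

# Synthetic example: two point images -> candidate "double"; four -> candidate "quad".
img = np.zeros((16, 16))
img[4, 4] = img[11, 12] = 1.0
n = count_image_peaks(img, threshold=0.5)
label = "quad" if n == 4 else "double" if n == 2 else "neither"
```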
A new approximation method for inverting Poisson's equation is presented for a continuously distributed, finite-sized source in an unbound domain. The advantage of this image multipole method arises from its ability to push the computational error close to the computational domain boundary, leaving the source region almost error free. This contrasts with the modified Green's function method, which has small but finite errors in the source region. Moreover, the approximation method also offers a systematic way to greatly reduce the errors at the expense of somewhat greater computational effort. Numerical examples of three-dimensional and two-dimensional cases are given to illustrate the advantage of the new method.
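As background for what such methods improve upon, the standard spectral inversion of Poisson's equation on a periodic box can be sketched in a few lines. The grid size and source below are illustrative assumptions; a truly unbound (isolated) domain requires zero-padding, a modified Green's function, or the image multipole treatment discussed in the abstract.

```python
import numpy as np

n = 64
x = np.arange(n) / n
X, Y = np.meshgrid(x, x, indexing="ij")
rho = np.sin(2 * np.pi * X) * np.sin(2 * np.pi * Y)   # smooth periodic source

k = 2 * np.pi * np.fft.fftfreq(n, d=1.0 / n)          # angular wavenumbers
KX, KY = np.meshgrid(k, k, indexing="ij")
k2 = KX**2 + KY**2
k2[0, 0] = 1.0                                        # avoid division by zero at k = 0

# laplacian(phi) = rho  =>  phi_hat = -rho_hat / k^2 for each Fourier mode
phi_hat = -np.fft.fft2(rho) / k2
phi_hat[0, 0] = 0.0                                   # fix the free constant (zero-mean potential)
phi = np.real(np.fft.ifft2(phi_hat))

phi_exact = -rho / (8 * np.pi**2)                     # analytic solution for this source
err = np.max(np.abs(phi - phi_exact))
```

Because the source is a single resolved Fourier mode, the spectral solve recovers the analytic potential to machine precision.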
The conventional cold, particle interpretation of dark matter (CDM) still lacks laboratory support and struggles with the basic properties of common dwarf galaxies, which have surprisingly uniform central masses and shallow density profiles. In contrast, galaxies predicted by CDM extend to much lower masses, with steeper, singular profiles. This tension motivates cold, wavelike dark matter ($\psi$DM) composed of a non-relativistic Bose-Einstein condensate, in which the uncertainty principle counters gravity below a Jeans scale. Here we achieve the first cosmological simulations of this quantum state at unprecedentedly high resolution, capable of resolving dwarf galaxies, with only one free parameter, $\mathbf{m_B}$, the boson mass. We demonstrate that the large-scale structure of this $\psi$DM simulation is indistinguishable from CDM, as desired, but differs radically inside galaxies. Connected filaments and collapsed haloes form a large interference network, with gravitationally self-bound solitonic cores inside every galaxy surrounded by extended haloes of fluctuating density granules. These results allow us to determine $\mathbf{m_B=(8.1^{+1.6}_{-1.7})\times 10^{-23}~\mathrm{eV}}$ using stellar phase-space distributions in dwarf spheroidal galaxies. Denser, more massive solitons are predicted for Milky Way sized galaxies, providing a substantial seed to help explain early spheroid formation. Suppression of small structures means the onset of galaxy formation for $\psi$DM is substantially delayed relative to CDM, appearing at $\mathbf{z\lesssim 13}$ in our simulations.
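The kpc-scale structure such a boson mass implies can be checked with a back-of-the-envelope de Broglie wavelength. The $\sim$100 km/s velocity scale below is an assumed, typical galactic value for illustration, not a number from the abstract.

```python
# Order-of-magnitude de Broglie wavelength lambda = h / (m v) for a psi-DM boson
# of m_B ~ 8.1e-23 eV at an assumed galactic velocity of ~100 km/s.
h = 6.62607015e-34         # Planck constant, J s
eV_to_kg = 1.78266192e-36  # mass of 1 eV/c^2 in kg
m = 8.1e-23 * eV_to_kg     # boson mass in kg
v = 1.0e5                  # assumed velocity, 100 km/s in m/s

lam_m = h / (m * v)        # de Broglie wavelength in meters
lam_kpc = lam_m / 3.0857e19  # convert to kiloparsecs
```

The result comes out at roughly a kiloparsec, consistent with the abstract's picture of kpc-scale solitonic cores inside dwarf galaxies.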
Aims: We performed a detailed photometric analysis of the lensed system UM673 (Q0142-100) and an analysis of the tentative lens models. Methods: High-resolution adaptive optics images of UM673 taken with the Subaru telescope in the H band were examined. We also analysed the J, H and K-band observational data of UM673 obtained with the 1.3 m telescope at the CTIO observatory. Results: We present photometry of quasar components A and B of UM673, the lens, and the nearby bright galaxy using H-band observational data obtained with the Subaru telescope. Based on the CTIO observations of UM673, we also present J- and H-band photometry and estimates of the J, H and K-band flux ratios between the two UM673 components in recent epochs. The near-infrared fluxes of the A and B components of UM673 and their published optical fluxes are analysed to measure the extinction properties of the lensing galaxy. We estimate the extinction-corrected flux ratio between components A and B to be about 2.14 mag. We discuss lens models for the UM673 system constrained with the positions of the UM673 components, their flux ratio, and the previously measured time delay.
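The quoted extinction-corrected magnitude difference of about 2.14 mag between components A and B translates into a linear flux ratio via the standard Pogson relation, $f_A/f_B = 10^{0.4\,\Delta m}$:

```python
# Convert the extinction-corrected magnitude difference between UM673 A and B
# into a linear flux ratio using the Pogson relation.
delta_m = 2.14                       # magnitude difference from the abstract
flux_ratio = 10 ** (0.4 * delta_m)   # f_A / f_B ~ 7.2
```

So component A is roughly seven times brighter than B after correcting for extinction in the lensing galaxy.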
(Abridged) In tandem with observational datasets, we utilize realistic mock catalogs, based on a semi-analytic galaxy formation model and constructed specifically for the Pan-STARRS1 Medium Deep Surveys, to assess the performance of the Probability Friends-of-Friends (PFOF; Liu et al.) group finder, and we aim to develop a grouping optimization method applicable to surveys like Pan-STARRS1. Producing mock PFOF group catalogs under a variety of photometric-redshift accuracies ($\sigma_{\Delta z/(1+z_s)}$), we find that catalog purities and completenesses from ``good'' ($\sigma_{\Delta z/(1+z_s)} \sim 0.01$) to ``poor'' ($\sigma_{\Delta z/(1+z_s)} \sim 0.07$) photo-zs gradually degrade from 77% and 70% to 52% and 47%, respectively. To avoid model dependency of the mock when working with observational data, we apply a ``subset optimization'' approach, using spectroscopic-redshift group data from the target field to train the group finder for application to that field, as an alternative method of grouping optimization. We demonstrate this approach using spectroscopically identified groups as the training set, i.e. zCOSMOS groups for PFOF searches within Pan-STARRS1 Medium Deep Field04 (PS1MD04) and DEEP2 EGS groups for searches in PS1MD07. We ultimately apply PFOF to four datasets spanning the photo-z uncertainty range from 0.01 to 0.06 in order to quantify the dependence of group recovery performance on photo-z accuracy. We find that purities and completenesses calculated from observational datasets broadly agree with their mock analogues. Further tests of the PFOF algorithm are performed via matches to X-ray clusters identified within the PS1MD04 and COSMOS footprints. Across over a decade in group mass, we find that PFOF groups match ~85% of X-ray clusters in COSMOS and PS1MD04, though at a lower statistical significance in the latter.
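The friends-of-friends linking at the heart of PFOF can be illustrated with a plain, non-probabilistic FOF pass: points closer than a linking length are joined into the same group by transitive closure. PFOF's probabilistic extension, which weights links by photometric-redshift uncertainty, is omitted here; the positions and linking length below are made up for illustration.

```python
import numpy as np
from collections import deque

def fof_groups(pos, b):
    """Plain friends-of-friends: link points within distance b and grow
    groups by breadth-first search. Returns an integer group label per point."""
    n = len(pos)
    labels = -np.ones(n, dtype=int)
    g = 0
    for i in range(n):
        if labels[i] >= 0:
            continue                      # already assigned to a group
        labels[i] = g
        queue = deque([i])
        while queue:
            j = queue.popleft()
            d = np.linalg.norm(pos - pos[j], axis=1)
            for k in np.where((d < b) & (labels < 0))[0]:
                labels[k] = g             # friend of a friend joins the group
                queue.append(k)
        g += 1
    return labels

# Toy example: two close pairs and one isolated point -> three groups.
pos = np.array([[0.0, 0.0], [0.5, 0.0], [5.0, 5.0], [5.4, 5.0], [10.0, 10.0]])
labels = fof_groups(pos, b=1.0)
```

A production group finder would use a spatial tree rather than the O(n^2) distance scan, but the linking logic is the same.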
We present the newly developed code GAMER (GPU-accelerated Adaptive MEsh Refinement code), which adopts a novel approach to improve the performance of adaptive mesh refinement (AMR) astrophysical simulations by a large factor through use of the graphics processing unit (GPU). The AMR implementation is based on a hierarchy of grid patches with an oct-tree data structure. We adopt a three-dimensional relaxing TVD scheme for the hydrodynamic solver and a multi-level relaxation scheme for the Poisson solver. Both solvers have been implemented on the GPU, by which hundreds of patches can be advanced in parallel. The computational overhead associated with data transfer between CPU and GPU is carefully reduced by utilizing the GPU's capability for asynchronous memory copies, and the computing time for the ghost-zone values of each patch is hidden by overlapping it with the GPU computations. We demonstrate the accuracy of the code by performing several standard astrophysical test problems. GAMER is a parallel code that can be run on a multi-GPU cluster system. We measure the performance of the code with purely baryonic cosmological simulations on different hardware configurations, in which detailed timing analyses compare the computations with and without GPU acceleration. Maximum speed-up factors of 12.19 and 10.47 are demonstrated using 1 GPU with 4096^3 effective resolution and 16 GPUs with 8192^3 effective resolution, respectively.
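The relaxation idea behind such a Poisson solver can be sketched at a single level with Jacobi iteration, the simplest smoother used in multi-level relaxation schemes. This is only an illustrative CPU sketch, not GAMER's GPU implementation; the grid size, source, and iteration counts are assumptions.

```python
import numpy as np

def jacobi_poisson(rho, h, iters):
    """Single-level Jacobi relaxation for laplacian(phi) = rho on a uniform
    grid with zero Dirichlet boundaries (a toy multigrid smoother)."""
    phi = np.zeros_like(rho)
    for _ in range(iters):
        # Each interior point relaxes toward the average of its neighbors,
        # offset by the local source term.
        phi[1:-1, 1:-1] = 0.25 * (phi[2:, 1:-1] + phi[:-2, 1:-1]
                                  + phi[1:-1, 2:] + phi[1:-1, :-2]
                                  - h * h * rho[1:-1, 1:-1])
    return phi

def residual(phi, rho, h):
    """Max-norm residual |laplacian(phi) - rho| over interior points."""
    lap = (phi[2:, 1:-1] + phi[:-2, 1:-1] + phi[1:-1, 2:] + phi[1:-1, :-2]
           - 4.0 * phi[1:-1, 1:-1]) / (h * h)
    return np.max(np.abs(lap - rho[1:-1, 1:-1]))

n = 33
h = 1.0 / (n - 1)
rho = np.zeros((n, n))
rho[n // 2, n // 2] = 1.0 / h**2         # point-like source
r_few = residual(jacobi_poisson(rho, h, 10), rho, h)
r_many = residual(jacobi_poisson(rho, h, 500), rho, h)
```

Jacobi alone converges slowly for smooth error modes, which is exactly why multi-level (multigrid-style) schemes like the one in GAMER relax on a hierarchy of coarser grids.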