
Optimizing Cadences with Realistic Light Curve Filtering for Serendipitous Kilonova Discovery with Vera Rubin Observatory

Added by Igor Andreoni
Publication date: 2021
Field: Physics
Language: English





Current and future optical and near-infrared wide-field surveys have the potential to find kilonovae, the optical and infrared counterparts to neutron star mergers, independently of gravitational-wave or high-energy gamma-ray burst triggers. The ability to discover fast and faint transients such as kilonovae largely depends on the area observed, the depth of those observations, the number of re-visits per field in a given time frame, and the filters adopted by the survey; it also depends on the ability to perform rapid follow-up observations to confirm the nature of the transients. In this work, we assess kilonova detectability in existing simulations of the LSST strategy for the Vera C. Rubin Wide Fast Deep survey, with a focus on comparing rolling to baseline cadences. Although currently available cadences can enable the detection of more than 300 kilonovae out to 1400 Mpc over the ten-year survey, we can expect only 3-32 kilonovae similar to GW170817 to be recognizable as fast-evolving transients. We also explore the detectability of kilonovae over the plausible parameter space, focusing on viewing angle and ejecta masses. We find that observations in redder izy bands are crucial for identification of nearby (within 300 Mpc) kilonovae, which could be spectroscopically classified more easily than more distant sources. Rubin's potential for serendipitous kilonova discovery could be increased by efficiency gains from individual 30 s exposures (as opposed to 2x15 s snap pairs), by the addition of red-band observations coupled with same-night observations in g- or r-bands, and possibly by further development of a new rolling-cadence strategy.
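The detectability distances quoted above follow from the distance modulus. A minimal sketch, assuming illustrative values not taken from the paper (a GW170817-like kilonova peaking near absolute magnitude M ~ -16, and a rough LSST single-visit 5-sigma depth of ~24.5 mag):

```python
import math

def apparent_mag(abs_mag, distance_mpc):
    """Distance modulus: m = M + 5*log10(d / 10 pc)."""
    d_pc = distance_mpc * 1e6
    return abs_mag + 5 * math.log10(d_pc) - 5

# Assumed, illustrative values (not from the paper):
M_PEAK = -16.0      # peak absolute magnitude of a GW170817-like kilonova
LSST_DEPTH = 24.5   # approximate single-visit 5-sigma depth

for d in (100, 300, 1400):
    m = apparent_mag(M_PEAK, d)
    status = "above" if m < LSST_DEPTH else "near/below"
    print(f"{d:5d} Mpc -> m = {m:.1f} ({status} single-visit depth)")
```

Under these assumptions a GW170817-like event sits comfortably above the single-visit depth at 300 Mpc but right at the limit at 1400 Mpc, consistent with the paper's point that only the nearer events are bright enough to be recognized and classified.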




Related research

The commissioning team for the Vera C. Rubin Observatory is planning a set of engineering and science verification observations with the Legacy Survey of Space and Time (LSST) commissioning camera and then the Rubin Observatory LSST Camera. The time frame for these observations is not yet fixed, and the commissioning team will have flexibility in selecting fields to observe. In this document, the Dark Energy Science Collaboration (DESC) Commissioning Working Group presents a prioritized list of target fields appropriate for testing various aspects of DESC-relevant science performance, grouped by season for visibility from Rubin Observatory at Cerro Pachon. Our recommended fields include Deep-Drilling fields (DDFs) to full LSST depth for photo-$z$ and shape calibration purposes, HST imaging fields to full depth for deblending studies, and a $\sim$200 square degree area to 1-year depth in several filters for higher-level validation of wide-area science cases for DESC. We also anticipate that commissioning observations will be needed for template building for transient science over a broad RA range. We include detailed descriptions of our recommended fields along with associated references. We are optimistic that this document will continue to be useful during LSST operations, as it provides a comprehensive list of overlapping datasets and the references describing them.
T. Petrushevska (2020)
Strong lensing by galaxy clusters can be used to significantly expand the survey reach, thus allowing observation of magnified high-redshift supernovae that otherwise would remain undetected. Strong lensing can also provide multiple images of the galaxies that lie behind the clusters. Detection of strongly lensed Type Ia supernovae (SNe Ia) is especially useful because of their standardizable brightness, as they can be used to improve either cluster lensing models or independent measurements of cosmological parameters. The Hubble constant is of particular interest given the discrepancy between its values measured with different approaches. Here, we explore the feasibility of the Vera C. Rubin Observatory Legacy Survey of Space and Time (LSST) detecting strongly lensed SNe in the field of five galaxy clusters (Abell 1689 and the Hubble Frontier Fields clusters) that have well-studied lensing models. Considering the 88 systems composed of 268 individual multiple images in the five cluster fields, we find that the LSST will be sensitive to SNe Ia (SNe IIP) exploding in 41 (23) galaxy images. The redshifts of these galaxies lie in the range $1.01 < z < 3.05$. During its 10 years of operation, LSST is expected to detect $0.2\pm0.1$ SNe Ia and $0.9\pm0.3$ core-collapse SNe. However, as LSST will observe many more massive galaxy clusters, the actual expectations are likely higher. We stress the importance of having an additional observing program for photometric and spectroscopic follow-up of the strongly lensed SNe detected by LSST.
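The "expanded survey reach" from lensing can be sketched with two standard relations: magnification $\mu$ brightens a source by $2.5\log_{10}\mu$ magnitudes, and since observed flux scales with $\mu$, the limiting luminosity distance grows as $\sqrt{\mu}$. A minimal illustration (values are generic, not from this study):

```python
import math

def magnification_boost(mu):
    """Brightening in magnitudes from a lensing magnification mu (flux ratio)."""
    return 2.5 * math.log10(mu)

def distance_reach_gain(mu):
    """Factor by which the limiting luminosity distance grows: flux ~ 1/d^2,
    so a flux boost of mu extends the reach by sqrt(mu)."""
    return math.sqrt(mu)

for mu in (2, 10, 50):
    print(f"mu = {mu:3d}: {magnification_boost(mu):.2f} mag brighter, "
          f"reach x{distance_reach_gain(mu):.2f}")
```

For example, a magnification of 10 brightens a background supernova by 2.5 mag, which is why clusters let LSST see SNe at redshifts it could not otherwise probe.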
Survey telescopes such as the Vera C. Rubin Observatory will increase the number of observed supernovae (SNe) by an order of magnitude, discovering millions of events; however, it is impossible to spectroscopically confirm the class of all the SNe discovered. Thus, photometric classification is crucial, but its accuracy depends on the not-yet-finalized observing strategy of Rubin Observatory's Legacy Survey of Space and Time (LSST). We quantitatively analyze the impact of the LSST observing strategy on SNe classification using the simulated multi-band light curves from the Photometric LSST Astronomical Time-Series Classification Challenge (PLAsTiCC). First, we augment the simulated training set to be representative of the photometric redshift distribution per supernova class, the cadence of observations, and the flux uncertainty distribution of the test set. Then we build a classifier using the photometric transient classification library snmachine, based on wavelet features obtained from Gaussian process fits, yielding performance similar to the winning PLAsTiCC entry. We study the classification performance for SNe with different properties within a single simulated observing strategy. We find that season length is an important factor, with light curves of 150 days yielding the highest classification performance. Cadence is also crucial for SNe classification; events with a median inter-night gap of <3.5 days yield higher performance. Interestingly, we find that large gaps (>10 days) in light curve observations do not impact classification performance as long as sufficient observations are available on either side, due to the effectiveness of the Gaussian process interpolation. This analysis is the first exploration of the impact of observing strategy on photometric supernova classification with LSST.
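The "median inter-night gap" metric used above is straightforward to compute from a list of visit times: collapse same-night visits, then take the median spacing between consecutive observing nights. A minimal sketch (the MJD values below are made up for illustration):

```python
from statistics import median

def median_internight_gap(mjds):
    """Median gap in days between consecutive distinct observing nights.

    Same-night visits are collapsed by truncating each MJD to its
    integer night; returns None if fewer than two nights are present.
    """
    nights = sorted({int(m) for m in mjds})
    gaps = [b - a for a, b in zip(nights, nights[1:])]
    return median(gaps) if gaps else None

# Hypothetical visit times (MJD): two visits on the first night,
# then single visits on later nights.
visits = [59000.1, 59000.3, 59002.2, 59005.0, 59006.1, 59010.4]
print(median_internight_gap(visits))  # nights 59000,59002,59005,59006,59010
```

With gaps of 2, 3, 1, and 4 nights, the median is 2.5 days, below the <3.5-day threshold the study associates with higher classification performance.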
Neutron star mergers offer unique conditions for the creation of the heavy elements and additionally provide a testbed for our understanding of this synthesis, known as the $r$-process. We have performed dynamical nucleosynthesis calculations and identified a single isotope, $^{254}$Cf, which has a particularly high impact on the brightness of electromagnetic transients associated with mergers on timescales of 15 to 250 days. This is due to the anomalously long half-life of this isotope and the efficiency of fission thermalization compared to other nuclear channels. We estimate the fission fragment yield of this nucleus and outline the astrophysical conditions under which $^{254}$Cf has the greatest impact on the light curve. Future mid-infrared observations that find bright emission during this regime could indicate the production of actinides in the merger.
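The "anomalously long half-life" argument can be made concrete with simple exponential decay. A minimal sketch, assuming the commonly quoted spontaneous-fission half-life of $^{254}$Cf of roughly 60.5 days (an assumption for illustration, not a value taken from this abstract):

```python
import math

CF254_HALF_LIFE_D = 60.5  # days; approximate 254Cf half-life (assumed here)

def decay_fraction(t_days, half_life=CF254_HALF_LIFE_D):
    """Fraction of the initial 254Cf abundance remaining after t_days,
    from N(t) = N0 * exp(-ln(2) * t / t_half)."""
    return math.exp(-math.log(2) * t_days / half_life)

for t in (15, 60, 150, 250):
    print(f"t = {t:3d} d: {decay_fraction(t):.3f} of initial 254Cf remains")
```

Because a sizeable fraction of the isotope survives past 100 days, its fission heating can still be depositing energy when most other $r$-process channels have faded, which is why it can dominate the late-time light curve.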
We present a rapid analytic framework for predicting kilonova light curves following neutron star (NS) mergers, where the main input parameters are binary-based properties measurable by gravitational wave detectors (chirp mass and mass ratio, orbital inclination) and properties dependent on the nuclear equation of state (tidal deformability, maximum NS mass). This enables synthesis of a kilonova sample for any NS source population, or determination of the observing depth needed to detect a live kilonova given gravitational wave source parameters in low latency. We validate this code, implemented in the public MOSFiT package, by fitting it to GW170817. A Bayes factor analysis overwhelmingly ($B > 10^{10}$) favours a luminosity source in addition to the lanthanide-poor dynamical ejecta during the first day. This is well fit by a shock-heated cocoon model, though differences in the ejecta structure, opacity or nuclear heating rate cannot be ruled out as alternatives. The emission thereafter is dominated by a lanthanide-rich viscous wind. We find the mass ratio of the binary is $q = 0.92\pm0.07$ (90% credible interval). We place tight constraints on the maximum stable NS mass, $M_{\rm TOV} = 2.17^{+0.08}_{-0.11}$ M$_\odot$. For a uniform prior in tidal deformability, the radius of a 1.4 M$_\odot$ NS is $R_{1.4} \sim 10.7$ km. Re-weighting with a prior based on equations of state that support our credible range in $M_{\rm TOV}$, we derive a final measurement $R_{1.4} = 11.06^{+1.01}_{-0.98}$ km. Applying our code to the second gravitationally-detected neutron star merger, GW190425, we estimate that an associated kilonova would have been fainter (by $\sim 0.7$ mag at one day post-merger) and declined faster than GW170817, underlining the importance of tuning follow-up strategies individually for each GW-detected NS merger.
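The binary parameters this framework takes as input are linked by the standard chirp-mass relation, $\mathcal{M}_c = (m_1 m_2)^{3/5}/(m_1+m_2)^{1/5}$, which is what gravitational-wave detectors measure most precisely. A minimal sketch of the forward relation and its inversion given a mass ratio $q = m_2/m_1 \le 1$ (this is the textbook relation, not code from MOSFiT):

```python
def chirp_mass(m1, m2):
    """Chirp mass M_c = (m1*m2)**(3/5) / (m1+m2)**(1/5), solar masses."""
    return (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2

def masses_from_chirp_and_q(mc, q):
    """Invert the chirp-mass relation given mass ratio q = m2/m1 <= 1.

    Substituting m2 = q*m1 gives M_c = q**0.6 * m1 / (1+q)**0.2,
    hence m1 = M_c * (1+q)**0.2 / q**0.6.
    """
    m1 = mc * (1 + q) ** 0.2 / q ** 0.6
    return m1, q * m1

# Hypothetical example: a 1.4 + 1.3 solar-mass binary round-trips exactly.
mc = chirp_mass(1.4, 1.3)
print(masses_from_chirp_and_q(mc, 1.3 / 1.4))
```

In low latency a detector pipeline reports $\mathcal{M}_c$ and $q$; inverting them to component masses as above is the first step before mapping to ejecta masses and a predicted kilonova light curve.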