The discovery of a kilonova (KN) associated with the Advanced LIGO (aLIGO)/Virgo event GW170817 opens up new avenues of multi-messenger astrophysics. Here, using realistic simulations, we provide estimates of the number of KNe that could be found in data from past, present, and future surveys without a gravitational-wave trigger. For the simulation, we construct a spectral time-series model based on the DES-GW multi-band light curve of the single known KN event, and we use an average of binary neutron star (BNS) rates from past studies, $10^3\,\mathrm{Gpc^{-3}\,yr^{-1}}$, consistent with the one event found so far. Examining past and current datasets from transient surveys, the number of KNe we expect to find for ASAS-SN, SDSS, PS1, SNLS, DES, and SMT is between 0 and 0.3. We predict the number of detections per future survey to be: 8.3 for ATLAS, 10.6 for ZTF, 5.5 for the LSST Deep Drilling fields, 69 for the LSST Wide-Fast-Deep survey, and 16.0 for WFIRST. The maximum redshift of KNe discovered for each survey is $z = 0.8$ for WFIRST, $z = 0.25$ for LSST, and $z = 0.04$ for ZTF and ATLAS. For the LSST survey, we also provide contamination estimates from Type Ia and core-collapse supernovae: after light-curve and template-matching requirements, we estimate a background of just two events. More broadly, we stress that future transient surveys should consider how to optimize their search strategies to improve their KN detection efficiency, and that similar analyses should inform GW follow-up programs.
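As a rough back-of-the-envelope illustration of how such rate-based expectations scale (a sketch of ours, not the paper's simulation), the expected count follows from the volumetric rate times the surveyed comoving volume, search duration, sky coverage, and detection efficiency. All survey parameters below are hypothetical placeholders:

```python
import numpy as np

H0 = 70.0   # Hubble constant, km/s/Mpc (assumed)
C = 3.0e5   # speed of light, km/s

def comoving_volume_gpc3(z_max):
    """Low-z Euclidean approximation: V = (4/3) * pi * (c * z / H0)^3."""
    d_mpc = C * z_max / H0                            # Hubble-law distance, Mpc
    return (4.0 / 3.0) * np.pi * (d_mpc / 1e3) ** 3   # convert Mpc -> Gpc, cube

rate = 1e3  # BNS merger rate quoted in the abstract, Gpc^-3 yr^-1

# Hypothetical survey parameters, for illustration only
z_max, duration_yr, efficiency, sky_fraction = 0.04, 3.0, 0.5, 0.25

# (1 + z) factor crudely accounts for cosmological time dilation
n_expected = (rate * comoving_volume_gpc3(z_max) * duration_yr
              * efficiency * sky_fraction / (1.0 + z_max))
print(f"Expected kilonova detections: {n_expected:.1f}")
```

With these placeholder numbers the sketch returns a few detections, the same order of magnitude as the shallow-survey predictions quoted above; the paper's estimates instead come from full light-curve simulations against each survey's actual cadence and depth.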
The future of astronomy is inextricably entwined with the care and feeding of astronomical data products. Community standards such as FITS and NDF have been instrumental in the success of numerous astronomy projects. Their very success challenges us to entertain pragmatic strategies for adapting and evolving the standards to meet the aggressive data-handling requirements of facilities now being designed and built. We discuss characteristics that have made standards successful in the past, as well as desirable features for the future, followed by an open discussion.
Specialized computational chemistry packages have permanently reshaped the landscape of chemical and materials science by providing tools that support and guide experimental efforts and predict atomistic and electronic properties. In this regard, electronic structure packages have played a special role by using first-principles-driven methodologies to model complex chemical and materials processes. Over the last few decades, the rapid development of computing technologies and the tremendous increase in computational power have offered a unique chance to study complex transformations using sophisticated and predictive many-body techniques that describe the correlated behavior of electrons in molecular and condensed-phase systems at different levels of theory. In enabling these simulations, novel parallel algorithms have taken advantage of these computational resources to address the polynomial scaling of electronic structure methods. In this paper, we briefly review the NWChem computational chemistry suite, including its history, design principles, parallel tools, current capabilities, outreach, and outlook.
We review the history of space missions in Korea, focusing on the field of astronomy and astrophysics. For each mission, the scientific motivation and achievements are reviewed together with some technical details of the program, including the mission schedule. This review covers ongoing and currently approved missions as well as some planned ones. Within the admitted limitations of the authors' perspectives, some comments on the future direction of the space program for astronomy and astrophysics in Korea are made at the end of this review.
Experimentally, and mysteriously, the concentration of quasiparticles in a gapped superconductor at low temperatures invariably far exceeds its equilibrium value. We study the dynamics of localized quasiparticles in superconductors with a spatially fluctuating gap edge. The competition between phonon-induced quasiparticle recombination and generation by a weak non-equilibrium agent results in an upper bound for the concentration that explains the mystery.
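Schematically (our illustration, not the paper's detailed treatment), such a bound can be seen from the standard rate equation for a quasiparticle density $n$ with bimolecular recombination coefficient $r$ and weak generation rate $g$: setting $dn/dt = g - r n^2 = 0$ gives a steady state $n_{\rm ss} = \sqrt{g/r}$, which for any nonzero $g$ dominates the exponentially small equilibrium density $n_{\rm eq} \propto e^{-\Delta/T}$ at temperatures $T$ well below the gap $\Delta$.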
Anomaly mining is an important problem that finds numerous applications in various real-world domains such as environmental monitoring, cybersecurity, finance, healthcare, and medicine, to name a few. In this article, I focus on two areas: (1) point-cloud and (2) graph-based anomaly mining. I aim to present a broad view of each area and discuss the classes of main research problems, recent trends, and future directions. I conclude with key takeaways and overarching open problems.