
Radio Astronomy in the LSST Era

Added by Joseph Lazio
Publication date: 2014
Field: Physics
Language: English





A community meeting on the topic of Radio Astronomy in the LSST Era was hosted by the National Radio Astronomy Observatory in Charlottesville, VA (2013 May 6--8). The focus of the workshop was on time-domain radio astronomy and sky surveys. For the time domain, the extent to which radio and visible-wavelength observations are required to understand several classes of transients was stressed, but there are also classes of radio transients for which no visible-wavelength counterpart is yet known, providing an opportunity for discovery. From the LSST perspective, the LSST is expected to generate as many as 1 million alerts nightly, which will require even more selective specification and identification of the classes and characteristics of transients that warrant follow-up, at radio or any other wavelength. The LSST will also conduct a deep survey of the sky, producing a catalog expected to contain over 38 billion objects. Deep radio-wavelength sky surveys will also be conducted on a comparable time scale, and radio and visible-wavelength observations are part of the multi-wavelength approach needed to classify and understand these objects. Radio wavelengths are valuable because they are unaffected by dust obscuration and, for galaxies, contain contributions from both star formation and active galactic nuclei. The workshop touched on several other topics on which there was consensus, including the placement of other LSST Deep Drilling Fields, the interoperability of software tools, and the challenge of filtering and exploiting the LSST data stream. There were also topics for which there was insufficient time for full discussion or for which no consensus was reached, including the procedures for following up on LSST observations and the nature of future support for researchers desiring to use LSST data products.





Joan R. Najita (2019)
How should we invest our available resources to best sustain astronomy's track record of discovery, established over the past few decades? Two strong hints come from (1) our history of astronomical discoveries and (2) literature citation patterns that reveal how discovery and development activities in science are strong functions of team size. These argue that progress in astronomy hinges on support for a diversity of research efforts in terms of team size, research tools and platforms, and investment strategies that encourage risk taking. These ideas also encourage us to examine the implications of the trend toward big team science and survey science in astronomy over the past few decades, and to reconsider the common assumption that progress in astronomy always means trading up to bigger apertures and facilities. Instead, the considerations above argue that we need a balanced set of investments in small- to large-scale initiatives and team sizes both large and small. Large teams tend to develop existing ideas, whereas small teams are more likely to fuel the future with disruptive discoveries. While large facilities are the value investments that are guaranteed to produce discoveries, smaller facilities are the growth stocks that are likely to deliver the biggest science bang per buck, sometimes with outsize returns. One way to foster the risk taking that fuels discovery is to increase observing opportunity, i.e., create more observing nights and facilitate the exploration of science-ready data.
Among the most stringent constraints on the dark matter annihilation cross section are those derived from observations of dwarf galaxies by the Fermi Gamma-Ray Space Telescope. As current (e.g., Dark Energy Survey, DES) and future (Large Synoptic Survey Telescope, LSST) optical imaging surveys discover more of the Milky Way's ultra-faint satellite galaxies, they may increase Fermi's sensitivity to dark matter annihilations. In this study, we use a semi-analytic model of the Milky Way's satellite population to predict the characteristics of the dwarfs likely to be discovered by DES and LSST, and project how these discoveries will impact Fermi's sensitivity to dark matter. While we find that modest improvements are likely, the dwarf galaxies discovered by DES and LSST are unlikely to increase Fermi's sensitivity by more than a factor of ~2-4. However, this outlook may be conservative, given that our model underpredicts the number of ultra-faint galaxies with large potential annihilation signals actually discovered in the Sloan Digital Sky Survey. Our simulation-based approach, focusing on the demographics of the Milky Way satellite population, complements existing empirically based estimates.
The Laser Interferometer Space Antenna (LISA) will open three decades of the gravitational-wave (GW) spectrum between 0.1 and 100 mHz, the mHz band. This band is expected to be the richest part of the GW spectrum in types of sources, numbers of sources, signal-to-noise ratios, and discovery potential. When LISA opens the low-frequency window of the gravitational-wave spectrum, around 2034, the surge of gravitational-wave astronomy will strongly compel a subsequent mission to further explore the frequency bands of the GW spectrum that can only be accessed from space. The 2020s is the time to start developing technology and studying mission concepts for a large-scale mission to be launched in the 2040s. The mission concept would then be proposed to Astro2030. Only space-based missions can access the GW spectrum between 10 nHz and 1 Hz because of the Earth's seismic noise. This white paper surveys the science in this band and mission concepts that could accomplish that science. The proposed small-scale activity is a technology development program that would support a range of concepts and a mission concept study to choose a specific mission concept for Astro2030. In this white paper, we refer to a generic GW mission beyond LISA as bLISA.
Astronomy is entering a new era of discovery, coincident with the establishment of new facilities for observation and simulation that will routinely generate petabytes of data. While an increasing reliance on automated data analysis is anticipated, a critical role will remain for visualization-based knowledge discovery. We have investigated scientific visualization applications in astronomy through an examination of the literature published during the last two decades. We identify the two most active fields for progress - visualization of large-N particle data and spectral data cubes - discuss open areas of research, and introduce a mapping between astronomical sources of data and data representations used in general-purpose visualization tools. We discuss contributions using high-performance computing architectures (e.g., distributed processing and GPUs), collaborative astronomy visualization, the use of workflow systems to store metadata about visualization parameters, and the use of advanced interaction devices. We examine a number of issues that may be limiting the spread of scientific visualization research in astronomy and identify six grand challenges for scientific visualization research in the Petascale Astronomy Era.
The deluge of data from time-domain surveys is rendering traditional human-guided data collection and inference techniques impractical. We propose a novel approach for conducting data collection for science inference in the era of massive large-scale surveys that uses value-based metrics to autonomously strategize and coordinate follow-up in real time. We demonstrate the underlying principles in the Recommender Engine For Intelligent Transient Tracking (REFITT), which ingests live alerts from surveys and value-added inputs from data brokers to predict the future behavior of transients and design optimal data augmentation strategies given a set of scientific objectives. The prototype presented in this paper is tested on simulated Rubin Observatory Legacy Survey of Space and Time (LSST) core-collapse supernova (CC SN) light curves from the PLAsTiCC dataset. CC SNe were selected for the initial development phase as they are known to be difficult to classify, with the expectation that any learning techniques for them should be at least as effective for other transients. We demonstrate the behavior of REFITT on a random LSST night given ~32000 live CC SNe of interest. The system makes good predictions for the photometric behavior of the events and uses them to plan follow-up using a simple data-driven metric. We argue that machine-directed follow-up maximizes the scientific potential of surveys and follow-up resources by reducing downtime and bias in data collection.
