
Community Challenges in the Era of Petabyte-Scale Sky Surveys

Added by Michael Kelley
Publication date: 2020
Field: Physics
Language: English





We outline the challenges faced by the planetary science community in the era of next-generation large-scale astronomical surveys, and highlight needs that must be addressed in order for the community to maximize the quality and quantity of scientific output from archival, existing, and future surveys, while satisfying NASA's and NSF's goals.



Related Research

The field of astronomy has arrived at a turning point in terms of the size and complexity of both datasets and scientific collaboration. Commensurately, algorithms and statistical models have begun to adapt --- e.g., via the onset of artificial intelligence --- which itself presents new challenges and opportunities for growth. This white paper aims to offer guidance and ideas for how we can evolve our technical and collaborative frameworks to promote efficient algorithmic development and take advantage of opportunities for scientific discovery in the petabyte era. We discuss challenges for discovery in large and complex data sets; challenges and requirements for the next stage of development of statistical methodologies and algorithmic tool sets; how we might change our paradigms of collaboration and education; and the ethical implications of scientists' contributions to widely applicable algorithms and computational modeling. We start with six distinct recommendations that are supported by the commentary following them. This white paper is related to a larger corpus of effort that has taken place within and around the Petabytes to Science Workshops (https://petabytestoscience.github.io/).
Roger P. Deane, 2017
The past decade has seen significant advances in cm-wave VLBI extragalactic observations due to a wide range of technical successes, including the increase in processed field-of-view and bandwidth. The future inclusion of MeerKAT into global VLBI networks would provide further enhancement, particularly the dramatic sensitivity boost to >7000 km baselines. This will not be without its limitations, however, considering the incomplete overlap of the MeerKAT bands with current VLBI arrays and the small (real-time) field-of-view afforded by the phased-up MeerKAT array. We provide a brief overview of the significant contributions MeerKAT-VLBI could make, with an emphasis on the scientific output of several MeerKAT extragalactic Large Survey Projects.
Current time-domain wide-field sky surveys generally operate with few-degree-sized fields and take many individual images to cover large sky areas each night. We present the design and project status of the Evryscope ("wide-seer"), which takes a different approach: using an array of 7cm telescopes to form a single wide field of view covering every part of the accessible sky simultaneously and continuously. The Evryscope is a gigapixel-scale imager with a 9060 sq. deg. field of view and has an etendue three times larger than that of the Pan-STARRS sky survey. The system will search for transiting exoplanets around bright stars, M-dwarfs, and white dwarfs, as well as detecting microlensing events, nearby supernovae, and gamma-ray burst afterglows. We present the current project status, including an update on the Evryscope prototype telescopes we have been operating for the last three years in the Canadian High Arctic.
WFIRST is NASA's first flagship mission with pre-defined core science programs to study dark energy and perform a statistical census of wide-orbit exoplanets with a gravitational microlensing survey. Together, these programs are expected to use more than half of the prime mission observing time. Previously, only smaller, PI-led missions have had core programs that used such a large fraction of the observing time, and in many cases, the data from these PI-led missions were reserved for the PI's science team for a proprietary period that allowed the PI's team to make most of the major discoveries from the data. Such a procedure is not appropriate for a flagship mission, which should provide science opportunities to the entire astronomy community. For this reason, there will be no proprietary period for WFIRST data, but we argue that a larger effort to make WFIRST science accessible to the astronomy community is needed. We propose a plan to enhance community involvement in the WFIRST exoplanet microlensing survey in two different ways. First, we propose a set of high-level data products that will give astronomers without detailed microlensing expertise access to the statistical implications of the WFIRST exoplanet microlensing survey data. And second, we propose the formation of a WFIRST Exoplanet Microlensing Community Science Team that will open up participation in the development of the WFIRST exoplanet microlensing survey to the general astronomy community, in collaboration with the NASA-selected science team, which will have the responsibility to provide most of the high-level data products. This community science team will be open to volunteers, but members should also have the opportunity to apply for funding.
Weak lensing peak counts are a powerful statistical tool for constraining cosmological parameters. So far, this method has been applied only to surveys with relatively small areas, up to several hundred square degrees. As future surveys will provide weak lensing datasets with sizes of thousands of square degrees, the demands on the theoretical prediction of the peak statistics will become heightened. In particular, large simulations of increased cosmological volume are required. In this work, we investigate the possibility of using simulations generated with the fast Comoving-Lagrangian acceleration (COLA) method, coupled to the convergence map generator Ufalcon, for predicting the peak counts. We examine the systematics introduced by the COLA method by comparing it with a full TreePM code. We find that for a 2000 deg$^2$ survey, the systematic error is much smaller than the statistical error. This suggests that the COLA method is able to generate promising theoretical predictions for weak lensing peaks. We also examine the constraining power of various configurations of data vectors, exploring the influence of splitting the sample into tomographic bins and combining different smoothing scales. We find the combination of smoothing scales to have the most constraining power, improving the constraints on the $S_8$ amplitude parameter by at least 40% compared to a single smoothing scale, with tomography bringing only a limited increase in measurement precision.