
End to end simulators: A flexible and scalable Cloud-Based architecture. Application to High Resolution Spectrographs ESPRESSO and ELT-HIRES

Posted by: Matteo Genoni
Publication date: 2020
Research field: Physics
Paper language: English





Simulations of frames from existing and upcoming high-resolution spectrographs, targeted at high-accuracy radial-velocity measurements, are computationally demanding, both in time and space. In this paper we present an innovative approach based on both parallelization and distribution of the workload. By using custom-made NVIDIA CUDA kernels and state-of-the-art cloud-computing architectures in a Platform as a Service (PaaS) approach, we implemented a modular and scalable end-to-end simulator that is able to render synthetic frames with an accuracy of the order of a few cm/s while keeping the computational time low. We applied our approach to two spectrographs. For VLT-ESPRESSO we give a sound comparison between actual data and the simulations, showing the obtained spectral formats and the recovered instrumental profile. We also simulate data for the upcoming HIRES at the ELT and investigate the overall performance in terms of computational time and scalability against the size of the problem. In addition, we demonstrate the interface with data-reduction systems and show, in a preliminary analysis, that the simulated data can be reduced successfully with existing methods.
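To make the parallelization strategy concrete, the following minimal sketch shows the kind of custom GPU kernel such a simulator can build on: one thread per detector pixel, each accumulating the Gaussian instrumental profile of nearby monochromatic spots. It is written in Python with Numba rather than the authors' actual CUDA kernels, and every name, size, and parameter in it is an illustrative assumption, not the simulator's real interface.

    import math
    import numpy as np
    from numba import cuda

    @cuda.jit
    def render_frame(spot_x, spot_y, spot_flux, sigma, frame):
        # One thread per detector pixel; each thread sums the Gaussian
        # contribution of every monochromatic spot falling on the frame.
        ix, iy = cuda.grid(2)
        if ix >= frame.shape[1] or iy >= frame.shape[0]:
            return
        acc = 0.0
        for k in range(spot_x.size):
            dx = ix - spot_x[k]
            dy = iy - spot_y[k]
            acc += spot_flux[k] * math.exp(-(dx * dx + dy * dy) / (2.0 * sigma * sigma))
        frame[iy, ix] = acc

    # Host side: a toy 4096x4096 frame and 10,000 random spots.
    ny, nx = 4096, 4096
    frame = cuda.to_device(np.zeros((ny, nx), dtype=np.float64))
    spot_x = cuda.to_device(np.random.uniform(0, nx, 10000))
    spot_y = cuda.to_device(np.random.uniform(0, ny, 10000))
    spot_flux = cuda.to_device(np.random.uniform(0.5, 1.5, 10000))
    threads = (16, 16)
    blocks = ((nx + threads[0] - 1) // threads[0], (ny + threads[1] - 1) // threads[1])
    render_frame[blocks, threads](spot_x, spot_y, spot_flux, 1.8, frame)
    synthetic = frame.copy_to_host()

The sketch only illustrates the thread layout that makes per-pixel rendering embarrassingly parallel on a GPU; the actual simulator pairs kernels of this kind with a full optical model and with distribution of the workload across cloud nodes, as described in the abstract.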




Read also

HIRES will be the high-resolution spectrograph of the European Extremely Large Telescope at optical and near-infrared wavelengths. It consists of three fibre-fed spectrographs providing a wavelength coverage of 0.4-1.8 μm (goal 0.35-1.8 μm) at a spectral resolution of ~100,000. The fibre feeding allows HIRES to have several interchangeable observing modes, including a SCAO module and a small diffraction-limited IFU in the NIR; it will therefore be able to operate in both seeing-limited and diffraction-limited modes. ELT-HIRES has a wide range of science cases spanning nearly all areas of research in astrophysics and even fundamental physics. Some of the top science cases are the detection of biosignatures in exoplanet atmospheres, finding the fingerprints of the first generation of stars (Pop III), tests of the stability of Nature's fundamental couplings, and the direct detection of the cosmic acceleration. The HIRES consortium is composed of more than 30 institutes from 14 countries, forming a team of more than 200 scientists and engineers.
We present Espresso, an open-source, modular, extensible end-to-end neural automatic speech recognition (ASR) toolkit based on the deep learning library PyTorch and the popular neural machine translation toolkit fairseq. Espresso supports distributed training across GPUs and computing nodes, and features various decoding approaches commonly employed in ASR, including look-ahead word-based language model fusion, for which a fast, parallelized decoder is implemented. Espresso achieves state-of-the-art ASR performance on the WSJ, LibriSpeech, and Switchboard data sets among other end-to-end systems without data augmentation, and is 4--11x faster for decoding than similar systems (e.g. ESPnet).
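The look-ahead word-based language model fusion mentioned above reduces, at each decoding step, to shallow fusion of log-probabilities from the end-to-end model and the language model. The sketch below is a generic illustration of that scoring rule, not Espresso's actual decoder API; the weight and tensors are assumptions.

    import torch

    def fused_scores(asr_log_probs: torch.Tensor,
                     lm_log_probs: torch.Tensor,
                     lm_weight: float = 0.5) -> torch.Tensor:
        # Shallow fusion: add the LM log-probability, scaled by a tunable
        # weight, to the end-to-end model's next-token log-probability.
        return asr_log_probs + lm_weight * lm_log_probs

    vocab = 5000
    asr = torch.log_softmax(torch.randn(vocab), dim=-1)  # next-token scores from the E2E model
    lm = torch.log_softmax(torch.randn(vocab), dim=-1)   # next-token scores from the LM
    best = torch.argmax(fused_scores(asr, lm)).item()    # fused greedy pick for this step

In a beam-search decoder the same combination is applied to every hypothesis in the beam, which is where a fast, parallelized implementation pays off.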
The first generation of E-ELT instruments will include an optical-infrared High Resolution Spectrograph, conventionally indicated as EELT-HIRES, which will be capable of providing unique breakthroughs in the fields of exoplanets, star and planet formation, physics and evolution of stars and galaxies, cosmology, and fundamental physics. A two-year-long phase A study for EELT-HIRES has just started and will be performed by a consortium composed of institutes and organisations from Brazil, Chile, Denmark, France, Germany, Italy, Poland, Portugal, Spain, Sweden, Switzerland, and the United Kingdom. In this paper we describe the science goals and the preliminary technical concept for EELT-HIRES that will be developed during the phase A, as well as its planned development and consortium organisation during the study.
M. Frailis, M. Maris, A. Zacchei (2010)
The Level 1 of the Planck LFI Data Processing Centre (DPC) is devoted to the handling of the scientific and housekeeping telemetry. It is a critical component of the Planck ground segment, which has to strictly adhere to the project schedule in order to be ready for the launch and flight operations. In order to guarantee the quality necessary to achieve the objectives of the Planck mission, the design and development of the Level 1 software has followed the ESA Software Engineering Standards. A fundamental step in the software life cycle is the verification and validation of the software. The purpose of this work is to show an example of procedures, test development and analysis successfully applied to a key software project of an ESA mission. We present the end-to-end validation tests performed on the Level 1 of the LFI-DPC, detailing the methods used and the results obtained. Different approaches have been used to test the scientific and housekeeping data processing. Scientific data processing has been tested by injecting signals with known properties directly into the acquisition electronics, in order to generate a test dataset of real telemetry data and to reproduce nominal conditions as closely as possible. For the housekeeping (HK) telemetry processing, validation software has been developed to inject known parameter values into a set of real housekeeping packets and to compare them with the corresponding timelines generated by the Level 1. With the proposed validation and verification procedure, in which the on-board and ground processing are viewed as a single pipeline, we demonstrate that the scientific and housekeeping processing of the Planck-LFI raw data is correct and meets the project requirements.
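Conceptually, the housekeeping check described above is a sample-by-sample comparison between the injected parameter values and the timeline reconstructed by the Level 1 pipeline. The following sketch is only a hypothetical illustration of that comparison; the function name, tolerance, and test data are assumptions and not the DPC's actual validation software.

    import numpy as np

    def validate_timeline(injected: np.ndarray, produced: np.ndarray,
                          tolerance: float = 1e-6) -> bool:
        # The timeline matches if every reconstructed sample agrees with the
        # injected value to within the chosen tolerance.
        if injected.shape != produced.shape:
            return False
        return bool(np.all(np.abs(injected - produced) <= tolerance))

    injected = np.linspace(20.0, 21.0, 100)                         # e.g. a known temperature ramp
    produced = injected + np.random.normal(0, 1e-8, injected.size)  # timeline from the pipeline
    print(validate_timeline(injected, produced))                    # True: values survived processing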
Recently, the Transformer has gained success in the automatic speech recognition (ASR) field. However, it is challenging to deploy a Transformer-based end-to-end (E2E) model for online speech recognition. In this paper, we propose a Transformer-based online CTC/attention E2E ASR architecture, which contains a chunk self-attention encoder (chunk-SAE) and a monotonic truncated attention (MTA) based self-attention decoder (SAD). Firstly, the chunk-SAE splits the speech into isolated chunks. To reduce the computational cost and improve the performance, we propose the state-reuse chunk-SAE. Secondly, the MTA-based SAD truncates the speech features monotonically and performs attention on the truncated features. To support online recognition, we integrate the state-reuse chunk-SAE and the MTA-based SAD into the online CTC/attention architecture. We evaluate the proposed online models on the HKUST Mandarin ASR benchmark and achieve a 23.66% character error rate (CER) with a 320 ms latency. Our online model yields as little as 0.19% absolute CER degradation compared with the offline baseline, and achieves significant improvement over our prior work on Long Short-Term Memory (LSTM) based online E2E models.
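A minimal sketch of the chunk self-attention with state reuse described above is given below. It is a generic PyTorch illustration under assumed chunk and cache sizes, not the paper's implementation: each chunk attends over its own frames plus a bounded cache carried over from the previous chunk, which keeps latency fixed while preserving some left context.

    import torch
    import torch.nn as nn

    class StateReuseChunkAttention(nn.Module):
        def __init__(self, dim=256, heads=4, chunk=16, cache=16):
            super().__init__()
            self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
            self.chunk, self.cache = chunk, cache

        def forward(self, x):                                # x: (batch, time, dim)
            outputs, cached = [], x.new_zeros(x.size(0), 0, x.size(2))
            for start in range(0, x.size(1), self.chunk):
                cur = x[:, start:start + self.chunk]         # isolated chunk
                ctx = torch.cat([cached, cur], dim=1)        # reuse states from the previous chunk
                out, _ = self.attn(cur, ctx, ctx)            # chunk attends to cache + itself
                outputs.append(out)
                cached = ctx[:, -self.cache:].detach()       # keep only a bounded cache
            return torch.cat(outputs, dim=1)

    feats = torch.randn(2, 80, 256)                          # e.g. 80 acoustic frames
    print(StateReuseChunkAttention()(feats).shape)           # torch.Size([2, 80, 256])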