
End-to-end Deep Learning Pipeline for Microwave Kinetic Inductance Detector (MKID) Resonator Identification and Tuning

Published by: Neelay Fruitwala
Publication date: 2021
Language: English





We present the development of a machine learning based pipeline to fully automate the calibration of the frequency comb used to read out optical/IR Microwave Kinetic Inductance Detector (MKID) arrays. This process involves determining the resonant frequency and optimal drive power of every pixel (i.e. resonator) in the array, which is typically done manually. Modern optical/IR MKID arrays, such as DARKNESS (DARK-speckle Near-infrared Energy-resolving Superconducting Spectrophotometer) and MEC (MKID Exoplanet Camera), contain 10,000-20,000 pixels, making the calibration process extremely time-consuming; each 2000-pixel feedline requires 4-6 hours of manual tuning. Here we present a pipeline which uses a single convolutional neural network (CNN) to perform both resonator identification and tuning simultaneously. We find that our pipeline has performance equal to that of the manual tuning process, and requires just twelve minutes of computational time per feedline.
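The abstract does not include an implementation, but the idea of a single CNN that consumes frequency-sweep data and jointly outputs a resonator identification and an optimal drive power can be illustrated with a minimal sketch. The PyTorch model below is an assumption-laden illustration only: the input layout (I/Q traces stacked across probed drive powers), the two-head design, and all layer sizes are placeholders, not the authors' architecture.

```python
# Hypothetical sketch (not the authors' code): a single CNN that, given an
# IQ sweep window recorded at several drive powers, jointly predicts where a
# resonator sits in frequency (or that none is present) and which drive power
# is optimal. All shapes and layer sizes are illustrative assumptions.
import torch
import torch.nn as nn

class ResonatorTuningCNN(nn.Module):
    def __init__(self, n_powers=20, n_freq_bins=128):
        super().__init__()
        # Input: (batch, 2 * n_powers, n_freq_bins) -- I and Q traces stacked
        # as channels, one I/Q pair per probed drive power.
        self.features = nn.Sequential(
            nn.Conv1d(2 * n_powers, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(64, 128, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Flatten(),
        )
        feat_dim = 128 * (n_freq_bins // 4)
        # Head 1: resonant-frequency bin, plus one "no resonator" class.
        self.freq_head = nn.Linear(feat_dim, n_freq_bins + 1)
        # Head 2: optimal drive-power class.
        self.power_head = nn.Linear(feat_dim, n_powers)

    def forward(self, iq_sweep):
        h = self.features(iq_sweep)
        return self.freq_head(h), self.power_head(h)

if __name__ == "__main__":
    model = ResonatorTuningCNN()
    dummy = torch.randn(4, 2 * 20, 128)           # 4 sweep windows
    freq_logits, power_logits = model(dummy)
    print(freq_logits.shape, power_logits.shape)  # (4, 129), (4, 20)
```

A joint model of this kind shares one feature extractor between the identification and power-tuning tasks, which is one plausible way to keep per-feedline inference within the minutes-scale runtime quoted in the abstract.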

Read also

Yuanyuan Shi, Bolun Xu (2021)
This paper proposes a novel end-to-end deep learning framework that simultaneously identifies demand baselines and the incentive-based agent demand response model from net demand measurements and incentive signals. The learning framework is modularized into two modules: 1) the decision-making process of a demand response participant is represented as a differentiable optimization layer, which takes the incentive signal as input and predicts the users' response; 2) the baseline demand forecast is represented as a standard neural network model, which takes relevant features and predicts the users' baseline demand. These two intermediate predictions are integrated to form the net demand forecast. We then propose a gradient-descent approach that backpropagates the net demand forecast errors to jointly update the weights of the agent model and of the baseline demand forecast. We demonstrate the effectiveness of our approach through computational experiments with synthetic demand response traces and a large-scale real-world demand response dataset. Our results show that the approach accurately identifies the demand response model, even without any prior knowledge about the baseline demand.
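As an illustration of the joint-training idea described in this abstract, the hedged sketch below couples a neural baseline forecaster with a simplified, differentiable agent-response model and backpropagates the net-demand error through both. The closed-form clipped-linear response is a stand-in for the paper's differentiable optimization layer, and the feature dimensions and data are synthetic placeholders.

```python
# Hypothetical sketch: jointly learn a baseline-demand forecaster and an
# agent demand-response model from net-demand observations only.
import torch
import torch.nn as nn

class BaselineForecaster(nn.Module):
    """Standard neural network mapping features to baseline demand."""
    def __init__(self, n_features):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, x):
        return self.net(x).squeeze(-1)

class AgentResponse(nn.Module):
    """Simplified differentiable response: r = clip(theta * incentive, 0, r_max)."""
    def __init__(self, r_max=5.0):
        super().__init__()
        self.theta = nn.Parameter(torch.tensor(0.5))  # incentive sensitivity
        self.r_max = r_max

    def forward(self, incentive):
        return torch.clamp(self.theta * incentive, 0.0, self.r_max)

# Joint training: net demand forecast = baseline - response; the net-demand
# error updates both modules' weights simultaneously.
features = torch.randn(256, 8)              # synthetic placeholder features
incentive = torch.rand(256) * 2.0           # synthetic incentive signals
net_demand_obs = torch.randn(256) + 10.0    # synthetic net-demand measurements

baseline, agent = BaselineForecaster(8), AgentResponse()
opt = torch.optim.Adam(list(baseline.parameters()) + list(agent.parameters()), lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    net_pred = baseline(features) - agent(incentive)
    loss = nn.functional.mse_loss(net_pred, net_demand_obs)
    loss.backward()
    opt.step()
print("learned sensitivity:", agent.theta.item())
```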
We present the development of the End-to-End simulator for the SOXS instrument at the ESO-NTT 3.5-m telescope. SOXS will be a spectroscopic facility, comprising two high-efficiency spectrograph arms, able to cover the spectral range 350-2000 nm with resolving power R=4500. The E2E model simulates the propagation of photons from the scientific target of interest up to the detectors. The outputs of the simulator are synthetic frames, which will mainly be exploited for optimizing the pipeline development and possibly for assisting the alignment and integration phases in the laboratory and at the telescope. In this paper, we detail the architecture of the simulator and the computational model, which are strongly characterized by modularity and flexibility. Synthetic spectral formats, related to different seeing and observing conditions, and calibration frames to be ingested by the pipeline are also presented.
Realistic synthetic observations of theoretical source models are essential for our understanding of real observational data. In using synthetic data, one can verify the extent to which source parameters can be recovered and evaluate how various data corruption effects can be calibrated. These studies are important when proposing observations of new sources, in the characterization of the capabilities of new or upgraded instruments, and when verifying model-based theoretical predictions in a comparison with observational data. We present the SYnthetic Measurement creator for long Baseline Arrays (SYMBA), a novel synthetic data generation pipeline for Very Long Baseline Interferometry (VLBI) observations. SYMBA takes into account several realistic atmospheric, instrumental, and calibration effects. We used SYMBA to create synthetic observations for the Event Horizon Telescope (EHT), a mm VLBI array, which has recently captured the first image of a black hole shadow. After testing SYMBA with simple source and corruption models, we study the importance of including all corruption and calibration effects. Based on two example general relativistic magnetohydrodynamics (GRMHD) model images of M87, we performed case studies to assess the attainable image quality with the current and future EHT array for different weather conditions. The results show that the effects of atmospheric and instrumental corruptions on the measured visibilities are significant. Despite these effects, we demonstrate how the overall structure of the input models can be recovered robustly after performing calibration steps. With the planned addition of new stations to the EHT array, images could be reconstructed with higher angular resolution and dynamic range. In our case study, these improvements allowed for a distinction between a thermal and a non-thermal GRMHD model based on salient features in reconstructed images.
MeerKATHI is the current development name for a radio-interferometric data reduction pipeline, assembled by an international collaboration. We create a publicly available end-to-end continuum- and line-imaging pipeline for MeerKAT and other radio telescopes. We implement advanced techniques that are suitable for producing high-dynamic-range continuum images and spectroscopic data cubes. Using containerization, our pipeline is platform-independent. Furthermore, we apply a standardized approach for using a number of different advanced software suites, partly developed within our group. We aim to use distributed computing approaches throughout our pipeline to enable the user to reduce larger data sets such as those provided by radio telescopes like MeerKAT. The pipeline also delivers a set of imaging quality metrics that allow the user to efficiently assess the data quality.
Yankun Xu, Jie Yang, Shiqi Zhao (2021)
An accurate seizure prediction system enables early warnings before seizure onset in epileptic patients, which is extremely important for drug-refractory patients. Conventional seizure prediction works usually rely on features extracted from electroencephalography (EEG) recordings and classification algorithms such as regression or support vector machines (SVM) to locate the short period before seizure onset. However, such methods cannot achieve high-accuracy prediction due to the information loss of hand-crafted features and the limited classification ability of regression and SVM algorithms. In this paper, we propose an end-to-end deep learning solution using a convolutional neural network (CNN). One- and two-dimensional kernels are adopted in the early- and late-stage convolution and max-pooling layers, respectively. The proposed CNN model is evaluated on the Kaggle intracranial and CHB-MIT scalp EEG datasets. The overall sensitivity, false prediction rate, and area under the receiver operating characteristic curve reach 93.5%, 0.063/h, 0.981 and 98.8%, 0.074/h, 0.988 on the two datasets, respectively. Comparison with state-of-the-art works indicates that the proposed model achieves superior prediction performance.
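To make the early-1D / late-2D kernel design described above concrete, here is a hedged PyTorch sketch; the channel count, window length, kernel sizes, and classifier head are illustrative assumptions rather than the authors' exact model.

```python
# Hypothetical sketch: CNN for seizure prediction from multi-channel EEG.
# Early layers use 1-D kernels along the time axis only; late layers use
# 2-D kernels mixing channels and time, as described in the abstract.
import torch
import torch.nn as nn

class SeizurePredictionCNN(nn.Module):
    def __init__(self, n_channels=16, n_samples=1024):
        super().__init__()
        self.net = nn.Sequential(
            # Early stage: 1-D kernels, convolving over time only.
            nn.Conv2d(1, 16, kernel_size=(1, 7), padding=(0, 3)),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=(1, 4)),
            # Late stage: 2-D kernels across channels and time.
            nn.Conv2d(16, 32, kernel_size=(3, 3), padding=1),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=(2, 4)),
            nn.Flatten(),
        )
        feat = 32 * (n_channels // 2) * (n_samples // 16)
        self.classifier = nn.Linear(feat, 2)   # preictal vs. interictal

    def forward(self, eeg):                    # eeg: (batch, 1, channels, samples)
        return self.classifier(self.net(eeg))

model = SeizurePredictionCNN()
logits = model(torch.randn(2, 1, 16, 1024))
print(logits.shape)  # (2, 2)
```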
