
RotNet: Fast and Scalable Estimation of Stellar Rotation Periods Using Convolutional Neural Networks

Posted by J. Emmanuel Johnson
Published in 2020
Paper language: English





Magnetic activity in stars manifests as dark spots on their surfaces that modulate the brightness observed by telescopes. These light curves contain important information on stellar rotation. However, accurate estimation of rotation periods is computationally expensive due to scarce ground-truth information, noisy data, and large parameter spaces that lead to degenerate solutions. We harness the power of deep learning and successfully apply Convolutional Neural Networks to regress stellar rotation periods from Kepler light curves. Geometry-preserving time-series-to-image transformations of the light curves serve as inputs to a ResNet-18 based architecture, which is trained through transfer learning. The McQuillan catalog of published rotation periods is used as an ansatz for ground truth. We benchmark the performance of our method against a random forest regressor, a 1D CNN, and the Auto-Correlation Function (ACF) - the current standard for estimating rotation periods. Despite limiting our input to fewer data points (1k), our model yields more accurate results and runs 350 times faster than ACF on the same number of data points and 10,000 times faster than ACF on 65k data points. With only minimal feature engineering, our approach achieves impressive accuracy, motivating the application of deep learning to regress stellar parameters on an even larger scale.
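The ACF baseline the abstract benchmarks against can be sketched in a few lines. The sketch below is a generic illustration, not the authors' pipeline: it estimates the period as the lag of the strongest autocorrelation peak past the first zero crossing, and the synthetic light curve (10-day period, 30-minute cadence, 90-day baseline) is an assumption for demonstration.

```python
import numpy as np

def acf_period(flux, cadence):
    """Estimate a rotation period via the autocorrelation function
    (ACF): for a spot-modulated light curve the ACF peaks at lags
    that are multiples of the rotation period."""
    flux = flux - flux.mean()
    n = len(flux)
    # Full autocorrelation; keep non-negative lags, normalize by lag 0.
    acf = np.correlate(flux, flux, mode="full")[n - 1:]
    acf /= acf[0]
    # Skip the zero-lag peak: advance past the first sign change,
    # then take the strongest ACF peak in the first half of the lags.
    first_neg = int(np.argmax(acf < 0))
    best = first_neg + int(np.argmax(acf[first_neg: n // 2]))
    return best * cadence

# Synthetic spotted-star light curve: 10-day period, 30-min cadence.
cadence = 0.5 / 24.0             # days
t = np.arange(0, 90, cadence)    # 90 days of observations
rng = np.random.default_rng(0)
flux = np.sin(2 * np.pi * t / 10.0) + 0.1 * rng.standard_normal(t.size)
period = acf_period(flux, cadence)
```

Because the autocorrelation is computed over all lags, this is the kind of cost that grows quickly with the number of data points, which is where the paper's 350x-10,000x speedups come from.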


Read also

We used a convolutional neural network to infer stellar rotation periods from a set of synthetic light curves simulated with realistic spot evolution patterns. We convolved these simulated light curves with real TESS light curves containing minimal intrinsic astrophysical variability to allow the network to learn TESS systematics and estimate rotation periods despite them. In addition to periods, we predict uncertainties via heteroskedastic regression to estimate the credibility of the period predictions. In the most credible half of the test data, we recover 10%-accurate periods for 46% of the targets, and 20%-accurate periods for 69% of the targets. Using our trained network, we successfully recover periods of real stars with literature rotation measurements, even past the 13.7-day limit generally encountered by TESS rotation searches using conventional period-finding techniques. Our method also demonstrates resistance to half-period aliases. We present the neural network and simulated training data, and introduce the software butterpy used to synthesize the light curves using realistic star spot evolution.
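Heteroskedastic regression, as used above for the uncertainty estimates, has the network predict a per-target mean and variance and train on a Gaussian negative log-likelihood, so a prediction can trade off a large error against a large reported variance. A minimal numpy sketch of the loss (a generic formulation, not the paper's exact implementation; the toy targets are assumptions):

```python
import numpy as np

def gaussian_nll(y, mu, log_var):
    """Per-sample Gaussian negative log-likelihood for heteroskedastic
    regression: the model predicts a mean `mu` and log-variance
    `log_var` per target, so uncertain predictions down-weight
    their own squared error (at the cost of the log_var penalty)."""
    return 0.5 * (log_var + (y - mu) ** 2 / np.exp(log_var))

y  = np.array([10.0, 20.0])   # true periods (first is missed, second hit)
mu = np.array([11.0, 20.0])   # predicted periods

# A confident wrong prediction is penalized more than an uncertain
# wrong one; a confident correct prediction is rewarded.
confident = gaussian_nll(y, mu, np.log(np.array([0.1, 0.1])))
uncertain = gaussian_nll(y, mu, np.log(np.array([4.0, 4.0])))
```

Ranking test-set predictions by the predicted variance is what yields the "most credible half" quoted in the abstract.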
127 - Sercan O. Arik, Heewoo Jun, 2018
We propose the multi-head convolutional neural network (MCNN) architecture for waveform synthesis from spectrograms. Nonlinear interpolation in MCNN is employed with transposed convolution layers in parallel heads. MCNN achieves more than an order of magnitude higher compute intensity than commonly-used iterative algorithms like Griffin-Lim, yielding efficient utilization for modern multi-core processors, and very fast (more than 300x real-time) waveform synthesis. For training of MCNN, we use a large-scale speech recognition dataset and losses defined on waveforms that are related to perceptual audio quality. We demonstrate that MCNN constitutes a very promising approach for high-quality speech synthesis, without any iterative algorithms or autoregression in computations.
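The transposed convolution layers at the heart of MCNN upsample spectrogram frames toward the waveform sample rate: with stride s and kernel length k, an input of length n becomes (n - 1) * s + k samples. A minimal numpy sketch of the operation (illustrative shapes and kernel values, not the MCNN weights):

```python
import numpy as np

def conv1d_transpose(x, kernel, stride):
    """Minimal 1D transposed convolution (no padding): each input
    element scatters a scaled copy of the kernel into the output,
    spaced `stride` apart, so length n grows to (n-1)*stride + k."""
    n, k = len(x), len(kernel)
    out = np.zeros((n - 1) * stride + k)
    for i, v in enumerate(x):
        out[i * stride: i * stride + k] += v * kernel
    return out

frames = np.array([1.0, 2.0, 3.0])   # e.g. spectrogram frames
kernel = np.array([0.5, 1.0, 0.5])   # stand-in for a learned filter
wave = conv1d_transpose(frames, kernel, stride=2)
```

Stacking such layers with increasing strides is what lets the parallel heads expand a low-rate spectrogram to audio rate in one feed-forward pass, with no Griffin-Lim iterations.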
We measure rotation periods for 12151 stars in the Kepler field, based on the photometric variability caused by stellar activity. Our analysis returns stable rotation periods over at least six out of eight quarters of Kepler data. This large sample of stars enables us to study the rotation periods as a function of spectral type. We find good agreement with previous studies and vsini measurements for F, G and K stars. Combining rotation periods, B-V color, and gyrochronology relations, we find that the cool stars in our sample are predominantly younger than ~1 Gyr.
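The gyrochronology step, turning a rotation period and B-V color into an age, can be sketched with a Barnes (2007)-style relation P = a (B-V - c)^b t^n (P in days, t in Myr), inverted for t. The coefficients below are the commonly quoted Barnes (2007) fit values; treat them and the roughly solar test inputs as illustrative assumptions rather than the calibration used in the paper.

```python
import numpy as np

# Barnes (2007)-style gyrochronology coefficients (illustrative).
A, B_EXP, C, N = 0.7725, 0.601, 0.4, 0.5189

def gyro_age_myr(period_days, b_minus_v):
    """Invert P = A * (B-V - C)**B_EXP * t**N to estimate an age
    in Myr from a rotation period and B-V color."""
    f_color = A * (b_minus_v - C) ** B_EXP
    return (period_days / f_color) ** (1.0 / N)

# Roughly solar values: P ~ 25.4 d, B-V ~ 0.65 -> age of order 4 Gyr.
age = gyro_age_myr(25.4, 0.65)
```

Applied across the catalog, a relation of this form is how a period-color sample maps onto the age statement ("predominantly younger than ~1 Gyr") quoted above.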
Due to the ever-expanding volume of observed spectroscopic data from surveys such as SDSS and LAMOST, it has become important to apply artificial intelligence (AI) techniques for analysing stellar spectra to solve spectral classification and regression problems like the determination of the stellar atmospheric parameters Teff, log g, [Fe/H]. We propose an automated approach for the classification of stellar spectra in the optical region using Convolutional Neural Networks. Traditional machine learning (ML) methods with shallow architecture (usually up to 2 hidden layers) have been trained for these purposes in the past. However, deep learning methods with a larger number of hidden layers allow the use of finer details in the spectrum, which results in improved accuracy and better generalisation. Studying finer spectral signatures also enables us to determine accurate differential stellar parameters and find rare objects. We examine various machine and deep learning algorithms like Artificial Neural Networks (ANN), Random Forest (RF), and Convolutional Neural Network (CNN) to classify stellar spectra using the Jacoby Atlas, ELODIE and MILES spectral libraries as training samples. We test the performance of the trained networks on the Indo-U.S. Library of Coude Feed Stellar Spectra (CFLIB). We show that using convolutional neural networks, we are able to lower the error to 1.23 spectral sub-classes, compared to the 2 sub-classes achieved in past studies with ML approaches. We further apply the trained model to classify stellar spectra retrieved from the SDSS database with SNR>20.
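The core operation such a CNN applies to a spectrum is a learned 1D convolution that responds where a local spectral signature, e.g. an absorption line, matches its filter. A minimal numpy sketch with a hand-written second-difference filter standing in for a learned one; the toy spectrum and kernel are assumptions for illustration:

```python
import numpy as np

def conv1d_valid(x, kernel):
    """'Valid' 1D convolution (cross-correlation form): slide the
    kernel along the wavelength axis and record its response."""
    n, k = len(x), len(kernel)
    return np.array([x[i:i + k] @ kernel for i in range(n - k + 1)])

# Toy spectrum: flat continuum with one absorption line at pixel 10.
spec = np.ones(32)
spec[9:12] = [0.8, 0.4, 0.8]

# Second-difference filter: zero on the flat continuum, strongly
# positive at a dip -- a stand-in for a filter a CNN would learn.
kern = np.array([1.0, -2.0, 1.0])
resp = conv1d_valid(spec, kern)
line_pos = int(np.argmax(resp)) + 1   # +1 centers the 3-point kernel
```

A deep CNN stacks many such filters, which is what lets it pick up the "finer spectral signatures" the abstract credits for the improved sub-class accuracy.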
In this paper, we present a perception-action-communication loop design using Vision-based Graph Aggregation and Inference (VGAI). This multi-agent decentralized learning-to-control framework maps raw visual observations to agent actions, aided by local communication among neighboring agents. Our framework is implemented by a cascade of a convolutional and a graph neural network (CNN / GNN), addressing agent-level visual perception and feature learning, as well as swarm-level communication, local information aggregation and agent action inference, respectively. By jointly training the CNN and GNN, image features and communication messages are learned in conjunction to better address the specific task. We use imitation learning to train the VGAI controller in an offline phase, relying on a centralized expert controller. This results in a learned VGAI controller that can be deployed in a distributed manner for online execution. Additionally, the controller exhibits good scaling properties, with training in smaller teams and application in larger teams. Through a multi-agent flocking application, we demonstrate that VGAI yields performance comparable to or better than other decentralized controllers, using only the visual input modality and without accessing precise location or motion state information.
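At a high level, the swarm-level aggregation step lets each agent combine its neighbors' CNN feature messages through the communication graph. A minimal numpy sketch of one neighbor-averaging round (generic graph message passing, not the VGAI architecture; shapes and topology are made up):

```python
import numpy as np

def aggregate(features, adjacency):
    """One round of local aggregation: each agent's incoming message
    is the mean of its neighbors' feature vectors (zero if isolated),
    mirroring the GNN communication step at a high level."""
    deg = adjacency.sum(axis=1, keepdims=True)
    return (adjacency @ features) / np.maximum(deg, 1)

# 3 agents, 2-dim CNN features; agent 0 talks to agents 1 and 2,
# agents 1 and 2 only talk to agent 0 (a star topology).
A = np.array([[0, 1, 1],
              [1, 0, 0],
              [1, 0, 0]], dtype=float)
F = np.array([[1.0, 0.0],
              [0.0, 2.0],
              [4.0, 0.0]])
msgs = aggregate(F, A)
```

Stacking such rounds spreads information k hops through the team, which is why the learned controller can run distributed at execution time while still coordinating beyond each agent's field of view.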
