
Real-time solar image classification: assessing spectral, pixel-based approaches

Added by James Hughes
Publication date: 2019
Field: Physics
Language: English





In order to utilize solar imagery for real-time feature identification and large-scale data-science investigations of solar structures, we need maps of the Sun in which phenomena, or themes, are labeled. Since solar imagers produce observations every few minutes, it is not feasible to label all images by hand. Here, we compare three machine learning algorithms performing solar image classification using extreme-ultraviolet and Hydrogen-alpha images: a maximum-likelihood model assuming a single normal probability distribution for each theme, from Rigler et al. (2012); a maximum-likelihood model with an underlying Gaussian-mixture distribution; and a random forest model. We create a small database of expert-labeled maps to train and test these algorithms. Because the labels created by different experts are often ambiguous where they disagree, a collaborative labeling process is used to incorporate all inputs. We find that the random forest algorithm performs best among the three. Its advantages are most apparent in comparison of outputs to hand-drawn maps, in response to short-term variability, and in tracking long-term changes on the Sun. Our work indicates that the next generation of solar image classification algorithms would benefit significantly from using spatial structure recognition, rather than only spectral, pixel-by-pixel brightness distributions.
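The pixel-based contrast between the baseline and the random forest can be sketched as follows. This is a minimal illustration on synthetic two-channel pixel data with a simplified per-channel Gaussian likelihood; the paper's actual models are trained on expert-labeled EUV and H-alpha maps, and the theme names and brightness values below are placeholders.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic "pixels": two spectral channels (stand-ins for EUV and
# H-alpha brightness) for two themes, drawn from different normals.
quiet_sun = rng.normal([0.2, 0.3], 0.05, size=(500, 2))
coronal_hole = rng.normal([0.05, 0.1], 0.05, size=(500, 2))
X = np.vstack([quiet_sun, coronal_hole])
y = np.array([0] * 500 + [1] * 500)

def ml_gaussian_predict(X, Xtrain, ytrain):
    """Maximum-likelihood label with one diagonal Gaussian per theme
    (a rough stand-in for the single-normal model of Rigler et al.)."""
    stats = []
    for c in np.unique(ytrain):
        Xc = Xtrain[ytrain == c]
        stats.append((Xc.mean(axis=0), Xc.std(axis=0)))
    preds = []
    for x in X:
        ll = [(-0.5 * ((x - mu) / sd) ** 2 - np.log(sd)).sum()
              for mu, sd in stats]
        preds.append(int(np.argmax(ll)))
    return np.array(preds)

rf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
acc_ml = (ml_gaussian_predict(X, X, y) == y).mean()
acc_rf = (rf.predict(X) == y).mean()
```

On well-separated synthetic classes both models score highly; the paper's finding is that the differences emerge on real, ambiguous solar structures.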



Related research

A. Asensio Ramos (2018)
The quality of images of the Sun obtained from the ground is severely limited by the perturbing effect of the turbulent Earth's atmosphere. The post-facto correction of the images to compensate for the presence of the atmosphere requires the combination of high-order adaptive-optics techniques, fast measurements to freeze the turbulent atmosphere, and very time-consuming blind deconvolution algorithms. Under mild seeing conditions, blind deconvolution algorithms can produce images of astonishing quality. They can be very competitive with those obtained from space, with the huge advantage of flexible instrumentation thanks to direct access to the telescope. In this contribution we leverage deep learning techniques to significantly accelerate the blind deconvolution process and produce corrected images at a peak rate of ~100 images per second. We present two different architectures that produce excellent image corrections with noise suppression while maintaining the photometric properties of the images. As a consequence, polarimetric signals can be obtained with standard polarimetric modulation without any significant artifacts. With the expected improvements in computer hardware and algorithms, we anticipate that on-site real-time correction of solar images will be possible in the near future.
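For context, the kind of slow iterative step the networks are trained to replace can be illustrated with classical Richardson-Lucy deconvolution under a known Gaussian point-spread function. This is a toy, non-blind stand-in; the paper addresses the much harder blind, multi-frame case, and the PSF width and scene below are assumptions.

```python
import numpy as np

n = 64
xg = np.arange(n) - n // 2

# Assumed Gaussian PSF, normalized to unit sum, centered for FFT use.
psf = np.exp(-0.5 * (xg[:, None] ** 2 + xg[None, :] ** 2) / 2.0 ** 2)
psf /= psf.sum()
otf = np.fft.fft2(np.fft.ifftshift(psf))

truth = np.zeros((n, n))
truth[20:28, 20:28] = 1.0                        # a bright toy "granule"
blurred = np.real(np.fft.ifft2(np.fft.fft2(truth) * otf))

# Richardson-Lucy: multiplicative updates that sharpen the estimate.
est = np.full((n, n), blurred.mean())
for _ in range(50):
    conv = np.real(np.fft.ifft2(np.fft.fft2(est) * otf))
    ratio = blurred / np.maximum(conv, 1e-12)
    est *= np.real(np.fft.ifft2(np.fft.fft2(ratio) * np.conj(otf)))

err_before = np.abs(blurred - truth).max()
err_after = np.abs(est - truth).max()
```

Each iteration requires several FFTs over the full frame, which is why replacing dozens of such iterations with a single network pass yields the reported speedup.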
During laparoscopic surgery, context-aware assistance systems aim to alleviate some of the difficulties the surgeon faces. To ensure that the right information is provided at the right time, the current phase of the intervention has to be known. Real-time localization and classification of the surgical tools currently in use are key components of both activity-based phase recognition and assistance generation. In this paper, we present an image-based approach that detects and classifies tools during laparoscopic interventions in real time. First, potential instrument bounding boxes are detected using a pixel-wise random forest segmentation. Each of these bounding boxes is then classified using a cascade of random forests. For this, multiple features, such as histograms over hue and saturation, gradients, and SURF features, are extracted from each detected bounding box. We evaluated our approach on five different videos from two different types of procedures. We distinguished between the four most common classes of instruments (LigaSure, atraumatic grasper, aspirator, clip applier) and background. Our method successfully located up to 86% of all instruments. On manually provided bounding boxes, we achieve an instrument-type recognition rate of up to 58%, and on automatically detected bounding boxes up to 49%. To our knowledge, this is the first approach that allows image-based classification of surgical tools in a laparoscopic setting in real time.
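The feature-extraction stage can be sketched as follows, using only the hue/saturation histogram feature and a single random forest in place of the paper's full cascade. The HSV patches, appearance parameters, and class labels here are synthetic placeholders, not the paper's data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)

def hs_histogram(hsv_patch, bins=8):
    """Concatenated hue and saturation histograms for one bounding box."""
    h = np.histogram(hsv_patch[..., 0], bins=bins, range=(0, 1))[0]
    s = np.histogram(hsv_patch[..., 1], bins=bins, range=(0, 1))[0]
    feat = np.concatenate([h, s]).astype(float)
    return feat / feat.sum()

def fake_patch(hue, sat):
    """Synthetic 16x16 HSV patch around an assumed appearance model."""
    hsv = rng.normal([hue, sat, 0.5], 0.05, size=(16, 16, 3))
    return np.clip(hsv, 0, 1)

# Toy appearance split: metallic instrument (low saturation) vs.
# background tissue (reddish hue, higher saturation).
X = np.array([hs_histogram(fake_patch(0.1, 0.1)) for _ in range(50)]
             + [hs_histogram(fake_patch(0.0, 0.7)) for _ in range(50)])
y = np.array([0] * 50 + [1] * 50)      # 0 = instrument, 1 = background

clf = RandomForestClassifier(n_estimators=30, random_state=0).fit(X, y)
acc = clf.score(X, y)
```

In the paper this classifier sits behind a pixel-wise segmentation that proposes the candidate boxes, and the cascade adds further feature types (gradients, SURF) at later stages.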
One way of imaging X-ray emission from solar flares is to measure Fourier components of the spatial X-ray source distribution. We present a new Compressed Sensing-based algorithm named VIS_CS, which reconstructs the spatial distribution from such Fourier components. We demonstrate the application of the algorithm on synthetic and observed solar flare X-ray data from the Reuven Ramaty High Energy Solar Spectroscopic Imager (RHESSI) satellite and compare its performance with existing algorithms. VIS_CS produces competitive results with accurate photometry and morphology, without requiring any algorithm- and X-ray source-specific parameter tuning. Its robustness and performance make this algorithm ideally suited for generation of quicklook images or large image cubes without user intervention, such as for imaging spectroscopy analysis.
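A generic compressed-sensing reconstruction from partial Fourier samples can be sketched with iterative soft-thresholding. This is a toy 1-D analogue under assumed sparsity and parameter choices, not the actual VIS_CS algorithm.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 64
x_true = np.zeros(n)
x_true[[10, 30, 45]] = [1.0, 0.6, 0.8]      # sparse "source" signal

# Sample a random subset of Fourier components (the "visibilities").
mask = np.zeros(n, dtype=bool)
mask[rng.choice(n, 32, replace=False)] = True
y = np.fft.fft(x_true)[mask]

def A(x):
    """Forward operator: masked FFT."""
    return np.fft.fft(x)[mask]

def At(v):
    """Adjoint operator: zero-fill the spectrum and inverse FFT."""
    full = np.zeros(n, dtype=complex)
    full[mask] = v
    return np.real(np.fft.ifft(full)) * n

# Iterative soft-thresholding: gradient step on ||Ax - y||^2, then
# shrinkage toward zero to promote sparsity.
x = np.zeros(n)
step, lam = 1.0 / n, 0.02
for _ in range(200):
    g = x - step * At(A(x) - y)
    x = np.sign(g) * np.maximum(np.abs(g) - lam, 0)

err = np.abs(x - x_true).max()
```

The sparse signal is recovered from half the Fourier coefficients; VIS_CS applies the same principle in 2-D to RHESSI visibilities, with the robustness and parameter-free behavior described above.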
Recently, machine learning methods have emerged as a viable solution for automated classification of image-based data in various research fields and business applications. Scientists require a fast and reliable solution to handle the ever-growing amount of data in astronomy. However, so far astronomers have mainly classified variable-star light curves based on various pre-computed statistics and light-curve parameters. In this work we use an image-based convolutional neural network to classify the different types of variable stars. We used images of phase-folded light curves from the OGLE-III survey for training, validation, and testing, and used the OGLE-IV survey as an independent test set. After the training phase, our neural network was able to classify the different types with 80-99% accuracy for OGLE-III and 77-98% for OGLE-IV.
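The phase-folding step that turns an irregularly sampled light curve into a fixed-size input for such a network can be sketched as follows. The period, magnitudes, and bin count below are synthetic assumptions; OGLE provides the real light curves and periods.

```python
import numpy as np

rng = np.random.default_rng(2)
period = 0.5                               # days (assumed known)
t = np.sort(rng.uniform(0, 100, 300))      # irregular observation times
mag = (15.0 + 0.3 * np.sin(2 * np.pi * t / period)
       + rng.normal(0, 0.01, 300))         # toy pulsating-star magnitudes

# Fold the observation times onto a single cycle in [0, 1).
phase = (t % period) / period

# Bin the folded curve into a fixed-length vector, the kind of regular
# grid that can be rendered as one row/image for a CNN.
bins = 32
idx = np.minimum((phase * bins).astype(int), bins - 1)
folded = np.array([mag[idx == b].mean() if np.any(idx == b) else np.nan
                   for b in range(bins)])
```

The resulting fixed-length representation is what makes convolutional architectures applicable despite the irregular time sampling of survey photometry.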
T. Rotter (2015)
We present an empirical model based on the visible area covered by coronal holes close to the central meridian in order to predict the solar wind speed at 1 AU up to four days in advance, with a 1 hr time resolution. Linear prediction functions are used to relate coronal hole areas to solar wind speed. The function parameters are automatically adapted using information from the previous three Carrington rotations, so the algorithm automatically reacts to changes in the solar wind speed during different phases of the solar cycle. The adaptive algorithm has been applied to and tested on SDO/AIA-193A observations and ACE measurements during the years 2011-2013, covering 41 Carrington rotations. The solar wind arrival is delayed, needing on average 4.02 +/- 0.5 days to reach Earth. The algorithm produces good predictions for the peak amplitudes of the 156 solar wind high-speed streams, with correlation coefficients of cc~0.60. For 80% of the peaks, the predicted arrival matches the ACE in situ measurements within a time window of 0.5 days. The same algorithm, using linear predictions, was also applied to predict the magnetic field strength from coronal hole areas, but did not give reliable predictions (cc~0.2).
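The core idea of refitting a linear prediction function on a sliding window of recent data can be sketched as follows. The areas, speeds, and coefficients below are synthetic; the real model relates SDO/AIA-193A coronal-hole areas to ACE in situ speeds.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic history: assume speed ~ 300 + 900 * fractional CH area
# (km/s) plus scatter, over roughly three rotations of daily values.
area = rng.uniform(0.0, 0.3, 90)
speed = 300 + 900 * area + rng.normal(0, 20, 90)

# Refit the linear prediction function on the most recent window,
# mimicking the adaptation over the previous three Carrington rotations.
window = 81                                 # last 3 x 27 days
a, b = np.polyfit(area[-window:], speed[-window:], 1)

def predict_speed(ch_area):
    """Predicted solar wind speed ~4 days later from today's CH area."""
    return a * ch_area + b

v = predict_speed(0.2)
```

Because the fit coefficients are re-estimated as the window slides forward, the prediction function tracks slow changes in the area-speed relation over the solar cycle without manual retuning.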