
Solving an elastic inverse problem using Convolutional Neural Networks

Posted by Nachiket Gokhale
Publication date: 2021
Research field: Physics
Paper language: English





We explore the application of a Convolutional Neural Network (CNN) to image the shear modulus field of an almost incompressible, isotropic, linear elastic medium in plane strain using displacement or strain field data. This problem is important in medicine because the shear modulus of suspicious and potentially cancerous growths in soft tissue is elevated by about an order of magnitude compared to the background of normal tissue. Imaging the shear modulus field can therefore yield high-contrast medical images. Our imaging problem is: given a displacement or strain field (or its components), predict the corresponding shear modulus field. Our CNN is trained on 6000 examples, each consisting of a displacement or strain field and the corresponding shear modulus field. We observe encouraging results that warrant further research and show the promise of this methodology.
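The abstract does not specify the network architecture. As a minimal sketch of the stated setup (strain field in, shear modulus field out), the following PyTorch snippet assumes a fully convolutional model, a 64x64 field resolution, two input strain components, and an MSE loss; all of these are illustrative assumptions, not the authors' design.

```python
# Minimal sketch (not the authors' architecture): a fully convolutional
# network mapping a 2-component strain field to a shear modulus field.
# Field resolution (64x64), channel counts, and loss are assumptions.
import torch
import torch.nn as nn

class ModulusCNN(nn.Module):
    def __init__(self, in_channels=2):  # e.g. two strain components
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, kernel_size=3, padding=1),  # shear modulus map
        )

    def forward(self, x):
        return self.net(x)

model = ModulusCNN()
strain = torch.randn(8, 2, 64, 64)   # batch of synthetic strain fields
mu_pred = model(strain)              # predicted shear modulus fields
loss = nn.functional.mse_loss(mu_pred, torch.randn(8, 1, 64, 64))
```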


Read also

During a tokamak discharge, the plasma can vary between different confinement regimes: Low (L), High (H) and, in some cases, a temporary intermediate state called Dithering (D). In addition, while the plasma is in H mode, Edge Localized Modes (ELMs) can occur. The automatic detection of changes between these states, and of ELMs, is important for tokamak operation. Motivated by this, and by recent developments in Deep Learning (DL), we developed and compared two methods for automatic detection of the occurrence of L-D-H transitions and ELMs, applied to data from the TCV tokamak. These methods consist of a Convolutional Neural Network (CNN) and a Convolutional Long Short-Term Memory Neural Network (Conv-LSTM). We measured our results with regard to ELMs using ROC curves and Youden's score index, and with regard to state detection using Cohen's kappa index.
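As an illustration of the two evaluation metrics named above (Youden's index read off an ROC curve, and Cohen's kappa for state agreement), here is a hedged scikit-learn sketch with toy data; it is not the TCV analysis pipeline.

```python
# Sketch of the two metrics named in the abstract (toy data, not the TCV
# pipeline). Youden's J = TPR - FPR picks the ROC operating point that best
# separates ELM from non-ELM frames; Cohen's kappa scores agreement between
# predicted and labelled plasma states (L/D/H).
import numpy as np
from sklearn.metrics import roc_curve, cohen_kappa_score

y_true = np.array([0, 0, 1, 1, 0, 1])                 # ELM ground truth (toy)
y_score = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.9])   # model scores (toy)
fpr, tpr, thresholds = roc_curve(y_true, y_score)
best = np.argmax(tpr - fpr)                           # maximize Youden's J
print("Youden's J:", tpr[best] - fpr[best], "at threshold", thresholds[best])

states_true = ["L", "L", "D", "H", "H", "H"]          # toy state labels
states_pred = ["L", "D", "D", "H", "H", "L"]
print("Cohen's kappa:", cohen_kappa_score(states_true, states_pred))
```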
Caching of popular content closer to the mobile user can significantly increase overall user experience as well as network efficiency by decongesting backbone network segments during congestion episodes. In order to find the optimal caching locations, many conventional approaches rely on solving a complex optimization problem that suffers from the curse of dimensionality and may therefore fail to support online decision making. In this paper we propose a framework that amalgamates model-based optimization with data-driven techniques by transforming an optimization problem into a grayscale image and training a convolutional neural network (CNN) to predict optimal caching location policies. The rationale for the proposed modelling comes from the superiority of CNNs at capturing features in grayscale images, reaching human-level performance in image recognition problems. The CNN is trained with optimal solutions, and numerical investigations reveal that performance can increase by more than 400% compared to powerful randomized greedy algorithms. To this end, the proposed technique seems a promising way forward to the holy grail of resource orchestration: providing high-quality decision making in real time.
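A hedged sketch of the image-based formulation described above: a toy demand matrix is normalized into a grayscale image and a small CNN scores each node as a caching location. The shapes, the demand-matrix encoding, and the output head are assumptions; the paper's actual encoding may differ.

```python
# Hedged sketch of the paper's idea (names and shapes are assumptions):
# encode an optimization instance (here a node-by-content demand matrix)
# as a normalized grayscale image and let a small CNN predict a caching
# probability per node, supervised by optimizer solutions.
import torch
import torch.nn as nn

n_nodes, n_contents = 16, 16
demand = torch.rand(1, 1, n_nodes, n_contents)     # toy problem instance
image = demand / demand.max()                      # grayscale in [0, 1]

policy_cnn = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Flatten(),
    nn.Linear(16 * n_nodes * n_contents, n_nodes), # one score per node
    nn.Sigmoid(),                                  # caching probability
)
cache_probs = policy_cnn(image)  # train against optimal placements (BCE loss)
```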
Selection of the correct convergence angle is essential for achieving the highest-resolution imaging in scanning transmission electron microscopy (STEM). Use of poor heuristics, such as Rayleigh's quarter-phase rule, to assess probe quality, together with uncertainties in measurement of the aberration function, results in incorrect selection of convergence angles and lower resolution. Here, we show that the Strehl ratio provides an accurate and efficiently computable criterion for evaluating probe size in STEM. A convolutional neural network trained on the Strehl ratio is shown to outperform experienced microscopists at selecting a convergence angle from a single electron Ronchigram using simulated datasets. Trained on tens of thousands of simulated Ronchigram examples, the network selects convergence angles yielding probes on average 85% nearer to the optimal size, at millisecond speeds (0.02% of the human assessment time). Qualitative assessment on experimental Ronchigrams with intentionally introduced aberrations suggests that trends in the optimal convergence angle are well modeled, but high accuracy requires extensive training datasets. This near-immediate assessment of Ronchigrams using the Strehl ratio and machine learning highlights a viable path toward rapid, automated alignment of aberration-corrected electron microscopes.
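For intuition, the Strehl ratio criterion can be sketched as the squared magnitude of the aperture-averaged phase factor of the aberration function; the snippet below sweeps candidate convergence semi-angles and applies a Maréchal-style S ≥ 0.8 cut. The aberration coefficients, voltage, and the cut itself are illustrative assumptions, not values or procedures taken from the paper.

```python
# Hedged sketch (standard formulas, illustrative parameters): the Strehl
# ratio S = |<exp(-i*chi)>|^2 averaged over the aperture, evaluated for a
# defocus + spherical-aberration phase chi(k), swept over candidate
# convergence semi-angles to find the largest usable aperture.
import numpy as np

wavelength = 1.97e-12   # ~300 kV electrons, m
Cs = 1e-6               # spherical aberration, m (assumed residual value)
df = -30e-9             # defocus, m (assumed)

def strehl_ratio(alpha):
    """Strehl ratio for convergence semi-angle alpha (rad)."""
    k_max = alpha / wavelength
    kx = np.linspace(-k_max, k_max, 256)
    KX, KY = np.meshgrid(kx, kx)
    K2 = KX**2 + KY**2
    aperture = K2 <= k_max**2
    chi = np.pi * wavelength * df * K2 + 0.5 * np.pi * Cs * wavelength**3 * K2**2
    return np.abs(np.exp(-1j * chi[aperture]).mean())**2

alphas = np.linspace(1e-3, 40e-3, 80)                  # 1-40 mrad candidates
good = [a for a in alphas if strehl_ratio(a) >= 0.8]   # Marechal-style cut
print(f"largest usable semi-angle ~ {max(good)*1e3:.1f} mrad")
```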
In general-purpose particle detectors, the particle-flow algorithm may be used to reconstruct a comprehensive particle-level view of the event by combining information from the calorimeters and the trackers, significantly improving the detector resolution for jets and the missing transverse momentum. In view of the planned high-luminosity upgrade of the CERN Large Hadron Collider (LHC), it is necessary to revisit existing reconstruction algorithms and ensure that both the physics and computational performance are sufficient in an environment with many simultaneous proton-proton interactions (pileup). Machine learning may offer a prospect for computationally efficient event reconstruction that is well-suited to heterogeneous computing platforms, while significantly improving the reconstruction quality over rule-based algorithms for granular detectors. We introduce MLPF, a novel, end-to-end trainable, machine-learned particle-flow algorithm based on parallelizable, computationally efficient, and scalable graph neural networks optimized using a multi-task objective on simulated events. We report the physics and computational performance of the MLPF algorithm on a Monte Carlo dataset of top quark-antiquark pairs produced in proton-proton collisions in conditions similar to those expected for the high-luminosity LHC. The MLPF algorithm improves the physics response with respect to a rule-based benchmark algorithm and demonstrates computationally scalable particle-flow reconstruction in a high-pileup environment.
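As a hedged illustration of graph-based particle-flow reconstruction (not the MLPF architecture itself), the sketch below runs one message-passing step over a toy graph of detector elements and emits per-node candidate features; the graph construction and feature sizes are assumptions.

```python
# Hedged sketch of the particle-flow idea (not the MLPF model): one
# message-passing step over a graph of detector elements (tracks and
# calorimeter clusters), producing per-node particle-candidate outputs.
import torch
import torch.nn as nn

n_elements, n_feat = 100, 8
x = torch.randn(n_elements, n_feat)                        # element features
adj = (torch.rand(n_elements, n_elements) < 0.05).float()  # toy graph
adj = adj / adj.sum(dim=1, keepdim=True).clamp(min=1)      # row-normalize

msg = nn.Linear(n_feat, n_feat)          # message transform
update = nn.Linear(2 * n_feat, n_feat)   # node update from self + neighbors
head = nn.Linear(n_feat, 5)              # e.g. class logits + momentum terms

neighbors = adj @ msg(x)                 # aggregate neighbor messages
h = torch.relu(update(torch.cat([x, neighbors], dim=1)))
candidates = head(h)                     # per-node particle candidates
```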
Extracting relevant properties of empirical signals generated by nonlinear, stochastic, and high-dimensional systems is a challenge of complex systems research. Open questions are how to differentiate chaotic signals from stochastic ones, and how to quantify nonlinear and/or high-order temporal correlations. Here we propose a new technique to reliably address both problems. Our approach follows two steps: first, we train an artificial neural network (ANN) with flicker (colored) noise to predict the value of the parameter $\alpha$ that determines the strength of the correlation of the noise. To predict $\alpha$, the ANN input features are a set of probabilities extracted from the time series using symbolic ordinal analysis. Then, we feed the trained ANN the probabilities extracted from the time series of interest, and analyze the ANN output. We find that the $\alpha$ value returned by the ANN is informative of the temporal correlations present in the time series. To distinguish between stochastic and chaotic signals, we exploit the fact that the difference between the permutation entropy (PE) of a given time series and the PE of flicker noise with the same $\alpha$ parameter is small when the time series is stochastic, but large when the time series is chaotic. We validate our technique by analysing synthetic and empirical time series whose nature is well established. We also demonstrate the robustness of our approach with respect to the length of the time series and to the level of noise. We expect that our algorithm, which is freely available, will be very useful to the community.
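The two building blocks described here, ordinal-pattern probabilities as ANN inputs and permutation entropy for the stochastic-vs-chaotic test, are standard enough to sketch. The snippet below is a minimal illustration, not the authors' released code.

```python
# Hedged sketch of the described building blocks: Bandt-Pompe ordinal-pattern
# probabilities (the ANN input features) and normalized permutation entropy
# (PE) for the chaotic-vs-stochastic comparison. Not the authors' code.
import math
from itertools import permutations
import numpy as np

def ordinal_probs(x, d=4):
    """Probabilities of the d! ordinal patterns of embedding dimension d."""
    patterns = {p: 0 for p in permutations(range(d))}
    for i in range(len(x) - d + 1):
        patterns[tuple(np.argsort(x[i:i + d]))] += 1
    counts = np.array(list(patterns.values()), dtype=float)
    return counts / counts.sum()

def permutation_entropy(x, d=4):
    p = ordinal_probs(x, d)
    p = p[p > 0]
    return -np.sum(p * np.log(p)) / np.log(math.factorial(d))  # in [0, 1]

# The test: if |PE(series) - PE(flicker noise with the ANN-estimated alpha)|
# is small, the series is likely stochastic; if large, likely chaotic.
x = np.random.randn(5000)         # toy series (white noise)
print(permutation_entropy(x))     # close to 1 for white noise
```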