
Modeling Material Stress Using Integrated Gaussian Markov Random Fields

Added by Peter Marcy
Publication date: 2019
Research language: English





The equations of a physical constitutive model for material stress within tantalum grains were solved numerically using a tetrahedrally meshed volume. The resulting output included a scalar von Mises stress for each of the more than 94,000 tetrahedra within the finite element discretization. In this paper, we define an intricate statistical model for the spatial field of von Mises stress which uses the given grain geometry in a fundamental way. Our model relates the three-dimensional field to integrals of latent stochastic processes defined on the vertices of the one- and two-dimensional grain boundaries. An intuitive neighborhood structure of said boundary nodes suggested the use of a latent Gaussian Markov random field (GMRF). However, despite the potential for computational gains afforded by GMRFs, the integral nature of our model and the sheer number of data points pose substantial challenges for a full Bayesian analysis. To overcome these problems and encourage efficient exploration of the posterior distribution, several techniques are combined: parallel computing, sparse matrix methods, and a modification of a block update strategy within the sampling routine. In addition, we use an auxiliary variables approach to accommodate the presence of outliers in the data.
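For readers unfamiliar with the GMRF machinery mentioned above, the following is a minimal sketch, not the authors' code, of how a sparse precision matrix can be assembled from a neighborhood structure on boundary nodes. The adjacency list `neighbors` and the parameters `tau` and `rho` are illustrative assumptions, and the proper-CAR form Q = tau (D - rho W) is just one common choice of GMRF precision.

```python
# Minimal sketch (not the authors' code): build a sparse GMRF precision matrix
# from a neighborhood structure on boundary nodes, using Q = tau * (D - rho * W).
import numpy as np
import scipy.sparse as sp

def gmrf_precision(neighbors, tau=1.0, rho=0.99):
    """Sparse precision matrix for a proper CAR/GMRF prior on boundary nodes."""
    n = len(neighbors)
    rows, cols = [], []
    for i, nbrs in enumerate(neighbors):
        for j in nbrs:
            rows.append(i)
            cols.append(j)
    W = sp.csr_matrix((np.ones(len(rows)), (rows, cols)), shape=(n, n))  # adjacency
    D = sp.diags(np.asarray(W.sum(axis=1)).ravel())                      # node degrees
    return tau * (D - rho * W)                                           # sparse, symmetric

# Toy example: a chain graph 0-1-2
Q = gmrf_precision([[1], [0, 2], [1]])
print(Q.toarray())
```

Sparsity in Q is what makes block updates and large node counts tractable, since conditional distributions only involve each node's neighbors.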


Related research

Identifying the most deprived regions of any country or city is key if policy makers are to design successful interventions. However, locating areas with the greatest need is often surprisingly challenging in developing countries. Due to the logistical challenges of traditional household surveying, official statistics can be slow to be updated; estimates that exist can be coarse, a consequence of prohibitive costs and poor infrastructures; and mass urbanisation can render manually surveyed figures rapidly out-of-date. Comparative judgement models, such as the Bradley--Terry model, offer a promising solution. Leveraging local knowledge, elicited via comparisons of different areas' affluence, such models can both simplify logistics and circumvent biases inherent to household surveys. Yet widespread adoption remains limited, due to the large amount of data existing approaches still require. We address this via development of a novel Bayesian Spatial Bradley--Terry model, which substantially decreases the amount of data comparisons required for effective inference. This model integrates a network representation of the city or country, along with assumptions of spatial smoothness that allow deprivation in one area to be informed by neighbouring areas. We demonstrate the practical effectiveness of this method, through a novel comparative judgement data set collected in Dar es Salaam, Tanzania.
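As a rough illustration of the modelling idea only, the sketch below (our own, not the authors' implementation) combines a Bradley--Terry comparison likelihood with an ICAR-style spatial smoothness penalty over a neighbourhood graph; `wins`, `adjacency`, and `alpha` are hypothetical inputs.

```python
# Hedged sketch: log-posterior of a spatial Bradley--Terry model, combining the
# pairwise comparison likelihood with a smoothness penalty between neighbouring areas.
import numpy as np

def log_posterior(lam, wins, adjacency, alpha=1.0):
    """lam: latent deprivation levels; wins[i][j]: times area i was judged more deprived than j."""
    ll = 0.0
    n = len(lam)
    for i in range(n):
        for j in range(n):
            if i != j and wins[i][j] > 0:
                # Bradley--Terry log-probability that area i is judged more deprived than j
                ll += wins[i][j] * -np.log1p(np.exp(-(lam[i] - lam[j])))
    # ICAR-style prior: penalise differences between neighbouring areas
    prior = -0.5 * alpha * sum((lam[i] - lam[j]) ** 2
                               for i in range(n) for j in adjacency[i] if j > i)
    return ll + prior
```

The spatial penalty is what lets sparsely compared areas borrow strength from their neighbours, which is the source of the reduction in required comparisons.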
Understanding centennial scale climate variability requires data sets that are accurate, long, continuous and of broad spatial coverage. Since instrumental measurements are generally only available after 1850, temperature fields must be reconstructed using paleoclimate archives, known as proxies. Various climate field reconstructions (CFR) methods have been proposed to relate past temperature to such proxy networks. In this work, we propose a new CFR method, called GraphEM, based on Gaussian Markov random fields embedded within an EM algorithm. Gaussian Markov random fields provide a natural and flexible framework for modeling high-dimensional spatial fields. At the same time, they provide the parameter reduction necessary for obtaining precise and well-conditioned estimates of the covariance structure, even in the sample-starved setting common in paleoclimate applications. In this paper, we propose and compare the performance of different methods to estimate the graphical structure of climate fields, and demonstrate how the GraphEM algorithm can be used to reconstruct past climate variations. The performance of GraphEM is compared to the widely used CFR method RegEM with regularization via truncated total least squares, using synthetic data. Our results show that GraphEM can yield significant improvements, with uniform gains over space, and far better risk properties. We demonstrate that the spatial structure of temperature fields can be well estimated by graphs where each neighbor is only connected to a few geographically close neighbors, and that the increase in performance is directly related to recovering the underlying sparsity in the covariance of the spatial field. Our work demonstrates how significant improvements can be made in climate reconstruction methods by better modeling the covariance structure of the climate field.
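To make the GMRF-within-EM idea concrete, here is a hedged sketch of the conditional-mean imputation that an E-step of this kind relies on. It assumes a Gaussian model whose precision matrix is sparse (nonzeros only between neighbouring grid cells) and is not taken from the GraphEM code base; all names are illustrative.

```python
# Hedged sketch: impute missing temperature values from observed ones under a
# Gaussian model parameterised by a (sparse) precision matrix.
import numpy as np

def conditional_impute(x, observed_mask, mu, precision):
    """Return x with missing entries replaced by their conditional mean given the observed entries."""
    m = ~observed_mask
    Q_mm = precision[np.ix_(m, m)]
    Q_mo = precision[np.ix_(m, observed_mask)]
    # x_m | x_o ~ N( mu_m - Q_mm^{-1} Q_mo (x_o - mu_o), Q_mm^{-1} )
    rhs = Q_mo @ (x[observed_mask] - mu[observed_mask])
    x = x.copy()
    x[m] = mu[m] - np.linalg.solve(Q_mm, rhs)
    return x
```

Because each cell is linked only to a few geographically close neighbours, Q_mm stays sparse and well conditioned even when the observed proxy network is small.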
In this paper we describe a general probabilistic framework for modeling waveforms such as heartbeats from ECG data. The model is based on segmental hidden Markov models (as used in speech recognition) with the addition of random effects to the generative model. The random effects component of the model handles shape variability across different waveforms within a general class of waveforms of similar shape. We show that this probabilistic model provides a unified framework for learning these models from sets of waveform data as well as parsing, classification, and prediction of new waveforms. We derive a computationally efficient EM algorithm to fit the model on multiple waveforms, and introduce a scoring method that evaluates a test waveform based on its shape. Results on two real-world data sets demonstrate that the random effects methodology leads to improved accuracy (compared to alternative approaches) on classification and segmentation of real-world waveforms.
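The generative idea (a template shape per segment plus a waveform-level random effect) can be illustrated with the toy sampler below; the segment means, lengths, and variance parameters are made up for the example and do not come from the paper.

```python
# Illustrative sketch (not the paper's model): a segmental emission model where a
# per-waveform random effect shifts the whole template to capture shape variability.
import numpy as np

rng = np.random.default_rng(0)

def sample_waveform(segment_means, segment_lengths, sigma_re=0.5, sigma_noise=0.1):
    """segment_means[k]: mean level of segment k; segment_lengths[k]: its duration in samples."""
    random_effect = rng.normal(0.0, sigma_re)        # waveform-specific shift (random effect)
    samples = []
    for mu, length in zip(segment_means, segment_lengths):
        samples.append(mu + random_effect + rng.normal(0.0, sigma_noise, size=length))
    return np.concatenate(samples)

y = sample_waveform([0.0, 2.0, -1.0], [30, 10, 40])  # toy three-segment "heartbeat"
```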
In this paper, we study the problem of inferring time-varying Markov random fields (MRF), where the underlying graphical model is both sparse and changes sparsely over time. Most of the existing methods for the inference of time-varying MRFs rely on regularized maximum likelihood estimation (MLE), which typically suffers from weak statistical guarantees and high computational time. Instead, we introduce a new class of constrained optimization problems for the inference of sparsely-changing MRFs. The proposed optimization problem is formulated based on the exact $\ell_0$ regularization, and can be solved in near-linear time and memory. Moreover, we show that the proposed estimator enjoys a provably small estimation error. As a special case, we derive sharp statistical guarantees for the inference of sparsely-changing Gaussian MRFs (GMRF) in the high-dimensional regime, showing that such problems can be learned with as few as one sample per time. Our proposed method is extremely efficient in practice: it can accurately estimate sparsely-changing graphical models with more than 500 million variables in less than one hour.
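As a simplified, assumption-laden illustration of the "sparsely-changing" constraint (and emphatically not the paper's near-linear-time algorithm), the snippet below keeps only the k largest entry-wise changes between two successive precision-matrix estimates, an $\ell_0$-style projection step.

```python
# Toy projection step: enforce that at most k entries change between consecutive
# precision estimates. Purely illustrative of the sparsely-changing constraint.
import numpy as np

def project_sparse_change(theta_prev, theta_new, k):
    """Keep only the k largest-magnitude changes from theta_prev to theta_new."""
    delta = theta_new - theta_prev
    if k < delta.size:
        cutoff = np.partition(np.abs(delta).ravel(), -k)[-k]
        delta[np.abs(delta) < cutoff] = 0.0
    return theta_prev + delta
```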
In many real-life scenarios, system failure depends on dynamic stress-strength interference, where strength degrades and stress accumulates concurrently over time. In this paper, we consider the problem of finding an optimal replacement strategy that balances the cost of replacement with the cost of failure and results in a minimum expected cost per unit time under a cumulative damage model with strength degradation. The existing recommendations are applicable only under restricted distributional assumptions and/or with fixed strength. As theoretical evaluation of the expected cost per unit time turns out to be very complicated, a simulation-based algorithm is proposed to evaluate the expected cost rate and find the optimal replacement strategy. The proposed method is easy to implement and has a wider domain of application. For illustration, the proposed method is applied to real case studies on mailbox and cell-phone battery experiments.
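The simulation-based evaluation can be sketched as a small Monte Carlo routine: simulate shock arrivals, accumulate damage against a degrading strength, and average the resulting cost per unit time for a candidate replacement age T. All distributions, rates, and costs below are illustrative placeholders, not those of the case studies.

```python
# Hedged Monte Carlo sketch: estimate expected cost per unit time for a planned
# replacement age T under a cumulative damage model with linearly degrading strength.
import numpy as np

rng = np.random.default_rng(1)

def cost_rate(T, c_replace=1.0, c_failure=5.0, rate=1.0,
              shock_mean=0.3, strength0=5.0, degrade=0.05, n_sim=2_000):
    total_cost, total_time = 0.0, 0.0
    for _ in range(n_sim):
        t, damage = 0.0, 0.0
        while True:
            t += rng.exponential(1.0 / rate)             # waiting time to next shock
            if t >= T:                                    # planned replacement reached first
                total_cost += c_replace
                total_time += T
                break
            damage += rng.exponential(shock_mean)         # cumulative damage from the shock
            if damage >= strength0 - degrade * t:         # strength has degraded below damage
                total_cost += c_failure                   # failure replacement
                total_time += t
                break
    return total_cost / total_time

best_T = min(np.linspace(0.5, 20, 40), key=cost_rate)     # crude grid search over T
```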
