This paper presents a statistical framework enabling optimal sampling and robust analysis of fatigue data. We develop protocols based on Bayesian maximum entropy sampling that build on the staircase and step methods and remove the requirement of prior knowledge of the fatigue strength distribution for data collection. Results show improved sampling efficiency and parameter estimation compared with conventional approaches. Statistical methods for distinguishing between distribution types highlight the role of the sampling protocol in model discrimination. The methods are validated experimentally, demonstrating their applicability in laboratory testing.
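To make the idea of informative level selection concrete, the sketch below picks the next stress level by maximizing the expected reduction in posterior entropy over the parameters of an assumed normal fatigue-strength distribution; it is a minimal stand-in for the protocols developed in the paper, and all function names and parameter choices are assumptions.

```python
import numpy as np
from scipy.stats import norm

# Illustrative sketch only: a normal fatigue-strength model with a grid
# posterior over (mu, sigma); the next stress level is the candidate that
# maximizes the expected reduction in posterior entropy given the
# pass/fail (failure/runout) outcomes collected so far.

def posterior_entropy(logpost):
    p = np.exp(logpost - logpost.max())
    p /= p.sum()
    return -np.sum(p * np.log(p + 1e-300))

def next_level_max_entropy(levels, outcomes, mu_grid, sigma_grid, candidates):
    MU, SG = np.meshgrid(mu_grid, sigma_grid, indexing="ij")
    logpost = np.zeros_like(MU)                      # flat prior on the grid
    for s, failed in zip(levels, outcomes):          # P(failure at s) = Phi((s - mu) / sigma)
        p_fail = norm.cdf((s - MU) / SG)
        logpost += np.log((p_fail if failed else 1.0 - p_fail) + 1e-12)
    H0 = posterior_entropy(logpost)
    best_s, best_gain = None, -np.inf
    for s in candidates:
        p_fail = norm.cdf((s - MU) / SG)
        post = np.exp(logpost - logpost.max())
        post /= post.sum()
        pf = np.sum(post * p_fail)                   # predictive failure probability
        H_fail = posterior_entropy(logpost + np.log(p_fail + 1e-12))
        H_surv = posterior_entropy(logpost + np.log(1.0 - p_fail + 1e-12))
        gain = H0 - (pf * H_fail + (1.0 - pf) * H_surv)
        if gain > best_gain:
            best_s, best_gain = s, gain
    return best_s
```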
A new phenomenological technique for using constant amplitude loading data to predict fatigue life from a variable amplitude strain history is presented. A critical feature of this reversal-by-reversal model is that the damage accumulation is inherently non-linear. The damage for a reversal in the variable amplitude loading history is predicted by assuming that the accumulated damage is equivalent to that produced by a constant amplitude loading whose strain range matches that of the particular variable amplitude reversal. A key feature of this approach is that overloads at the beginning of the strain history have a more substantial impact on the total lifetime than overloads applied toward the end of the cycle life. This technique effectively incorporates the strain history in the damage prediction and, unlike other methods, requires no fitting parameters that demand substantial experimental data. The model is validated using experimental variable amplitude fatigue data for three different metals.
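A minimal sketch of a reversal-by-reversal, non-linear accumulation rule in this spirit is given below; the strain-life curve, damage exponent, and parameter values are illustrative assumptions rather than the model's calibration.

```python
import numpy as np

# Minimal sketch of non-linear, reversal-by-reversal damage accumulation.
# Before each reversal, the current damage D is converted to an equivalent
# cycle count under constant-amplitude loading at that reversal's strain
# range, then advanced by half a cycle. Early overloads therefore push D
# onto a steeper part of the damage curve than late ones.

def cycles_to_failure(strain_range, c=0.2, b=-0.5):
    """Hypothetical Coffin-Manson-type strain-life curve: strain_range = c * N_f**b."""
    return (strain_range / c) ** (1.0 / b)

def accumulate_damage(strain_ranges, q=0.4):
    D = 0.0
    for dr in strain_ranges:
        Nf = cycles_to_failure(dr)
        n_eq = Nf * D ** (1.0 / q)      # cycles that would produce damage D at this range
        D = ((n_eq + 0.5) / Nf) ** q    # one reversal = half a cycle
        if D >= 1.0:                    # predicted failure
            break
    return D
```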
In this paper, we propose a Bayesian Hypothesis Testing Algorithm (BHTA) for sparse representation. It uses the Bayesian framework to determine the active atoms in the sparse representation of a signal. The Bayesian hypothesis test, based on three assumptions, determines the active atoms from the correlations and leads to the activity measure proposed in the Iterative Detection Estimation (IDE) algorithm. Whereas IDE uses an arbitrary decreasing sequence of thresholds, the proposed algorithm uses a sequence derived from hypothesis testing; the Bayesian hypothesis testing framework therefore yields an improved version of IDE. Simulations show that the hard version of the proposed algorithm achieves among the best estimation accuracy of the algorithms implemented in our simulations, while incurring the highest computational cost in terms of simulation time.
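The sketch below illustrates a generic IDE-style detection-estimation loop of the kind discussed above; the geometric threshold schedule is a placeholder for the hypothesis-testing-derived sequence, and the function and parameter names are assumptions rather than the BHTA update itself.

```python
import numpy as np

# Illustrative iterative detection-estimation loop: atoms are detected by
# thresholding a correlation-based activity measure, then the coefficients
# on the detected support are re-estimated by least squares. The decreasing
# threshold sequence here is a simple geometric schedule.

def ide_sparse(A, y, n_iter=20, thr0=None, decay=0.8):
    m, n = A.shape
    x = np.zeros(n)
    thr = thr0 if thr0 is not None else np.max(np.abs(A.T @ y))
    for _ in range(n_iter):
        # Detection: activity measure from correlations with the residual
        c = A.T @ (y - A @ x) + x
        support = np.abs(c) > thr
        # Estimation: least-squares fit restricted to the detected support
        x = np.zeros(n)
        if support.any():
            x[support], *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        thr *= decay
    return x
```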
We consider sequential hypothesis testing between two quantum states using adaptive and non-adaptive strategies. In this setting, samples of an unknown state are requested sequentially and a decision to either continue or to accept one of the two hypotheses is made after each test. Under the constraint that the number of samples is bounded, either in expectation or with high probability, we exhibit adaptive strategies that minimize both types of misidentification errors. Namely, we show that these errors decrease exponentially (in the stopping time) with decay rates given by the measured relative entropies between the two states. Moreover, if we allow joint measurements on multiple samples, the rates are increased to the respective quantum relative entropies. We also fully characterize the achievable error exponents for non-adaptive strategies and provide numerical evidence showing that adaptive measurements are necessary to achieve our bounds under some additional assumptions.
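A schematic worked form of the stated exponential decay is given below; the notation and the pairing of error types with exponents are assumptions made here for illustration, not quoted from the paper.

```latex
% rho: null-hypothesis state, sigma: alternative; n: the (bounded) stopping time;
% alpha_n, beta_n: the two misidentification errors.
\begin{align*}
  \alpha_n &\approx e^{-n\, D_M(\sigma\|\rho)}, &
  \beta_n  &\approx e^{-n\, D_M(\rho\|\sigma)} && \text{(adaptive, per-sample measurements)},\\
  \alpha_n &\approx e^{-n\, D(\sigma\|\rho)}, &
  \beta_n  &\approx e^{-n\, D(\rho\|\sigma)} && \text{(joint measurements on multiple samples)},
\end{align*}
% where D_M denotes the measured relative entropy and D the quantum relative entropy.
```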
The problem of verifying whether a multi-component system has anomalies or not is addressed. Each component can be probed over time in a data-driven manner to obtain noisy observations that indicate whether the selected component is anomalous or not. The aim is to minimize the probability of incorrectly declaring the system to be free of anomalies while ensuring that the probability of correctly declaring it to be safe is sufficiently large. This problem is modeled as an active hypothesis testing problem in the Neyman-Pearson setting. Component-selection and inference strategies are designed and analyzed in the non-asymptotic regime. For a specific class of homogeneous problems, stronger (with respect to prior work) non-asymptotic converse and achievability bounds are provided.
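For illustration only, the sketch below implements a generic component-selection and inference loop of the kind this problem setting calls for; the flip-noise observation model, the least-resolved-component selection rule, the threshold stopping rule, and all parameter values are assumptions and not the strategies analyzed in the paper.

```python
import numpy as np

# Each probe of component k returns a noisy binary reading with flip
# probability p. Per-component log-likelihood ratios (anomalous vs. normal)
# are accumulated; the least-resolved component is probed next, and the
# system is declared anomalous/safe once every component's LLR clears a
# threshold (or the probing budget is exhausted).

def probe(component_is_anomalous, p=0.2, rng=None):
    rng = rng or np.random.default_rng()
    reading = component_is_anomalous
    if rng.random() < p:                         # observation flipped with probability p
        reading = not reading
    return reading

def verify_system(truth, p=0.2, threshold=5.0, max_probes=500, rng=None):
    rng = rng or np.random.default_rng()
    llr = np.zeros(len(truth))                   # log P(obs | anomalous) - log P(obs | normal)
    for _ in range(max_probes):
        k = int(np.argmin(np.abs(llr)))          # probe the least-resolved component
        obs = probe(truth[k], p, rng)
        llr[k] += np.log((1 - p) / p) if obs else np.log(p / (1 - p))
        if np.all(np.abs(llr) >= threshold):     # every component resolved
            break
    return bool(np.any(llr > 0))                 # True = system declared anomalous

# Example: verify_system([False, True, False]) should usually return True.
```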
Reliable models of the thermodynamic properties of materials are critical for industrially relevant applications that require a good understanding of equilibrium phase diagrams, thermal and chemical transport, and microstructure evolution. The goal of thermodynamic models is to capture data from both experimental and computational studies and then make reliable predictions when extrapolating to new regions of parameter space. These predictions will be impacted by artifacts present in real data sets, such as outliers, systematic errors, and unreliable or missing uncertainty bounds. Such issues increase the probability of the thermodynamic model producing erroneous predictions. We present a Bayesian framework for the selection, calibration, and uncertainty quantification of thermodynamic property models. The modular framework addresses numerous concerns regarding thermodynamic models, including thermodynamic consistency and robustness to outliers and systematic errors, through hyperparameter weightings and robust likelihood and prior distribution choices. Furthermore, the framework's inherent transparency (e.g., our choice of probability functions and associated parameters) enables insights into the complex process of thermodynamic assessment. We introduce these concepts through examples where the true property model is known. In addition, we demonstrate the utility of the framework through the creation of a property model from a large set of experimental specific heat and enthalpy measurements of hafnium metal from 0 to 4900 K.
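A minimal sketch of the kind of robust calibration described above is given below; the heat-capacity model form, the Student-t likelihood, the prior, and the synthetic data are illustrative assumptions, not the paper's framework. The heavy-tailed Student-t likelihood is one standard way to down-weight outliers relative to a Gaussian and stands in here for the robust likelihood choices mentioned above.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import t as student_t

# Robust MAP calibration of a simple heat-capacity model
# Cp(T) = a + b*T + c/T**2 against noisy measurements.

def cp_model(T, a, b, c):
    return a + b * T + c / T**2

def neg_log_posterior(theta, T, y, sigma, nu=4.0):
    a, b, c = theta
    resid = (y - cp_model(T, a, b, c)) / sigma
    loglike = np.sum(student_t.logpdf(resid, df=nu) - np.log(sigma))  # heavy-tailed likelihood
    logprior = -0.5 * np.sum(np.asarray(theta) ** 2 / 1e6)            # weak Gaussian prior
    return -(loglike + logprior)

# Usage on synthetic data (illustrative numbers only):
T = np.linspace(300.0, 2000.0, 40)
y = cp_model(T, 25.0, 0.005, -1e5) + np.random.default_rng(0).normal(0.0, 0.5, T.size)
sigma = np.full(T.size, 0.5)
fit = minimize(neg_log_posterior, x0=[20.0, 0.01, 0.0],
               args=(T, y, sigma), method="Nelder-Mead")
print(fit.x)   # MAP estimate of (a, b, c)
```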