The fitting of physical models is often done using only a single target observable. However, when multiple targets are considered, the fitting procedure becomes cumbersome, as there is no straightforward way to quantify the robustness of the model across all of the different observables. Here, we illustrate that one can jointly search for the best model for each desired observable through multi-objective optimization. To do so, we construct the Pareto front to determine whether there exists a set of model parameters that can jointly describe multiple, or all, observables. To alleviate the computational cost, the predicted error for each targeted objective is approximated with a Gaussian process model, as is commonly done in the Bayesian optimization framework. We applied this methodology to improve three different models used in the simulation of stationary-state $cis$-$trans$ photoisomerization of retinal in rhodopsin. Optimization was done with respect to different experimental measurements, including emission spectra, peak absorption frequencies of the $cis$ and $trans$ conformers, and the energy storage.
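As an illustration of this kind of surrogate-assisted Pareto search, the sketch below fits one Gaussian process error model per observable and extracts the non-dominated parameter sets from a dense candidate sweep. It is a minimal sketch using scikit-learn and placeholder random data; the parameter and observable counts, and all variable names, are assumptions rather than the actual setup of the retinal model.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def pareto_mask(F):
    """Boolean mask of the non-dominated rows of F (all objectives minimized)."""
    keep = np.ones(F.shape[0], dtype=bool)
    for i in range(F.shape[0]):
        # points dominated by row i: no better in any objective, strictly worse in at least one
        dominated_by_i = np.all(F >= F[i], axis=1) & np.any(F > F[i], axis=1)
        keep[dominated_by_i] = False
    return keep

rng = np.random.default_rng(0)
# Placeholder data standing in for parameter sets already evaluated with the physical model
# and the resulting error against each observable (emission spectrum, absorption peaks, storage).
X_train = rng.uniform(size=(40, 3))      # 3 model parameters (illustrative)
E_train = rng.uniform(size=(40, 4))      # error w.r.t. 4 observables (illustrative)

# One Gaussian process surrogate per targeted objective, as in a Bayesian optimization setting.
gps = [GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X_train, E_train[:, k])
       for k in range(E_train.shape[1])]

X_cand = rng.uniform(size=(5000, 3))     # dense sweep of candidate parameter sets
E_pred = np.column_stack([gp.predict(X_cand) for gp in gps])
pareto_params = X_cand[pareto_mask(E_pred)]   # approximate Pareto-optimal parameter sets
```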
Particle accelerators require constant tuning during operation to meet beam quality, total charge, and particle energy requirements for use in a wide variety of physics, chemistry, and biology experiments. Maximizing the performance of an accelerator facility often necessitates multi-objective optimization, where operators must balance trade-offs between multiple objectives simultaneously, often using a limited number of time-consuming beam observations. Usually, accelerator optimization problems are solved offline, prior to actual operation, with advanced beamline simulations and parallelized optimization methods (NSGA-II, swarm optimization). Unfortunately, it is not feasible to use these methods for online multi-objective optimization, since beam measurements can only be made serially, and these optimization methods require a large number of measurements to converge to a useful solution. Here, we introduce a multi-objective Bayesian optimization scheme, which finds the full Pareto front of an accelerator optimization problem efficiently in a serialized manner and is thus a critical step towards practical online multi-objective optimization in accelerators. This method uses a set of Gaussian process surrogate models, along with a multi-objective acquisition function, which reduces the number of observations needed to converge by at least an order of magnitude over current methods. We demonstrate how this method can be modified to specifically solve optimization challenges posed by the tuning of accelerators. This includes the addition of optimization constraints, objective preferences, and costs related to changing accelerator parameters.
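A minimal sketch of one step of such a serial loop is given below, assuming scikit-learn Gaussian process surrogates and a randomly weighted Chebyshev scalarization of confidence bounds as a stand-in for the multi-objective acquisition function used in the paper; the candidate grid, `measure_beam`, and all variable names are hypothetical.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def propose_next(X_obs, Y_obs, X_cand, rng, beta=2.0):
    """Suggest the next (serial) beam measurement, assuming all objectives are minimized.

    One GP surrogate is fit per objective; a randomly weighted Chebyshev scalarization
    of optimistic lower confidence bounds plays the role of the acquisition function.
    """
    mu, sd = [], []
    for k in range(Y_obs.shape[1]):
        gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
        gp.fit(X_obs, Y_obs[:, k])
        m, s = gp.predict(X_cand, return_std=True)
        mu.append(m)
        sd.append(s)
    mu, sd = np.array(mu).T, np.array(sd).T          # shape (n_candidates, n_objectives)
    lcb = mu - beta * sd                             # optimistic estimate of each objective
    w = rng.dirichlet(np.ones(Y_obs.shape[1]))       # random preference direction
    return X_cand[np.argmin(np.max(w * lcb, axis=1))]

# Serial loop (sketch): measure_beam() is a hypothetical stand-in for the machine interface.
# for _ in range(budget):
#     x_next = propose_next(X_obs, Y_obs, X_cand, rng)
#     y_next = measure_beam(x_next)
#     X_obs, Y_obs = np.vstack([X_obs, x_next]), np.vstack([Y_obs, y_next])
```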
Machine learning techniques have been developed to learn from complete data. When missing values exist in a dataset, the incomplete data must be preprocessed separately, either by removing data points with missing values or by imputation. In this paper, we propose an online approach that handles missing values while a classification model is learnt. To reach this goal, we develop a multi-objective optimization model with two objective functions, one for imputation and one for model selection. We also propose three formulations of the imputation objective function. We use an evolutionary algorithm based on NSGA-II to find the optimal solutions as Pareto solutions. We investigate the reliability and robustness of the proposed model through experiments covering several scenarios for dealing with missing values and classification. We also describe how the proposed model can contribute to medical informatics. We compare the performance of the three formulations via experimental results, and we validate the proposed model by comparison with comparable approaches from the literature.
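The sketch below illustrates the general idea under simplified assumptions: the decision variables are the imputed values themselves, the imputation objective is a simple distance to the observed column means (a stand-in, not necessarily one of the three formulations proposed here), and the second objective is the cross-validated error of a logistic-regression classifier; pymoo's NSGA-II is used as the evolutionary solver. All names and data shapes are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from pymoo.core.problem import ElementwiseProblem
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.optimize import minimize

class ImputeAndClassify(ElementwiseProblem):
    """Bi-objective problem: decision variables are the values imputed into the missing cells."""

    def __init__(self, X_miss, y):
        self.X_miss, self.y = X_miss, y
        self.miss_idx = np.argwhere(np.isnan(X_miss))          # (row, col) of each missing cell
        self.col_mean = np.nanmean(X_miss, axis=0)
        lo, hi = np.nanmin(X_miss, axis=0), np.nanmax(X_miss, axis=0)
        super().__init__(n_var=len(self.miss_idx), n_obj=2,
                         xl=lo[self.miss_idx[:, 1]], xu=hi[self.miss_idx[:, 1]])

    def _evaluate(self, x, out, *args, **kwargs):
        X = self.X_miss.copy()
        X[self.miss_idx[:, 0], self.miss_idx[:, 1]] = x
        # Objective 1: plausibility of the imputed values (illustrative formulation only).
        f_impute = np.mean((x - self.col_mean[self.miss_idx[:, 1]]) ** 2)
        # Objective 2: cross-validated classification error on the completed dataset.
        f_class = 1.0 - cross_val_score(LogisticRegression(max_iter=500), X, self.y, cv=3).mean()
        out["F"] = [f_impute, f_class]

# Hypothetical usage on a dataset X_miss (with NaNs) and labels y:
# res = minimize(ImputeAndClassify(X_miss, y), NSGA2(pop_size=40), ('n_gen', 30), seed=1)
# res.F then holds the Pareto front of (imputation error, classification error).
```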
Beam quality optimization in mammography traditionally considers detection of a target obscured by quantum noise on a homogeneous background. It can be argued that this scheme does not correspond well to the clinical imaging task because real mammographic images contain a complex superposition of anatomical structures, resulting in anatomical noise that may dominate over quantum noise. Using a newly developed spectral mammography system, we measured the correlation and magnitude of the anatomical noise in a set of mammograms. The results from these measurements were used as input to an observer-model optimization that included quantum noise as well as anatomical noise. We found that, within this framework, the detectability of tumors and microcalcifications behaved very differently with respect to beam quality and dose. The results for small microcalcifications were similar to what traditional optimization methods would yield, which is to be expected since quantum noise dominates over anatomical noise at high spatial frequencies. For larger tumors, however, low-frequency anatomical noise was the limiting factor. Because anatomical structure has an energy dependence similar to that of tumor contrast, the optimal x-ray energy was significantly higher and the useful energy region wider than traditional methods suggest. Measurements on a tissue phantom confirmed these theoretical results. Furthermore, since quantum noise constitutes only a small fraction of the noise, the dose could be reduced substantially without sacrificing tumor detectability. Exposure settings used clinically are therefore not necessarily optimal for this imaging task. The impact of these findings on the mammographic imaging task as a whole is, however, at this stage unclear.
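As a rough illustration of this type of observer-model calculation, the sketch below evaluates a prewhitening detectability index against white quantum noise plus a power-law anatomical noise term. All magnitudes, the assumed exponent of 3, and the Gaussian target profile are illustrative assumptions, not the measured values reported here.

```python
import numpy as np

f = np.linspace(0.05, 10.0, 500)          # radial spatial frequency grid [mm^-1] (illustrative)
df = f[1] - f[0]

nps_quantum = np.full_like(f, 1e-4)       # approximately white quantum noise (illustrative level)
nps_anatomical = 1e-3 / f**3              # power-law anatomical noise, exponent ~3 assumed

def task_gaussian(radius_mm, contrast):
    """Squared Fourier amplitude of a Gaussian-profile target of given radius and contrast."""
    return (contrast * 2.0 * np.pi * radius_mm**2
            * np.exp(-2.0 * (np.pi * f * radius_mm) ** 2)) ** 2

def dprime(task_sq, include_anatomical=True):
    """Prewhitening-observer detectability for a radially symmetric task:
    d'^2 = 2*pi * integral of f * |task(f)|^2 / NPS_total(f) df."""
    nps = nps_quantum + (nps_anatomical if include_anatomical else 0.0)
    return np.sqrt(2.0 * np.pi * np.sum(f * task_sq / nps) * df)

# A large low-contrast tumour is limited by low-frequency anatomical noise, whereas a small,
# high-contrast microcalcification is limited by quantum noise:
for radius, contrast in [(5.0, 0.02), (0.1, 0.5)]:
    print(radius,
          dprime(task_gaussian(radius, contrast)),
          dprime(task_gaussian(radius, contrast), include_anatomical=False))
```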
Particle accelerators are invaluable tools for research in the basic and applied sciences, in fields such as materials science, chemistry, the biosciences, particle physics, nuclear physics, and medicine. The design, commissioning, and operation of accelerator facilities is a non-trivial task, due to the large number of control parameters and the complex interplay of several conflicting design goals. We propose to tackle this problem by means of multi-objective optimization algorithms, which also facilitate parallel deployment. In order to compute solutions in a meaningful time frame, a fast and scalable software framework is required. In this paper, we present the implementation of such a general-purpose framework for simulation-based multi-objective optimization methods that allows the automatic investigation of optimal sets of machine parameters. The implementation is based on a master/slave paradigm, employing several masters that govern a set of slaves executing simulations and performing optimization tasks. Using evolutionary algorithms as the optimizer and OPAL as the forward solver, we present validation experiments and results of multi-objective optimization problems in the domain of beam dynamics. The high-charge beam line at the Argonne Wakefield Accelerator Facility was used as the beam dynamics model. The 3D beam size, transverse momentum, and energy spread were optimized.
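A minimal sketch of the master/slave evaluation pattern is shown below, with Python's multiprocessing standing in for the framework's parallel infrastructure and a toy analytic function standing in for an OPAL run; the objective expressions, parameter counts, and function names are placeholders, not the framework's actual interface.

```python
import numpy as np
from multiprocessing import Pool

def run_simulation(params):
    """Stand-in for one forward-solver run (OPAL in the paper). A real slave would launch
    a beam-dynamics simulation and report 3D beam size, transverse momentum, and energy
    spread; toy analytic expressions are returned here instead."""
    params = np.asarray(params)
    beam_size = float(np.sum((params - 0.3) ** 2))
    transverse_momentum = float(np.sum(np.abs(params - 0.7)))
    energy_spread = float(np.var(params))
    return beam_size, transverse_momentum, energy_spread

def evaluate_generation(population, n_workers=4):
    """Master side: distribute one generation of candidate machine settings to the slaves
    and collect the objective vectors for the evolutionary optimizer's selection step."""
    with Pool(processes=n_workers) as pool:
        return np.array(pool.map(run_simulation, population))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    population = rng.uniform(size=(20, 5))      # 20 candidates, 5 control parameters (illustrative)
    F = evaluate_generation(list(population))
    print(F.shape)                              # (20, 3): objective values returned to the master
```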
The photoisomerization reaction of the retinal chromophore in rhodopsin was computationally studied using a two-state two-mode model coupled to thermal baths. Reaction quantum yields at the steady state (10 ps and beyond) were found to be considerably different from their transient values, suggesting a weak correlation between transient and steady-state dynamics in these systems. Significantly, the steady-state quantum yield was highly sensitive to minute changes in the system parameters, while the transient dynamics was nearly unaffected. Correlation of this sensitivity with standard level-spacing statistics of the nonadiabatic vibronic system suggests a possible origin in quantum chaos. The feasibility of experimental observation of this phenomenon and its implications for condensed-phase photochemistry and biological light sensing are discussed.
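As a hedged illustration of the level-spacing analysis referred to above, the sketch below unfolds a spectrum and returns its nearest-neighbour spacings for comparison with the Poisson and Wigner-surmise distributions; the polynomial unfolding and the variable `eigvals` are assumptions about how such an analysis might be set up, not the authors' actual procedure.

```python
import numpy as np

def nearest_neighbour_spacings(eigvals, poly_deg=9):
    """Unfold a spectrum and return its nearest-neighbour level spacings.

    The spectral staircase (integrated density of states) is fit with a smooth polynomial
    so that the mean spacing is rescaled to one; the resulting spacing histogram can be
    compared with the Poisson law exp(-s) (regular dynamics) and the Wigner surmise
    (pi/2) s exp(-pi s^2 / 4) (quantum-chaotic dynamics)."""
    E = np.sort(np.asarray(eigvals, dtype=float))
    staircase = np.arange(1, E.size + 1)
    smooth = np.polyval(np.polyfit(E, staircase, poly_deg), E)
    s = np.diff(smooth)
    return s[s > 0]

# Hypothetical usage with eigenvalues 'eigvals' of the two-state two-mode vibronic Hamiltonian:
# s = nearest_neighbour_spacings(eigvals)
# hist, edges = np.histogram(s, bins=40, range=(0.0, 4.0), density=True)
```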