
Simultaneous least squares fitter based on the Lagrange multiplier method

Posted by Yinghui Guan
Publication date: 2013
Research field:
Language: English





We developed a least squares fitter for extracting physics parameters from correlated experimental data in high energy physics. The fitter accounts for the correlations among the observables and handles nonlinearity by linearization during the $\chi^2$ minimization, and the method extends naturally to analyses with external inputs. By incorporating Lagrange multipliers, the fitter imposes constraints among the measured observables and the parameters of interest. As a test-bed, we applied the fitter to the study of the $D^{0}$-$\bar{D}^{0}$ mixing parameters based on MC simulation. The test results show that the fitter gives unbiased estimators with correct uncertainties, demonstrating that the approach is reliable.
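A minimal sketch of the underlying formulation, in generic notation that need not match the paper's: with measured observables $y$ and covariance matrix $V$, fitted observable values $\eta$, parameters of interest $\theta$, and constraints $g(\eta,\theta) = 0$, a fitter of this type minimizes

$\chi^2(\eta, \theta, \lambda) = (y - \eta)^{T} V^{-1} (y - \eta) + 2\,\lambda^{T} g(\eta, \theta)$,

where $\lambda$ is the vector of Lagrange multipliers. Setting the derivatives with respect to $\eta$, $\theta$, and $\lambda$ to zero, linearizing $g$ about the current estimate, and iterating the resulting linear system until convergence gives the fitted parameters and their covariance; external inputs can be appended to $y$ and $V$ as additional measured quantities.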




Read also

There are plenty of applications and analyses of time-independent elliptic partial differential equations in the literature hinting at the benefits of overtesting, i.e., using more collocation conditions than basis functions. Overtesting not only reduces the problem size, but is also known to be necessary for the stability and convergence of widely used unsymmetric Kansa-type strong-form collocation methods. We consider kernel-based meshfree methods, namely a method of lines with spatial collocation and overtesting, for solving parabolic partial differential equations on surfaces without parametrization. In this paper, we extend the time-independent convergence theories for overtesting techniques to parabolic equations on smooth and closed surfaces.
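As a much-simplified illustration of overtesting (a 1D elliptic toy problem, not the paper's parabolic surface setting; names and parameter choices are illustrative), the sketch below assembles an unsymmetric Kansa-type collocation system with more collocation conditions than Gaussian kernel basis functions and solves it in the least squares sense:

import numpy as np

# Toy problem: -u''(x) = f(x) on (0, 1), u(0) = u(1) = 0,
# manufactured solution u(x) = sin(pi x), hence f(x) = pi^2 sin(pi x).
u_exact = lambda x: np.sin(np.pi * x)
f = lambda x: np.pi**2 * np.sin(np.pi * x)

eps = 5.0                                   # Gaussian shape parameter (illustrative choice)
centers = np.linspace(0.0, 1.0, 15)         # n = 15 kernel centers / basis functions
x_col = np.linspace(0.0, 1.0, 45)[1:-1]     # m > n interior collocation points (overtesting)

def phi(x, c):
    # Gaussian kernel basis function centered at c.
    return np.exp(-(eps * (x - c))**2)

def phi_xx(x, c):
    # Second derivative of the Gaussian kernel with respect to x.
    r = x - c
    return (4 * eps**4 * r**2 - 2 * eps**2) * np.exp(-(eps * r)**2)

# Assemble an overdetermined collocation system: PDE rows plus 2 boundary rows.
A_pde = -phi_xx(x_col[:, None], centers[None, :])
A_bc = phi(np.array([0.0, 1.0])[:, None], centers[None, :])
A = np.vstack([A_pde, A_bc])
b = np.concatenate([f(x_col), [0.0, 0.0]])

# Overtesting: more conditions than unknowns, so solve in the least squares sense.
coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)

x_test = np.linspace(0.0, 1.0, 200)
u_num = phi(x_test[:, None], centers[None, :]) @ coeffs
print("max error:", np.max(np.abs(u_num - u_exact(x_test))))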
Jiawang Nie, Li Wang, Jane Ye (2020)
This paper studies bilevel polynomial optimization problems. To solve them, we give a method based on polynomial optimization relaxations. Each relaxation is obtained from the Karush-Kuhn-Tucker (KKT) conditions for the lower level optimization and the exchange technique for semi-infinite programming. For the KKT conditions, Lagrange multipliers are represented as polynomial or rational functions. Moment-SOS relaxations are used to solve the polynomial optimization relaxations. Under some general assumptions, we prove the convergence of the algorithm for solving bilevel polynomial optimization problems. Numerical experiments are presented to show the efficiency of the method.
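Schematically, in generic notation that need not match the paper's: a bilevel problem $\min_{x,y} F(x,y)$ subject to $y \in \arg\min_{z} \{ f(x,z) : g_j(x,z) \ge 0 \}$ is relaxed to the single-level program

$\min_{x,y,\lambda} F(x,y)$ s.t. $\nabla_y f(x,y) - \sum_j \lambda_j \nabla_y g_j(x,y) = 0$, $\lambda_j \ge 0$, $g_j(x,y) \ge 0$, $\lambda_j\, g_j(x,y) = 0$,

obtained from the lower-level KKT conditions. When $F$, $f$, and the $g_j$ are polynomials, this is again a polynomial optimization problem; expressing the multipliers $\lambda_j$ as polynomial or rational functions of $(x,y)$ removes them as separate variables, and the resulting relaxations can then be attacked with Moment-SOS hierarchies.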
Wireless sensor networks have recently received much attention due to their broad applicability and ease of installation. This paper is concerned with a distributed state estimation problem, where all sensor nodes are required to reach a consensus estimate. The weighted least squares (WLS) estimator is an appealing way to handle this problem since it does not need any prior distribution information. To this end, we first exploit the equivalence between the information filter and the WLS estimator. Then, we establish an optimization problem based on this relation, coupled with a consensus constraint. Finally, the consensus-based distributed WLS problem is tackled by the alternating direction method of multipliers (ADMM). Numerical simulations together with theoretical analysis verify the convergence and the consensus of the estimates across nodes.
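In generic notation that need not match the paper's: if node $i$ observes $z_i = H_i x + v_i$ with noise covariance $R_i$, the centralized WLS estimate is $\hat{x} = \big(\sum_i H_i^{T} R_i^{-1} H_i\big)^{-1} \sum_i H_i^{T} R_i^{-1} z_i$, which coincides with the information-filter form built from the information matrix $\sum_i H_i^{T} R_i^{-1} H_i$ and information vector $\sum_i H_i^{T} R_i^{-1} z_i$. A distributed variant gives each node a local copy $x_i$, minimizes $\sum_i (z_i - H_i x_i)^{T} R_i^{-1} (z_i - H_i x_i)$ subject to consensus constraints $x_i = x_j$ between neighboring nodes, and handles the constrained problem with ADMM iterations that exchange information only between neighbors.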
We study the Yang-Mills theory and quantum gravity at finite temperature, in the presence of Lagrange multiplier fields. These restrict the path integrals to field configurations that obey the classical equations of motion. This has the effect of doubling the usual one-loop thermal contributions and of suppressing all radiative corrections at higher loop order. Such theories are renormalizable at all temperatures. Some consequences of this result in quantum gravity are briefly examined.
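Schematically, for a generic field $\phi$ with action $S[\phi]$ (illustrative notation, not taken from the paper), such a construction adds a multiplier field $\lambda$ coupled to the classical field equation, $Z = \int \mathcal{D}\phi\, \mathcal{D}\lambda\, \exp\!\big( i S[\phi] + i \int \lambda\, \delta S[\phi]/\delta\phi \big)$; integrating over $\lambda$ produces a functional delta function $\delta(\delta S/\delta\phi)$ that restricts the path integral to classical solutions, which is what leads to the doubled one-loop contribution and the absence of higher-loop corrections described above.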
Tensor decompositions, such as CANDECOMP/PARAFAC (CP), are widely used in a variety of applications, such as chemometrics, signal processing, and machine learning. A broadly used method for computing such decompositions relies on the Alternating Least Squares (ALS) algorithm. When the number of components is small, regardless of its implementation, ALS exhibits low arithmetic intensity, which severely hinders its performance and makes GPU offloading ineffective. We observe that, in practice, experts often have to compute multiple decompositions of the same tensor, each with a small number of components (typically fewer than 20), to ultimately find the best ones to use for the application at hand. In this paper, we illustrate how multiple decompositions of the same tensor can be fused together at the algorithmic level to increase the arithmetic intensity. It therefore becomes possible to make efficient use of GPUs for further speedups; at the same time, the technique is compatible with many enhancements typically used in ALS, such as line search, extrapolation, and non-negativity constraints. We introduce the Concurrent ALS algorithm and library, which offers an interface to Matlab and a mechanism to effectively deal with the issue that decompositions complete at different times. Experimental results on artificial and real datasets demonstrate a shorter time to completion due to increased arithmetic intensity.
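For reference, a minimal single-decomposition CP-ALS iteration for a 3-way tensor is sketched below in NumPy; the matricized-tensor-times-Khatri-Rao products (computed here via einsum) are the low-arithmetic-intensity kernels that Concurrent ALS fuses across decompositions. The code is an illustrative baseline, not the paper's library.

import numpy as np

def cp_als(T, rank, n_iter=50, seed=0):
    # Rank-`rank` CP decomposition of a 3-way tensor T via plain ALS.
    # Returns factor matrices A (I x R), B (J x R), C (K x R) such that
    # T[i, j, k] ~= sum_r A[i, r] * B[j, r] * C[k, r].
    rng = np.random.default_rng(seed)
    I, J, K = T.shape
    A = rng.standard_normal((I, rank))
    B = rng.standard_normal((J, rank))
    C = rng.standard_normal((K, rank))

    for _ in range(n_iter):
        # Each update solves a linear least squares problem for one factor,
        # with the other two fixed (MTTKRP plus a Hadamard product of Gram matrices).
        A = np.einsum('ijk,jr,kr->ir', T, B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = np.einsum('ijk,ir,kr->jr', T, A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = np.einsum('ijk,ir,jr->kr', T, A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C

# Small usage example with a synthetic rank-3 tensor.
rng = np.random.default_rng(1)
A0, B0, C0 = (rng.standard_normal((n, 3)) for n in (10, 11, 12))
T = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
A, B, C = cp_als(T, rank=3)
T_hat = np.einsum('ir,jr,kr->ijk', A, B, C)
print("relative error:", np.linalg.norm(T - T_hat) / np.linalg.norm(T))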