
Error bounds, facial residual functions and applications to the exponential cone

Posted by Bruno F. Lourenço
Publication date: 2020
Language: English





We construct a general framework for deriving error bounds for conic feasibility problems. In particular, our approach allows one to work with cones that fail to be amenable or even to have computable projections, two previously challenging barriers. To that end, we first show how error bounds may be constructed using objects called facial residual functions. Then, we develop several tools to compute facial residual functions even in the absence of closed-form expressions for the projections onto the cones. We demonstrate the use and power of our results by computing tight error bounds for the exponential cone feasibility problem. Interestingly, we discover a natural example for which the tightest error bound is related to the Boltzmann-Shannon entropy. We were also able to produce an example of sets for which a Hölderian error bound holds, but the supremum of the set of admissible exponents is not itself an admissible exponent.
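For orientation, a minimal sketch of the objects involved (using one common coordinate convention for the exponential cone; orderings differ between references): the exponential cone is $K_{\exp} = \mathrm{cl}\,\{(x,y,z) \in \mathbb{R}^3 : z \geq y\,e^{x/y},\ y > 0\}$, and a Hölderian error bound for the feasibility problem over $K_{\exp}$ and an affine subspace $L$ is an estimate of the form $\mathrm{dist}(u, K_{\exp} \cap L) \leq \kappa\,(\mathrm{dist}(u, K_{\exp}) + \mathrm{dist}(u, L))^{\gamma}$ for all $u$ in a bounded set, with constants $\kappa > 0$ and $\gamma \in (0,1]$. The facial residual functions mentioned in the abstract generalize the map $t \mapsto \kappa t^{\gamma}$ in this template, which is how non-Hölderian (for example, entropy-type) bounds can arise.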


Read also

100 - Zhenyu Cai 2020
Noise in quantum hardware remains the biggest roadblock for the implementation of quantum computers. To fight the noise in the practical application of near-term quantum computers, instead of relying on quantum error correction, which requires a large qubit overhead, we turn to quantum error mitigation, in which we make use of extra measurements. Error extrapolation is an error mitigation technique that has been successfully implemented experimentally. Numerical simulation and heuristic arguments have indicated that exponential curves are effective for extrapolation in the large circuit limit with an expected circuit error count around unity. In this article, we extend this to multi-exponential error extrapolation and provide a more rigorous proof of its effectiveness under Pauli noise. This is further validated via our numerical simulations, showing orders of magnitude improvements in the estimation accuracy over single-exponential extrapolation. Moreover, we develop methods to combine error extrapolation with two other error mitigation techniques, quasi-probability and symmetry verification, by exploiting features of these individual techniques. As shown in our simulations, the combined method can achieve low estimation bias with a sampling cost several times smaller than quasi-probability alone, without needing to adjust the hardware error rate as required in canonical error extrapolation.
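As a hedged illustration of the idea (the model form, noise-amplification factors, and synthetic data below are assumptions for demonstration, not the authors' setup), multi-exponential extrapolation fits expectation values measured at several amplified noise levels to a sum of decaying exponentials and evaluates the fit at zero noise:

import numpy as np
from scipy.optimize import curve_fit

def biexp(lam, a1, k1, a2, k2):
    # Sum of two decaying exponentials in the noise-amplification factor lam.
    return a1 * np.exp(-k1 * lam) + a2 * np.exp(-k2 * lam)

# Synthetic "measurements" of an observable at amplified noise levels.
rng = np.random.default_rng(0)
lams = np.linspace(1.0, 3.0, 9)                     # noise amplification factors
vals = biexp(lams, 0.4, 0.2, 0.6, 1.0) + rng.normal(0.0, 0.002, lams.size)

params, _ = curve_fit(biexp, lams, vals, p0=[0.5, 0.3, 0.5, 1.0], maxfev=20000)
print("zero-noise estimate:", biexp(0.0, *params))  # true value here is 1.0

With a single exponential the same data would be fit by $a\,e^{-k\lambda}$; the abstract's point is that allowing several exponential components reduces the estimation bias under Pauli noise.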
155 - Boris Ryabko 2020
We describe and explore so-called linear hash functions and show how they can be used to build error detection and correction codes. The method can be applied to different types of errors (for example, burst errors). When the method is applied to a model in which the number of distorted letters is limited, the obtained estimate of its performance is slightly better than the known Varshamov-Gilbert bound. We also describe a random code whose performance is close to the same bound but whose construction is much simpler. In some cases the obtained methods are simpler and more flexible than the known ones. In particular, the complexity of the obtained error detection code is close to that of the well-known CRC code, but the proposed code, unlike CRC, can detect with certainty errors whose number does not exceed a predetermined limit.
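A hedged toy sketch of the flavor of construction (the matrix A, the sizes, and the GF(2) setup are illustrative assumptions, not the paper's codes): a linear hash over GF(2) is appended to the message, and the receiver re-hashes and compares.

import numpy as np

rng = np.random.default_rng(1)
n, m = 32, 8                                  # message bits, hash (check) bits
A = rng.integers(0, 2, size=(m, n))           # fixed random binary matrix

def linear_hash(bits):
    # Linear map over GF(2): matrix-vector product modulo 2.
    return (A @ bits) % 2

msg = rng.integers(0, 2, size=n)
sent = np.concatenate([msg, linear_hash(msg)])

received = sent.copy()
received[3] ^= 1                              # flip one message bit in transit
valid = np.array_equal(linear_hash(received[:n]), received[n:])
print("error detected" if not valid else "error missed")
# A single flipped message bit is detected exactly when the corresponding column
# of A is nonzero; per the abstract, the proposed constructions detect with
# certainty all error patterns up to a prescribed weight.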
116 - C. Vallee, C. Zalinescu 2015
A formula for the sub-differential of the sum of a series of convex functions defined on a Banach space was provided by X. Y. Zheng in 1998. In this paper, besides a slight extension of Zheng's results to locally convex spaces, we provide a formula for the conjugate of a countable sum of convex functions. We then use these results to calculate the sub-differentials and the conjugates in two situations related to entropy minimization, and we study a concrete example met in Statistical Physics.
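For context, the familiar two-function analogues of the formulas being extended are, under a suitable qualification condition (for instance, one of the functions being continuous at a point of the common domain): $\partial(f+g)(x) = \partial f(x) + \partial g(x)$ and $(f+g)^{*} = f^{*} \,\square\, g^{*}$, where $(f^{*} \,\square\, g^{*})(x^{*}) = \inf_{y^{*}}\{f^{*}(y^{*}) + g^{*}(x^{*}-y^{*})\}$ is the infimal convolution of the conjugates; the paper described above provides series versions of such identities.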
The Generalized Lax Conjecture asks whether every hyperbolicity cone is a section of a semidefinite cone of sufficiently high dimension. We prove that the space of hyperbolicity cones of hyperbolic polynomials of degree $d$ in $n$ variables contains $(n/d)^{\Omega(d)}$ pairwise distant cones in a certain metric, and therefore that any semidefinite representation of such cones must have dimension at least $(n/d)^{\Omega(d)}$ (even if a small approximation is allowed). The proof contains several ingredients of independent interest, including the identification of a large subspace in which the elementary symmetric polynomials lie in the relative interior of the set of hyperbolic polynomials, and quantitati
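For reference (standard definitions, one common sign convention): a homogeneous polynomial $p$ of degree $d$ is hyperbolic with respect to a direction $e$ if $p(e) \neq 0$ and, for every $x$, all roots of the univariate polynomial $t \mapsto p(x + t e)$ are real; the associated hyperbolicity cone is $\Lambda_{+}(p,e) = \{x : \text{all roots of } t \mapsto p(x + t e) \text{ are nonpositive}\}$. Taking $p = \det$ on symmetric matrices with $e = I$ recovers the positive semidefinite cone, and the Generalized Lax Conjecture asks, conversely, whether every hyperbolicity cone is a linear section of some positive semidefinite cone.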
102 - Warren Adams, Akshay Gupte, 2017
Convex hulls of monomials have been widely studied in the literature, and monomial convexifications are implemented in global optimization software for relaxing polynomials. However, there has been no study of the error in the global optimum from such approaches. We give bounds on the worst-case error for convexifying a monomial over subsets of $[0,1]^n$. This implies additive error bounds for relaxing a polynomial optimization problem by convexifying each monomial separately. Our main error bounds depend primarily on the degree of the monomial, making them easy to compute. Since monomial convexification studies depend on the bounds on the associated variables, in the second part we conduct an error analysis for a multilinear monomial over two different types of box constraints. As part of this analysis, we also derive the convex hull of a multilinear monomial over $[-1,1]^n$.
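A hedged concrete instance of such a worst-case gap (a standard fact, included only for illustration): for the bilinear monomial $xy$ over $[0,1]^2$, the convex and concave envelopes are the McCormick bounds $\max\{0,\,x+y-1\}$ and $\min\{x,\,y\}$, and the largest deviation of either envelope from $xy$ is $1/4$, attained at $x = y = 1/2$, where $\min\{x,y\} - xy = 1/2 - 1/4 = 1/4$.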