
Minimization of entropy functionals

Posted by Christian Leonard
Publication date: 2007
Research field: Information engineering
Paper language: English
Author: Christian Leonard





Entropy functionals (i.e., convex integral functionals) and extensions of these functionals are minimized on convex sets. This paper aims to reduce the assumptions on the constraint set as far as possible. Dual equalities and characterizations of the minimizers are obtained under weak constraint qualifications.
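
For orientation, a standard instance of such a problem (our illustration, not taken from the paper) is the minimization of relative entropy with respect to a reference measure $R$ under finitely many moment constraints, with test functions $\theta_i$ and levels $c_i$ assumed here for concreteness:

\[ \inf\Big\{ \int \log\frac{dQ}{dR}\, dQ \;:\; Q \ll R,\ \int \theta_i\, dQ = c_i,\ i = 1,\dots,k \Big\}. \]

Under a suitable constraint qualification the dual equality holds and any minimizer has the Gibbs form

\[ \frac{dQ^\star}{dR} = \exp\Big( \sum_{i=1}^{k} \lambda_i \theta_i - \Lambda(\lambda) \Big), \qquad \Lambda(\lambda) = \log \int \exp\Big( \sum_{i=1}^{k} \lambda_i \theta_i \Big)\, dR, \]

where $\lambda$ solves the dual problem. Weakening the qualifications under which such dual equalities and characterizations hold is the question the abstract addresses.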




Read also

We survey optimization problems that involve the cardinality of variable vectors in constraints or the objective function. We provide a unified viewpoint on the general problem classes and models, and give concrete examples from diverse application fields such as signal and image processing, portfolio selection, or machine learning. The paper discusses general-purpose modeling techniques and broadly applicable as well as problem-specific exact and heuristic solution approaches. While our perspective is that of mathematical optimization, a main goal of this work is to reach out to and build bridges between the different communities in which cardinality optimization problems are frequently encountered. In particular, we highlight that modern mixed-integer programming, which is often regarded as impractical due to commonly unsatisfactory behavior of black-box solvers applied to generic problem formulations, can in fact produce provably high-quality or even optimal solutions for cardinality optimization problems, even in large-scale real-world settings. Achieving such performance typically draws on the merits of problem-specific knowledge that may stem from different fields of application and, e.g., shed light on structural properties of a model or its solutions, or lead to the development of efficient heuristics; we also provide some illustrative examples.
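
As a hedged illustration of the problem class surveyed above (a minimal sketch of our own, not code from the paper): best-subset selection asks to minimize ||Ax - b||^2 subject to a cardinality bound ||x||_0 <= k, and on tiny instances it can be solved exactly by enumerating supports. The function name and toy setup below are ours.

import numpy as np
from itertools import combinations

def best_subset(A, b, k):
    # Exact solver for min ||Ax - b||^2 s.t. ||x||_0 <= k, by enumerating
    # every support of size at most k (only feasible for small n).
    n = A.shape[1]
    best_val, best_x = np.inf, np.zeros(n)
    for size in range(k + 1):
        for support in combinations(range(n), size):
            x = np.zeros(n)
            if support:
                cols = list(support)
                # Unconstrained least squares restricted to the chosen support.
                coef, *_ = np.linalg.lstsq(A[:, cols], b, rcond=None)
                x[cols] = coef
            val = float(np.sum((A @ x - b) ** 2))
            if val < best_val:
                best_val, best_x = val, x
    return best_x, best_val

The exponential enumeration is what mixed-integer formulations with binary indicator variables avoid exploring exhaustively; branch-and-bound prunes most supports in practice.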
C. Vallee, C. Zalinescu (2015)
A formula for the sub-differential of the sum of a series of convex functions defined on a Banach space was provided by X. Y. Zheng in 1998. In this paper, besides a slight extension of Zheng's results to locally convex spaces, we provide a formula for the conjugate of a countable sum of convex functions. We then use these results to calculate the sub-differentials and the conjugates in two situations related to entropy minimization, and we study a concrete example met in Statistical Physics.
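
For background (a classical statement from convex analysis, not the paper's countable-sum formula): for two proper convex lower semicontinuous functions $f_1, f_2$ on a Banach space, if $f_1$ is continuous at some point of the domain of $f_2$, then

\[ (f_1 + f_2)^*(x^*) = \min_{x_1^* + x_2^* = x^*} \big( f_1^*(x_1^*) + f_2^*(x_2^*) \big), \]

i.e., the conjugate of the sum is the exact infimal convolution of the conjugates. The paper's contribution extends this type of formula from finite to countable sums, under suitable qualifications.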
We propose a new viewpoint on variational mean-field games with diffusion and quadratic Hamiltonian. We show the equivalence of such mean-field games with a relative entropy minimization at the level of probabilities on curves. We also address the time-discretization of such problems, establish $\Gamma$-convergence results as the time step vanishes, and propose an efficient algorithm relying on this entropic interpretation as well as on the Sinkhorn scaling algorithm.
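
The Sinkhorn scaling step mentioned above is standard; here is a minimal sketch (our own, with the regularization eps and iteration count chosen arbitrarily), which alternately rescales the rows and columns of the Gibbs kernel to match the prescribed marginals:

import numpy as np

def sinkhorn(C, mu, nu, eps=0.1, n_iter=500):
    # Entropic optimal transport between marginals mu and nu for cost C.
    K = np.exp(-C / eps)                 # Gibbs kernel
    u = np.ones_like(mu)
    v = np.ones_like(nu)
    for _ in range(n_iter):
        v = nu / (K.T @ u)               # enforce column marginals
        u = mu / (K @ v)                 # enforce row marginals
    return u[:, None] * K * v[None, :]   # entropic transport plan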
We propose Shotgun, a parallel coordinate descent algorithm for minimizing L1-regularized losses. Though coordinate descent seems inherently sequential, we prove convergence bounds for Shotgun which predict linear speedups, up to a problem-dependent limit. We present a comprehensive empirical study of Shotgun for Lasso and sparse logistic regression. Our theoretical predictions on the potential for parallelism closely match behavior on real data. Shotgun outperforms other published solvers on a range of large problems, proving to be one of the most scalable algorithms for L1.
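
For comparison, the sequential coordinate-descent baseline that Shotgun parallelizes can be sketched as follows (our own minimal version, not the paper's code; it assumes the columns of A are nonzero):

import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(A, b, lam, n_epochs=100):
    # Minimizes 0.5 * ||Ax - b||^2 + lam * ||x||_1 one coordinate at a time.
    n = A.shape[1]
    x = np.zeros(n)
    r = b.astype(float).copy()           # residual b - A @ x (x starts at 0)
    col_sq = (A ** 2).sum(axis=0)        # per-coordinate curvature
    for _ in range(n_epochs):
        for j in range(n):
            r += A[:, j] * x[j]          # remove coordinate j from residual
            x[j] = soft_threshold(A[:, j] @ r, lam) / col_sq[j]
            r -= A[:, j] * x[j]          # put the updated coordinate back
    return x

Shotgun's observation is that updating several coordinates of this inner loop in parallel still converges, with a speedup limit governed by the correlation structure of A.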
Feature selection, in the context of machine learning, is the process of separating highly predictive features from those that might be irrelevant or redundant. Information theory has been recognized as a useful concept for this task, as the prediction power stems from the correlation, i.e., the mutual information, between features and labels. Many algorithms for feature selection in the literature have adopted Shannon-entropy-based mutual information. In this paper, we explore the possibility of using Renyi min-entropy instead. In particular, we propose an algorithm based on a notion of conditional Renyi min-entropy that has recently been adopted in the field of security and privacy, and which is closely related to the Bayes error. We prove that in general the two approaches are incomparable, in the sense that we can construct datasets on which the Renyi-based algorithm performs better than the corresponding Shannon-based one, and datasets on which the situation is reversed. In practice, however, on datasets of real data, the Renyi-based algorithm tends to outperform the other one. We have carried out several experiments on the BASEHOCK, SEMEION, and GISETTE datasets, and in all of them we observed that the Renyi-based algorithm gives better results.
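
To make the quantity concrete, here is a small sketch (our own illustration, not the paper's implementation) of the conditional Renyi min-entropy in its Arimoto-style form and its link to the Bayes error, computed from a finite joint distribution:

import numpy as np

def min_entropy(p_y):
    # H_inf(Y) = -log2 max_y p(y): uncertainty of a single best guess.
    return -np.log2(np.max(p_y))

def cond_min_entropy(p_xy):
    # H_inf(Y | X) = -log2 sum_x max_y p(x, y): guess the best y per x.
    return -np.log2(np.sum(np.max(p_xy, axis=1)))

def min_entropy_leakage(p_xy):
    # The gap H_inf(Y) - H_inf(Y | X) measures how much X helps the guess.
    return min_entropy(p_xy.sum(axis=0)) - cond_min_entropy(p_xy)

The Bayes error of predicting y from x is 1 - sum_x max_y p(x, y), i.e. 1 - 2 ** (-cond_min_entropy(p_xy)), which is the relation to the Bayes error mentioned in the abstract.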