
According to the Erd\H{o}s discrepancy conjecture, for any infinite $\pm 1$ sequence, there exists a homogeneous arithmetic progression of unbounded discrepancy. In other words, for any $\pm 1$ sequence $(x_1,x_2,\ldots)$ and any discrepancy bound $C$, there exist integers $m$ and $d$ such that $|\sum_{i=1}^m x_{i \cdot d}| > C$. This is an $80$-year-old open problem, and recent work proved that the conjecture holds for discrepancies up to $2$. Paul Erd\H{o}s also conjectured that this property of unbounded discrepancy holds even in the restricted case of completely multiplicative sequences (CMSs), namely sequences $(x_1,x_2,\ldots)$ where $x_{a \cdot b} = x_a \cdot x_b$ for any $a,b \geq 1$. The longest CMS with discrepancy $2$ has been proven to be of size $246$. In this paper, we prove that any completely multiplicative sequence of size $127,646$ or more has discrepancy at least $4$, proving the Erd\H{o}s discrepancy conjecture for CMSs for discrepancies up to $3$. In addition, we prove that this bound is tight, which increases the size of the longest known sequence of discrepancy $3$ from $17,000$ to $127,645$. Finally, we provide inductive construction rules as well as streamlining methods to improve the lower bounds for sequences of higher discrepancies.
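The definitions above are easy to make concrete: a completely multiplicative $\pm 1$ sequence is fixed by its values on the primes, and its discrepancy is the largest partial sum $|\sum_{i=1}^m x_{i \cdot d}|$ over all $d$ and $m$ within the known prefix. A minimal sketch (not the paper's search procedure; the sequence-building via a smallest-prime-factor sieve is just one convenient construction):

```python
from math import isqrt

def cms(n, prime_sign):
    # Completely multiplicative ±1 sequence x[1..n]: x[1] = 1, x[p] = prime_sign(p)
    # on primes, extended by x[a*b] = x[a]*x[b] using a smallest-prime-factor sieve.
    spf = list(range(n + 1))
    for p in range(2, isqrt(n) + 1):
        if spf[p] == p:  # p is prime
            for m in range(p * p, n + 1, p):
                if spf[m] == m:
                    spf[m] = p
    x = [0] * (n + 1)
    x[1] = 1
    for k in range(2, n + 1):
        x[k] = prime_sign(spf[k]) * x[k // spf[k]]
    return x

def discrepancy(x):
    # max over d and m of |x_d + x_{2d} + ... + x_{md}| within the known prefix
    n = len(x) - 1
    best = 0
    for d in range(1, n + 1):
        s = 0
        for m in range(d, n + 1, d):
            s += x[m]
            best = max(best, abs(s))
    return best
```

Note that for a CMS, $\sum_{i=1}^m x_{i \cdot d} = x_d \cdot \sum_{i=1}^m x_i$, so the discrepancy of a CMS prefix is simply its largest absolute partial sum. For example, with `prime_sign(p) = -1` for all primes (the Liouville function), the prefix of length $10$ already has discrepancy $2$.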
Many probabilistic inference tasks involve summations over exponentially large sets. Recently, it has been shown that these problems can be reduced to solving a polynomial number of MAP inference queries for a model augmented with randomly generated parity constraints. By exploiting a connection with max-likelihood decoding of binary codes, we show that these optimizations are computationally hard. Inspired by iterative message passing decoding algorithms, we propose an Integer Linear Programming (ILP) formulation for the problem, enhanced with new sparsification techniques to improve decoding performance. By solving the ILP through a sequence of LP relaxations, we get both lower and upper bounds on the partition function, which hold with high probability and are much tighter than those obtained with variational methods.
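The core subroutine being analyzed is MAP inference in a model augmented with random parity (XOR) constraints. A brute-force toy sketch of that query is shown below; the weight function, constraint density, and helper names are invented for illustration, and the paper's contribution is precisely to replace this enumeration with an ILP formulation that scales:

```python
import itertools
import random

def random_parity(n, rng):
    # One random XOR constraint: each variable joins with probability 1/2,
    # and the target bit b is chosen uniformly.
    A = [i for i in range(n) if rng.random() < 0.5]
    return A, rng.randrange(2)

def map_with_parities(weight, n, parities):
    # Brute-force MAP over assignments in {0,1}^n satisfying every parity
    # constraint (A, b), i.e. XOR of x[i] for i in A equals b.
    best = None
    for x in itertools.product([0, 1], repeat=n):
        if all(sum(x[i] for i in A) % 2 == b for A, b in parities):
            w = weight(x)
            if best is None or w > best:
                best = w
    return best
```

For instance, with `weight(x) = x[0] + 2*x[1] + 4*x[2]`, the unconstrained MAP value is 7 at (1,1,1); adding the constraint $x_0 \oplus x_1 = 1$ rules that assignment out and drops the optimum to 6. Aggregating such constrained optima over increasing numbers of random constraints yields the high-probability partition-function bounds the abstract describes.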
We consider the problem of sampling from solutions defined by a set of hard constraints on a combinatorial space. We propose a new sampling technique that, while enforcing a uniform exploration of the search space, leverages the reasoning power of a systematic constraint solver in a black-box scheme. We present a series of challenging domains, such as energy barriers and highly asymmetric spaces, that reveal the difficulties introduced by hard constraints. We demonstrate that standard approaches such as Simulated Annealing and Gibbs Sampling are greatly affected, while our new technique can overcome many of these difficulties. Finally, we show that our sampling scheme naturally defines a new approximate model counting technique, which we empirically show to be very accurate on a range of benchmark problems.
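The difficulty that hard constraints pose for local samplers can be seen in a tiny invented example (not one of the paper's benchmark domains): when the solution set is disconnected under single-variable moves, Gibbs-style sampling can never cross the infeasible "barrier" between solutions, no matter how long it runs.

```python
import itertools
import random

def solutions(n, constraint):
    # brute-force enumeration of the feasible set
    return [x for x in itertools.product([0, 1], repeat=n) if constraint(x)]

def gibbs_single_flip(x0, constraint, steps, rng):
    # Single-site Gibbs over the uniform distribution on solutions:
    # a proposed flip is accepted only if the result is still feasible.
    x = list(x0)
    visited = {tuple(x)}
    for _ in range(steps):
        i = rng.randrange(len(x))
        y = x[:]
        y[i] ^= 1
        if constraint(y):
            x = y
        visited.add(tuple(x))
    return visited

# Hard constraint: all bits equal. The two solutions (0,0,0) and (1,1,1)
# are separated by infeasible states, so every single flip is rejected.
all_equal = lambda x: len(set(x)) == 1
reachable = gibbs_single_flip((0, 0, 0), all_equal, 1000, random.Random(0))
```

Here the chain started at (0,0,0) never reaches (1,1,1), even though the uniform distribution over solutions puts half its mass there. A black-box systematic solver, by contrast, can be asked directly for solutions anywhere in the space, which is the leverage the proposed sampling scheme exploits.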
In this paper we introduce a class of Markov decision processes that arise as a natural model for many renewable resource allocation problems. Upon extending results from the inventory control literature, we prove that they admit a closed-form solution, and we show how to exploit this structure to speed up its computation. We consider the application of the proposed framework to several problems arising in very different domains, and as part of the ongoing effort in the emerging field of Computational Sustainability we discuss in detail its application to the Northern Pacific Halibut marine fishery. Our approach is applied to a model based on real-world data, obtaining a policy with a guaranteed lower bound on the utility function that is structurally very different from the one currently employed.
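The inventory-control connection can be illustrated on a toy resource MDP (an invented deterministic model, not the paper's fishery model or its parameters): stock doubles up to a carrying capacity, harvest yields linear reward, and value iteration recovers a "constant escapement" policy, the analogue of a base-stock rule, harvest nothing below a threshold, harvest down to it above.

```python
# Toy renewable-resource MDP: states 0..S are stock levels, the action is the
# harvest h <= s (reward h), and the remaining stock e = s - h doubles, capped
# at the carrying capacity S. Solved by value iteration with discount beta.
S, beta = 10, 0.9
growth = lambda e: min(S, 2 * e)  # invented deterministic growth dynamics

V = [0.0] * (S + 1)
for _ in range(1000):
    V = [max(h + beta * V[growth(s - h)] for h in range(s + 1))
         for s in range(S + 1)]

# Greedy policy w.r.t. the converged value function.
policy = [max(range(s + 1), key=lambda h: h + beta * V[growth(s - h)])
          for s in range(S + 1)]
```

In this toy instance the optimal policy is `h(s) = max(0, s - S/2)`: let the stock grow until escapement $S/2$ suffices to regrow to capacity, then harvest the surplus each period. It is this kind of threshold structure, proved in closed form in the paper's more general stochastic setting, that makes the solution fast to compute.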