
Efficient Möbius Transformations and their applications to Dempster-Shafer Theory: Clarification and implementation

Added by Maxime Chaveroche
Publication date: 2021
Language: English

Dempster-Shafer Theory (DST) generalizes Bayesian probability theory, offering useful additional information, but suffers from a high computational burden. A lot of work has been done to reduce the complexity of the computations used in information fusion with Dempster's rule. The main approaches exploit either the structure of Boolean lattices or the information contained in belief sources. Each has its merits depending on the situation. In this paper, we propose sequences of graphs for the computation of the zeta and Möbius transformations that optimally exploit both the structure of distributive semilattices and the information contained in belief sources. We call them the Efficient Möbius Transformations (EMT). We show that the complexity of the EMT is always lower than the complexity of algorithms that consider the whole lattice, such as the Fast Möbius Transform (FMT), for all DST transformations. We then explain how to use them to fuse two belief sources. More generally, our EMTs apply to any function on any finite distributive lattice, focusing on a meet-closed or join-closed subset. This article extends our work published at the international conference on Scalable Uncertainty Management (SUM). It clarifies that work, brings some minor corrections, and provides implementation details such as data structures and algorithms applied to DST.
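Since the abstract positions the EMT against FMT-style algorithms that sweep the whole Boolean lattice, the following minimal sketch of that baseline may help fix ideas. The bitmask encoding, function names, and dense-array layout are our illustrative choices rather than the authors' implementation; the EMT restricts such passes to graphs built from the focal sets instead of visiting all $2^n$ subsets.

```python
# Minimal sketch of the FMT-style baseline (zeta transform and its
# Mobius inverse on the full subset lattice). Subsets of an n-element
# frame are bitmasks; `mass` maps bitmask -> mass value. Illustrative
# only: the EMT avoids touching all 2^n subsets.

def zeta_transform(mass, n):
    """Belief from mass: bel[A] = sum of mass[B] over all B subset of A.
    O(n * 2^n), i.e. the whole-lattice cost the EMT improves on."""
    bel = [mass.get(A, 0.0) for A in range(1 << n)]
    for i in range(n):                       # one pass per frame element
        for A in range(1 << n):
            if A & (1 << i):                 # if element i is in A,
                bel[A] += bel[A ^ (1 << i)]  # absorb the value of A \ {i}
    return bel

def mobius_transform(bel, n):
    """Inverse transform: recover the mass function from belief."""
    mass = list(bel)
    for i in range(n):
        for A in range(1 << n):
            if A & (1 << i):
                mass[A] -= mass[A ^ (1 << i)]
    return mass

# Frame {a, b, c} (n = 3) with two focal sets:
m = {0b011: 0.6, 0b111: 0.4}          # m({a,b}) = 0.6, m({a,b,c}) = 0.4
bel = zeta_transform(m, 3)
assert abs(bel[0b111] - 1.0) < 1e-12  # belief of the whole frame is 1
assert abs(mobius_transform(bel, 3)[0b011] - 0.6) < 1e-12
```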



Related research

Dempster-Shafer Theory (DST) generalizes Bayesian probability theory, offering useful additional information, but suffers from a much higher computational burden. A lot of work has been done to reduce the time complexity of information fusion with Dempster's rule, which is a pointwise multiplication of two zeta transforms, and optimal general algorithms have been found to obtain the complete definition of these transforms. Yet, it is shown in this paper that the zeta transform and its inverse, the Möbius transform, can be exactly simplified to fit the quantity of information contained in belief functions. Beyond that, this simplification actually works for any function on any partially ordered set. It relies on a new notion that we call the focal point, which constitutes the smallest domain on which both the zeta and Möbius transforms can be defined. We demonstrate the relevance of these general results to DST, not only for reducing the complexity of most transformations between belief representations and their fusion, but also for theoretical purposes. Indeed, we provide a new generalization of the conjunctive decomposition of evidence and formulas uncovering how each decomposition weight is tied to the corresponding mass function.
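As a reading aid for the focal-point idea, here is a hedged sketch under our own encoding; the helper names `focal_points` and `commonality_on_focal_points` are hypothetical, and the naive inner sum is exactly what the paper's graph-ordered computation avoids.

```python
# Hedged sketch of the focal-point idea as we read it from the abstract:
# close the focal sets under intersection (the meet of the subset
# lattice) and evaluate the zeta transform only on that closure instead
# of on all 2^n subsets. Encoding and names are ours, not the paper's.

def focal_points(focal_sets):
    """Closure of the focal sets (bitmasks) under pairwise intersection."""
    points = set(focal_sets)
    frontier = set(focal_sets)
    while frontier:
        new = {a & b for a in frontier for b in points} - points
        points |= new
        frontier = new
    return points

def commonality_on_focal_points(mass):
    """q(A) = sum of m(B) over focal sets B containing A, computed only
    for A in the focal-point closure. The inner sum is naive; the paper
    organizes this computation along graphs to keep it efficient."""
    pts = focal_points(mass.keys())
    return {A: sum(v for B, v in mass.items() if A & B == A) for A in pts}

m = {0b0110: 0.5, 0b0011: 0.3, 0b1111: 0.2}  # focal sets {b,c}, {a,b}, {a,b,c,d}
q = commonality_on_focal_points(m)
# {b} = 0b0010 is a focal point (it is {b,c} & {a,b}) even though
# m({b}) = 0, and every focal set contains it, so q({b}) = 1.
assert abs(q[0b0010] - 1.0) < 1e-12
```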
This article reviews publications on current trends and developments in Dempster-Shafer theory and its various applications in science, engineering, and technology. The review focuses on some specific aspects of the theory. Firstly, it considers research directions whose results are known not only within the scientific and academic community but are also understood by a wide circle of potential designers and developers of advanced engineering solutions and technologies. Secondly, it shows applications of the theory in important areas of human activity such as manufacturing systems, diagnostics of technological processes, materials and products, building and construction, product quality control, and economic and social systems. Particular attention is paid to the current state of research in the domains under consideration; thus, the papers selected and analyzed were, as a rule, published in recent years and present the achievements of modern research on Dempster-Shafer theory and its applications.
In analogy with the regularity lemma of Szemerédi, regularity lemmas for polynomials shown by Green and Tao (Contrib. Discrete Math. 2009) and by Kaufman and Lovett (FOCS 2008) modify a given collection of polynomials $\mathcal{F} = \{P_1, \dots, P_m\}$ to a new collection $\mathcal{F}'$ so that the polynomials in $\mathcal{F}'$ are pseudorandom. These lemmas have various applications, such as (special cases of) Reed-Muller testing and worst-case to average-case reductions for polynomials. However, the transformation from $\mathcal{F}$ to $\mathcal{F}'$ is not algorithmic for either regularity lemma. We define new notions of regularity for polynomials, which are analogous to the above, but which allow for an efficient algorithm to compute the pseudorandom collection $\mathcal{F}'$. In particular, when the field is of high characteristic, in polynomial time we can refine $\mathcal{F}$ into $\mathcal{F}'$ where every nonzero linear combination of polynomials in $\mathcal{F}'$ has desirably small Gowers norm. Using the algorithmic regularity lemmas, we show that if a polynomial $P$ of degree $d$ is within (normalized) Hamming distance $1 - 1/|\mathbb{F}| - \epsilon$ of some unknown polynomial of degree $k$ over a prime field $\mathbb{F}$ (for $k < d < |\mathbb{F}|$), then there is an efficient algorithm for finding a degree-$k$ polynomial $Q$ which is within distance $1 - 1/|\mathbb{F}| - \eta$ of $P$, for some $\eta$ depending on $\epsilon$. This can be thought of as decoding the Reed-Muller code of order $k$ beyond the list decoding radius (finding one close codeword), when the received word $P$ itself is a polynomial of degree $d$ (with $k < d < |\mathbb{F}|$). We also obtain an algorithmic version of the worst-case to average-case reductions by Kaufman and Lovett. They show that if a polynomial of degree $d$ can be weakly approximated by a polynomial of lower degree, then it can be computed exactly using a collection of polynomials of degree at most $d-1$. We give an efficient (randomized) algorithm to find this collection.
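To make the abstract's central quantity concrete, here is a brute-force sketch of the Gowers $U^k$ norm of a polynomial phase over $\mathbb{F}_p$, following the standard multiplicative-derivative formula. The function name and exact enumeration are our illustrative choices, viable only for toy parameters; the cited algorithms of course never enumerate the space.

```python
# Brute-force Gowers U^k norm of the phase g(x) = e(P(x)/p) of a
# polynomial P: F_p^n -> F_p, by averaging multiplicative derivatives
# over all base points x and directions h_1, ..., h_k. Exponential-time
# illustration for toy parameters only.

import itertools
import cmath

def gowers_uk_norm(P, p, n, k):
    """||e(P/p)||_{U^k} via its defining average; P maps tuples to F_p."""
    e = lambda a: cmath.exp(2j * cmath.pi * a / p)
    pts = list(itertools.product(range(p), repeat=n))
    total = 0.0
    for x in pts:
        for hs in itertools.product(pts, repeat=k):
            prod = 1 + 0j
            for S in itertools.product((0, 1), repeat=k):  # cube vertices
                y = tuple((xi + sum(s * h[i] for s, h in zip(S, hs))) % p
                          for i, xi in enumerate(x))
                val = e(P(y))
                prod *= val.conjugate() if sum(S) % 2 else val
            total += prod.real
    return (total / p ** (n * (k + 1))) ** (1.0 / 2 ** k)

# Degree < k gives norm 1 (the k-th derivatives vanish); the quadratic
# x0*x1 over F_2 is as pseudorandom as degree 2 gets: U^2 norm 1/sqrt(2).
assert abs(gowers_uk_norm(lambda x: x[0], 2, 2, 2) - 1.0) < 1e-9
assert abs(gowers_uk_norm(lambda x: x[0] * x[1], 2, 2, 2) - 2 ** -0.5) < 1e-9
```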
Equality and disjointness are two of the most studied problems in communication complexity. They have been studied for both classical and quantum communication and for various models and modes of communication. Buhrman et al. [Buh98] proved that the exact quantum communication complexity for a promise version of the equality problem is $\mathbf{O}(\log n)$ while the classical deterministic communication complexity is $n+1$ for two-way communication, which was the first impressively large (exponential) gap between quantum and classical (deterministic and probabilistic) communication complexity. If an error is tolerated, both the quantum and probabilistic communication complexities for equality are $\mathbf{O}(\log n)$. However, even if an error is tolerated, the gaps between quantum (probabilistic) and deterministic complexity are not larger than quadratic for the disjointness problem. It is therefore interesting to ask whether there are some promise…
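As context for the $\mathbf{O}(\log n)$ probabilistic bound for equality quoted above, the sketch below implements the textbook fingerprinting protocol (send a random small prime together with the input's residue modulo it). It is our illustration of why the bound holds, not a protocol taken from [Buh98].

```python
# Textbook randomized protocol for Equality with O(log n) communication:
# Alice sends a random prime p = O(n^2) and (x mod p); Bob compares with
# (y mod p). If x != y they collide only when p divides |x - y|, which
# has O(n) prime factors, so the one-sided error is O((log n) / n).

import random

def random_prime(lo, hi):
    """Rejection-sample a prime in [lo, hi]; trial division, demo only."""
    def is_prime(m):
        return m > 1 and all(m % d for d in range(2, int(m ** 0.5) + 1))
    while True:
        m = random.randint(lo, hi)
        if is_prime(m):
            return m

def equality_protocol(x_bits, y_bits):
    """Alice's O(log n)-bit message and Bob's verdict for n-bit inputs."""
    n = len(x_bits)
    p = random_prime(2, max(5, n * n))
    fingerprint = int(x_bits, 2) % p     # Alice sends (p, fingerprint)
    verdict = int(y_bits, 2) % p == fingerprint
    return (p, fingerprint), verdict

x = "1011" * 8                           # n = 32 bits
_, same = equality_protocol(x, x)
assert same                              # equal inputs are always accepted
```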
Bin Fu, Pengfei Gu, 2018
We introduce polyhedra circuits. Each polyhedra circuit characterizes a geometric region in $\mathbb{R}^d$. They can be applied to represent a rich class of geometric objects, which include all polyhedra and unions of finitely many polyhedra. They can be used to approximate a large class of $d$-dimensional manifolds in $\mathbb{R}^d$. Barvinok developed polynomial time algorithms to compute the volume of a rational polyhedron, and to count the number of lattice points in a rational polyhedron, in a fixed dimensional space $\mathbb{R}^d$ with fixed $d$. Define $T_V(d, n)$ to be the polynomial time in $n$ to compute the volume of one rational polyhedron, $T_L(d, n)$ to be the polynomial time in $n$ to count the number of lattice points in one rational polyhedron, and $T_I(d, n)$ to be the polynomial time in $n$ to solve integer linear programming, with $d$ a fixed dimensional number and $n$ the total number of linear inequalities from the input polyhedra. We develop algorithms to count the number of lattice points in the geometric region determined by a polyhedra circuit in $O\left(nd \cdot r_d(n) \cdot T_V(d, n)\right)$ time and to compute the volume of the geometric region determined by a polyhedra circuit in $O\left(n \cdot r_d(n) \cdot T_I(d, n) + r_d(n) \cdot T_L(d, n)\right)$ time, where $n$ is the number of input linear inequalities, $d$ is the number of variables, and $r_d(n)$ is the maximal number of regions into which $n$ linear inequalities with $d$ variables partition $\mathbb{R}^d$.
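To illustrate what a polyhedra circuit describes, here is a hedged sketch in which half-spaces are the leaves and union/intersection are the gates, together with a brute-force lattice-point count over a bounding box. The encoding and helper names (`halfspace`, `union`, `intersection`) are ours; the paper's algorithms reach the stated polynomial bounds via Barvinok-type subroutines rather than enumeration.

```python
# Toy model of a polyhedra circuit: leaves are half-spaces a.x <= b and
# internal gates take unions/intersections, so the circuit accepts
# exactly the points of the region it encodes. Lattice points are then
# counted by brute force over a bounding box (demo only).

import itertools

def halfspace(a, b):
    """Leaf gate: membership test for {x : a . x <= b}."""
    return lambda x: sum(ai * xi for ai, xi in zip(a, x)) <= b

def union(*gates):
    return lambda x: any(g(x) for g in gates)

def intersection(*gates):
    return lambda x: all(g(x) for g in gates)

def count_lattice_points(circuit, box, d):
    """Count points of Z^d in [-box, box]^d accepted by the circuit."""
    return sum(circuit(x)
               for x in itertools.product(range(-box, box + 1), repeat=d))

# Union of two axis-aligned squares in R^2: [0,2]^2 and [3,5] x [0,2].
square1 = intersection(halfspace((-1, 0), 0), halfspace((1, 0), 2),
                       halfspace((0, -1), 0), halfspace((0, 1), 2))
square2 = intersection(halfspace((-1, 0), -3), halfspace((1, 0), 5),
                       halfspace((0, -1), 0), halfspace((0, 1), 2))
region = union(square1, square2)
assert count_lattice_points(region, 6, 2) == 18  # 9 lattice points each
```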