
A Gaussian fixed point random walk

Added by Yang P. Liu
Publication date: 2021
Research language: English





In this note, we design a discrete random walk on the real line which takes steps $0, \pm 1$ (and one with steps in $\{\pm 1, 2\}$) where at least $96\%$ of the signs are $\pm 1$ in expectation, and which has $\mathcal{N}(0,1)$ as a stationary distribution. As an immediate corollary, we obtain an online version of Banaszczyk's discrepancy result for partial colorings and $\{\pm 1, 2\}$ signings. Additionally, we recover linear-time algorithms for logarithmic bounds for the Komlós conjecture in an oblivious online setting.
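The abstract does not spell out the construction. As a point of comparison only, a standard way to get a $\{0, \pm 1\}$-step chain with a Gaussian-like stationary law is a Metropolis walk on a shifted integer lattice; the sketch below is just that baseline, it does not achieve the paper's $96\%$ guarantee nor an exact $\mathcal{N}(0,1)$ fixed point, and the function name is made up.

```python
import math
import random

def gaussian_lattice_walk(steps, start=0.0, seed=0):
    """Toy Metropolis chain with proposals +1/-1 and realized moves in {0, +1, -1}.

    Restricted to the lattice start + Z, its stationary law is the discrete
    Gaussian proportional to exp(-x^2/2) on that lattice -- a crude stand-in
    for the N(0,1) fixed point in the abstract, not the paper's construction.
    """
    rng = random.Random(seed)
    x = start
    path = [x]
    for _ in range(steps):
        proposal = x + rng.choice((-1.0, 1.0))
        # Metropolis acceptance ratio for the standard Gaussian density.
        accept_prob = min(1.0, math.exp((x * x - proposal * proposal) / 2.0))
        if rng.random() < accept_prob:
            x = proposal          # step +1 or -1
        # otherwise the step is 0 (rejected proposal)
        path.append(x)
    return path

if __name__ == "__main__":
    walk = gaussian_lattice_walk(10_000)
    print("empirical mean close to 0:", sum(walk) / len(walk))
```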




Related research

Motivated by storage applications, we study the following data structure problem: An encoder wishes to store a collection of jointly distributed files $\overline{X}:=(X_1,X_2,\ldots, X_n) \sim \mu$ which are \emph{correlated} ($H_\mu(\overline{X}) \ll \sum_i H_\mu(X_i)$), using as little (expected) memory as possible, such that each individual file $X_i$ can be recovered quickly with few (ideally constant) memory accesses. In the case of independent random files, a dramatic result by Pătrașcu (FOCS'08) and subsequently by Dodis, Pătrașcu and Thorup (STOC'10) shows that it is possible to store $\overline{X}$ using just a \emph{constant} number of extra bits beyond the information-theoretic minimum space, while at the same time decoding each $X_i$ in constant time. However, in the (realistic) case where the files are correlated, much weaker results are known, requiring at least $\Omega(n/\mathrm{poly}\lg n)$ extra bits for constant decoding time, even for simple joint distributions $\mu$. We focus on the natural case of compressing \emph{Markov chains}, i.e., storing a length-$n$ random walk on any (possibly directed) graph $G$. Denoting by $\kappa(G,n)$ the number of length-$n$ walks on $G$, we show that there is a succinct data structure storing a random walk using $\lg_2 \kappa(G,n) + O(\lg n)$ bits of space, such that any vertex along the walk can be decoded in $O(1)$ time on a word-RAM. For the harder task of matching the \emph{point-wise} optimal space of the walk, i.e., the empirical entropy $\sum_{i=1}^{n-1} \lg(\deg(v_i))$, we present a data structure with $O(1)$ extra bits at the price of $O(\lg n)$ decoding time, and show that any improvement on this would lead to an improved solution for the long-standing Dictionary problem. All of our data structures support the \emph{online} version of the problem with constant update and query time.
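For intuition about the space target, here is a toy encoder (not the paper's succinct structure) that stores a walk by writing each step as the index of the next vertex within the previous vertex's sorted neighbor list, using roughly $\lg(\deg(v_i))$ bits per step; it wastes up to one bit per step relative to the empirical entropy and only supports sequential, not constant-time, decoding.

```python
import math
from typing import Dict, List

def encode_walk(adj: Dict[int, List[int]], walk: List[int]) -> str:
    """Encode a walk as a bit string: each step is the index of the next
    vertex in the previous vertex's sorted neighbor list, padded to
    ceil(lg deg) bits.  Toy illustration of the empirical-entropy target."""
    bits = []
    for prev, cur in zip(walk, walk[1:]):
        neighbors = sorted(adj[prev])
        width = max(1, math.ceil(math.log2(len(neighbors))))
        bits.append(format(neighbors.index(cur), f"0{width}b"))
    return "".join(bits)

def decode_walk(adj: Dict[int, List[int]], start: int, bits: str, length: int) -> List[int]:
    """Sequentially decode the walk back from the bit string."""
    walk, pos, cur = [start], 0, start
    for _ in range(length - 1):
        neighbors = sorted(adj[cur])
        width = max(1, math.ceil(math.log2(len(neighbors))))
        cur = neighbors[int(bits[pos:pos + width], 2)]
        pos += width
        walk.append(cur)
    return walk

# tiny usage example on a triangle graph
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
w = [0, 1, 2, 0, 2]
assert decode_walk(adj, w[0], encode_walk(adj, w), len(w)) == w
```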
One of the most studied models of SAT is random SAT. In this model, instances are composed of clauses chosen uniformly at random and independently of each other. This model may be unsatisfactory in that it fails to describe various features of SAT instances arising in real-world applications. Various modifications have been suggested to define models of industrial SAT. Here, we focus mainly on the aspect of community structure. Namely, the set of variables consists of a number of disjoint communities, and clauses tend to consist of variables from the same community. Thus, we suggest a model of random industrial SAT, in which the central generalization with respect to random SAT is the additional community structure. There has been a lot of work on the satisfiability threshold of random $k$-SAT, starting with the calculation of the threshold of $2$-SAT, up to the recent result that the threshold exists for sufficiently large $k$. In this paper, we endeavor to study the satisfiability threshold for the proposed model of random industrial SAT. Our main result is that the threshold in this model tends to be smaller than its counterpart for random SAT. Moreover, under some conditions, this threshold even vanishes.
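The abstract does not fix the exact sampling procedure, so the sketch below is only one plausible generator for a clause set with the described community structure; all parameter names (p_same, n_communities, ...) and the sampling rule are illustrative assumptions, not the paper's definition of the model.

```python
import random

def industrial_sat_instance(n_vars, n_clauses, k=3, n_communities=10,
                            p_same=0.9, seed=0):
    """Sample a k-CNF whose clauses mostly stay inside one community.

    Variables 1..n_vars are split into n_communities equal blocks; each
    clause picks a home community and draws each variable from that block
    with probability p_same (otherwise uniformly from all variables)."""
    rng = random.Random(seed)
    block = n_vars // n_communities
    clauses = []
    for _ in range(n_clauses):
        home = rng.randrange(n_communities)
        chosen, clause = set(), []
        while len(clause) < k:
            if rng.random() < p_same:
                v = home * block + rng.randrange(block) + 1
            else:
                v = rng.randrange(n_vars) + 1
            if v in chosen:
                continue          # keep the k variables of a clause distinct
            chosen.add(v)
            clause.append(v if rng.random() < 0.5 else -v)   # random polarity
        clauses.append(clause)
    return clauses

print(industrial_sat_instance(n_vars=100, n_clauses=5))
```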
Dehua Cheng, Yu Cheng, Yan Liu (2015)
We consider a fundamental algorithmic question in spectral graph theory: Compute a spectral sparsifier of the random-walk matrix polynomial $$L_\alpha(G)=D-\sum_{r=1}^d \alpha_r D(D^{-1}A)^r,$$ where $A$ is the adjacency matrix of a weighted, undirected graph, $D$ is the diagonal matrix of weighted degrees, and $\alpha=(\alpha_1,\ldots,\alpha_d)$ are nonnegative coefficients with $\sum_{r=1}^d\alpha_r=1$. Recall that $D^{-1}A$ is the transition matrix of random walks on the graph. The sparsification of $L_\alpha(G)$ appears to be algorithmically challenging as the matrix power $(D^{-1}A)^r$ is defined by all paths of length $r$, whose precise calculation would be prohibitively expensive. In this paper, we develop the first nearly linear time algorithm for this sparsification problem: For any $G$ with $n$ vertices and $m$ edges, $d$ coefficients $\alpha$, and $\epsilon > 0$, our algorithm runs in time $O(d^2 m \log^2 n/\epsilon^{2})$ to construct a Laplacian matrix $\tilde{L}=D-\tilde{A}$ with $O(n\log n/\epsilon^{2})$ non-zeros such that $\tilde{L}\approx_{\epsilon}L_\alpha(G)$. Matrix polynomials arise in the mathematical analysis of matrix functions as well as in numerical solutions of matrix equations. Our work is particularly motivated by the algorithmic problems of speeding up the classic Newton's method in applications such as computing the inverse square root of the precision matrix of a Gaussian random field, as well as computing the $q$th-root transition (for $q\geq 1$) in a time-reversible Markov model. The key algorithmic step for both applications is the construction of a spectral sparsifier of a constant-degree random-walk matrix polynomial introduced by Newton's method. Our algorithm can also be used to build efficient data structures for effective resistances for multi-step time-reversible Markov models, and we anticipate that it could be useful for other tasks in network analysis.
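To make the object concrete, the following dense reference construction builds $L_\alpha(G)$ directly from $A$ and $\alpha$ as defined above (assuming no isolated vertices); it costs $O(d\,n^3)$ time and says nothing about the nearly linear time sparsification algorithm itself.

```python
import numpy as np

def random_walk_matrix_polynomial(A, alpha):
    """Dense construction of L_alpha(G) = D - sum_r alpha_r * D (D^{-1} A)^r.

    A: weighted symmetric adjacency matrix (no isolated vertices assumed);
    alpha: nonnegative coefficients alpha_1..alpha_d summing to 1."""
    A = np.asarray(A, dtype=float)
    deg = A.sum(axis=1)
    D = np.diag(deg)
    P = A / deg[:, None]          # transition matrix D^{-1} A
    L = D.copy()
    P_power = np.eye(len(A))
    for a_r in alpha:
        P_power = P_power @ P     # (D^{-1} A)^r
        L -= a_r * D @ P_power
    return L

# sanity check on a path graph: alpha = (1,) recovers the ordinary Laplacian D - A
A = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
print(random_walk_matrix_polynomial(A, [1.0]))
```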
U. Haboeck (2008)
We show that the twisted planar random walk - which results from summing up stationary increments rotated by multiples of a fixed angle - is recurrent under diverse assumptions on the increment process. For example, if the increment process is $\alpha$-mixing and of finite second moment, then the twisted random walk is recurrent for every fixed choice of the angle out of a set of full Lebesgue measure, no matter how slowly the mixing coefficients decay.
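A short simulation clarifies what "twisted" means here: the position after $n$ steps is the partial sum $\sum_k \xi_k e^{ik\theta}$ viewed as a point in the plane. The sketch uses i.i.d. $\pm 1$ increments purely for illustration; the abstract allows much more general stationary increment processes.

```python
import cmath
import random

def twisted_walk(n_steps, angle, seed=0):
    """Partial sums of +-1 increments rotated by multiples of a fixed angle.

    Returns the trajectory as complex numbers, i.e. points in the plane."""
    rng = random.Random(seed)
    z, path = 0j, [0j]
    for k in range(n_steps):
        xi = rng.choice((-1, 1))
        z += xi * cmath.exp(1j * k * angle)   # increment rotated by k * angle
        path.append(z)
    return path

print(twisted_walk(5, 0.7))
```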
Many statistical inference problems correspond to recovering the values of a set of hidden variables from sparse observations on them. For instance, in a planted constraint satisfaction problem such as planted 3-SAT, the clauses are sparse observations from which the hidden assignment is to be recovered. In the problem of community detection in a stochastic block model, the community labels are hidden variables that are to be recovered from the edges of the graph. Inspired by ideas from statistical physics, the presence of a stable fixed point for belief propagation has been widely conjectured to characterize the computational tractability of these problems. For community detection in stochastic block models, many of these predictions have been rigorously confirmed. In this work, we consider a general model of statistical inference problems that includes both community detection in stochastic block models, and all planted constraint satisfaction problems as special cases. We carry out the cavity method calculations from statistical physics to compute the regime of parameters where detection and recovery should be algorithmically tractable. At precisely the predicted tractable regime, we give: (i) a general polynomial-time algorithm for the problem of detection: distinguishing an input with a planted signal from one without; (ii) a general polynomial-time algorithm for the problem of recovery: outputting a vector that correlates with the hidden assignment significantly better than a random guess would.
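For concreteness, the sketch below samples from a stochastic block model, the simplest observation model mentioned in the abstract: hidden labels are drawn uniformly and edges appear with probability p_in inside a community and p_out across communities. It only generates the observations and hidden variables; it is not the detection or recovery algorithm.

```python
import itertools
import random

def stochastic_block_model(n, k, p_in, p_out, seed=0):
    """Sample hidden community labels and a graph from a k-community SBM."""
    rng = random.Random(seed)
    labels = [rng.randrange(k) for _ in range(n)]
    edges = []
    for u, v in itertools.combinations(range(n), 2):
        p = p_in if labels[u] == labels[v] else p_out
        if rng.random() < p:
            edges.append((u, v))
    return labels, edges

labels, edges = stochastic_block_model(50, 2, p_in=0.3, p_out=0.05)
print(len(edges), "edges among 50 vertices")
```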
