
Since its introduction, Boson Sampling has been the subject of intense study in the world of quantum computing. The task is to sample independently from the set of all $n \times n$ submatrices built from possibly repeated rows of a larger $m \times n$ complex matrix, according to a probability distribution related to the permanents of the submatrices. Experimental systems exploiting quantum photonic effects can in principle perform the task at great speed. In the framework of classical computing, Aaronson and Arkhipov (2011) showed that the exact Boson Sampling problem cannot be solved in polynomial time unless the polynomial hierarchy collapses to the third level. Indeed, for a number of years the fastest known exact classical algorithm ran in $O({m+n-1 \choose n} n 2^n)$ time per sample, emphasising the potential speed advantage of quantum computation. The advantage was reduced by Clifford and Clifford (2018), who gave a significantly faster classical solution taking $O(n 2^n + \operatorname{poly}(m,n))$ time and linear space, matching the complexity of computing the permanent of a single matrix when $m$ is polynomial in $n$. We continue by presenting an algorithm for Boson Sampling whose average-case time complexity is much faster when $m$ is proportional to $n$. In particular, when $m = n$ our algorithm runs in approximately $O(n \cdot 1.69^n)$ time on average. This result further increases the problem size needed to establish quantum computational supremacy via Boson Sampling.
Ulam has defined a history-dependent random sequence of integers by the recursion $X_{n+1} = X_{U(n)} + X_{V(n)}$, $n \geqslant r$, where $U(n)$ and $V(n)$ are independently and uniformly distributed on $\{1,\dots,n\}$, and the initial sequence, $X_1 = x_1, \dots, X_r = x_r$, is fixed. We consider the asymptotic properties of this sequence as $n \to \infty$, showing, for example, that $n^{-2} \sum_{k=1}^n X_k$ converges to a non-degenerate random variable. We also consider the moments and auto-covariance of the process, showing, for example, that when the initial condition is $x_1 = 1$ with $r = 1$, then $\lim_{n\to\infty} n^{-2} E X^2_n = (2\pi)^{-1} \sinh(\pi)$; and that for large $m < n$, we have $(mn)^{-1} E X_m X_n \doteq (3\pi)^{-1} \sinh(\pi)$. We further consider new random adding processes where changes occur independently at discrete times with probability $p$, or where changes occur continuously at jump times of an independent Poisson process. The processes are shown to have properties similar to those of the discrete time process with $p=1$, and to be readily generalised to a wider range of related sequences.
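The recursion is straightforward to simulate directly from its definition. A minimal sketch (the function name and parameters are ours, not from the paper), taking the initial condition $x_1 = 1$ with $r = 1$ by default:

```python
import random

def ulam_sequence(n, x_init=(1,), seed=0):
    """Simulate Ulam's history-dependent recursion
    X_{k+1} = X_{U(k)} + X_{V(k)},
    where U(k) and V(k) are independent and uniform on {1, ..., k},
    starting from the fixed initial values in x_init."""
    rng = random.Random(seed)
    x = list(x_init)                 # X_1, ..., X_r fixed
    while len(x) < n:
        k = len(x)
        u = rng.randrange(k)         # uniform index over the whole history
        v = rng.randrange(k)
        x.append(x[u] + x[v])
    return x

xs = ulam_sequence(1000)
# n^{-2} * sum_{k<=n} X_k should settle near a non-degenerate random limit;
# rerunning with different seeds gives different limiting values.
print(sum(xs) / len(xs) ** 2)
```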
We consider random processes that are history-dependent, in the sense that the distribution of the next step of the process at any time depends upon the entire past history of the process. In general, therefore, the Markov property cannot hold, but it is shown that a suitable sub-class of such processes can be seen as directed Markov processes, subordinate to a random non-Markov directing process whose properties we explore in detail. This enables us to describe the behaviour of the subordinated process of interest. Some examples, including reverting random walks and a reverting branching process, are given.
We study the classical complexity of the exact Boson Sampling problem where the objective is to produce provably correct random samples from a particular quantum mechanical distribution. The computational framework was proposed by Aaronson and Arkhipov in 2011 as an attainable demonstration of `quantum supremacy', that is, a practical quantum computing experiment able to produce output at a speed beyond the reach of classical (that is, non-quantum) computer hardware. Since its introduction Boson Sampling has been the subject of intense international research in the world of quantum computing. On the face of it, the problem is challenging for classical computation. Aaronson and Arkhipov show that exact Boson Sampling is not efficiently solvable by a classical computer unless $P^{\#P} = BPP^{NP}$ and the polynomial hierarchy collapses to the third level. The fastest known exact classical algorithm for the standard Boson Sampling problem takes $O({m+n-1 \choose n} n 2^n)$ time to produce samples for a system with input size $n$ and $m$ output modes, making it infeasible for anything but the smallest values of $n$ and $m$. We give an algorithm that is much faster, running in $O(n 2^n + \operatorname{poly}(m,n))$ time and $O(m)$ additional space. The algorithm is simple to implement and has low constant factor overheads. As a consequence our classical algorithm is able to solve the exact Boson Sampling problem for system sizes far beyond current photonic quantum computing experimentation, thereby significantly reducing the likelihood of achieving near-term quantum supremacy in the context of Boson Sampling.
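The $O(n 2^n)$ term reflects the cost of evaluating a single $n \times n$ permanent. For illustration only (this is not the sampling algorithm itself), a minimal sketch of Ryser's inclusion-exclusion formula for the permanent; as written it costs $O(2^n n^2)$, and a Gray-code ordering of the subsets brings it down to $O(2^n n)$:

```python
from itertools import combinations

def permanent_ryser(a):
    """Permanent of an n x n matrix by Ryser's formula:
    per(A) = (-1)^n * sum over nonempty column subsets S of
             (-1)^{|S|} * prod_i (sum_{j in S} a[i][j])."""
    n = len(a)
    total = 0.0
    for r in range(1, n + 1):
        for cols in combinations(range(n), r):
            prod = 1.0
            for row in a:
                prod *= sum(row[j] for j in cols)
            total += (-1) ** r * prod
    return (-1) ** n * total

print(permanent_ryser([[1.0, 2.0], [3.0, 4.0]]))  # 10.0 (= 1*4 + 2*3)
```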
We consider a class of pattern matching problems where a normalising transformation is applied at every alignment. Normalised pattern matching plays a key role in fields as diverse as image processing and musical information processing where application specific transformations are often applied to the input. By considering the class of polynomial transformations of the input, we provide fast algorithms and the first lower bounds for both new and old problems. Given a pattern of length m and a longer text of length n where both are assumed to contain integer values only, we first show O(n log m) time algorithms for pattern matching under linear transformations even when wildcard symbols can occur in the input. We then show how to extend the technique to polynomial transformations of arbitrary degree. Next we consider the problem of finding the minimum Hamming distance under polynomial transformation. We show that, for any epsilon>0, there cannot exist an O(n m^(1-epsilon)) time algorithm for additive and linear transformations conditional on the hardness of the classic 3SUM problem. Finally, we consider a version of the Hamming distance problem under additive transformations with a bound k on the maximum distance that need be reported. We give a deterministic O(nk log k) time solution which we then improve by careful use of randomisation to O(n sqrt(k log k) log n) time for sufficiently small k. Our randomised solution outputs the correct answer at every position with high probability.
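As a point of reference, a naive quadratic-time baseline for matching under linear transformations can be written directly from the definition; the abstract's O(n log m) algorithms are convolution-based and substantially more involved. The function name is ours:

```python
def linear_match_positions(text, pattern):
    """Naive O(n*m) baseline for pattern matching under linear
    transformations: report alignments i such that
    text[i+j] = a*pattern[j] + b for all j, for some reals a, b."""
    m = len(pattern)
    # position of a pattern value distinct from pattern[0], if any
    j1 = next((j for j in range(m) if pattern[j] != pattern[0]), None)
    hits = []
    for i in range(len(text) - m + 1):
        w = text[i:i + m]
        if j1 is None:
            # constant pattern: a*c + b is a single value, so the
            # window itself must be constant to match
            if all(t == w[0] for t in w):
                hits.append(i)
            continue
        # fit a and b from two positions with distinct pattern values
        a = (w[j1] - w[0]) / (pattern[j1] - pattern[0])
        b = w[0] - a * pattern[0]
        if all(abs(a * p + b - t) < 1e-9 for p, t in zip(pattern, w)):
            hits.append(i)
    return hits

# alignments 0 and 1 match via x -> 2x + 3 and x -> 2x + 5 respectively
print(linear_match_positions([5, 7, 9, 11, 1], [1, 2, 3]))  # [0, 1]
```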
This paper considers the problem of cardinality estimation in data stream applications. We present a statistical analysis of probabilistic counting algorithms, focusing on two techniques that use pseudo-random variates to form low-dimensional data sketches. We apply conventional statistical methods to compare probabilistic algorithms based on storing either selected order statistics, or random projections. We derive estimators of the cardinality in both cases, and show that the maximal-term estimator is recursively computable and has exponentially decreasing error bounds. Furthermore, we show that the estimators have comparable asymptotic efficiency, and explain this result by demonstrating an unexpected connection between the two approaches.
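One simple instance of the order-statistics approach can be sketched as follows. Items are hashed to pseudo-uniform points in (0, 1); if x is the k-th smallest of n uniform hashes, then (k - 1)/x is an unbiased estimator of n. The paper's estimators differ in detail, and for clarity this sketch sorts all hashes rather than maintaining a bounded heap as a true streaming algorithm would:

```python
import hashlib

def uniform_hash(item):
    """Map an item to a pseudo-random point in (0, 1) via a fixed hash."""
    h = hashlib.sha256(str(item).encode()).digest()
    return (int.from_bytes(h[:8], "big") + 1) / 2 ** 64

def estimate_cardinality(stream, k=64):
    """Order-statistics sketch: estimate the number of distinct items
    as (k - 1) / x, where x is the k-th smallest hash value seen.
    (A streaming version would keep only the k smallest in a heap.)"""
    smallest = sorted({uniform_hash(x) for x in stream})[:k]
    if len(smallest) < k:
        return len(smallest)       # fewer than k distinct items: exact
    return (k - 1) / smallest[-1]

stream = [i % 10000 for i in range(100000)]    # 10,000 distinct values
print(estimate_cardinality(stream))            # roughly 10000 (error ~ 1/sqrt(k))
```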
Peter Clifford (2009)
We consider the problem of approximating the empirical Shannon entropy of a high-frequency data stream under the relaxed strict-turnstile model, when space limitations make exact computation infeasible. An equivalent measure of entropy is the Rényi entropy that depends on a constant alpha. This quantity can be estimated efficiently and unbiasedly from a low-dimensional synopsis called an alpha-stable data sketch via the method of compressed counting. An approximation to the Shannon entropy can be obtained from the Rényi entropy by taking alpha sufficiently close to 1. However, practical guidelines for parameter calibration with respect to alpha are lacking. We avoid this problem by showing that the random variables used in estimating the Rényi entropy can be transformed to have a proper distributional limit as alpha approaches 1: the maximally skewed, strictly stable distribution with alpha = 1 defined on the entire real line. We propose a family of asymptotically unbiased log-mean estimators of the Shannon entropy, indexed by a constant zeta > 0, that can be computed in a single-pass algorithm to provide an additive approximation. We recommend the log-mean estimator with zeta = 1 that has exponentially decreasing tail bounds on the error probability, asymptotic relative efficiency of 0.932, and near-optimal computational complexity.
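The limiting step, taking alpha close to 1, can be illustrated numerically. This sketch works with exact probabilities rather than the data-sketch setting of the paper, purely to show the Rényi-to-Shannon convergence:

```python
import math

def shannon_entropy(p):
    """Shannon entropy H = -sum_i p_i log p_i (natural log)."""
    return -sum(q * math.log(q) for q in p if q > 0)

def renyi_entropy(p, alpha):
    """Renyi entropy H_alpha = (1 - alpha)^{-1} log(sum_i p_i^alpha);
    H_alpha -> H (Shannon) as alpha -> 1."""
    return math.log(sum(q ** alpha for q in p)) / (1 - alpha)

p = [0.5, 0.25, 0.125, 0.125]
for alpha in (2.0, 1.1, 1.01, 1.001):
    print(alpha, renyi_entropy(p, alpha))   # approaches the Shannon value
print("Shannon:", shannon_entropy(p))
```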
In recent years, large high-dimensional data sets have become commonplace in a wide range of applications in science and commerce. Techniques for dimension reduction are of primary concern in statistical analysis. Projection methods play an important role. We investigate the use of projection algorithms that exploit properties of the alpha-stable distributions. We show that l_{alpha} distances and quasi-distances can be recovered from random projections with full statistical efficiency by L-estimation. The computational requirements of our algorithm are modest; after a once-and-for-all calculation to determine an array of length k, the algorithm runs in O(k) time for each distance, where k is the reduced dimension of the projection.
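For alpha = 1, a simple median-based variant conveys the idea: projecting the difference vector with i.i.d. standard Cauchy entries yields a Cauchy variate whose scale parameter is the l_1 distance, and the median of the absolute projections recovers that scale. The abstract's L-estimation attains full statistical efficiency; the median estimator below is a simpler, less efficient stand-in, with names of our choosing:

```python
import math
import random
import statistics

def l1_distance_sketch(x, y, k=400, seed=1):
    """Estimate ||x - y||_1 from k Cauchy random projections.
    Each projection c . (x - y), with c having i.i.d. standard Cauchy
    entries, is Cauchy with scale ||x - y||_1, and the median of
    |Cauchy(0, s)| equals s."""
    rng = random.Random(seed)
    d = [a - b for a, b in zip(x, y)]
    projs = []
    for _ in range(k):
        # standard Cauchy variate via inverse CDF: tan(pi * (U - 1/2))
        proj = sum(math.tan(math.pi * (rng.random() - 0.5)) * dj for dj in d)
        projs.append(abs(proj))
    return statistics.median(projs)

# close to the true l_1 distance, which is 10
print(l1_distance_sketch([1.0, 2.0, 3.0, 4.0], [0.0, 0.0, 0.0, 0.0]))
```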
Peter Clifford (2003)
Motivated by Stanley's results in \cite{St02}, we generalize the rank of a partition $\lambda$ to the rank of a shifted partition $S(\lambda)$. We show that the number of bars required in a minimal bar tableau of $S(\lambda)$ is $\max(o, e + (\ell(\lambda) \bmod 2))$, where $o$ and $e$ are the number of odd and even rows of $\lambda$. As a consequence we show that the irreducible projective characters of $S_n$ vanish on certain conjugacy classes. Another corollary is a lower bound on the degree of the terms in the expansion of Schur's $Q_{\lambda}$ symmetric functions in terms of the power sum symmetric functions.
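The formula for the minimal number of bars can be transcribed directly; the helper below is a hypothetical illustration of the statement, not code from the paper:

```python
def min_bars(partition):
    """Minimal number of bars in a bar tableau of S(lambda), per the
    formula max(o, e + (l(lambda) mod 2)), where o and e count the
    odd and even rows (parts) of lambda and l(lambda) = o + e."""
    o = sum(1 for part in partition if part % 2 == 1)
    e = sum(1 for part in partition if part % 2 == 0)
    return max(o, e + len(partition) % 2)

print(min_bars((4, 3, 1)))  # 2: o = 2, e = 1, l(lambda) = 3 is odd
```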
We give a basis for the space V spanned by the lowest degree part hat{s}_lambda of the expansion of the Schur symmetric functions s_lambda in terms of power sums, where we define the degree of the power sum p_i to be 1. In particular, the dimension of the subspace V_n spanned by those hat{s}_lambda for which lambda is a partition of n is equal to the number of partitions of n whose parts differ by at least 2. We also show that a symmetric function closely related to hat{s}_lambda has the same coefficients when expanded in terms of power sums or augmented monomial symmetric functions. Proofs are based on the theory of minimal border strip decompositions of Young diagrams.
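The stated dimension is easy to tabulate. A short sketch counting partitions whose parts differ by at least 2 (by the first Rogers-Ramanujan identity, these are equinumerous with partitions into parts congruent to 1 or 4 mod 5); the function names are ours:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def _gap2(n, max_part):
    """Partitions of n into parts differing by >= 2, largest part <= max_part.
    Choosing first (largest) part p leaves n - p with parts <= p - 2."""
    if n == 0:
        return 1
    return sum(_gap2(n - p, p - 2) for p in range(1, min(n, max_part) + 1))

def count_partitions_gap2(n):
    """dim V_n per the abstract: partitions of n with parts differing by >= 2."""
    return _gap2(n, n)

print([count_partitions_gap2(n) for n in range(1, 10)])
# [1, 1, 1, 2, 2, 3, 3, 4, 5], e.g. n = 9 has {9},{8,1},{7,2},{6,3},{5,3,1}
```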