
Prediction with Restricted Resources and Finite Automata

Posted by: Dr Finn Macleod
Publication date: 2008
Research field: Mathematical Statistics
Paper language: English





We obtain an index of the complexity of a random sequence by allowing the role of the measure in classical probability theory to be played by a function we call the generating mechanism. Typically, this generating mechanism will be a finite automaton. We generate a set of biased sequences by applying a finite-state automaton with a specified number, $m$, of states to the set of all binary sequences. Thus we can index the complexity of our random sequence by the number of states of the automaton. We detail optimal algorithms to predict sequences generated in this way.
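
As a rough illustration of this setup, the Python sketch below uses a toy 3-state automaton as the generating mechanism: the state is driven by the bits emitted so far and fixes the bias of the next bit, and a predictor that knows the automaton tracks the state and guesses the more likely symbol. The automaton, its biases, and the prediction rule are illustrative choices of ours, not the paper's construction.

import random

# Toy m-state generating mechanism (illustrative, not the paper's construction):
# the automaton state is updated by the bits emitted so far, and the current
# state fixes the bias of the next bit.
M = 3                                    # number of states m
TRANS = {(0, 0): 1, (0, 1): 2,           # (state, last emitted bit) -> next state
         (1, 0): 0, (1, 1): 2,
         (2, 0): 1, (2, 1): 0}
BIAS = {0: 0.9, 1: 0.5, 2: 0.2}          # P(next bit = 1 | state)

def generate(n, seed=0):
    rng, state, seq = random.Random(seed), 0, []
    for _ in range(n):
        bit = 1 if rng.random() < BIAS[state] else 0
        seq.append(bit)
        state = TRANS[(state, bit)]
    return seq

def prediction_accuracy(seq):
    """Track the automaton state along the observed prefix and always
    predict the symbol the current state is biased towards."""
    state, correct = 0, 0
    for bit in seq:
        guess = 1 if BIAS[state] >= 0.5 else 0
        correct += (guess == bit)
        state = TRANS[(state, bit)]
    return correct / len(seq)

print(prediction_accuracy(generate(10_000)))   # roughly 0.72 here, vs 0.5 for blind guessing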


Read also

A recently proposed phase-estimation protocol that is based on measuring the parity of a two-mode squeezed-vacuum state at the output of a Mach-Zehnder interferometer shows that Cramér-Rao bound sensitivity can be obtained [P. M. Anisimov, et al., Phys. Rev. Lett. 104, 103602 (2010)]. This sensitivity, however, is expected in the case of an infinite number of parity measurements made on an infinite number of photons. Here we consider the case of a finite number of parity measurements and a finite number of photons, implemented with photon-number-resolving detectors. We use Bayesian analysis to characterize the sensitivity of the phase estimation in this scheme. We have found that our phase estimation becomes biased near 0 or $\pi/2$ phase values. Yet there is an in-between region where the bias becomes negligible. In this region, our phase estimation scheme saturates the Cramér-Rao bound and beats the shot-noise limit.
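
A minimal sketch of the finite-sample Bayesian step, assuming a generic sinusoidal likelihood as a stand-in for the actual two-mode squeezed-vacuum parity signal (the function p_plus, the visibility, and the grid posterior below are illustrative assumptions, not the paper's model):

import numpy as np

# Illustrative Bayesian phase estimation from a finite number of binary
# (parity-like) outcomes.  The likelihood is a generic sinusoidal stand-in,
# NOT the two-mode squeezed-vacuum parity signal analysed in the paper.
def p_plus(phi, visibility=0.8):          # hypothetical P(outcome = +1 | phase)
    return 0.5 * (1.0 + visibility * np.cos(phi))

rng = np.random.default_rng(1)
true_phi, n_meas = 0.7, 200               # finite number of measurements
outcomes = rng.random(n_meas) < p_plus(true_phi)

grid = np.linspace(0.0, np.pi, 1000)      # candidate phases
log_post = np.zeros_like(grid)            # flat prior
for hit in outcomes:
    p = p_plus(grid)
    log_post += np.log(p if hit else 1.0 - p)

post = np.exp(log_post - log_post.max())
post /= post.sum()
print("posterior mean phase:", float((grid * post).sum()))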
The goal of ordinal embedding is to represent items as points in a low-dimensional Euclidean space given a set of constraints in the form of distance comparisons like item $i$ is closer to item $j$ than item $k$. Ordinal constraints like this often come from human judgments. To account for errors and variation in judgments, we consider the noisy situation in which the given constraints are independently corrupted by reversing the correct constraint with some probability. This paper makes several new contributions to this problem. First, we derive prediction error bounds for ordinal embedding with noise by exploiting the fact that the rank of a distance matrix of points in $\mathbb{R}^d$ is at most $d+2$. These bounds characterize how well a learned embedding predicts new comparative judgments. Second, we investigate the special case of a known noise model and study the Maximum Likelihood estimator. Third, knowledge of the noise model enables us to relate prediction errors to embedding accuracy. This relationship is highly non-trivial since we show that the linear map corresponding to distance comparisons is non-invertible, but there exists a nonlinear map that is invertible. Fourth, two new algorithms for ordinal embedding are proposed and evaluated in experiments.
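
A minimal sketch of noisy triplet (ordinal) embedding under the reversal-noise model described above, fit with a logistic triplet loss and plain gradient descent; the loss, step size, and sampling scheme are illustrative choices, not the estimators analysed in the paper:

import numpy as np

# Noisy triplet embedding sketch: comparisons "i is closer to j than to k"
# are flipped with probability p_flip, and an embedding is recovered by
# gradient descent on a logistic triplet loss (illustrative choices only).
rng = np.random.default_rng(0)
n, d, p_flip = 30, 2, 0.1
X_true = rng.normal(size=(n, d))

def closer_sign(X, i, j, k):              # +1 if i is closer to j than to k
    return 1.0 if np.sum((X[i]-X[j])**2) < np.sum((X[i]-X[k])**2) else -1.0

triplets = []
for _ in range(2000):
    i, j, k = rng.choice(n, size=3, replace=False)
    s = closer_sign(X_true, i, j, k)
    if rng.random() < p_flip:              # independent reversal noise
        s = -s
    triplets.append((i, j, k, s))

X = rng.normal(scale=0.1, size=(n, d))     # embedding to be learned
lr = 0.05
for _ in range(20):
    for i, j, k, s in triplets:
        margin = np.sum((X[i]-X[k])**2) - np.sum((X[i]-X[j])**2)
        g = -s / (1.0 + np.exp(np.clip(s * margin, -30, 30)))  # dLoss/dMargin
        X[i] -= lr * g * 2.0 * (X[j] - X[k])
        X[j] -= lr * g * 2.0 * (X[i] - X[j])
        X[k] -= lr * g * 2.0 * (X[k] - X[i])

agree = np.mean([closer_sign(X, i, j, k) == closer_sign(X_true, i, j, k)
                 for i, j, k, _ in triplets])
print("agreement with noiseless comparisons:", agree)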
Discovering the underlying behavior of complex systems is an important topic in many science and engineering disciplines. In this paper, we propose a novel neural network framework, finite difference neural networks (FDNet), to learn partial differential equations from data. Specifically, our proposed finite difference inspired network is designed to learn the underlying governing partial differential equations from trajectory data, and to iteratively estimate the future dynamical behavior using only a few trainable parameters. We illustrate the performance (predictive power) of our framework on the heat equation, with and without noise and/or forcing, and compare our results to the Forward Euler method. Moreover, we show the advantages of using a Hessian-Free Trust Region method to train the network.
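
A minimal sketch in the spirit of a finite-difference-inspired model with few trainable parameters: fit a 3-point stencil to 1D heat-equation trajectories generated by Forward Euler, then compare one-step predictions. This is a least-squares toy of ours, not the FDNet architecture or its Hessian-Free Trust Region training:

import numpy as np

# Learn a 3-point stencil w so that u^{t+1}[i] ~ w . (u^t[i-1], u^t[i], u^t[i+1])
# from 1D heat-equation data generated by Forward Euler (periodic boundary).
nx, nt, kappa, dt = 64, 200, 1.0, 1e-4
dx = 1.0 / nx
x = np.linspace(0.0, 1.0, nx, endpoint=False)
u = np.sin(2 * np.pi * x)                  # initial condition

traj = [u.copy()]
for _ in range(nt):                        # Forward Euler reference solver
    lap = (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2
    u = u + dt * kappa * lap
    traj.append(u.copy())
traj = np.array(traj)                      # shape (nt + 1, nx)

# Features: the 3-point neighbourhood of every grid value at every time step.
A = np.stack([np.roll(traj[:-1], 1, axis=1),
              traj[:-1],
              np.roll(traj[:-1], -1, axis=1)], axis=-1).reshape(-1, 3)
b = traj[1:].reshape(-1)
w, *_ = np.linalg.lstsq(A, b, rcond=None)
print("learned stencil:", w)               # close to [r, 1 - 2r, r] with r = kappa*dt/dx**2

u_pred = (A @ w).reshape(nt, nx)           # one-step predictions with the learned stencil
print("max one-step error:", np.abs(u_pred - traj[1:]).max())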
A practical quantum key distribution (QKD) protocol necessarily runs in finite time and, hence, only a finite amount of communication is exchanged. This is in contrast to most of the standard results on the security of QKD, which only hold in the limit where the number of transmitted signals approaches infinity. Here, we analyze the security of QKD under the realistic assumption that the amount of communication is finite. At the level of the general formalism, we present new results that help simplify the actual implementation of QKD protocols: in particular, we show that symmetrization steps, which are required by certain security proofs (e.g., proofs based on de Finetti's representation theorem), can be omitted in practical implementations. Also, we demonstrate how two-way reconciliation protocols can be taken into account in the security analysis. At the level of numerical estimates, we present the bounds with finite resources for "device-independent security" against collective attacks.
High-dimensional settings, where the data dimension ($d$) far exceeds the number of observations ($n$), are common in many statistical and machine learning applications. Methods based on $\ell_1$-relaxation, such as Lasso, are very popular for sparse recovery in these settings. The Restricted Eigenvalue (RE) condition is among the weakest, and hence the most general, condition in literature imposed on the Gram matrix that guarantees nice statistical properties for the Lasso estimator. It is natural to ask: what families of matrices satisfy the RE condition? Following a line of work in this area, we construct a new broad ensemble of dependent random design matrices that have an explicit RE bound. Our construction starts with a fixed (deterministic) matrix $X \in \mathbb{R}^{n \times d}$ satisfying a simple stable rank condition, and we show that a matrix drawn from the distribution $X \Phi^\top \Phi$, where $\Phi \in \mathbb{R}^{m \times d}$ is a subgaussian random matrix, with high probability, satisfies the RE condition. This construction allows incorporating a fixed matrix that has an easily verifiable condition into the design process, and allows for generation of compressed design matrices that have a lower storage requirement than a standard design matrix. We give two applications of this construction to sparse linear regression problems, including one to a compressed sparse regression setting where the regression algorithm only has access to a compressed representation of a fixed design matrix $X$.
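
A minimal numerical sketch of the construction described above, assuming Gaussian entries for the subgaussian $\Phi$: form $X \Phi^\top \Phi$ and probe a restricted-eigenvalue-type quantity on random sparse directions. This is a crude empirical check of ours, not the paper's RE bound, and it does not enforce the cone constraint in the RE definition.

import numpy as np

# Form the dependent random design X @ Phi.T @ Phi from a fixed X and a
# Gaussian (hence subgaussian) Phi, then probe a restricted-eigenvalue-type
# quantity on random sparse directions.
rng = np.random.default_rng(0)
n, d, m, k = 100, 500, 200, 5

X = rng.standard_normal((n, d))            # fixed design matrix
Phi = rng.standard_normal((m, d)) / np.sqrt(m)
X_tilde = X @ Phi.T @ Phi                  # compressed/dependent random design

def sparse_min_quadratic(design, k, trials=2000):
    """min over random k-sparse v of ||design v||^2 / (rows * ||v||^2)."""
    rows = design.shape[0]
    vals = []
    for _ in range(trials):
        v = np.zeros(design.shape[1])
        support = rng.choice(design.shape[1], size=k, replace=False)
        v[support] = rng.standard_normal(k)
        vals.append(np.sum((design @ v) ** 2) / (rows * np.sum(v ** 2)))
    return min(vals)

print("sparse quadratic form, X          :", sparse_min_quadratic(X, k))
print("sparse quadratic form, X Phi^T Phi:", sparse_min_quadratic(X_tilde, k))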