
The cavity method for analysis of large-scale penalized regression

Posted by Anirvan M. Sengupta
Publication date: 2015
Language: English





Penalized regression methods aim to retrieve reliable predictors among a large set of putative ones from a limited number of measurements. In particular, penalized regression with singular penalty functions is important for sparse reconstruction algorithms. For large-scale problems, these algorithms exhibit sharp phase transition boundaries where sparse retrieval breaks down. Large optimization problems associated with sparse reconstruction have been analyzed in the literature by setting up corresponding statistical mechanical models at a finite temperature. Using the replica method for the mean-field approximation, and subsequently taking a zero-temperature limit, this approach reproduces the algorithmic phase transition boundaries. Unfortunately, the replica trick and the non-trivial zero-temperature limit obscure the underlying reasons for the failure of a sparse reconstruction algorithm, and of penalized regression methods in general. In this paper, we employ the cavity method to give an alternative derivation of the mean-field equations, working directly in the zero-temperature limit. This derivation provides insight into the origin of the different terms in the self-consistency conditions. The cavity method naturally involves a quantity, the average local susceptibility, whose behavior distinguishes the different phases of this system. This susceptibility can be generalized for the analysis of a broader class of sparse reconstruction algorithms.
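As a concrete instance of the setting described above (not code from the paper), here is a minimal sketch of penalized regression with the singular L1 penalty, solved by iterative soft thresholding; all function names and parameter values are illustrative. For the L1 case, the single-coordinate response is 1 on active coordinates and 0 elsewhere, so the average local susceptibility can be crudely proxied by the fraction of nonzero estimates.

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of the L1 penalty; the singular penalty acts
    coordinate-wise in the zero-temperature limit."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_ista(A, y, lam, n_iter=500):
    """Minimize 0.5*||y - A x||^2 + lam*||x||_1 by iterative soft
    thresholding (ISTA), a simple stand-in for the sparse
    reconstruction algorithms whose phase boundaries are analyzed."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1 / Lipschitz const. of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x + step * A.T @ (y - A @ x), step * lam)
    return x

# Toy instance: N putative predictors, M < N measurements, K true nonzeros.
rng = np.random.default_rng(0)
N, M, K = 200, 100, 10
A = rng.standard_normal((M, N)) / np.sqrt(M)
x_true = np.zeros(N)
x_true[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
x_hat = lasso_ista(A, A @ x_true, lam=0.01)

# Crude proxy for the average local susceptibility: for the L1 penalty the
# local response dx_i/dh_i is 1 on the active set and 0 off it.
chi = np.mean(np.abs(x_hat) > 1e-10)
print("reconstruction error:", np.linalg.norm(x_hat - x_true), "chi:", chi)
```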


Read also

We discuss the universality of the L1 recovery threshold in compressed sensing. Previous studies in the fields of statistical mechanics and random matrix integration have shown that L1 recovery under a random matrix with orthogonal symmetry has a universal threshold. This suggests that the threshold of L1 recovery under a non-orthogonal random matrix can differ from the universal one. Taking this into account, we use a simple random matrix without orthogonal symmetry, whose random entries are not independent, and show analytically that the threshold of L1 recovery for such a matrix does not coincide with the universal one. The results of an extensive numerical experiment are in good agreement with the analytical results, which validates our methodology. Though our analysis is based on replica heuristics in statistical mechanics and is not rigorous, the findings nevertheless support the view that the universality of the threshold is strongly related to the symmetry of the random matrix.
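A hedged sketch of the kind of numerical experiment this abstract refers to: solve basis pursuit (L1 recovery) as a linear program and estimate the empirical success probability at a given measurement ratio. The ensemble, problem sizes, and tolerances below are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from scipy.optimize import linprog

def l1_recover(A, y):
    """Basis pursuit, min ||x||_1 subject to A x = y, as a linear
    program with the standard split x = u - v, u >= 0, v >= 0."""
    N = A.shape[1]
    res = linprog(c=np.ones(2 * N), A_eq=np.hstack([A, -A]), b_eq=y,
                  bounds=(0, None), method="highs")
    return res.x[:N] - res.x[N:]

def success_rate(alpha, rho, N=100, trials=20, tol=1e-4, seed=0):
    """Empirical probability of exact L1 recovery at measurement ratio
    alpha = M/N and sparsity rho = K/M, for an i.i.d. Gaussian matrix."""
    rng = np.random.default_rng(seed)
    M, K = int(alpha * N), max(1, int(rho * alpha * N))
    hits = 0
    for _ in range(trials):
        A = rng.standard_normal((M, N))
        x0 = np.zeros(N)
        x0[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
        x_hat = l1_recover(A, A @ x0)
        hits += np.linalg.norm(x_hat - x0) < tol * np.linalg.norm(x0)
    return hits / trials

# Sweeping alpha at fixed rho traces the empirical recovery threshold;
# replacing the Gaussian ensemble with a structured, non-orthogonally-
# invariant one tests whether the threshold shifts.
print(success_rate(alpha=0.5, rho=0.2))
```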
We present a novel framework exploiting the cascade of phase transitions that occurs during simulated annealing of the Expectation-Maximisation algorithm to cluster datasets with multi-scale structures. Using the weighted local covariance, we can extract, a posteriori and without any prior knowledge, information on the number of clusters at different scales together with their sizes. We also study the linear stability of the iterative scheme to derive the threshold at which the first transition occurs and show how to approximate the subsequent ones. Finally, we combine simulated annealing with recent developments in regularised Gaussian mixture models to learn a principal graph from spatially structured datasets that can also exhibit many scales.
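The following is a minimal deterministic-annealing EM sketch for an isotropic Gaussian mixture, illustrating the generic mechanism the abstract builds on: a tempered E-step whose inverse temperature is slowly raised, so that cluster centers split at critical temperatures. It is a generic illustration under assumed parameters, not the paper's regularised mixture framework.

```python
import numpy as np

def annealed_em(X, n_components, betas, n_inner=10, seed=0):
    """EM for an isotropic Gaussian mixture with a tempered E-step.
    Responsibilities are computed at inverse temperature beta; slowly
    raising beta is the annealing schedule under which centers split."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    # all centers start near the global mean, with tiny symmetry-breaking noise
    mu = X.mean(axis=0) + 1e-3 * rng.standard_normal((n_components, d))
    var = X.var()
    for beta in betas:
        for _ in range(n_inner):
            d2 = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(axis=-1)
            logits = -beta * d2 / (2.0 * var)
            logits -= logits.max(axis=1, keepdims=True)
            r = np.exp(logits)
            r /= r.sum(axis=1, keepdims=True)      # tempered E-step
            w = r.sum(axis=0)
            mu = (r.T @ X) / w[:, None]            # M-step: means
            var = (r * d2).sum() / (n * d)         # M-step: shared variance
    return mu

# Three well-separated blobs; counting distinct centers at each beta
# reveals the scale hierarchy as successive splits occur.
rng = np.random.default_rng(1)
X = np.vstack([rng.standard_normal((100, 2)) + c
               for c in ([0, 0], [6, 0], [0, 6])])
centers = annealed_em(X, n_components=6, betas=np.linspace(0.05, 1.0, 60))
print(np.round(centers, 2))
```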
The complexity of a pattern is key information that the human brain uses to distinguish between objects of about the same size and shape. Like other innate human senses, the perception of complexity is not easily quantified. We propose a transparent and universal machine method for estimating the structural (effective) complexity of two- and three-dimensional patterns that can be straightforwardly generalized to other classes of objects. It is based on multi-step renormalization of the pattern of interest and on computing the overlap between neighboring renormalized layers. In this way, we can define a single number characterizing the structural complexity of an object. We apply this definition to quantify the complexity of various magnetic patterns and demonstrate that it not only reflects the intuitive feeling of what is complex and what is simple, but can also be used to accurately detect different phase transitions and to extract information about the dynamics of non-equilibrium systems. When employed for the latter, the proposed scheme is much simpler and numerically cheaper than standard methods based on computing correlation functions or on machine learning techniques.
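A rough sketch of the scheme as described: repeatedly block-average the pattern and accumulate, over scales, the deviation of the inter-layer overlap from the mean of the two self-overlaps. The overlap definition and normalization conventions here are assumptions and may differ from the paper's.

```python
import numpy as np

def coarse_grain(p, b=2):
    """One renormalization step: replace each b x b block by its average."""
    h, w = p.shape
    return p.reshape(h // b, b, w // b, b).mean(axis=(1, 3))

def overlap(a, b_, size):
    """Normalized overlap of two layers, compared at a common resolution
    via nearest-neighbor upsampling."""
    up = lambda x: np.kron(x, np.ones((size // x.shape[0], size // x.shape[1])))
    return (up(a) * up(b_)).mean()

def structural_complexity(p, b=2, n_steps=6):
    """Sum over scales of |O(k, k+1) - (O(k, k) + O(k+1, k+1)) / 2|.
    Assumes a square pattern whose side is divisible by b**n_steps."""
    layers = [np.asarray(p, float)]
    for _ in range(n_steps):
        layers.append(coarse_grain(layers[-1], b))
    size = p.shape[0]
    c = 0.0
    for k in range(n_steps):
        o_cross = overlap(layers[k], layers[k + 1], size)
        o_self = 0.5 * (overlap(layers[k], layers[k], size)
                        + overlap(layers[k + 1], layers[k + 1], size))
        c += abs(o_cross - o_self)
    return c

# A featureless random pattern loses structure after one or two steps,
# so most of its inter-layer overlaps match the self-overlaps; patterns
# with structure on many scales accumulate larger contributions.
rng = np.random.default_rng(0)
print(structural_complexity(np.sign(rng.standard_normal((64, 64)))))
```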
Large-scale quantum devices provide insights beyond the reach of classical simulations. However, for a reliable and verifiable quantum simulation, the building blocks of the quantum device require exquisite benchmarking. Benchmarking large-scale dynamical quantum systems represents a major challenge due to the lack of efficient tools for their simulation. Here, we present a scalable algorithm based on neural networks for Hamiltonian tomography in out-of-equilibrium quantum systems. We illustrate our approach using a model of a forefront quantum simulation platform: ultracold atoms in optical lattices. Specifically, we show that our algorithm is able to reconstruct the Hamiltonian of a quasi-1D bosonic system of arbitrary size using an accessible number of experimental measurements. We are able to significantly improve on the previously known parameter precision.
John Z. Imbrie (2014)
A new KAM-style proof of Anderson localization is obtained. A sequence of local rotations is defined, such that the off-diagonal matrix elements of the Hamiltonian are driven rapidly to zero. This leads to the first proof via multi-scale analysis of exponential decay of the eigenfunction correlator (which implies strong dynamical localization). The method has been used in recent work on many-body localization [arXiv:1403.7837].