
Cascade of Phase Transitions for Multi-Scale Clustering

Posted by Tony Bonnaire
Publication date: 2020
Paper language: English


We present a novel framework exploiting the cascade of phase transitions occurring during simulated annealing of the Expectation-Maximisation algorithm to cluster datasets with multi-scale structures. Using the weighted local covariance, we can extract, a posteriori and without any prior knowledge, information on the number of clusters at different scales together with their sizes. We also study the linear stability of the iterative scheme to derive the threshold at which the first transition occurs and show how to approximate the subsequent ones. Finally, we combine simulated annealing with recent developments in regularised Gaussian mixture models to learn a principal graph from spatially structured datasets that can also exhibit many scales.
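To illustrate the annealing mechanism, here is a minimal sketch of deterministic-annealing EM with isotropic components: at low inverse temperature $\beta$ all effective means coincide, and they split in a cascade as $\beta$ crosses critical values set by the local covariance of the data. This is an illustration of the general technique, not the authors' implementation (which uses weighted local covariances); the function name, geometric schedule, and isotropy are assumptions.

import numpy as np

def annealed_em(X, K, beta0=1e-3, beta_max=1e2, rate=1.05, iters=20, seed=0):
    # Deterministic-annealing EM sketch: soft assignments at inverse
    # temperature beta; means split in a cascade as beta increases.
    rng = np.random.default_rng(seed)
    mu = X.mean(axis=0) + 1e-4 * rng.standard_normal((K, X.shape[1]))
    beta = beta0
    while beta < beta_max:
        for _ in range(iters):
            d2 = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(-1)    # squared distances (m, K)
            r = np.exp(-0.5 * beta * (d2 - d2.min(axis=1, keepdims=True)))
            r /= r.sum(axis=1, keepdims=True)                       # E-step: responsibilities
            mu = (r.T @ X) / r.sum(axis=0)[:, None]                 # M-step: weighted means
        beta *= rate                                                # annealing schedule
    return mu

Counting the number of distinct rows of the returned means at each $\beta$ exposes the transitions: each critical value splits a subset of coinciding means, which is the cascade the abstract exploits.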


Read also

We study the problem of detecting a structured, low-rank signal matrix corrupted with additive Gaussian noise. This includes clustering in a Gaussian mixture model, sparse PCA, and submatrix localization. Each of these problems is conjectured to exhibit a sharp information-theoretic threshold, below which the signal is too weak for any algorithm to detect. We derive upper and lower bounds on these thresholds by applying the first and second moment methods to the likelihood ratio between these planted models and null models where the signal matrix is zero. Our bounds differ by at most a factor of $\sqrt{2}$ when the rank is large (in the clustering and submatrix localization problems, when the number of clusters or blocks is large) or the signal matrix is very sparse. Moreover, our upper bounds show that for each of these problems there is a significant regime where reliable detection is information-theoretically possible but where known algorithms such as PCA fail completely, since the spectrum of the observed matrix is uninformative. This regime is analogous to the conjectured hard but detectable regime for community detection in sparse graphs.
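For intuition on the regime where the spectrum is uninformative, here is a small numerical sketch in the rank-one spiked Wigner model (an illustration of the spectral/PCA threshold the abstract contrasts against, not of the paper's moment-method bounds; the model and names are assumptions): the top eigenvalue of $Y = \frac{\lambda}{n} xx^{\top} + W/\sqrt{n}$ detaches from the bulk edge at 2 only when $\lambda > 1$.

import numpy as np

def top_eigenvalue(n=2000, lam=0.8, seed=0):
    # Rank-one spike plus symmetric Gaussian noise; below lam = 1 the
    # largest eigenvalue sticks to the semicircle edge (about 2), so
    # PCA cannot distinguish the planted model from pure noise.
    rng = np.random.default_rng(seed)
    x = rng.choice([-1.0, 1.0], n)                   # Rademacher spike direction
    W = rng.standard_normal((n, n))
    W = (W + W.T) / np.sqrt(2.0)                     # GOE-like symmetric noise
    Y = (lam / n) * np.outer(x, x) + W / np.sqrt(n)
    return np.linalg.eigvalsh(Y)[-1]                 # largest eigenvalue

print(top_eigenvalue(lam=0.8))  # ~2.0: spike buried in the bulk
print(top_eigenvalue(lam=1.5))  # ~lam + 1/lam: an outlier PCA can detect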
We consider the problem of Gaussian mixture clustering in the high-dimensional limit where the data consists of $m$ points in $n$ dimensions, $n, m \rightarrow \infty$ and $\alpha = m/n$ stays finite. Using exact but non-rigorous methods from statistical physics, we determine the critical value of $\alpha$ and the distance between the clusters at which it becomes information-theoretically possible to reconstruct the membership into clusters better than chance. We also determine the accuracy achievable by the Bayes-optimal estimation algorithm. In particular, we find that when the number of clusters is sufficiently large, $r > 4 + 2\sqrt{\alpha}$, there is a gap between the threshold for information-theoretically optimal performance and the threshold at which known algorithms succeed.
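The stated condition is simple to evaluate; a short check (hypothetical helper name) flags the regime where, per this abstract, a statistical-to-computational gap is predicted:

import numpy as np

def has_hard_regime(r, alpha):
    # Condition quoted in the abstract: a gap between the information-theoretic
    # and the algorithmic threshold opens when r > 4 + 2*sqrt(alpha).
    return r > 4.0 + 2.0 * np.sqrt(alpha)

print(has_hard_regime(r=3, alpha=1.0))    # False: few clusters, no gap predicted
print(has_hard_regime(r=16, alpha=4.0))   # True: many clusters, hard phase possible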
Complexity of patterns is key information for the human brain to distinguish objects of about the same size and shape. Like other innate human senses, complexity perception cannot be easily quantified. We propose a transparent and universal machine method for estimating the structural (effective) complexity of two- and three-dimensional patterns that can be straightforwardly generalized to other classes of objects. It is based on multi-step renormalization of the pattern of interest and computing the overlap between neighboring renormalized layers. This way, we can define a single number characterizing the structural complexity of an object. We apply this definition to quantify the complexity of various magnetic patterns and demonstrate that not only does it reflect the intuitive feeling of what is complex and what is simple, but it can also be used to accurately detect different phase transitions and gain information about the dynamics of non-equilibrium systems. When employed for that, the proposed scheme is much simpler and numerically cheaper than the standard methods based on computing correlation functions or using machine learning techniques.
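A minimal sketch of the renormalization-overlap idea for a 2D pattern follows; the 2x2 block size, the overlap definition, and the normalization are assumptions for illustration, and the paper's exact prescription may differ.

import numpy as np

def coarse_grain(p):
    # One renormalization step: average 2x2 blocks.
    h, w = p.shape
    p = p[: h - h % 2, : w - w % 2]
    return 0.25 * (p[0::2, 0::2] + p[0::2, 1::2] + p[1::2, 0::2] + p[1::2, 1::2])

def structural_complexity(pattern, steps=4):
    # Sum of overlap mismatches between consecutive renormalized layers.
    # Assumes a square pattern whose side is divisible by 2**steps.
    layers = [np.asarray(pattern, dtype=float)]
    for _ in range(steps):
        layers.append(coarse_grain(layers[-1]))
    n = layers[0].shape[0]
    # Bring every layer back to the original resolution by block repetition.
    up = [np.repeat(np.repeat(f, n // f.shape[0], 0), n // f.shape[1], 1) for f in layers]
    O = lambda a, b: float((a * b).mean())           # overlap between two layers
    return sum(abs(O(up[k], up[k + 1]) - 0.5 * (O(up[k], up[k]) + O(up[k + 1], up[k + 1])))
               for k in range(steps))

print(structural_complexity(np.random.default_rng(0).random((64, 64))))

A structureless random field loses almost everything at the first coarse-graining step, while a pattern with features on many scales keeps contributing mismatch at every layer, which is what the single number is meant to capture.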
Leo Radzihovsky (1997)
We study the shape, elasticity and fluctuations of the recently predicted (cond-mat/9510172) and subsequently observed in numerical simulations (cond-mat/9705059) tubule phase of anisotropic membranes, as well as the phase transitions into and out of it. This novel phase lies between the previously predicted flat and crumpled phases, both in temperature and in its physical properties: it is crumpled in one direction, and extended in the other. Its shape and elastic properties are characterized by a radius of gyration exponent $\nu$ and an anisotropy exponent $z$. We derive scaling laws for the radius of gyration $R_G(L_\perp, L_y)$ (i.e. the average thickness) of the tubule about a spontaneously selected straight axis and for the tubule undulations $h_{\mathrm{rms}}(L_\perp, L_y)$ transverse to its average extension. For phantom (i.e. non-self-avoiding) membranes, we predict $\nu = 1/4$, $z = 1/2$ and $\eta_\kappa = 0$, exactly, in excellent agreement with simulations. For membranes embedded in a space of dimension $d < 11$, self-avoidance greatly swells the tubule and suppresses its wild transverse undulations, changing its shape exponents $\nu$ and $z$. We give detailed scaling results for the shape of a tubule of arbitrary aspect ratio and compute a variety of correlation functions, as well as the anomalous elasticity of the tubules. Finally, we present a scaling theory for the shape of the membrane and its specific heat near the continuous transitions into and out of the tubule phase, and perform detailed renormalization group calculations for the crumpled-to-tubule transition for phantom membranes.
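For readers reconstructing the notation: the quoted exponents fit the standard anisotropic scaling ansatz for the tubule radius; the scaling-function form $S_R$ below is the conventional one from the tubule literature, assumed here rather than taken from this abstract.

R_G(L_\perp, L_y) \;\sim\; L_\perp^{\nu}\, S_R\!\left(\frac{L_y}{L_\perp^{z}}\right),
\qquad \nu = \tfrac{1}{4},\quad z = \tfrac{1}{2} \quad \text{(phantom membranes)}.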
Neural networks have been shown to perform incredibly well in classification tasks over structured high-dimensional datasets. However, the learning dynamics of such networks is still poorly understood. In this paper we study in detail the training dynamics of a simple type of neural network: a single hidden layer trained to perform a classification task. We show that in a suitable mean-field limit this case maps to a single-node learning problem with a time-dependent dataset determined self-consistently from the average node population. We specialize our theory to the prototypical case of a linearly separable dataset and a linear hinge loss, for which the dynamics can be explicitly solved. This allows us to address, in a simple setting, several phenomena appearing in modern networks, such as the slowing down of training dynamics, the crossover between rich and lazy learning, and overfitting. Finally, we assess the limitations of mean-field theory by studying the case of a large but finite number of nodes and training samples.
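A minimal sketch of the setting described above, assuming ReLU hidden units, frozen random second-layer signs, $1/n$ output scaling, and a hinge loss on linearly separable Gaussian data; these are all illustrative choices, not the authors' exact setup or analysis.

import numpy as np

def train_one_hidden_layer(n=64, m=512, d=20, lr=0.5, epochs=300, seed=0):
    # Width-n hidden layer with output scaled by 1/n, as in the mean-field
    # limit; only the hidden weights are trained, with loss max(0, 1 - y f(x)).
    rng = np.random.default_rng(seed)
    teacher = rng.standard_normal(d)
    X = rng.standard_normal((m, d))
    y = np.sign(X @ teacher)                       # linearly separable labels
    W = rng.standard_normal((n, d))                # hidden weights
    a = rng.choice([-1.0, 1.0], n)                 # frozen second-layer signs
    for _ in range(epochs):
        pre = W @ X.T                              # (n, m) pre-activations
        out = (a @ np.maximum(pre, 0.0)) / n       # network output f(x)
        viol = (y * out < 1.0).astype(float)       # samples with violated margin
        grad = -(y * viol)                         # d(loss)/d(out), per sample
        gW = ((a[:, None] * (pre > 0)) * grad[None, :]) @ X / (n * m)
        W -= lr * n * gW                           # width-rescaled step (mean-field scaling)
    out = (a @ np.maximum(W @ X.T, 0.0)) / n
    return float((np.sign(out) == y).mean())       # training accuracy

print(train_one_hidden_layer())

Only the margin-violating samples contribute to the gradient, so the effective dataset each node sees shrinks and changes over training, which is the self-consistent time-dependent dataset the mean-field picture describes.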