
Catastrophic Fisher Explosion: Early Phase Fisher Matrix Impacts Generalization

Published by Stanislaw Jastrzebski
Publication date: 2020
Language: English





The early phase of training a deep neural network has a dramatic effect on the local curvature of the loss function. For instance, using a small learning rate does not guarantee stable optimization, because the optimization trajectory has a tendency to steer towards regions of the loss surface with increasing local curvature. We ask whether this tendency is connected to the widely observed phenomenon that the choice of the learning rate strongly influences generalization. We first show that stochastic gradient descent (SGD) implicitly penalizes the trace of the Fisher Information Matrix (FIM), a measure of the local curvature, from the start of training. We argue that this is an implicit regularizer in SGD by showing that explicitly penalizing the trace of the FIM can significantly improve generalization. We highlight that poor final generalization coincides with the trace of the FIM attaining a large value early in training, which we refer to as catastrophic Fisher explosion. Finally, to gain insight into the regularization effect of penalizing the trace of the FIM, we show that it limits memorization by reducing the learning speed of examples with noisy labels more than that of examples with clean labels.
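Concretely, the explicit penalty described in the abstract can be prototyped in a few lines. The PyTorch sketch below is illustrative rather than the authors' released code: labels are sampled from the model's own predictive distribution, and the squared norm of the resulting log-likelihood gradient serves as a cheap Monte Carlo surrogate for the trace of the FIM. The penalty coefficient in the usage comment is an assumed value.

    import torch
    import torch.nn.functional as F

    def fisher_trace_penalty(model, inputs):
        # Sample "hard" labels from the model's own predictive distribution;
        # under this sampling, E ||grad_theta log p(y_hat | x)||^2 equals Tr(FIM).
        logits = model(inputs)
        with torch.no_grad():
            sampled = torch.distributions.Categorical(logits=logits).sample()
        nll = F.cross_entropy(logits, sampled)
        # create_graph=True so the penalty itself can be backpropagated through.
        grads = torch.autograd.grad(nll, list(model.parameters()), create_graph=True)
        # Squared norm of the batch-averaged gradient: a cheap surrogate for the
        # per-example trace estimator (which would require per-sample gradients).
        return sum((g ** 2).sum() for g in grads)

    # Usage inside a training step (the 1e-3 coefficient is an assumed value):
    # loss = F.cross_entropy(model(x), y) + 1e-3 * fisher_trace_penalty(model, x)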




Read also

Deep domain adaptation models learn a neural network in an unlabeled target domain by leveraging the knowledge from a labeled source domain. This can be achieved by learning a domain-invariant feature space. Though the learned representations are separable in the source domain, they usually have a large variance, and samples with different class labels tend to overlap in the target domain, which yields suboptimal adaptation performance. To fill the gap, a Fisher loss is proposed to learn discriminative representations which are within-class compact and between-class separable. Experimental results on two benchmark datasets show that the Fisher loss is a general and effective loss for deep domain adaptation. It brings noticeable improvements when used together with widely adopted transfer criteria, including MMD, CORAL, and the domain adversarial loss. For example, an absolute improvement of 6.67% in mean accuracy is attained when the Fisher loss is used together with the domain adversarial loss on the Office-Home dataset.
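As a rough illustration of the idea, a Fisher-style discriminative term can be sketched as a ratio of within-class to between-class scatter of the features. This is a generic sketch of the principle, not the paper's exact formulation:

    import torch

    def fisher_loss(features, labels, eps=1e-6):
        # Encourage within-class compactness (small numerator) and
        # between-class separability (large denominator).
        global_mean = features.mean(dim=0)
        within = features.new_zeros(())
        between = features.new_zeros(())
        for c in labels.unique():
            fc = features[labels == c]
            mu_c = fc.mean(dim=0)
            within = within + ((fc - mu_c) ** 2).sum()
            between = between + fc.shape[0] * ((mu_c - global_mean) ** 2).sum()
        return within / (between + eps)

Per the abstract, a term of this kind is added on top of a transfer criterion (MMD, CORAL, or a domain adversarial loss); the relative weighting between the two is a tunable design choice.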
Fisher forecasts are a common tool in cosmology, with applications ranging from survey planning to the development of new cosmological probes. While frequently adopted, they are subject to numerical instabilities that need to be carefully investigated to ensure accurate and reproducible results. This research note discusses these challenges using the example of a weak lensing data vector and proposes procedures that help address them.
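The instability in question typically enters through finite-difference derivatives of the data vector. A minimal NumPy sketch of a Gaussian Fisher forecast makes the sensitivity explicit; the function names and the step-size convergence check are illustrative assumptions, not the note's actual procedures:

    import numpy as np

    def fisher_forecast(data_vector, theta0, cov, steps):
        # F_ij = d_i^T C^{-1} d_j, where d_i is the derivative of the data
        # vector with respect to parameter i, by central finite differences.
        derivs = []
        for i, h in enumerate(steps):
            tp, tm = theta0.copy(), theta0.copy()
            tp[i] += h
            tm[i] -= h
            derivs.append((data_vector(tp) - data_vector(tm)) / (2.0 * h))
        D = np.stack(derivs)                  # shape: (n_params, n_data)
        return D @ np.linalg.solve(cov, D.T)  # shape: (n_params, n_params)

    # A basic stability check: recompute with halved step sizes and verify
    # that the forecast constraints change negligibly.
    # F1 = fisher_forecast(dv, theta0, cov, steps)
    # F2 = fisher_forecast(dv, theta0, cov, [h / 2 for h in steps])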
Alexander Soen, Ke Sun (2021)
The Fisher information matrix (FIM) has been applied to the realm of deep learning. It is closely related to the loss landscape, the variance of the parameters, second order optimization, and deep learning theory. The exact FIM is either unavailable in closed form or too expensive to compute. In practice, it is almost always estimated based on empirical samples. We investigate two such estimators based on two equivalent representations of the FIM. They are both unbiased and consistent with respect to the underlying true FIM. Their estimation quality is characterized by their variance given in closed form. We bound their variances and analyze how the parametric structure of a deep neural network can impact the variance. We discuss the meaning of this variance measure and our bounds in the context of deep learning.
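A toy comparison for logistic regression, under the assumption that the two equivalent representations referred to are the score outer-product form, F = E[∇log p ∇log pᵀ], and the negative-Hessian form, F = E[−∇² log p]; the paper's setting is deep networks, so this is only a sketch of the two estimators:

    import torch
    from torch.autograd.functional import hessian

    torch.manual_seed(0)
    theta = torch.randn(3, requires_grad=True)
    X = torch.randn(256, 3)

    def logp(t, x, y):
        # log-likelihood of one (x, y) pair under logistic regression
        z = x @ t
        return y * torch.nn.functional.logsigmoid(z) + (1 - y) * torch.nn.functional.logsigmoid(-z)

    with torch.no_grad():
        Y = torch.bernoulli(torch.sigmoid(X @ theta))  # labels drawn from the model itself

    F1 = torch.zeros(3, 3)  # score outer-product estimator: mean of grad grad^T
    F2 = torch.zeros(3, 3)  # negative-Hessian estimator:    mean of -hessian
    for x, y in zip(X, Y):
        g = torch.autograd.grad(logp(theta, x, y), theta)[0]
        F1 += torch.outer(g, g) / len(X)
        F2 += -hessian(lambda t: logp(t, x, y), theta.detach()) / len(X)
    # Both estimators converge to the same true FIM; what differs is their
    # variance, which is what the closed-form analysis above characterizes.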
L. Raul Abramo (2011)
Starting from the Fisher matrix for counts in cells, I derive the full Fisher matrix for surveys of multiple tracers of large-scale structure. The key assumption is that the inverse of the covariance of the galaxy counts is given by the naive matrix inverse of the covariance in a mixed position-space and Fourier-space basis. I then compute the Fisher matrix for the power spectrum in bins of the three-dimensional wavenumber k; the Fisher matrix for functions of position x (or redshift z), such as the linear bias of the tracers and/or the growth function; and the cross-terms of the Fisher matrix that express the correlations between estimates of the power spectrum and estimates of the bias. When the bias and growth function are fully specified, and the Fourier-space bins are large enough that the covariance between them can be neglected, the Fisher matrix for the power spectrum reduces to the widely used result first derived by Feldman, Kaiser and Peacock (1994). Assuming isotropy, an exact calculation of the Fisher matrix can be performed in the case of a constant-density, volume-limited survey. I then show how the exact Fisher matrix in the general case can be obtained in terms of a series of volume-limited surveys.
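For reference, the Feldman-Kaiser-Peacock limit mentioned above is usually quoted in the following effective-volume form (reproduced from the standard literature, e.g. Tegmark 1997, not from this paper):

    $F_{ij} = \frac{1}{2} \int \frac{d^3x \, d^3k}{(2\pi)^3} \left[ \frac{\bar{n}(x)\,P(k)}{1 + \bar{n}(x)\,P(k)} \right]^2 \frac{\partial \ln P(k)}{\partial \theta_i} \frac{\partial \ln P(k)}{\partial \theta_j}$

where $\bar{n}(x)$ is the mean number density of the tracer; the bracketed factor is the usual effective-volume weight, which the multi-tracer derivation above generalizes.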
This paper focuses on estimating probability distributions over the set of 3D rotations ($SO(3)$) using deep neural networks. Learning to regress models to the set of rotations is inherently difficult due to differences in topology between $\mathbb{R}^N$ and $SO(3)$. We overcome this issue by using a neural network to output the parameters of a matrix Fisher distribution, since these parameters are homeomorphic to $\mathbb{R}^9$. Using a negative log-likelihood loss for this distribution gives a loss which is convex with respect to the network outputs. By optimizing this loss we improve state-of-the-art results on several challenging datasets, namely Pascal3D+, ModelNet10-$SO(3)$ and UPNA head pose.
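The structure of that loss is easy to sketch: the network's 9 outputs are reshaped into the 3x3 parameter matrix A of a matrix Fisher density p(R) ∝ exp(tr(AᵀR)), so the NLL is log c(A) − tr(AᵀR). The normalizer c(A) depends only on the singular values of A and has no elementary closed form, so `log_normalizer` below is a hypothetical helper (quadrature or a series approximation in practice); this is a sketch of the loss shape, not the authors' implementation:

    import torch

    def matrix_fisher_nll(net_out, R, log_normalizer):
        # net_out: (B, 9) raw network outputs; R: (B, 3, 3) ground-truth rotations.
        A = net_out.view(-1, 3, 3)
        # The normalizer c(A) depends only on the singular values of A.
        s = torch.linalg.svdvals(A)
        # tr(A^T R) per batch element.
        trace_term = (A.transpose(-1, -2) @ R).diagonal(dim1=-2, dim2=-1).sum(-1)
        # NLL = log c(A) - tr(A^T R); convex in A, as stated in the abstract.
        return (log_normalizer(s) - trace_term).mean()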
