Convergence rates in $\ell^1$-regularization when the basis is not smooth enough

Published by: Jens Flemming
Publication date: 2013
Research field:
Language: English

Sparsity-promoting regularization is an important technique for signal reconstruction and several other ill-posed problems. Theoretical investigations are typically based on the assumption that the unknown solution has a sparse representation with respect to a fixed basis. We drop this sparsity assumption and provide error estimates for non-sparse solutions. After discussing a result in this direction published earlier by one of the authors and coauthors, we prove a similar error estimate under weaker assumptions. Two examples illustrate that this weaker set of assumptions indeed covers additional situations that appear in applications.
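For orientation, the $\ell^1$-regularization discussed here is Tikhonov-type minimization of $\frac{1}{2}\|Ax-y\|^2 + \alpha\|x\|_1$. Below is a minimal sketch of the standard iterative soft-thresholding algorithm (ISTA) for this functional; the toy operator $A$, data $y$, and parameter values are illustrative stand-ins, not taken from the paper.

```python
import numpy as np

def ista(A, y, alpha, n_iter=500):
    """Iterative soft-thresholding for min_x 0.5*||A x - y||^2 + alpha*||x||_1."""
    t = 1.0 / np.linalg.norm(A, 2) ** 2        # step size 1/L with L = ||A||_2^2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - t * (A.T @ (A @ x - y))        # gradient step on the data term
        x = np.sign(z) * np.maximum(np.abs(z) - t * alpha, 0.0)  # soft threshold
    return x

# Toy problem: even when the underlying signal is not sparse, the
# regularized minimizer is; the paper's estimates quantify the resulting error.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100))
y = A @ rng.standard_normal(100) + 0.01 * rng.standard_normal(50)
x_alpha = ista(A, y, alpha=0.5)
print(np.count_nonzero(x_alpha), "nonzero coefficients out of", x_alpha.size)
```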


Read also

Many structured prediction tasks in machine vision have a collection of acceptable answers instead of one definitive ground-truth answer. Segmentation of images, for example, is subject to human labeling bias. Similarly, there are multiple possible pixel values that could plausibly complete occluded image regions. State-of-the-art supervised learning methods are typically optimized to make a single test-time prediction for each query, failing to find other modes in the output space. Existing methods that allow for sampling often sacrifice speed or accuracy. We introduce a simple method for training a neural network which enables diverse structured predictions to be made for each test-time query. For a single input, we learn to predict a range of possible answers. We compare favorably to methods that seek diversity through an ensemble of networks. Such stochastic multiple choice learning faces mode collapse, where one or more ensemble members fail to receive any training signal. Our best-performing solution can be deployed for various tasks, and involves only small modifications to the existing single-mode architecture, loss function, and training regime. We demonstrate that our method results in quantitative improvements across three challenging tasks: 2D image completion, 3D volume estimation, and flow prediction.
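A minimal sketch of one common way to train such multiple-hypothesis heads is a relaxed winner-takes-all loss; the shapes, names, and the relaxation weight `eps` below are illustrative assumptions, not the paper's exact formulation.

```python
import torch

def relaxed_wta_loss(preds, target, eps=0.05):
    """Relaxed winner-takes-all loss over M hypotheses.

    preds:  (batch, M, D) tensor -- M hypotheses per example (M >= 2)
    target: (batch, D) tensor
    The best hypothesis per example gets weight (1 - eps); the remaining
    M - 1 hypotheses share eps, so every head keeps receiving a little
    gradient -- a common remedy for the mode collapse mentioned above.
    """
    errs = ((preds - target.unsqueeze(1)) ** 2).mean(dim=-1)      # (batch, M)
    weights = torch.full_like(errs, eps / (errs.shape[1] - 1))
    weights.scatter_(1, errs.argmin(dim=1, keepdim=True), 1.0 - eps)
    return (weights * errs).sum(dim=1).mean()

# Example: 8 inputs, 5 hypothesis heads, 16-dimensional outputs.
preds = torch.randn(8, 5, 16, requires_grad=True)
target = torch.randn(8, 16)
relaxed_wta_loss(preds, target).backward()   # all heads get some gradient
```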
Junsoo Park, Max Dylla, Yi Xia (2020)
Band convergence is considered a clear benefit to thermoelectric performance because it increases the charge carrier concentration for a given Fermi level, which typically enhances charge conductivity while preserving the Seebeck coefficient. However, this advantage hinges on the assumption that interband scattering of carriers is weak or insignificant. With a first-principles treatment of electron-phonon scattering in the CaMg$_{2}$Sb$_{2}$-CaZn$_{2}$Sb$_{2}$ Zintl system and the full Heusler Sr$_{2}$SbAu, we demonstrate that the benefit of band convergence can be intrinsically negated by interband scattering, depending on the manner in which bands converge. In the Zintl alloy, band convergence does not improve the weighted mobility or the density-of-states effective mass. We trace the underlying reason to the fact that the bands converge at one k-point, which induces strong interband scattering of both the deformation-potential and the polar-optical kinds. This case contrasts with band convergence at distant k-points (as in the full Heusler), which better preserves the single-band scattering behavior, thereby successfully leading to improved performance. We therefore suggest that band convergence as a thermoelectric design principle is best suited to cases in which it occurs at distant k-points.
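A back-of-the-envelope toy model of this effect (a sketch under simplified assumptions with arbitrary units, not the paper's first-principles treatment): if converged bands overlap in k-space so that the scattering rate grows in proportion to the total density of final states, the valley-degeneracy gain cancels out of the weighted mobility.

```python
# Toy two-regime comparison: scattering rate taken proportional to the
# density of available final states (deformation-potential-like assumption).
def weighted_mobility(N_v, interband):
    m_b = 1.0                            # band mass of each valley (arbitrary units)
    m_dos = N_v ** (2.0 / 3.0) * m_b     # DOS effective mass of N_v converged valleys
    rate = N_v if interband else 1.0     # interband: all N_v valleys are final states
    mu = 1.0 / (m_b * rate)              # mobility ~ 1 / (band mass * scattering rate)
    return mu * m_dos ** 1.5             # weighted mobility ~ mu * m_dos^(3/2)

for N_v in (1, 2, 4):
    print(N_v, weighted_mobility(N_v, interband=False),
          weighted_mobility(N_v, interband=True))
# Without interband scattering the gain scales as N_v; with it, the gain
# cancels, mirroring the converged-at-one-k-point case in the abstract.
```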
We apply the pigeonhole principle to show that there must exist Boolean functions on 7 inputs with multiplicative complexity at least 7, i.e., functions that cannot be computed with only 6 multiplications over the Galois field with two elements.
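The counting behind such a statement can be sketched as follows (a naive count for illustration; the paper's argument is necessarily sharper): every function of multiplicative complexity at most $k$ is computed by a circuit with $k$ AND gates whose inputs, and whose final output, are affine combinations over GF(2), so it suffices to compare the number of such circuits with the $2^{2^n}$ Boolean functions on $n$ inputs.

```python
# Naive pigeonhole count (illustrative; the paper's counting is finer).
n, k = 7, 6

total_functions = 2 ** (2 ** n)          # 2^128 Boolean functions on 7 inputs

# A circuit with k AND gates over GF(2): gate i (i = 1..k) has two inputs,
# each an affine combination of the n variables and the i-1 earlier gate
# outputs (2^(n+i) choices each); the final output is an affine combination
# of the n variables and all k gate outputs (2^(n+k+1) choices).
exponent = sum(2 * (n + i) for i in range(1, k + 1)) + (n + k + 1)

print(f"functions: 2^{2 ** n}, naive circuit bound: 2^{exponent}")
# The naive bound (2^140) exceeds 2^128, so it alone cannot settle the
# question -- presumably why a more careful count is needed for the proof.
```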
Pulsar glitches are traditionally viewed as a manifestation of vortex dynamics associated with a neutron superfluid reservoir confined to the inner crust of the star. In this Letter we show that the non-dissipative entrainment coupling between the neutron superfluid and the nuclear lattice leads to a less mobile crust superfluid, effectively reducing the moment of inertia associated with the angular momentum reservoir. Combining the latest observational data for prolific glitching pulsars with theoretical results for the crust entrainment, we find that the required superfluid reservoir exceeds that available in the crust. This challenges our understanding of the glitch phenomenon, and we discuss possible resolutions to the problem.
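The moment-of-inertia budget behind this claim can be sketched numerically (the relation and the Vela-like numbers below are illustrative textbook values, not taken from the Letter): the observed glitch activity sets a lower bound on the reservoir's moment-of-inertia fraction, and entrainment inflates that requirement by the average effective-mass enhancement of the crust neutrons.

```python
# Back-of-the-envelope reservoir constraint (illustrative numbers only).
# Standard argument: I_res / I_total >~ 2 * tau_c * A, where A is the
# glitch activity (summed Delta_nu/nu per unit time) and tau_c the
# characteristic age; entrainment multiplies the requirement by the
# average effective-mass ratio <m*/m> of the crust neutrons.

YEAR = 3.156e7                        # seconds per year

tau_c = 1.1e4 * YEAR                  # Vela-like characteristic age (~11 kyr)
activity = 2.0e-6 / (2.5 * YEAR)      # ~2e-6 glitch every ~2.5 yr (illustrative)
m_eff_ratio = 4.3                     # crust entrainment <m*/m> (typical estimate)
crust_fraction = 0.04                 # typical crust moment-of-inertia fraction

required = 2 * tau_c * activity
print(f"required reservoir fraction (no entrainment): {required:.3f}")
print(f"required with entrainment:                    {required * m_eff_ratio:.3f}")
print(f"typical crust fraction:                       {crust_fraction:.3f}")
# With entrainment the requirement (~8%) exceeds what the crust offers
# (~4%), which is the tension described in the abstract.
```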
Statistical models of natural stimuli provide an important tool for researchers in the fields of machine learning and computational neuroscience. A canonical way to quantitatively assess and compare the performance of statistical models is given by the likelihood. One class of statistical models which has recently gained increasing popularity and has been applied to a variety of complex data is the deep belief network. Analyses of these models, however, have typically been limited to qualitative analyses based on samples, due to the computationally intractable nature of the model likelihood. Motivated by these circumstances, the present article provides a consistent estimator for the likelihood that is both computationally tractable and simple to apply in practice. Using this estimator, a deep belief network which has been suggested for the modeling of natural image patches is quantitatively investigated and compared to other models of natural image patches. Contrary to earlier claims based on qualitative results, the results presented in this article provide evidence that the model under investigation is not a particularly good model for natural images.
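As a generic illustration of what a consistent likelihood estimator for a latent-variable model can look like (an importance-sampling sketch under simple assumptions, not the authors' estimator): for $p(v) = \sum_h p(v|h)\,p(h)$, sampling $h$ from a proposal and averaging the importance weights gives an unbiased, hence consistent, estimate of $p(v)$.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def log_likelihood_estimate(v, W, b, c, n_samples=10_000, seed=0):
    """Monte Carlo estimate of log p(v) for a one-layer sigmoid belief net:
    h ~ Bernoulli(sigmoid(c)),  v | h ~ Bernoulli(sigmoid(W h + b)).
    The prior serves as the proposal, so the importance weights reduce to
    p(v|h); a better proposal lowers variance, but consistency holds anyway."""
    rng = np.random.default_rng(seed)
    h = (rng.random((n_samples, c.size)) < sigmoid(c)).astype(float)
    p = sigmoid(h @ W.T + b)                                      # (n_samples, v.size)
    log_w = (v * np.log(p) + (1 - v) * np.log1p(-p)).sum(axis=1)  # log p(v|h)
    m = log_w.max()                                               # log-mean-exp
    return m + np.log(np.exp(log_w - m).mean())

# Tiny toy model; with 3 hidden units the estimate can be checked against
# brute-force enumeration of the 8 hidden states.
rng = np.random.default_rng(1)
W, b, c = rng.normal(size=(4, 3)), rng.normal(size=4), rng.normal(size=3)
print(log_likelihood_estimate(np.array([1.0, 0.0, 1.0, 1.0]), W, b, c))
```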