
Fantastic Generalization Measures and Where to Find Them

Published by Hossein Mobahi
Publication date: 2019
Paper language: English





Generalization of deep networks has been of great interest in recent years, resulting in a number of theoretically and empirically motivated complexity measures. However, most papers proposing such measures study only a small set of models, leaving open the question of whether the conclusions drawn from those experiments would remain valid in other settings. We present the first large-scale study of generalization in deep networks. We investigate more than 40 complexity measures taken from both theoretical bounds and empirical studies. We train over 10,000 convolutional networks by systematically varying commonly used hyperparameters. Hoping to uncover potentially causal relationships between each measure and generalization, we analyze carefully controlled experiments and show surprising failures of some measures as well as promising measures for further research.
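The core evaluation in studies like this is a rank-correlation question: when a complexity measure says one trained network is "more complex" than another, does that network also show a larger generalization gap? A minimal sketch of that check, using a simple Kendall rank correlation on entirely synthetic, illustrative data (the variable names and the noise model are assumptions, not the paper's protocol):

```python
import numpy as np

def kendall_tau(x, y):
    """Simple O(n^2) Kendall rank correlation: +1 when the two rankings
    agree on every pair of models, -1 when they disagree on every pair."""
    n = len(x)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    s = sum(np.sign(x[i] - x[j]) * np.sign(y[i] - y[j]) for i, j in pairs)
    return s / len(pairs)

rng = np.random.default_rng(0)
# Hypothetical experiment: generalization gap (train minus test accuracy)
# of 200 trained networks, and a candidate complexity measure that happens
# to track the gap up to noise.
gen_gap = rng.uniform(0.0, 0.3, size=200)
measure = gen_gap + rng.normal(0.0, 0.05, size=200)

tau = kendall_tau(measure, gen_gap)
print(f"Kendall tau between measure and generalization gap: {tau:.2f}")
```

A measure with tau near 1 across many controlled hyperparameter variations is a promising predictor of generalization; a tau near 0, or one that flips sign across settings, is the kind of failure the study reports.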




Read also

Quantum interference on the kagome lattice generates electronic bands with narrow bandwidth, called flat bands. Crystal structures incorporating this lattice can host strong electron correlations with non-standard ingredients, but only if these bands lie at the Fermi level. In the six compounds with the CoSn structure type (FeGe, FeSn, CoSn, NiIn, RhPb, and PtTl) the transition metals form a kagome lattice. The two iron variants are robust antiferromagnets, so we focus on the latter four and investigate their thermodynamic and transport properties. We consider these results and calculated band structures to locate and characterize the flat bands in these materials. We propose that CoSn and RhPb deserve the community's attention for exploring flat band physics.
75 - John Antoniadis 2020
While the majority of massive stars have a stellar companion, most pulsars appear to be isolated. Taken at face value, this suggests that most massive binaries break apart due to strong natal kicks received in supernova explosions. However, the observed binary fraction can still be subject to strong selection effects, as monitoring of newly discovered pulsars is rarely carried out for long enough to conclusively rule out multiplicity. Here, we use the second Gaia Data Release (DR2) to search for companions to 1534 rotation-powered pulsars with positions known to better than 0.5 arcseconds. We find 22 matches to known pulsars, including one not reported elsewhere, and 8 new possible companions to young pulsars. We examine the photometric and kinematic properties of these systems and provide empirical relations for identifying Gaia sources with potential millisecond pulsar companions. Our results confirm that the observed multiplicity fraction is small. However, we show that the number of binaries below the sensitivity of Gaia and radio timing in our sample could still be significantly higher. We constrain the binary fraction of young pulsars to be $f_{\rm young}^{\rm true}\leq 5.3\,(8.3)\%$ under realistic (conservative) assumptions for the binary properties and current sensitivity thresholds. For massive stars ($\geq 10\,$M$_{\odot}$) in particular, we find $f_{\rm OB}^{\rm true}\leq 3.7\%$, which sets a firm independent upper limit on the galactic neutron-star merger rate, $\leq 7.2\times 10^{-4}$ yr$^{-1}$. Ongoing and future projects such as the CHIME/pulsar program, MeerTime, HIRAX and ultimately the SKA, will significantly improve these constraints in the future.
83 - John Antoniadis 2020
The Early Gaia Data Release 3 (EDR3) provides precise astrometry for nearly 1.5 billion sources across the entire sky. A few tens of these are associated with neutron stars in the Milky Way and Magellanic Clouds. Here, we report on a search for EDR3 counterparts to known rotation-powered pulsars using the method outlined in Antoniadis (2021). A cross-correlation between EDR3 and the ATNF pulsar catalogue identifies 41 close astrometric pairs ($< 0.5$ arcsec at the reference epoch of the pulsar position). Twenty-six of these are related to previously known optical counterparts, while the rest are candidate pairs that require further follow-up. Highlights include the Crab Pulsar (PSR B0531+21), for which EDR3 yields a distance of $2.08^{+0.78}_{-0.45}$ kpc (or $2.00_{-0.38}^{+0.56}$ kpc taking into account the dispersion-measure prior; errors indicate 95% confidence limits) and PSR J1638-4608, a pulsar thus far considered to be isolated that lies within 0.056 arcsec of a Gaia source.
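The cross-correlation step described above amounts to finding catalogue pairs whose angular separation is below a threshold. A minimal sketch with toy coordinates (the catalogue entries and names here are invented for illustration; the real search used the full EDR3 and ATNF catalogues):

```python
import math

def ang_sep_arcsec(ra1, dec1, ra2, dec2):
    """Angular separation between two sky positions (degrees in, arcsec out),
    via the haversine formula, which stays accurate at small separations."""
    ra1, dec1, ra2, dec2 = map(math.radians, (ra1, dec1, ra2, dec2))
    d = 2 * math.asin(math.sqrt(
        math.sin((dec2 - dec1) / 2) ** 2 +
        math.cos(dec1) * math.cos(dec2) * math.sin((ra2 - ra1) / 2) ** 2))
    return math.degrees(d) * 3600.0

# Hypothetical catalogues: (name, ra_deg, dec_deg)
pulsars = [("PSR_A", 83.6331, 22.0145), ("PSR_B", 250.0, -46.1)]
gaia = [("G1", 83.63311, 22.01451), ("G2", 120.0, 10.0)]

# Keep only astrometric pairs closer than the 0.5 arcsec threshold.
pairs = [(p, g) for p, pra, pdec in pulsars for g, gra, gdec in gaia
         if ang_sep_arcsec(pra, pdec, gra, gdec) < 0.5]
print(pairs)
```

A production cross-match would also propagate Gaia proper motions to the pulsar position's reference epoch before comparing, since a fast-moving star can drift well beyond 0.5 arcsec between epochs.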
A single space-based gravitational wave detector will push the boundaries of astronomy and fundamental physics. Having a network of two or more detectors would significantly improve source localization. Here we consider how dual networks of space-based detectors would improve parameter estimation of massive black hole binaries. We consider two scenarios: a network comprised of the Laser Interferometer Space Antenna (LISA) and an additional LISA-like heliocentric detector (e.g. Taiji); and a network comprised of LISA with an additional geocentric detector (e.g. TianQin). We use Markov chain Monte Carlo techniques and Fisher matrix estimates to explore the impact of a two-detector network on sky localization and distance determination. The impact on other source parameters is also studied. With the addition of a Taiji or TianQin, we find orders of magnitude improvements in sky localization for the more massive MBHBs, while also seeing improvements for lower mass systems, and for other source parameters.
When primed with only a handful of training samples, very large pretrained language models such as GPT-3 have shown competitive results when compared to fully-supervised fine-tuned large pretrained language models. We demonstrate that the order in which the samples are provided can be the difference between near state-of-the-art and random-guess performance: essentially, some permutations are fantastic and some are not. We analyse this phenomenon in detail, establishing that: it is present across model sizes (even for the largest current models), it is not related to a specific subset of samples, and a given good permutation for one model is not transferable to another. While one could use a development set to determine which permutations are performant, this would deviate from the few-shot setting as it requires additional annotated data. Instead, we use the generative nature of the language models to construct an artificial development set and, based on entropy statistics of the candidate permutations on this set, we identify performant prompts. Our method improves upon GPT-family models by on average 13% relative across eleven different established text classification tasks.
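The selection idea in that abstract can be caricatured in a few lines: score each candidate permutation of the training samples by the entropy of the label distribution it induces on a probing set, and prefer permutations that do not collapse onto a single label. The sketch below fakes the language model with a hard-coded table (the `fake_predictions` mapping and sample names are stand-ins, not the paper's actual scoring):

```python
import itertools
import math
from collections import Counter

def label_entropy(predictions):
    """Entropy of the predicted-label distribution. Near-zero entropy means
    the prompt pushes the model to one label regardless of input: a bad sign."""
    counts = Counter(predictions)
    total = sum(counts.values())
    return -sum((c / total) * math.log(c / total) for c in counts.values())

# Stand-in for an LM: each permutation of the 4 training samples induces
# some predicted labels on a 4-item probing set. Here, only permutations
# starting with "s0" yield balanced predictions; the rest collapse to "pos".
samples = ("s0", "s1", "s2", "s3")
fake_predictions = {
    perm: ["pos", "neg", "pos", "neg"] if perm[0] == "s0" else ["pos"] * 4
    for perm in itertools.permutations(samples)
}

# Rank all 4! = 24 permutations by entropy, highest (most balanced) first.
ranked = sorted(fake_predictions,
                key=lambda p: label_entropy(fake_predictions[p]), reverse=True)
best = ranked[0]
print("best permutation:", best)
```

In the real method the probing set itself is generated by the language model, so no extra annotated data is needed; this sketch only shows the ranking step.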
