
Centralizers in good groups are good

Published by Nathaniel Stapleton
Publication date: 2014
Research field:
Language: English





We modify the transchromatic character maps to land in a faithfully flat extension of Morava E-theory. Our construction makes use of the interaction between topological and algebraic localization and completion. As an application we prove that centralizers of tuples of commuting prime-power order elements in good groups are good and we compute a new example.




Read also

Recent years have witnessed the rapid advance in neural machine translation (NMT), the core of which lies in the encoder-decoder architecture. Inspired by the recent progress of large-scale pre-trained language models on machine translation in a limited scenario, we firstly demonstrate that a single language model (LM4MT) can achieve comparable performance with strong encoder-decoder NMT models on standard machine translation benchmarks, using the same training data and similar amount of model parameters. LM4MT can also easily utilize source-side texts as additional supervision. Though modeling the source- and target-language texts with the same mechanism, LM4MT can provide unified representations for both source and target sentences, which can better transfer knowledge across languages. Extensive experiments on pivot-based and zero-shot translation tasks show that LM4MT can outperform the encoder-decoder NMT model by a large margin.
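As a loose illustration of the decoder-only setup this abstract describes (not the authors' code), the sketch below treats translation as plain left-to-right continuation of a concatenated source/target sequence; the checkpoint name "gpt2" and the " <sep> " separator are placeholder assumptions for illustration only.

```python
# Minimal sketch of translation with a single causal LM (decoder-only),
# in the spirit of the LM4MT idea; the checkpoint and separator convention
# below are placeholders, not the paper's model or format.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder causal LM; LM4MT trains its own multilingual LM
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

source = "Maschinelle Uebersetzung ist schwierig."
prompt = source + " <sep> "  # hypothetical source/target separator
inputs = tokenizer(prompt, return_tensors="pt")

# The "translation" is simply the LM's continuation after the separator;
# training with an LM loss over such concatenations is what lets source-side
# text act as additional supervision.
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=False)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```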
We propose a new framework for reasoning about generalization in deep learning. The core idea is to couple the Real World, where optimizers take stochastic gradient steps on the empirical loss, to an Ideal World, where optimizers take steps on the population loss. This leads to an alternate decomposition of test error into: (1) the Ideal World test error plus (2) the gap between the two worlds. If the gap (2) is universally small, this reduces the problem of generalization in offline learning to the problem of optimization in online learning. We then give empirical evidence that this gap between worlds can be small in realistic deep learning settings, in particular supervised image classification. For example, CNNs generalize better than MLPs on image distributions in the Real World, but this is because they optimize faster on the population loss in the Ideal World. This suggests our framework is a useful tool for understanding generalization in deep learning, and lays a foundation for future research in the area.
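Restating the decomposition from this abstract as a formula (notation chosen here for illustration): if $f_t^{\mathrm{Real}}$ is the model after $t$ optimizer steps on the empirical loss and $f_t^{\mathrm{Ideal}}$ the model after $t$ steps on the population loss, then

\[
\underbrace{\mathrm{TestErr}\!\left(f_t^{\mathrm{Real}}\right)}_{\text{offline generalization}}
=
\underbrace{\mathrm{TestErr}\!\left(f_t^{\mathrm{Ideal}}\right)}_{\text{(1) online optimization}}
+
\underbrace{\mathrm{TestErr}\!\left(f_t^{\mathrm{Real}}\right)-\mathrm{TestErr}\!\left(f_t^{\mathrm{Ideal}}\right)}_{\text{(2) gap between the worlds}},
\]

so whenever the gap term (2) is uniformly small, bounding test error reduces to tracking how fast the Ideal World optimizer drives down the population loss.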
In a triangulated category, cofibre fill-ins always exist. Neeman showed that there is always at least one good fill-in, i.e., one whose mapping cone is exact. Verdier constructed a fill-in of a particular form in his proof of the $4 \times 4$ lemma, which we call Verdier good. We show that for several classes of morphisms of exact triangles, the notions of good and Verdier good agree. We prove a lifting criterion for commutative squares in terms of (Verdier) good fill-ins. Using our results on good fill-ins, we also prove a pasting lemma for homotopy cartesian squares.
The article deals with profinite groups in which the centralizers are abelian (CA-groups), that is, with profinite commutativity-transitive groups. It is shown that such groups are virtually pronilpotent. More precisely, let G be a profinite CA-group. It is shown that G has a normal open subgroup N which is either abelian or pro-p. Further, rather detailed information about the finite quotient G/N is obtained.
In GFG automata, it is possible to resolve nondeterminism in a way that only depends on the past and still accepts all the words in the language. The motivation for GFG automata comes from their adequacy for games and synthesis, wherein general nondeterminism is inappropriate. We continue the ongoing effort of studying the power of nondeterminism in GFG automata. Initial indications have hinted that every GFG automaton embodies a deterministic one. Today we know that this is not the case, and in fact GFG automata may be exponentially more succinct than deterministic ones. We focus on the typeness question, namely the question of whether a GFG automaton with a certain acceptance condition has an equivalent GFG automaton with a weaker acceptance condition on the same structure. Beyond the theoretical interest in studying typeness, its existence implies efficient translations among different acceptance conditions. This practical issue is of special interest in the context of games, where the Büchi and co-Büchi conditions admit memoryless strategies for both players. Typeness is known to hold for deterministic automata and not to hold for general nondeterministic automata. We show that GFG automata enjoy the benefits of typeness, similarly to the case of deterministic automata. In particular, when Rabin or Streett GFG automata have equivalent Büchi or co-Büchi GFG automata, respectively, then such equivalent automata can be defined on a substructure of the original automata. Using our typeness results, we further study the place of GFG automata in between deterministic and nondeterministic ones. Specifically, considering automata complementation, we show that GFG automata lean toward nondeterministic ones, admitting an exponential state blow-up in the complementation of a Streett automaton into a Rabin automaton, as opposed to the constant blow-up in the deterministic case.