
The combinatorics of Jeff Remmel

Published by Sergey Kitaev
Publication date: 2021
Research field
Paper language: English





We give a brief overview of the life and combinatorics of Jeff Remmel, a mathematician with successful careers in both logic and combinatorics.




Read also

David Ellis, 2021
The study of intersection problems in Extremal Combinatorics dates back perhaps to 1938, when Paul Erdős, Chao Ko and Richard Rado proved the (first) Erdős-Ko-Rado theorem on the maximum possible size of an intersecting family of $k$-element subsets of a finite set. Since then, a plethora of results of a similar flavour have been proved, for a range of different mathematical structures, using a wide variety of different methods. Structures studied in this context have included families of vector subspaces, families of graphs, subsets of finite groups with given group actions, and of course uniform hypergraphs with stronger or weaker intersection conditions imposed. The methods used have included purely combinatorial ones such as shifting/compressions, algebraic methods (including linear-algebraic, Fourier analytic and representation-theoretic), and more recently, analytic, probabilistic and regularity-type methods. As well as being natural problems in their own right, intersection problems have connections with many other parts of Combinatorics and with Theoretical Computer Science (and indeed with many other parts of Mathematics), both through the results themselves, and the methods used. In this survey paper, we discuss both old and new results (and both old and new methods), in the field of intersection problems. Many interesting open problems remain; we will discuss several. For expositional and pedagogical purposes, we also take this opportunity to give slightly streamlined …
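The Erdős-Ko-Rado theorem mentioned in this abstract states that, for $n \ge 2k$, a pairwise-intersecting family of $k$-element subsets of an $n$-element set has at most $\binom{n-1}{k-1}$ members. As a minimal sketch (the function name is ours, not from the survey), the bound can be confirmed by exhaustive search for tiny parameters:

```python
from itertools import combinations
from math import comb

def max_intersecting_family(n, k):
    """Brute-force the largest pairwise-intersecting family of
    k-subsets of {1, ..., n}. Feasible only for very small n, k."""
    sets = [frozenset(c) for c in combinations(range(1, n + 1), k)]
    best = 0
    # Enumerate every subfamily as a bitmask over the list of k-sets.
    for mask in range(1 << len(sets)):
        fam = [s for i, s in enumerate(sets) if mask >> i & 1]
        if all(a & b for a in fam for b in fam):
            best = max(best, len(fam))
    return best

# For n = 5, k = 2 the maximum is comb(4, 1) = 4, attained by the
# "star" of all pairs containing one fixed element.
```

The star of all $k$-sets through a fixed element always attains the bound; the content of the theorem is that nothing larger is possible.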
Background: We study the sparsification of dynamic programming folding algorithms for RNA structures. Sparsification applies to the mfe-folding of RNA structures and can lead to a significant reduction in time complexity. Results: We analyze the sparsification of a particular decomposition rule, $\Lambda^*$, that splits an interval for RNA secondary and pseudoknot structures of fixed topological genus. Essential for quantifying the sparsification is the size of its so-called candidate set. We present a combinatorial framework which, by means of probabilities of irreducible substructures, yields the expected size of the set of $\Lambda^*$-candidates. We compute these expectations for arc-based energy models via energy-filtered generating functions (GF) for RNA secondary structures as well as RNA pseudoknot structures. For RNA secondary structures we also consider a simplified loop-energy model. This combinatorial analysis is then compared to the expected number of $\Lambda^*$-candidates obtained from folding mfe-structures. In the case of mfe-folding of RNA secondary structures with a simplified loop-energy model, our results imply that sparsification provides a reduction of time complexity by a constant factor of 91% (theory) versus a 96% reduction (experiment). For the full loop-energy model there is a reduction of 98% (experiment).
We give a geometric realization of the polyhedra governed by the structure of associative algebras with co-inner products, or more precisely, governed by directed planar trees. Our explicit realization of these polyhedra, which include the associahedra as a special case, shows in particular that these polyhedra are homeomorphic to balls. We also calculate the number of vertices of the lowest generalized associahedra, giving appropriate generalizations of the Catalan numbers.
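The Catalan numbers that this abstract generalizes count, among many other things, the vertices of the classical associahedron (equivalently, the triangulations of a convex polygon). A minimal sketch computing them from the closed form $C_n = \binom{2n}{n}/(n+1)$:

```python
from math import comb

def catalan(n):
    """n-th Catalan number: C_n = binom(2n, n) / (n + 1)."""
    return comb(2 * n, n) // (n + 1)

# Triangulations of a convex (n+2)-gon, i.e. the vertices of the
# classical associahedron, are counted by C_n: 1, 1, 2, 5, 14, 42, ...
```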
We present a motivated exposition of the proof of the following Tverberg Theorem: for all integers $d,r$, any $(d+1)(r-1)+1$ points in $\mathbb{R}^d$ can be decomposed into $r$ groups such that all $r$ convex hulls of the groups have a common point. The proof is by a well-known reduction to the Bárány Theorem. However, our exposition is easier to grasp because the additional constructions (an embedding $\mathbb{R}^d \subset \mathbb{R}^{d+1}$, the vectors $\varphi_{j,i}$, and the statement of the Bárány Theorem) are not introduced in advance in an unmotivated way, but appear naturally in an attempt to construct the required decomposition. This attempt is based on rewriting several equalities between vectors as one equality between vectors of higher dimension.
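For intuition, the theorem is easy to check by brute force in the simplest case $d=1$, where convex hulls are just intervals. The sketch below (function names are ours, not from the paper) searches all labelings of the points into $r$ groups and tests whether the resulting intervals share a point:

```python
from itertools import product

def tverberg_partition_1d(points, r):
    """Brute-force a Tverberg partition for d = 1: split the points into
    r nonempty groups whose convex hulls (intervals) have a common point.
    Returns the groups, or None if no such partition exists."""
    n = len(points)
    for labels in product(range(r), repeat=n):
        groups = [[points[i] for i in range(n) if labels[i] == g]
                  for g in range(r)]
        if any(not g for g in groups):
            continue
        lo = max(min(g) for g in groups)  # left end of the intersection
        hi = min(max(g) for g in groups)  # right end of the intersection
        if lo <= hi:
            return groups
    return None

# With d = 1, r = 3 the theorem needs (d+1)(r-1)+1 = 5 points, and
# tverberg_partition_1d([0, 1, 2, 3, 4], 3) indeed finds a partition,
# while 2 points never suffice for r = 2.
```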
The finite colliding bullets problem is the following simple problem: consider a gun whose barrel remains in a fixed direction; let $(V_i)_{1\le i\le n}$ be an i.i.d. family of random variables with uniform distribution on $[0,1]$; shoot $n$ bullets one after another at times $1,2,\dots,n$, where the $i$th bullet has speed $V_i$. When two bullets collide, they both annihilate. We give the distribution of the number of surviving bullets in this model and in some generalisations of it. While the distribution is relatively simple (and we found a number of bold claims online), our proof is surprisingly intricate and mixes combinatorial and geometric arguments; we argue that any rigorous argument must very likely be rather elaborate.
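The model described above is easy to simulate, even though the exact distribution is hard to prove. A minimal event-driven sketch (our code, not the authors'): since positions are continuous, only bullets adjacent in firing order can collide, so we repeatedly annihilate the adjacent pair with the earliest collision time until none remains.

```python
def surviving_bullets(speeds):
    """Simulate the finite colliding bullets problem.

    Bullet i (0-indexed) is fired at time i + 1 with speed speeds[i];
    a later, faster bullet catches the slower one ahead of it and both
    annihilate.  Returns the number of bullets that survive forever."""
    # (fire_time, speed) in firing order; earlier-fired bullets are ahead.
    bullets = [(i + 1, v) for i, v in enumerate(speeds)]
    while True:
        best = None  # (collision_time, index of the leading bullet)
        for k in range(len(bullets) - 1):
            fi, vi = bullets[k]
            fj, vj = bullets[k + 1]
            if vj > vi:  # the later bullet is faster and will catch up
                # Solve vi * (t - fi) = vj * (t - fj) for the meeting time.
                t = (vj * fj - vi * fi) / (vj - vi)
                if best is None or t < best[0]:
                    best = (t, k)
        if best is None:          # no future collisions remain
            return len(bullets)
        del bullets[best[1]:best[1] + 2]  # annihilate the colliding pair

# e.g. surviving_bullets([0.2, 0.9]) == 0: the faster second bullet
# catches the first, and both annihilate.
```

Speeds are almost surely distinct under the uniform distribution, so ties (which never collide here) can be ignored; averaging over many random draws of `speeds` estimates the survivor distribution the paper derives exactly.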