
Foundations of geometric approximate group theory

Posted by Matthew Cordes
Publication date: 2020
Research language: English





We develop the foundations of a geometric theory of countably-infinite approximate groups, extending work of Bjorklund and the second-named author. Our theory is based on the notion of a quasi-isometric quasi-action (qiqac) of an approximate group on a metric space. More specifically, we introduce a geometric notion of finite generation for approximate groups and prove that every geometrically finitely-generated approximate group admits a geometric qiqac on a proper geodesic metric space. We then show that all such spaces are quasi-isometric, hence can be used to associate a canonical QI type with every geometrically finitely-generated approximate group. This in turn allows us to define geometric invariants of approximate groups using QI invariants of metric spaces. Among the invariants we consider are asymptotic dimension, finiteness properties, number of ends and growth type. A particular focus is on qiqacs on hyperbolic spaces. Our strongest results are obtained for approximate groups which admit a geometric qiqac on a proper geodesic hyperbolic space. For such "hyperbolic approximate groups" we establish a number of fundamental properties in analogy with the case of hyperbolic groups. For example, we show that their asymptotic dimension is one larger than the topological dimension of their Gromov boundary and that, under a mild assumption of being "non-elementary", they have exponential growth and act minimally on their Gromov boundary. We also study convex cocompact qiqacs on hyperbolic spaces. Using the theory of Morse boundaries, we extend some of our results concerning qiqacs on hyperbolic spaces to qiqacs on proper geodesic metric spaces with non-trivial Morse boundary.
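The abstract does not define its central object, so for orientation we recall the standard definition of a $K$-approximate group going back to Tao's work in additive combinatorics; the notation here is ours, not taken from the paper. Let $G$ be a group and $K \geq 1$. A subset $\Lambda \subseteq G$ is a $K$-approximate subgroup if (i) $e \in \Lambda$ and $\Lambda = \Lambda^{-1}$, and (ii) there is a finite set $F \subseteq G$ with $|F| \leq K$ such that $\Lambda \cdot \Lambda \subseteq F \cdot \Lambda$. A genuine subgroup is exactly a $1$-approximate subgroup, so the theory above strictly generalizes geometric group theory.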




Read also

A geometric setup for control theory is presented. The argument is developed through the study of the extremals of action functionals defined on piecewise differentiable curves, in the presence of differentiable non-holonomic constraints. Special emphasis is put on the tensorial aspects of the theory. To start with, the kinematical foundations, culminating in the so-called variational equation, are put on geometrical grounds, via the introduction of the concept of infinitesimal control. On the same basis, the usual classification of the extremals of a variational problem into normal and abnormal ones is also rationalized, showing the existence of a purely kinematical algorithm assigning to each admissible curve a corresponding abnormality index, defined in terms of a suitable linear map. The whole machinery is then applied to constrained variational calculus. The argument provides an interesting revisitation of the Pontryagin maximum principle and of the Erdmann-Weierstrass corner conditions, as well as a proof of the classical Lagrange multipliers method and a local interpretation of Pontryagin's equations as dynamical equations for a free (singular) Hamiltonian system. As a final, highly non-trivial topic, a sufficient condition for the existence of finite deformations with fixed endpoints is explicitly stated and proved.
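For readers unfamiliar with the maximum principle revisited above, its textbook coordinate formulation reads as follows; this is the standard statement, recalled for reference, not an excerpt from the paper. For the control system $\dot{x} = f(x, u)$ with cost functional $\int_0^T L(x, u)\, dt$, define the Hamiltonian $H(x, p, u) = \langle p, f(x, u) \rangle - p_0\, L(x, u)$. Along an optimal pair $(x^*, u^*)$ there exist a constant $p_0 \geq 0$ and a costate $p(t)$, not both zero, satisfying $\dot{x}^* = \partial H / \partial p$, $\dot{p} = -\partial H / \partial x$, and the maximality condition $H(x^*(t), p(t), u^*(t)) = \max_u H(x^*(t), p(t), u)$ for almost every $t$. The degenerate case $p_0 = 0$ corresponds to the abnormal extremals whose classification the paper rationalizes.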
Nic Koban, Peter Wong (2012)
In this paper, we compute the $\Sigma^n(G)$ and $\Omega^n(G)$ invariants when $1 \to H \to G \to K \to 1$ is a short exact sequence of finitely generated groups with $K$ finite. We also give sufficient conditions for $G$ to have the $R_\infty$ property in terms of $\Omega^n(H)$ and $\Omega^n(K)$ when either $K$ is finite or the sequence splits. As an application, we construct a group $F \rtimes \mathbb{Z}_2$, where $F$ is R. Thompson's group $F$, and show that $F \rtimes \mathbb{Z}_2$ has the $R_\infty$ property while $F$ is not characteristic.
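For context, the first of these invariants, $\Sigma^1(G)$, is the classical Bieri-Neumann-Strebel invariant; we recall its usual definition here for reference (it is not quoted from the paper). Let $G$ be finitely generated and let $S(G) = (\mathrm{Hom}(G, \mathbb{R}) \setminus \{0\}) / \mathbb{R}_{>0}$ be its character sphere. For a character $\chi$, let $\Gamma_\chi$ denote the full subgraph of a Cayley graph of $G$ spanned by $\{ g \in G : \chi(g) \geq 0 \}$. Then $\Sigma^1(G)$ consists of those classes $[\chi] \in S(G)$ for which $\Gamma_\chi$ is connected; the higher invariants $\Sigma^n(G)$ impose higher connectivity conditions.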
Ce Ju (2020)
The purpose of this paper is to write a complete survey of the (spectral) manifold learning methods and nonlinear dimensionality reduction (NLDR) in data reduction. The first two NLDR methods in history were both published in Science in 2000, and both solve the same reduction problem for high-dimensional data endowed with an intrinsic nonlinear structure. The intrinsic nonlinear structure is usually interpreted by computer scientists and theoretical physicists as a concept of manifolds from geometry and topology in theoretical mathematics. In 2001, the concept of manifold learning first appeared as an NLDR method called Laplacian Eigenmaps, proposed by Belkin and Niyogi. In the typical manifold learning setup, the data set, also called the observation set, is distributed on or near a low-dimensional manifold $M$ embedded in $\mathbb{R}^D$, so that each observation has a $D$-dimensional representation. The goal of (spectral) manifold learning is to reduce these observations to a compact lower-dimensional representation based on the geometric information. The reduction procedure is called the (spectral) manifold learning method. In this paper, we derive each (spectral) manifold learning method with the matrix and operator representation, and we then discuss the convergence behavior of each method in a uniform geometric language. Hence, we name the survey Geometric Foundations of Data Reduction.
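Since the survey centers on spectral methods, a minimal sketch of Laplacian Eigenmaps may help fix ideas. The function name, the parameter choices ($k$-nearest-neighbour graph, heat-kernel weights) and the toy data below are our illustrative assumptions, not code from the survey; only standard numpy/scipy calls are used.

import numpy as np
from scipy.linalg import eigh

def laplacian_eigenmaps(X, n_components=2, n_neighbors=10, sigma=1.0):
    # X: (n_samples, D) array of observations near a manifold in R^D.
    n = X.shape[0]
    # Pairwise squared Euclidean distances.
    sq = np.sum(X ** 2, axis=1)
    d2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * X @ X.T, 0.0)
    # k-nearest-neighbour graph with heat-kernel weights, symmetrized.
    idx = np.argsort(d2, axis=1)[:, 1:n_neighbors + 1]  # column 0 is the point itself
    W = np.zeros((n, n))
    rows = np.repeat(np.arange(n), n_neighbors)
    cols = idx.ravel()
    W[rows, cols] = np.exp(-d2[rows, cols] / (2.0 * sigma ** 2))
    W = np.maximum(W, W.T)  # make the graph undirected
    # Generalized eigenproblem L y = lambda D y, with L = D - W the graph Laplacian.
    D = np.diag(W.sum(axis=1))
    L = D - W
    vals, vecs = eigh(L, D)
    # Discard the constant eigenvector at eigenvalue 0; keep the next n_components.
    return vecs[:, 1:n_components + 1]

# Toy usage: a noisy helix in R^3 embedded down to R^2.
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 3.0 * np.pi, 300))
X = np.c_[np.cos(t), np.sin(t), t / 3.0] + 0.05 * rng.normal(size=(300, 3))
Y = laplacian_eigenmaps(X, n_components=2)
print(Y.shape)  # (300, 2)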
Nic Koban, Peter Wong (2011)
In this note, we compute the $\Sigma^1(G)$ invariant when $1 \to H \to G \to K \to 1$ is a short exact sequence of finitely generated groups with $K$ finite. As an application, we construct a group $F \rtimes \mathbb{Z}_2$, where $F$ is R. Thompson's group $F$, and show that $F \rtimes \mathbb{Z}_2$ has the $R_\infty$ property while $F$ is not characteristic. Furthermore, we construct a finite extension $G$ with finitely generated commutator subgroup $G'$ which nevertheless has a finite index normal subgroup $H$ with infinitely generated commutator subgroup $H'$.