The design and implementation of parallel algorithms is a fundamental task in computer algebra. Combining the computer algebra system Singular and the workflow management system GPI-Space, we have developed an infrastructure for massively parallel computations in commutative algebra and algebraic geometry. In this note, we give an overview of the current capabilities of this framework through three sample applications: determining smoothness of algebraic varieties, computing GIT-fans in geometric invariant theory, and determining tropicalizations. These applications employ algorithmic methods originating from commutative algebra, sheaf structures on manifolds, local geometry, convex geometry, group theory, and combinatorics, illustrating the potential of the framework for further problems in computer algebra.
The Massively Parallel Computation (MPC) model serves as a common abstraction of many modern large-scale data processing frameworks, and has been receiving increasing attention over the past few years, especially in the context of classical graph problems. So far, the only way to argue lower bounds for this model is to condition on conjectures about the hardness of some specific problems, such as graph connectivity on promise graphs that are either one cycle or two cycles, usually called the one cycle vs. two cycles problem. This is unlike the traditional arguments based on conjectures about complexity classes (e.g., $\textsf{P} \neq \textsf{NP}$), which are often more robust in the sense that refuting them would lead to groundbreaking algorithms for a wide range of problems. In this paper we present connections between problems and classes of problems that allow the latter type of arguments. These connections concern the class of problems solvable in a sublogarithmic number of rounds in the MPC model, denoted by $\textsf{MPC}(o(\log N))$, and some standard classes concerning space complexity, namely $\textsf{L}$ and $\textsf{NL}$, and suggest conjectures that are robust in the sense that refuting them would lead to many surprisingly fast new algorithms in the MPC model. We also obtain new conditional lower bounds, and prove new reductions and equivalences between problems in the MPC model.
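To make the hardness conjecture concrete, the following minimal sketch (sequential, not an MPC algorithm; all names are illustrative) states the one cycle vs. two cycles problem: given a 2-regular graph on $N$ vertices that is promised to be either a single $N$-cycle or two disjoint cycles, decide which. Sequentially this is trivial via union-find; the conjecture discussed in the abstract is that distinguishing the two cases requires $\Omega(\log N)$ rounds in the MPC model.

```python
def count_cycles(n, edges):
    """Count connected components of a 2-regular graph via union-find.

    For the promise problem, the answer is 1 (one cycle) or 2 (two cycles).
    """
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    components = n
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            components -= 1
    return components

# One 6-cycle vs. two 3-cycles on the same vertex set:
one_cycle = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0)]
two_cycles = [(0, 1), (1, 2), (2, 0), (3, 4), (4, 5), (5, 3)]
```

The sequential solution runs in near-linear time; the point of the conjecture is that no known MPC algorithm with sublinear memory per machine beats $O(\log N)$ rounds even on this promise version of connectivity.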
This is an expanded version of the two papers Interpolation of Varieties of Minimal Degree and Interpolation Problems: Del Pezzo Surfaces. It is well known that one can find a rational normal curve in $\mathbb{P}^n$ through $n+3$ general points. More recently, it was shown that one can always find nonspecial curves through the expected number of general points and linear spaces. After some expository material regarding scrolls, we consider the generalization of this question to varieties of all dimensions and explain why smooth varieties of minimal degree satisfy interpolation. We give twenty-two equivalent formulations of interpolation. We also classify when Castelnuovo curves satisfy weak interpolation. In the appendix, we prove that del Pezzo surfaces satisfy weak interpolation. Our techniques for proving interpolation include deformation theory, degeneration and specialization, and association.
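The classical count behind the first statement can be sketched as follows (a standard dimension count, included here only as illustration): rational normal curves in $\mathbb{P}^n$ form a single $\mathrm{PGL}_{n+1}$-orbit, and the stabilizer of one such curve is isomorphic to $\mathrm{PGL}_2$, so

```latex
\[
  \dim \{\text{rational normal curves in } \mathbb{P}^n\}
    = \dim \mathrm{PGL}_{n+1} - \dim \mathrm{PGL}_2
    = \bigl((n+1)^2 - 1\bigr) - 3
    = (n-1)(n+3).
\]
```

Since passing through a fixed point of $\mathbb{P}^n$ imposes $n-1$ conditions on the curve, the expected number of general points a rational normal curve can interpolate is $(n-1)(n+3)/(n-1) = n+3$, matching the classical result.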
We give scheme-theoretic descriptions of the category of fibre functors on the categories of sheaves associated to the Zariski, Nisnevich, étale, rh, cdh, ldh, eh, qfh, and h topologies on the category of separated schemes of finite type over a separated noetherian base. Combined with a theorem of Deligne on the existence of enough points, this provides an algebro-geometric description of a conservative family of fibre functors on these categories of sheaves. As an example of an application, we show that direct image along a closed immersion is exact for all these topologies except qfh. The methods are transportable to other categories of sheaves as well.
Particle-in-cell methods couple mesh-based methods for the solution of continuum mechanics problems with the ability to advect and evolve particles. They have a long history and many applications in scientific computing. However, they have most often only been implemented for either sequential codes, or parallel codes with static, statically partitioned meshes. In contrast, many mesh-based codes today use adaptively changing, dynamically partitioned meshes, and can scale to thousands or tens of thousands of processors. Consequently, there is a need to revisit the data structures and algorithms necessary to use particle methods with modern, mesh-based methods. Here we review commonly encountered requirements of particle-in-cell methods, and describe efficient ways to implement them in the context of large-scale parallel finite-element codes that use dynamically changing meshes. We also share practical experience in addressing bottlenecks that impede the efficient implementation of these algorithms, and demonstrate with numerical tests both that our algorithms can be implemented with optimal complexity and that they are suitable for very large-scale, practical applications. We provide a reference implementation in ASPECT, an open source code for geodynamic mantle-convection simulations built on the deal.II library.
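The core loop the abstract refers to can be sketched as follows (a minimal sequential sketch on a uniform 2D grid with hypothetical names; real codes such as ASPECT work on unstructured, adaptively refined meshes, where the "find the owning cell" step and the transfer of particles between processor subdomains are precisely the hard parts the paper addresses):

```python
def owning_cell(x, y, h):
    """Map a particle position to the index of its owning grid cell
    on a uniform grid with spacing h."""
    return (int(x // h), int(y // h))

def advect(particles, velocity, dt, h):
    """Forward-Euler advection of particles through a velocity field,
    re-binning each particle into the cell that now contains it."""
    cells = {}
    for (x, y) in particles:
        vx, vy = velocity(x, y)
        x, y = x + dt * vx, y + dt * vy
        cells.setdefault(owning_cell(x, y, h), []).append((x, y))
    return cells

# Rigid rotation about the origin as a toy velocity field:
rotation = lambda x, y: (-y, x)
binned = advect([(1.0, 0.0), (0.0, 1.0)], rotation, dt=0.1, h=0.5)
```

On a dynamically partitioned mesh, the dictionary of cells above becomes a distributed data structure, and particles whose new cell lies on another processor must be serialized and shipped there after each advection step.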
We develop the framework for augmented homotopical algebraic geometry. This is an extension of homotopical algebraic geometry, which itself is a homotopification of classical algebraic geometry. To do so, we define the notion of augmentation categories, which are a special class of generalised Reedy categories. For an augmentation category, we prove the existence of a closed Quillen model structure on the presheaf category which is compatible with the Kan-Quillen model structure on simplicial sets. Moreover, we use the concept of augmented hypercovers to define a local model structure on the category of augmented presheaves. We prove that crossed simplicial groups, and the planar rooted tree category are examples of augmentation categories. Finally, we introduce a method for generating new examples from old via a categorical pushout construction.