
AlSub: Fully Parallel and Modular Subdivision

Published by: Daniel Mlakar
Publication date: 2018
Research field: Informatics Engineering
Paper language: English

In recent years, mesh subdivision, the process of forging smooth free-form surfaces from coarse polygonal meshes, has become an indispensable production instrument. Although subdivision performance is crucial during simulation, animation and rendering, state-of-the-art approaches still rely on serial implementations for complex parts of the subdivision process. Therefore, they often fail to harness the power of modern parallel devices, like the graphics processing unit (GPU), for large parts of the algorithm and must resort to time-consuming serial preprocessing. In this paper, we show that a complete parallelization of the subdivision process for modern architectures is possible. Building on sparse matrix linear algebra, we show how to structure the complete subdivision process into a sequence of algebra operations. By restructuring and grouping these operations, we adapt the process for different use cases, such as regular subdivision of dynamic meshes, uniform subdivision for immutable topology, and feature-adaptive subdivision for efficient rendering of animated models. As the same machinery is used for all use cases, identical subdivision results are achieved in all parts of the production pipeline. As a second contribution, we show how these linear algebra formulations can effectively be translated into efficient GPU kernels. Applying our strategies to $\sqrt{3}$, Loop and Catmull-Clark subdivision shows significant speedups of our approach compared to state-of-the-art solutions, while we completely avoid serial preprocessing.
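
The construction is easiest to see in code. Below is a minimal sketch, assuming a triangle mesh given as vertex positions and index triples, of how one refinement level becomes a single sparse-matrix product V_next = S @ V. The stencil here is plain midpoint refinement rather than the Loop, Catmull-Clark or $\sqrt{3}$ weights the paper targets, and SciPy stands in for the GPU kernels.

```python
# A minimal sketch of the paper's core idea: one subdivision level as
# sparse linear algebra, V_next = S @ V. The operator below encodes
# simple midpoint refinement (old vertices kept, one new vertex per
# edge); real subdivision schemes only change the stencil weights.
import numpy as np
import scipy.sparse as sp

def midpoint_subdivision_matrix(faces, n_verts):
    """Build the sparse operator mapping old vertex positions to the
    refined vertex set. Building the refined face list is omitted."""
    edges = set()
    for a, b, c in faces:
        for u, v in ((a, b), (b, c), (c, a)):
            edges.add((min(u, v), max(u, v)))
    edges = sorted(edges)

    rows, cols, vals = [], [], []
    for i in range(n_verts):            # old vertices copied unchanged
        rows.append(i); cols.append(i); vals.append(1.0)
    for k, (u, v) in enumerate(edges):  # midpoints average endpoints
        r = n_verts + k
        rows += [r, r]; cols += [u, v]; vals += [0.5, 0.5]
    return sp.csr_matrix((vals, (rows, cols)),
                         shape=(n_verts + len(edges), n_verts))

V = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.]])
F = [(0, 1, 2)]
S = midpoint_subdivision_matrix(F, len(V))
V_next = S @ V   # on the GPU this is a single SpMV-style kernel
```

Because every level is just another sparse operator, levels compose as matrix products (S2 @ S1 @ V), which is what allows the operations to be regrouped for the different use cases listed above.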

Read also

This paper introduces Neural Subdivision, a novel framework for data-driven coarse-to-fine geometry modeling. During inference, our method takes a coarse triangle mesh as input and recursively subdivides it to a finer geometry by applying the fixed topological updates of Loop Subdivision, but predicting vertex positions using a neural network conditioned on the local geometry of a patch. This approach enables us to learn complex non-linear subdivision schemes, beyond simple linear averaging used in classical techniques. One of our key contributions is a novel self-supervised training setup that only requires a set of high-resolution meshes for learning network weights. For any training shape, we stochastically generate diverse low-resolution discretizations of coarse counterparts, while maintaining a bijective mapping that prescribes the exact target position of every new vertex during the subdivision process. This leads to a very efficient and accurate loss function for conditional mesh generation, and enables us to train a method that generalizes across discretizations and favors preserving the manifold structure of the output. During training we optimize for the same set of network weights across all local mesh patches, thus providing an architecture that is not constrained to a specific input mesh, fixed genus, or category. Our network encodes patch geometry in a local frame in a rotation- and translation-invariant manner. Jointly, these design choices enable our method to generalize well, and we demonstrate that even when trained on a single high-resolution mesh our method generates reasonable subdivisions for novel shapes.
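
The split between a fixed topological update and a learned positional update can be sketched as follows; the tiny MLP, its untrained random weights, and the feature encoding (which shows only translation invariance, not the paper's full rotation- and translation-invariant frame) are illustrative placeholders, not the authors' architecture.

```python
# A hedged sketch of the Neural Subdivision idea: topology follows the
# fixed 1-to-4 Loop split, while each new edge vertex is predicted by
# a learned function of the local patch instead of a fixed stencil.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = 0.1 * rng.normal(size=(32, 6)), np.zeros(32)   # placeholder MLP
W2, b2 = 0.1 * rng.normal(size=(3, 32)), np.zeros(3)

def predict_edge_vertex(p, q):
    """Predict the new vertex on edge (p, q) as a learned displacement
    in a translation-invariant local frame anchored at the midpoint."""
    mid = 0.5 * (p + q)
    feat = np.concatenate([p - mid, q - mid])  # translation-invariant
    h = np.maximum(0.0, W1 @ feat + b1)        # ReLU hidden layer
    return mid + (W2 @ h + b2)                 # midpoint + learned offset

p, q = np.array([0., 0., 0.]), np.array([1., 0., 0.])
new_vertex = predict_edge_vertex(p, q)  # replaces linear averaging
```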
Chenkai Xu, Yaqi He, Hui Hu (2021)
In this paper, we propose a stochastic geometric iterative method to approximate high-resolution 3D models by finite Loop subdivision surfaces. Given an input mesh as the fitting target, the initial control mesh is generated using a mesh simplification algorithm. Then, our method adjusts the control mesh iteratively so that its finite Loop subdivision surface approximates the input mesh. In each geometric iteration, we randomly select a subset of points on the subdivision surface, calculate their difference vectors to the target, and distribute the vectors to the control points. Finally, the control points are updated by adding the weighted average of these difference vectors. We prove the convergence of our method and verify it with error curves in the experiments. In addition, compared with an existing geometric iterative method, our method has a faster fitting speed and higher fitting precision.
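
One stochastic geometric iteration can be sketched as below, under the assumption that sampled limit-surface points are a sparse linear map S of the control points C (rows holding subdivision weights that sum to one) and that closest_target, a hypothetical helper, returns the nearest input-mesh point for each sample.

```python
# A sketch of one stochastic geometric iteration: sample random surface
# points, form difference vectors to the fitting target, and push a
# weighted average of those vectors back onto the control points.
import numpy as np

def geometric_iteration(C, S, closest_target, sample_frac=0.3, rng=None):
    rng = rng or np.random.default_rng()
    idx = rng.choice(S.shape[0], int(sample_frac * S.shape[0]),
                     replace=False)
    P = S[idx] @ C                      # random subset of surface points
    D = closest_target(P) - P           # difference vectors to the target
    W = S[idx]                          # subdivision weights of the samples
    num = W.T @ D                       # distribute vectors to controls
    den = np.asarray(W.sum(axis=0)).ravel() + 1e-12
    return C + num / den[:, None]       # weighted-average update
```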
David Lowry-Duda (2020)
We examine several currently used techniques for visualizing complex-valued functions applied to modular forms. We plot several examples and study the benefits and limitations of each technique. We then introduce a method of visualization that can take advantage of colormaps in Python's matplotlib library, describe an implementation, and give more examples. Much of this discussion applies to general visualizations of complex-valued functions in the plane.
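
As a small illustration of the colormap-based approach, the sketch below applies plain domain coloring to a toy rational function rather than a modular form: the cyclic "twilight" colormap encodes the argument and brightness encodes the magnitude.

```python
# Domain coloring of a complex-valued function with a matplotlib
# colormap: hue follows arg(f), brightness follows |f| (dark at zeros).
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-2, 2, 800)
Z = x[None, :] + 1j * x[:, None]         # grid in the complex plane
F = (Z**2 - 1) / (Z**2 + 1)              # toy function, not a modular form

hue = (np.angle(F) + np.pi) / (2 * np.pi)        # argument in [0, 1]
rgb = plt.get_cmap("twilight")(hue)[..., :3]     # cyclic colormap
rgb *= (1 - 1 / (1 + np.abs(F)))[..., None]      # darken near zeros

plt.imshow(rgb, extent=(-2, 2, -2, 2), origin="lower")
plt.xlabel("Re(z)"); plt.ylabel("Im(z)")
plt.show()
```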
In this paper, we propose a parallel and scalable approach for geodesic distance computation on triangle meshes. Our key observation is that the recovery of geodesic distance with the heat method from [Crane et al. 2013] can be reformulated as optimization of its gradients subject to integrability, which can be solved using an efficient first-order method that requires no linear system solving and converges quickly. Afterward, the geodesic distance is efficiently recovered by parallel integration of the optimized gradients in breadth-first order. Moreover, we employ a similar breadth-first strategy to derive a parallel Gauss-Seidel solver for the diffusion step in the heat method. To further lower the memory consumption from gradient optimization on faces, we also propose a formulation that optimizes the projected gradients on edges, which reduces the memory footprint by about 50%. Our approach is trivially parallelizable, with a low memory footprint that grows linearly with respect to the model size. This makes it particularly suitable for handling large models. Experimental results show that it can efficiently compute geodesic distance on meshes with more than 200 million vertices on a desktop PC with 128 GB RAM, outperforming the original heat method and other state-of-the-art geodesic distance solvers.
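
The breadth-first recovery step can be sketched as follows, assuming a hypothetical helper edge_grad(u, v) that returns the optimized gradient integrated along the edge from u to v; all vertices at the same BFS depth are mutually independent, which is what makes the step parallel in practice.

```python
# A sequential stand-in for the parallel breadth-first integration:
# distances propagate outward from the source front, one BFS level at
# a time; within a level every vertex could be processed in parallel.
from collections import deque

def integrate_breadth_first(adjacency, edge_grad, source):
    dist = {source: 0.0}
    frontier = deque([source])
    while frontier:
        u = frontier.popleft()
        for v in adjacency[u]:
            if v not in dist:
                dist[v] = dist[u] + edge_grad(u, v)  # integrate gradient
                frontier.append(v)
    return dist
```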
We develop a novel parallel resampling algorithm for fully parallelized particle filters, designed with GPUs (graphics processing units) or similar parallel computing devices in mind. With our new algorithm, a full cycle of particle filtering (computing the value of the likelihood for each particle, constructing the cumulative distribution function (CDF) for resampling, resampling the particles with the CDF, and propagating new particles for the next cycle) can be executed in a massively and completely parallel manner. One advantage of our algorithm is that every numerical computation or memory access related to the particle filtering is executed solely inside the GPU in parallel, and no data transfer between the GPU's device memory and the CPU's host memory occurs except when results are needed for further processing, so the algorithm circumvents the limited memory bandwidth between the GPU and the CPU. To demonstrate the advantage of our parallel algorithm, we conducted a Monte Carlo experiment in which we apply the parallel algorithm, as well as conventional sequential algorithms, to estimation of a simple state space model via particle learning, and compare them in terms of execution time. The results show that the parallel algorithm is far superior to the sequential algorithms.
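
The resampling step maps cleanly onto GPU primitives: the CDF is an inclusive prefix sum (a scan kernel) and particle selection is a batched binary search. The NumPy sketch below mirrors that structure on the CPU; the systematic (rather than multinomial) sampling scheme is an illustrative choice, not necessarily the paper's.

```python
# A CPU stand-in for the fully parallel resampling cycle: every step
# (weighting, scan, binary search, propagation) is data-parallel and
# would run as a GPU kernel in the setting the abstract describes.
import numpy as np

def systematic_resample(weights, rng=None):
    rng = rng or np.random.default_rng()
    n = len(weights)
    cdf = np.cumsum(weights / weights.sum())   # inclusive scan on a GPU
    u = (rng.random() + np.arange(n)) / n      # one shared uniform offset
    return np.searchsorted(cdf, u)             # parallel binary search

# One filtering cycle: weight, resample, propagate.
rng = np.random.default_rng(1)
particles = rng.normal(size=1000)
weights = np.exp(-0.5 * (particles - 0.3) ** 2)    # toy likelihood
particles = particles[systematic_resample(weights, rng)]
particles += 0.1 * rng.normal(size=1000)           # propagate
```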