
Temporal Parameter-free Deep Skinning of Animated Meshes

Posted by: Ioannis Fudos
Publication date: 2021
Research field: Informatics Engineering
Paper language: English





In computer graphics, animation compression is essential for the efficient storage, streaming and reproduction of animated meshes. Previous work has presented efficient compression techniques that derive skinning transformations and weights by clustering vertices based on their geometric features over time. In this work we present a novel approach that assigns vertices to bone-influenced clusters and derives weights using deep learning, through a training set that consists of pairs of vertex trajectories (temporal vertex sequences) and the corresponding weights drawn from fully rigged animated characters. The approximation error of the resulting linear blend skinning scheme is significantly lower than that of competing previous methods, while at the same time producing a minimal number of bones. Furthermore, the optimal set of transformations and vertices is derived in fewer iterations, owing to better initial positioning in the multidimensional variable space. Our method requires no parameters to be determined or tuned by the user during the entire process of compressing a mesh animation sequence.
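
To make the reconstruction side of this scheme concrete, below is a minimal sketch of linear blend skinning in NumPy, assuming the per-frame bone transforms and per-vertex weights have already been derived; all names are illustrative rather than the authors' code.

    import numpy as np

    def lbs_frame(rest_pose, weights, rotations, translations):
        """Reconstruct one animation frame with linear blend skinning.

        rest_pose    : (V, 3) rest-pose vertex positions
        weights      : (V, B) skinning weights; each row sums to 1
        rotations    : (B, 3, 3) per-bone linear parts for this frame
        translations : (B, 3)    per-bone translations for this frame
        """
        # Position of every vertex under every bone transform: (B, V, 3)
        per_bone = np.einsum('bij,vj->bvi', rotations, rest_pose) \
                   + translations[:, None, :]
        # Blend the per-bone positions with the skinning weights: (V, 3)
        return np.einsum('vb,bvi->vi', weights, per_bone)

Storing the rest pose, one (V, B) weight matrix and B transforms per frame, rather than V positions per frame, is what yields the compression, so a smaller number of bones B directly shrinks the representation; the approximation error is measured between the original frames and their LBS reconstructions.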


Read also

Andrew Jacobsen, Alan Chan (2021)
Reinforcement learning lies at the intersection of several challenges. Many applications of interest involve extremely large state spaces, requiring function approximation to enable tractable computation. In addition, the learner has only a single stream of experience with which to evaluate a large number of possible courses of action, necessitating algorithms which can learn off-policy. However, the combination of off-policy learning with function approximation leads to divergence of temporal difference methods. Recent work on gradient-based temporal difference methods has promised a path to stability, but at the cost of expensive hyperparameter tuning. In parallel, progress in online learning has provided parameter-free methods that achieve minimax optimal guarantees up to logarithmic terms, but their application in reinforcement learning has yet to be explored. In this work, we combine these two lines of attack, deriving parameter-free, gradient-based temporal difference algorithms. Our algorithms run in linear time and achieve high-probability convergence guarantees matching those of GTD2 up to $\log$ factors. Our experiments demonstrate that our methods maintain high prediction performance relative to fully-tuned baselines, with no tuning whatsoever.
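
For context, here is a minimal sketch of the linear GTD2 update whose guarantees these algorithms match up to $\log$ factors; the fixed step sizes alpha and beta are exactly the hyperparameters that the parameter-free approach removes, and all names are illustrative.

    import numpy as np

    def gtd2_step(theta, w, phi, phi_next, reward, gamma,
                  alpha=0.01, beta=0.05):
        """One linear GTD2 update (on-policy form; the importance-sampling
        correction needed for full off-policy learning is omitted for brevity).

        theta : primary value-function weight vector
        w     : auxiliary weight vector estimating the expected TD error
        phi, phi_next : feature vectors of the current and next states
        """
        delta = reward + gamma * (theta @ phi_next) - theta @ phi  # TD error
        # Move the primary weights along the MSPBE gradient estimate
        theta = theta + alpha * (phi - gamma * phi_next) * (w @ phi)
        # Track the projection of the TD error onto the features
        w = w + beta * (delta - w @ phi) * phi
        return theta, w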
Polygon meshes are an efficient representation of 3D geometry, and are of central importance in computer graphics, robotics and games development. Existing learning-based approaches have avoided the challenges of working with 3D meshes, instead using alternative object representations that are more compatible with neural architectures and training approaches. We present an approach which models the mesh directly, predicting mesh vertices and faces sequentially using a Transformer-based architecture. Our model can condition on a range of inputs, including object classes, voxels, and images, and because the model is probabilistic it can produce samples that capture uncertainty in ambiguous scenarios. We show that the model is capable of producing high-quality, usable meshes, and establish log-likelihood benchmarks for the mesh-modelling task. We also evaluate the conditional models on surface reconstruction metrics against alternative methods, and demonstrate competitive performance despite not training directly on this task.
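
A small sketch of the kind of preprocessing such a sequential mesh model consumes: vertex coordinates quantized to integer tokens in a fixed order. The 8-bit grid and z-then-y-then-x ordering are assumed conventions here; the Transformer itself is not reproduced.

    import numpy as np

    def vertices_to_tokens(vertices, bits=8):
        """Quantize (V, 3) vertex coordinates in [0, 1] to integer tokens and
        flatten them into the single sequence an autoregressive model reads."""
        levels = 2**bits - 1
        q = np.clip(np.round(vertices * levels).astype(int), 0, levels)
        q = q[np.lexsort((q[:, 0], q[:, 1], q[:, 2]))]  # sort by z, then y, then x
        return q[:, ::-1].reshape(-1)  # emit (z, y, x) triples vertex by vertex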
Minimizing the Gaussian curvature of meshes can play a fundamental role in 3D mesh processing. However, there is a lack of computationally efficient and robust Gaussian curvature optimization methods. In this paper, we present a simple yet effective method that can efficiently reduce Gaussian curvature for 3D meshes. We first present the mathematical foundation of our method. Then, we introduce a simple and robust implicit Gaussian curvature optimization method named Gaussian Curvature Filter (GCF). GCF implicitly minimizes Gaussian curvature without the need to explicitly calculate the Gaussian curvature itself. GCF is highly efficient, and the method can be used in a large range of applications that involve Gaussian curvature. We conduct extensive experiments to demonstrate that GCF significantly outperforms state-of-the-art methods in minimizing Gaussian curvature and in geometric-feature-preserving smoothing of 3D meshes. The GCF program is available at https://github.com/tangwenming/GCF-filter.
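
For reference, the standard discrete measure such methods aim to reduce is the per-vertex angle deficit; GCF itself avoids computing it explicitly, so the sketch below is only the usual evaluation quantity, with illustrative names.

    import numpy as np

    def angle_deficit(vertices, faces):
        """Per-vertex discrete Gaussian curvature: 2*pi minus the sum of the
        incident triangle angles (boundary vertices ignored for brevity).

        vertices : (V, 3) float array of positions
        faces    : (F, 3) int array of triangle vertex indices
        """
        K = np.full(len(vertices), 2.0 * np.pi)
        for f in faces:
            for k in range(3):
                i, j, l = f[k], f[(k + 1) % 3], f[(k + 2) % 3]
                e1 = vertices[j] - vertices[i]
                e2 = vertices[l] - vertices[i]
                cos_a = e1 @ e2 / (np.linalg.norm(e1) * np.linalg.norm(e2))
                K[i] -= np.arccos(np.clip(cos_a, -1.0, 1.0))
        return K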
In this paper, we introduce Point2Mesh, a technique for reconstructing a surface mesh from an input point cloud. Instead of explicitly specifying a prior that encodes the expected shape properties, the prior is defined automatically using the input point cloud, which we refer to as a self-prior. The self-prior encapsulates reoccurring geometric repetitions from a single shape within the weights of a deep neural network. We optimize the network weights to deform an initial mesh to shrink-wrap a single input point cloud. This explicitly considers the entire reconstructed shape, since shared local kernels are calculated to fit the overall object. The convolutional kernels are optimized globally across the entire shape, which inherently encourages local-scale geometric self-similarity across the shape surface. We show that shrink-wrapping a point cloud with a self-prior converges to a desirable solution, whereas a prescribed smoothness prior often becomes trapped in undesirable local minima. While the performance of traditional reconstruction approaches degrades in non-ideal conditions that are often present in real-world scanning, i.e., unoriented normals, noise and missing (low-density) parts, Point2Mesh is robust to non-ideal conditions. We demonstrate the performance of Point2Mesh on a large variety of shapes with varying complexity.
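
A minimal sketch of the bidirectional Chamfer-style fitting term commonly used when shrink-wrapping a mesh onto a point cloud (assumed here for illustration; the self-prior network itself is not reproduced).

    import numpy as np

    def chamfer_distance(a, b):
        """Symmetric Chamfer distance between point sets a (N, 3) and b (M, 3):
        each point is matched to its nearest neighbor in the other set."""
        d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # (N, M)
        return d.min(axis=1).mean() + d.min(axis=0).mean()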
Using (casual) images to texture 3D models is a common way to create realistic 3D models, which is a very important task in computer graphics. However, if the shape of the casual image does not match the target model or the target mapping area, the textured model will look unnatural, because the image must be heavily distorted. In this paper, we present a novel texturing and deforming approach that maps the pattern and shape of a casual image to a 3D model at the same time, based on an alternating least-squares approach. Through a photogrammetric method, we project the target model onto the source image according to the estimated camera model. Then, the target model is deformed according to the shape of the source image using a surface-based deformation method, while simultaneously minimizing the image distortion. The two steps are performed iteratively until convergence. Hence, our method can achieve texture mapping, shape deformation and detail preservation at once, and obtains more reasonable texture-mapped results than traditional methods.

