
Learning to Infer Semantic Parameters for 3D Shape Editing

Published by: Fangyin Wei
Publication date: 2020
Research field: Informatics Engineering
Paper language: English





Many applications in 3D shape design and augmentation require the ability to make specific edits to an object's semantic parameters (e.g., the pose of a person's arm or the length of an airplane's wing) while preserving as much existing detail as possible. We propose to learn a deep network that infers the semantic parameters of an input shape and then allows the user to manipulate those parameters. The network is trained jointly on shapes from an auxiliary synthetic template and unlabeled realistic models, ensuring robustness to shape variability while relieving the need to label realistic exemplars. At test time, edits within the parameter space drive deformations applied to the original shape, which provides semantically meaningful manipulation while preserving the details. This is in contrast to prior methods that either use autoencoders with a limited latent-space dimensionality, failing to preserve arbitrary detail, or drive deformations with purely geometric controls, such as cages, losing the ability to update local part regions. Experiments with datasets of chairs, airplanes, and human bodies demonstrate that our method produces more natural edits than prior work.
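As a rough illustration of this infer-then-edit pipeline, the sketch below pairs a parameter encoder with a deformation network that moves the original points rather than decoding from a latent code. All module names, the parameter count, and the offset-MLP deformer are hypothetical stand-ins, not the authors' actual architecture.

```python
# Minimal infer-then-edit sketch (PyTorch; all names and sizes are assumptions).
import torch
import torch.nn as nn

class SemanticParamEncoder(nn.Module):
    """Maps a point cloud (B, N, 3) to K semantic parameters (e.g. wing length)."""
    def __init__(self, num_params: int = 8):
        super().__init__()
        self.point_mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 128))
        self.head = nn.Linear(128, num_params)

    def forward(self, points: torch.Tensor) -> torch.Tensor:
        feats = self.point_mlp(points).max(dim=1).values  # global max-pool over points
        return self.head(feats)

class ParamDeformer(nn.Module):
    """Predicts per-point offsets conditioned on the original and edited parameters,
    so an edit deforms the input shape instead of regenerating it."""
    def __init__(self, num_params: int = 8):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3 + 2 * num_params, 128), nn.ReLU(),
                                 nn.Linear(128, 3))

    def forward(self, points, params, edited):
        B, N, _ = points.shape
        cond = torch.cat([params, edited], dim=-1).unsqueeze(1).expand(B, N, -1)
        return points + self.mlp(torch.cat([points, cond], dim=-1))

def edit_shape(points, encoder, deformer, param_index, new_value):
    """Infer parameters, overwrite one, and deform the ORIGINAL points."""
    params = encoder(points)
    edited = params.clone()
    edited[:, param_index] = new_value
    return deformer(points, params, edited)

# Usage: set hypothetical parameter 0 of a random shape to a new value.
shape = torch.rand(1, 1024, 3)
out = edit_shape(shape, SemanticParamEncoder(), ParamDeformer(), 0, 0.8)
```

Because the deformer outputs offsets added to the input points, detail in the original shape survives the edit by construction, which is the property the abstract contrasts with autoencoder decoding.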




Read also

Inferring programs which generate 2D and 3D shapes is important for reverse engineering, editing, and more. Training such inference models is challenging due to the lack of paired (shape, program) data in most domains. A popular approach is to pre-train a model on synthetic data and then fine-tune on real shapes using slow, unstable reinforcement learning. In this paper, we argue that self-training is a viable alternative for fine-tuning such models. Self-training is a semi-supervised learning paradigm where a model assigns pseudo-labels to unlabeled data, and then retrains with (data, pseudo-label) pairs as the new ground truth. We show that for constructive solid geometry and assembly-based modeling, self-training outperforms state-of-the-art reinforcement learning approaches. Additionally, shape program inference has a unique property that circumvents a potential downside of self-training (incorrect pseudo-label assignment): inferred programs are executable. For a given shape from our distribution of interest $\mathbf{x}^*$ and its predicted program $\mathbf{z}$, one can execute $\mathbf{z}$ to obtain a shape $\mathbf{x}$ and train on $(\mathbf{z}, \mathbf{x})$ pairs, rather than $(\mathbf{z}, \mathbf{x}^*)$ pairs. We term this procedure latent execution self-training (LEST). We demonstrate that self-training infers shape programs with higher shape reconstruction accuracy and converges significantly faster than reinforcement learning approaches, and in some domains, LEST can further improve this performance.
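A condensed sketch of one LEST round, under the assumptions that `model.infer_program`, `execute`, and `loss_fn` stand in for the paper's inference model, program executor, and supervised program loss (all hypothetical names):

```python
# One latent-execution self-training (LEST) round (hypothetical API, PyTorch-style).
import torch

def lest_round(model, optimizer, real_shapes, execute, loss_fn):
    """For each unlabeled shape x*, infer a program z as a pseudo-label, EXECUTE z
    to obtain a shape x, then train on the consistent pair (x, z) rather than (x*, z)."""
    for x_star in real_shapes:
        with torch.no_grad():
            z = model.infer_program(x_star)        # pseudo-label (no gradient)
        x = execute(z)                             # executing z guarantees z generates x
        optimizer.zero_grad()
        loss = loss_fn(model.infer_program(x), z)  # supervised step on (x, z)
        loss.backward()
        optimizer.step()
```

The key move is the `execute(z)` line: even if `z` is a poor program for `x*`, the pair `(x, z)` is self-consistent by construction, sidestepping the incorrect-pseudo-label problem the abstract mentions.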
Caricature is an artistic drawing created to abstract or exaggerate the facial features of a person. Rendering visually pleasing caricatures is a difficult task that requires professional skills, and thus it is of great interest to design a method to automatically generate such drawings. To deal with large shape changes, we propose an algorithm based on a semantic shape transform to produce diverse and plausible shape exaggerations. Specifically, we predict pixel-wise semantic correspondences and perform image warping on the input photo to achieve dense shape transformation. We show that the proposed framework is able to render visually pleasing shape exaggerations while maintaining their facial structures. In addition, our model allows users to manipulate the shape via the semantic map. We demonstrate the effectiveness of our approach on a large photograph-caricature benchmark dataset with comparisons to state-of-the-art methods.
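The dense warping step can be illustrated in a few lines of PyTorch; the per-pixel offset field below is a generic stand-in for the paper's predicted semantic correspondences, not its actual representation:

```python
# Dense image warp from a per-pixel offset field (generic sketch, PyTorch).
import torch
import torch.nn.functional as F

def warp(photo: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
    """photo: (B, 3, H, W); flow: (B, H, W, 2) offsets in normalized [-1, 1] coords."""
    B, _, H, W = photo.shape
    grid_y, grid_x = torch.meshgrid(torch.linspace(-1, 1, H),
                                    torch.linspace(-1, 1, W), indexing="ij")
    base = torch.stack([grid_x, grid_y], dim=-1).expand(B, H, W, 2)
    return F.grid_sample(photo, base + flow, align_corners=True)
```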
A point signature, a representation describing the structural neighborhood of a point in a 3D shape, can be applied to establish correspondences between points in 3D shapes. Conventional methods apply a weight-sharing network, e.g., some kind of graph neural network, across all neighborhoods to directly generate point signatures, and gain generalization ability through extensive training over a large number of training samples from scratch. However, these methods lack the flexibility to rapidly adapt to unseen neighborhood structures and thus generalize poorly on new point sets. In this paper, we propose a novel meta-learning-based 3D point signature model, named the 3D meta point signature (MEPS) network, that is capable of learning robust point signatures in 3D shapes. By regarding each point signature learning process as a task, our method obtains a model optimized over the distribution of all tasks, generating reliable signatures for new tasks, i.e., signatures of unseen point neighborhoods. Specifically, MEPS consists of two modules: a base signature learner and a meta signature learner. During training, the base learner is trained to perform specific signature learning tasks. In the meantime, the meta learner is trained to update the base learner with optimal parameters. During testing, the meta learner, learned over the distribution of all tasks, can adaptively change the parameters of the base learner to accommodate unseen local neighborhoods. We evaluate the MEPS model on two datasets, FAUST and TOSCA, for dense 3D shape correspondence. Experimental results demonstrate that our method not only gains significant improvements over the baseline model and achieves state-of-the-art results, but is also capable of handling unseen 3D shapes.
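The base/meta split follows the usual optimization-based meta-learning recipe. Below is a first-order MAML-style sketch; the task structure and `loss_fn` are hypothetical stand-ins for MEPS's signature-learning tasks, not the paper's exact update rule:

```python
# First-order MAML-style meta-update sketch (hypothetical task/loss, PyTorch).
import torch

def meta_step(base_params, tasks, loss_fn, inner_lr=0.01, outer_lr=0.001):
    """Adapt the base learner on each task's support set (inner loop), then move
    the shared initialization toward parameters that adapt well (outer loop)."""
    meta_grads = [torch.zeros_like(p) for p in base_params]
    for support, query in tasks:  # one task per point-neighborhood signature
        adapted = [p.detach().clone().requires_grad_(True) for p in base_params]
        inner = torch.autograd.grad(loss_fn(adapted, support), adapted)
        adapted = [p - inner_lr * g for p, g in zip(adapted, inner)]
        outer = torch.autograd.grad(loss_fn(adapted, query), adapted)
        meta_grads = [m + g for m, g in zip(meta_grads, outer)]
    # First-order approximation: apply the query gradients to the shared init.
    return [p - outer_lr * m / len(tasks) for p, m in zip(base_params, meta_grads)]
```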
We propose a system for surface completion and inpainting of 3D shapes using generative models learned on local patches. Our method uses a novel encoding of height-map-based local patches, parameterized using a 3D mesh quadrangulation of the low-resolution input shape. This provides a sufficient number of local 3D patches to learn a generative model for the task of repairing moderately sized holes. Following ideas from recent progress in 2D inpainting, we investigate both a linear dictionary-based model and a convolutional denoising-autoencoder-based model for the inpainting task, and show our results to be better than the previous geometry-based method of surface inpainting. We validate our method on both synthetic shapes and real-world scans.
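As a toy version of the convolutional denoising-autoencoder variant, the sketch below completes a corrupted height-map patch; the 32x32 patch size and layer widths are illustrative assumptions, not the paper's configuration:

```python
# Toy denoising autoencoder for height-map patches (sizes are assumptions).
import torch
import torch.nn as nn

class PatchDAE(nn.Module):
    """Maps a corrupted (B, 1, 32, 32) height-map patch to a completed one."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),   # 32x32 -> 16x16
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU())  # 16x16 -> 8x8
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),  # -> 16x16
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1))              # -> 32x32

    def forward(self, patch: torch.Tensor) -> torch.Tensor:
        return self.dec(self.enc(patch))

# Training pairs would be (patch with a synthetic hole, intact patch),
# optimized with an L1/L2 reconstruction loss.
```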
Despite significant progress in monocular depth estimation in the wild, recent state-of-the-art methods cannot be used to recover accurate 3D scene shape, due to an unknown depth shift induced by the shift-invariant reconstruction losses used in mixed-data depth prediction training, and a possibly unknown camera focal length. We investigate this problem in detail and propose a two-stage framework that first predicts depth up to an unknown scale and shift from a single monocular image, and then uses 3D point cloud encoders to predict the missing depth shift and focal length, allowing us to recover a realistic 3D scene shape. In addition, we propose an image-level normalized regression loss and a normal-based geometry loss to enhance depth prediction models trained on mixed datasets. We test our depth model on nine unseen datasets and achieve state-of-the-art performance on zero-shot dataset generalization. Code is available at: https://git.io/Depth
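The two-stage idea can be summarized in a short sketch; `depth_net` and `pcd_net` are hypothetical stand-ins for the paper's depth predictor and point-cloud encoder:

```python
# Two-stage scene-shape recovery sketch (hypothetical modules, PyTorch).
import torch

def unproject(depth: torch.Tensor, focal: float) -> torch.Tensor:
    """Back-project an (H, W) depth map to a camera-space point cloud (H*W, 3)."""
    H, W = depth.shape
    v, u = torch.meshgrid(torch.arange(H, dtype=torch.float32),
                          torch.arange(W, dtype=torch.float32), indexing="ij")
    x = (u - W / 2) * depth / focal
    y = (v - H / 2) * depth / focal
    return torch.stack([x, y, depth], dim=-1).reshape(-1, 3)

def recover_scene_shape(image, depth_net, pcd_net, focal_init=500.0):
    d = depth_net(image)                # stage 1: depth up to unknown scale and shift
    pts = unproject(d, focal_init)      # distorted point cloud under a guessed focal
    shift, focal = pcd_net(pts)         # stage 2: regress the missing shift and focal
    return unproject(d + shift, focal)  # re-project with corrected geometry
```

The design rests on the observation that a wrong shift or focal length distorts the unprojected point cloud in a characteristic way, so a point-cloud network can learn to recognize and undo the distortion.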