
ZoomOut: Spectral Upsampling for Efficient Shape Correspondence

Added by Jing Ren
Publication date: 2019
Language: English





We present a simple and efficient method for refining maps or correspondences by iterative upsampling in the spectral domain that can be implemented in a few lines of code. Our main observation is that high quality maps can be obtained even if the input correspondences are noisy or are encoded by a small number of coefficients in a spectral basis. We show how this approach can be used in conjunction with existing initialization techniques across a range of application scenarios, including symmetry detection, map refinement across complete shapes, non-rigid partial shape matching and function transfer. In each application we demonstrate an improvement with respect to both the quality of the results and the computational speed compared to the best competing methods, with up to two orders of magnitude speed-up in some applications. We also demonstrate that our method is both robust to noisy input and is scalable with respect to shape complexity. Finally, we present a theoretical justification for our approach, shedding light on structural properties of functional maps.
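The abstract notes that the refinement can be implemented in a few lines of code. The snippet below is a minimal sketch of such an iterative spectral upsampling loop, assuming precomputed Laplace-Beltrami eigenbases evecs1 and evecs2 for the two shapes and an initial small functional map C; the variable names and the plain pseudo-inverse are illustrative choices, not necessarily the authors' released implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def zoomout_refine(evecs1, evecs2, C, k_final, step=1):
    """Iteratively upsample a functional map C (shape 1 -> shape 2).

    evecs1, evecs2 : (n_vertices, >= k_final) Laplace-Beltrami eigenfunctions
                     of the source and target shape, respectively.
    C              : (k0, k0) initial functional map with k0 < k_final.
    """
    k = C.shape[0]
    while k < k_final:
        # 1) Spectral map -> pointwise map: for every vertex of shape 2, find
        #    the nearest vertex of shape 1 in the k-dimensional embedding.
        _, p2p = cKDTree(evecs1[:, :k]).query(evecs2[:, :k] @ C)
        # 2) Re-encode that pointwise map with a slightly larger basis.
        k = min(k + step, k_final)
        C = np.linalg.pinv(evecs2[:, :k]) @ evecs1[p2p, :k]
    # Pointwise map induced by the final, upsampled functional map.
    _, p2p = cKDTree(evecs1[:, :k]).query(evecs2[:, :k] @ C)
    return C, p2p
```

Each pass alternates between converting the spectral map to a vertex-to-vertex map (the nearest-neighbour query) and projecting that map back onto a slightly larger basis (the pseudo-inverse step), which is what progressively adds detail to the correspondence.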


Related Research

Point cloud upsampling is vital for the quality of the mesh in three-dimensional reconstruction. Recent research on point cloud upsampling has achieved great success due to the development of deep learning. However, existing methods treat point cloud upsampling at different scale factors as independent tasks, so they must train a specific model for each scale factor, which is inefficient and impractical in terms of storage and computation in real applications. To address this limitation, in this work we propose a novel method called "Meta-PU", the first to support point cloud upsampling of arbitrary scale factors with a single model. In Meta-PU, besides the backbone network consisting of residual graph convolution (RGC) blocks, a meta-subnetwork is learned to adjust the weights of the RGC blocks dynamically, and a farthest sampling block is adopted to sample different numbers of points. Together, these two blocks enable Meta-PU to continuously upsample a point cloud with arbitrary scale factors using only a single model. In addition, the experiments reveal that training on multiple scales simultaneously benefits each scale, so Meta-PU even outperforms existing methods trained for a specific scale factor only.
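For a concrete picture of the two ingredients named above, here is a toy PyTorch sketch: a residual block whose per-channel weights are modulated by a small meta-network conditioned on the requested scale factor, plus a plain farthest point sampling routine. This only illustrates the idea; it is not the actual Meta-PU architecture, whose RGC blocks and meta-subnetwork are considerably more elaborate.

```python
import torch
import torch.nn as nn

class ScaleConditionedBlock(nn.Module):
    """Toy residual block whose output is modulated by gains predicted from the
    upsampling ratio by a small meta-network (illustrative, not Meta-PU's RGC block)."""
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv1d(channels, channels, kernel_size=1)
        # Meta-subnetwork: maps the scalar scale factor to one gain per channel.
        self.meta = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, channels))

    def forward(self, feats, scale):
        # feats: (B, C, N) point features; scale: (B, 1) desired upsampling ratio.
        gains = self.meta(scale).unsqueeze(-1)           # (B, C, 1)
        return feats + gains * self.conv(feats)          # scale-dependent residual

def farthest_point_sampling(points, n_out):
    """Greedy FPS on an (N, 3) point set; returns indices of n_out spread-out points."""
    dists = torch.full((points.shape[0],), float("inf"))
    chosen = torch.zeros(n_out, dtype=torch.long)        # first sample: index 0
    for i in range(1, n_out):
        dists = torch.minimum(dists, ((points - points[chosen[i - 1]]) ** 2).sum(-1))
        chosen[i] = torch.argmax(dists)
    return chosen
```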
Kai Bai, Wei Li, Mathieu Desbrun (2019)
Simulating turbulent smoke flows is computationally intensive due to their intrinsic multiscale behavior, thus requiring relatively high resolution grids to fully capture their complexity. For iterative editing or simply faster generation of smoke flows, dynamic upsampling of an input low-resolution numerical simulation is an attractive, yet currently unattainable goal. In this paper, we propose a novel dictionary-based learning approach to the dynamic upsampling of smoke flows. For each frame of an input coarse animation, we seek a sparse representation of small, local velocity patches of the flow based on an over-complete dictionary, and use the resulting sparse coefficients to generate a high-resolution smoke animation sequence. We propose a novel dictionary-based neural network which learns both a fast evaluation of sparse patch encoding and a dictionary of corresponding coarse and fine patches from a sequence of example simulations computed with any numerical solver. Our upsampling network then injects into coarse input sequences physics-driven fine details, unlike most previous approaches that only employed fast procedural models to add high frequency to the input. We present a variety of upsampling results for smoke flows and offer comparisons to their corresponding high-resolution simulations to demonstrate the effectiveness of our approach.
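Setting the learned encoder aside, the numerical core described above is coupled-dictionary sparse coding: a coarse patch is coded over a coarse dictionary and the same coefficients are reused with a paired fine dictionary. Below is a minimal numpy sketch of that idea using greedy orthogonal matching pursuit; D_coarse and D_fine are hypothetical paired dictionaries, and the paper's actual method learns both the dictionary and a fast neural encoder.

```python
import numpy as np

def omp(D, x, sparsity):
    """Greedy orthogonal matching pursuit: sparse-code vector x over dictionary D (d x K)."""
    residual, support = x.astype(float).copy(), []
    for _ in range(sparsity):
        support.append(int(np.argmax(np.abs(D.T @ residual))))   # pick best-matching atom
        coeffs, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coeffs                    # re-fit, update residual
    code = np.zeros(D.shape[1])
    code[support] = coeffs
    return code

def upsample_patch(coarse_patch, D_coarse, D_fine, sparsity=8):
    """Code a flattened coarse velocity patch over D_coarse, then synthesise the
    corresponding fine patch from the paired fine dictionary D_fine."""
    return D_fine @ omp(D_coarse, coarse_patch, sparsity)
```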
We consider the problem of establishing dense correspondences within a set of related shapes of strongly varying geometry. For such input, traditional shape matching approaches often produce unsatisfactory results. We propose an ensemble optimization method that improves given coarse correspondences to obtain dense correspondences. Following ideas from minimum description length approaches, it maximizes the compactness of the induced shape space to obtain high-quality correspondences. We make a number of improvements that are important for computer graphics applications: our approach handles meshes of general topology and supports partial matching between inputs of varying topology. To this end we introduce a novel part-based generative statistical shape model. We develop a novel analysis algorithm that learns such models from training shapes of varying topology. We also provide a novel synthesis method that can generate new instances with varying part layouts, subject to generic variational constraints. In practical experiments, we obtain a substantial improvement in correspondence quality over state-of-the-art methods. As an example application, we demonstrate a system that learns shape families as assemblies of deformable parts and permits real-time editing with continuous and discrete variability.
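The minimum-description-length idea referenced here is often made concrete through the classic compactness measure of a corresponded shape set: the sum of log eigenvalues of the covariance of the stacked, vectorized shapes. The numpy sketch below computes only that plain PCA-style measure; the paper's actual objective is built on its part-based generative model.

```python
import numpy as np

def shape_space_compactness(shapes):
    """MDL-style compactness of a corresponded shape set.

    shapes : (n_shapes, n_vertices, 3) array, vertices in consistent correspondence.
    Lower values mean the set is explained by fewer / weaker modes of variation."""
    X = shapes.reshape(len(shapes), -1).astype(float)
    X -= X.mean(axis=0, keepdims=True)                 # centre the shape vectors
    gram = X @ X.T / len(shapes)                       # small n_shapes x n_shapes matrix
    eigvals = np.linalg.eigvalsh(gram)
    eigvals = eigvals[eigvals > 1e-12]                 # drop numerically zero modes
    return float(np.sum(np.log(eigvals)))
```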
Existing online 3D shape repositories contain thousands of 3D models but lack photorealistic appearance. We present an approach to automatically assign high-quality, realistic appearance models to large scale 3D shape collections. The key idea is to jointly leverage three types of online data -- shape collections, material collections, and photo collections, using the photos as reference to guide assignment of materials to shapes. By generating a large number of synthetic renderings, we train a convolutional neural network to classify materials in real photos, and employ 3D-2D alignment techniques to transfer materials to different parts of each shape model. Our system produces photorealistic, relightable, 3D shapes (PhotoShapes).
A popular way to create detailed yet easily controllable 3D shapes is via procedural modeling, i.e. generating geometry using programs. Such programs consist of a series of instructions along with their associated parameter values. To fully realize the benefits of this representation, a shape program should be compact and only expose degrees of freedom that allow for meaningful manipulation of output geometry. One way to achieve this goal is to design higher-level macro operators that, when executed, expand into a series of commands from the base shape modeling language. However, manually authoring such macros, much like shape programs themselves, is difficult and largely restricted to domain experts. In this paper, we present ShapeMOD, an algorithm for automatically discovering macros that are useful across large datasets of 3D shape programs. ShapeMOD operates on shape programs expressed in an imperative, statement-based language. It is designed to discover macros that make programs more compact by minimizing the number of function calls and free parameters required to represent an input shape collection. We run ShapeMOD on multiple collections of programs expressed in a domain-specific language for 3D shape structures. We show that it automatically discovers a concise set of macros that abstract out common structural and parametric patterns that generalize over large shape collections. We also demonstrate that the macros found by ShapeMOD improve performance on downstream tasks including shape generative modeling and inferring programs from point clouds. Finally, we conduct a user study that indicates that ShapeMOD's discovered macros make interactive shape editing more efficient.
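To make the notion of a macro concrete, the toy Python sketch below shows a parameterized function that expands into base-language statements, exposing a few free parameters instead of one call per part; the cuboid command and the shelf example are hypothetical stand-ins, not ShapeMOD's actual DSL or any macro it discovers.

```python
def shelf_macro(width, height, n_boards):
    """Hypothetical macro: expands into base-language cuboid() statements for a
    simple shelf, exposing only three free parameters instead of one call per board."""
    program = [f"cuboid(w={width}, h={height}, d=0.02)  # back panel"]
    for i in range(n_boards):
        y = height * i / max(n_boards - 1, 1)
        program.append(f"cuboid(w={width}, h=0.02, d=0.30, y={y:.2f})  # board {i}")
    return program

# Expanding the macro yields an ordinary, longer shape program:
print("\n".join(shelf_macro(width=0.8, height=1.6, n_boards=4)))
```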