
ClipGen: A Deep Generative Model for Clipart Vectorization and Synthesis

Added by I-Chao Shen
Publication date: 2021
Language: English





This paper presents a novel deep learning-based approach for automatically vectorizing and synthesizing clipart of man-made objects. Given a raster clipart image and its corresponding object category (e.g., airplanes), the proposed method sequentially generates new layers, each composed of a new closed path filled with a single color. The final result is obtained by compositing all layers into a vector clipart image that falls into the target category. The proposed approach is based on an iterative generative model that (i) decides whether to continue synthesizing a new layer and (ii) determines the geometry and appearance of that layer. We formulate a joint loss function for training our generative model that includes shape similarity, symmetry, and local curve smoothness losses, as well as a vector graphics rendering accuracy loss, so that the synthesized clipart is recognizable by humans. We also introduce ClipNet, a collection of man-made object clipart composed of closed-path layers, and design two preprocessing tasks to clean up and enrich the original raw clipart. To validate the proposed approach, we conducted several experiments and demonstrated its ability to vectorize and synthesize various clipart categories. We envision that our generative model can facilitate efficient and intuitive clipart design for novice users and graphic designers.
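To make the iterative procedure concrete, here is a minimal Python sketch of the layer-synthesis loop, assuming a hypothetical LayerGenerator whose two decisions mirror steps (i) and (ii) above. The class and its stubbed random predictions are our own illustration, not the authors' actual model; training losses and path rendering are omitted.

from dataclasses import dataclass
import random

@dataclass
class Layer:
    control_points: list   # a closed path, e.g. [(x, y), ...] control points
    fill_color: tuple      # single RGB fill for the whole closed path

class LayerGenerator:
    """Hypothetical stand-in for the learned iterative model."""
    def __init__(self, max_layers=8):
        self.max_layers = max_layers

    def should_continue(self, canvas):
        # step (i): decide whether to synthesize another layer; a real model
        # conditions on the raster input, category, and layers so far
        return len(canvas) < self.max_layers and random.random() > 0.2

    def next_layer(self, canvas):
        # step (ii): predict the geometry and appearance of the new layer
        pts = [(random.random(), random.random()) for _ in range(8)]
        return Layer(pts, tuple(random.random() for _ in range(3)))

def synthesize(model):
    canvas = []                               # layers in back-to-front order
    while model.should_continue(canvas):
        canvas.append(model.next_layer(canvas))
    return canvas                             # composite by painting in order

layers = synthesize(LayerGenerator())
print(f"synthesized {len(layers)} closed-path layers")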



Related research

We present an assistive system for clipart design that provides visual scaffolds from unseen viewpoints. Inspired by artists' creation processes, our system constructs the visual scaffold by first synthesizing a reference 3D shape of the input clipart and then rendering it from the desired viewpoint. The critical challenge in constructing this visual scaffold is generating a reference 3D shape that matches the user's expectations for object sizing and positioning while preserving the geometric style of the input clipart. To address this challenge, we propose a user-assisted curve extrusion method to obtain the reference 3D shape. We render the synthesized reference 3D shape into the visual scaffold with a consistent style. By following the generated visual scaffold, users can efficiently design clipart from their desired viewpoints. A user study conducted with an intuitive user interface and our generated visual scaffolds suggests that users are able to design clipart from different viewpoints while preserving the original geometric style and shape.
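As a rough illustration of the extrusion idea only (the paper's method is user-assisted and style-preserving), the sketch below lifts a closed 2D polyline into a 3D prism by duplicating it at two depths and stitching the side walls with triangles. The function name and the use of numpy are our own assumptions.

import numpy as np

def extrude(curve_2d, depth):
    """Extrude a closed 2D polyline (N x 2 array) along z into a triangle mesh.
    End caps are omitted; a real system would triangulate them as well."""
    n = len(curve_2d)
    front = np.hstack([curve_2d, np.zeros((n, 1))])
    back = np.hstack([curve_2d, np.full((n, 1), depth)])
    verts = np.vstack([front, back])
    faces = []
    for i in range(n):                 # two side-wall triangles per curve edge
        j = (i + 1) % n
        faces += [(i, j, n + i), (j, n + j, n + i)]
    return verts, np.array(faces)

square = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
verts, faces = extrude(square, depth=0.25)
print(verts.shape, faces.shape)        # (8, 3) (8, 3)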
Recently, deep generative adversarial networks for image generation have advanced rapidly; yet, only a small amount of research has focused on generative models for irregular structures, particularly meshes. Nonetheless, mesh generation and synthesis remain fundamental topics in computer graphics. In this work, we propose a novel framework for synthesizing geometric textures. It learns geometric texture statistics from local neighborhoods (i.e., local triangular patches) of a single reference 3D model. It learns deep features on the faces of the input triangulation, which are used to subdivide the mesh and generate offsets across multiple scales, without parameterization of the reference or target mesh. Our network displaces mesh vertices in any direction (i.e., in both the normal and tangential directions), enabling the synthesis of geometric textures that cannot be expressed by a simple 2D displacement map. Learning and synthesizing on local geometric patches enables a genus-oblivious framework, facilitating texture transfer between shapes of different genus.
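The key point that tangential offsets express what a scalar 2D displacement map cannot is easy to see in code. Below is a small numpy sketch, with our own function names and random offsets standing in for the learned ones: each vertex moves along its (area-weighted) normal and within its tangent plane.

import numpy as np

def vertex_normals(verts, faces):
    """Area-weighted per-vertex normals of a triangle mesh."""
    normals = np.zeros_like(verts)
    tris = verts[faces]                                    # (F, 3, 3)
    fn = np.cross(tris[:, 1] - tris[:, 0], tris[:, 2] - tris[:, 0])
    for i in range(3):                                     # scatter onto vertices
        np.add.at(normals, faces[:, i], fn)
    return normals / np.linalg.norm(normals, axis=1, keepdims=True)

def displace(verts, faces, normal_off, tangent_off):
    """Move each vertex along its normal AND in the tangent plane; a scalar
    displacement map could only express the first term."""
    nrm = vertex_normals(verts, faces)
    tang = tangent_off - (tangent_off * nrm).sum(1, keepdims=True) * nrm
    return verts + normal_off[:, None] * nrm + tang

verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
faces = np.array([[0, 1, 2], [0, 1, 3], [0, 2, 3], [1, 2, 3]])
out = displace(verts, faces, np.full(4, 0.05), 0.05 * np.random.randn(4, 3))
print(out.shape)                                           # (4, 3)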
Reconstructing 3D human faces in the wild with the 3D Morphable Model (3DMM) has become popular in recent years. While most prior work focuses on estimating more robust and accurate geometry, relatively little attention has been paid to improving the quality of the texture model. Meanwhile, with the advent of Generative Adversarial Networks (GANs), there has been great progress in reconstructing realistic 2D images. Recent work demonstrates that GANs trained with abundant high-quality UV maps can produce high-fidelity textures superior to those produced by existing methods. However, such high-quality UV maps are difficult to obtain because they are expensive to capture and require laborious processes to refine. In this work, we present a novel UV map generative model that learns to generate diverse and realistic synthetic UV maps without requiring high-quality UV maps for training. Our proposed framework can be trained solely with in-the-wild images (i.e., UV maps are not required) by leveraging a combination of GANs and a differentiable renderer. Both quantitative and qualitative evaluations demonstrate that our proposed texture model produces more diverse and higher-fidelity textures than existing methods.
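The training idea (an adversarial loss on rendered images plus a photometric term, so no ground-truth UV maps are needed) can be sketched in a few lines of PyTorch. Everything below is a toy stand-in: the networks are single linear layers, and render is a placeholder for a real differentiable renderer that would rasterize a 3DMM mesh textured with the generated UV map.

import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(64, 3 * 32 * 32), nn.Tanh())   # noise -> UV map
D = nn.Linear(3 * 32 * 32, 1)                              # image -> realism logit

def render(uv_flat):
    # placeholder: any differentiable op keeps gradients flowing from the
    # image-space losses back to the UV generator
    return uv_flat * 0.5

bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)

real_images = torch.rand(8, 3 * 32 * 32)   # in-the-wild photos (flattened)
fake_render = render(G(torch.randn(8, 64)))

# generator step: fool D on the *rendered* image and match real photos;
# the discriminator update is omitted for brevity
loss_g = bce(D(fake_render), torch.ones(8, 1)) \
         + (fake_render - real_images).abs().mean()
opt_g.zero_grad(); loss_g.backward(); opt_g.step()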
We introduce ABC-Dataset, a collection of one million Computer-Aided Design (CAD) models for research on geometric deep learning methods and applications. Each model is a collection of explicitly parametrized curves and surfaces, providing ground truth for differential quantities, patch segmentation, geometric feature detection, and shape reconstruction. Sampling the parametric descriptions of surfaces and curves allows generating data in different formats and resolutions, enabling fair comparisons for a wide range of geometric learning algorithms. As a use case for our dataset, we perform a large-scale benchmark of surface normal estimation, comparing existing data-driven methods and evaluating their performance against both the ground truth and traditional normal estimation methods.
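Because each ABC model is explicitly parametrized, differential quantities such as normals come directly from the parametrization. Here is a short numpy sketch of that idea, using a sphere patch as a stand-in for a CAD surface; for analytic CAD surfaces the exact partials are available, while central differences approximate them here.

import numpy as np

def sphere(u, v, r=1.0):
    """A parametric surface f(u, v); any explicit CAD patch works the same way."""
    return np.array([r * np.cos(u) * np.sin(v),
                     r * np.sin(u) * np.sin(v),
                     r * np.cos(v)])

def surface_normal(f, u, v, h=1e-5):
    """Normal from the cross product of the partial derivatives f_u x f_v."""
    fu = (f(u + h, v) - f(u - h, v)) / (2 * h)
    fv = (f(u, v + h) - f(u, v - h)) / (2 * h)
    n = np.cross(fu, fv)
    return n / np.linalg.norm(n)

print(surface_normal(sphere, 0.3, 1.1))   # unit normal at one (u, v) sample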
We propose a novel deep generative model based on causal convolutions for multi-subject motion modeling and synthesis, inspired by the success of WaveNet in multi-subject speech synthesis. However, it is nontrivial to adapt WaveNet to handle high-dimensional and physically constrained motion data. To this end, we add an encoder and a decoder to the WaveNet architecture to translate the motion data into features and back into predicted motions. We also add 1D convolution layers that take the skeleton configuration as input to model skeleton variations across different subjects. As a result, our network scales well to large-scale motion datasets across multiple subjects and supports various applications, such as random and controllable motion synthesis, motion denoising, and motion completion, in a unified way. Complex motions, such as punching, kicking, and kicking while punching, are also handled well. Moreover, our network can synthesize motions for novel skeletons not present in the training dataset. After fine-tuning the network with a small amount of motion data for a novel skeleton, it is able to capture the personalized style implied in the motion and generate high-quality motions for that skeleton. Thus, it has the potential to be used as a pre-trained network in few-shot learning for motion modeling and synthesis. Experimental results show that our model effectively handles variations in skeleton configuration and is fast enough to synthesize different types of motions online. We also performed user studies verifying that the quality of motions generated by our network is superior to that of state-of-the-art human motion synthesis methods.
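The core building block, a convolution that never peeks at future frames, is compact in PyTorch. The sketch below shows one such layer plus the idea of concatenating a skeleton descriptor channel-wise; all dimensions and names are our own illustrative assumptions, not the paper's exact architecture.

import torch
import torch.nn as nn

class CausalConv1d(nn.Conv1d):
    """1D convolution that only sees past and current frames (WaveNet-style)."""
    def __init__(self, c_in, c_out, k, dilation=1):
        super().__init__(c_in, c_out, k, dilation=dilation,
                         padding=(k - 1) * dilation)
        self.trim = (k - 1) * dilation

    def forward(self, x):
        out = super().forward(x)
        # drop the tail that would otherwise leak future information
        return out[..., :-self.trim] if self.trim else out

# hypothetical shapes: 63-D poses over 120 frames, 16-D skeleton descriptor
pose = torch.randn(2, 63, 120)
skel = torch.randn(2, 16, 120)         # skeleton configuration, repeated over time
enc = CausalConv1d(63 + 16, 128, k=3, dilation=2)
feat = enc(torch.cat([pose, skel], dim=1))
print(feat.shape)                      # torch.Size([2, 128, 120])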