Virtual try-on is a promising application of computer graphics and human-computer interaction that can have a profound real-world impact, especially during a pandemic. Existing image-based works try to synthesize a try-on image from a single image of a target garment, but this inherently limits their ability to react to possible interactions: it is difficult to reproduce the changes in wrinkles caused by variations in pose and body size, as well as the pulling and stretching of the garment by hand. In this paper, we propose an alternative per-garment capture and synthesis workflow that handles such rich interactions by training the model on many systematically captured images. Our workflow is composed of two parts: garment capturing and clothed-person image synthesis. We designed an actuated mannequin and an efficient capturing process that collects the detailed deformations of the target garments under diverse body sizes and poses. Furthermore, we use a custom-designed measurement garment and capture paired images of the measurement garment and the target garments. We then learn a mapping between the measurement garment and the target garments using deep image-to-image translation, so that customers can try on the target garments interactively during online shopping.
I-Chao Shen, Bing-Yu Chen (2021)
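The abstract names deep image-to-image translation between paired measurement-garment and target-garment images but does not specify the network. The following is a minimal sketch of one plausible setup, assuming a pix2pix-style encoder-decoder trained with an L1 reconstruction loss; the architecture and all names here are illustrative, not the paper's.

# Minimal sketch of paired image-to-image translation (PyTorch); a toy
# encoder-decoder stands in for a full U-Net generator.
import torch
import torch.nn as nn

class TinyTranslator(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = TinyTranslator()
opt = torch.optim.Adam(model.parameters(), lr=2e-4)

# One training step on a (measurement garment, target garment) image pair;
# random tensors stand in for captured frames.
measurement = torch.randn(1, 3, 256, 256)
target = torch.randn(1, 3, 256, 256)
loss = nn.functional.l1_loss(model(measurement), target)
opt.zero_grad()
loss.backward()
opt.step()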
This paper presents a novel deep learning-based approach for automatically vectorizing and synthesizing clipart of man-made objects. Given a raster clipart image and its corresponding object category (e.g., airplanes), the proposed method sequentially generates new layers, each composed of a new closed path filled with a single color. The final result is obtained by compositing all layers together into a vector clipart image that falls into the target category. The proposed approach is based on an iterative generative model that (i) decides whether to continue synthesizing a new layer and (ii) determines the geometry and appearance of the new layer. We formulated a joint loss function for training our generative model, including shape similarity, symmetry, and local curve smoothness losses, as well as a vector-graphics rendering accuracy loss for synthesizing clipart recognizable by humans. We also introduce ClipNet, a collection of man-made object clipart composed of closed-path layers, and two preprocessing tasks designed to clean up and enrich the original raw clipart. To validate the proposed approach, we conducted several experiments and demonstrated its ability to vectorize and synthesize various clipart categories. We envision that our generative model can facilitate efficient and intuitive clipart design for novice users and graphic designers.
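The abstract describes the two decisions of the iterative generative model and the four terms of the joint loss, without giving their exact forms. Below is a hedged sketch of that control flow and loss composition; model.step, model.init_state, and all weights are hypothetical placeholders.

# Sketch of the iterative layer-synthesis loop and the joint loss.
def joint_loss(shape_sim, symmetry, smoothness, render_acc,
               w=(1.0, 0.5, 0.1, 1.0)):
    """Weighted sum of the four loss terms named in the abstract;
    the weights here are placeholders, not the paper's values."""
    return (w[0] * shape_sim + w[1] * symmetry
            + w[2] * smoothness + w[3] * render_acc)

def synthesize_clipart(model, category, max_layers=16):
    """Iteratively add closed-path layers until the model decides to stop.
    `model.step` is a hypothetical interface: (continue?, layer, state)."""
    layers, state = [], model.init_state(category)
    for _ in range(max_layers):
        keep_going, layer, state = model.step(state)  # (ii) geometry + fill color
        if not keep_going:                            # (i) stop decision
            break
        layers.append(layer)
    return layers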
We present an assistive system for clipart design that provides visual scaffolds from unseen viewpoints. Inspired by the artist's creation process, our system constructs the visual scaffold by first synthesizing a reference 3D shape of the input clipart and rendering it from the desired viewpoint. The critical challenge in constructing this visual scaffold is to generate a reference 3D shape that matches the user's expectation in terms of object sizing and positioning while preserving the geometric style of the input clipart. To address this challenge, we propose a user-assisted curve extrusion method to obtain the reference 3D shape. We render the synthesized reference 3D shape with a consistent style into the visual scaffold. By following the generated visual scaffold, users can efficiently design clipart from their desired viewpoints. A user study conducted with our intuitive user interface and the generated visual scaffolds suggests that users are able to design clipart from different viewpoints while preserving the original geometric style and shape.
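The core geometric operation named above is curve extrusion. As a rough illustration only, the following runnable snippet lifts a closed 2D outline into a simple prism mesh; the paper's user-assisted method is considerably richer, and this function is a hypothetical stand-in.

# Minimal sketch of extruding a closed 2D outline into a 3D prism mesh.
import numpy as np

def extrude_closed_curve(points_2d, depth=1.0):
    """points_2d: (N, 2) ordered vertices of a closed outline.
    Returns (vertices, quad side faces) of the extruded solid."""
    n = len(points_2d)
    front = np.column_stack([points_2d, np.zeros(n)])       # z = 0 ring
    back = np.column_stack([points_2d, np.full(n, depth)])  # z = depth ring
    vertices = np.vstack([front, back])
    # Each side face joins edge (i, i+1) on the front ring to the back ring.
    faces = [(i, (i + 1) % n, (i + 1) % n + n, i + n) for i in range(n)]
    return vertices, faces

verts, faces = extrude_closed_curve(np.array([[0, 0], [1, 0], [1, 1], [0, 1]]))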
Generative image modeling techniques such as GANs demonstrate highly convincing image generation results. However, user interaction is often necessary to obtain the desired results. Existing attempts add interactivity but require either tailored architectures or extra data. We present a human-in-the-optimization method that allows users to directly explore and search the latent vector space of generative image models. Our system provides multiple candidates by sampling the latent vector space, and the user selects the best blending weights within the subspace using multiple sliders. In addition, the user can express their intentions through image editing tools. The system samples latent vectors based on these inputs and presents new candidates to the user iteratively. An advantage of our formulation is that it can be applied to an arbitrary pre-trained model without developing a specialized architecture or collecting extra data. We demonstrate our method with various generative image modeling applications and show superior performance in a comparative user study against the prior art, iGAN.
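The subspace-blending step can be made concrete with a small sketch: the user's slider values mix a handful of sampled latent vectors into one candidate. All names and the 512-dimensional latent size are illustrative assumptions, not the paper's specification.

# Sketch of blending sampled latent vectors with user slider weights.
import numpy as np

def blend_latents(candidates, slider_weights):
    """candidates: (k, d) latent vectors; slider_weights: (k,) user inputs."""
    w = np.asarray(slider_weights, dtype=float)
    w = w / w.sum()          # normalize so the blend stays in the subspace
    return w @ candidates    # convex combination of the candidates

rng = np.random.default_rng(0)
candidates = rng.standard_normal((4, 512))  # e.g., GAN-sized latents
z = blend_latents(candidates, [0.7, 0.1, 0.1, 0.1])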
In recent years, personalized fabrication has received considerable attention because of the widespread use of consumer-level three-dimensional (3D) printers. However, such 3D printers have drawbacks, such as long production times and limited output sizes, which hinder large-scale rapid prototyping. In this paper, for the time- and cost-effective fabrication of large-scale objects, we propose a hybrid 3D fabrication method that combines 3D printing with the Zometool construction set, a compact, sturdy, and reusable structure for infill fabrication. The proposed method significantly reduces fabrication cost and time by printing only thin 3D outer shells. In addition, we design an optimization framework that generates both the Zometool structure and the printed surface partitions by optimizing several criteria, including printability, material cost, and Zometool structure complexity. We demonstrate the effectiveness of the proposed method by fabricating various large-scale 3D models.
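To give a feel for the kind of multi-criteria objective such a framework balances, here is a hedged sketch; the terms, units, and weights are invented for illustration and do not reproduce the paper's formulation.

# Illustrative multi-criteria cost for hybrid shell printing + Zometool infill.
def fabrication_cost(shell_volume, print_rate, material_price, zome_struts,
                     w_time=1.0, w_material=1.0, w_complexity=0.2):
    print_time = shell_volume / print_rate  # only thin shells are printed
    material = shell_volume * material_price
    complexity = zome_struts                # reusable infill structure size
    return (w_time * print_time + w_material * material
            + w_complexity * complexity)

print(fabrication_cost(shell_volume=120.0, print_rate=15.0,
                       material_price=0.05, zome_struts=240))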
Using (casual) images to texture 3D models is a common way to create realistic 3D models, which is an important task in computer graphics. However, if the shape in a casual image does not resemble the target model or the target mapping area, the textured model will look unnatural because the image must be heavily distorted. In this paper, we present a novel texturing and deforming approach that maps the pattern and shape of a casual image onto a 3D model simultaneously, based on an alternating least-squares approach. Through a photogrammetric method, we project the target model onto the source image according to the estimated camera model. The target model is then deformed toward the shape of the source image using a surface-based deformation method while simultaneously minimizing image distortion. These steps are performed iteratively until convergence. Hence, our method achieves texture mapping, shape deformation, and detail preservation at once, and obtains more plausible texture-mapped results than traditional methods.
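The alternating structure can be illustrated with a deliberately tiny 2D toy: alternately fit a similarity transform (standing in for the camera estimate) and damped per-point offsets (standing in for the surface deformation, with a penalty that keeps it small). This is a conceptual stand-in, not the paper's solver.

# Toy alternating least squares on 2D point correspondences.
import numpy as np

def alternating_fit(model_pts, image_pts, lam=10.0, iters=20):
    """model_pts, image_pts: (N, 2) corresponding 2D points."""
    offsets = np.zeros_like(model_pts, dtype=float)
    for _ in range(iters):
        # Step A: fix the deformation, fit scale s and translation (tx, ty).
        p = model_pts + offsets
        A = np.zeros((2 * len(p), 3))
        A[0::2, 0] = p[:, 0]; A[0::2, 1] = 1.0  # x rows: [x, 1, 0]
        A[1::2, 0] = p[:, 1]; A[1::2, 2] = 1.0  # y rows: [y, 0, 1]
        s, tx, ty = np.linalg.lstsq(A, image_pts.ravel(), rcond=None)[0]
        # Step B: fix the transform; damped offset update, where lam
        # penalizes large deformation (detail preservation).
        residual = image_pts - (s * p + np.array([tx, ty]))
        offsets += residual / (s * (1.0 + lam))
    return offsets, (s, tx, ty)

square = np.array([[0., 0.], [1., 0.], [1., 1.], [0., 1.]])
offsets, transform = alternating_fit(square, 2.0 * square + 0.5)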
In the past few years, deep reinforcement learning has been proven to solve problems with complex state spaces, such as video games or board games. The next step for intelligent agents is to generalize between tasks and use prior experience to pick up new skills more quickly. However, most current reinforcement learning algorithms suffer from catastrophic forgetting even when facing a very similar target task. Our approach enables agents to generalize knowledge from a single source task and boosts learning progress with a semi-supervised learning method when facing a new task. We evaluate this approach on Atari games, a popular reinforcement learning benchmark, and show that it outperforms common baselines based on pre-training and fine-tuning.
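For context, the pre-train/fine-tune baseline mentioned above typically reuses a policy network trained on the source game and continues training it on the target game. The sketch below assumes a toy fully connected network and a hypothetical checkpoint path; a real Atari agent would use convolutional layers.

# Sketch of a pre-train / fine-tune transfer baseline (PyTorch).
import torch
import torch.nn as nn

policy = nn.Sequential(
    nn.Flatten(),
    nn.Linear(84 * 84, 256), nn.ReLU(),
    nn.Linear(256, 6),  # one action value per Atari action
)
# policy.load_state_dict(torch.load("source_policy.pt"))  # hypothetical checkpoint

# Optionally freeze the earliest layer to limit catastrophic forgetting.
for param in policy[1].parameters():
    param.requires_grad = False

optimizer = torch.optim.Adam(
    (p for p in policy.parameters() if p.requires_grad), lr=1e-4
)
# ...then continue the standard RL update loop on the target game.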
Qi-Wa refers to the upward curl along the lengths of handscrolls and hanging scrolls, which has troubled Chinese artisans and emperors for as long as the art of painting and calligraphy has existed. This warp is unwelcome not only for aesthetic reasons but also for its potential damage to the fiber and ink. Although it is generally treated as part of the cockling and curling due to climate, mounting procedures, and conservation conditions, we emphasize that the intrinsic curvature incurred from storage is in fact the main cause of Qi-Wa. The Qi-Wa height is determined by experiments to obey scaling relations with the length, width, curvature, and thickness of the scroll, which are supported by molecular dynamics simulations and theoretical derivations. This understanding helps us devise plausible remedies to mitigate Qi-Wa, all of which are tested on real mounted paper and in simulations. Owing to the general nature of this warp, we believe the lessons learnt from studying ancient Chinese scrolls can be applied to modern technologies such as flexible electronic paper and computer screens.
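The abstract states that the Qi-Wa height obeys scaling relations in the scroll's length L, width W, storage curvature \kappa, and thickness t; the exponents are fitted experimentally in the paper and are not restated here, so only the generic power-law form can be written down:

% Generic form of the reported scaling relation; the exponents
% \alpha, \beta, \gamma, \delta are determined experimentally in the paper.
h_{\text{Qi-Wa}} \;\propto\; L^{\alpha}\, W^{\beta}\, \kappa^{\gamma}\, t^{\delta}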
We calculate the ground-state properties of fermionic dipolar atoms or molecules in a one-dimensional double-tube potential using Luttinger liquid theory and density-matrix renormalization-group calculations. When the external field is applied near a magic angle with respect to the double-tube plane, the long-range dipolar interaction can generate a spontaneous correlation between fermions in different tubes, even when the bare intertube tunneling rate is negligibly small. Such interaction-induced correlation strongly enhances the contrast of the interference fringes and can therefore be easily observed in a standard time-of-flight experiment.