
Hair-GANs: Recovering 3D Hair Structure from a Single Image

Posted by Meng Zhang
Publication date: 2018
Research field: Informatics Engineering
Paper language: English





We introduce Hair-GANs, an architecture of generative adversarial networks, to recover the 3D hair structure from a single image. The goal of our networks is to build a parametric transformation from 2D hair maps to 3D hair structure. The 3D hair structure is represented as a 3D volumetric field which encodes both the occupancy and the orientation information of the hair strands. Given a single hair image, we first align it with a bust model and extract a set of 2D maps encoding the hair orientation information in 2D, along with the bust depth map, to feed into our Hair-GANs. With our generator network, we compute the 3D volumetric field as the structure guidance for the final hair synthesis. The modeling results not only resemble the hair in the input image but also possess many vivid details in other views. The efficacy of our method is demonstrated on a variety of hairstyles and through comparisons with prior art.
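To make the 2D-to-3D mapping concrete, here is a minimal PyTorch-style sketch of the generator's interface only, assuming four stacked input maps (2D orientation maps plus bust depth) and a 32³ output volume; the layer counts, channel sizes, and resolutions are illustrative placeholders, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class HairVolumeGenerator(nn.Module):
    """Maps stacked 2D hair maps (e.g. orientation maps + bust depth)
    to a 3D volumetric field with 4 channels per voxel: occupancy (1)
    and a 3D orientation vector (3). All sizes are assumptions."""
    def __init__(self, in_maps=4):
        super().__init__()
        # 2D encoder: compress the input maps into a feature vector.
        self.encoder = nn.Sequential(
            nn.Conv2d(in_maps, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Lift the feature vector to a coarse 4x4x4 seed volume ...
        self.fc = nn.Linear(64, 64 * 4 * 4 * 4)
        # ... then upsample it to the output field resolution.
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(16, 4, 4, stride=2, padding=1),
        )

    def forward(self, maps_2d):
        z = self.encoder(maps_2d)                 # (B, 64)
        seed = self.fc(z).view(-1, 64, 4, 4, 4)   # coarse seed volume
        return self.decoder(seed)                 # (B, 4, 32, 32, 32)

gen = HairVolumeGenerator()
maps = torch.randn(1, 4, 256, 256)  # e.g. 3 orientation maps + 1 bust depth map
field = gen(maps)
occupancy, orientation = field[:, :1], field[:, 1:]
print(occupancy.shape, orientation.shape)  # (1,1,32,32,32) (1,3,32,32,32)
```

In the paper, this volumetric field is not the final output; it serves as structure guidance from which hair strands are synthesized afterwards.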




Read also

We present DeepSketchHair, a deep learning based tool for interactive modeling of 3D hair from 2D sketches. Given a 3D bust model as reference, our sketching system takes as input a user-drawn sketch (consisting of a hair contour and a few strokes indicating the hair growing direction within the hair region), and automatically generates a 3D hair model which matches the input sketch both globally and locally. The key enablers of our system are two carefully designed neural networks, namely S2ONet, which converts an input sketch to a dense 2D hair orientation field, and O2VNet, which maps the 2D orientation field to a 3D vector field. Our system also supports hair editing with additional sketches in new views. This is enabled by another deep neural network, V2VNet, which updates the 3D vector field with respect to the new sketches. All three networks are trained with synthetic data generated from a 3D hairstyle database. We demonstrate the effectiveness and expressiveness of our tool on a variety of hairstyles and also compare our method with prior art.
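A schematic of how the three networks could chain together; every module below is a single-layer stand-in chosen only to make the sketch-to-volume data flow (S2ONet, then O2VNet, with V2VNet for edits) executable, and all tensor sizes are assumptions rather than the paper's.

```python
import torch
import torch.nn as nn

# Stand-in modules: the real S2ONet/O2VNet/V2VNet are deep
# encoder-decoders; each is one layer here just to run the pipeline.
s2onet = nn.Conv2d(1, 2, 3, padding=1)       # sketch -> 2D orientation (cos, sin)
o2vnet = nn.Conv2d(2, 3 * 16, 3, padding=1)  # 2D field -> 16-deep 3D vector slab
v2vnet = nn.Conv3d(3 + 1, 3, 3, padding=1)   # (old field, edit cue) -> updated field

sketch = torch.rand(1, 1, 64, 64)            # user-drawn contour + direction strokes
orient2d = s2onet(sketch)                    # (1, 2, 64, 64)
vec3d = o2vnet(orient2d).view(1, 3, 16, 64, 64)  # (1, 3, D, H, W) 3D vector field

edit_cue = torch.rand(1, 1, 16, 64, 64)      # new-view sketch lifted to the volume (assumption)
vec3d_edited = v2vnet(torch.cat([vec3d, edit_cue], dim=1))
print(vec3d_edited.shape)                    # torch.Size([1, 3, 16, 64, 64])
```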
Recent deep generative models allow real-time generation of hair images from sketch inputs. Existing solutions often require a user-provided binary mask to specify a target hair shape. This not only costs users extra labor but also fails to capture complicated hair boundaries. Those solutions usually encode hair structures via orientation maps, which, however, are not very effective at encoding complex structures. We observe that colored hair sketches already implicitly define target hair shapes as well as hair appearance and are more flexible to depict hair structures than orientation maps. Based on these observations, we present SketchHairSalon, a two-stage framework for generating realistic hair images directly from freehand sketches depicting desired hair structure and appearance. At the first stage, we train a network to predict a hair matte from an input hair sketch, with an optional set of non-hair strokes. At the second stage, another network is trained to synthesize the structure and appearance of hair images from the input sketch and the generated matte. To make the networks in the two stages aware of long-term dependency of strokes, we apply self-attention modules to them. To train these networks, we present a new dataset containing thousands of annotated hair sketch-image pairs and corresponding hair mattes. Two efficient methods for sketch completion are proposed to automatically complete repetitive braided parts and hair strokes, respectively, thus reducing the workload of users. Based on the trained networks and the two sketch completion strategies, we build an intuitive interface to allow even novice users to design visually pleasing hair images exhibiting various hair structures and appearance via freehand sketches. The qualitative and quantitative evaluations show the advantages of the proposed system over the existing or alternative solutions.
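The two-stage structure can be sketched as follows; the Stage module is a hypothetical stand-in with one self-attention layer, and only the matte-then-synthesis data flow follows the description above.

```python
import torch
import torch.nn as nn

class Stage(nn.Module):
    """Hypothetical stand-in for one stage: conv features, spatial
    self-attention so distant strokes of one braid can interact
    (the long-term dependency mentioned above), then an output head."""
    def __init__(self, cin, cout):
        super().__init__()
        self.conv = nn.Conv2d(cin, 64, 3, padding=1)
        self.attn = nn.MultiheadAttention(64, 4, batch_first=True)
        self.head = nn.Conv2d(64, cout, 3, padding=1)

    def forward(self, x):
        f = self.conv(x)                    # (B, 64, H, W)
        b, c, h, w = f.shape
        seq = f.flatten(2).transpose(1, 2)  # (B, H*W, 64) token sequence
        seq, _ = self.attn(seq, seq, seq)
        f = seq.transpose(1, 2).view(b, c, h, w)
        return self.head(f)

matte_net = Stage(cin=3, cout=1)        # stage 1: colored sketch -> hair matte
render_net = Stage(cin=3 + 1, cout=3)   # stage 2: sketch + matte -> hair image

sketch = torch.rand(1, 3, 32, 32)
matte = torch.sigmoid(matte_net(sketch))
image = render_net(torch.cat([sketch, matte], dim=1))
print(matte.shape, image.shape)  # (1,1,32,32) (1,3,32,32)
```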
In this paper, we propose a generic neural-based hair rendering pipeline that can synthesize photo-realistic images from virtual 3D hair models. Unlike existing supervised translation methods that require model-level similarity to preserve consistent structure representation for both real images and fake renderings, our method adopts an unsupervised solution to work on arbitrary hair models. The key component of our method is a shared latent space to encode appearance-invariant structure information of both domains, which generates realistic renderings conditioned on extra appearance inputs. This is achieved by domain-specific pre-disentangled structure representation, partially shared domain encoder layers, and a structure discriminator. We also propose a simple yet effective temporal conditioning method to enforce consistency for video sequence generation. We demonstrate the superiority of our method by testing it on a large number of portraits and comparing it with alternative baselines and state-of-the-art unsupervised image translation methods.
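A rough sketch of the shared-latent-space idea under stated assumptions: each domain keeps its own first encoder layer, both feed partially shared layers so structure codes live in one space, and a structure discriminator sees only the shared code; every layer and size below is invented for illustration.

```python
import torch
import torch.nn as nn

real_enc = nn.Conv2d(3, 16, 3, padding=1)   # domain-specific branch: real photos
fake_enc = nn.Conv2d(3, 16, 3, padding=1)   # domain-specific branch: CG renderings
shared = nn.Sequential(                     # partially shared encoder layers
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
)
struct_disc = nn.Sequential(                # tries to guess a code's domain
    nn.Conv2d(32, 1, 3, padding=1), nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)
decoder = nn.Conv2d(32 + 8, 3, 3, padding=1)  # structure code + appearance -> image

photo = torch.rand(1, 3, 64, 64)    # real portrait photo
render = torch.rand(1, 3, 64, 64)   # rendering of a 3D hair model

z_real = shared(real_enc(photo))    # appearance-invariant structure codes
z_fake = shared(fake_enc(render))
d_real, d_fake = struct_disc(z_real), struct_disc(z_fake)  # adversarial domain logits

appearance = torch.rand(1, 8, 64, 64)  # extra appearance condition (assumption)
out = decoder(torch.cat([z_fake, appearance], dim=1))
print(out.shape)  # torch.Size([1, 3, 64, 64])
```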
Lam Hui, Daniel Kabat, Xinyu Li (2019)
We show that a black hole surrounded by scalar dark matter develops scalar hair. This is the generalization of a phenomenon pointed out by Jacobson, that a minimally coupled scalar with a non-trivial time dependence far away from the black hole would endow the black hole with hair. In our case, the time dependence arises from the oscillation of a scalar field with a non-zero mass. We systematically explore the scalar profile around the black hole for different scalar masses. In the small mass limit, the scalar field has a $1/r$ component at large radius $r$, consistent with Jacobson's result. In the large mass limit (with the Compton wavelength of order the horizon size or smaller), the scalar field has a $1/r^{3/4}$ profile yielding a pile-up close to the horizon, while distinctive nodes occur for intermediate masses. Thus, the dark matter profile around a black hole, while challenging to measure, contains information about the dark matter particle mass. As an application, we consider the case of the supermassive black hole at the center of M87, recently imaged by the Event Horizon Telescope. Its horizon size is roughly the Compton wavelength of a scalar particle of mass $10^{-20}$ eV. We consider the implications of the expected scalar pile-up close to the horizon, for fuzzy dark matter at a mass of $10^{-20}$ eV or below.
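As a quick back-of-envelope check of the quoted mass scale (not spelled out in the abstract): the reduced Compton wavelength of a $10^{-20}$ eV scalar does match the Schwarzschild radius of the M87 black hole, taking its mass as $M \approx 6.5\times10^{9}\,M_\odot$:

```latex
% Reduced Compton wavelength vs. Schwarzschild radius of M87*.
\lambda_C = \frac{\hbar c}{m c^2}
          = \frac{1.97\times10^{-7}\,\mathrm{eV\,m}}{10^{-20}\,\mathrm{eV}}
          \approx 2\times10^{13}\,\mathrm{m},
\qquad
r_s = \frac{2GM}{c^2}
    \approx 2.95\,\mathrm{km}\times 6.5\times10^{9}
    \approx 1.9\times10^{13}\,\mathrm{m}.
```

Since $\lambda_C \sim r_s$, a $10^{-20}$ eV scalar sits exactly at the crossover where the pile-up behavior described above becomes relevant for M87.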
The membrane paradigm posits that black hole microstates are dynamical degrees of freedom associated with a physical membrane vanishingly close to the black hole's event horizon. The soft hair paradigm postulates that black holes can be equipped with zero-energy charges associated with residual diffeomorphisms that label near-horizon degrees of freedom. In this essay we argue that the latter paradigm implies the former. More specifically, we exploit suitable near-horizon boundary conditions that lead to an algebra of 'soft hair' charges containing infinite copies of the Heisenberg algebra, associated with area-preserving shear deformations of black hole horizons. We employ the near-horizon soft hair and its Heisenberg algebra to provide a formulation of the membrane paradigm and show how it accounts for black hole entropy.
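For orientation, "infinite copies of the Heisenberg algebra" has the schematic form below, with one canonically conjugate pair per horizon-deformation mode $n$; the index structure and normalization are illustrative, not the essay's exact charges:

```latex
% One Heisenberg pair (X_n, P_n) per mode of the area-preserving
% horizon deformations; all pairs mutually commute.
[X_n, P_m] = i\,\delta_{n,m}, \qquad
[X_n, X_m] = [P_n, P_m] = 0, \qquad n, m \in \mathbb{Z}.
```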