Creating 3D human avatars with realistic clothing that moves naturally currently requires an artist. Despite progress on 3D scanning and modeling of human bodies, there is still no technology that can easily turn a static scan into an animatable avatar. Automating the creation of such avatars would enable many applications in games, social networking, animation, and AR/VR, to name a few. The key problem is one of representation. Standard 3D meshes are widely used in modeling the minimally-clothed body but do not readily capture the complex topology of clothing. Recent interest has shifted to implicit surface models for this task, but they are computationally heavy and lack compatibility with existing 3D tools. What is needed is a 3D representation that can capture varied topology at high resolution and that can be learned from data. We argue that this representation has been with us all along -- the point cloud. Point clouds have properties of both implicit and explicit representations that we exploit to model 3D garment geometry on a human body. We train a neural network with a novel local clothing geometric feature to represent the shape of different outfits. The network is trained from 3D point clouds of many types of clothing, on many bodies, in many poses, and learns to model pose-dependent clothing deformations. The geometry feature can be optimized to fit a previously unseen scan of a person in clothing, enabling the scan to be reposed realistically. Our model demonstrates superior quantitative and qualitative results in both multi-outfit modeling and unseen outfit animation. The code is available for research purposes.
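
As a concrete (hypothetical) illustration of this idea, the sketch below shows the core pattern in PyTorch: each point sampled on the posed body carries a learnable local geometry feature, and a shared MLP regresses a pose-dependent displacement from the point and its feature; the displaced points form the clothed point cloud, and the features can be optimized to fit an unseen scan. All layer sizes and names are assumptions, not the authors' implementation.

    import torch
    import torch.nn as nn

    class PointDisplacementNet(nn.Module):
        """Maps (body point, local clothing feature) to a displaced clothed point."""
        def __init__(self, feat_dim=64, hidden=256):
            super().__init__()
            self.mlp = nn.Sequential(
                nn.Linear(3 + feat_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, 3),  # per-point displacement
            )

        def forward(self, body_pts, geom_feat):
            # body_pts: (B, N, 3) points on the posed minimally-clothed body
            # geom_feat: (B, N, feat_dim) local clothing geometry features
            disp = self.mlp(torch.cat([body_pts, geom_feat], dim=-1))
            return body_pts + disp  # clothed point cloud

    net = PointDisplacementNet()
    pts = torch.rand(1, 4096, 3)
    # Features are left optimizable so they can be fit to a previously unseen scan.
    feat = torch.zeros(1, 4096, 64, requires_grad=True)
    clothed = net(pts, feat)  # (1, 4096, 3)
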
In this paper, we aim to create generalizable and controllable neural signed distance fields (SDFs) that represent clothed humans from monocular depth observations. Recent advances in deep learning, especially neural implicit representations, have enabled human shape reconstruction and controllable avatar generation from different sensor inputs. However, to generate realistic cloth deformations from novel input poses, watertight meshes or dense full-body scans are usually needed as inputs. Furthermore, due to the difficulty of effectively modeling pose-dependent cloth deformations for diverse body shapes and cloth types, existing approaches resort to per-subject/cloth-type optimization from scratch, which is computationally expensive. In contrast, we propose an approach that can quickly generate realistic clothed human avatars, represented as controllable neural SDFs, given only monocular depth images. We achieve this by using meta-learning to learn an initialization of a hypernetwork that predicts the parameters of neural SDFs. The hypernetwork is conditioned on human poses and represents a clothed neural avatar that deforms non-rigidly according to the input poses. Meanwhile, it is meta-learned to effectively incorporate priors of diverse body shapes and cloth types, and thus can be fine-tuned much faster than models trained from scratch. We qualitatively and quantitatively show that our approach outperforms state-of-the-art approaches that require complete meshes as inputs, while our approach requires only depth frames and runs orders of magnitude faster. Furthermore, we demonstrate that our meta-learned hypernetwork is very robust, being the first to generate avatars with realistic dynamic cloth deformations given as few as 8 monocular depth frames.
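
The hypernetwork idea can be made concrete with a small sketch: a network maps the pose to the weights of a tiny SDF MLP, so querying the predicted SDF gives a pose-dependent implicit surface. This is a minimal sketch under assumed dimensions (SMPL-style 72-dimensional pose code, illustrative layer sizes); the meta-learned initialization and fine-tuning loop are omitted.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    POSE_DIM, HIDDEN, SDF_HIDDEN = 72, 256, 64  # assumed sizes

    class HyperSDF(nn.Module):
        """Predicts the parameters of a 3 -> SDF_HIDDEN -> 1 SDF MLP from a pose code."""
        def __init__(self):
            super().__init__()
            self.n_w1, self.n_b1 = 3 * SDF_HIDDEN, SDF_HIDDEN
            self.n_w2, self.n_b2 = SDF_HIDDEN, 1
            n_params = self.n_w1 + self.n_b1 + self.n_w2 + self.n_b2
            self.hyper = nn.Sequential(
                nn.Linear(POSE_DIM, HIDDEN), nn.ReLU(),
                nn.Linear(HIDDEN, n_params),
            )

        def forward(self, pose, pts):
            # pose: (POSE_DIM,); pts: (N, 3) query points -> (N,) signed distances
            p = self.hyper(pose)
            w1 = p[:self.n_w1].view(SDF_HIDDEN, 3)
            b1 = p[self.n_w1:self.n_w1 + self.n_b1]
            o = self.n_w1 + self.n_b1
            w2 = p[o:o + self.n_w2].view(1, SDF_HIDDEN)
            b2 = p[o + self.n_w2:]
            h = F.relu(F.linear(pts, w1, b1))
            return F.linear(h, w2, b2).squeeze(-1)

    sdf = HyperSDF()
    dists = sdf(torch.zeros(POSE_DIM), torch.rand(1024, 3))  # (1024,)
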
Learning to model and reconstruct humans in clothing is challenging due to articulation, non-rigid deformation, and varying clothing types and topologies. To enable learning, the choice of representation is key. Recent work uses neural networks to parameterize local surface elements. This approach captures locally coherent geometry and non-planar details, can deal with varying topology, and does not require registered training data. However, naively using such methods to model 3D clothed humans fails to capture fine-grained local deformations and generalizes poorly. To address this, we present three key innovations: First, we deform surface elements based on a human body model such that large-scale deformations caused by articulation are explicitly separated from topological changes and local clothing deformations. Second, we address the limitations of existing neural surface elements by regressing local geometry from local features, significantly improving the expressiveness. Third, we learn a pose embedding on a 2D parameterization space that encodes posed body geometry, improving generalization to unseen poses by reducing non-local spurious correlations. We demonstrate the efficacy of our surface representation by learning models of complex clothing from point clouds. The clothing can change topology and deviate from the topology of the body. Once learned, we can animate previously unseen motions, producing high-quality point clouds, from which we generate realistic images with neural rendering. We assess the importance of each technical contribution and show that our approach outperforms the state-of-the-art methods in terms of reconstruction accuracy and inference time. The code is available for research purposes at https://qianlim.github.io/SCALE.
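
To make the surface-element idea concrete, here is a minimal sketch (assumed shapes and names, not the SCALE code): each element has its own local feature; one MLP shared across elements maps in-patch coordinates plus that feature to points in the element's local frame, and per-element rigid transforms driven by the body model place the patches, separating articulation from local deformation.

    import torch
    import torch.nn as nn

    class LocalPatchDecoder(nn.Module):
        """Decodes each surface element from its local feature and in-patch coords."""
        def __init__(self, feat_dim=32, hidden=128):
            super().__init__()
            self.mlp = nn.Sequential(
                nn.Linear(2 + feat_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, 3),
            )

        def forward(self, uv, feats, R, t):
            # uv: (K, S, 2) sample coords inside each of K elements
            # feats: (K, feat_dim) local features
            # R: (K, 3, 3), t: (K, 3) body-driven rigid frame per element
            f = feats[:, None, :].expand(-1, uv.shape[1], -1)
            local = self.mlp(torch.cat([uv, f], dim=-1))  # points in local frames
            return torch.einsum('kij,ksj->ksi', R, local) + t[:, None, :]

    K, S = 256, 16  # number of elements, samples per element (assumed)
    dec = LocalPatchDecoder()
    cloud = dec(torch.rand(K, S, 2), torch.rand(K, 32),
                torch.eye(3).repeat(K, 1, 1), torch.zeros(K, 3))  # (K, S, 3)
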
We present SCANimate, an end-to-end trainable framework that takes raw 3D scans of a clothed human and turns them into an animatable avatar. These avatars are driven by pose parameters and have realistic clothing that moves and deforms naturally. SCANimate does not rely on a customized mesh template or surface mesh registration. We observe that fitting a parametric 3D body model, like SMPL, to a clothed human scan is tractable while surface registration of the body topology to the scan is often not, because clothing can deviate significantly from the body shape. We also observe that articulated transformations are invertible, resulting in geometric cycle consistency in the posed and unposed shapes. These observations lead us to a weakly supervised learning method that aligns scans into a canonical pose by disentangling articulated deformations without template-based surface registration. Furthermore, to complete missing regions in the aligned scans while modeling pose-dependent deformations, we introduce a locally pose-aware implicit function that learns to complete and model geometry with learned pose correctives. In contrast to commonly used global pose embeddings, our local pose conditioning significantly reduces long-range spurious correlations and improves generalization to unseen poses, especially when training data is limited. Our method can be applied to pose-aware appearance modeling to generate a fully textured avatar. We demonstrate our approach on various clothing types with different amounts of training data, outperforming existing solutions and other variants in terms of fidelity and generality in every setting. The code is available at https://scanimate.is.tue.mpg.de.
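
The cycle-consistency observation can be sketched in a few lines (illustrative only, not the SCANimate code): under linear blend skinning with invertible joint transforms, unposing a scan point with predicted skinning weights and re-posing it should recover the original point, giving a self-supervised loss that needs no surface registration.

    import torch

    def lbs(points, weights, joint_tfs):
        # points: (N, 3), weights: (N, J), joint_tfs: (J, 4, 4) -> (N, 3)
        T = torch.einsum('nj,jab->nab', weights, joint_tfs)  # blended per-point transform
        homo = torch.cat([points, torch.ones(points.shape[0], 1)], dim=-1)
        return torch.einsum('nab,nb->na', T, homo)[:, :3]

    def cycle_loss(scan_pts, w_posed, w_canonical, joint_tfs):
        inv_tfs = torch.inverse(joint_tfs)
        canonical = lbs(scan_pts, w_posed, inv_tfs)       # unpose the scan
        reposed = lbs(canonical, w_canonical, joint_tfs)  # pose it back
        return ((reposed - scan_pts) ** 2).sum(-1).mean()

    N, J = 2048, 24  # points, SMPL-style joint count
    tfs = torch.eye(4).repeat(J, 1, 1)
    w = torch.softmax(torch.rand(N, J), dim=-1)
    print(cycle_loss(torch.rand(N, 3), w, w, tfs))  # 0 for identity transforms

Note that the weights used for unposing and reposing are predicted by separate fields in the two spaces, which is exactly why a consistency penalty between them is informative.
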
Quasi-two-dimensional quantum magnetism is clearly highly correlated with superconducting ground states in cuprate-based high-T$_c$ superconductivity. Three-dimensional, commensurate long-range magnetic order in La$_2$CuO$_4$ quickly evolves to quasi-two-dimensional, incommensurate correlations on doping with mobile holes, and superconducting ground states follow for x as small as 0.05 in the La$_{2-x}$Sr$_x$/Ba$_x$CuO$_4$ family of superconductors. It has long been known that the onset of superconducting ground states in these systems is coincident with a remarkable rotation of the incommensurate spin order from diagonal stripes below x = 0.05 to parallel stripes above. However, little is known about the spin correlations at optimal and high doping levels, where the dome of superconductivity comes to an end. Here we present new elastic and inelastic neutron scattering measurements on single crystals of La$_{1.6-x}$Nd$_{0.4}$Sr$_x$CuO$_4$ with x = 0.125, 0.19, 0.24 and 0.26, and show that two-dimensional, quasi-static, parallel spin stripes onset at temperatures such that the parallel spin stripe phase envelops all superconducting ground states in this system. Parallel spin stripes stretch across 0.05 < x < 0.26, with rapidly decreasing moment size and onset temperatures for x > 0.125. We also show that the low-energy, parallel spin stripe fluctuations for optimally doped x = 0.19 display dynamic spectral weight which grows with decreasing temperature and saturates below its superconducting T$_c$. The elastic order parameter for x = 0.19 also shows plateau behavior coincident with the onset of superconductivity. This set of observations asserts the foundational role played by two-dimensional parallel spin stripe order and fluctuations in high-T$_c$ cuprate superconductivity.
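
For orientation, the diagonal-to-parallel rotation can be written in the standard La-214 stripe notation (a conventional parametrization added here for clarity, not taken from the abstract itself): with incommensurability $\delta$ in tetragonal reciprocal lattice units, the magnetic peaks sit at

    \[
    \mathbf{Q}_{\mathrm{diag}} = \left(\tfrac{1}{2} \pm \delta,\ \tfrac{1}{2} \pm \delta\right),
    \qquad
    \mathbf{Q}_{\mathrm{para}} = \left(\tfrac{1}{2} \pm \delta,\ \tfrac{1}{2}\right)\ \text{and}\ \left(\tfrac{1}{2},\ \tfrac{1}{2} \pm \delta\right),
    \]

so the transition at x = 0.05 corresponds to a 45-degree rotation of the incommensurate peaks about $(\tfrac{1}{2}, \tfrac{1}{2})$.
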
One branch of the La-214 family of cuprate superconductors, La$_{1.6-x}$Nd$_{0.4}$Sr$_x$CuO$_4$ (Nd-LSCO), has been of significant and sustained interest, in large part because it displays the full complexity of the phase diagram for canonical hole-doped, high-T$_c$ superconductivity, while also displaying relatively low superconducting critical temperatures. The low superconducting T$_c$ values imply that experimentally accessible magnetic fields can suppress the superconductivity to zero temperature. In particular, this has enabled various transport and thermodynamic studies of the T = 0 ground state in Nd-LSCO, free of superconductivity, across the critical doping p* = 0.23 where the pseudogap phase ends. The strong dependence of its superconducting properties on its crystal symmetry has itself motivated careful studies of the Nd-LSCO structural phase diagram. This paper provides a systematic study and summary of the materials preparation and characterization of both single-crystal and polycrystalline samples of Nd-LSCO. Single-phase polycrystalline samples with x spanning the range from 0.01 to 0.40 have been synthesized, and large single crystals of Nd-LSCO for select x across the region (0.07, 0.12, 0.17, 0.19, 0.225, 0.24, and 0.26) were grown by the optical floating zone method. Systematic neutron and X-ray diffraction studies on these samples were performed at both low and room temperatures, 10 K and 300 K, respectively. These studies allowed us to follow the various structural phase transitions and propose an updated structural phase diagram for Nd-LSCO. In particular, we found that the low-temperature tetragonal (LTT) phase ends at a critical doping p$_{LTT}$ = 0.255(5), clearly separated from p*.
High-fidelity digital 3D environments have been proposed in recent years; however, it remains extremely challenging to automatically equip such environments with realistic human bodies. Existing work utilizes images, depth, or semantic maps to represent the scene, and parametric human models to represent 3D bodies. While straightforward, their generated human-scene interactions often lack naturalness and physical plausibility. Our key observation is that humans interact with the world through body-scene contact. To synthesize realistic human-scene interactions, it is essential to effectively represent the physical contact and proximity between the body and the world. To that end, we propose a novel interaction generation method, named PLACE (Proximity Learning of Articulation and Contact in 3D Environments), which explicitly models the proximity between the human body and the 3D scene around it. Specifically, given a set of basis points on a scene mesh, we leverage a conditional variational autoencoder to synthesize the minimum distances from the basis points to the human body surface. The generated proximal relationship indicates which region of the scene is in contact with the person. Furthermore, based on such synthesized proximity, we are able to effectively obtain expressive 3D human bodies that interact with the 3D scene naturally. Our perceptual study shows that PLACE significantly improves on the state-of-the-art method, approaching the realism of real human-scene interaction. We believe our method makes an important step towards the fully automatic synthesis of realistic 3D human bodies in 3D scenes. The code and model are available for research at https://sanweiliti.github.io/PLACE/PLACE.html.
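
The two core steps can be sketched briefly (assumed sizes and names, not the released PLACE code): first encode the scene with a basis point set, i.e. the minimum distance from each fixed basis point to the scene geometry; then let a conditional VAE generate the matching basis-point-to-body distances, which encode where the body contacts the scene.

    import torch
    import torch.nn as nn

    def bps_encode(basis, verts):
        # basis: (M, 3), verts: (V, 3) -> (M,) min distance from each basis point
        return torch.cdist(basis, verts).min(dim=1).values

    class ContactCVAE(nn.Module):
        """Generates body-to-basis-point distances conditioned on scene BPS features."""
        def __init__(self, n_basis=1024, z_dim=32, hidden=256):
            super().__init__()
            self.enc = nn.Sequential(nn.Linear(2 * n_basis, hidden), nn.ReLU(),
                                     nn.Linear(hidden, 2 * z_dim))
            self.dec = nn.Sequential(nn.Linear(z_dim + n_basis, hidden), nn.ReLU(),
                                     nn.Linear(hidden, n_basis))

        def forward(self, body_dists, scene_feat):
            mu, logvar = self.enc(torch.cat([body_dists, scene_feat], -1)).chunk(2, -1)
            z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
            return self.dec(torch.cat([z, scene_feat], -1)), mu, logvar

    basis = torch.rand(1024, 3)
    scene_feat = bps_encode(basis, torch.rand(5000, 3)).unsqueeze(0)
    model = ContactCVAE()
    recon, mu, logvar = model(torch.rand(1, 1024), scene_feat)  # generated proximity
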
Three-dimensional human body models are widely used in the analysis of human pose and motion. Existing models, however, are learned from minimally-clothed 3D scans and thus do not generalize to the complexity of dressed people in common images and videos. Additionally, current models lack the expressive power needed to represent the complex non-linear geometry of pose-dependent clothing shapes. To address this, we learn a generative 3D mesh model of clothed people from 3D scans with varying pose and clothing. Specifically, we train a conditional Mesh-VAE-GAN to learn the clothing deformation from the SMPL body model, making clothing an additional term in SMPL. Our model is conditioned on both pose and clothing type, giving the ability to draw samples of clothing to dress different body shapes in a variety of styles and poses. To preserve wrinkle detail, our Mesh-VAE-GAN extends patchwise discriminators to 3D meshes. Our model, named CAPE, represents global shape and fine local structure, effectively extending the SMPL body model to clothing. To our knowledge, this is the first generative model that directly dresses 3D human body meshes and generalizes to different poses. The model, code and data are available for research purposes at https://cape.is.tue.mpg.de.
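
A heavily simplified sketch of the additive formulation follows (the real CAPE generator is a graph-convolutional Mesh-VAE-GAN with patchwise discriminators; the sizes and conditioning format here are assumptions): a decoder maps a latent code plus pose and clothing-type conditions to per-vertex displacements that are added to the SMPL vertices.

    import torch
    import torch.nn as nn

    N_VERTS, Z, POSE, CLO = 6890, 64, 72, 4  # SMPL vertex count; rest assumed

    class ClothingDecoder(nn.Module):
        """Decodes per-vertex clothing displacements from (z, pose, clothing type)."""
        def __init__(self, hidden=512):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(Z + POSE + CLO, hidden), nn.ReLU(),
                nn.Linear(hidden, N_VERTS * 3),
            )

        def forward(self, z, pose, clothing_type):
            disp = self.net(torch.cat([z, pose, clothing_type], -1))
            return disp.view(-1, N_VERTS, 3)

    dec = ClothingDecoder()
    body_verts = torch.rand(1, N_VERTS, 3)  # posed SMPL vertices
    z = torch.randn(1, Z)                   # sample a clothing style
    clo = torch.eye(CLO)[:1]                # one-hot clothing type
    clothed_verts = body_verts + dec(z, torch.zeros(1, POSE), clo)
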
We present detailed calculations on resonances in rotationally and spin-orbit inelastic scattering of OH ($X\,^2\Pi$, $j=3/2$, $F_1$, $f$) radicals with He and Ne atoms. We calculate new \emph{ab initio} potential energy surfaces for OH-He, and the cross sections derived from these surfaces compare favorably with the recent crossed beam scattering experiment of Kirste \emph{et al.} [Phys. Rev. A \textbf{82}, 042717 (2010)]. We identify both shape and Feshbach resonances in the integral and differential state-to-state scattering cross sections, and we discuss the prospects for experimentally observing scattering resonances using Stark-decelerated beams of OH radicals.