
Creating 3D human avatars with realistic clothing that moves naturally currently requires an artist. Despite progress on 3D scanning and modeling of human bodies, there is still no technology that can easily turn a static scan into an animatable avatar. Automating the creation of such avatars would enable many applications in games, social networking, animation, and AR/VR, to name a few. The key problem is one of representation. Standard 3D meshes are widely used in modeling the minimally-clothed body but do not readily capture the complex topology of clothing. Recent interest has shifted to implicit surface models for this task, but they are computationally heavy and lack compatibility with existing 3D tools. What is needed is a 3D representation that can capture varied topology at high resolution and that can be learned from data. We argue that this representation has been with us all along: the point cloud. Point clouds have properties of both implicit and explicit representations that we exploit to model 3D garment geometry on a human body. We train a neural network with a novel local clothing geometric feature to represent the shape of different outfits. The network is trained from 3D point clouds of many types of clothing, on many bodies, in many poses, and learns to model pose-dependent clothing deformations. The geometry feature can be optimized to fit a previously unseen scan of a person in clothing, enabling the scan to be reposed realistically. Our model demonstrates superior quantitative and qualitative results in both multi-outfit modeling and unseen outfit animation. The code is available for research purposes.
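To make the idea above concrete, the following minimal Python sketch (our illustration, not the authors' released code; all class names, variable names, and dimensions are hypothetical) shows how a pose-dependent clothed point cloud could be predicted by displacing points sampled on a posed body using a learnable local garment feature per point:

    # Minimal sketch: an MLP predicts a per-point displacement from a posed body
    # point and a learned local garment geometry feature. Illustrative only.
    import torch
    import torch.nn as nn

    class PointDisplacementNet(nn.Module):
        def __init__(self, feat_dim=64, hidden=256):
            super().__init__()
            # input: 3D location of a posed body point plus its local garment feature
            self.mlp = nn.Sequential(
                nn.Linear(3 + feat_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, 3),  # per-point offset from the body surface
            )

        def forward(self, body_points, garment_feats):
            # body_points: (N, 3) points sampled on the posed body
            # garment_feats: (N, feat_dim) learnable per-point clothing features
            offsets = self.mlp(torch.cat([body_points, garment_feats], dim=-1))
            return body_points + offsets  # clothed point cloud

    # Fitting an unseen scan would optimize garment_feats so that the predicted
    # point cloud matches the scan, e.g. under a Chamfer-style point-set loss.
    net = PointDisplacementNet()
    pts = torch.rand(1024, 3)
    feats = torch.zeros(1024, 64, requires_grad=True)
    clothed = net(pts, feats)
    print(clothed.shape)  # torch.Size([1024, 3])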
Room-temperature two-dimensional (2D) ferromagnetism is highly desired for practical spintronics applications. Recently, 1T-phase CrTe2 (1T-CrTe2) nanosheets of five and more layers have been successfully synthesized, all of which exhibit the properties of ferromagnetic (FM) metals with Curie temperatures around 305 K. However, whether the ferromagnetism is maintained when the nanosheet thickness is continuously reduced to the monolayer limit remains unknown. Here, through first-principles calculations, we explore the evolution of the magnetic properties of 1- to 6-layer CrTe2 nanosheets and find several interesting points. First, unexpectedly, monolayer CrTe2 prefers a zigzag antiferromagnetic (AFM) state whose energy is much lower than that of the FM state. Second, in 2- to 4-layer CrTe2, both the intralayer and interlayer magnetic couplings are AFM. Last, when the number of layers is equal to or greater than five, the intralayer and interlayer couplings become FM. Theoretical analysis reveals that the in-plane lattice contraction of few-layer CrTe2 relative to the bulk is the main factor driving the intralayer AFM-to-FM transition. At the same time, once the intralayer coupling becomes FM, the interlayer coupling concomitantly switches from AFM to FM. Such highly thickness-dependent magnetism provides a new perspective for controlling the magnetic properties of 2D materials.
Learning to model and reconstruct humans in clothing is challenging due to articulation, non-rigid deformation, and varying clothing types and topologies. To enable learning, the choice of representation is the key. Recent work uses neural networks to parameterize local surface elements. This approach captures locally coherent geometry and non-planar details, can deal with varying topology, and does not require registered training data. However, naively using such methods to model 3D clothed humans fails to capture fine-grained local deformations and generalizes poorly. To address this, we present three key innovations: First, we deform surface elements based on a human body model such that large-scale deformations caused by articulation are explicitly separated from topological changes and local clothing deformations. Second, we address the limitations of existing neural surface elements by regressing local geometry from local features, significantly improving the expressiveness. Third, we learn a pose embedding on a 2D parameterization space that encodes posed body geometry, improving generalization to unseen poses by reducing non-local spurious correlations. We demonstrate the efficacy of our surface representation by learning models of complex clothing from point clouds. The clothing can change topology and deviate from the topology of the body. Once learned, we can animate previously unseen motions, producing high-quality point clouds, from which we generate realistic images with neural rendering. We assess the importance of each technical contribution and show that our approach outperforms the state-of-the-art methods in terms of reconstruction accuracy and inference time. The code is available for research purposes at https://qianlim.github.io/SCALE .
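As a rough illustration of the surface-element idea (a Python sketch under our own assumptions, not the released SCALE code), the snippet below regresses each element's local point geometry from a local feature and places it on the posed body with the element's local frame, so that articulation is factored out of what the network must learn:

    # Sketch: each surface element is anchored to a posed body location with a
    # local frame; a shared decoder regresses local point geometry from a local
    # feature (assumed to be sampled from a UV pose map), then maps it to world space.
    import torch
    import torch.nn as nn

    class ElementDecoder(nn.Module):
        def __init__(self, feat_dim=64, pts_per_elem=16, hidden=128):
            super().__init__()
            self.pts_per_elem = pts_per_elem
            self.mlp = nn.Sequential(
                nn.Linear(feat_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, pts_per_elem * 3),  # local coordinates per element
            )

        def forward(self, local_feats, anchors, rotations):
            # local_feats: (K, feat_dim) local features, one per surface element
            # anchors:     (K, 3)        element centers on the posed body
            # rotations:   (K, 3, 3)     local tangent frames of the elements
            local_pts = self.mlp(local_feats).view(-1, self.pts_per_elem, 3)
            # articulation is handled by the body model; only residual local
            # geometry is regressed and then rigidly placed on the body
            world_pts = torch.einsum('kij,kpj->kpi', rotations, local_pts) + anchors[:, None]
            return world_pts.reshape(-1, 3)

    decoder = ElementDecoder()
    K = 200
    pts = decoder(torch.rand(K, 64), torch.rand(K, 3), torch.eye(3).expand(K, 3, 3))
    print(pts.shape)  # (K * 16, 3)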
We present SCANimate, an end-to-end trainable framework that takes raw 3D scans of a clothed human and turns them into an animatable avatar. These avatars are driven by pose parameters and have realistic clothing that moves and deforms naturally. SCANimate does not rely on a customized mesh template or surface mesh registration. We observe that fitting a parametric 3D body model, like SMPL, to a clothed human scan is tractable while surface registration of the body topology to the scan is often not, because clothing can deviate significantly from the body shape. We also observe that articulated transformations are invertible, resulting in geometric cycle consistency in the posed and unposed shapes. These observations lead us to a weakly supervised learning method that aligns scans into a canonical pose by disentangling articulated deformations without template-based surface registration. Furthermore, to complete missing regions in the aligned scans while modeling pose-dependent deformations, we introduce a locally pose-aware implicit function that learns to complete and model geometry with learned pose correctives. In contrast to commonly used global pose embeddings, our local pose conditioning significantly reduces long-range spurious correlations and improves generalization to unseen poses, especially when training data is limited. Our method can be applied to pose-aware appearance modeling to generate a fully textured avatar. We demonstrate our approach on various clothing types with different amounts of training data, outperforming existing solutions and other variants in terms of fidelity and generality in every setting. The code is available at https://scanimate.is.tue.mpg.de.
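The invertibility argument can be illustrated with a small Python sketch of linear blend skinning (a simplification of ours, not the SCANimate implementation): once per-point skinning weights are known, posing and unposing are mutually inverse, which yields a cycle-consistency loss that can supervise the weights without surface registration:

    # Sketch: LBS is invertible per point, so mapping canonical points to the
    # posed space and back should return the original points (cycle consistency).
    import torch

    def lbs(points, weights, joint_transforms):
        # points: (N, 3), weights: (N, J), joint_transforms: (J, 4, 4)
        T = torch.einsum('nj,jab->nab', weights, joint_transforms)  # per-point 4x4
        homo = torch.cat([points, torch.ones(points.shape[0], 1)], dim=-1)
        return torch.einsum('nab,nb->na', T, homo)[:, :3], T

    def cycle_loss(canonical_pts, weights, joint_transforms):
        posed, T = lbs(canonical_pts, weights, joint_transforms)
        # invert the blended per-point transforms to unpose again
        homo = torch.cat([posed, torch.ones(posed.shape[0], 1)], dim=-1)
        unposed = torch.einsum('nab,nb->na', torch.inverse(T), homo)[:, :3]
        return torch.mean((unposed - canonical_pts) ** 2)

    N, J = 512, 24
    pts = torch.rand(N, 3)
    w = torch.softmax(torch.rand(N, J), dim=-1)   # predicted skinning weights
    G = torch.eye(4).repeat(J, 1, 1)              # toy identity joint transforms
    print(cycle_loss(pts, w, G))                  # ~0 by construction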
Recently, an adaptive variational algorithm termed the Adaptive Derivative-Assembled Pseudo-Trotter ansatz Variational Quantum Eigensolver (ADAPT-VQE) was proposed by Grimsley et al. (Nat. Commun. 10, 3007), but the number of measurements required to perform this algorithm scales as O(N^8). In this work, we present an efficient adaptive variational quantum solver of the Schrödinger equation based on ADAPT-VQE together with a reduced density matrix reconstruction approach, which reduces the number of measurements from O(N^8) to O(N^4). This new algorithm is well suited for quantum simulations of chemical systems on near-term noisy intermediate-scale hardware due to its low circuit complexity and reduced measurement cost. Numerical benchmark calculations for small molecules demonstrate that the new algorithm provides an accurate description of ground-state potential energy curves. In addition, we generalize the algorithm to excited states with the variational quantum deflation approach and achieve the same accuracy as in ground-state simulations.
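For readers unfamiliar with the adaptive loop, the following toy Python emulation (a classical matrix-algebra sketch with invented operators, not a hardware or quantum-chemistry implementation) shows the ADAPT-style growth procedure: evaluate the energy gradient of every pool operator, append the operator with the largest gradient, and re-optimize all parameters. On hardware these gradients come from measurements, and the reduced-density-matrix reconstruction described in the abstract is aimed at lowering exactly that measurement cost:

    # Toy classical emulation of the adaptive ansatz-growth loop. All operators
    # and the "Hamiltonian" are random matrices used purely for illustration.
    import numpy as np
    from scipy.optimize import minimize
    from scipy.linalg import expm

    rng = np.random.default_rng(0)
    dim = 8                                             # toy Hilbert-space dimension
    H = rng.normal(size=(dim, dim)); H = (H + H.T) / 2  # toy Hermitian "Hamiltonian"
    pool = []
    for _ in range(6):                                  # toy anti-symmetric operator pool
        M = rng.normal(size=(dim, dim))
        pool.append(M - M.T)
    ref = np.zeros(dim); ref[0] = 1.0                   # reference state

    def state(params, ops):
        psi = ref.copy()
        for t, A in zip(params, ops):
            psi = expm(t * A) @ psi
        return psi

    def energy(params, ops):
        psi = state(params, ops)
        return float(psi @ H @ psi)

    ansatz, params = [], []
    for step in range(4):
        psi = state(params, ansatz)
        # gradient of appending operator A at angle 0 is <psi|[H, A]|psi>
        grads = [abs(psi @ (H @ A - A @ H) @ psi) for A in pool]
        best = int(np.argmax(grads))
        if grads[best] < 1e-6:
            break
        ansatz.append(pool[best])
        params = list(minimize(energy, params + [0.0], args=(ansatz,)).x)
        print(f"step {step}: E = {energy(params, ansatz):.6f}")
    print("exact ground-state energy:", np.linalg.eigvalsh(H)[0])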
Robotic grasping of household objects has made remarkable progress in recent years. Yet, human grasps are still difficult to synthesize realistically. There are several key reasons: (1) the human hand has many degrees of freedom (more than robotic manipulators); (2) the synthesized hand should conform to the surface of the object; and (3) it should interact with the object in a semantically and physically plausible manner. To make progress in this direction, we draw inspiration from the recent progress on learning-based implicit representations for 3D object reconstruction. Specifically, we propose an expressive representation for human grasp modelling that is efficient and easy to integrate with deep neural networks. Our insight is that every point in three-dimensional space can be characterized by its signed distances to the surfaces of the hand and the object, respectively. Consequently, the hand, the object, and the contact area can be represented by implicit surfaces in a common space, in which the proximity between the hand and the object can be modelled explicitly. We name this 3D-to-2D mapping the Grasping Field, parameterize it with a deep neural network, and learn it from data. We demonstrate that the proposed grasping field is an effective and expressive representation for human grasp generation. Specifically, our generative model is able to synthesize high-quality human grasps given only a 3D object point cloud. Extensive experiments demonstrate that our generative model compares favorably with a strong baseline and approaches the level of natural human grasps. Our method improves the physical plausibility of hand-object contact reconstruction and achieves performance comparable to state-of-the-art methods for 3D hand reconstruction.
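A minimal Python sketch of this representation (illustrative only; names, dimensions, and the thresholding rule are our assumptions, not the released code) maps a 3D query point, conditioned on an object shape code, to two signed distances, and treats points close to both surfaces as the contact region:

    # Sketch: a network maps a 3D query point to (signed distance to hand,
    # signed distance to object); points near both surfaces approximate contact.
    import torch
    import torch.nn as nn

    class GraspingFieldMLP(nn.Module):
        def __init__(self, cond_dim=128, hidden=256):
            super().__init__()
            # conditioned on an object shape code (assumed to come from a point-cloud encoder)
            self.mlp = nn.Sequential(
                nn.Linear(3 + cond_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, 2),  # two signed distances per query point
            )

        def forward(self, query_points, object_code):
            code = object_code.expand(query_points.shape[0], -1)
            return self.mlp(torch.cat([query_points, code], dim=-1))

    field = GraspingFieldMLP()
    queries = torch.rand(4096, 3) * 2 - 1
    sdf = field(queries, torch.zeros(1, 128))
    contact_mask = (sdf.abs() < 0.01).all(dim=-1)  # near both surfaces -> likely contact
    print(sdf.shape, int(contact_mask.sum()))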
Three-dimensional human body models are widely used in the analysis of human pose and motion. Existing models, however, are learned from minimally-clothed 3D scans and thus do not generalize to the complexity of dressed people in common images and videos. Additionally, current models lack the expressive power needed to represent the complex non-linear geometry of pose-dependent clothing shapes. To address this, we learn a generative 3D mesh model of clothed people from 3D scans with varying pose and clothing. Specifically, we train a conditional Mesh-VAE-GAN to learn the clothing deformation from the SMPL body model, making clothing an additional term in SMPL. Our model is conditioned on both pose and clothing type, giving the ability to draw samples of clothing to dress different body shapes in a variety of styles and poses. To preserve wrinkle detail, our Mesh-VAE-GAN extends patchwise discriminators to 3D meshes. Our model, named CAPE, represents global shape and fine local structure, effectively extending the SMPL body model to clothing. To our knowledge, this is the first generative model that directly dresses 3D human body meshes and generalizes to different poses. The model, code and data are available for research purposes at https://cape.is.tue.mpg.de.
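The additive formulation can be sketched in a few lines of Python (a plain MLP decoder used purely for illustration; the paper itself uses a Mesh-VAE-GAN with patchwise discriminators on the mesh, and the clothing-type count below is a placeholder): clothing is a per-vertex displacement on the SMPL template, conditioned on pose and clothing type, so a dressed body is simply the body vertices plus the decoded offsets:

    # Sketch: decode per-vertex clothing displacements from a latent code, the
    # pose, and a one-hot clothing type, then add them to the SMPL body vertices.
    import torch
    import torch.nn as nn

    NUM_VERTS = 6890          # SMPL vertex count
    POSE_DIM = 72             # SMPL axis-angle pose parameters
    NUM_CLO_TYPES = 4         # placeholder number of clothing types

    class ClothingDecoder(nn.Module):
        def __init__(self, latent_dim=64, hidden=512):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(latent_dim + POSE_DIM + NUM_CLO_TYPES, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, NUM_VERTS * 3),
            )

        def forward(self, z, pose, clothing_type):
            disp = self.net(torch.cat([z, pose, clothing_type], dim=-1))
            return disp.view(-1, NUM_VERTS, 3)    # per-vertex clothing offsets

    decoder = ClothingDecoder()
    z = torch.randn(1, 64)                        # sample from the latent prior
    pose = torch.zeros(1, POSE_DIM)
    clo = torch.eye(NUM_CLO_TYPES)[None, 0]       # one-hot clothing type
    body_vertices = torch.zeros(1, NUM_VERTS, 3)  # stand-in for SMPL output
    dressed = body_vertices + decoder(z, pose, clo)
    print(dressed.shape)  # torch.Size([1, 6890, 3])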
When using atom-centered integration grids, the portion of the grid that belongs to a certain atom also moves when this atom is displaced. In this paper, we investigate the moving-grid effect in the calculation of harmonic vibrational frequencies when all-electron full-potential numeric atom-centered orbitals are used as the basis set. We find that, unlike for the first-order derivatives (i.e., forces), the moving-grid effect plays an essential role for the second-order derivatives (i.e., vibrational frequencies). Further analysis reveals that it is predominantly the diagonal force-constant terms that are affected, which can be bypassed efficiently by invoking translational symmetry. Our approach is demonstrated for both finite (molecular) and extended (periodic) systems.
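The translational-symmetry shortcut referred to here is the standard acoustic-sum-rule identity: the diagonal 3x3 force-constant blocks, which are the ones most affected by the moving-grid error, can be reconstructed from the off-diagonal blocks so that a rigid translation of all atoms produces zero force. The small Python sketch below is a generic illustration of that identity, not the authors' implementation:

    # Sketch: enforce sum_j Phi(i, j) = 0 by rebuilding each diagonal block from
    # minus the sum of the off-diagonal blocks in the same row.
    import numpy as np

    def apply_acoustic_sum_rule(fc):
        # fc: force constants of shape (natoms, natoms, 3, 3)
        fc = fc.copy()
        natoms = fc.shape[0]
        for i in range(natoms):
            off_diag = fc[i].sum(axis=0) - fc[i, i]   # sum over all j != i
            fc[i, i] = -off_diag                      # enforce sum_j fc[i, j] = 0
        return fc

    # toy example: a random force-constant array for 3 atoms with Phi_ij = Phi_ji^T
    rng = np.random.default_rng(0)
    fc = rng.normal(size=(3, 3, 3, 3))
    fc = fc + fc.transpose(1, 0, 3, 2)
    fc_asr = apply_acoustic_sum_rule(fc)
    print(np.abs(fc_asr.sum(axis=1)).max())           # ~0: translations give zero force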
Recently, a new type of two-dimensional layered material, C3N, has been fabricated by polymerization of 2,3-diaminophenazine and used to build a field-effect transistor device with an on/off current ratio reaching 5.5E10 (Adv. Mater. 2017, 1605625). Here we perform a comprehensive first-principles study of the mechanical and electronic properties of C3N and related derivatives. Ab initio molecular dynamics simulations show that the C3N monolayer can withstand temperatures up to 2000 K. Besides its high stability, C3N is predicted to be an exceptionally stiff material with a high Young's modulus (1090.0 GPa), comparable to or even higher than that of graphene (1057.7 GPa). By rolling the C3N nanosheet up into the corresponding nanotubes, out-of-plane bending deformation is also investigated. The calculations indicate that the C3N nanosheet possesses a fascinating bending Poisson's effect, namely bending-induced lateral contraction. Further investigation shows that most of the corresponding nanotubes also exhibit high Young's moduli and semiconducting properties. In addition, the electronic properties of few-layer C3N nanosheets are also investigated. The C3N monolayer is predicted to be an indirect-gap semiconductor (1.09 eV) with strongly polar covalent bonds, while multi-layer C3N with AD stacking is metallic. Owing to its high stability, suitable band gap, and superior mechanical strength, the C3N nanosheet is an ideal candidate for high-strength nano-electronic device applications.
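For context, an in-plane Young's modulus of this kind is commonly extracted by fitting the total energy against uniaxial strain. The Python sketch below illustrates only the arithmetic: the strain energies, cell area, and effective thickness are placeholder values invented for illustration, not results from the paper:

    # Sketch: fit E(strain) with a parabola, convert the curvature per unit area
    # to a 2D stiffness (N/m), and divide by an assumed thickness to quote GPa.
    import numpy as np

    strains = np.linspace(-0.02, 0.02, 9)                   # applied uniaxial strains
    energies = 0.5 * 106.0 * strains**2                     # placeholder E(strain) in eV
    area0 = 5.0                                             # equilibrium cell area, Angstrom^2 (placeholder)
    thickness = 3.35e-10                                    # assumed effective layer thickness, m

    curvature = 2.0 * np.polyfit(strains, energies, 2)[0]   # d2E/d(strain)^2 in eV
    EV_TO_J = 1.602176634e-19
    A2_TO_M2 = 1e-20
    c2d = curvature * EV_TO_J / (area0 * A2_TO_M2)          # 2D stiffness, N/m
    youngs_modulus_gpa = c2d / thickness / 1e9
    print(f"in-plane stiffness: {c2d:.1f} N/m, Young's modulus: {youngs_modulus_gpa:.0f} GPa")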
Jun Dai, Zhenyu Li, Jinlong Yang (2008)
We report a systematic first-principles study of the recently discovered superconducting Ba$_{1-x}$K$_x$Fe$_2$As$_2$ systems ($x$ = 0.00, 0.25, 0.50, 0.75, and 1.00). Previous theoretical studies strongly overestimated the magnetic moment on Fe in the parent compound BaFe$_2$As$_2$. Using a negative on-site energy $U$, we obtain a magnetic moment of 0.83 $\mu_B$ per Fe, which agrees well with the experimental value (0.87 $\mu_B$). K doping tends to increase the density of states at the Fermi level. The magnetic instability is enhanced at light doping and is then weakened as the doping level increases. The energetics of the different K doping sites are also discussed.