A new thixotropic model is developed by integrating the Papanastasiou-Bingham model with thixotropy equations to simulate the flow behaviour of Tremie Concrete in the Material Point Method framework. The effect of thixotropy on the rheological behaviour of fresh concrete is investigated by comparing field measurements with numerical simulations. The comparison yields new insights into a critical and often overlooked behaviour of concrete. A parametric study is performed to understand the effect of model parameters and rest time on the shear-stress response of fresh concrete. The Material Point Method with the Papanastasiou-Bingham model reproduces slump-flow measurements observed in the field. The novel model reveals a decline in concrete workability during the slump-flow test after a period of rest due to thixotropy, which the physical version of the test fails to capture. This reduction in workability significantly affects the flow behaviour and the effective use of fresh concrete in construction operations.
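
To make the constitutive idea concrete, the following is a minimal Python sketch of a Papanastasiou-regularized Bingham law coupled to a common structural-kinetics thixotropy equation. The structure parameter, buildup time, breakdown coefficient, and the linear coupling of structure to yield stress are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def papanastasiou_bingham_stress(gamma_dot, tau_y, mu_p, m):
    """Regularized Bingham stress: tau = mu_p*gdot + tau_y*(1 - exp(-m*gdot)).
    The Papanastasiou exponential removes the singularity at zero shear rate."""
    return mu_p * gamma_dot + tau_y * (1.0 - np.exp(-m * gamma_dot))

def evolve_structure(lam, gamma_dot, dt, t_build=100.0, alpha=0.01):
    """One explicit step of a common structural-kinetics law:
    dlam/dt = (1 - lam)/t_build - alpha*lam*gamma_dot
    (structure builds up at rest, breaks down under shear)."""
    dlam = (1.0 - lam) / t_build - alpha * lam * gamma_dot
    return np.clip(lam + dt * dlam, 0.0, 1.0)

# Example: yield stress grows with structure built up during rest (thixotropy).
tau_y0, mu_p, m = 50.0, 20.0, 1000.0    # hypothetical parameters (Pa, Pa.s, s)
lam = 0.0
for _ in range(6000):                    # 60 s of rest at ~zero shear rate
    lam = evolve_structure(lam, gamma_dot=1e-6, dt=0.01)
tau_y = tau_y0 * (1.0 + 2.0 * lam)       # assumed linear structure-to-yield-stress coupling
print(f"structure={lam:.2f}, rest-hardened yield stress={tau_y:.1f} Pa")
print("stress at gdot=1/s:", papanastasiou_bingham_stress(1.0, tau_y, mu_p, m), "Pa")
```
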
We introduce an inversion-based method, denoted as IMAge-Guided model INvErsion (IMAGINE), to generate high-quality and diverse images from only a single training sample. We leverage the knowledge of image semantics from a pre-trained classifier to achieve plausible generations by matching multi-level feature representations in the classifier, combined with adversarial training against an external discriminator. IMAGINE enables the synthesis procedure to simultaneously 1) enforce semantic specificity constraints during the synthesis, 2) produce realistic images without generator training, and 3) give users intuitive control over the generation process. With extensive experimental results, we demonstrate qualitatively and quantitatively that IMAGINE performs favorably against state-of-the-art GAN-based and inversion-based methods, across three different image domains (i.e., objects, scenes, and textures).
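
The core mechanics of inversion by multi-level feature matching can be sketched as follows. A torchvision VGG16 stands in for the pre-trained classifier, and the tap layers are hypothetical; IMAGINE's exact layer choices, regularizers, and the external-discriminator adversarial loss are omitted.

```python
import torch
import torch.nn.functional as F
from torchvision.models import vgg16

# Freeze a pre-trained classifier; only the synthesized image will be optimized.
classifier = vgg16(weights="IMAGENET1K_V1").features.eval()
for p in classifier.parameters():
    p.requires_grad_(False)
taps = {3, 8, 15, 22}  # hypothetical layer indices whose activations we match

def multi_level_feats(x):
    out = []
    for i, layer in enumerate(classifier):
        x = layer(x)
        if i in taps:
            out.append(x)
    return out

target = torch.rand(1, 3, 224, 224)           # stand-in for the single training image
target_feats = [f.detach() for f in multi_level_feats(target)]

synth = torch.rand(1, 3, 224, 224, requires_grad=True)  # image being synthesized
opt = torch.optim.Adam([synth], lr=0.05)
for _ in range(200):
    opt.zero_grad()
    loss = sum(F.mse_loss(f, t) for f, t in zip(multi_level_feats(synth), target_feats))
    loss.backward()
    opt.step()
```
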
We consider the novel task of learning disentangled representations of object shape and appearance across multiple domains (e.g., dogs and cars). The goal is a generative model that learns an intermediate distribution, which borrows a subset of properties from each domain, enabling the generation of images that did not exist in either domain alone. This challenging problem requires an accurate disentanglement of object shape, appearance, and background from each domain, so that the appearance and shape factors from the two domains can be interchanged. We augment an existing approach that can disentangle factors within a single domain but struggles to do so across domains. Our key technical contribution is to represent object appearance with a differentiable histogram of visual features, and to optimize the generator so that two images with the same latent appearance factor but different latent shape factors produce similar histograms. On multiple multi-domain datasets, we demonstrate that our method leads to accurate and consistent appearance and shape transfer across domains.
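
The differentiable histogram is the piece that makes this appearance constraint trainable end-to-end. Below is a minimal sketch using Gaussian soft-binning; the bin count, bandwidth, and feature source are illustrative assumptions rather than the paper's exact design.

```python
import torch

def soft_histogram(feats, bins=16, lo=0.0, hi=1.0, bandwidth=0.05):
    """Differentiable histogram: each feature value votes into every bin with a
    Gaussian weight, so gradients flow back to the generator."""
    centers = torch.linspace(lo, hi, bins).view(1, -1)            # (1, bins)
    x = feats.reshape(-1, 1)                                      # (N, 1)
    weights = torch.exp(-0.5 * ((x - centers) / bandwidth) ** 2)  # (N, bins)
    hist = weights.sum(dim=0)
    return hist / (hist.sum() + 1e-8)                             # normalize to a distribution

# Hypothetical usage: two generated images share the appearance code but differ in
# shape; penalize the distance between their feature histograms.
feats_a = torch.sigmoid(torch.randn(4096))  # stand-in for visual features of image A
feats_b = torch.sigmoid(torch.randn(4096))  # stand-in for visual features of image B
appearance_loss = torch.abs(soft_histogram(feats_a) - soft_histogram(feats_b)).sum()
```
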
Existing models often leverage co-occurrences between objects and their context to improve recognition accuracy. However, relying strongly on context risks a model's generalizability, especially when typical co-occurrence patterns are absent. This work focuses on addressing such contextual biases to improve the robustness of the learnt feature representations. Our goal is to accurately recognize a category in the absence of its context, without compromising on performance when it co-occurs with context. Our key idea is to decorrelate feature representations of a category from its co-occurring context. We achieve this by learning a feature subspace that explicitly represents categories occurring in the absence of context alongside a joint feature subspace that represents both categories and context. Our simple yet effective method is extensible to two multi-label tasks -- object and attribute classification. On four challenging datasets, we demonstrate the effectiveness of our method in reducing contextual bias.
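
A minimal sketch of the two-subspace idea follows: one learned projection represents categories without context, a second represents categories jointly with context, and an orthogonality penalty decorrelates them. Module names and dimensions are hypothetical, and the classification heads and training-data splits are omitted.

```python
import torch
import torch.nn as nn

feat_dim, sub_dim = 512, 128
proj_cat   = nn.Linear(feat_dim, sub_dim, bias=False)  # category-only subspace
proj_joint = nn.Linear(feat_dim, sub_dim, bias=False)  # joint category+context subspace

def orthogonality_penalty():
    """Encourage the two subspaces to carry non-overlapping directions:
    drive ||W_cat @ W_joint^T||_F^2 toward zero."""
    return (proj_cat.weight @ proj_joint.weight.t()).pow(2).sum()

backbone_feats = torch.randn(8, feat_dim)   # stand-in for backbone features
z_cat   = proj_cat(backbone_feats)          # would feed a classifier trained on
z_joint = proj_joint(backbone_feats)        # context-free vs. context-rich examples
loss = orthogonality_penalty()              # added to the two classification losses (omitted)
```
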
We present MixNMatch, a conditional generative model that learns to disentangle and encode background, object pose, shape, and texture from real images with minimal supervision, for mix-and-match image generation. We build upon FineGAN, an unconditional generative model, to learn the desired disentanglement and image generator, and leverage adversarial joint image-code distribution matching to learn the latent factor encoders. MixNMatch requires bounding boxes during training to model background, but requires no other supervision. Through extensive experiments, we demonstrate MixNMatch's ability to accurately disentangle, encode, and combine multiple factors for mix-and-match image generation, including sketch2color, cartoon2img, and img2gif applications. Our code/models/demo can be found at https://github.com/Yuheng-Li/MixNMatch
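
Adversarial joint image-code distribution matching can be sketched in the BiGAN/ALI style shown below, where a critic compares (real image, inferred code) pairs against (generated image, sampled code) pairs. All module shapes are toy placeholders, not MixNMatch's actual architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

img_dim, code_dim = 64 * 64 * 3, 32
G = nn.Sequential(nn.Linear(code_dim, img_dim), nn.Tanh())  # code -> image
E = nn.Linear(img_dim, code_dim)                            # image -> code (the encoder)
D = nn.Linear(img_dim + code_dim, 1)                        # critic on joint (image, code)

x = torch.rand(8, img_dim)      # real images (flattened)
z = torch.randn(8, code_dim)    # codes sampled from the prior
real_pair = D(torch.cat([x, E(x)], dim=1))  # (real image, inferred code)
fake_pair = D(torch.cat([G(z), z], dim=1))  # (generated image, sampled code)
d_loss = F.binary_cross_entropy_with_logits(real_pair, torch.ones_like(real_pair)) + \
         F.binary_cross_entropy_with_logits(fake_pair, torch.zeros_like(fake_pair))
# Matching these two joint distributions drives the encoder toward the inverse of G.
```
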
We propose a novel unsupervised generative model that learns to disentangle object identity from other low-level aspects in class-imbalanced data. We first investigate the issues surrounding the assumptions about uniformity made by InfoGAN, and demonstrate its inability to properly disentangle object identity in imbalanced data. Our key idea is to make the discovery of the discrete latent factor of variation invariant to identity-preserving transformations in real images, and use that as a signal to learn the appropriate latent distribution representing object identity. Experiments on both artificial (MNIST, 3D cars, 3D chairs, ShapeNet) and real-world (YouTube-Faces) imbalanced datasets demonstrate the effectiveness of our method in disentangling object identity as a latent factor of variation.
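
The invariance signal can be sketched as a consistency loss between the categorical posterior over the discrete latent factor for an image and for an identity-preserving transformation of it. The encoder architecture and the choice of transformation (a horizontal flip here) are placeholders, not the paper's exact setup.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

num_identities = 10
encoder = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32 * 3, num_identities))

x = torch.rand(8, 3, 32, 32)
x_aug = torch.flip(x, dims=[3])                   # identity-preserving transform

log_q     = F.log_softmax(encoder(x), dim=1)      # q(c | x)
log_q_aug = F.log_softmax(encoder(x_aug), dim=1)  # q(c | T(x))
# Penalize disagreement between the two categorical posteriors:
invariance_loss = F.kl_div(log_q_aug, log_q.exp(), reduction="batchmean")
```
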
In this paper, we describe a new scalable and modular material point method (MPM) code developed for solving large-scale problems in continuum mechanics. The MPM is a hybrid Eulerian-Lagrangian approach, which uses both moving material points and computational nodes on a background mesh. The MPM has been successfully applied to solve large-deformation problems such as landslides, failure of slopes, concrete flows, etc. Solving these large-deformation problems results in the material points actively moving through the mesh. Developing an efficient parallelisation scheme for the MPM code requires dynamic load-balancing techniques for both the material points and the background mesh. This paper describes the data structures and algorithms employed to improve the performance and portability of the MPM code. An object-oriented programming paradigm is adopted to modularise the MPM code. The Unified Modelling Language (UML) diagram of the MPM code structure is shown in Figure 1.
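
A minimal 1-D sketch of the particle-to-grid (P2G) transfer at the heart of the MPM is given below, using linear shape functions. It also illustrates why dynamic load balancing matters: the particle-to-node mapping changes every step as points move through the mesh. Grid size and point data are toy values, not taken from the paper.

```python
import numpy as np

# 1-D particle-to-grid transfer: material points scatter mass and momentum
# to the two background-mesh nodes of the cell that contains them.
n_nodes, dx = 11, 0.1
xp = np.array([0.23, 0.47, 0.52])   # material point positions
mp = np.array([1.0, 1.0, 1.0])      # point masses
vp = np.array([0.5, -0.2, 0.1])     # point velocities

mass_g = np.zeros(n_nodes)
mom_g = np.zeros(n_nodes)
for x, m, v in zip(xp, mp, vp):
    i = int(x / dx)                 # left node of the containing cell
    w_right = x / dx - i            # linear (tent) shape-function weights
    w_left = 1.0 - w_right
    mass_g[i] += w_left * m
    mass_g[i + 1] += w_right * m
    mom_g[i] += w_left * m * v
    mom_g[i + 1] += w_right * m * v

# Nodal velocities (guarding against empty nodes), used to solve momentum on the grid.
vel_g = np.divide(mom_g, mass_g, out=np.zeros_like(mom_g), where=mass_g > 0)
```
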
Recently observed magnetophonon resonances in the magnetoresistance of graphene are investigated using the Kubo formalism. This analysis provides a quantitative fit to the experimental data over a wide range of carrier densities. It demonstrates the predominance of carrier scattering by low-energy transverse acoustic (TA) mode phonons: the magnetophonon resonance amplitude is significantly stronger for the TA modes than for the longitudinal acoustic (LA) modes. We demonstrate that the LA and TA phonon speeds and the electron-phonon coupling strengths determined from the magnetophonon resonance measurements also provide an excellent fit to the measured dependence of the resistivity at zero magnetic field over a temperature range of 4-150 K. A semiclassical description of magnetophonon resonance in graphene is shown to provide a simple physical explanation for the dependence of the magneto-oscillation period on carrier density. The correspondence between the quantum calculation and the semiclassical model is discussed.
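
A compact statement of the semiclassical picture, in a standard form whose prefactors may differ from the paper's exact expressions: backscattering across the cyclotron orbit selects phonons of wavevector q = 2k_F, and resonances occur when the phonon frequency is an integer multiple p of the cyclotron frequency.

```latex
\[
  v_s\,(2k_F) = p\,\omega_c, \qquad
  \omega_c = \frac{e B v_F}{\hbar k_F}, \qquad
  k_F = \sqrt{\pi n},
\]
\[
  \Rightarrow\quad
  B_p = \frac{2\hbar v_s k_F^{2}}{p\, e\, v_F}
      = \frac{2\pi \hbar v_s}{e\, v_F}\,\frac{n}{p},
\]
```

so the resonances are periodic in 1/B with a spacing that scales as 1/n, which is the simple physical explanation for the dependence of the magneto-oscillation period on carrier density.
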
We propose FineGAN, a novel unsupervised GAN framework, which disentangles the background, object shape, and object appearance to hierarchically generate images of fine-grained object categories. To disentangle the factors without supervision, our key idea is to use information theory to associate each factor to a latent code, and to condition the relationships between the codes in a specific way to induce the desired hierarchy. Through extensive experiments, we show that FineGAN achieves the desired disentanglement to generate realistic and diverse images belonging to fine-grained classes of birds, dogs, and cars. Using FineGAN's automatically learned features, we also cluster real images as a first attempt at solving the novel problem of unsupervised fine-grained object category discovery. Our code/models/demo can be found at https://github.com/kkanshul/finegan
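
The two mechanisms can be sketched together in a few lines: an InfoGAN-style auxiliary head maximizes the mutual information between a latent code and the generated image, and the hierarchy is induced by deterministically grouping child (appearance) codes under parent (shape) codes. Sizes, modules, and the grouping rule below are hypothetical toy choices.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

n_parents, children_per_parent = 5, 4
n_children = n_parents * children_per_parent

child = torch.randint(n_children, (8,))
parent = child // children_per_parent   # hierarchy constraint: child codes tie to parents

G = nn.Sequential(nn.Linear(n_children, 64), nn.ReLU(), nn.Linear(64, 32 * 32))
Q_child = nn.Linear(32 * 32, n_children)   # auxiliary recognition head

img = G(F.one_hot(child, n_children).float())
# Recovering the code from the image maximizes I(child code; image), InfoGAN-style.
info_loss = F.cross_entropy(Q_child(img), child)
```
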
We propose Hide-and-Seek, a general-purpose data augmentation technique, which is complementary to existing data augmentation techniques and is beneficial for various visual recognition tasks. The key idea is to randomly hide patches in a training image, in order to force the network to seek other relevant content when the most discriminative content is hidden. Our approach only needs to modify the input image and can work with any network to improve its performance. During testing, it does not need to hide any patches. The main advantage of Hide-and-Seek over existing data augmentation techniques is its ability to improve object localization accuracy in the weakly-supervised setting, and we therefore use this task to motivate the approach. However, Hide-and-Seek is not tied only to the object localization task, and can generalize to other forms of visual input like videos, as well as other recognition tasks like image classification, temporal action localization, semantic segmentation, emotion recognition, age/gender estimation, and person re-identification. We perform extensive experiments to showcase the advantage of Hide-and-Seek on these various visual recognition problems.
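
The augmentation itself is only a few lines of code. The sketch below partitions the image into a grid and hides each patch independently at training time; the grid size, hide probability, and fill value are illustrative defaults (the paper's fill choice, e.g. a dataset mean pixel, may differ).

```python
import torch

def hide_patches(img, grid=4, p_hide=0.5, fill=0.0):
    """Hide-and-Seek augmentation sketch: split a (C, H, W) image into a grid of
    patches and hide each patch independently with probability p_hide.
    Applied only during training; test images are left untouched."""
    _, h, w = img.shape
    ph, pw = h // grid, w // grid
    out = img.clone()
    for i in range(grid):
        for j in range(grid):
            if torch.rand(1).item() < p_hide:
                out[:, i * ph:(i + 1) * ph, j * pw:(j + 1) * pw] = fill
    return out

augmented = hide_patches(torch.rand(3, 224, 224))
```
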