
TailorGAN: Making User-Defined Fashion Designs

Published by: Lele Chen
Publication date: 2020
Research field: Informatics Engineering
Paper language: English





Attribute editing has become an important and emerging topic in computer vision. In this paper, we consider the following task: given a reference garment image A and another image B with a target attribute (collar/sleeve), generate a photo-realistic image that combines the texture from reference A and the new attribute from reference B. The highly convoluted attributes and the lack of paired data are the main challenges of this task. To overcome these limitations, we propose a novel self-supervised model to synthesize garment images with disentangled attributes (e.g., collar and sleeves) without paired data. Our method consists of a reconstruction learning step and an adversarial learning step. The model learns texture and location information through reconstruction learning, and its capability is then generalized to single-attribute manipulation through adversarial learning. Meanwhile, we compose a new dataset, named GarmentSet, with annotations of collar and sleeve landmarks on clean garment images. Extensive experiments on this dataset and on real-world samples demonstrate that our method synthesizes much better results than state-of-the-art methods in both quantitative and qualitative comparisons.
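As a rough illustration of the two-step training described above, the sketch below alternates a self-supervised reconstruction step (texture and attribute codes come from the same image, so no paired data is needed) with an adversarial step that recombines the texture code from image A with the attribute code from image B. All module names, architectures, and losses here are placeholder assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of a two-step (reconstruction + adversarial) training loop,
# with hypothetical placeholder networks; not the paper's actual architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):          # texture or attribute encoder (placeholder)
    def __init__(self, out_dim=128):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(3, 32, 4, 2, 1), nn.ReLU(),
                                 nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                 nn.Linear(32, out_dim))
    def forward(self, x):
        return self.net(x)

class Generator(nn.Module):        # decodes concatenated codes into an image
    def __init__(self, code_dim=256):
        super().__init__()
        self.fc = nn.Linear(code_dim, 64 * 8 * 8)
        self.up = nn.Sequential(nn.Upsample(scale_factor=8),
                                nn.Conv2d(64, 3, 3, 1, 1), nn.Tanh())
    def forward(self, z):
        return self.up(self.fc(z).view(-1, 64, 8, 8))

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(3, 32, 4, 2, 1), nn.LeakyReLU(0.2),
                                 nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                 nn.Linear(32, 1))
    def forward(self, x):
        return self.net(x)

enc_tex, enc_attr, gen, disc = Encoder(), Encoder(), Generator(), Discriminator()
opt_g = torch.optim.Adam(list(enc_tex.parameters()) + list(enc_attr.parameters())
                         + list(gen.parameters()), lr=2e-4)
opt_d = torch.optim.Adam(disc.parameters(), lr=2e-4)

def reconstruction_step(img):
    # Step 1: self-supervised reconstruction -- both codes come from the same
    # image, so no paired data is required.
    code = torch.cat([enc_tex(img), enc_attr(img)], dim=1)
    loss = F.l1_loss(gen(code), img)
    opt_g.zero_grad(); loss.backward(); opt_g.step()
    return loss.item()

def adversarial_step(img_a, img_b):
    # Step 2: combine texture from A with the attribute (e.g., collar) from B
    # and push the edited result toward realism with a discriminator.
    code = torch.cat([enc_tex(img_a), enc_attr(img_b)], dim=1)
    fake = gen(code)

    d_real, d_fake = disc(img_a), disc(fake.detach())
    d_loss = F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real)) \
           + F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    d_gen = disc(fake)
    g_loss = F.binary_cross_entropy_with_logits(d_gen, torch.ones_like(d_gen))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

imgs = torch.rand(4, 3, 64, 64) * 2 - 1   # dummy batch in [-1, 1]
print(reconstruction_step(imgs))
print(adversarial_step(imgs, imgs.flip(0)))
```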




Read also

We present Magic Layouts, a method for parsing screenshots or hand-drawn sketches of user interface (UI) layouts. Our core contribution is to extend existing detectors to exploit a learned structural prior for UI designs, enabling robust detection of UI components such as buttons and text boxes. Specifically, we learn a prior over mobile UI layouts, encoding common spatial co-occurrence relationships between different UI components. Conditioning region proposals on this prior leads to performance gains on UI layout parsing for both hand-drawn UIs and app screenshots, which we demonstrate within the context of an interactive application for rapidly acquiring digital prototypes of user experience (UX) designs.
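A toy sketch of the core idea, re-scoring detector proposals with a learned co-occurrence prior over UI component classes, is given below. The prior matrix, class list, and re-scoring rule are invented for illustration and are not the paper's exact formulation.

```python
# Illustrative re-scoring of detector proposals with a spatial co-occurrence
# prior over UI component classes; values and weighting rule are assumptions.
import numpy as np

CLASSES = ["button", "text_box", "image", "label"]

# prior[i, j]: assumed probability that class i co-occurs with class j
# (in the paper this prior is learned from mobile UI layouts).
prior = np.array([
    [0.2, 0.6, 0.1, 0.5],
    [0.6, 0.2, 0.1, 0.7],
    [0.1, 0.1, 0.3, 0.4],
    [0.5, 0.7, 0.4, 0.2],
])

def rescore(proposals):
    """proposals: list of (class_index, detector_score, (x, y, w, h))."""
    rescored = []
    for i, (ci, si, box_i) in enumerate(proposals):
        # support from the other proposals, weighted by the co-occurrence prior
        support = np.mean([prior[ci, cj] * sj
                           for j, (cj, sj, _) in enumerate(proposals) if j != i] or [0.0])
        rescored.append((ci, si * (1.0 + support), box_i))
    return rescored

dets = [(0, 0.7, (10, 10, 40, 20)),   # button
        (1, 0.5, (10, 40, 80, 20))]   # text box
for ci, s, box in rescore(dets):
    print(CLASSES[ci], round(s, 3), box)
```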
Qing Ping, Bing Wu, Wanying Ding (2019)
In this paper, we introduce attribute-aware fashion editing, a novel task, to the fashion domain. We re-define the overall objectives in AttGAN and propose the Fashion-AttGAN model for this new task. A dataset with 14,221 images and 22 attributes is constructed for this task and has been made publicly available. Experimental results show the effectiveness of our Fashion-AttGAN on fashion editing over the original AttGAN.
Wu Shi, Tak-Wai Hui, Ziwei Liu (2019)
Existing unconditional generative models mainly focus on modeling general objects, such as faces and indoor scenes. Fashion textures, another important type of visual element around us, have not been extensively studied. In this work, we propose an effective generative model for fashion textures and comprehensively investigate the key components involved: the internal representation, latent space sampling, and the generator architecture. We use the Gram matrix as a suitable internal representation for modeling realistic fashion textures, and further design two dedicated modules for modulating the Gram matrix into a low-dimensional vector. Since fashion textures are scale-dependent, we propose a recursive auto-encoder to capture the dependency between multiple granularity levels of texture features. Another important observation is that fashion textures are multi-modal. We fit and sample from a Gaussian mixture model in the latent space to improve the diversity of the generated textures. Extensive experiments demonstrate that our approach synthesizes more realistic and diverse fashion textures than other state-of-the-art methods.
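The sketch below illustrates two ingredients named in the abstract, the Gram-matrix texture representation and multi-modal latent sampling from a Gaussian mixture; dimensions and mixture parameters are placeholders, not the paper's architecture.

```python
# Gram-matrix texture statistics plus Gaussian-mixture latent sampling;
# shapes and mixture parameters are illustrative assumptions only.
import torch

def gram_matrix(feat):
    # feat: (B, C, H, W) feature map; returns normalized (B, C, C) Gram matrices
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

feat = torch.rand(2, 64, 32, 32)          # stand-in for CNN features
g = gram_matrix(feat)                      # (2, 64, 64) texture statistics
print(g.shape)

# Multi-modal latent sampling: draw codes from a Gaussian mixture rather than a
# single Gaussian, mirroring "fit and sample from a GMM in the latent space".
means = torch.tensor([[-2.0, 0.0], [2.0, 0.0], [0.0, 2.0]])  # 3 modes, 2-D latent
k = torch.randint(0, 3, (5,))              # pick a mixture component per sample
z = means[k] + 0.3 * torch.randn(5, 2)     # add per-component noise
print(z)
```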
Yun Ye, Yixin Li, Bo Wu (2019)
Fashion attribute classification is of great importance to many high-level tasks such as fashion item search, fashion trend analysis, and fashion recommendation. The task is challenging due to the extremely imbalanced data distribution, particularly for attributes with only a few positive samples. In this paper, we introduce a hard-aware pipeline to make full use of hard samples/attributes. We first propose Hard-Aware BackPropagation (HABP) to efficiently and adaptively focus training on hard data. Then, for the identified hard labels, we propose to synthesize more complementary samples for training. To stabilize training, we extend the semi-supervised GAN by directly deactivating outputs for synthetic complementary samples (Deact). In general, our method is more effective in addressing hard cases: HABP places more weight on hard samples, and for hard attributes with insufficient training data, Deact provides more stable synthetic samples and further improves performance. Our method is verified on a large-scale fashion dataset, outperforming other state-of-the-art methods without any additional supervision.
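The abstract does not spell out the HABP rule, but the general idea of weighting hard (high-loss) samples more heavily during backpropagation can be sketched as below; the specific weighting function and the 22-attribute setup are assumptions for illustration.

```python
# Hard-aware loss weighting sketch: harder samples get larger gradient weight.
# The exact HABP formulation is not given in the abstract; this is an assumption.
import torch
import torch.nn.functional as F

def hard_aware_loss(logits, targets, gamma=2.0):
    # per-sample multi-label BCE, then re-weight by relative difficulty
    per_sample = F.binary_cross_entropy_with_logits(
        logits, targets, reduction="none").mean(dim=1)
    weights = (per_sample.detach() / (per_sample.detach().mean() + 1e-8)) ** gamma
    return (weights * per_sample).mean()

logits = torch.randn(8, 22, requires_grad=True)   # 8 samples, 22 attributes
targets = torch.randint(0, 2, (8, 22)).float()
loss = hard_aware_loss(logits, targets)
loss.backward()
print(loss.item())
```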
Stimulated Raman adiabatic passage (STIRAP) is a widely used technique for coherent state-to-state manipulation with many applications in physics, chemistry, and beyond. The adiabatic evolution of the state involved in STIRAP, called adiabatic passage, guarantees its robustness against control errors, but also leads to problems of low efficiency and decoherence. Here we propose and experimentally demonstrate an alternative approach, termed stimulated Raman user-defined passage (STIRUP), in which a parameterized state is employed to construct desired evolutions that replace the adiabatic passage in STIRAP. The user-defined passages can be flexibly designed to optimize different objectives for different tasks, e.g., minimizing leakage error. To experimentally benchmark its performance, we apply STIRUP to the task of coherent state transfer in a superconducting Xmon qutrit. We find that STIRUP completes the transfer more than four times faster than STIRAP with enhanced robustness, and achieves a fidelity of 99.5%, the highest among all recent experiments based on STIRAP and its variants. In practice, STIRUP differs from STIRAP only in the design of the driving pulses; therefore, most existing applications of STIRAP can be readily implemented with STIRUP.
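For intuition only, the toy sketch below contrasts the standard STIRAP ingredient, a counterintuitively ordered pair of Gaussian pulses whose mixing angle sweeps the dark state from 0 to π/2, with the STIRUP-style idea of parameterizing the passage directly; the pulse shapes and parameters are illustrative assumptions, not the experiment's actual drive design.

```python
# Toy pulse-shape comparison: STIRAP's counterintuitive Gaussian pair versus a
# user-parameterized passage; all schedules here are illustrative assumptions.
import numpy as np

t = np.linspace(0.0, 1.0, 201)

def gaussian(t, t0, width):
    return np.exp(-((t - t0) ** 2) / (2 * width ** 2))

# STIRAP: Stokes pulse precedes the pump pulse; the mixing angle
# theta(t) = arctan(Omega_pump / Omega_stokes) sweeps slowly from 0 to pi/2.
stokes, pump = gaussian(t, 0.4, 0.12), gaussian(t, 0.6, 0.12)
theta_stirap = np.arctan2(pump, stokes)

# STIRUP-style idea: choose a parameterized passage theta(t; params) directly,
# e.g. a smooth polynomial ramp, and tune its parameters for speed/robustness.
theta_user = (np.pi / 2) * (3 * t**2 - 2 * t**3)   # hypothetical smooth ramp

print(theta_stirap[0], theta_stirap[-1])   # ~0 and ~pi/2
print(theta_user[0], theta_user[-1])       # exactly 0 and pi/2
```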