Juan Wang, Rui Kang, Tao Xing (2021)
A compact HTS cable capable of carrying a large current density is crucial for developing high-field accelerator magnets. We report a novel HTS cable (named X-cable) that can achieve a current density as high as that of the Roebel cable, but is implemented by directly in-plane bending stacked HTS tapes to realize the transposition. The cable is jointly developed with an industrial company on a production line, so it is ready for large-scale production from the beginning. Recently, a prototype cable with REBCO coated conductor has been successfully fabricated. Test results show no significant degradation, demonstrating the feasibility of the cable concept. In this paper, the cable's design concept, the in-plane bending performance of the REBCO tapes, the fabrication procedure, and the test results of this first prototype cable are presented.
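As a rough orientation for the in-plane bending performance mentioned above (the abstract gives no numbers, so the symbols and the numerical example below are generic, not taken from the paper), the peak longitudinal strain at the tape edges under pure in-plane bending follows from simple geometry:

```latex
% First-order edge strain for in-plane ("hard-way") bending of a flat tape of width w
% around a centerline radius R: outer edge at R + w/2, inner edge at R - w/2.
\varepsilon_{\mathrm{edge}} \;\approx\; \pm\,\frac{w}{2R}
% e.g. a 4 mm wide REBCO tape bent in-plane to R = 500 mm sees roughly +/-0.4% edge strain,
% which is the quantity the tape bending tests must keep below the conductor's critical strain.
```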
Rui Kang, Juan Wang, Qingjin Xu (2021)
Protecting a high-temperature superconducting (HTS) magnet from a quench event is a challenging task. Because of the slow normal zone propagation velocity, the long-established quench detection method of directly monitoring the coil voltage may no longer be timely for HTS, leaving HTS magnets in danger of overheating. Using a NbTi low-temperature superconducting (LTS) wire to detect quenches in coils wound with ReBCO HTS tapes has recently been demonstrated experimentally, yet a theoretical study is still needed to further develop this technique and prepare it for more general application in high-field magnets. In this manuscript, we demonstrate that it is the significant difference in the temperature dependence of the critical current between LTS and HTS, not the normal zone propagation velocity (NZPV), that makes LTS wires good quench detectors. Simulations show that LTS quench detectors should have a low matrix fraction or high matrix resistivity. Finally, at fields up to 15 T or 20 T, Nb3Sn is shown to be a good quench detector.
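A schematic way to see the argument about the critical-current temperature dependence (the linearized model below is illustrative, not necessarily the one used in the manuscript):

```latex
% Linearized critical current and the resulting current-sharing temperature T_cs
% at which the detection wire starts to generate voltage:
I_c(T,B) \;\approx\; I_c(T_0,B)\,\frac{T_c(B)-T}{T_c(B)-T_0},
\qquad
T_{cs} \;=\; T_0 + \bigl(T_c(B)-T_0\bigr)\Bigl(1-\frac{I_{op}}{I_c(T_0,B)}\Bigr).
% Because T_c of NbTi (~9 K) or Nb3Sn (~18 K) is far below that of ReBCO (~90 K),
% the LTS wire reaches T_cs after a much smaller temperature rise than the HTS coil,
% so it produces a detectable resistive voltage early, independently of the slow NZPV.
```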
In this work, we revisit the isospin violating decays of $X(3872)$ in a coupled-channel effective field theory. In the molecular scheme, the $X(3872)$ is interpreted as the bound state of the $\bar{D}^{*0}D^0/\bar{D}^0D^{*0}$ and $D^{*-}D^+/D^-D^{*+}$ channels. In a cutoff-independent formalism, we relate the coupling constants of $X(3872)$ with the two channels to the molecular wave function. The isospin violating decays of $X(3872)$ are obtained by two equivalent approaches, which amend some deficiencies on this issue in the literature. In the quantum field theory approach, the isospin violating decays arise from the coupling constants of $X(3872)$ to the two di-meson channels. In the quantum mechanics approach, the isospin violation is attributed to the wave functions at the origin. We illustrate how to cure the insufficient results in the literature. With this comprehensive analysis, we bridge the isospin violating decays of $X(3872)$ to its inner structure. Our results show that the proportion of the neutral channel in $X(3872)$ is over $80\%$. As a by-product, we calculate the strong decay width of $X(3872)\to \bar{D}^0 D^0\pi^0$ and the radiative one $X(3872)\to \bar{D}^0 D^0\gamma$. The strong and radiative decay widths are about 30 keV and 10 keV, respectively, for binding energies from $-300$ keV to $-50$ keV.
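One common cutoff-independent relation between couplings and the molecular wave function, shown here schematically in a nonrelativistic normalization (the paper's exact conventions may differ):

```latex
% For a shallow bound state in channels i = 1,2 with binding momenta \gamma_i = \sqrt{2\mu_i E_i}
% and channel probabilities c_i^2 (c_1^2 + c_2^2 = 1 for a purely molecular state),
% the residue of the T-matrix at the pole gives
g_i^2 \;=\; \frac{2\pi\,\gamma_i}{\mu_i^2}\,c_i^2 ,
% so the ratio g_1^2/g_2^2, and hence the isospin-violating decay pattern,
% is fixed by the wave-function composition rather than by a regulator.
```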
We introduce the task of open-vocabulary visual instance search (OVIS). Given an arbitrary textual search query, OVIS aims to return a ranked list of visual instances, i.e., image patches, that satisfy the search intent, from an image database. The term open vocabulary means that there are neither restrictions on the visual instances to be searched nor restrictions on the words that can be used to compose the textual search query. We propose to address this search challenge via visual-semantic aligned representation learning (ViSA). ViSA leverages massive image-caption pairs as weak image-level (not instance-level) supervision to learn a rich cross-modal semantic space in which the representations of visual instances (not images) and those of textual queries are aligned, thus allowing us to measure the similarity between any visual instance and an arbitrary textual query. To evaluate the performance of ViSA, we build two datasets named OVIS40 and OVIS1600 and also introduce a pipeline for error analysis. Through extensive experiments on the two datasets, we demonstrate ViSA's ability to search for visual instances in images not available during training, given a wide range of textual queries including those composed of uncommon words. Experimental results show that ViSA achieves an mAP@50 of 21.9% on OVIS40 under the most challenging setting and an mAP@6 of 14.9% on the OVIS1600 dataset.
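A minimal sketch of how search would work once instance and query embeddings live in the aligned space; the encoders themselves are not described in the abstract, so the function below only assumes pre-computed features:

```python
import torch
import torch.nn.functional as F

def rank_instances(patch_features: torch.Tensor,   # (N, D) embeddings of candidate image patches
                   query_feature: torch.Tensor     # (D,)  embedding of the textual query
                   ) -> torch.Tensor:
    """Return patch indices sorted by cosine similarity to the query in the shared space.
    A generic retrieval sketch, not ViSA's released implementation."""
    patches = F.normalize(patch_features, dim=-1)
    query = F.normalize(query_feature, dim=-1)
    scores = patches @ query                        # cosine similarity per patch
    return torch.argsort(scores, descending=True)
```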
Very recently, the LHCb Collaboration reported the doubly charmed tetraquark state $T_{cc}^+$ about $273$ keV below the $D^{*+}D^0$ threshold. As a very near-threshold state, its long-distance structure is very important. In the molecular scheme, we relate the coupling constants of $T_{cc}^+$ with the $D^{*0}D^+$ and $D^{*+}D^0$ channels to its binding energy and the mixing angle of the two components within a coupled-channel effective field theory. With these coupling constants, we investigate the kinematically allowed strong decays $T_{cc}^+\to D^0D^0\pi^+$ and $T_{cc}^+\to D^+D^0\pi^0$ and the radiative decay $T_{cc}^+\to D^+D^0\gamma$. Our results show that the decay width of $T_{cc}^+\to D^0D^0\pi^+$ is the largest, which is exactly the channel of the experimental observation. Our theoretical total strong and radiative widths favor the $T_{cc}^+$ as a $|D^{*+}D^0\rangle$-dominated bound state. The total strong and radiative widths in the single-channel limit and the isospin-singlet limit are $59.7^{+4.6}_{-4.4}~\text{keV}$ and $46.7^{+2.7}_{-2.9}~\text{keV}$, respectively. Our calculation is cutoff-independent and without a prior isospin assignment. The absolute partial widths and the ratios of the different decay channels can be used to test the structure of the $T_{cc}^+$ state when updated experimental results become available.
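A quick numerical illustration of why the two components are treated separately: the two thresholds sit about 1.4 MeV apart, so the binding momenta of the $D^{*+}D^0$ and $D^{*0}D^+$ components differ substantially. The masses below are approximate PDG values used only for this estimate, not numbers from the paper:

```python
import math

# Approximate PDG masses in MeV (illustrative values).
m_D0, m_Dp = 1864.84, 1869.66
m_Dst0, m_Dstp = 2006.85, 2010.26

thr_neutral = m_Dstp + m_D0      # D*+ D0 threshold
thr_charged = m_Dst0 + m_Dp      # D*0 D+ threshold
E_B = 0.273                      # MeV below the D*+ D0 threshold (central value quoted above)

def binding_momentum(m1, m2, binding_mev):
    mu = m1 * m2 / (m1 + m2)                     # reduced mass
    return math.sqrt(2.0 * mu * binding_mev)     # gamma = sqrt(2 mu E_B), in MeV

gamma_neutral = binding_momentum(m_Dstp, m_D0, E_B)
gamma_charged = binding_momentum(m_Dst0, m_Dp, thr_charged - (thr_neutral - E_B))

print(f"threshold gap: {thr_charged - thr_neutral:.2f} MeV")                       # ~1.4 MeV
print(f"gamma(D*+D0) ~ {gamma_neutral:.0f} MeV, gamma(D*0D+) ~ {gamma_charged:.0f} MeV")  # ~23 vs ~57 MeV
```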
This paper revisits human-object interaction (HOI) recognition at the image level without using supervision of object locations or human pose. We name it detection-free HOI recognition, in contrast to the existing detection-supervised approaches, which rely on object and keypoint detections to achieve state of the art. With our method, not only is detection supervision avoidable, but superior performance can be achieved by properly using image-text pre-training (such as CLIP) and the proposed Log-Sum-Exp Sign (LSE-Sign) loss function. Specifically, using text embeddings of class labels to initialize the linear classifier is essential for leveraging the CLIP pre-trained image encoder. In addition, the LSE-Sign loss facilitates learning from multiple labels on an imbalanced dataset by normalizing gradients over all classes in a softmax format. Surprisingly, our detection-free solution achieves 60.5 mAP on the HICO dataset, outperforming the detection-supervised state of the art by 13.4 mAP.
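A sketch of one plausible form of such a loss, consistent with the description above (the exact definition in the paper may differ): taking L = log(1 + sum_c exp(-y_c s_c)) with y_c in {+1, -1} makes the per-class gradients softmax-normalized over all classes.

```python
import torch

def lse_sign_loss(logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
    """
    Hypothetical Log-Sum-Exp Sign style multi-label loss (a guess consistent with
    the abstract, not necessarily the paper's exact formulation).
    logits:  (B, C) class scores s_c
    targets: (B, C) multi-hot labels in {0, 1}
    """
    signs = 2.0 * targets - 1.0                               # map {0, 1} -> {-1, +1}
    zeros = torch.zeros(logits.shape[0], 1, device=logits.device)
    # log(1 + sum_c exp(-y_c s_c)) = logsumexp over [0, -y_1 s_1, ..., -y_C s_C];
    # the gradient w.r.t. each score is then normalized by the same denominator.
    stacked = torch.cat([zeros, -signs * logits], dim=1)
    return torch.logsumexp(stacked, dim=1).mean()
```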
A novel framework is proposed to extract near-threshold resonant states from finite-volume energy levels of lattice QCD and is applied to elucidate the structures of the positive-parity $D_s$ states. The quark model, the quark-pair-creation mechanism, and the $D^{(*)}K$ interaction are incorporated into Hamiltonian effective field theory. The bare $1^+$ $c\bar{s}$ states are almost purely given by the states in the heavy-quark spin basis. The physical $D^*_{s0}(2317)$ and $D^*_{s1}(2460)$ are mixtures of a bare $c\bar{s}$ core and a $D^{(*)}K$ component, while the $D^*_{s1}(2536)$ and $D^*_{s2}(2573)$ are dominated by the bare $c\bar{s}$ states. Furthermore, our model reproduces the clear level crossing of the $D^*_{s1}(2536)$ with the scattering state in a finite volume.
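The generic structure of such a Hamiltonian-EFT setup, written schematically (the specific couplings, potentials and channel content used in the paper are model inputs not reproduced here):

```latex
% Bare quark-model states \alpha coupled to two-meson channels i:
H = H_0 + H_I,\qquad
H_0 = \sum_{\alpha}|\alpha\rangle\, m_\alpha\,\langle\alpha|
      + \sum_i \int \mathrm{d}^3k\, |i(\boldsymbol{k})\rangle\, E_i(k)\,\langle i(\boldsymbol{k})| ,
\qquad
\langle i(\boldsymbol{k})|H_I|\alpha\rangle = g_{i\alpha}(k), \quad
\langle i(\boldsymbol{k})|H_I|j(\boldsymbol{k}')\rangle = v_{ij}(k,k').
% In a cubic box of size L the momenta are quantized as k = (2\pi/L)\sqrt{n}, n = 0,1,2,\dots,
% and the eigenvalues of the resulting finite Hamiltonian matrix are matched to the lattice levels.
```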
Knowledge representation learning has received a lot of attention in the past few years. The success of existing methods heavily relies on the quality of knowledge graphs, and entities with few triplets tend to be learned with less expressive power. Fortunately, there are many knowledge graphs constructed from various sources, whose representations could contain much information. We propose an adversarial embedding transfer network, ATransN, which transfers knowledge from one or more teacher knowledge graphs to a target one through an aligned entity set without explicit data leakage. Specifically, we add soft constraints on aligned entity pairs and their neighbours to the existing knowledge representation learning methods. To handle the problem of possible distribution differences between teacher and target knowledge graphs, we introduce an adversarial adaptation module. The discriminator of this module evaluates the degree of consistency between the embeddings of an aligned entity pair, and the consistency score is then used as the weight of the soft constraints. It is not necessary to acquire the relations and triplets in the teacher knowledge graphs because we only utilize the entity representations. Knowledge graph completion results show that ATransN achieves better performance than baselines without transfer on three datasets, CN3l, WK3l, and DWY100k. The ablation study demonstrates that ATransN brings steady and consistent improvements in different settings. The extension to combining other knowledge graph embedding algorithms and the extension to three teacher graphs demonstrate the promising generalization of the adversarial transfer network.
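A minimal sketch of the consistency-weighted soft constraint described above; the discriminator architecture and the exact loss form are assumptions for illustration, not ATransN's concrete design:

```python
import torch

def weighted_alignment_loss(target_emb: torch.Tensor,   # (B, D) target-graph embeddings of aligned entities
                            teacher_emb: torch.Tensor,  # (B, D) teacher-graph embeddings of the same entities
                            discriminator: torch.nn.Module) -> torch.Tensor:
    """Weight a soft L2 transfer constraint by how consistent the discriminator
    judges each aligned embedding pair to be (sketch only)."""
    with torch.no_grad():
        # consistency score in [0, 1] for each aligned pair
        w = torch.sigmoid(discriminator(torch.cat([target_emb, teacher_emb], dim=-1))).squeeze(-1)
    distance = ((target_emb - teacher_emb) ** 2).sum(dim=-1)
    return (w * distance).mean()   # added on top of the base KG embedding objective
```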
We present a graph-convolution-reinforced transformer, named Mesh Graphormer, for 3D human pose and mesh reconstruction from a single image. Recently, both transformers and graph convolutional neural networks (GCNNs) have shown promising progress in human mesh reconstruction. Transformer-based approaches are effective in modeling non-local interactions among 3D mesh vertices and body joints, whereas GCNNs are good at exploiting neighborhood vertex interactions based on a pre-specified mesh topology. In this paper, we study how to combine graph convolutions and self-attention in a transformer to model both local and global interactions. Experimental results show that our proposed method, Mesh Graphormer, significantly outperforms the previous state-of-the-art methods on multiple benchmarks, including the Human3.6M, 3DPW, and FreiHAND datasets. Code and pre-trained models are available at https://github.com/microsoft/MeshGraphormer
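A minimal sketch of the general idea of interleaving self-attention (global interactions) with a graph convolution over a fixed mesh adjacency (local interactions); layer ordering, normalization and dimensions are illustrative and not the released Mesh Graphormer architecture (see the repository linked above for that):

```python
import torch
import torch.nn as nn

class GraphormerBlock(nn.Module):
    """Self-attention followed by one graph-convolution step over a pre-specified mesh topology."""
    def __init__(self, dim: int, heads: int, adj: torch.Tensor):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)
        self.proj = nn.Linear(dim, dim)
        # row-normalized adjacency (with self-loops) of the mesh vertices and joints
        self.register_buffer("adj", adj / adj.sum(dim=-1, keepdim=True))

    def forward(self, x: torch.Tensor) -> torch.Tensor:   # x: (B, N_tokens, dim)
        h, _ = self.attn(x, x, x)
        x = self.norm1(x + h)                  # global, non-local interactions
        g = self.proj(self.adj @ x)            # aggregate features from mesh neighbours
        return self.norm2(x + torch.relu(g))   # local interactions via the mesh topology
```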
This paper presents a detection-aware pre-training (DAP) approach, which leverages only weakly-labeled, classification-style datasets (e.g., ImageNet) for pre-training but is specifically tailored to benefit object detection tasks. In contrast to the widely used image-classification-based pre-training (e.g., on ImageNet), which does not include any location-related training task, we transform a classification dataset into a detection dataset through a weakly supervised object localization method based on Class Activation Maps and directly pre-train a detector, making the pre-trained model location-aware and capable of predicting bounding boxes. We show that DAP outperforms the traditional classification pre-training in terms of both sample efficiency and convergence speed on downstream detection tasks, including VOC and COCO. In particular, DAP boosts the detection accuracy by a large margin when the number of examples in the downstream task is small.
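A sketch of the weakly supervised localization step described above, reduced to its simplest form: threshold a class activation map and take the tight box around the activated region as a pseudo ground-truth box. The threshold value and the single-box simplification are assumptions, not the paper's exact procedure:

```python
import numpy as np

def cam_to_pseudo_box(cam: np.ndarray, rel_threshold: float = 0.5):
    """Convert one class activation map (H, W) into a pseudo bounding box
    (x_min, y_min, x_max, y_max), or None if nothing is activated."""
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)   # normalize to [0, 1]
    ys, xs = np.where(cam >= rel_threshold)                    # keep strongly activated pixels
    if len(xs) == 0:
        return None
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())
```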