Chao-Kai Li, Xu-Ping Yao, 2021
The type-II terminated 1T-TaS$_2$ surface of a three-dimensional 1T-TaS$_2$ bulk material realizes an effective spin-1/2 degree of freedom on each David-star cluster with ${T^2=-1}$, such that time-reversal symmetry is realized anomalously, even though the bulk three-dimensional 1T-TaS$_2$ material has an even number of electrons per unit cell with ${T^2=+1}$. This surface can effectively be viewed as a spin-1/2 triangular-lattice magnet, except with a symmetry-protected topological bulk. We further propose that this surface termination realizes a spinon Fermi surface spin liquid with surface fractionalization but a non-exotic three-dimensional bulk. We analyze possible experimental consequences of the type-II terminated surface spin liquid.
In distributed applications, Brewer's CAP theorem tells us that when networks become partitioned, there is a tradeoff between consistency and availability. Consistency is agreement on the values of shared variables across a system, and availability is the ability to respond to reads and writes accessing those shared variables. We quantify these concepts, giving numerical values to inconsistency and unavailability. Recognizing that network partitioning is not an all-or-nothing proposition, we replace the P in CAP with L, a numerical measure of apparent latency, and derive the CAL theorem, an algebraic relation between inconsistency, unavailability, and apparent latency. This relation shows that if latency becomes unbounded (e.g., the network becomes partitioned), then one of inconsistency and unavailability must also become unbounded, and hence the CAP theorem is a special case of the CAL theorem. We describe two distributed coordination mechanisms, which we have implemented as an extension of the Lingua Franca coordination language, that support arbitrary tradeoffs between consistency and availability as apparent latency varies. With centralized coordination, inconsistency remains bounded by a chosen numerical value at the cost that unavailability becomes unbounded under network partitioning. With decentralized coordination, unavailability remains bounded by a chosen numerical quantity at the cost that inconsistency becomes unbounded under network partitioning. Our centralized coordination mechanism is an extension of techniques that have historically been used for distributed simulation, an application where consistency is paramount. Our decentralized coordination mechanism is an extension of techniques that have been used in distributed databases when availability is paramount.
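The tradeoff this abstract describes can be illustrated with a toy model (a sketch under simplified assumptions, not the paper's Lingua Franca mechanisms): a replica that waits for a remote update bounds inconsistency but its response delay grows with network latency, while a replica that answers immediately bounds unavailability but the staleness of its answer grows with latency.

```python
# Toy illustration of the consistency/availability tradeoff as latency grows.
# All names and the plant model here are illustrative, not from the paper.

def centralized(read_time, remote_write_time, network_latency):
    """Wait for the remote write to arrive before answering:
    inconsistency stays zero, but the response delay (unavailability)
    grows without bound as latency grows."""
    arrival = remote_write_time + network_latency
    respond_at = max(read_time, arrival)
    unavailability = respond_at - read_time  # grows with latency
    inconsistency = 0.0                      # bounded by construction
    return inconsistency, unavailability

def decentralized(read_time, remote_write_time, network_latency):
    """Answer immediately: unavailability stays zero, but the answer may
    reflect a stale state whose age (inconsistency) grows with latency."""
    arrival = remote_write_time + network_latency
    inconsistency = max(0.0, arrival - read_time)  # grows with latency
    unavailability = 0.0                            # bounded by construction
    return inconsistency, unavailability

for latency in (0.1, 1.0, 1000.0):  # a partition behaves like unbounded latency
    print(latency, centralized(5.0, 4.9, latency), decentralized(5.0, 4.9, latency))
```

As latency tends to infinity, exactly one of the two quantities diverges in each mode, mirroring the CAL-theorem statement that the CAP tradeoff is the limiting case.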
Named Entity Recognition (NER) and Relation Extraction (RE) are the core sub-tasks of information extraction. Many recent works formulate these two tasks as the span (pair) classification problem, and thus focus on investigating how to obtain a better span representation from the pre-trained encoder. However, a major limitation of existing works is that they ignore the dependencies between spans (pairs). In this work, we propose a novel span representation approach, named Packed Levitated Markers, to consider the dependencies between the spans (pairs) by strategically packing the markers in the encoder. In particular, we propose a group packing strategy to enable our model to process massive spans together and consider their dependencies with limited resources. Furthermore, for the more complicated span pair classification tasks, we design a subject-oriented packing strategy, which packs each subject and all its objects into an instance to model the dependencies between same-subject span pairs. Our experiments show that our model with packed levitated markers outperforms the sequence labeling model by 0.4%-1.9% F1 on three flat NER tasks, beats the token concat model on six NER benchmarks, and obtains a 3.5%-3.6% strict relation F1 improvement with higher speed over previous SOTA models on ACE04 and ACE05. Code and models are publicly available at https://github.com/thunlp/PL-Marker.
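The group packing idea can be sketched roughly as follows (a hedged illustration, not the PL-Marker code: marker token names, the group size, and the position-id sharing rule are assumptions based on the abstract's description of levitated markers):

```python
# Sketch: pack "levitated" marker pairs for candidate spans after the text.
# Each marker reuses the position id of the span boundary it represents,
# so markers are processed jointly without altering the original tokens.

def pack_levitated_markers(tokens, spans, group_size=2):
    """Split candidate spans into groups of `group_size`; for each group,
    emit one instance containing the tokens plus one ([M], [/M]) marker
    pair per span, sharing position ids with the span boundaries."""
    instances = []
    for g in range(0, len(spans), group_size):
        group = spans[g:g + group_size]
        input_ids = list(tokens)
        position_ids = list(range(len(tokens)))
        for (start, end) in group:
            input_ids += ["[M]", "[/M]"]
            position_ids += [start, end]  # share positions with span boundaries
        instances.append((input_ids, position_ids))
    return instances

insts = pack_levitated_markers(["Kai", "Li", "works", "at", "THU"],
                               [(0, 1), (4, 4), (0, 4)], group_size=2)
```

Packing several spans into one instance is what lets the encoder attend across marker pairs, which is the mechanism the abstract credits for modeling inter-span dependencies.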
Kai Li, Yingjie Tian, 2021
Pixel-wise crack detection is a challenging task because of poor continuity and low contrast in cracks. The existing frameworks usually employ complex models that achieve good accuracy but low inference efficiency. In this paper, we present a lightweight encoder-decoder architecture, CarNet, for efficient and high-quality crack detection. To this end, we first propose that the ideal encoder should present an olive-type distribution in the number of convolutional layers at different stages. Specifically, as the network stages deepen in the encoder, the number of convolutional layers shows a downward trend after the model input is compressed in the initial network stage. Meanwhile, in the decoder, we introduce a lightweight up-sampling feature pyramid module to learn rich hierarchical features for crack detection. In particular, we compress the feature maps of the last three network stages to the same number of channels and then employ up-sampling with different multiples to resize them to the same resolution for information fusion. Finally, extensive experiments on four public databases, i.e., Sun520, Rain365, BJN260, and Crack360, demonstrate that our CarNet achieves a good trade-off between inference efficiency and test accuracy over the existing state-of-the-art methods.
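The decoder step described here, compressing the last three stages to a common channel count, upsampling by different factors to a common resolution, and fusing, can be sketched as follows (shapes, weights, and the sum-fusion rule are illustrative assumptions, not the exact CarNet module):

```python
import numpy as np

def compress_channels(x, out_ch, rng):
    """Stand-in for a 1x1 convolution: mix channels with a random matrix
    (illustrative weights; a real module would learn these)."""
    c, h, w = x.shape
    wmat = rng.standard_normal((out_ch, c)) / np.sqrt(c)
    return np.einsum("oc,chw->ohw", wmat, x)

def upsample(x, factor):
    """Nearest-neighbour up-sampling by an integer factor."""
    return x.repeat(factor, axis=1).repeat(factor, axis=2)

def fuse(stages, out_ch=16, seed=0):
    """Compress every stage to out_ch channels, upsample each to the
    finest resolution present, then fuse by element-wise summation."""
    rng = np.random.default_rng(seed)
    target = max(s.shape[1] for s in stages)
    maps = [upsample(compress_channels(s, out_ch, rng), target // s.shape[1])
            for s in stages]
    return sum(maps)

# Feature maps from the last three stages: (channels, height, width).
stages = [np.ones((64, 32, 32)), np.ones((128, 16, 16)), np.ones((256, 8, 8))]
fused = fuse(stages)
```

Aligning channels before resolutions keeps the upsampling cheap, which is consistent with the lightweight design goal stated in the abstract.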
Data auditing is a process to verify whether certain data have been removed from a trained model. A recently proposed method (Liu et al. 2020) uses the Kolmogorov-Smirnov (KS) distance for such data auditing. However, it fails under certain practical conditions. In this paper, we propose a new method called Ensembled Membership Auditing (EMA) for auditing data removal that overcomes these limitations. We compare both methods using benchmark datasets (MNIST and SVHN) and Chest X-ray datasets with multi-layer perceptrons (MLP) and convolutional neural networks (CNN). Our experiments show that EMA is robust under various conditions, including the failure cases of the previously proposed method. Our code is available at: https://github.com/Hazelsuko07/EMA.
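The KS-distance test that EMA improves upon compares the distribution of some per-sample statistic (e.g., loss) on the query set against a calibration set. A minimal, pure-Python sketch of the two-sample KS statistic itself (the surrounding auditing protocol is not shown):

```python
def ks_distance(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum gap between
    the empirical CDFs of samples a and b."""
    a, b = sorted(a), sorted(b)
    points = sorted(set(a) | set(b))

    def ecdf(sample, x):
        # Fraction of the sample that is <= x.
        return sum(v <= x for v in sample) / len(sample)

    return max(abs(ecdf(a, x) - ecdf(b, x)) for x in points)

# Identical loss distributions give distance 0; disjoint ones give 1.
same = ks_distance([0.1, 0.2, 0.3], [0.1, 0.2, 0.3])
far = ks_distance([0.1, 0.2], [5.0, 6.0])
```

A distance near 0 suggests the two sets are statistically indistinguishable to the model, which is the signal the auditing decision is built on.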
Kai Li, Qi-Qi Xia, Chun-Hwey Kim, 2021
The cut-off mass ratio of contact binaries is under debate. In this paper, we present an investigation of two contact binaries with mass ratios close to the low mass-ratio limit. It is found that the mass ratios of VSX J082700.8+462850 (hereafter J082700) and 1SWASP J132829.37+555246.1 (hereafter J132829) are both less than 0.1 ($q\sim0.055$ for J082700, and $q\sim0.089$ for J132829). J082700 is a shallow contact binary with a contact degree of $\sim$19%, and J132829 is a deep contact system with a fill-out factor of $\sim$70%. The $O-C$ diagram analysis indicates that both systems manifest a long-term period decrease. In addition, J082700 exhibits a cyclic modulation that most likely results from the Applegate mechanism. In order to explore the properties of extremely low mass-ratio contact binaries (ELMRCBs), we carried out a statistical analysis of contact binaries with mass ratios $q\lesssim0.1$ and discovered that the values of $J_{spin}/J_{orb}$ of three systems are greater than 1/3. Two possible explanations can account for this phenomenon. One is that some physical processes, unknown to date, were not considered when Hut derived the dynamical instability criterion. The other is that the dimensionless gyration radius ($k$) should be smaller than the value we used ($k^2=0.06$). We also found that the formation of ELMRCBs possibly proceeds through two channels. The study of the evolutionary states of ELMRCBs reveals that they are similar to those of normal W UMa contact binaries.
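For context, the stability criterion invoked here is Hut's: a tidally locked binary becomes dynamically (Darwin) unstable when $J_{spin} > \frac{1}{3} J_{orb}$. The spin-to-orbital ratio is commonly estimated in the low mass-ratio literature as (a standard expression reproduced from memory, not from this paper):

```latex
\frac{J_{spin}}{J_{orb}} = \frac{1+q}{q}\left(k_1^2 r_1^2 + q\, k_2^2 r_2^2\right),
```

where $q = M_2/M_1$, $r_{1,2} = R_{1,2}/a$ are the relative radii, and $k_{1,2}$ are the dimensionless gyration radii. Since the ratio scales with $k^2$, a smaller adopted $k^2$ directly lowers $J_{spin}/J_{orb}$, which is why the second explanation above (a gyration radius below $k^2 = 0.06$) could bring the three outlier systems back under the 1/3 threshold.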
We show that the two-photon autocorrelation in a black hole spacetime is closely related to the quasinormal modes of the latter. In particular, the emergence of the light echoes can be attributed to a collective effect of the asymptotic poles in the weighted sum of the squared modulus of the relevant Green's functions in the frequency domain, where the summation is carried out with respect to different angular components. For the Schwarzschild black holes, we demonstrate the results numerically by considering a transient spherical light source. Furthermore, analytically, it is argued that the physically pertinent geometric-optics approximation for the high-energy photons does not correspond to the large-overtone limit, but to the eikonal limit. For the latter, both the real and imaginary parts of the oscillations in the correlator can be mapped onto their geometric-optics counterparts, in terms of observable signatures of the black hole glimmer. To be specific, the physical interpretations of the results are elaborated by exploiting the well-known relations between the black hole quasinormal modes and the null geodesics. Moreover, we scrutinize an apparent dilemma: although it is rather intuitive that the frequency of the black hole glimmer is largely identical to the light-ring orbital frequency, it does not numerically approximate the real part of any individual quasinormal frequency in the eikonal limit. Besides exploring the Schwarzschild case, discussions are further extended to the Kerr black holes. We point out a subtlety in the resulting light echoes from the viewpoint of black hole perturbation theory. Possible astrophysical implications of the present study are also addressed.
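The "well-known relations" between quasinormal modes and null geodesics referenced here take, in the eikonal limit, the standard form (quoted for context from the quasinormal-mode literature, not derived in this abstract):

```latex
\omega_{\ell n} \simeq \ell\,\Omega_c - i\left(n+\tfrac{1}{2}\right)\left|\lambda\right|,
\qquad \Omega_c = \frac{1}{3\sqrt{3}\,M} \ \ \text{(Schwarzschild)},
```

where $\Omega_c$ is the orbital frequency of the light ring and $\lambda$ the Lyapunov exponent of the unstable null orbit. This also hints at a resolution of the dilemma mentioned above: $\Omega_c$ enters as the *spacing* between consecutive multipoles $\ell$, so a signal oscillating at the light-ring frequency reflects the collective pole structure rather than the real part of any single mode.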
Named entity recognition (NER) is a fundamental task of natural language processing (NLP). However, most state-of-the-art research is mainly oriented to high-resource languages such as English and has not been widely applied to low-resource languages. For the Malay language, relevant NER resources are limited. In this work, we propose a dataset construction framework, based on labeled datasets of homologous languages and iterative optimization, to build a Malay NER dataset (MYNER) comprising 28,991 sentences (over 384 thousand tokens). Additionally, to better integrate boundary information for NER, we propose a multi-task (MT) model with a bidirectional revision (Bi-revision) mechanism for the Malay NER task. Specifically, an auxiliary task, boundary detection, is introduced to improve NER training in both explicit and implicit ways. Furthermore, a gated ignoring mechanism is proposed to conduct conditional label transfer and alleviate error propagation from the auxiliary task. Experimental results demonstrate that our model achieves comparable results over baselines on MYNER. The dataset and the model in this paper will be publicly released as a benchmark.
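The gated ignoring idea, letting the model softly discount unreliable auxiliary boundary predictions, can be sketched as a learned per-token gate (a speculative illustration based only on the abstract; the gate parameterization, shapes, and combination rule are assumptions, not the paper's architecture):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gated_label_transfer(ner_logits, boundary_logits, gate_weights):
    """Gated ignoring sketch: a per-token gate in (0, 1) decides how much
    of the auxiliary boundary signal flows into the NER logits, so noisy
    boundary predictions can be (softly) ignored rather than propagated."""
    gate = sigmoid(boundary_logits @ gate_weights)  # shape (tokens, 1)
    return ner_logits + gate * boundary_logits      # conditional transfer

rng = np.random.default_rng(0)
out = gated_label_transfer(rng.standard_normal((6, 3)),  # NER logits
                           rng.standard_normal((6, 3)),  # boundary logits
                           rng.standard_normal((3, 1)))  # gate parameters
```

A gate near zero reduces the update to the plain NER logits, which is the error-propagation safeguard the abstract describes.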
Weikai Li, Songcan Chen, 2021
Unsupervised domain adaptation (UDA) aims to transfer knowledge from a well-labeled source domain to a different but related unlabeled target domain with identical label space. Currently, the main workhorse for solving UDA is domain alignment, which has proven successful. However, it is often difficult to find an appropriate source domain with identical label space. A more practical scenario is so-called partial domain adaptation (PDA), in which the source label set or space subsumes the target one. Unfortunately, in PDA, due to the existence of irrelevant categories in the source domain, it is quite hard to obtain a perfect alignment, thus resulting in mode collapse and negative transfer. Although several efforts have been made by down-weighting the irrelevant source categories, these strategies tend to be burdensome and risky, since exactly which categories are irrelevant is unknown. These challenges motivate us to find a relatively simpler alternative for solving PDA. To achieve this, we first provide a thorough theoretical analysis, which illustrates that the target risk is bounded by both model smoothness and between-domain discrepancy. Considering the difficulty of perfect alignment in PDA, we focus on model smoothness while discarding the riskier domain alignment to enhance the adaptability of the model. Specifically, we instantiate model smoothness as a quite simple intra-domain structure preserving (IDSP) scheme. To the best of our knowledge, this is the first attempt to address PDA without domain alignment. Finally, our empirical results on multiple benchmark datasets demonstrate that IDSP is not only superior to the PDA SOTAs by a significant margin on some benchmarks (e.g., +10% on Cl->Rw and +8% on Ar->Rw), but also complementary to domain alignment in the standard UDA setting.
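Model smoothness of the kind described can be instantiated, for illustration, as a consistency penalty between each target sample and its nearest intra-domain neighbours (a hedged sketch: the neighbour rule, distance metric, and penalty form are assumptions, not the paper's exact IDSP objective):

```python
import numpy as np

def idsp_penalty(features, logits, k=3):
    """Intra-domain structure-preserving sketch: penalize disagreement
    between each sample's predicted class distribution and those of its
    k nearest neighbours in feature space. A smooth model assigns
    similar predictions to nearby points."""
    n = len(features)
    d2 = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)  # exclude self from the neighbour set
    probs = np.exp(logits) / np.exp(logits).sum(-1, keepdims=True)  # softmax
    penalty = 0.0
    for i in range(n):
        nbrs = np.argsort(d2[i])[:k]
        penalty += ((probs[i] - probs[nbrs]) ** 2).sum()
    return penalty / n

rng = np.random.default_rng(0)
feats = rng.standard_normal((10, 5))
logits = rng.standard_normal((10, 4))
val = idsp_penalty(feats, logits)
```

Note that this penalty uses only the target domain's own geometry, no source data, which is what makes the approach immune to the misalignment risk caused by irrelevant source categories.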
Daiyong Chen, 2021
Non-sensitive axis feedback control is crucial for cross-coupling noise suppression in the application of full-maglev vertical superconducting gravity instruments. This paper introduces the non-sensitive axis feedback control of the test mass in a home-made full-maglev vertical superconducting accelerometer. In the feedback system, special superconducting circuits are designed to decouple and detect the multi-degree-of-freedom motions of the test mass. The decoupled motion signals are then processed by a PID controller and fed back to the side-wall coils to control the test mass. In our test, the test mass is controlled successfully and its displacement is reduced by about one order of magnitude in the laboratory. Accordingly, the noise level of the vertical superconducting accelerometer along the sensitive axis is also reduced.
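The PID stage in such a loop can be sketched generically (a discrete-time textbook PID; the gains, timestep, and toy first-order plant below are illustrative, not the instrument's actual parameters):

```python
class PID:
    """Discrete-time PID controller: u = Kp*e + Ki*integral(e) + Kd*de/dt."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)

# Drive a 1-D test-mass displacement toward zero through a toy damped plant.
pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.01)
x = 1.0  # initial displacement (arbitrary units)
for _ in range(2000):
    x += (-x + pid.update(0.0, x)) * 0.01  # toy first-order dynamics
```

In the instrument, the controller output would be applied through the side-wall coils rather than a software plant, but the structure of the control law is the same.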