Performance prediction, the task of estimating a system's performance without performing experiments, allows us to reduce the experimental burden caused by the combinatorial explosion of different datasets, languages, tasks, and models. In this paper, we make two contributions to improving performance prediction for NLP tasks. First, we examine performance predictors not only for holistic measures of accuracy such as F1 or BLEU, but also for fine-grained performance measures such as accuracy over individual classes of examples. Second, we propose methods to understand the reliability of a performance prediction model from two angles: confidence intervals and calibration. We perform an analysis of four types of NLP tasks, demonstrating both the feasibility of fine-grained performance prediction and the necessity of reliability analysis for future performance prediction methods. We make our code publicly available at https://github.com/neulab/Reliable-NLPPP
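To make the two reliability angles concrete, the sketch below (not the authors' released code; the features, model, and data are synthetic placeholders) shows one common way to obtain confidence intervals for a performance predictor via bootstrapping and to check their calibration by measuring empirical coverage.

```python
# Minimal sketch: bootstrap confidence intervals and a coverage-based calibration
# check for a performance predictor. X (experiment features) and y (observed scores,
# e.g. F1 or BLEU) are synthetic placeholders, not data from the paper.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                                  # hypothetical experiment features
y = X @ rng.normal(size=5) + rng.normal(scale=0.1, size=200)   # hypothetical observed scores

def bootstrap_predictions(X_train, y_train, X_test, n_boot=100):
    """Refit the predictor on bootstrap resamples to get a predictive distribution."""
    preds = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(X_train), len(X_train))
        model = GradientBoostingRegressor().fit(X_train[idx], y_train[idx])
        preds.append(model.predict(X_test))
    return np.stack(preds)                                     # (n_boot, n_test)

X_tr, y_tr, X_te, y_te = X[:150], y[:150], X[150:], y[150:]
preds = bootstrap_predictions(X_tr, y_tr, X_te)
lo, hi = np.percentile(preds, [2.5, 97.5], axis=0)             # 95% intervals per test point

# Calibration check: a well-calibrated 95% interval should cover ~95% of true scores.
coverage = np.mean((y_te >= lo) & (y_te <= hi))
print(f"Empirical coverage of 95% intervals: {coverage:.2f}")
```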
Aspect Sentiment Triplet Extraction (ASTE) aims to extract aspect term, sentiment and opinion term triplets from sentences and tries to provide a complete solution for aspect-based sentiment analysis (ABSA). However, the triplets extracted by ASTE can be confusing, since the sentiment in an ASTE triplet is the sentiment that the sentence expresses toward the aspect term rather than the sentiment of the aspect term and opinion term pair. In this paper, we introduce a more fine-grained Aspect-Sentiment-Opinion Triplet Extraction (ASOTE) task. ASOTE also extracts aspect term, sentiment and opinion term triplets; however, the sentiment in a triplet extracted by ASOTE is the sentiment of the aspect term and opinion term pair. We build four datasets for ASOTE based on several popular ABSA benchmarks. We propose a Position-aware BERT-based Framework (PBF) to address this task. PBF first extracts aspect terms from sentences. For each extracted aspect term, PBF generates an aspect term-specific sentence representation that considers both the meaning and the position of the aspect term, then extracts the associated opinion terms and predicts the sentiments of the aspect term and opinion term pairs based on that representation. Experimental results on the four datasets show the effectiveness of PBF.
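The following sketch illustrates the general idea of an aspect term-specific sentence representation that combines the aspect's meaning and position; it is not the authors' PBF implementation, and the model name and the distance-based weighting scheme are illustrative assumptions.

```python
# Minimal sketch (not the authors' PBF code): building an aspect term-specific
# sentence representation that uses both the aspect's meaning and its position.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")

def aspect_specific_representation(sentence, aspect_start, aspect_end):
    """Encode the sentence and down-weight tokens far from the aspect term span."""
    enc = tokenizer(sentence, return_tensors="pt", return_offsets_mapping=True)
    offsets = enc.pop("offset_mapping")[0]                 # (seq_len, 2) char spans per token
    with torch.no_grad():
        hidden = encoder(**enc).last_hidden_state[0]       # (seq_len, hidden)

    # Distance of each token (by character offset) from the aspect term span.
    centers = offsets.float().mean(dim=1)
    aspect_center = (aspect_start + aspect_end) / 2
    dist = (centers - aspect_center).abs()
    weights = torch.softmax(-dist / 10.0, dim=0)           # closer tokens weigh more
    return (weights.unsqueeze(-1) * hidden).sum(dim=0)     # (hidden,)

sent = "The pasta was great but the service was slow"
vec = aspect_specific_representation(sent, sent.index("service"),
                                      sent.index("service") + len("service"))
print(vec.shape)  # torch.Size([768])
```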
Texture exists in many products, such as wood, beef, and compressed tea. These abundant and stochastic texture patterns differ significantly between any two individual products. Unlike traditional digital ID tracking, in this paper we propose a novel approach for product traceability that directly uses the natural texture of the product itself as its unique identifier. A texture identification based traceability system for Puer compressed tea is developed to demonstrate the feasibility of the proposed solution. With tea-brick images collected from manufacturers and individual users, a large-scale dataset has been built to evaluate the performance of tea-brick texture verification and search algorithms. The texture similarity approach based on local feature extraction and matching achieves a verification accuracy of 99.6% and a top-1 search accuracy of 98.9%.
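As a rough illustration of texture verification by local feature extraction and matching (an assumed pipeline, not the deployed system), the sketch below uses ORB features and a ratio test to decide whether two tea-brick photos show the same physical object; the match-count threshold is an arbitrary placeholder.

```python
# Minimal sketch: verifying whether two texture images show the same physical object
# via local feature matching. ORB + a Lowe-style ratio test stand in for whatever
# local features the paper used; the threshold is an illustrative assumption.
import cv2

def texture_match_score(img_path_a, img_path_b):
    """Return the number of good local-feature matches between two texture images."""
    orb = cv2.ORB_create(nfeatures=2000)
    img_a = cv2.imread(img_path_a, cv2.IMREAD_GRAYSCALE)
    img_b = cv2.imread(img_path_b, cv2.IMREAD_GRAYSCALE)
    _, desc_a = orb.detectAndCompute(img_a, None)
    _, desc_b = orb.detectAndCompute(img_b, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    matches = matcher.knnMatch(desc_a, desc_b, k=2)
    # Ratio test: keep matches clearly better than their runner-up.
    good = [pair[0] for pair in matches
            if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance]
    return len(good)

def verify(img_a, img_b, threshold=40):
    """Decide 'same brick' when enough distinctive matches survive the ratio test."""
    return texture_match_score(img_a, img_b) >= threshold
```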
Automated knowledge discovery from trending chemical literature is essential for more efficient biomedical research. How to extract detailed knowledge about chemical reactions from the core chemistry literature is an emerging challenge that has not been well studied. In this paper, we study the new problem of fine-grained chemical entity typing, which poses interesting new challenges especially because of the complex name mentions that frequently occur in chemistry literature and the graphical representation of entities. We introduce a new benchmark data set (CHEMET) to facilitate the study of the new task and propose a novel multi-modal representation learning framework that solves the problem of fine-grained chemical entity typing by leveraging external resources with chemical structures and using cross-modal attention to learn effective representations of text in the chemistry domain. Experimental results show that the proposed framework outperforms multiple state-of-the-art methods.
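To make "cross-modal attention" concrete, the sketch below shows one standard way text token representations can attend over chemical-structure embeddings; it is illustrative only, and the dimensions and residual fusion step are assumptions rather than the paper's architecture.

```python
# Minimal sketch: cross-modal attention letting text token representations attend
# over chemical-structure embeddings. Dimensions and fusion are assumptions.
import torch
import torch.nn as nn

class CrossModalAttention(nn.Module):
    def __init__(self, text_dim=768, struct_dim=300, n_heads=8):
        super().__init__()
        self.proj = nn.Linear(struct_dim, text_dim)   # map structure features into text space
        self.attn = nn.MultiheadAttention(text_dim, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(text_dim)

    def forward(self, text_hidden, struct_feats):
        """text_hidden: (B, T, text_dim); struct_feats: (B, S, struct_dim)."""
        kv = self.proj(struct_feats)
        attended, _ = self.attn(query=text_hidden, key=kv, value=kv)
        return self.norm(text_hidden + attended)       # residual fusion

# Toy usage: 2 sentences of 10 tokens, each paired with 5 structure feature vectors.
fusion = CrossModalAttention()
out = fusion(torch.randn(2, 10, 768), torch.randn(2, 5, 300))
print(out.shape)  # torch.Size([2, 10, 768])
```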
We propose to measure fine-grained domain relevance - the degree to which a term is relevant to a broad (e.g., computer science) or narrow (e.g., deep learning) domain. Such measurement is crucial for many downstream tasks in natural language processing. To handle long-tail terms, we build a core-anchored semantic graph, which uses core terms with rich description information to semantically bridge the vast remaining fringe terms. To support a fine-grained domain without relying on a matching corpus for supervision, we develop hierarchical core-fringe learning, which learns core and fringe terms jointly in a semi-supervised manner contextualized in the hierarchy of the domain. To reduce expensive human effort, we employ automatic annotation and hierarchical positive-unlabeled learning. Our approach applies to big or small domains, covers head or tail terms, and requires little human effort. Extensive experiments demonstrate that our methods outperform strong baselines and even surpass professional human performance.
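The sketch below is only a loose stand-in for the idea of spreading relevance from labeled core terms to unlabeled fringe terms over a semantic graph: it uses plain graph-based label propagation rather than the paper's hierarchical core-fringe or positive-unlabeled learning, and the graph and seed labels are synthetic.

```python
# Minimal sketch (generic label propagation, not the authors' method): propagating
# domain-relevance scores from labeled "core" terms to unlabeled "fringe" terms
# over a core-anchored similarity graph. All data here is synthetic.
import numpy as np

rng = np.random.default_rng(0)
n_core, n_fringe = 20, 80
n = n_core + n_fringe

# Hypothetical symmetric similarity graph; core nodes carry seed relevance labels.
A = rng.random((n, n)); A = (A + A.T) / 2; np.fill_diagonal(A, 0)
seed = np.zeros(n)
seed[:n_core] = rng.integers(0, 2, n_core)       # 1 = relevant to the target domain

# Normalized propagation: repeatedly mix neighbors' scores with the core seed labels.
D_inv_sqrt = np.diag(1 / np.sqrt(A.sum(axis=1)))
S = D_inv_sqrt @ A @ D_inv_sqrt
scores, alpha = seed.copy(), 0.8
for _ in range(50):
    scores = alpha * S @ scores + (1 - alpha) * seed

print("Top fringe terms by propagated relevance:",
      np.argsort(-scores[n_core:])[:5] + n_core)
```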
Considering event structure information has proven helpful in text-based stock movement prediction. However, existing works mainly adopt coarse-grained events, which lose the specific semantic information of diverse event types. In this work, we propose to incorporate fine-grained events into stock movement prediction. First, we construct a professional finance event dictionary with domain experts and use it to automatically extract fine-grained events from finance news. Then we design a neural model that combines finance news with fine-grained event structure and stock trade data to predict stock movement. In addition, to improve the generalizability of the proposed method, we design an advanced model that uses the extracted fine-grained events as distantly supervised labels to train a multi-task framework of event extraction and stock prediction. The experimental results show that our method outperforms all the baselines and has good generalizability.
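As a toy illustration of dictionary-driven fine-grained event extraction from finance news (the event types and trigger phrases below are invented placeholders, not the experts' dictionary from the paper), one minimal approach is trigger matching:

```python
# Minimal sketch: extracting fine-grained finance events from news text with a
# trigger dictionary. Event types and triggers are illustrative placeholders.
import re

EVENT_DICT = {
    "share_repurchase": ["buyback", "repurchase"],
    "executive_change": ["resigns", "appoints", "steps down"],
    "earnings_guidance": ["raises guidance", "cuts forecast"],
}

def extract_events(headline):
    """Return the fine-grained event types whose trigger phrases appear in the text."""
    text = headline.lower()
    found = []
    for event_type, triggers in EVENT_DICT.items():
        if any(re.search(r"\b" + re.escape(t) + r"\b", text) for t in triggers):
            found.append(event_type)
    return found

print(extract_events("Acme Corp announces $2B share repurchase as CFO steps down"))
# ['share_repurchase', 'executive_change']
```

Extracted event types like these could then serve either as extra input features to a stock movement model or, as described above, as distantly supervised labels for a multi-task event extraction and prediction setup.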