Existing pre-trained language models (PLMs) have demonstrated the effectiveness of self-supervised learning for a broad range of natural language processing (NLP) tasks. However, most of them are not explicitly aware of domain-specific knowledge, which is essential for downstream tasks in many domains, such as tasks in e-commerce scenarios. In this paper, we propose K-PLUG, a knowledge-injected pre-trained language model based on the encoder-decoder transformer that can be transferred to both natural language understanding and generation tasks. Specifically, we propose five knowledge-aware self-supervised pre-training objectives to formulate the learning of domain-specific knowledge, including e-commerce domain-specific knowledge-bases, aspects of product entities, categories of product entities, and unique selling propositions of product entities. We verify our method in a diverse range of e-commerce scenarios that require domain-specific knowledge, including product knowledge base completion, abstractive product summarization, and multi-turn dialogue. K-PLUG significantly outperforms baselines across the board, which demonstrates that the proposed method effectively learns a diverse set of domain-specific knowledge for both language understanding and generation tasks. Our code is available.
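As a rough illustration of the multi-objective setup the abstract describes, the sketch below attaches a text-generation head plus product-category and product-aspect classification heads to a generic encoder-decoder transformer and sums their losses. The head names, dimensions, first-token pooling, and equal loss weighting are illustrative assumptions, not the authors' released K-PLUG implementation.

```python
import torch
import torch.nn as nn

class KnowledgeAwarePretrainer(nn.Module):
    """Toy encoder-decoder transformer with extra classification heads
    standing in for knowledge-aware pre-training objectives."""
    def __init__(self, vocab_size=32000, num_categories=2000, num_aspects=500, d_model=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.transformer = nn.Transformer(d_model=d_model, batch_first=True)
        self.lm_head = nn.Linear(d_model, vocab_size)          # generation-style objectives
        self.category_head = nn.Linear(d_model, num_categories)  # product-category prediction
        self.aspect_head = nn.Linear(d_model, num_aspects)       # product-aspect prediction

    def forward(self, src_ids, tgt_ids):
        memory = self.transformer.encoder(self.embed(src_ids))
        # Causal target mask omitted for brevity in this sketch.
        dec_out = self.transformer.decoder(self.embed(tgt_ids), memory)
        pooled = memory[:, 0]                                   # first-token pooling (assumption)
        return (self.lm_head(dec_out),
                self.category_head(pooled),
                self.aspect_head(pooled))

def multi_objective_loss(lm_logits, cat_logits, asp_logits,
                         lm_labels, cat_labels, asp_labels):
    ce = nn.CrossEntropyLoss(ignore_index=-100)
    return (ce(lm_logits.transpose(1, 2), lm_labels)
            + ce(cat_logits, cat_labels)
            + ce(asp_logits, asp_labels))      # equal weights: a simplifying assumption

# Usage with random token ids, just to show the shapes flow through:
model = KnowledgeAwarePretrainer()
src = torch.randint(0, 32000, (2, 64))
tgt = torch.randint(0, 32000, (2, 32))
lm_logits, cat_logits, asp_logits = model(src, tgt)
```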
Most question answering tasks focus on predicting concrete answers, e.g., named entities. These tasks can normally be solved by understanding the context, without requiring additional information. The Reading Comprehension of Abstract Meaning (ReCAM) task introduces abstract answers. To understand abstract meanings in the context, additional knowledge is essential. In this paper, we propose an approach that leverages pre-trained BERT token embeddings as a prior knowledge resource. According to the results, our approach using the pre-trained BERT outperformed the baselines, showing that pre-trained BERT token embeddings can serve as additional knowledge for understanding abstract meanings in question answering.
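A minimal sketch of the general idea, assuming a simple cosine-similarity scorer over BERT's input-embedding table; the scoring rule, mean pooling, and candidate set here are illustrative assumptions rather than the paper's exact model.

```python
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")
embedding_table = model.get_input_embeddings().weight  # (vocab_size, hidden_size)

def candidate_score(context: str, candidate: str) -> float:
    """Score a candidate abstract answer by similarity between its token
    embeddings and the mean embedding of the context tokens."""
    ctx_ids = tokenizer(context, return_tensors="pt")["input_ids"][0]
    cand_ids = tokenizer(candidate, add_special_tokens=False,
                         return_tensors="pt")["input_ids"][0]
    ctx_vec = embedding_table[ctx_ids].mean(dim=0)
    cand_vec = embedding_table[cand_ids].mean(dim=0)
    return torch.cosine_similarity(ctx_vec, cand_vec, dim=0).item()

# Pick the candidate whose embedding best matches the passage (toy example).
passage = "The committee weighed several options before reaching a decision."
candidates = ["possibility", "structure", "emotion"]
print(max(candidates, key=lambda c: candidate_score(passage, c)))
```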
When a sports coach practices the training profession, it is important to strengthen the various creative capacities that ensure a quick understanding of different playing situations during competition and the recall of previous experiences, as these experiences help the coach find appropriate solutions rapidly and anticipate what the opponent intends to do in order to achieve the coach's objectives. Prior experience and the positive feedback that results from winning championships and achieving successes may have the greatest effect on coaches' level of planned thinking. Closely examining the planned-thinking level of football coaches according to their achievements is therefore the main objective of this research, which uses a descriptive survey method to analyze a sample of (32) football coaches, of whom (12) have sports achievements of their own and (20) have none. The research used a planned-thinking scale in addition to meetings with some Asian football lecturers. The study found that the level of planned thinking is high among coaches who have won championships and sports achievements, and that these coaches were also distinguished from the rest by their higher education, focused attention, good match management, and self-confidence.
Pre-trained Transformer-based models have achieved state-of-the-art performance for various Natural Language Processing (NLP) tasks. However, these models often have billions of parameters, and thus are too resource-hungry and computation-intensive to suit low-capability devices or applications with strict latency requirements. One potential remedy for this is model compression, which has attracted considerable research attention. Here, we summarize the research in compressing Transformers, focusing on the especially popular BERT model. In particular, we survey the state of the art in compression for BERT, we clarify the current best practices for compressing large-scale Transformer models, and we provide insights into the workings of various methods. Our categorization and analysis also shed light on promising future research directions for achieving lightweight, accurate, and generic NLP models.
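To make the compression setting concrete, here is a short example of one common technique in this space, post-training dynamic quantization of BERT's linear layers using stock PyTorch; it is only an illustration of the problem the survey addresses, not a summary of its recommended method, and pruning or distillation are not shown.

```python
import os
import torch
from transformers import BertModel

model = BertModel.from_pretrained("bert-base-uncased")

# Quantize the Linear layers to int8 weights; activations are quantized
# dynamically at inference time.
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

def size_mb(m: torch.nn.Module) -> float:
    """Serialize the model and report its on-disk size in megabytes."""
    torch.save(m.state_dict(), "tmp.pt")
    mb = os.path.getsize("tmp.pt") / 1e6
    os.remove("tmp.pt")
    return mb

print(f"fp32: {size_mb(model):.1f} MB, int8: {size_mb(quantized):.1f} MB")
```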
This study aims to demonstrate how effective the training programs are from the viewpoint of the Company's trainees, to determine the nature of the relationship between the training program dimensions and the effectiveness of the training programs, and to show how the IPA technique can be used as a new management tool for dealing with the factors influencing training program effectiveness and for defining strategies to address the training program dimensions.