
Humans can learn novel concepts from very few examples; in contrast, state-of-the-art machine learning algorithms typically need thousands of examples to do so. In this paper, we propose an algorithm for learning novel concepts by representing them as programs over existing concepts. The concept learning problem then becomes a program synthesis problem, and our algorithm learns from a few examples to synthesize a program representing the novel concept. In addition, we perform a theoretical analysis of our approach for the case where the program defining the novel concept over existing ones is context-free. We show that, given a learned grammar-based parser and a novel production rule, we can augment the parser with the production rule in a way that provably generalizes. We evaluate our approach by learning concepts in the semantic parsing domain, extended to the few-shot novel concept learning setting, and show that our approach significantly outperforms end-to-end neural semantic parsers.
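The core idea, representing a novel concept as a program over existing concepts and searching for one consistent with a few labelled examples, can be sketched as a toy enumerative synthesizer. The primitives, the tiny DSL of conjunctions/disjunctions, and the examples below are illustrative assumptions, not the paper's actual domain-specific language:

```python
from itertools import product

# Existing concepts: predicates over integers (toy stand-ins).
PRIMITIVES = {
    "is_even": lambda x: x % 2 == 0,
    "is_positive": lambda x: x > 0,
    "is_small": lambda x: abs(x) < 10,
}

def enumerate_programs():
    """Candidate programs: single primitives, then AND/OR of two primitives."""
    names = list(PRIMITIVES)
    for n in names:
        yield n, PRIMITIVES[n]
    for a, b in product(names, repeat=2):
        yield (f"{a} AND {b}",
               (lambda f, g: lambda x: f(x) and g(x))(PRIMITIVES[a], PRIMITIVES[b]))
        yield (f"{a} OR {b}",
               (lambda f, g: lambda x: f(x) or g(x))(PRIMITIVES[a], PRIMITIVES[b]))

def synthesize(examples):
    """Return the first program consistent with all (input, label) examples."""
    for desc, prog in enumerate_programs():
        if all(prog(x) == y for x, y in examples):
            return desc, prog
    return None

# Few-shot examples of a novel concept: "small even number".
examples = [(2, True), (4, True), (3, False), (12, False)]
desc, prog = synthesize(examples)
```

The synthesized program then classifies unseen inputs, which is what lets the approach generalize from a handful of examples.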
Prior studies on text-to-text generation typically assume that the model can figure out what to attend to in the input and what to include in the output through seq2seq learning alone, with only parallel training data and no additional guidance. However, it remains unclear whether current models can preserve important concepts in the source input, as seq2seq learning places no explicit focus on those concepts, and commonly used evaluation metrics treat them as no more important than other tokens. In this paper, we present a systematic analysis of whether current seq2seq models, especially pre-trained language models, are good enough at preserving important input concepts, and of the extent to which explicitly guiding generation with the concepts as lexical constraints is beneficial. We answer these questions through extensive analytical experiments on four representative text-to-text generation tasks. Based on the observations, we then propose a simple yet effective framework to automatically extract, denoise, and enforce important input concepts as lexical constraints. The new method performs comparably to or better than its unconstrained counterpart on automatic metrics, achieves higher coverage of concept preservation, and receives better ratings in human evaluation. Our code is available at https://github.com/morningmoni/EDE.
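A minimal sketch of the enforce idea, under a strong simplifying assumption: instead of constrained decoding, candidate outputs are simply re-ranked by how well they cover concepts extracted from the source. The extractor and the examples are toy stand-ins, not the paper's framework:

```python
def extract_concepts(source, stopwords={"the", "a", "of", "and", "to"}):
    """Toy concept extractor: content words from the source text."""
    return {w for w in source.lower().split() if w not in stopwords}

def concept_coverage(candidate, concepts):
    """Fraction of source concepts that appear in the candidate output."""
    tokens = set(candidate.lower().split())
    return len(concepts & tokens) / max(len(concepts), 1)

def rerank(candidates, source):
    """Prefer the candidate that preserves the most source concepts."""
    concepts = extract_concepts(source)
    return max(candidates, key=lambda c: concept_coverage(c, concepts))

source = "the study of protein folding and structure"
candidates = [
    "a paper about biology",
    "a study of protein structure and folding",
]
best = rerank(candidates, source)
```

A real system would denoise the extracted concepts and enforce them during decoding rather than post hoc, but the coverage metric above mirrors what the evaluation measures.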
We investigate ways to compose complex concepts in texts from primitive ones while grounding them in images. We propose the Concept and Relation Graph (CRG), which builds on constituency analysis and consists of recursively combined concepts with predicate functions. We further propose a concept composition neural network, called Composer, that leverages the CRG for visually grounded concept learning. Specifically, we learn the grounding of both primitive and all composed concepts by aligning them to images, and show that learning to compose leads to more robust grounding results, measured in text-to-image matching accuracy. Notably, our model can ground concepts at multiple levels of granularity, from individual words through intermediate constituents to the full sentence level. Composer leads to pronounced improvements in matching accuracy when the evaluation data has significant compound divergence from the training data.
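The recursive composition idea can be illustrated with toy vectors: primitive concept embeddings are combined by a composition function, and the composed concept is matched against an image embedding by cosine similarity. The vectors and the composition rule are illustrative assumptions, not the learned Composer network:

```python
import math

def norm(v):
    return math.sqrt(sum(x * x for x in v))

def compose(left, right):
    """Toy predicate function: combine two concept vectors (normalized sum)."""
    s = [x + y for x, y in zip(left, right)]
    n = norm(s)
    return [x / n for x in s]

def match_score(concept_vec, image_vec):
    """Cosine similarity between a (composed) concept and an image embedding."""
    dot = sum(x * y for x, y in zip(concept_vec, image_vec))
    return dot / (norm(concept_vec) * norm(image_vec))

# Toy 3-d embeddings for two primitive concepts and one image.
red = [1.0, 0.0, 0.0]
ball = [0.0, 1.0, 0.0]
image_of_red_ball = [1.0, 1.0, 0.0]

red_ball = compose(red, ball)          # composed concept "red ball"
score = match_score(red_ball, image_of_red_ball)
```

Nesting `compose` calls along a constituency parse yields intermediate- and sentence-level concepts, each of which can be scored against images the same way.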
Biomedical Concept Normalization (BCN) is widely used in biomedical text processing as a fundamental module. Owing to the numerous surface variants of biomedical concepts, BCN remains challenging and unsolved. In this paper, we exploit biomedical concept hypernyms to facilitate BCN. We propose the Biomedical Concept Normalizer with Hypernyms (BCNH), a novel framework that adopts list-wise training to make use of both hypernyms and synonyms, and also employs a norm constraint on the representation of hypernym-hyponym entity pairs. The experimental results show that BCNH outperforms the previous state-of-the-art model on the NCBI dataset.
In this paper, we study the problem of recognizing compositional attribute-object concepts within the zero-shot learning (ZSL) framework. We propose an episode-based cross-attention (EpiCA) network that combines the merits of a cross-attention mechanism and an episode-based training strategy to recognize novel compositional concepts. First, EpiCA uses cross-attention to correlate concept and visual information, and employs a gated pooling layer to build contextualized representations of both images and concepts. The updated representations are used for a more in-depth multi-modal relevance calculation for concept recognition. Second, a two-phase episodic training strategy, in particular the transductive phase, exploits unlabeled test examples to alleviate the low-resource learning problem. Experiments on two widely used zero-shot compositional learning (ZSCL) benchmarks demonstrate the effectiveness of the model compared with recent approaches in both the conventional and generalized ZSCL settings.
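The cross-attention step can be illustrated with a single-head, pure-Python sketch in which concept tokens (queries) attend over image-region features (keys and values). The dimensions and values are toy choices, and the learned projections and gated pooling layer of EpiCA are omitted:

```python
import math

def softmax(xs):
    m = max(xs)
    e = [math.exp(x - m) for x in xs]
    s = sum(e)
    return [x / s for x in e]

def cross_attention(queries, keys, values):
    """For each query, return a weighted sum of values, with weights
    given by softmax of scaled dot-products between query and keys."""
    d = len(keys[0])
    out = []
    for q in queries:
        logits = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        w = softmax(logits)
        out.append([sum(wi * v[j] for wi, v in zip(w, values))
                    for j in range(len(values[0]))])
    return out

concepts = [[1.0, 0.0]]                 # one concept token (query)
regions = [[1.0, 0.0], [0.0, 1.0]]      # two image regions (keys = values)
attended = cross_attention(concepts, regions, regions)
```

The attended output is weighted toward the region most similar to the concept token, which is the contextualization the abstract describes.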
Biomaterials are synthetic or natural materials used for constructing artificial organs, fabricating prostheses, or replacing tissues. The last century saw the development of thousands of novel biomaterials and, as a result, an exponential increase in scientific publications in the field. Large-scale analysis of biomaterials and their performance could enable data-driven material selection and implant design. However, such analysis requires the identification and organization of concepts, such as materials and structures, from published texts. To facilitate future information extraction and the application of machine-learning techniques, we developed a semantic annotator specifically tailored to the biomaterials literature. The Biomaterials Annotator has been implemented following a modular organization, using software containers for the different components, orchestrated with Nextflow as the workflow manager. The natural language processing (NLP) components are mainly developed in Java. This set-up has enabled named entity recognition of seventeen classes relevant to the biomaterials domain. Here we detail the development, evaluation and performance of the system, as well as the release of the first collection of annotated biomaterials abstracts. We make both the corpus and the system available to the community to promote future efforts in the field and contribute towards its sustainability.
The current study aims to reveal body dysmorphic disorder and its relation to self-esteem among cosmetic surgery clinic visitors in Homs, in light of the independent variables of sex and age, using the correlational descriptive methodology. To achieve the goals of the study, the researcher applied a body dysmorphic disorder scale together with a self-esteem scale to a sample of 200 visitors to cosmetic surgery clinics.
Tools for the evaluation and operational analysis of traffic flow differ from one another; every country has its own model, and the technology revolution has changed these tools. In this paper, we use one of the best-known models in the traffic analysis field: the Gipps model, known through the Trafficware suite (Synchro/SimTraffic). We use Synchro 9.0 for the operational analysis, following the Highway Capacity Manual 2010 (HCM 2010) methodology. We then use the Gipps model to evaluate the study area against microscopic criteria: saturation flow, speed, headway, and delay. After the operational analysis and microscopic evaluation, we choose two parameters, speed and delay, to compare and estimate the performance of the network. From the speed and delay parameters, we determine the Level of Service (LOS) as a reflection of field traffic flow. Based on these promising results, we recommend that analysts use both models in the operational analysis and evaluation of traffic flow.
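Determining LOS from delay follows fixed HCM bands. The sketch below maps average control delay at a signalized intersection to a letter grade using the commonly cited HCM 2010 thresholds (values in seconds per vehicle; the exact bands should be verified against the manual for the facility type being analysed):

```python
def level_of_service(delay_s_per_veh):
    """Map average control delay (s/veh) at a signalized intersection to a
    Level of Service grade, using the commonly cited HCM 2010 bands:
    A <= 10, B <= 20, C <= 35, D <= 55, E <= 80, F > 80."""
    thresholds = [(10, "A"), (20, "B"), (35, "C"), (55, "D"), (80, "E")]
    for limit, los in thresholds:
        if delay_s_per_veh <= limit:
            return los
    return "F"
```

For example, an approach averaging 45 s of control delay per vehicle would be graded LOS D.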
The present study aims to identify the similarities and differences in the view of values among the main social theoretical trends adopted as foundations for discussing topics of interest to sociology. These trends agreed in principle on attributing objectivity to values, but differed over their interpretation, understanding, and change. We resort to the comparative approach to disclose these similarities and differences, as it is the social-research approach best suited to the nature of the subject studied, especially in the study of social phenomena. We also turn to content analysis, as the study requires, when determining the view of values in each trend as stated by its first founders: Émile Durkheim, founder of the functionalist trend and of the rules of method in sociology; Karl Marx, founder of the materialist conception of history and social life; and Max Weber, founder of interpretive sociology and pioneer of the ideal-type model of social analysis. From the general context of the founders' efforts to study social problems and phenomena, which were adopted as the basis of research in sociology, the researcher can draw out their view of values and reveal the aspect underlying their studies, especially since none of these first pioneers addressed the concept of values directly.
Based on the above research, we see that displacement is a distinctive aesthetic feature in the art of the text and has a well-established presence in critical and stylistic studies, so this research sought to explore its uses in Western and Arab thought. It should be noted that the affective charge of displacement reveals its emotional (moral) dimension in its rhythmic, syntactic, and semantic contexts.