The study sought to determine the effectiveness of a training program based on cognitive flexibility theory in developing selected productive mind habits and preferred learning styles among female student teachers. It did so by identifying the level of mind habits needed by female student teachers in kindergartens, identifying their preferred learning styles, specifying the procedures of the cognitive-flexibility-based training program, studying the program's effectiveness in developing selected productive mind habits, and determining the percentage contribution of productive mind habits to preferred learning styles. The study followed a quasi-experimental approach with two equivalent groups (control and experimental). A scale of the sixteen productive mind habits was prepared according to Costa and Kallick's (2009) list, together with a measure of the productive mind habits needed by female kindergarten students, and Felder and Silverman's Index of Learning Styles (1999) was applied to a purposive sample of (46) third-year female kindergarten students, chosen because they are at the intermediate learning stage according to cognitive flexibility theory; the sample represents 20% of the research population. The results revealed a low level in six productive mind habits in the sample: perseverance, managing impulsivity, flexibility of thinking, creativity, continuous learning, and striving for accuracy. The sample's learning preferences also varied across the processing, perception, input, and understanding dimensions. The results showed the effectiveness of the training program based on cognitive flexibility theory in developing the productive mind habits needed by kindergarten students. The results also revealed the contribution of productive mind habits to learning-style preferences: individually, the productive mind habits predict preferred learning styles at rates ranging from 31% to 64% in the post-measurement, and the six productive mind habits together predict preferred learning styles at rates ranging from 18% to 63.8%, with the exception of the processing style, of which the creativity habit alone predicted 34%. Recommendations are offered in light of these results.
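To make the reported prediction percentages concrete, the following minimal sketch (with synthetic data, not the study's data) shows how a "contribution percentage" of habits to learning-style preferences can be read as the R² of a regression model; all variable names and values below are illustrative assumptions.

```python
# Hedged illustration: habit scores predict a learning-style score, and R^2 is
# the share of variance "contributed" by the habits. Synthetic data only.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
habits = rng.normal(size=(46, 6))                  # six habit scores per student
style = habits @ rng.normal(size=6) + rng.normal(scale=1.0, size=46)
model = LinearRegression().fit(habits, style)
# Share of variance in style scores explained by the six habits together.
print(f"R^2 = {model.score(habits, style):.2f}")
```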
The study aimed to identify the difficulties of using the Moodle platform from the point of view of teaching staff members at the Faculty of Education at Tishreen University. The study sample consisted of (50) teaching staff members at the Faculty of Education at Tishreen University, and a questionnaire of three axes was used (difficulties related to teaching staff members, difficulties related to students, and difficulties related to infrastructure), each axis comprising a number of items. The study used the descriptive approach. The results showed that the greatest difficulties experienced by teaching staff members, from their point of view, were the lack of conviction in the effectiveness of the Moodle platform among teaching staff members and students' inability to understand the study material through the platform, both of which were rated to a high degree. The study also took into account the variables of academic degree, number of years of experience, and gender.
Brain-Computer Interface (BCI) systems, especially those that recognize brain signals recorded as EEG (electroencephalography) using deep learning, are an important research topic that currently attracts many researchers. The Convolutional Neural Network (CNN) is one of the most important deep learning classifiers used in this recognition process, but its parameters have not yet been precisely defined so that it yields the highest recognition rate with the lowest possible training and recognition time. This research proposes a system for recognizing EEG signals using a CNN, while studying the effect of changing the network's parameters on the recognition rate, training time, and recognition time of brain signals. As a result, the proposed recognition system achieved a 76.38% recognition rate and reduced the classifier training time to 3 seconds by using the Common Spatial Pattern (CSP) in preprocessing the IV2b dataset, and a recognition rate of 76.533% was reached by adding a layer to the proposed classifier.
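As a rough illustration of the pipeline described above, the following sketch combines CSP spatial filtering with a small CNN classifier. The layer sizes, filter counts, and synthetic trials are illustrative assumptions and do not reproduce the paper's exact configuration or the IV2b data.

```python
# Hedged sketch: CSP feature extraction followed by a small 1-D CNN classifier.
import numpy as np
from scipy.linalg import eigh
import torch
import torch.nn as nn

def csp_filters(class1, class2, n_pairs=2):
    """class1/class2: arrays of shape (trials, channels, samples)."""
    def mean_cov(trials):
        return np.mean([np.cov(t) for t in trials], axis=0)  # channel covariance
    c1, c2 = mean_cov(class1), mean_cov(class2)
    # Generalized eigenproblem: maximize class-1 variance relative to the total.
    vals, vecs = eigh(c1, c1 + c2)
    order = np.argsort(vals)
    picks = np.concatenate([order[:n_pairs], order[-n_pairs:]])
    return vecs[:, picks].T                                   # (2*n_pairs, channels)

class SmallEEGCNN(nn.Module):
    def __init__(self, n_csp=4, n_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_csp, 16, kernel_size=11, padding=5), nn.ReLU(),
            nn.AvgPool1d(4),
            nn.Conv1d(16, 32, kernel_size=11, padding=5), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(32, n_classes),
        )
    def forward(self, x):                                     # x: (batch, n_csp, samples)
        return self.net(x)

# Toy usage: random arrays standing in for bandpass-filtered EEG trials.
rng = np.random.default_rng(0)
left  = rng.standard_normal((40, 22, 256))
right = rng.standard_normal((40, 22, 256))
W = csp_filters(left, right)                                  # spatial filters
X = np.einsum("fc,tcs->tfs", W, np.concatenate([left, right]))
model = SmallEEGCNN()
logits = model(torch.tensor(X, dtype=torch.float32))
print(logits.shape)                                           # (80, 2)
```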
Dialogue summarization comes with its own peculiar challenges compared with the summarization of news or scientific articles. In this work, we explore four challenges of the task: handling and differentiating parts of the dialogue belonging to multiple speakers, negation understanding, reasoning about the situation, and informal language understanding. Using a pretrained sequence-to-sequence language model, we explore speaker name substitution, negation scope highlighting, multi-task learning with relevant tasks, and pretraining on in-domain data. Our experiments show that our proposed techniques indeed improve summarization performance, outperforming strong baselines.
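The speaker-handling idea can be illustrated with a minimal sketch of speaker name substitution: real names are mapped to generic placeholder tokens before the dialogue is fed to the summarizer. The `#PersonK#` tag format and the turn-prefix heuristic below are assumptions for illustration, not necessarily the paper's exact scheme.

```python
# Hedged sketch: map speaker names to a small, consistent set of placeholder
# tokens so the summarizer can differentiate speakers without memorizing names.
import re

def substitute_speakers(dialogue: str) -> tuple[str, dict]:
    """Replace 'Name:' turn prefixes (and in-text mentions) with #PersonK# tags."""
    mapping = {}
    for line in dialogue.splitlines():
        match = re.match(r"^([A-Z][\w .'-]*):", line)
        if match and match.group(1) not in mapping:
            mapping[match.group(1)] = f"#Person{len(mapping) + 1}#"
    out = dialogue
    for name, tag in mapping.items():
        out = re.sub(rf"\b{re.escape(name)}\b", tag, out)
    return out, mapping

dialogue = "Alice: Are you coming to the demo?\nBob: Sure, Alice. After lunch."
converted, speakers = substitute_speakers(dialogue)
print(converted)
# #Person1#: Are you coming to the demo?
# #Person2#: Sure, #Person1#. After lunch.
```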
Due to the complex cognitive and inferential effort involved in manually writing a caption for each image or video input, human annotation resources for captioning tasks are very limited. We define language-resource efficiency as reaching the same performance with fewer annotated captions per input. We first study the performance degradation of captioning models in different language-resource settings. Our analysis of captioning models trained with the self-critical (SC) loss shows that the performance degradation is caused by the increasingly noisy estimation of the reward and baseline as language resources shrink. To mitigate this issue, we propose to reduce the variance of this noise in the baseline by generalizing the single pairwise comparison in the SC loss and using multiple generalized pairwise comparisons. A generalized pairwise comparison (GPC) measures the difference between the evaluation scores of two captions with respect to an input. Empirically, we show that a model trained with the proposed GPC loss is language-resource efficient and achieves performance similar to state-of-the-art models on MSCOCO while using only half of the language resources. Furthermore, our model significantly outperforms state-of-the-art models on a video captioning dataset that has only one labeled caption per input in the training set.
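To make the variance-reduction idea concrete, here is a hedged sketch of a self-critical-style policy-gradient loss in which the single greedy baseline is replaced by multiple pairwise comparisons among sampled captions: each sample's baseline is the mean reward of the other samples. The exact weighting used in the paper's GPC loss may differ.

```python
# Hedged sketch: REINFORCE-style loss with a multi-sample baseline instead of
# the single greedy-decode baseline of standard self-critical training.
import torch

def multi_pairwise_sc_loss(log_probs: torch.Tensor, rewards: torch.Tensor):
    """
    log_probs: (K,) sum of token log-probabilities for K sampled captions.
    rewards:   (K,) evaluation score (e.g., CIDEr) for each sampled caption.
    """
    K = rewards.shape[0]
    # Baseline for sample i = mean reward of the other K-1 samples, i.e. the
    # average of the K-1 pairwise comparisons involving sample i.
    baselines = (rewards.sum() - rewards) / (K - 1)
    advantages = rewards - baselines
    return -(advantages.detach() * log_probs).mean()

log_probs = torch.tensor([-12.3, -10.1, -11.8, -9.7], requires_grad=True)
rewards = torch.tensor([0.82, 0.91, 0.67, 0.88])
loss = multi_pairwise_sc_loss(log_probs, rewards)
loss.backward()
print(loss.item())
```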
Natural Language Processing (NLP) increasingly relies on general end-to-end systems that need to handle many different linguistic phenomena and nuances. For example, a Natural Language Inference (NLI) system has to recognize sentiment, handle numbers, resolve coreference, etc. Our solutions to complex problems are still far from perfect, so it is important to create systems that can learn to correct mistakes quickly, incrementally, and with little training data. In this work, we propose a continual few-shot learning (CFL) task, in which a system is challenged with a difficult phenomenon and asked to learn to correct mistakes with only a few (10 to 15) training examples. To this end, we first create benchmarks based on previously annotated data: two NLI datasets (ANLI and SNLI) and one sentiment analysis dataset (IMDB). Next, we present various baselines from diverse paradigms (e.g., memory-aware synapses and Prototypical networks) and compare them in few-shot learning and continual few-shot learning setups. Our contributions are in creating a benchmark suite and evaluation protocol for continual few-shot learning on text classification tasks, and in making several interesting observations on the behavior of similarity-based methods. We hope that our work serves as a useful starting point for future work on this important topic.
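As a concrete reference for the similarity-based baselines mentioned above, the following is a minimal sketch of Prototypical-network classification for one few-shot episode: the labeled support examples are averaged into per-class prototypes, and queries are assigned to the nearest prototype. The toy embeddings and dimensions are illustrative assumptions; a real setup would use a pretrained text encoder.

```python
# Hedged sketch of nearest-prototype classification for a few-shot episode.
import torch

def prototypes(support_emb: torch.Tensor, support_y: torch.Tensor, n_classes: int):
    """support_emb: (N, d) embeddings, support_y: (N,) integer labels."""
    return torch.stack([support_emb[support_y == c].mean(0) for c in range(n_classes)])

def classify(query_emb: torch.Tensor, protos: torch.Tensor):
    # Negative squared Euclidean distance to each prototype as the class score.
    dists = torch.cdist(query_emb, protos) ** 2
    return (-dists).argmax(dim=1)

# Toy episode: 2 classes, 12 support examples, 4-dim embeddings.
torch.manual_seed(0)
support = torch.randn(12, 4)
labels = torch.tensor([0, 1] * 6)
queries = torch.randn(5, 4)
protos = prototypes(support, labels, n_classes=2)
print(classify(queries, protos))  # predicted class per query
```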
Previous work on crosslingual Relation and Event Extraction (REE) suffers from the monolingual bias issue due to the training of models on only the source language data. An approach to overcome this issue is to use unlabeled data in the target language to aid the alignment of crosslingual representations, i.e., via fooling a language discriminator. However, as this approach does not condition on class information, a target language example of a class could be incorrectly aligned to a source language example of a different class. To address this issue, we propose a novel crosslingual alignment method that leverages class information of REE tasks for representation learning. In particular, we propose to learn two versions of representation vectors for each class in an REE task based on either source or target language examples. Representation vectors for corresponding classes will then be aligned to achieve class-aware alignment for crosslingual representations. In addition, we propose to further align representation vectors for language-universal word categories (i.e., parts of speech and dependency relations). As such, a novel filtering mechanism is presented to facilitate the learning of word category representations from contextualized representations on input texts based on adversarial learning. We conduct extensive crosslingual experiments with English, Chinese, and Arabic over REE tasks. The results demonstrate the benefits of the proposed method that significantly advances the state-of-the-art performance in these settings.
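The class-aware alignment idea can be sketched as follows: one representation vector per class is built from source-language examples and one from target-language examples (in practice the target side would rely on pseudo-labels, since the target data is unlabeled), and the distance between corresponding class vectors is penalized. Mean pooling and the L2 penalty are illustrative choices, not necessarily the paper's exact formulation.

```python
# Hedged sketch: align per-class mean representations across languages.
import torch

def class_alignment_loss(src_emb, src_y, tgt_emb, tgt_y, n_classes):
    """*_emb: (N, d) contextual representations; *_y: (N,) (pseudo-)labels."""
    loss = src_emb.new_zeros(())
    for c in range(n_classes):
        src_c, tgt_c = src_emb[src_y == c], tgt_emb[tgt_y == c]
        if len(src_c) and len(tgt_c):  # skip classes absent from a batch
            loss = loss + (src_c.mean(0) - tgt_c.mean(0)).pow(2).sum()
    return loss / n_classes

torch.manual_seed(0)
src = torch.randn(16, 8); src_y = torch.randint(0, 3, (16,))
tgt = torch.randn(16, 8); tgt_y = torch.randint(0, 3, (16,))
print(class_alignment_loss(src, src_y, tgt, tgt_y, n_classes=3))
```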
In this paper, we propose a novel fact checking and verification system to check claims against Wikipedia content. Our system retrieves relevant Wikipedia pages using Anserini, uses a BERT-large-cased question answering model to select the correct evidence, and verifies claims by comparing them with the evidence using an XLNet natural language inference model. Table cell evidence is obtained by looking for entity-matching cell values and using the TAPAS table question answering model. The pipeline exploits the zero-shot capabilities of existing models, and none of the models used in the pipeline requires additional training. Our system achieved a FEVEROUS score of 0.06 and a label accuracy of 0.39 in the FEVEROUS challenge.
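A structural sketch of the zero-shot pipeline is given below. The retrieval, evidence-selection, and verification components are placeholders standing in for Anserini, the BERT question answering model, and the XLNet NLI model respectively; the function names, signatures, and label mapping are assumptions for illustration (the TAPAS table branch is omitted).

```python
# Hedged structural sketch of a retrieve -> select evidence -> verify pipeline.
from typing import Callable

def verify_claim(
    claim: str,
    retrieve_pages: Callable[[str], list[str]],   # placeholder for Anserini search
    select_evidence: Callable[[str, str], str],   # placeholder for BERT QA selection
    nli_label: Callable[[str, str], str],         # placeholder for XLNet NLI
) -> str:
    pages = retrieve_pages(claim)
    evidence = [select_evidence(claim, page) for page in pages]
    # Compare the claim against each evidence piece; a contradiction or an
    # entailment decides the label, otherwise fall back to NOT ENOUGH INFO.
    labels = [nli_label(evidence_text, claim) for evidence_text in evidence]
    if "contradiction" in labels:
        return "REFUTES"
    if "entailment" in labels:
        return "SUPPORTS"
    return "NOT ENOUGH INFO"

# Toy usage with stub components in place of the real models.
print(verify_claim(
    "The Eiffel Tower is in Paris.",
    retrieve_pages=lambda c: ["The Eiffel Tower is a tower in Paris, France."],
    select_evidence=lambda c, p: p,
    nli_label=lambda e, c: "entailment",
))  # SUPPORTS
```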
With counterfactual bandit learning, models can be trained based on positive and negative feedback received for historical predictions, with no labeled data needed. Such feedback is often available in real-world dialog systems; however, the modularized architecture commonly used in large-scale systems prevents the direct application of such algorithms. In this paper, we study the feedback attribution problem that arises when using counterfactual bandit learning for multi-domain spoken language understanding. We introduce an experimental setup to simulate the problem on small-scale public datasets, propose attribution methods inspired by multi-agent reinforcement learning, and evaluate them against multiple baselines. We find that while directly using the overall feedback leads to disastrous performance, our proposed attribution methods allow training competitive models from user feedback.
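To illustrate the learning setup named above, here is a minimal sketch of counterfactual bandit learning with inverse propensity scoring (IPS): the model is trained from logged (input, action, propensity, reward) tuples rather than labels. This shows only the generic bandit objective, not the paper's attribution methods; all sizes and the clipping threshold are illustrative assumptions.

```python
# Hedged sketch: off-policy (counterfactual) training from logged feedback.
import torch
import torch.nn as nn

class Policy(nn.Module):
    def __init__(self, n_features=10, n_actions=4):
        super().__init__()
        self.linear = nn.Linear(n_features, n_actions)
    def forward(self, x):
        return torch.log_softmax(self.linear(x), dim=-1)

def ips_loss(log_pi, actions, propensities, rewards):
    """Importance-weighted reward; propensities are the logging policy's probs."""
    chosen = log_pi.gather(1, actions.unsqueeze(1)).squeeze(1)
    weights = (chosen.exp() / propensities).clamp(max=10.0)  # clip for variance
    return -(weights * rewards).mean()

torch.manual_seed(0)
x = torch.randn(32, 10)
actions = torch.randint(0, 4, (32,))
propensities = torch.full((32,), 0.25)        # logging policy was uniform
rewards = torch.randint(0, 2, (32,)).float()  # binary user feedback
policy = Policy()
loss = ips_loss(policy(x), actions, propensities, rewards)
loss.backward()
print(loss.item())
```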
Cross-domain Named Entity Recognition (NER) transfers NER knowledge from high-resource domains to a low-resource target domain. Due to limited labeled resources and domain shift, cross-domain NER is a challenging task. To address these challenges, we propose a progressive domain adaptation Knowledge Distillation (KD) approach -- PDALN. It achieves superior domain adaptability by employing three components: (1) adaptive data augmentation techniques, which alleviate the cross-domain gap and label sparsity simultaneously; (2) multi-level domain-invariant features, derived from a multi-grained MMD (Maximum Mean Discrepancy) approach, to enable knowledge transfer across domains; (3) an advanced KD schema, which progressively enables powerful pre-trained language models to perform domain adaptation. Extensive experiments on four benchmarks show that PDALN can effectively adapt high-resource domains to low-resource target domains, even when they differ in terminology and writing style. Comparison with other baselines indicates the state-of-the-art performance of PDALN.
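The MMD component can be sketched as follows: an RBF-kernel Maximum Mean Discrepancy between batches of source- and target-domain features, which the multi-grained variant described above would apply at several representation levels (e.g., token and sentence). The kernel bandwidths and feature sizes are illustrative assumptions.

```python
# Hedged sketch: biased RBF-kernel MMD estimate between two feature batches.
import torch

def rbf_mmd(x: torch.Tensor, y: torch.Tensor, bandwidths=(1.0, 2.0, 4.0)):
    """x: (n, d) source features, y: (m, d) target features."""
    def kernel(a, b):
        d2 = torch.cdist(a, b) ** 2
        return sum(torch.exp(-d2 / (2 * s ** 2)) for s in bandwidths)
    return kernel(x, x).mean() + kernel(y, y).mean() - 2 * kernel(x, y).mean()

torch.manual_seed(0)
src = torch.randn(64, 32)
tgt = torch.randn(64, 32) + 0.5   # shifted domain
print(rbf_mmd(src, tgt))          # larger value => larger domain gap
```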