The study sought to determine the effectiveness of a training program based on the theory of cognitive flexibility in developing some productive mind habits and preferred learning styles among female kindergarten student teachers. It did so by identifying the level of mind habits necessary for these students and their preferred learning styles, specifying the procedures of the training program based on cognitive flexibility theory, studying the program's effectiveness in developing some productive mind habits, and determining the percentage contribution of productive mind habits to preferred learning styles. The study followed a quasi-experimental approach with two equivalent groups (control and experimental). A scale of the sixteen productive mind habits was prepared according to Costa and Kallick's (2009) list, together with a measure of the productive mind habits necessary for female kindergarten student teachers, and Felder and Silverman's Index of Learning Styles (1999) was applied to a purposive sample of (46) third-year female kindergarten students, chosen because they are at the intermediate learning stage according to cognitive flexibility theory; the sample represents 20% of the research population. The results revealed a low level of six productive mind habits in the sample: perseverance, managing impulsivity, thinking flexibly, creativity, continuous learning, and striving for accuracy. The sample's learning style preferences also varied across the processing, perception, input, and understanding dimensions. The results showed the effectiveness of the training program based on cognitive flexibility theory in developing the productive mind habits necessary for kindergarten student teachers. They also revealed the contribution of productive mind habits to learning style preferences: individually, the productive mind habits predict preferred learning styles at rates ranging from 31% to 64% in the post-measurement, and the six habits together predict preferred learning styles at rates ranging from 18% to 63.8%, with the exception of the processing style, of which the creativity habit alone predicted 34%. Some recommendations are offered in light of these results.
The study aimed to identify the difficulties of using the Moodle platform from the point of view of teaching staff members at the Faculty of Education at Tishreen University. The sample consisted of (50) teaching staff members at the faculty, to whom a questionnaire of three axes (difficulties related to teaching staff members, difficulties related to students, and difficulties related to infrastructure), each comprising a number of items, was applied; the study used the descriptive approach. The results showed that the most prominent difficulties from the staff members' point of view were the lack of conviction in the effectiveness of the Moodle platform among teaching staff members and students' inability to understand the study material through the platform, both of which were rated to a high degree. The study also considered the variables of academic degree, number of years of experience, and gender.
Brain-Computer Interfaces (BCI), and especially systems that recognize brain signals recorded as EEG (electroencephalography) using deep learning, are among the research topics currently attracting wide interest. Convolutional Neural Networks (CNNs) are among the most important deep learning classifiers used in this recognition process, but the parameters of this classifier have not yet been precisely defined so as to give the highest recognition rate with the lowest possible training and recognition time. This research proposes a system for recognizing EEG signals using a CNN, while studying the effect of changing the network's parameters on the recognition rate, training time, and recognition time of brain signals. The proposed recognition system achieved a 76.38% recognition rate and reduced classifier training time to 3 seconds by using Common Spatial Patterns (CSP) in preprocessing the IV2b dataset, and a recognition rate of 76.533% was reached by adding a layer to the proposed classifier.
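To make the pipeline concrete, here is a minimal sketch of CSP feature extraction followed by a small 1-D CNN classifier, in the spirit of the system described above. The filter counts, kernel sizes, and layer depths are illustrative placeholders, not the paper's reported architecture, and the code assumes two-class motor-imagery trials shaped (trials, channels, samples) as in IV2b.

```python
# Illustrative sketch only: CSP filters + a toy CNN for two-class EEG.
import numpy as np
import torch.nn as nn
from scipy.linalg import eigh

def fit_csp(X_a, X_b, n_filters=4):
    """X_*: (trials, channels, samples) for each class.
    Returns a CSP projection matrix W of shape (n_filters, channels)."""
    def mean_cov(X):
        return np.mean([t @ t.T / np.trace(t @ t.T) for t in X], axis=0)
    Ca, Cb = mean_cov(X_a), mean_cov(X_b)
    # Generalized eigenproblem: Ca w = lambda (Ca + Cb) w
    vals, vecs = eigh(Ca, Ca + Cb)
    order = np.argsort(vals)
    pick = np.r_[order[:n_filters // 2], order[-n_filters // 2:]]
    return vecs[:, pick].T

class SmallEEGCNN(nn.Module):
    """Toy CNN over CSP-filtered time series; depths/kernels are guesses."""
    def __init__(self, n_filters=4, n_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_filters, 16, kernel_size=25, padding=12), nn.ReLU(),
            nn.AvgPool1d(4),
            nn.Conv1d(16, 32, kernel_size=11, padding=5), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(32, n_classes))

    def forward(self, x):  # x: (batch, n_filters, samples)
        return self.net(x)
```

In use, each raw trial would first be projected with the fitted CSP filters (`W @ trial`) before being fed to the network.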
Common acquisition functions for active learning use either uncertainty or diversity sampling, aiming to select difficult and diverse data points from the pool of unlabeled data, respectively. In this work, leveraging the best of both worlds, we propose an acquisition function that opts for selecting contrastive examples, i.e. data points that are similar in the model feature space yet for which the model outputs maximally different predictive likelihoods. We compare our approach, CAL (Contrastive Active Learning), with a diverse set of acquisition functions in four natural language understanding tasks and seven datasets. Our experiments show that CAL performs consistently better than or on par with the best performing baseline across all tasks, on both in-domain and out-of-domain data. We also conduct an extensive ablation study of our method, and we further analyze all actively acquired datasets, showing that CAL achieves a better trade-off between uncertainty and diversity than other strategies.
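A rough sketch of a CAL-style acquisition score follows, assuming precomputed encoder features and softmax probabilities for both the labeled and unlabeled pools; the neighbor count and the exact divergence used here are assumptions, not necessarily the paper's formulation.

```python
# Hedged sketch: score unlabeled points by how much their predictions
# diverge from those of their nearest *labeled* neighbors in feature space.
import numpy as np
from sklearn.neighbors import NearestNeighbors
from scipy.stats import entropy  # entropy(p, q) computes KL(p || q)

def cal_scores(feat_unlab, prob_unlab, feat_lab, prob_lab, k=10):
    """feat_*: (n, d) encoder features; prob_*: (n, classes) softmax outputs.
    Returns one contrastive score per unlabeled point (higher = acquire)."""
    index = NearestNeighbors(n_neighbors=k).fit(feat_lab)
    _, neigh_idx = index.kneighbors(feat_unlab)
    return np.array([
        np.mean([entropy(prob_lab[j], q) for j in neigh])  # mean KL to candidate
        for neigh, q in zip(neigh_idx, prob_unlab)
    ])
```

The highest-scoring points are the "contrastive" ones: close to labeled data in feature space, yet predicted very differently.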
Dialogue summarization comes with its own peculiar challenges, as opposed to news or scientific article summarization. In this work, we explore four different challenges of the task: handling and differentiating parts of the dialogue belonging to multiple speakers, negation understanding, reasoning about the situation, and informal language understanding. Using a pretrained sequence-to-sequence language model, we explore speaker name substitution, negation scope highlighting, multi-task learning with relevant tasks, and pretraining on in-domain data. Our experiments show that our proposed techniques indeed improve summarization performance, outperforming strong baselines.
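Of the four techniques, speaker name substitution is the simplest to illustrate. The sketch below maps turn-prefix names to stable tags; the `#PersonN#` tag format and the `Name:` turn convention are assumptions for illustration, not necessarily the paper's exact scheme.

```python
# Toy sketch of speaker-name substitution: replace surface names with
# canonical tags so the model can track speakers consistently.
import re

def substitute_speakers(dialogue: str) -> tuple[str, dict]:
    """Replace 'Name:' turn prefixes with stable tags like '#Person1#:'."""
    mapping = {}
    def repl(match):
        name = match.group(1)
        mapping.setdefault(name, f"#Person{len(mapping) + 1}#")
        return mapping[name] + ":"
    tagged = re.sub(r"^(\w+):", repl, dialogue, flags=re.MULTILINE)
    return tagged, mapping

text = "Amanda: I baked cookies.\nJerry: Great, bring some!"
print(substitute_speakers(text)[0])
# #Person1#: I baked cookies.
# #Person2#: Great, bring some!
```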
Contrastive learning has emerged as a powerful representation learning method and facilitates various downstream tasks, especially when supervised data is limited. How to construct efficient contrastive samples through data augmentation is key to its success. Unlike in vision tasks, the data augmentation method for contrastive learning has not been investigated sufficiently in language tasks. In this paper, we propose a novel approach to constructing contrastive samples for language tasks using text summarization. We use these samples for supervised contrastive learning to gain better text representations, which greatly benefits text classification tasks with limited annotations. To further improve the method, we mix up samples from different classes and add an extra regularization, named Mixsum, in addition to the cross-entropy loss. Experiments on real-world text classification datasets (Amazon-5, Yelp-5, AG News, and IMDb) demonstrate the effectiveness of the proposed contrastive learning framework with summarization-based data augmentation and Mixsum regularization.
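As a hedged illustration of the supervised contrastive part, the loss below treats a document and its generated summary as two views of the same class. The temperature and masking details are generic supervised-contrastive choices rather than the paper's, and the Mixsum mixing step is omitted.

```python
# Minimal supervised contrastive loss over stacked (document, summary)
# embeddings; every hyperparameter here is an illustrative guess.
import torch

def supcon_loss(z, labels, temperature=0.1):
    """z: (2N, d) L2-normalized embeddings of N documents and their N
    summaries stacked together; labels: (2N,) class ids.
    Positives for each anchor are all other samples with the same label."""
    n = z.size(0)
    sim = z @ z.T / temperature
    eye = torch.eye(n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(eye, float("-inf"))          # exclude self-pairs
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos = (labels[:, None] == labels[None, :]) & ~eye  # same-class positives
    denom = pos.sum(1).clamp(min=1)                    # avoid divide-by-zero
    return -(log_prob.masked_fill(~pos, 0.0)).sum(1).div(denom).mean()
```

Because summaries share the label of their source documents, they act as label-preserving augmentations inside this loss.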
Generative Adversarial Networks (GANs) have achieved great success in image synthesis, but have proven difficult to apply to generating natural language. Challenges arise from the uninformative learning signals passed from the discriminator. In other words, the poor learning signals limit the learning capacity for generating language with rich structures and semantics. In this paper, we propose to adopt the counter-contrastive learning (CCL) method to support the generator's training in language GANs. In contrast to standard GANs, which adopt a simple binary classifier to discriminate whether a sample is real or fake, we employ a counter-contrastive learning signal that advances the training of language synthesizers by (1) pulling the language representations of generated and real samples together and (2) pushing apart representations of real samples, to compete with the discriminator and thus prevent it from being overtrained. We evaluate our method on both synthetic and real benchmarks and obtain competitive performance compared to previous GANs for adversarial sequence generation.
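One possible reading of the two CCL terms as a generator-side loss is sketched below; the similarity measure, temperature, and signs are my assumptions rather than the paper's exact objective.

```python
# Speculative sketch of a counter-contrastive generator loss:
# (1) pull fake representations toward real ones, and
# (2) push real representations apart, countering the discriminator.
import torch
import torch.nn.functional as F

def ccl_generator_loss(h_fake, h_real, temperature=0.5):
    """h_fake, h_real: (N, d) sequence representations taken from the
    discriminator's feature space (an assumption of this sketch)."""
    h_fake = F.normalize(h_fake, dim=1)
    h_real = F.normalize(h_real, dim=1)
    pull = (h_fake @ h_real.T / temperature).mean()    # fake ~ real
    rr = h_real @ h_real.T / temperature
    push = (rr - torch.diag(torch.diag(rr))).mean()    # real vs. real
    return -(pull - push)  # the generator minimizes this quantity
```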
We investigate transfer learning based on pretrained neural machine translation models to translate between (low-resource) similar languages. This work is part of our contribution to the WMT 2021 Similar Languages Translation Shared Task, where we submitted models for different language pairs, including French-Bambara, Spanish-Catalan, and Spanish-Portuguese, in both directions. Our models for Catalan-Spanish (82.79 BLEU) and Portuguese-Spanish (87.11 BLEU) rank first in the official shared task evaluation, and we are the only team to submit models for the French-Bambara pairs.
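The transfer-learning recipe itself is conceptually simple: load a pretrained MT checkpoint and continue training on the similar-language parallel data. The sketch below uses Hugging Face transformers with a placeholder checkpoint name and toy data; the team's actual models, corpora, and hyperparameters are not shown here.

```python
# Hedged sketch: fine-tune a pretrained seq2seq MT model on a
# similar-language pair (Catalan -> Spanish as a stand-in example).
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

checkpoint = "Helsinki-NLP/opus-mt-ca-es"  # placeholder; any MT checkpoint
tok = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)
opt = torch.optim.AdamW(model.parameters(), lr=2e-5)  # illustrative lr

parallel = [("Hola, món!", "¡Hola, mundo!")]  # stand-in parallel data
for src, tgt in parallel:
    batch = tok(src, text_target=tgt, return_tensors="pt")
    loss = model(**batch).loss   # standard cross-entropy on target tokens
    loss.backward()
    opt.step()
    opt.zero_grad()
```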
In this work, we propose a novel framework, Gradient Aligned Mutual Learning BERT (GAML-BERT), for improving the early exiting of BERT. GAML-BERT's contributions are two-fold. First, we conduct a set of pilot experiments showing that mutual knowledge distillation between a shallow exit and a deep exit leads to better performance for both. From this observation, we use mutual learning to improve BERT's early exiting performance, that is, we ask each exit of a multi-exit BERT to distill knowledge from the others. Second, we propose GA, a novel training method that aligns the gradients from the knowledge distillation losses with those from the cross-entropy losses. Extensive experiments conducted on the GLUE benchmark show that GAML-BERT significantly outperforms state-of-the-art (SOTA) BERT early exiting methods.
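A minimal sketch of the two ingredients follows, under my own reading of the abstract: bidirectional distillation between a shallow and a deep exit, plus a projection that removes the component of the distillation gradient that conflicts with the cross-entropy gradient. The PCGrad-style projection shown is an assumption, not necessarily the paper's GA.

```python
# Speculative sketch: mutual distillation between two exits + a
# gradient projection to keep the KD gradient aligned with the CE one.
import torch
import torch.nn.functional as F

def mutual_kd_losses(logits_shallow, logits_deep, labels, T=2.0):
    """Returns (cross-entropy loss, symmetric distillation loss)."""
    ce = F.cross_entropy(logits_shallow, labels) + F.cross_entropy(logits_deep, labels)
    kd = F.kl_div(F.log_softmax(logits_shallow / T, dim=-1),
                  F.softmax(logits_deep / T, dim=-1).detach(),
                  reduction="batchmean") \
       + F.kl_div(F.log_softmax(logits_deep / T, dim=-1),
                  F.softmax(logits_shallow / T, dim=-1).detach(),
                  reduction="batchmean")
    return ce, kd * T * T

def align(g_kd, g_ce):
    """Drop the component of the KD gradient that opposes the CE gradient."""
    dot = torch.dot(g_kd.flatten(), g_ce.flatten())
    if dot < 0:
        g_kd = g_kd - (dot / g_ce.norm().pow(2)) * g_ce
    return g_kd
```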
Class imbalance is a common challenge in many NLP tasks and has clear connections to bias, in that bias in training data often leads to higher accuracy for majority groups at the expense of minority groups. However, there has traditionally been a disconnect between research on class-imbalanced learning and on mitigating bias, and only recently have the two been looked at through a common lens. In this work we evaluate long-tail learning methods for tweet sentiment and occupation classification, and extend a margin-loss based approach with methods to enforce fairness. We empirically show through controlled experiments that the proposed approaches help mitigate both class imbalance and demographic biases.
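For context, one margin-loss approach of the kind the abstract extends is the LDAM-style class-dependent margin, where rarer classes receive larger margins (roughly proportional to n_j^(-1/4)). The sketch below shows only that class-imbalance half; the fairness extension to demographic groups is the paper's contribution and is not reproduced here, and the hyperparameters are illustrative.

```python
# Illustrative class-dependent margin loss (LDAM-style): subtract a
# per-class margin from the true-class logit before cross-entropy.
import torch
import torch.nn.functional as F

class MarginLoss(torch.nn.Module):
    def __init__(self, class_counts, max_margin=0.5, scale=30.0):
        super().__init__()
        m = 1.0 / torch.tensor(class_counts, dtype=torch.float).pow(0.25)
        self.margins = m * (max_margin / m.max())  # rarest class gets max_margin
        self.scale = scale

    def forward(self, logits, target):
        adj = logits.clone()
        # Penalize only the true-class logit of each example by its margin.
        adj[torch.arange(len(target)), target] -= self.margins.to(logits.device)[target]
        return F.cross_entropy(self.scale * adj, target)
```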