Curiosity is a vital metacognitive skill in educational contexts, yet little is known about how social factors influence curiosity in group work. We argue that curiosity is evoked not only through individual but also through interpersonal activities, and we present what we believe to be the first theoretical framework articulating an integrated socio-cognitive account of curiosity. The framework is grounded in literature spanning psychology, the learning sciences, and group dynamics, along with empirical observation of small-group science activity in an informal learning environment. We make a bipartite distinction between individual and interpersonal functions that contribute to curiosity and the multimodal behaviors that fulfill these functions. We validate the proposed framework using a longitudinal latent variable modeling approach. Findings confirm a positive predictive relationship between the latent variables of individual and interpersonal functions and curiosity, with the interpersonal functions exerting a comparatively stronger influence. Prominent behavioral realizations of these functions are also discovered in a data-driven way. This framework is a step toward designing learning technologies that can recognize and evoke curiosity during learning in social contexts.