An interdisciplinary approach to high school curriculum development: Swarming Powered by Neuroscience

Added by Grace Hwang
Publication date: 2021
Fields: Biology
Language: English





This article discusses how to create an interactive virtual training program at the intersection of neuroscience, robotics, and computer science for high school students. A four-day microseminar, titled Swarming Powered by Neuroscience (SPN), was conducted virtually through a combination of presentations and interactive computer game simulations, delivered by subject matter experts in neuroscience, mathematics, multi-agent swarm robotics, and education. The objective of this research was to determine whether an interdisciplinary approach to high school education would enhance students' learning experiences in fields such as neuroscience, robotics, and computer science. The study found that student engagement with neuroscience improved by 16.6%, while interest in robotics and computer science improved by 2.7% and 1.8%, respectively. The curriculum materials developed for the SPN microseminar can be used by high school teachers to further evaluate interdisciplinary instruction across the life and physical sciences and computer science.
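The article itself does not publish the simulation code used in the microseminar, but the flavor of a multi-agent swarm exercise is easy to convey. Below is a minimal Boids-style sketch in Python; the agent count, interaction radius, and rule weights are illustrative assumptions, not the SPN curriculum's actual implementation.

```python
# Minimal Boids-style swarm sketch (illustrative only; not the SPN code).
import numpy as np

N_AGENTS, DIM = 30, 2
RADIUS = 0.5       # neighborhood radius for local interactions (assumed)
MAX_SPEED = 0.05   # speed cap per step (assumed)

rng = np.random.default_rng(0)
pos = rng.uniform(0, 1, (N_AGENTS, DIM))            # positions in a unit box
vel = rng.uniform(-1, 1, (N_AGENTS, DIM)) * MAX_SPEED

def step(pos, vel):
    """One update of the classic alignment/cohesion/separation rules."""
    new_vel = vel.copy()
    for i in range(N_AGENTS):
        d = np.linalg.norm(pos - pos[i], axis=1)
        nbrs = (d < RADIUS) & (d > 0)               # neighbors, excluding self
        if not nbrs.any():
            continue
        align = vel[nbrs].mean(axis=0) - vel[i]     # match neighbors' heading
        cohere = pos[nbrs].mean(axis=0) - pos[i]    # drift toward local center
        separate = (pos[i] - pos[nbrs]).sum(axis=0) # avoid crowding
        new_vel[i] += 0.05 * align + 0.01 * cohere + 0.02 * separate
        speed = np.linalg.norm(new_vel[i])
        if speed > MAX_SPEED:
            new_vel[i] *= MAX_SPEED / speed
    return (pos + new_vel) % 1.0, new_vel           # wrap around the unit box

for _ in range(100):
    pos, vel = step(pos, vel)
```

The three weighted local rules are the standard ingredients from which global flocking emerges, which is the core phenomenon any neuroscience-flavored swarm curriculum builds on.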



Related research

Almost all research work in computational neuroscience involves software. As researchers try to understand ever more complex systems, there is a continual need for software with new capabilities. Because of the wide range of questions being investigated, new software is often developed rapidly by individuals or small groups. In these cases, it can be hard to demonstrate that the software gives the right results. Software developers are often open about the code they produce and willing to share it, but there is little appreciation among potential users of the great diversity of software development practices and end results, and how this affects the suitability of software tools for use in research projects. To help clarify these issues, we have reviewed a range of software tools and asked how the culture and practice of software development affect their validity and trustworthiness. We identified four key questions that can be used to categorize software projects and correlate them with the type of product that results. The first question addresses what is being produced; the other three concern why, how, and by whom the work is done. The answers to these questions show strong correlations with the nature of the software being produced and its suitability for particular purposes. Based on our findings, we suggest ways in which current software development practice in computational neuroscience can be improved, and we propose checklists to help developers, reviewers, and scientists assess whether particular pieces of software are ready for use in research.
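As one concrete (and entirely hypothetical) instance of "demonstrating that the software gives the right results", a small numerical routine can be validated against a closed-form solution. The function names and the tolerance below are assumptions chosen for illustration, not anything proposed in the review.

```python
# Hypothetical check of the kind the proposed checklists call for: compare a
# simple Euler integrator for dV/dt = -V/tau against the analytic solution.
import numpy as np

def simulate_decay(v0, tau, dt, n_steps):
    """Forward-Euler integration of exponential membrane decay."""
    v = np.empty(n_steps + 1)
    v[0] = v0
    for t in range(n_steps):
        v[t + 1] = v[t] - dt * v[t] / tau
    return v

def test_decay_matches_analytic():
    v0, tau, dt, n = 1.0, 20.0, 0.01, 5000
    sim = simulate_decay(v0, tau, dt, n)
    t = np.arange(n + 1) * dt
    exact = v0 * np.exp(-t / tau)
    # Euler error shrinks with dt; this tolerance suits dt = 0.01, tau = 20.
    assert np.max(np.abs(sim - exact)) < 1e-3

test_decay_matches_analytic()
```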
Imaging methods used in modern neuroscience experiments are quickly producing large amounts of data capable of providing increasing amounts of knowledge about neuroanatomy and function. A great deal of information in these datasets is relatively unexplored and untapped. One of the bottlenecks in knowledge extraction is that often there is no feedback loop between the knowledge produced (e.g., graph, density estimate, or other statistic) and the earlier stages of the pipeline (e.g., acquisition). We thus advocate for the development of sample-to-knowledge discovery pipelines that one can use to optimize acquisition and processing steps with a particular end goal (i.e., piece of knowledge) in mind. We therefore propose that optimization takes place not just within each processing stage but also between adjacent (and non-adjacent) steps of the pipeline. Furthermore, we explore the existing categories of knowledge representation and models to motivate the types of experiments and analysis needed to achieve the ultimate goal. To illustrate this approach, we provide an experimental paradigm to answer questions about large-scale synaptic distributions through a multimodal approach combining X-ray microtomography and electron microscopy.
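A loose sketch of the advocated feedback loop, with every name and number invented for illustration: an upstream acquisition parameter is chosen by scoring the final knowledge product rather than by optimizing the acquisition stage in isolation. In a real pipeline the ground truth is unknown, so a proxy or validation metric would stand in for the error term used here.

```python
# Toy "sample-to-knowledge" loop: pick an acquisition setting by its effect on
# the end result, not on intermediate image quality. All values are invented.
import numpy as np

rng = np.random.default_rng(1)
TRUE_DENSITY = 0.3  # stand-in for the quantity of interest (toy ground truth)

def acquire(resolution):
    """Simulated acquisition: finer resolution means less noise, more cost."""
    noise = 0.2 / resolution
    return TRUE_DENSITY + rng.normal(0, noise, size=50)

def extract_knowledge(samples):
    """Downstream estimate (a mean here; a graph or statistic in practice)."""
    return samples.mean()

def end_to_end_score(resolution, cost_per_unit=0.02):
    """Reward accuracy of the final estimate, penalize acquisition cost."""
    est = extract_knowledge(acquire(resolution))
    return -abs(est - TRUE_DENSITY) - cost_per_unit * resolution

resolutions = [1, 2, 4, 8, 16]
best = max(resolutions, key=end_to_end_score)
print("resolution chosen by end-to-end feedback:", best)
```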
Readability is on the cusp of a revolution. Fixed text is becoming fluid as a proliferation of digital reading devices rewrites what a document can do. As past constraints make way for more flexible opportunities, there is a great need to understand how reading formats can be tuned to the situation and the individual. We aim to provide a firm foundation for readability research: a comprehensive framework for modern, multi-disciplinary study of the field. Readability refers to aspects of visual information design that impact information flow from the page to the reader. Readability can be enhanced by changes to the typographical characteristics of a text. These aspects can be modified on demand, instantly improving the ease with which a reader can process and derive meaning from text. We call on a multi-disciplinary research community to take up these challenges to elevate reading outcomes and provide the tools to do so effectively.
Laurent Perrinet (2009)
Although modern computers are sometimes superior to humans in specialized tasks such as playing chess or browsing a large database, they can't beat the efficiency of biological vision at tasks as simple as recognizing and following an object against a complex, cluttered background. In this paper we present our attempt at outlining the dynamical, parallel, and event-based representation for vision in the architecture of the central nervous system. We illustrate this on static natural images by showing that, in a signal-matching framework, an L/NL (linear/non-linear) cascade may efficiently transform a sensory signal into a neural spiking signal, and we apply this framework to a model retina. However, this code becomes redundant when using an over-complete basis, as is necessary for modeling the primary visual cortex; we therefore optimize the efficiency cost by increasing the sparseness of the code. This is implemented by propagating and canceling redundant information using lateral interactions. We compare the efficiency of this representation in terms of compression, measuring reconstruction quality as a function of coding length. This corresponds to a modification of the Matching Pursuit algorithm in which the ArgMax function is optimized for competition: Competition Optimized Matching Pursuit (COMP). We focus in particular on bridging neuroscience and image processing and on the advantages of such an interdisciplinary approach.
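For reference, plain Matching Pursuit, the baseline the paper modifies, is short enough to sketch in Python. The comment marks the ArgMax step that COMP replaces with a competition-optimized selection; COMP itself is not implemented here.

```python
# Greedy Matching Pursuit over an over-complete dictionary (baseline only).
import numpy as np

def matching_pursuit(signal, dictionary, n_iter=10):
    """Sparse-code `signal`; dictionary columns are assumed unit-norm."""
    residual = signal.astype(float).copy()
    coeffs = np.zeros(dictionary.shape[1])
    for _ in range(n_iter):
        corr = dictionary.T @ residual      # correlate residual with all atoms
        k = np.argmax(np.abs(corr))         # <- the step COMP optimizes
        coeffs[k] += corr[k]                # accumulate the atom's coefficient
        residual -= corr[k] * dictionary[:, k]
    return coeffs, residual

# Toy usage: 64-d signal, 256 random unit-norm atoms (4x over-complete).
rng = np.random.default_rng(0)
D = rng.normal(size=(64, 256))
D /= np.linalg.norm(D, axis=0)
x = rng.normal(size=64)
coeffs, res = matching_pursuit(x, D, n_iter=20)
print(np.count_nonzero(coeffs), "atoms used; residual norm", np.linalg.norm(res))
```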
Individual neurons often produce highly variable responses over nominally identical trials, reflecting a mixture of intrinsic noise and systematic changes in the animal's cognitive and behavioral state. In addition to investigating how noise and state changes impact neural computation, statistical models of trial-to-trial variability are becoming increasingly important as experimentalists aspire to study naturalistic animal behaviors, which never repeat themselves exactly and may rarely do so even approximately. Estimating the basic features of neural response distributions may seem impossible in this trial-limited regime. Fortunately, by identifying and leveraging simplifying structure in neural data (e.g., shared gain modulations across neural subpopulations, temporal smoothness in neural firing rates, and correlations in responses across behavioral conditions), statistical estimation often remains tractable in practice. We review recent advances in statistical neuroscience that illustrate this trend and have enabled novel insights into the trial-by-trial operation of neural circuits.
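As a toy illustration of one kind of structure the review highlights, temporal smoothness, simply smoothing trial-averaged spike counts with a Gaussian kernel already improves a firing-rate estimate built from a handful of trials. All numbers below are invented for the example.

```python
# Few-trial rate estimation: a smoothness prior (Gaussian kernel) beats raw
# trial-averaged counts when the underlying rate varies slowly. Toy example.
import numpy as np

rng = np.random.default_rng(2)
T, n_trials, dt = 200, 5, 0.01                  # 2 s in 10 ms bins, 5 trials
t = np.arange(T) * dt
true_rate = 20 + 15 * np.sin(2 * np.pi * t)     # Hz; slowly varying by design
counts = rng.poisson(true_rate * dt, size=(n_trials, T))

raw = counts.mean(axis=0) / dt                  # noisy trial-averaged estimate

width = 5                                       # kernel std in bins (50 ms)
k = np.exp(-0.5 * (np.arange(-3 * width, 3 * width + 1) / width) ** 2)
k /= k.sum()
smoothed = np.convolve(raw, k, mode="same")     # encode the smoothness prior

print("RMSE raw:     ", np.sqrt(np.mean((raw - true_rate) ** 2)))
print("RMSE smoothed:", np.sqrt(np.mean((smoothed - true_rate) ** 2)))
```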
