
Current practice in software development for computational neuroscience and how to improve it

Publication date: 2012
Language: English





Almost all research work in computational neuroscience involves software. As researchers try to understand ever more complex systems, there is a continual need for software with new capabilities. Because of the wide range of questions being investigated, new software is often developed rapidly by individuals or small groups. In these cases, it can be hard to demonstrate that the software gives the right results. Software developers are often open about the code they produce and willing to share it, but there is little appreciation among potential users of the great diversity of software development practices and end results, and of how this affects the suitability of software tools for use in research projects. To help clarify these issues, we have reviewed a range of software tools and asked how the culture and practice of software development affects their validity and trustworthiness. We identified four key questions that can be used to categorize software projects and correlate them with the type of product that results. The first question addresses what is being produced. The other three concern why, how, and by whom the work is done. The answers to these questions show strong correlations with the nature of the software being produced and its suitability for particular purposes. Based on our findings, we suggest ways in which current software development practice in computational neuroscience can be improved, and we propose checklists to help developers, reviewers, and scientists assess whether particular pieces of software are ready for use in research.
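The four categorization questions and the readiness checklists are described above only in prose. As a loose illustrative sketch (the field names, question paraphrases, and checklist items below are assumptions of ours, not content taken from the paper), such a categorization could be recorded as a small data structure:

```python
# Hypothetical sketch: a project categorized by the four questions (what, why,
# how, who) plus a readiness checklist. All question wording and checklist
# items are illustrative assumptions, not the ones defined in the paper.
from dataclasses import dataclass, field


@dataclass
class SoftwareProject:
    name: str
    what: str   # what is being produced (e.g. one-off script, community tool)
    why: str    # why the work is done (e.g. single study, shared infrastructure)
    how: str    # how it is developed (e.g. ad hoc, version control + tests)
    who: str    # who develops it (e.g. single student, dedicated team)
    checklist: dict[str, bool] = field(default_factory=dict)

    def readiness(self) -> float:
        """Fraction of checklist items satisfied (0.0 if the checklist is empty)."""
        return sum(self.checklist.values()) / len(self.checklist) if self.checklist else 0.0


# Example use with made-up checklist items
project = SoftwareProject(
    name="spiking-net-sim",
    what="reusable simulation library",
    why="intended for use beyond one study",
    how="version controlled, unit tested",
    who="small group with one maintainer",
    checklist={
        "source code publicly available": True,
        "automated tests cover core algorithms": False,
        "results validated against a reference implementation": True,
        "documentation states the intended scope": True,
    },
)
print(f"{project.name}: {project.readiness():.0%} of checklist items satisfied")
```

The point of the sketch is only that the same four answers and a shared checklist can be recorded uniformly across projects, which is what makes comparison between them possible.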



Related research

Within computational neuroscience, informal interactions with modelers often reveal wildly divergent goals. In this opinion piece, we explicitly address the diversity of goals that motivate and ultimately influence modeling efforts. We argue that a wide range of goals can be meaningfully taken to be of highest importance. A simple informal survey conducted on the Internet confirmed the diversity of goals in the community. However, different priorities or preferences of individual researchers can lead to divergent model evaluation criteria. We propose that many disagreements in evaluating the merit of computational research stem from differences in goals and not from the mechanics of constructing, describing, and validating models. We suggest that authors state explicitly their goals when proposing models so that others can judge the quality of the research with respect to its stated goals.
The deep neural nets of modern artificial intelligence (AI) have not achieved defining features of biological intelligence, including abstraction, causal learning, and energy-efficiency. While scaling to larger models has delivered performance improvements for current applications, more brain-like capacities may demand new theories, models, and methods for designing artificial learning systems. Here, we argue that this opportunity to reassess insights from the brain should stimulate cooperation between AI research and theory-driven computational neuroscience (CN). To motivate a brain basis of neural computation, we present a dynamical view of intelligence from which we elaborate concepts of sparsity in network structure, temporal dynamics, and interactive learning. In particular, we suggest that temporal dynamics, as expressed through neural synchrony, nested oscillations, and flexible sequences, provide a rich computational layer for reading and updating hierarchical models distributed in long-term memory networks. Moreover, embracing agent-centered paradigms in AI and CN will accelerate our understanding of the complex dynamics and behaviors that build useful world models. A convergence of AI/CN theories and objectives will reveal dynamical principles of intelligence for brains and engineered learning systems. This article was inspired by our symposium on dynamical neuroscience and machine learning at the 6th Annual US/NIH BRAIN Initiative Investigators Meeting.
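The nested oscillations mentioned above can be illustrated with a toy signal. The sketch below is a generic theta-gamma phase-amplitude coupling example; the frequencies, coupling strength, and thresholds are arbitrary assumptions for illustration, not parameters from the article:

```python
# Toy illustration of nested oscillations: a fast "gamma" rhythm whose amplitude
# is modulated by the phase of a slower "theta" rhythm. All values are arbitrary.
import numpy as np

fs = 1000.0                           # sampling rate (Hz)
t = np.arange(0.0, 2.0, 1.0 / fs)     # 2 seconds of signal

theta_freq, gamma_freq = 6.0, 40.0    # Hz
theta = np.sin(2 * np.pi * theta_freq * t)

# Gamma amplitude follows theta phase: strongest near the theta peak.
coupling = 0.8
gamma_envelope = 1.0 + coupling * theta
gamma = gamma_envelope * np.sin(2 * np.pi * gamma_freq * t)

signal = theta + 0.5 * gamma

# Crude check of phase-amplitude coupling: mean gamma amplitude near theta peaks
# should exceed the mean near theta troughs.
near_peaks = theta > 0.9
near_troughs = theta < -0.9
print("mean |gamma| near theta peaks:  ", np.abs(gamma[near_peaks]).mean())
print("mean |gamma| near theta troughs:", np.abs(gamma[near_troughs]).mean())
```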
This article discusses how to create an interactive virtual training program at the intersection of neuroscience, robotics, and computer science for high school students. A four-day microseminar, titled Swarming Powered by Neuroscience (SPN), was conducted virtually through a combination of presentations and interactive computer game simulations, delivered by subject matter experts in neuroscience, mathematics, multi-agent swarm robotics, and education. The objective of this research was to determine whether taking an interdisciplinary approach to high school education would enhance the students' learning experiences in fields such as neuroscience, robotics, or computer science. This study found an improvement in student engagement with neuroscience of 16.6%, while interest in robotics and computer science improved by 2.7% and 1.8%, respectively. The curriculum materials developed for the SPN microseminar can be used by high school teachers to further evaluate interdisciplinary instruction across the life and physical sciences and computer science.
Context: Software Development Analytics is a research area concerned with providing insights to improve product deliveries and processes. Many types of studies, data sources, and mining methods have been used for that purpose. Objective: This systematic literature review aims to provide an aggregate view of the relevant studies on Software Development Analytics in the past decade (2010-2019), with an emphasis on its application in practical settings. Method: Definition and execution of a search string over several digital libraries, followed by quality assessment criteria to identify the most relevant papers. From those, we extracted a set of characteristics (study type, data source, study perspective, development life-cycle activities covered, stakeholders, mining methods, and analytics scope) and classified their impact against a taxonomy. Results: Source code repositories, experimental case studies, and developers are the most common data sources, study types, and stakeholders, respectively. Product and project managers are also often present, but less than expected. Mining methods are evolving rapidly, and that is reflected in the long list identified. Descriptive statistics are the most usual method, followed by correlation analysis. Since software development is an important process in every organization, it was unexpected to find that process mining was present in only one study. Most contributions to the software development life cycle were in the quality dimension. Time management and cost control were lightly debated. The analysis of security aspects suggests it is a topic of increasing concern for practitioners. Risk management contributions are scarce. Conclusions: There is a wide margin for improvement in software development analytics in practice, for instance, mining and analyzing the activities performed by software developers in their actual workbench, the IDE.
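As a rough illustration of the analyses the review reports as most common (descriptive statistics followed by correlation analysis), the sketch below computes both over a small, invented table of per-file commit and defect counts; the file names and numbers are fabricated placeholders, not data from any surveyed study:

```python
# Minimal sketch of Software Development Analytics on repository data:
# descriptive statistics plus a correlation between commit activity and defects.
# The file names and counts below are fabricated for illustration only.
# Requires Python 3.10+ for statistics.correlation.
import statistics

# (file, number of commits touching it, number of defect reports linked to it)
files = [
    ("core/simulator.py", 120, 14),
    ("io/loader.py",       35,  2),
    ("analysis/stats.py",  60,  5),
    ("gui/main.py",        90, 11),
    ("utils/helpers.py",   15,  1),
]

commits = [c for _, c, _ in files]
defects = [d for _, _, d in files]

# Descriptive statistics over commit activity
print("mean commits per file:  ", statistics.mean(commits))
print("median commits per file:", statistics.median(commits))
print("stdev of commits:       ", round(statistics.stdev(commits), 1))

# Pearson correlation between commit activity and defect counts
r = statistics.correlation(commits, defects)
print("correlation(commits, defects):", round(r, 2))
```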
In recent years, the field of neuroscience has gone through rapid experimental advances and extensive use of quantitative and computational methods. This accelerating growth has created a need for methodological analysis of the role of theory and of the modeling approaches currently used in the field. Toward that end, we start from the general view that the primary role of science is to solve empirical problems, and that it does so by developing theories that can account for phenomena within their domain of application. We propose a commonly used set of terms - descriptive, mechanistic, and normative - as methodological designations that refer to the kind of problem a theory is intended to solve. Further, we find that models of each kind play distinct roles in defining and bridging the multiple levels of abstraction necessary to account for any neuroscientific phenomenon. We then discuss how models play an important role in connecting theory and experiment, and note the importance of well-defined translation functions between them. Furthermore, we describe how models themselves can be used as a form of experiment to test and develop theories. This report is the summary of a discussion initiated at the conference Present and Future Theoretical Frameworks in Neuroscience, which we hope will contribute to a much-needed discussion in the neuroscientific community.