
The brain is a computer is a brain: neuroscience's internal debate and the social significance of the Computational Metaphor

Published by Alexis Baria
Publication date: 2021
Research field: Informatics engineering
Research language: English





The Computational Metaphor, comparing the brain to the computer and vice versa, is the most prominent metaphor in neuroscience and artificial intelligence (AI). Its appropriateness is highly debated in both fields, particularly with regard to whether it is useful for the advancement of science and technology. Considerably less attention, however, has been devoted to how the Computational Metaphor is used outside of the lab, and particularly how it may shape society's interactions with AI. As such, recently publicized concerns over AI's role in perpetuating racism, genderism, and ableism suggest that the term "artificial intelligence" is misplaced, and that a new lexicon is needed to describe these computational systems. Thus, there is an essential question about the Computational Metaphor that is rarely asked by neuroscientists: whom does it help and whom does it harm? This essay invites the neuroscience community to consider the social implications of the field's most controversial metaphor.



Read also

Martin Suda (2021)
Vampire has long been the strongest first-order automatic theorem prover, widely used for hammer-style proof automation in ITPs such as Mizar, Isabelle, HOL, and Coq. In this work, we considerably improve the performance of Vampire in hammering over the full Mizar library by enhancing its saturation procedure with efficient neural guidance. In particular, we employ a recently proposed recursive neural network that classifies generated clauses based only on their derivation history. Compared to previous neural methods that consider the logical content of the clauses, our architecture makes evaluating a single clause much less time consuming. The resulting system shows good learning capability and improves on the state-of-the-art performance on the Mizar library, while proving many theorems that the related ENIGMA system could not prove in a similar hammering evaluation.
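The derivation-history-only evaluation lends itself to a compact illustration. Below is a minimal sketch, not Vampire's actual implementation: the rule names, embedding dimension, and dict-based clause records are all invented for the example. It shows the core idea of a recursive network that assigns each clause a vector computed solely from how the clause was derived, never from its literals.

    import numpy as np

    rng = np.random.default_rng(0)
    DIM = 8  # hypothetical embedding dimension

    # One learned embedding per input-clause origin, one combining matrix
    # per inference rule (all of these would be trained in practice).
    origin_embedding = {"axiom": rng.normal(size=DIM),
                        "conjecture": rng.normal(size=DIM)}
    rule_matrix = {"resolution": rng.normal(size=(DIM, 2 * DIM)),
                   "superposition": rng.normal(size=(DIM, 2 * DIM))}
    score_vector = rng.normal(size=DIM)

    def clause_vector(clause):
        # Input clauses are embedded by origin; derived clauses combine
        # their parents' vectors through the matrix of the deriving rule.
        if clause["parents"] is None:
            return origin_embedding[clause["origin"]]
        left, right = (clause_vector(p) for p in clause["parents"])
        return np.tanh(rule_matrix[clause["rule"]] @ np.concatenate([left, right]))

    def clause_score(clause):
        # Logit used to prioritize the clause in the saturation loop.
        return float(score_vector @ clause_vector(clause))

    a = {"parents": None, "origin": "axiom"}
    c = {"parents": None, "origin": "conjecture"}
    derived = {"parents": (a, c), "rule": "resolution"}
    print(clause_score(derived))

Because derivation DAGs share ancestors, the vectors can be memoized, so scoring a new clause costs roughly one matrix-vector product regardless of clause size; this is one way to read the claim that evaluation is much cheaper than with content-based methods.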
Replay in neural networks involves training on sequential data with memorized samples, which counteracts the forgetting of previous behavior caused by non-stationarity. We present a method in which these auxiliary samples are generated on the fly, given only the model being trained for the assessed objective, without extraneous buffers or generator networks. Instead, the implicit memory of learned samples within the assessed model itself is exploited. Furthermore, whereas existing work focuses on reinforcing the full seen data distribution, we show that optimizing for not forgetting calls for generating samples that are specialized to each real training batch, which is more efficient and scalable. We consider high-level parallels with the brain, notably the use of a single model for inference and recall, the dependency of recalled samples on the current environment batch, top-down modulation of activations and learning, abstract recall, and the dependency between the degree to which a task is learned and the degree to which it is recalled. These characteristics emerge naturally from the method without being controlled for.
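As a rough sketch of on-the-fly recall under stated assumptions: the batch-specialized objective described above is simplified here to generic model inversion, and the model, shapes, and hyperparameters are placeholders. The point is only that replay inputs are synthesized from the classifier being trained, with no buffer or separate generator.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def recall_samples(model, labels, shape, steps=20, lr=0.1):
        # Synthesize inputs that the current model already classifies as
        # `labels`, exploiting its implicit memory of past training data.
        x = torch.randn(len(labels), *shape, requires_grad=True)
        opt = torch.optim.Adam([x], lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            F.cross_entropy(model(x), labels).backward()
            opt.step()
        return x.detach()

    def train_step(model, opt, real_x, real_y, past_classes):
        # One continual-learning step: the real batch plus recalled samples.
        recall_y = past_classes[torch.randint(len(past_classes), (len(real_x),))]
        recall_x = recall_samples(model, recall_y, real_x.shape[1:])
        opt.zero_grad()  # clears gradients accumulated during recall
        x = torch.cat([real_x, recall_x])
        y = torch.cat([real_y, recall_y])
        F.cross_entropy(model(x), y).backward()
        opt.step()

    # Usage with fabricated data: classes 0-4 were seen earlier, 5-9 are current.
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
    opt = torch.optim.SGD(model.parameters(), lr=0.01)
    train_step(model, opt, torch.randn(16, 28, 28),
               torch.randint(5, 10, (16,)), past_classes=torch.arange(5))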
Ken Wharton (2012)
When we want to predict the future, we compute it from what we know about the present. Specifically, we take a mathematical representation of observed reality, plug it into some dynamical equations, and then map the time-evolved result back to real-world predictions. But while this computational process can tell us what we want to know, we have taken this procedure too literally, implicitly assuming that the universe must compute itself in the same manner. Physical theories that do not follow this computational framework are deemed illogical, right from the start. But this anthropocentric assumption has steered our physical models into an impossible corner, primarily because of quantum phenomena. Meanwhile, we have not been exploring other models in which the universe is not so limited. In fact, some of these alternate models already have a well-established importance, but are thought to be mathematical tricks without physical significance. This essay argues that only by dropping our assumption that the universe is a computer can we fully develop such models, explain quantum phenomena, and understand the workings of our universe. (This essay was awarded third prize in the 2012 FQXi essay contest; a new afterword compares and contrasts this essay with Robert Spekkens' first-prize entry.)
The deep neural nets of modern artificial intelligence (AI) have not achieved defining features of biological intelligence, including abstraction, causal learning, and energy-efficiency. While scaling to larger models has delivered performance improvements for current applications, more brain-like capacities may demand new theories, models, and methods for designing artificial learning systems. Here, we argue that this opportunity to reassess insights from the brain should stimulate cooperation between AI research and theory-driven computational neuroscience (CN). To motivate a brain basis of neural computation, we present a dynamical view of intelligence from which we elaborate concepts of sparsity in network structure, temporal dynamics, and interactive learning. In particular, we suggest that temporal dynamics, as expressed through neural synchrony, nested oscillations, and flexible sequences, provide a rich computational layer for reading and updating hierarchical models distributed in long-term memory networks. Moreover, embracing agent-centered paradigms in AI and CN will accelerate our understanding of the complex dynamics and behaviors that build useful world models. A convergence of AI/CN theories and objectives will reveal dynamical principles of intelligence for brains and engineered learning systems. This article was inspired by our symposium on dynamical neuroscience and machine learning at the 6th Annual US/NIH BRAIN Initiative Investigators Meeting.
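The notion of nested oscillations as a computational layer can be made slightly more concrete with a toy signal, not taken from the article: fast gamma bursts whose amplitude is gated by the phase of a slow theta rhythm, so that successive gamma cycles within one theta cycle could index positions in a sequence.

    import numpy as np

    fs = 1000                                   # sampling rate in Hz, arbitrary
    t = np.arange(0, 1.0, 1 / fs)
    theta = np.sin(2 * np.pi * 6 * t)           # slow 6 Hz rhythm
    gate = (1 + theta) / 2                      # envelope follows theta phase
    gamma = gate * np.sin(2 * np.pi * 60 * t)   # 60 Hz bursts nested in theta
    signal = theta + 0.5 * gamma                # phase-amplitude-coupled trace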
The human brain provides a range of functions, such as expressing emotions and controlling the rate of breathing, and its study has attracted the interest of scientists for many years. As machine learning models become more sophisticated, and biometric data becomes more readily available through new non-invasive technologies, it becomes increasingly possible to gain access to interesting biometric data that could revolutionize Human-Computer Interaction. In this research, we propose a method to assess and quantify human attention levels and their effects on learning. In our study, we employ a brain-computer interface (BCI) capable of detecting brain wave activity and displaying the corresponding electroencephalograms (EEG). We train recurrent neural networks (RNNs) to identify the type of activity an individual is performing.
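A minimal sketch of that last step, with all shapes and names invented since the abstract does not specify them: a recurrent network over windows of multi-channel EEG that outputs logits over activity types.

    import torch
    import torch.nn as nn

    class EEGActivityClassifier(nn.Module):
        # GRU over a window of EEG samples, shaped (batch, time, channels).
        def __init__(self, n_channels=8, hidden=64, n_activities=4):
            super().__init__()
            self.rnn = nn.GRU(n_channels, hidden, batch_first=True)
            self.head = nn.Linear(hidden, n_activities)

        def forward(self, x):
            _, h = self.rnn(x)        # h: (num_layers, batch, hidden)
            return self.head(h[-1])   # logits over activity types

    # Usage: 2-second windows at a hypothetical 128 Hz over 8 electrodes.
    model = EEGActivityClassifier()
    logits = model(torch.randn(32, 256, 8))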
