
An analysis of the abstracts presented at the annual meetings of the Society for Neuroscience from 2001 to 2006

Added by John Lin
Publication date: 2007
Fields: Physics, Biology
Language: English





We extracted and processed abstract data from the SFN annual meeting abstracts for the period 2001-2006, using techniques and software from natural language processing, database management, and data visualization and analysis. An important first step was the application of data cleaning and disambiguation methods to construct a unified database, since the data as initially available were too noisy to be of full utility in raw form. The resulting co-author graph for 2006, for example, had 39,645 nodes (with an estimated 6% error rate in our disambiguation of similar author names) and 13,979 abstracts, with an average of 1.5 abstracts per author, 4.3 authors per abstract, and 5.96 collaborators per author (counting all authors on shared abstracts). Recent work in related areas has focused on reputational indices such as highly cited papers or scientists and journal impact factors, and to a lesser extent on creating visual maps of the knowledge space. In contrast, relatively little work has addressed demographics and community structure, the dynamics of the field over time (to identify major research trends), or the structure of the sources of research funding. In this paper we examine each of these areas in order to gain an objective overview of contemporary neuroscience. Notable findings include a high geographical concentration of neuroscience research in the northeastern United States, a surprisingly large transient population (60% of authors appear in only one of the six years studied), the central role played by the study of neurodegenerative disorders in the community structure of neuroscience, and an apparent growth of behavioral/systems neuroscience with a corresponding shrinkage of cellular/molecular neuroscience over the six-year period.
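To make the graph construction concrete, here is a minimal Python sketch (using networkx) of how a co-author graph and the summary statistics quoted above could be computed from disambiguated records. The input data and all names are illustrative; this is not the authors' actual pipeline.

```python
# Sketch of co-author graph statistics, assuming a cleaned list of
# (abstract_id, [authors]) records after name disambiguation.
from itertools import combinations
import networkx as nx

# Hypothetical disambiguated input: one entry per abstract.
abstracts = [
    ("A1", ["Smith J", "Doe A", "Lee K"]),
    ("A2", ["Doe A", "Wong P"]),
    ("A3", ["Smith J", "Doe A"]),
]

G = nx.Graph()
for _, authors in abstracts:
    # Every pair of co-authors on an abstract becomes an edge.
    G.add_edges_from(combinations(set(authors), 2))

# These mirror the statistics quoted in the abstract.
n_authors = G.number_of_nodes()
n_abstracts = len(abstracts)
authorships = sum(len(set(a)) for _, a in abstracts)
abstracts_per_author = authorships / n_authors
authors_per_abstract = authorships / n_abstracts
collaborators_per_author = sum(dict(G.degree()).values()) / n_authors

print(f"{n_authors} authors, {n_abstracts} abstracts")
print(f"{abstracts_per_author:.2f} abstracts/author, "
      f"{authors_per_abstract:.2f} authors/abstract, "
      f"{collaborators_per_author:.2f} collaborators/author")
```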




Symbolic methods of analysis are valuable tools for investigating complex time-dependent signals. In particular, the ordinal method defines sequences of symbols according to the order in which values appear in a time series. This method has been shown to yield useful information even when applied to signals with large noise contamination. Here we use ordinal analysis to investigate the transition between eyes-closed (EC) and eyes-open (EO) resting states. We analyze two EEG datasets (with 71 and 109 healthy subjects) with different recording conditions (sampling rates and numbers of scalp electrodes). Using as diagnostic tools the permutation entropy, the entropy computed from symbolic transition probabilities, and an asymmetry coefficient (which measures the asymmetry of the likelihood of transitions between symbols), we show that ordinal analysis applied to the raw data distinguishes the two brain states. In both datasets, we find that the EO state is characterized by higher entropies and a lower asymmetry coefficient than the EC state. Our results thus show that these diagnostic tools have the potential to detect and characterize changes in time-evolving brain states.
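As an illustration of the ordinal method, the following minimal Python sketch computes the normalized permutation entropy of a signal from its ordinal patterns. The pattern length d = 3 and the synthetic signals are illustrative choices, not those of the study.

```python
# Permutation entropy: each window of d samples is mapped to the ordinal
# pattern given by the ranks of its values, and the Shannon entropy of the
# pattern distribution is computed.
import numpy as np
from math import factorial

def permutation_entropy(x, d=3):
    """Normalized permutation entropy of time series x with order d."""
    n = len(x) - d + 1
    patterns = [tuple(np.argsort(x[i:i + d])) for i in range(n)]
    _, counts = np.unique(patterns, axis=0, return_counts=True)
    p = counts / n
    H = -np.sum(p * np.log(p))
    return H / np.log(factorial(d))  # normalize to [0, 1]

rng = np.random.default_rng(0)
noisy = rng.normal(size=5000)                # irregular signal -> entropy near 1
regular = np.sin(np.linspace(0, 100, 5000))  # regular signal -> lower entropy
print(permutation_entropy(noisy), permutation_entropy(regular))
```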
223 - P. A. Ritto 2011
Four sets whose elements are interbeat time series have been obtained from the PhysioNet MIT-BIH databank, corresponding to the following human heart conditions: obstructive sleep apnea, congestive heart failure, and atrial fibrillation. Those time series have been analyzed statistically using a known technique based on the wavelet and Hilbert transforms. The technique has been applied to the interbeat time series of 87 patients in order to characterize the dynamics of the heart. The length of the time series varies from about 7 to 24 h, and the wavelets considered were of the Daubechies, biorthogonal, and Gaussian families. The analysis has been carried out over the complete set of scales ranging from 1 to 128 heartbeats. Choosing the biorthogonal wavelet bior3.1, it is observed (a) that the time series need not be cut into shorter periods in order to obtain a collapse of the data, and (b) an analytical, universal behavior of the data for the first and second diseases, but not for the third.
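The following Python sketch illustrates the general wavelet-plus-Hilbert technique on a synthetic interbeat series, using PyWavelets' bior3.1 wavelet and SciPy's analytic signal. It shows the idea only, under assumed parameters, and is not the paper's exact procedure.

```python
# Multilevel wavelet decomposition of an RR-interval series, followed by the
# instantaneous (Hilbert) amplitude of each detail band.
import numpy as np
import pywt
from scipy.signal import hilbert

rng = np.random.default_rng(1)
rr = 0.8 + 0.05 * rng.standard_normal(2**13)  # synthetic RR-interval series (s)

# Discrete wavelet decomposition with bior3.1; level 7 covers dyadic scales
# up to 2**7 = 128 heartbeats, matching the 1-128 heartbeat range above.
coeffs = pywt.wavedec(rr, "bior3.1", level=7)
detail_bands = coeffs[1:]  # cD7 (coarsest) ... cD1 (finest)

for j, detail in zip(range(7, 0, -1), detail_bands):
    amplitude = np.abs(hilbert(detail))  # envelope via the analytic signal
    print(f"D{j}: mean Hilbert amplitude {amplitude.mean():.4f}")
```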
82 - R. A. Ewings , A. Buts , M. D. Le 2016
The HORACE suite of programs has been developed to work with large multiple-measurement data sets collected from time-of-flight neutron spectrometers equipped with arrays of position-sensitive detectors. The software allows exploratory studies of the four dimensions of reciprocal space and excitation energy to be undertaken, enabling multi-dimensional subsets to be visualized, algebraically manipulated, and models for the scattering to be simulated or fitted to the data. The software is designed as an extensible framework, allowing user-customized operations to be performed on the data. Examples of its features are given for measurements exploring the spin waves of the simple antiferromagnet RbMnF$_{3}$ and of ferromagnetic iron, and the phonons in URu$_{2}$Si$_{2}$.
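For readers unfamiliar with such data, the following numpy sketch illustrates the core "cut" idea: reducing a 4-D S(h, k, l, E) dataset to a 2-D slice by averaging over two axes. This mimics the concept only; it is not the HORACE API, and the grids and ranges are invented for illustration.

```python
# Cutting a 4-D intensity array S(h, k, l, E) down to a 2-D (h, E) slice.
import numpy as np

rng = np.random.default_rng(2)
h = k = l = np.linspace(-2, 2, 41)
E = np.linspace(0, 100, 51)
S = rng.random((41, 41, 41, 51))  # stand-in for measured intensity

# 2-D cut: intensity vs (h, E), averaging k over [-0.1, 0.1] and all of l.
k_mask = (k >= -0.1) & (k <= 0.1)
cut = S[:, k_mask, :, :].mean(axis=(1, 2))  # -> shape (len(h), len(E))
print(cut.shape)
```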
We consider the evolution of a network of neurons, focusing on the asymptotic behavior of spike dynamics rather than membrane-potential dynamics. In this context the spike response is not sought as a deterministic response but as a conditional probability: reading out the code consists of inferring such a probability. This probability is computed from empirical raster plots using the framework of thermodynamic formalism in ergodic theory. This yields a parametric statistical model in which the probability takes the form of a Gibbs distribution. In this respect, the approach generalizes the seminal and profound work of Schneidman and collaborators. A minimal presentation of the formalism is reviewed here, and a general algorithmic estimation method is proposed that yields fast, convergent implementations. It is also made explicit how several spike observables (entropy, rate, synchronizations, correlations) are obtained in closed form from the parametric estimation. This paradigm not only allows us to estimate the spike statistics, given a design choice, but also to compare different models, thus answering comparative questions about the neural code, such as: are correlations (or temporal synchrony, or a given set of spike patterns) significant with respect to rate coding alone? A numerical validation of the method is proposed, and perspectives regarding spike-train code analysis are discussed.
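A minimal Python sketch of the empirical side of such a comparison is shown below: it estimates spike-word probabilities from a binary raster and compares the empirical entropy with that of an independent, rate-only model; the entropy gap (multi-information) quantifies the correlation content. This is a stand-in for the full Gibbs parametric estimation, and the raster is synthetic.

```python
# Rate coding vs. correlations: compare the entropy of the empirical
# spike-word distribution with that of an independent (rate-only) model.
import numpy as np

rng = np.random.default_rng(3)
N, T = 5, 20000
raster = (rng.random((N, T)) < 0.2).astype(int)  # synthetic raster, N neurons

# Empirical distribution of instantaneous spike words (raster columns).
words, counts = np.unique(raster.T, axis=0, return_counts=True)
p_emp = counts / T
H_emp = -np.sum(p_emp * np.log2(p_emp))

# Independent-model entropy: sum of single-neuron binary entropies.
rates = raster.mean(axis=1)
H_ind = -np.sum(rates * np.log2(rates) + (1 - rates) * np.log2(1 - rates))

print(f"firing rates: {rates.round(3)}")
print(f"empirical entropy {H_emp:.3f} bits, independent model {H_ind:.3f} bits")
# Multi-information ~0 here, since this synthetic raster is independent.
print(f"multi-information (correlation content): {H_ind - H_emp:.4f} bits")
```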
When the data do not conform to the hypothesis of a known sampling variance, fitting a constant to a set of measured values is a long-debated problem. Given the data, fitting requires finding which measurand value is the most trustworthy. Bayesian inference is reviewed here as a means of assigning probabilities to the possible measurand values. Different hypotheses about the data variance are tested by Bayesian model comparison. Finally, model selection is exemplified by deriving an estimate of the Planck constant.
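The following Python sketch illustrates such a Bayesian model comparison on synthetic data: model M1 takes the quoted sampling variance at face value, while M2 marginalizes an unknown scale factor on it, and the evidence of each is computed by simple grid integration. The priors, grids, and data values are illustrative assumptions, not those of the paper.

```python
# Bayesian model comparison for fitting a constant: known vs. unknown variance.
import numpy as np
from scipy.stats import norm

y = np.array([6.62606, 6.62608, 6.62612, 6.62605, 6.62615])  # synthetic data
sigma_quoted = 0.00003                                        # quoted std. dev.

mu = np.linspace(y.min() - 1e-4, y.max() + 1e-4, 400)  # grid over the measurand
dmu = mu[1] - mu[0]
prior_mu = np.full_like(mu, 1 / (mu[-1] - mu[0]))      # flat prior on mu

# M1 evidence: integrate the joint likelihood over mu with the quoted variance.
like1 = np.array([norm.pdf(y, m, sigma_quoted).prod() for m in mu])
Z1 = (like1 * prior_mu).sum() * dmu

# M2 evidence: also marginalize an unknown scale factor s on the std. dev.
s = np.linspace(0.5, 5.0, 200)
ds = s[1] - s[0]
prior_s = np.full_like(s, 1 / (s[-1] - s[0]))
like2 = np.array([[norm.pdf(y, m, si * sigma_quoted).prod() for m in mu]
                  for si in s])
Z2 = ((like2 * prior_mu).sum(axis=1) * dmu * prior_s).sum() * ds

print(f"Bayes factor Z2/Z1 = {Z2 / Z1:.2f}")  # >1 favors the unknown-variance model
post1 = like1 * prior_mu / Z1                  # posterior over mu under M1
print(f"M1 posterior mean: {(mu * post1).sum() * dmu:.6f}")
```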
