
Regularized brain reading with shrinkage and smoothing

Added by Leila Wehbe
Publication date: 2014
Language: English

Functional neuroimaging measures how the brain responds to complex stimuli. However, sample sizes are modest, noise is substantial, and stimuli are high dimensional. Hence, direct estimates are inherently imprecise and call for regularization. We compare a suite of approaches which regularize via shrinkage: ridge regression, the elastic net (a generalization of ridge regression and the lasso), and a hierarchical Bayesian model based on small area estimation (SAE). We contrast regularization with spatial smoothing and with combinations of smoothing and shrinkage. All methods are tested on functional magnetic resonance imaging (fMRI) data from multiple subjects participating in two different experiments related to reading, both for predicting neural responses to stimuli and for decoding stimuli from responses. Interestingly, when the regularization parameters are chosen by cross-validation independently for every voxel, low regularization is chosen in voxels where classification accuracy is high and high regularization where accuracy is low, indicating that the regularization intensity is a useful tool for identifying voxels relevant to the cognitive task. Surprisingly, all the regularization methods work about equally well, suggesting that beating basic smoothing and shrinkage will take not only clever methods, but also careful modeling.
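
To make the per-voxel cross-validated shrinkage idea concrete, below is a minimal sketch (not the authors' code) using ridge regression on synthetic data. All array sizes, the alpha grid, and the in-sample scoring are illustrative assumptions; scikit-learn's ElasticNetCV could be substituted to mirror the elastic net comparison.

```python
# Minimal sketch (not the paper's code): per-voxel ridge regression with the
# shrinkage strength chosen independently for each voxel by cross-validation.
import numpy as np
from sklearn.linear_model import RidgeCV

rng = np.random.default_rng(0)
n_trials, n_features, n_voxels = 200, 50, 30            # placeholder sizes

X = rng.standard_normal((n_trials, n_features))          # stimulus features
W = rng.standard_normal((n_features, n_voxels)) * 0.5    # hypothetical true weights
Y = X @ W + rng.standard_normal((n_trials, n_voxels)) * 3.0   # noisy voxel responses

alphas = np.logspace(-2, 4, 13)        # grid of shrinkage strengths
chosen_alpha = np.empty(n_voxels)
fit_score = np.empty(n_voxels)

for v in range(n_voxels):
    model = RidgeCV(alphas=alphas)     # leave-one-out CV over alphas by default
    model.fit(X, Y[:, v])
    chosen_alpha[v] = model.alpha_
    fit_score[v] = model.score(X, Y[:, v])   # in-sample R^2, for illustration only

# The abstract's observation, in miniature: voxels that are fit well tend to be
# assigned weaker shrinkage, so the chosen alpha itself flags relevant voxels.
print(np.corrcoef(np.log10(chosen_alpha), fit_score)[0, 1])
```

A held-out split or the leave-one-out predictions would make this comparison fairer; in-sample R^2 is used above only to keep the sketch short.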

Read More

Ziyi Ye, Xiaohui Xie, Yiqun Liu (2021)
Reading comprehension is a complex cognitive process involving many human brain activities. Many works have studied the reading patterns and attention allocation mechanisms involved in reading. However, little is known about what happens in the human brain during reading comprehension and how this information can be utilized as implicit feedback to facilitate information acquisition. With advances in brain imaging techniques such as EEG, it is possible to collect high-precision brain signals in almost real time. Using these neuroimaging techniques, we carefully design a lab-based user study to investigate brain activities during reading comprehension. Our findings show that neural responses vary with different types of content, i.e., content that can satisfy users' information needs and content that cannot. We suggest that various cognitive activities, e.g., cognitive loading, semantic-thematic understanding, and inferential processing, at the micro-time scale during reading comprehension underpin these neural responses. Inspired by these detectable differences in cognitive activities, we construct supervised learning models based on EEG features for two reading comprehension tasks: answer sentence classification and answer extraction. Results show that it is feasible to improve performance on both tasks with brain signals. These findings imply that brain signals are valuable feedback for enhancing human-computer interaction during reading comprehension.
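
As a hedged illustration of the kind of supervised model the abstract above describes, the sketch below classifies answer sentences from band-power EEG features; the recording parameters, frequency bands, and classifier are assumptions, not the study's actual pipeline.

```python
# Illustrative sketch: band-power EEG features feeding a supervised classifier
# for answer-sentence classification (all settings are placeholder assumptions).
import numpy as np
from scipy.signal import welch
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(4)
n_epochs, n_channels, n_samples, fs = 200, 32, 500, 250   # placeholder recording setup
epochs = rng.standard_normal((n_epochs, n_channels, n_samples))
labels = rng.integers(0, 2, n_epochs)      # 1 = sentence contains the answer

bands = [(4, 8), (8, 13), (13, 30)]        # theta, alpha, beta (assumed)

def band_power_features(epoch):
    """Mean spectral power per channel and band, concatenated into one vector."""
    freqs, psd = welch(epoch, fs=fs, nperseg=fs)
    return np.concatenate([psd[:, (freqs >= lo) & (freqs < hi)].mean(axis=1)
                           for lo, hi in bands])

X = np.array([band_power_features(e) for e in epochs])
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
print("CV accuracy:", cross_val_score(clf, X, labels, cv=5).mean())
```
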
Rescaled spike and slab models are a new Bayesian variable selection method for linear regression models. In high dimensional orthogonal settings such models have been shown to possess optimal model selection properties. We review background theory and discuss applications of rescaled spike and slab models to prediction problems involving orthogonal polynomials. We first consider global smoothing and discuss potential weaknesses. Some of these deficiencies are remedied by using local regression. The local regression approach relies on an intimate connection between local weighted regression and weighted generalized ridge regression. An important implication is that one can trace the effective degrees of freedom of a curve as a way to visualize and classify curvature. Several motivating examples are presented.
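
The connection mentioned above between local weighted regression and weighted generalized ridge regression can be made concrete with a short sketch: each local fit is a ridge-like weighted least-squares solve, and the trace of the resulting smoother matrix gives the effective degrees of freedom of the curve. The bandwidth, basis degree, and toy data below are illustrative assumptions.

```python
# Sketch of the local-regression / weighted-ridge connection: build the smoother
# matrix row by row and read off effective degrees of freedom from its trace.
import numpy as np

rng = np.random.default_rng(1)
n = 120
x = np.sort(rng.uniform(0, 1, n))
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.3, size=n)

degree, bandwidth, lam = 3, 0.15, 1e-6             # assumed settings
B = np.vander(x, degree + 1, increasing=True)      # polynomial basis

def smoother_row(x0):
    """One row of the smoother matrix from a weighted generalized-ridge solve at x0."""
    w = np.exp(-0.5 * ((x - x0) / bandwidth) ** 2)               # Gaussian kernel weights
    A = np.linalg.solve(B.T @ (w[:, None] * B) + lam * np.eye(degree + 1),
                        B.T * w)                                  # ridge-like normal equations
    return np.vander([x0], degree + 1, increasing=True) @ A

S = np.vstack([smoother_row(x0) for x0 in x])      # n x n smoother matrix, y_hat = S @ y
y_hat = S @ y
edf = np.trace(S)                                  # effective degrees of freedom of the curve
print(f"effective degrees of freedom ~ {edf:.1f}")
```
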
Conversational machine reading (CMR) requires machines to communicate with humans through multi-turn interactions between two salient dialogue states: decision making and question generation. In the open CMR setting, the more realistic scenario, the retrieved background knowledge is noisy, which poses severe challenges for information transmission. Existing studies commonly train independent or pipelined systems for the two subtasks, but they connect them only through hard-label decisions that activate question generation, which ultimately hinders model performance. In this work, we propose an effective gating strategy that smooths the two dialogue states in a single decoder and bridges decision making and question generation to provide a richer dialogue state reference. Experiments on the OR-ShARC dataset show the effectiveness of our method, which achieves new state-of-the-art results.
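
As a generic illustration of a soft gate that mixes two dialogue-state representations inside one decoder (not the architecture of the cited paper), a minimal module might look like the sketch below; the module name and hidden size are placeholders.

```python
# Generic soft-gating illustration: blend a decision-making state and a
# question-generation state with a learned gate (placeholder architecture).
import torch
import torch.nn as nn

class SoftStateGate(nn.Module):
    def __init__(self, hidden_size: int):
        super().__init__()
        self.gate = nn.Linear(2 * hidden_size, 1)

    def forward(self, decision_state, question_state):
        # g in (0, 1): how much weight the decoder places on decision making.
        g = torch.sigmoid(self.gate(torch.cat([decision_state, question_state], dim=-1)))
        return g * decision_state + (1 - g) * question_state

# Toy usage with a batch of 4 hidden states of size 256 (assumed dimensions).
mix = SoftStateGate(256)
fused = mix(torch.randn(4, 256), torch.randn(4, 256))
print(fused.shape)   # torch.Size([4, 256])
```
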
Motivated by the increasing interest in relating brain images to a clinical outcome, we propose a functional domain selection (FuDoS) method that effectively selects subregions of the brain associated with the outcome. Viewing each individual's brain as a 3D functional object, the statistical aim is to distinguish regions where the regression coefficient $\beta(t) = 0$ from those where $\beta(t) \neq 0$, where $t$ denotes spatial location. FuDoS is composed of two stages of estimation. We first segment the brain into several small parts based on the correlation structure. Then, potential subsets are built from the obtained segments and their predictive performance is evaluated to select the best subset, augmented by a stability selection criterion. We conduct extensive simulations for both 1D and 3D functional data and evaluate the method's effectiveness in selecting the true subregion. We also investigate the predictive ability of the selected stable regions. To find the brain regions related to cognitive ability, FuDoS is applied to the ADNI PET data. Due to the induced sparseness, the results naturally provide more interpretable information about the relations between the regions and the outcome. Moreover, the regions selected by our analysis show strong associations with the anatomical brain areas known to have memory-related functions.
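
A schematic sketch of the two-stage idea summarized above, correlation-based segmentation followed by predictive evaluation of segment subsets, is given below for 1D functional data. The clustering method, the greedy subset search, and the ridge scoring model are assumptions and not the FuDoS implementation.

```python
# Schematic two-stage sketch: (1) segment the domain by correlation structure,
# (2) greedily score segment subsets by cross-validated predictive performance.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_subj, n_points = 150, 60                  # placeholder sizes (1D functional data)
curves = rng.standard_normal((n_subj, n_points)).cumsum(axis=1)   # toy curves
beta = np.zeros(n_points)
beta[20:30] = 0.8                           # true active region
y = curves @ beta + rng.normal(scale=1.0, size=n_subj)

# Stage 1: segment locations t according to their correlation structure.
corr = np.corrcoef(curves.T)
dist = 1.0 - np.abs(corr)
Z = linkage(dist[np.triu_indices(n_points, 1)], method="average")
labels = fcluster(Z, t=8, criterion="maxclust")    # 8 segments (assumed)

# Stage 2: greedily add the segment that most improves cross-validated R^2.
selected, best = [], -np.inf
for _ in range(8):
    scores = {}
    for seg in set(labels) - set(selected):
        cols = np.isin(labels, selected + [seg])
        scores[seg] = cross_val_score(RidgeCV(), curves[:, cols], y, cv=5).mean()
    seg, score = max(scores.items(), key=lambda kv: kv[1])
    if score <= best:
        break
    selected.append(seg)
    best = score

print("selected segments:", sorted(selected), "CV R^2:", round(best, 3))
```
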
Recently, in the context of covariance matrix estimation, a shrinkage fixed-point estimator was introduced in [3] in order to improve and regularize the performance of Tyler's estimator [1], also called the Fixed-Point Estimator (FPE) [2]. First, this work extends the results of [3,4] by giving the general solution of the shrinkage fixed-point algorithm. Secondly, by analyzing this solution, called the generalized robust shrinkage estimator, we prove that it converges to a unique solution when the shrinkage parameter $\beta$ (losing factor) tends to 0. This limit is exactly the FPE with the trace of its inverse equal to the dimension of the problem. This general result allows one to give another interpretation of the FPE and, more generally, of the Maximum Likelihood approach for covariance matrix estimation when constraints are added. Then, simulations illustrate our theoretical results as well as how to choose an optimal shrinkage factor. Finally, this work is applied to a Space-Time Adaptive Processing (STAP) detection problem on real STAP data.
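
For readers who want to experiment, here is a compact sketch of a shrinkage fixed-point covariance iteration in the regularized-Tyler form commonly seen in this literature; the trace normalization and stopping rule are assumptions rather than details taken from the cited works.

```python
# Sketch of a shrinkage fixed-point covariance iteration (regularized-Tyler form);
# normalization and stopping rule are assumptions, not taken from the cited papers.
import numpy as np

def shrinkage_fixed_point(X, beta, n_iter=100, tol=1e-8):
    """X: (n_samples, p) data; beta in (0, 1] is the shrinkage (losing) factor."""
    n, p = X.shape
    R = np.eye(p)
    for _ in range(n_iter):
        # Per-sample quadratic forms x_i^T R^{-1} x_i under the current estimate.
        q = np.einsum("ij,jk,ik->i", X, np.linalg.inv(R), X)
        S = (X / q[:, None]).T @ X * (p / n)          # weighted scatter matrix
        R_new = (1.0 - beta) * S + beta * np.eye(p)
        R_new *= p / np.trace(R_new)                  # trace normalization (assumed)
        if np.linalg.norm(R_new - R, "fro") < tol:
            return R_new
        R = R_new
    return R

# Toy usage: elliptical data with radial heavy-tailed contamination.
rng = np.random.default_rng(3)
p, n = 5, 200
A = rng.standard_normal((p, p))
Sigma = A @ A.T + p * np.eye(p)
X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
X *= rng.standard_t(df=3, size=(n, 1))
R_hat = shrinkage_fixed_point(X, beta=0.2)
print(np.round(R_hat, 2))
```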
