
Large-Scale Unsupervised Deep Representation Learning for Brain Structure

Posted by Ayush Jaiswal
Publication date: 2018
Research language: English





Machine Learning (ML) is increasingly being used for computer aided diagnosis of brain related disorders based on structural magnetic resonance imaging (MRI) data. Most of such work employs biologically and medically meaningful hand-crafted features calculated from different regions of the brain. The construction of such highly specialized features requires a considerable amount of time, manual oversight and careful quality control to ensure the absence of errors in the computational process. Recent advances in Deep Representation Learning have shown great promise in extracting highly non-linear and information-rich features from data. In this paper, we present a novel large-scale deep unsupervised approach to learn generic feature representations of structural brain MRI scans, which requires no specialized domain knowledge or manual intervention. Our method produces low-dimensional representations of brain structure, which can be used to reconstruct brain images with very low error and exhibit performance comparable to FreeSurfer features on various classification tasks.
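To make the idea concrete, the sketch below shows the kind of unsupervised model the abstract describes: a small 3D convolutional autoencoder (in PyTorch) that compresses a structural MRI volume into a low-dimensional code and reconstructs the scan from it. The layer widths, the 64-dimensional bottleneck, and the 64^3 input resolution are illustrative assumptions, not the authors' actual architecture.

```python
# Minimal sketch of a 3D convolutional autoencoder for brain MRI volumes.
# Layer sizes and the 64-d bottleneck are illustrative assumptions.
import torch
import torch.nn as nn

class BrainMRIAutoencoder(nn.Module):
    def __init__(self, latent_dim=64):
        super().__init__()
        # Encoder: downsample a 1-channel MRI volume to a compact code.
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(32, 64, kernel_size=4, stride=2, padding=1), nn.ReLU(),
        )
        self.to_latent = nn.Linear(64 * 8 * 8 * 8, latent_dim)
        self.from_latent = nn.Linear(latent_dim, 64 * 8 * 8 * 8)
        # Decoder: mirror of the encoder, reconstructing the volume.
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(64, 32, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(32, 16, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(16, 1, kernel_size=4, stride=2, padding=1),
        )

    def forward(self, x):
        h = self.encoder(x)
        z = self.to_latent(h.flatten(1))            # low-dimensional representation
        h = self.from_latent(z).view(-1, 64, 8, 8, 8)
        return self.decoder(h), z

# One unsupervised training step on (resampled) 64^3 volumes.
model = BrainMRIAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(2, 1, 64, 64, 64)                  # stand-in for a batch of scans
opt.zero_grad()
recon, z = model(x)
loss = nn.functional.mse_loss(recon, x)            # reconstruction error
loss.backward()
opt.step()
```

After training, the latent codes `z` play the role of the generic, low-dimensional brain representations that the abstract compares against FreeSurfer features on downstream classification tasks.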




Read also

In this paper, we propose an unsupervised collaborative representation deep network (UCRDNet) which consists of a novel collaborative representation RBM (crRBM) and collaborative representation GRBM (crGRBM). The UCRDNet is a novel deep collaborative feature extractor for exploring more sophisticated probabilistic models of real-valued and binary data. Unlike traditional representation methods, one similarity relation between the input instances and another similarity relation between the features of the input instances are collaboratively fused together in the representation process of the crGRBM and crRBM models. Here, we use the Locality Sensitive Hashing (LSH) method to divide the input instance matrix into many mini blocks which contain similar instances and local features. Then, we expect the hidden-layer feature units of each block to gather toward the block center as much as possible during training of the crRBM and crGRBM. Hence, the correlations between instances and features, as collaborative relations, are fused into the hidden-layer features. In the experiments, we use K-means and Spectral Clustering (SC) algorithms based on four contrasting deep networks to verify the deep collaborative representation capability of the UCRDNet architecture. One UCRDNet architecture is composed of a crGRBM and two crRBMs for modeling real-valued data, and another is composed of three crRBMs for modeling binary data. The experimental results show that the proposed UCRDNet outperforms the Autoencoder and DeepFS deep networks (which lack a collaborative representation strategy) in unsupervised clustering on the MSRA-MM2.0 and UCI datasets. Furthermore, the proposed UCRDNet shows stronger collaborative representation capabilities than the CDL deep collaborative networks for unsupervised clustering.
On April 13th, 2019, OpenAI Five became the first AI system to defeat the world champions at an esports game. The game of Dota 2 presents novel challenges for AI systems such as long time horizons, imperfect information, and complex, continuous state-action spaces, all challenges which will become increasingly central to more capable AI systems. OpenAI Five leveraged existing reinforcement learning techniques, scaled to learn from batches of approximately 2 million frames every 2 seconds. We developed a distributed training system and tools for continual training which allowed us to train OpenAI Five for 10 months. By defeating the Dota 2 world champion (Team OG), OpenAI Five demonstrates that self-play reinforcement learning can achieve superhuman performance on a difficult task.
Unsupervised active learning has attracted increasing attention in recent years, where its goal is to select representative samples in an unsupervised setting for human annotation. Most existing works are based on shallow linear models, assuming that each sample can be well approximated by the span (i.e., the set of all linear combinations) of certain selected samples, and then take these selected samples as representative ones to label. However, in practice, the data do not necessarily conform to linear models, and how to model the nonlinearity of data often becomes the key to success. In this paper, we present a novel Deep neural network framework for Unsupervised Active Learning, called DUAL. DUAL explicitly learns a nonlinear embedding that maps each input into a latent space through an encoder-decoder architecture, and introduces a selection block to select representative samples in the learnt latent space. In the selection block, DUAL simultaneously preserves the whole input patterns as well as the cluster structure of the data. Extensive experiments are performed on six publicly available datasets, and the results clearly demonstrate the efficacy of our method compared with state-of-the-art approaches.
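For intuition about "selecting representative samples in a learnt latent space", the snippet below applies a simple greedy k-center rule to encoder outputs. This is only a stand-in for that step, not DUAL's actual selection block (which also preserves the input patterns and cluster structure); the array shapes and the number of queries are illustrative assumptions.

```python
# Greedy k-center selection over latent codes: a common, simple way to pick
# diverse representatives for labeling. Stand-in illustration, not DUAL's block.
import numpy as np

def greedy_k_center(latent, k):
    """Pick k representative rows of `latent` (N, D) by greedy k-center."""
    selected = [0]                                      # arbitrary first point
    dist = np.linalg.norm(latent - latent[0], axis=1)   # distance to selected set
    for _ in range(k - 1):
        nxt = int(np.argmax(dist))                      # farthest from current set
        selected.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(latent - latent[nxt], axis=1))
    return selected

# Latent codes for 100 unlabeled samples (stand-in values); pick 10 to annotate.
codes = np.random.randn(100, 16)
to_label = greedy_k_center(codes, 10)
```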
We present a large-scale study on unsupervised spatiotemporal representation learning from videos. With a unified perspective on four recent image-based frameworks, we study a simple objective that can easily generalize all these methods to space-time. Our objective encourages temporally-persistent features in the same video, and in spite of its simplicity, it works surprisingly well across: (i) different unsupervised frameworks, (ii) pre-training datasets, (iii) downstream datasets, and (iv) backbone architectures. We draw a series of intriguing observations from this study, e.g., we discover that encouraging long-spanned persistency can be effective even if the timespan is 60 seconds. In addition to state-of-the-art results in multiple benchmarks, we report a few promising cases in which unsupervised pre-training can outperform its supervised counterpart. Code is made available at https://github.com/facebookresearch/SlowFast
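The "temporally-persistent features" objective amounts to pulling together embeddings of clips sampled from the same video while pushing apart clips from different videos. Below is a minimal InfoNCE-style sketch of that idea; the batch layout, embedding size, and temperature are illustrative assumptions rather than the paper's exact formulation.

```python
# Minimal sketch: clips from the same video (same row index) are positives,
# all other pairs in the batch are negatives. Illustrative, not the exact setup.
import torch
import torch.nn.functional as F

def temporal_persistence_loss(z1, z2, temperature=0.1):
    """z1, z2: (N, D) embeddings of two clips per video; row i = same video."""
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature       # similarity of every clip pair
    targets = torch.arange(z1.size(0))       # positives lie on the diagonal
    return F.cross_entropy(logits, targets)

# Two clips from each of 8 videos, encoded to 128-d features (random stand-ins).
z_clip_a, z_clip_b = torch.randn(8, 128), torch.randn(8, 128)
loss = temporal_persistence_loss(z_clip_a, z_clip_b)
```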
Effectively and efficiently deploying graph neural networks (GNNs) at scale remains one of the most challenging aspects of graph representation learning. Many powerful solutions have only ever been validated on comparatively small datasets, often with counter-intuitive outcomes -- a barrier which has been broken by the Open Graph Benchmark Large-Scale Challenge (OGB-LSC). We entered the OGB-LSC with two large-scale GNNs: a deep transductive node classifier powered by bootstrapping, and a very deep (up to 50-layer) inductive graph regressor regularised by denoising objectives. Our models achieved award-level (top-3) performance on both the MAG240M and PCQM4M benchmarks. In doing so, we demonstrate evidence of scalable self-supervised graph representation learning and the utility of very deep GNNs -- both very important open issues. Our code is publicly available at: https://github.com/deepmind/deepmind-research/tree/master/ogb_lsc.
