
3D Based Landmark Tracker Using Superpixels Based Segmentation for Neuroscience and Biomechanics Studies

Published by: Omid Haji Maghsoudi
Publication date: 2017
Research field: Informatics Engineering
Paper language: English





Examining locomotion has improved our basic understanding of motor control and aided in treating motor impairment. Mice and rats are premier models of human disease and increasingly the model systems of choice for basic neuroscience. High frame rates (250 Hz) are needed to quantify the kinematics of these running rodents. Manual tracking, especially for multiple markers, becomes time-consuming and impossible for large sample sizes. Therefore, the need for automatic segmentation of these markers has grown in recent years. Here, we address this need by presenting a method that segments the markers using the SLIC superpixel method. The 2D coordinates on the image plane are projected to a 3D domain using the direct linear transform (DLT), and a 3D Kalman filter is used to predict the position of each marker from its position and speed in previous frames. Finally, a probabilistic function is used to find the best match among superpixels. The method is evaluated under different levels of tracking difficulty and achieves 95% correct labeling of the markers.
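To make the prediction step concrete, the following is a minimal sketch, not the authors' code, of the constant-velocity 3D Kalman filter the abstract describes: the state holds a marker's 3D position and velocity, the predict step extrapolates the next position from previous frames, and the update step corrects it with a DLT-reconstructed 3D measurement. The noise covariances and the 250 Hz time step are assumptions for illustration.

```python
import numpy as np

class ConstantVelocityKalman3D:
    """Predicts a marker's 3D position from its previous position and speed."""

    def __init__(self, dt=1.0 / 250.0):               # 250 Hz frame rate from the abstract
        self.x = np.zeros(6)                          # state: [px, py, pz, vx, vy, vz]
        self.P = np.eye(6)                            # state covariance
        self.F = np.eye(6)                            # constant-velocity transition model
        self.F[:3, 3:] = dt * np.eye(3)
        self.H = np.hstack([np.eye(3), np.zeros((3, 3))])  # only position is observed
        self.Q = 1e-4 * np.eye(6)                     # process noise (assumed value)
        self.R = 1e-2 * np.eye(3)                     # measurement noise (assumed value)

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:3]                             # predicted 3D marker position

    def update(self, z):
        y = z - self.H @ self.x                       # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)      # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(6) - K @ self.H) @ self.P
```

In the pipeline described above, the predicted position would be compared against candidate superpixels, and the probabilistic matching function would select the best one before the corresponding measurement is passed to update().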




Read also

Examining locomotion has improved our basic understanding of motor control and aided in treating motor impairment. Mice and rats are premier models of human disease and increasingly the model systems of choice for basic neuroscience. High frame rates (250 Hz) are needed to quantify the kinematics of these running rodents. Manual tracking, especially for multiple markers, becomes time-consuming and impossible for large sample sizes. Therefore, the need for automatic segmentation of these markers has grown in recent years. We propose two methods to segment and track these markers: first, SLIC superpixel segmentation with a tracker based on the position, speed, shape, and color of the segmented region in the previous frame; second, thresholding on the hue channel followed by the same tracker. The comparison showed that the SLIC superpixel method was superior because its segmentation was more reliable, drawing on both color and spatial information.
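As a rough illustration of the second method compared above, the sketch below thresholds the hue channel and returns centroids of sufficiently large connected regions as marker candidates. It assumes OpenCV; the hue, saturation, and area thresholds are placeholders, not values from the paper.

```python
import cv2
import numpy as np

def segment_markers_by_hue(frame_bgr, hue_lo=35, hue_hi=85, min_area=20):
    """Return centroids of connected regions whose hue falls in [hue_lo, hue_hi]."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    lower = np.array([hue_lo, 60, 60], dtype=np.uint8)    # saturation/value floors are assumptions
    upper = np.array([hue_hi, 255, 255], dtype=np.uint8)
    mask = cv2.inRange(hsv, lower, upper)
    # keep only connected components large enough to be a marker
    n, _, stats, centroids = cv2.connectedComponentsWithStats(mask)
    return [tuple(centroids[i]) for i in range(1, n)
            if stats[i, cv2.CC_STAT_AREA] >= min_area]
```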
Wireless Capsule Endoscopy (WCE) is a relatively new technology for examining the entire GI tract. During an examination, it captures more than 55,000 frames. Reviewing all these images is time-consuming and prone to human error. It has been a challenge to develop intelligent methods that assist physicians in reviewing the frames. The WCE frames are captured at 8-bit color depth, which provides a sufficient color range to detect abnormalities. Here, superpixel-based methods are proposed to segment five diseases: bleeding, Crohn's disease, Lymphangiectasia, Xanthoma, and Lymphoid hyperplasia. Two superpixel methods are compared for providing semantic segmentation of these prevalent diseases: simple linear iterative clustering (SLIC) and quick shift (QS). The segmented superpixels were classified into two classes (normal and abnormal) by a support vector machine (SVM) using texture and color features. For both superpixel methods, the accuracy, specificity, sensitivity, and precision were around 92%, 93%, 93%, and 88%, respectively. However, SLIC was dramatically faster than QS.
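The following is a minimal sketch of how SLIC superpixels can be summarised into features and classified as normal or abnormal with an SVM, in the spirit of the pipeline above. The per-superpixel colour statistics stand in for the paper's texture and colour features and are an assumption, not the authors' feature set.

```python
import numpy as np
from skimage.segmentation import slic
from sklearn.svm import SVC

def superpixel_features(image_rgb, n_segments=300):
    """Segment with SLIC and return one feature vector per superpixel."""
    labels = slic(image_rgb, n_segments=n_segments, compactness=10)
    feats = []
    for lab in np.unique(labels):
        region = image_rgb[labels == lab]                 # pixels of one superpixel
        feats.append(np.concatenate([region.mean(axis=0), region.std(axis=0)]))
    return labels, np.array(feats)

# Training/inference outline (X: stacked per-superpixel features, y: 0 normal / 1 abnormal):
# clf = SVC(kernel="rbf").fit(X, y)
# labels, feats = superpixel_features(new_frame)
# predictions = clf.predict(feats)
```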
We propose a new Patch-based Iterative Network (PIN) for fast and accurate landmark localisation in 3D medical volumes. PIN utilises a Convolutional Neural Network (CNN) to learn the spatial relationship between an image patch and anatomical landmark positions. During inference, patches are repeatedly passed to the CNN until the estimated landmark position converges to the true landmark location. PIN is computationally efficient since the inference stage only selectively samples a small number of patches in an iterative fashion rather than densely sampling every location in the volume. Our approach adopts a multi-task learning framework that combines regression and classification to improve localisation accuracy. We extend PIN to localise multiple landmarks by using principal component analysis, which models the global anatomical relationships between landmarks. We have evaluated PIN using 72 3D ultrasound images from fetal screening examinations. PIN achieves an average landmark localisation error of 5.59 mm and a runtime of 0.44 s to predict 10 landmarks per volume. Qualitatively, anatomical 2D standard scan planes derived from the predicted landmark locations are visually similar to the clinical ground truth. Source code is publicly available at https://github.com/yuanwei1989/landmark-detection.
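The iterative inference loop described above can be sketched as follows; `model.predict_displacement` is a hypothetical stand-in for the trained CNN (the actual implementation is in the linked repository), and the patch size, iteration cap, and convergence tolerance are illustrative assumptions.

```python
import numpy as np

def extract_patch(volume, centre, size):
    """Crop a cubic patch around `centre`, clipped to the volume bounds."""
    lo = np.clip(np.round(centre).astype(int) - size // 2, 0, None)
    hi = np.minimum(lo + size, volume.shape)
    return volume[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]

def iterative_localise(volume, model, start, patch_size=32, max_iters=50, tol=0.5):
    """Move the landmark estimate by the predicted offset until it stops changing."""
    pos = np.asarray(start, dtype=float)                      # initial guess in voxel coordinates
    for _ in range(max_iters):
        patch = extract_patch(volume, pos, patch_size)        # patch around the current estimate
        step = np.asarray(model.predict_displacement(patch))  # regressed 3D offset towards the landmark
        pos = pos + step
        if np.linalg.norm(step) < tol:                        # estimate has converged
            break
    return pos
```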
Robust automated organ segmentation is a prerequisite for computer-aided diagnosis (CAD), quantitative imaging analysis, and surgical assistance. For high-variability organs such as the pancreas, previous approaches report undesirably low accuracies. We present a bottom-up approach for pancreas segmentation in abdominal CT scans that is based on a hierarchy of information propagation, classifying image patches at different resolutions and cascading superpixels. There are four stages: 1) decomposing CT slice images into a set of disjoint, boundary-preserving superpixels; 2) computing pancreas class probability maps via dense patch labeling; 3) classifying superpixels by pooling both intensity and probability features into empirical statistics in cascaded random forest frameworks; and 4) simple connectivity-based post-processing. The dense image patch labeling is conducted by an efficient random forest classifier on image histogram, location, and texture features, and by a more expensive (but more specific) deep convolutional neural network classifier on larger image windows (with more spatial context). The approach is evaluated on a database of 80 manually segmented CT volumes in six-fold cross-validation (CV). Our results are comparable to, or better than, state-of-the-art methods (evaluated by leave-one-patient-out), with Dice of 70.7% and Jaccard of 57.9%. Computational efficiency is drastically improved, to roughly 6-8 minutes per case, compared with ~10 hours for other approaches. Finally, we implement a multi-atlas label fusion (MALF) approach for pancreas segmentation using the same datasets. Under six-fold CV, our bottom-up segmentation method significantly outperforms its MALF counterpart: (70.7 +/- 13.0%) versus (52.5 +/- 20.8%) in Dice. Deep CNN patch labeling confidences offer more numerical stability, reflected by smaller standard deviations.
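A minimal sketch of stage 3, pooling per-pixel intensity and probability values inside each superpixel into summary statistics and classifying them with a random forest, is shown below. The specific statistics and forest settings are illustrative assumptions, not the authors' configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def pool_superpixel_features(intensity, prob_map, superpixel_labels):
    """Summarise intensity and pancreas-probability values within each superpixel."""
    feats = []
    for lab in np.unique(superpixel_labels):
        m = superpixel_labels == lab
        vals_i, vals_p = intensity[m], prob_map[m]
        feats.append([vals_i.mean(), vals_i.std(),
                      vals_p.mean(), vals_p.std(), np.percentile(vals_p, 90)])
    return np.array(feats)

# clf = RandomForestClassifier(n_estimators=100).fit(train_feats, train_labels)
# superpixel_predictions = clf.predict(pool_superpixel_features(ct_slice, prob_map, sp_labels))
```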
A unified neural network structure is presented for joint 3D object detection and point cloud segmentation in this paper. We leverage rich supervision from both detection and segmentation labels rather than using just one of them. In addition, an extension of single-stage object detectors is proposed based on the implicit functions widely used in 3D scene and object understanding. The extension branch takes the final feature map from the object detection module as input and produces an implicit function that generates a semantic distribution for each point from its corresponding voxel center. We demonstrate the performance of our structure on nuScenes-lidarseg, a large-scale outdoor dataset. Our solution achieves competitive results against state-of-the-art methods in both 3D object detection and point cloud segmentation, with little additional computational load compared with object-detection-only solutions. Experiments also validate the proposed method's capability for efficient weakly supervised semantic segmentation.
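One plausible reading of the extension branch, sketched below as an assumption rather than the paper's actual architecture, is an implicit-function head that concatenates each point's voxel feature with the point's offset from its voxel center and maps the result to per-point semantic logits with a small MLP.

```python
import torch
import torch.nn as nn

class ImplicitSegHead(nn.Module):
    """Per-point semantic logits from a voxel feature plus the point's offset to its voxel center."""

    def __init__(self, feat_dim=128, num_classes=16, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + 3, hidden), nn.ReLU(inplace=True),
            nn.Linear(hidden, num_classes))

    def forward(self, voxel_feats, point_offsets):
        # voxel_feats:   (N, feat_dim) feature of the voxel each point falls into
        # point_offsets: (N, 3) point position minus its voxel center
        return self.mlp(torch.cat([voxel_feats, point_offsets], dim=-1))
```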