
Investigation and Assessment of Disorder of Ultrasound B-mode Images

Published by: Rdv Ijcsis
Publication date: 2010
Research field: Informatics Engineering
Language: English





Digital images play a vital role in the early detection of cancers such as prostate, breast, lung, and cervical cancer. Ultrasound imaging is also suitable for the early detection of fetal abnormalities, so accurate detection of the region of interest in an ultrasound image is crucial. Because the image is the result of reflection, refraction and deflection of ultrasound waves from different types of tissue with different acoustic impedances, the contrast in an ultrasound image is usually very low, and weak edges make it difficult to identify the fetal region. The analysis of ultrasound images is therefore particularly challenging. We develop a new algorithmic approach to address this lack of clarity and to detect disorder in the image. Since there is no single enhancement approach that suits all kinds of noise, this paper proposes different filtering techniques based on statistical methods for the removal of various types of noise. The quality of the enhanced images is measured by the statistical quantity measures Signal-to-Noise Ratio (SNR), Peak Signal-to-Noise Ratio (PSNR), and Root Mean Square Error (RMSE).
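The abstract does not specify which statistical filters are compared, so the following is only a minimal Python sketch under my own assumptions: a median filter stands in for one of the statistical denoising methods, and the three quoted quality measures are computed against a synthetic clean reference (in practice no clean reference exists for real ultrasound data).

# Minimal sketch (not the paper's exact pipeline): denoise a speckle-corrupted
# image with a median filter and score the result with RMSE, SNR and PSNR.
import numpy as np
from scipy.ndimage import median_filter

def rmse(reference, estimate):
    """Root Mean Square Error between a reference image and its estimate."""
    return np.sqrt(np.mean((reference.astype(float) - estimate.astype(float)) ** 2))

def snr_db(reference, estimate):
    """Signal-to-Noise Ratio in dB: signal power over residual-noise power."""
    noise = reference.astype(float) - estimate.astype(float)
    return 10 * np.log10(np.sum(reference.astype(float) ** 2) / np.sum(noise ** 2))

def psnr_db(reference, estimate, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB for 8-bit images."""
    return 20 * np.log10(peak / rmse(reference, estimate))

# Synthetic demonstration with multiplicative (speckle-like) noise.
rng = np.random.default_rng(0)
clean = np.clip(rng.normal(128, 30, (256, 256)), 0, 255)
noisy = np.clip(clean * (1 + 0.2 * rng.standard_normal(clean.shape)), 0, 255)
denoised = median_filter(noisy, size=3)  # one example of a statistical filter

print(f"RMSE: {rmse(clean, denoised):.2f}  "
      f"SNR: {snr_db(clean, denoised):.2f} dB  "
      f"PSNR: {psnr_db(clean, denoised):.2f} dB")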




Read also

Hang Zhu, Zihao Wang (2020)
Feature matching is an important technique for identifying a single object across different images. It helps machines build recognition of a specific object from multiple perspectives. For years, feature matching has been commonly used in various computer vision applications such as traffic surveillance, self-driving, and other systems. With the rise of Computer-Aided Diagnosis (CAD), the need for feature matching techniques has also emerged in the medical imaging field. In this paper, we present a deep learning-based method designed specifically for ultrasound images. It is examined against existing methods that achieve outstanding results on regular images. Since ultrasound images differ from regular images in many respects, such as texture, noise type, and dimension, the traditional methods are evaluated and optimized before being applied to ultrasound images.
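The deep network itself is not described in this abstract; as a rough illustration of the kind of classical feature-matching baseline it would be compared against, here is a hedged OpenCV sketch (ORB with brute-force matching; all names and parameters are my own choices, not the paper's).

# Classical feature-matching baseline sketch: ORB keypoints + Hamming matcher.
import cv2

def match_features(img_a, img_b, max_matches=50):
    """Detect ORB keypoints in two grayscale images and return the best matches."""
    orb = cv2.ORB_create(nfeatures=1000)
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)
    return kp_a, kp_b, matches[:max_matches]

# Usage (file paths are placeholders):
# a = cv2.imread("ultrasound_view1.png", cv2.IMREAD_GRAYSCALE)
# b = cv2.imread("ultrasound_view2.png", cv2.IMREAD_GRAYSCALE)
# kp_a, kp_b, matches = match_features(a, b)
# vis = cv2.drawMatches(a, kp_a, b, kp_b, matches, None)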
This paper addresses the task of detecting and localising fetal anatomical regions in 2D ultrasound images where only image-level labels are present at training time, i.e. without any localisation or segmentation information. We examine the use of convolutional neural network architectures coupled with soft proposal layers. The resulting network simultaneously performs anatomical region detection (classification) and localisation. We generate a proposal map describing the attention of the network for a particular class. The network is trained on 85,500 2D fetal ultrasound images and their associated labels. Labels correspond to six anatomical regions: head, spine, thorax, abdomen, limbs, and placenta. Detection achieves an average accuracy of 90% on individual regions, and the proposal maps correlate well with the relevant anatomical structures. This work is a powerful and essential step towards subsequent tasks such as fetal position and pose estimation, organ-specific segmentation, or image-guided navigation. Code and additional material are available at https://ntoussaint.github.io/fetalnav
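To make the weakly supervised idea above concrete, here is a minimal PyTorch sketch of a multi-label CNN trained only on image-level labels whose final convolutional score maps double as per-class attention maps. It is not the authors' soft-proposal architecture; the layer sizes and names are illustrative assumptions.

# Weakly supervised detection sketch: image-level supervision, per-class score maps.
import torch
import torch.nn as nn

CLASSES = ["head", "spine", "thorax", "abdomen", "limbs", "placenta"]

class WeaklySupervisedDetector(nn.Module):
    def __init__(self, n_classes=len(CLASSES)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
        )
        self.classifier = nn.Conv2d(128, n_classes, kernel_size=1)  # per-class score maps

    def forward(self, x):
        score_maps = self.classifier(self.features(x))  # (B, C, H', W') proposal-like maps
        logits = score_maps.mean(dim=(2, 3))             # global average pooling -> image-level logits
        return logits, score_maps

model = WeaklySupervisedDetector()
x = torch.randn(2, 1, 224, 224)                          # dummy grayscale ultrasound frames
logits, maps = model(x)
loss = nn.BCEWithLogitsLoss()(logits, torch.zeros_like(logits))  # multi-label image-level loss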
Zhihao Fang, Wanyi Zhang, He Ma (2019)
Ultrasound imaging has been widely used for the diagnosis of breast tumors in recent years. However, it suffers from several problems, for instance poor quality, intense noise, and uneven echo distribution, which create a huge obstacle to diagnosis. To overcome these problems, we propose a novel method for breast cancer classification with ultrasound images based on SLIC (BCCUI). We first utilize Region of Interest (ROI) extraction based on the Simple Linear Iterative Clustering (SLIC) algorithm and a region growing algorithm to extract the ROI at the superpixel level. Next, features of the ROI are extracted. Finally, a Support Vector Machine (SVM) classifier is applied. The evaluation shows that the accuracy of this segmentation algorithm reaches 88.00% and its sensitivity reaches 92.05%, which demonstrates that the classifier presented in this paper has research merit and practical value.
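As a rough sketch of the pipeline described above, under simplifying assumptions of my own (SLIC superpixels approximate the ROI, simple intensity statistics serve as features, and the region-growing step and the paper's actual feature set are omitted):

# BCCUI-style sketch: SLIC superpixel ROI proxy + intensity features + SVM.
import numpy as np
from skimage.segmentation import slic
from sklearn.svm import SVC

def roi_features(image, n_segments=200):
    """Keep the darkest superpixels (lesions are typically hypoechoic)
    and summarise their intensities as a small feature vector."""
    segments = slic(image, n_segments=n_segments, channel_axis=None)
    labels = np.unique(segments)
    means = np.array([image[segments == s].mean() for s in labels])
    roi_mask = np.isin(segments, labels[means < np.percentile(means, 20)])
    roi = image[roi_mask]
    return np.array([roi.mean(), roi.std(), roi_mask.mean()])

# Usage with a list of 2D grayscale arrays `images` and labels `y` (0 = benign, 1 = malignant):
# X = np.stack([roi_features(img) for img in images])
# clf = SVC(kernel="rbf").fit(X, y)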
Three-dimensional medical imaging enables detailed understanding of osteoarthritis structural status. However, there remains a vast need for automatic, and thus reader-independent, measures that provide reliable assessment of subject-specific clinical outcomes. To this end, we derive a consistent generalization of the recently proposed B-score to Riemannian shape spaces. We further present an algorithmic treatment yielding simple yet efficient computations, allowing for the analysis of large shape populations with several thousand samples. Our intrinsic formulation exhibits improved discrimination ability over its Euclidean counterpart, which we demonstrate for predictive validity in assessing the risk of total knee replacement. This result highlights the potential of the geodesic B-score to enable improved personalized assessment and stratification for interventions.
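For orientation, the Euclidean B-score standardises a shape's position along the direction from the healthy mean shape to the osteoarthritic mean shape; the geodesic version replaces straight lines by geodesics in shape space. A hedged LaTeX sketch of the Euclidean form (notation is illustrative, not the authors' exact formulation):

% \bar{s}_H, \sigma_H: mean and spread of the healthy reference population;
% \bar{s}_{OA}: mean of the OA population; s: the shape being scored.
\[
  w = \frac{\bar{s}_{\mathrm{OA}} - \bar{s}_H}{\lVert \bar{s}_{\mathrm{OA}} - \bar{s}_H \rVert},
  \qquad
  B(s) = \frac{\langle\, s - \bar{s}_H,\; w \,\rangle}{\sigma_H}.
\]
% The geodesic generalisation replaces differences by Riemannian log maps and the
% line through the two means by the geodesic connecting them.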
Bo Li, Kele Xu, Dawei Feng (2019)
B-mode ultrasound tongue imaging is widely used in the speech production field. However, efficient interpretation of the tongue image sequences is in great need. Inspired by the recent success of unsupervised deep learning approaches, we explore unsupervised convolutional network architectures for feature extraction from ultrasound tongue images, which can be helpful for clinical linguists and phoneticians. In a quantitative comparison between different unsupervised feature extraction approaches, the denoising convolutional autoencoder (DCAE)-based method outperforms the other feature extraction methods on the reconstruction task and on the 2010 silent speech interface challenge. A Word Error Rate of 6.17% is obtained with the DCAE, compared to the state-of-the-art value of 6.45% using the discrete cosine transform as the feature extractor. Our code is available at https://github.com/DeePBluE666/Source-code1.
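For readers unfamiliar with the architecture named above, here is a minimal denoising convolutional autoencoder sketch in PyTorch. It is illustrative only, not the authors' exact DCAE: the layer sizes, noise level, and variable names are my own assumptions. The encoder output is the kind of compact feature that would feed downstream silent-speech recognition.

# Minimal denoising convolutional autoencoder (DCAE) sketch.
import torch
import torch.nn as nn

class DCAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 3, stride=2, padding=1, output_padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        code = self.encoder(x)            # compact feature used downstream
        return self.decoder(code), code

model = DCAE()
clean = torch.rand(4, 1, 64, 64)                           # dummy tongue-image batch
noisy = (clean + 0.1 * torch.randn_like(clean)).clamp(0, 1)
recon, features = model(noisy)
loss = nn.MSELoss()(recon, clean)                          # reconstruct the clean frame from the noisy one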