
Mammographic image restoration using maximum entropy deconvolution

Published by: John Jackson
Publication date: 2005
Research field: Physics
Paper language: English





An image restoration approach based on a Bayesian maximum entropy method (MEM) has been applied to a radiological image deconvolution problem, that of reduction of geometric blurring in magnification mammography. The aim of the work is to demonstrate an improvement in image spatial resolution in realistic noisy radiological images with no associated penalty in terms of reduction in the signal-to-noise ratio perceived by the observer. Images of the TORMAM mammographic image quality phantom were recorded using the standard magnification settings of 1.8 magnification/fine focus and also at 1.8 magnification/broad focus and 3.0 magnification/fine focus; the latter two arrangements would normally give rise to unacceptable geometric blurring. Measured point-spread functions were used in conjunction with the MEM image processing to de-blur these images. The results are presented as comparative images of phantom test features and as observer scores for the raw and processed images. Visualization of high resolution features and the total image scores for the test phantom were improved by the application of the MEM processing. It is argued that this successful demonstration of image de-blurring in noisy radiological images offers the possibility of weakening the link between focal spot size and geometric blurring in radiology, thus opening up new approaches to system optimization.
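The abstract does not give implementation details, but the core of a Bayesian MEM deconvolution can be sketched as follows. This is a minimal, illustrative sketch assuming a measured point-spread function and a Gaussian noise level; the regularization weight, step size, and flat default image are assumptions, not the values used in the paper.

```python
# Minimal MEM deconvolution sketch (assumptions: known PSF, Gaussian noise).
# The objective Q(f) = chi^2(f)/2 - alpha * S(f) is minimised by projected
# gradient descent, where S is the image entropy relative to a flat default.
import numpy as np
from scipy.signal import fftconvolve

def mem_deconvolve(blurred, psf, sigma, alpha=0.01, step=0.1, n_iter=200):
    """Bayesian MEM deconvolution of a 2-D image with a measured PSF."""
    blurred = blurred.astype(float)
    psf = psf / psf.sum()                          # normalise the PSF
    model = np.full_like(blurred, blurred.mean())  # flat default image m
    f = blurred.clip(min=1e-6).copy()              # positive starting estimate
    psf_flipped = psf[::-1, ::-1]                  # adjoint of the blur operator

    for _ in range(n_iter):
        residual = (fftconvolve(f, psf, mode="same") - blurred) / sigma**2
        grad_chi2 = fftconvolve(residual, psf_flipped, mode="same")  # d(chi^2/2)/df
        grad_entropy = -(np.log(f / model) + 1.0)                    # dS/df
        f -= step * (grad_chi2 - alpha * grad_entropy)
        f = f.clip(min=1e-6)                       # enforce positivity
    return f
```

In practice the regularization weight would be tuned so that the final chi-squared of the restored image is consistent with the number of pixels, which is the usual MEM stopping criterion; the fixed value above is only for illustration.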


Read also

Uncertainty quantification for Particle Image Velocimetry (PIV) is critical for comparing flow fields with Computational Fluid Dynamics (CFD) results, and for model design and validation. However, PIV features a complex measurement chain with coupled, non-linear error sources, and quantifying the uncertainty is challenging. Multiple assessments show that none of the current methods can reliably measure the actual uncertainty across a wide range of experiments. Because the current methods differ in their assumptions regarding the measurement process and their calculation procedures, it is not clear which method is best to use for a given experiment. To address this issue, we propose a method to estimate an uncertainty method's sensitivity and reliability, termed the Meta-Uncertainty. The novel approach is automated, local, and instantaneous, and is based on perturbation of the recorded particle images. We developed an image perturbation scheme based on adding random unmatched particles to the interrogation window pair, considering the signal-to-noise ratio (SNR) of the correlation plane. Each uncertainty scheme's response to several trials of random particle addition is used to estimate a reliability metric, defined as the rate of change of the inter-quartile range (IQR) of the uncertainties with increasing levels of particle addition. We also propose applying the meta-uncertainty as a weighting metric to combine uncertainty estimates from individual schemes, based on ideas from the consensus forecasting literature. We use PIV measurements across a range of canonical flows to assess the performance of the uncertainty schemes. The results show that the combined uncertainty method outperforms the individual methods, and establish the meta-uncertainty as a useful reliability assessment tool for PIV uncertainty quantification.
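As a rough illustration of the weighting idea described above, the following sketch assumes each uncertainty scheme has already been re-evaluated on image pairs perturbed with increasing numbers of random unmatched particles; the inverse-sensitivity weighting rule and all variable names are illustrative assumptions rather than the paper's exact formulation.

```python
# Sketch of an IQR-based reliability metric and a weighted combination of
# uncertainty estimates from several PIV uncertainty schemes.
import numpy as np

def iqr(samples):
    q75, q25 = np.percentile(samples, [75, 25])
    return q75 - q25

def meta_uncertainty(uncertainties_per_level, levels):
    """Rate of change of the IQR of a scheme's estimates vs. perturbation level.

    uncertainties_per_level[k] holds that scheme's uncertainty estimates over
    several random-particle-addition trials at perturbation level levels[k].
    """
    iqrs = [iqr(u) for u in uncertainties_per_level]
    slope, _ = np.polyfit(levels, iqrs, 1)   # sensitivity to the perturbation
    return abs(slope)

def combine_estimates(estimates, meta):
    """Combine per-scheme uncertainty estimates, weighting by inverse sensitivity."""
    weights = 1.0 / (np.asarray(meta, dtype=float) + 1e-12)
    weights /= weights.sum()
    return float(np.dot(weights, estimates))
```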
For conventional computed tomography (CT) image reconstruction tasks, the most popular method is the so-called filtered back-projection (FBP) algorithm, in which the acquired Radon projections are usually filtered by a ramp kernel before being back-projected to generate CT images. In this work, by contrast, we realized the idea of image-domain backproject-filter (BPF) CT image reconstruction using deep learning techniques for the first time. With a properly designed convolutional neural network (CNN), preliminary results demonstrate that it is feasible to reconstruct CT images with maintained high spatial resolution and accurate pixel values from the highly blurred back-projection image, i.e., the laminogram. In addition, experimental results show that this deconvolution-based CT image reconstruction network has the potential to reduce CT image noise (by up to 20%), indicating that patient radiation dose may be reduced. Due to these advantages, the proposed CNN-based image-domain BPF CT image reconstruction scheme offers promising prospects for generating high spatial resolution, low-noise CT images in future clinical applications.
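The abstract does not specify the network, so the following is only a minimal sketch of an image-domain deconvolution CNN that maps the blurred back-projection image (laminogram) to a CT slice; the residual formulation, layer sizes, and names are assumptions for illustration.

```python
# Sketch of an image-domain BPF-style deconvolution network (PyTorch).
import torch
import torch.nn as nn

class DeconvolutionCNN(nn.Module):
    def __init__(self, channels=64, depth=5):
        super().__init__()
        layers = [nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(channels, channels, 3, padding=1),
                       nn.BatchNorm2d(channels), nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(channels, 1, 3, padding=1)]
        self.net = nn.Sequential(*layers)

    def forward(self, laminogram):
        # Residual learning: predict only the sharpening correction and add it
        # back to the laminogram, so the network learns the deblurring step.
        return laminogram + self.net(laminogram)

# Training would minimise e.g. a mean-squared error between the network output
# and the reference CT slice: loss = nn.MSELoss()(model(laminogram), reference)
```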
The flicker-noise spectroscopy (FNS) approach is used to determine the dynamic characteristics of neuromagnetic responses by analyzing the magnetoencephalographic (MEG) signals recorded as the response of a group of control human subjects and a patient with photosensitive epilepsy (PSE) to equiluminant flickering stimuli of different color combinations. Parameters characterizing the analyzed stochastic biomedical signals for different frequency bands are identified. It is shown that the classification of the parameters of the analyzed MEG responses with respect to different frequency bands makes it possible to separate the contribution of the chaotic component from the overall complex dynamics of the signals. It is demonstrated that the chaotic component can be adequately described by the anomalous diffusion approximation in the case of control subjects. On the other hand, the chaotic component for the patient is characterized by a large number of high-frequency resonances. This implies that healthy organisms can suppress the perturbations brought about by the flickering stimuli and reorganize themselves, whereas organisms affected by photosensitive epilepsy no longer have this ability. This result also suggests a way to simulate the separate stages of brain cortex activity in vivo. The examples illustrating the use of the FNS device for identifying even the slightest individual differences in the activity of human brains, using their responses to external standard stimuli, show a unique possibility to develop the individualized medicine of the future.
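As a loose illustration of the band-wise analysis described above, the sketch below band-filters a 1-D MEG response into standard frequency bands and computes a power spectrum and a second-order difference moment (structure function) per band, which are the basic quantities FNS parameterizes; the band edges and filter settings are assumptions, and the actual FNS parameter extraction is not reproduced here.

```python
# Sketch of band-wise spectral and structure-function features for an MEG trace.
import numpy as np
from scipy.signal import butter, sosfiltfilt, welch

BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 80)}  # illustrative band edges (Hz)

def band_features(meg, fs):
    """Per-band power spectrum and second-order difference moment of `meg` (1-D, fs Hz)."""
    features = {}
    for name, (lo, hi) in BANDS.items():
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        x = sosfiltfilt(sos, meg)
        freqs, psd = welch(x, fs=fs, nperseg=min(len(x), 2048))
        lags = np.arange(1, min(200, len(x) // 2))
        phi2 = [np.mean((x[lag:] - x[:-lag]) ** 2) for lag in lags]  # structure function
        features[name] = {"freqs": freqs, "psd": psd,
                          "lags_s": lags / fs, "phi2": np.array(phi2)}
    return features
```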
A. Ghosh, B. Yaeggy, R. Galindo (2021)
This paper presents a novel neutral-pion reconstruction that takes advantage of the machine learning technique of semantic segmentation, using MINERvA data collected between 2013 and 2017 with an average neutrino energy of $6$ GeV. Semantic segmentation improves the purity of neutral pion reconstruction from two gammas from 71% to 89% and improves the efficiency of the reconstruction by approximately 40%. We demonstrate our method in a charged-current neutral pion production analysis in which a single neutral pion is reconstructed. This technique is applicable to modern tracking calorimeters, such as the new generation of liquid-argon time projection chambers, exposed to neutrino beams with $\langle E_\nu \rangle$ between 1 and 10 GeV. In such experiments it can facilitate the identification of ionization hits which are associated with electromagnetic showers, thereby enabling improved reconstruction of charged-current $\nu_e$ events arising from $\nu_\mu \rightarrow \nu_e$ appearance.
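The network used in the analysis is not described here, so the following is only a minimal sketch of per-pixel semantic segmentation applied to a rasterised detector readout; the tiny encoder-decoder, the class set, and all names are illustrative assumptions, not the MINERvA model.

```python
# Sketch of per-hit semantic segmentation for a rasterised detector view (PyTorch).
import torch
import torch.nn as nn

class HitSegmenter(nn.Module):
    def __init__(self, n_classes=3):   # e.g. EM shower / track / background
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(inplace=True),
            nn.Conv2d(16, n_classes, 1),   # per-pixel class logits
        )

    def forward(self, hits):
        return self.decoder(self.encoder(hits))

# Training would use a per-pixel cross-entropy loss; at inference the argmax
# over classes marks which hits belong to electromagnetic showers, feeding the
# downstream two-gamma (neutral pion) reconstruction.
```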
Purpose: Using a linear transformation of the data allows studying the detectability of an imaging system on a large number of signals. An appropriate transformation will produce a set of signals with different contrast and different frequency content. In this work both strategies are explored to present a task-based test for the detectability of an x-ray imaging system. Methods: Images of a new star-bar phantom are acquired with different entrance air kerma and with different beam qualities. Then, after a wavelet packet is applied to both the input and the output of the system, conventional statistical decision theory is applied to determine detectability on the different images, or nodes, resulting from the transformation. A non-prewhitening matched filter is applied to the data in the spatial domain, and ROC analysis is carried out in each of the nodes. Results: AUC maps resulting from the analysis present the area under the ROC curve over the whole 2D frequency space for the different doses and beam qualities. In addition, AUC curves, obtained by radially averaging the AUC maps, allow comparing the detectability of the different techniques as a function of frequency in a single figure. The results obtained show differences between images acquired with different doses for each of the beam qualities analyzed. Conclusions: Combining a star-bar phantom as test object, a wavelet packet as linear transformation, and ROC analysis results in an appropriate task-based test of the detectability performance of an imaging system. The test presented in this work allows quantification of system detectability as a function of the 2D frequency interval of the signal to be detected. It also allows calculation of detectability differences between different acquisition techniques and beam qualities.
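A minimal sketch of the node-wise detectability test can be assembled from standard pieces, assuming stacks of signal-present and signal-absent patches extracted from the star-bar images; the choice of wavelet, the decomposition level, and the application of the matched filter to the node coefficients (rather than to node-wise spatial reconstructions) are illustrative assumptions.

```python
# Sketch of wavelet-packet decomposition + NPW matched filter + per-node AUC.
import numpy as np
import pywt
from sklearn.metrics import roc_auc_score

def node_coefficients(patch, wavelet="db4", level=2):
    """2-D wavelet-packet coefficients of a patch, keyed by node path."""
    wp = pywt.WaveletPacket2D(patch, wavelet=wavelet, maxlevel=level)
    return {node.path: node.data for node in wp.get_level(level)}

def npw_auc_per_node(signal_patches, background_patches, wavelet="db4", level=2):
    """AUC of the NPW matched-filter statistic in each wavelet-packet node."""
    # Expected signal template: mean signal-present minus mean signal-absent patch.
    template = np.mean(signal_patches, axis=0) - np.mean(background_patches, axis=0)
    template_nodes = node_coefficients(template, wavelet, level)
    auc = {}
    for path, t in template_nodes.items():
        scores, labels = [], []
        for label, patches in ((1, signal_patches), (0, background_patches)):
            for patch in patches:
                coeffs = node_coefficients(patch, wavelet, level)[path]
                scores.append(np.sum(t * coeffs))   # NPW matched-filter statistic
                labels.append(label)
        auc[path] = roc_auc_score(labels, scores)   # AUC for this frequency node
    return auc
```

Radially averaging the resulting per-node AUC values over the 2D frequency plane would then give the single-figure AUC-versus-frequency curves described in the abstract.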