
SNR-adaptive OCT angiography enabled by statistical characterization of intensity and decorrelation with multi-variate time series model

Added by Luzhe Huang
Publication date: 2019
Field: Physics
Language: English





In OCT angiography (OCTA), decorrelation computation has been widely used as a local motion index to identify dynamic flow from static tissues, but its dependence on SNR severely degrades the vascular visibility, particularly in low-SNR regions. To mathematically characterize the decorrelation-SNR dependence of OCT signals, we developed a multi-variate time series (MVTS) model. Based on the model, we derived a universal asymptotic linear relation of decorrelation to inverse SNR (iSNR), with the variance in static and noise regions determined by the averaging kernel size. Accordingly, with the population distribution of static and noise voxels being explicitly calculated in the iSNR and decorrelation (ID) space, a linear classifier is developed to remove static and noise voxels at all SNR levels, generating an SNR-adaptive OCTA, termed ID-OCTA. Flow phantom and human skin experiments were then performed to validate the proposed ID-OCTA. Both qualitative and quantitative assessments demonstrated that ID-OCTA offers superior visibility of blood vessels, particularly in the deep layer. Finally, implications of this work for both system design and hemodynamic quantification are discussed.
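The classification step described above can be sketched as a linear decision boundary in the iSNR-decorrelation (ID) plane: a voxel is kept as dynamic flow only if its decorrelation exceeds the static/noise boundary at its iSNR. This is a minimal illustration; the `slope` and `intercept` values here are hypothetical placeholders, whereas the paper derives them from the MVTS model and the averaging kernel size.

```python
import numpy as np

def id_octa_mask(intensity, decorrelation, noise_floor,
                 slope=1.0, intercept=0.05):
    """Classify flow voxels in the iSNR-decorrelation (ID) space.

    A voxel is kept as dynamic flow when its decorrelation exceeds
    the linear static/noise boundary: D > slope * iSNR + intercept.
    `slope` and `intercept` are illustrative placeholders, not the
    model-derived values from the paper.
    """
    isnr = noise_floor / np.maximum(intensity, 1e-12)  # inverse SNR
    return decorrelation > slope * isnr + intercept

# toy example: bright static voxel, flow voxel, and a noise voxel
intensity = np.array([10.0, 10.0, 0.1])
decor = np.array([0.02, 0.60, 0.30])
mask = id_octa_mask(intensity, decor, noise_floor=0.1)
```

Note how the noise voxel (high decorrelation, but iSNR near 1) is rejected even though its raw decorrelation is sizable; this is exactly the SNR-adaptive behavior the fixed-threshold approach lacks.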

Related research

Multi-variate time series (MTS) data is a ubiquitous class of data abstraction in the real world. Any instance of MTS is generated from a hybrid dynamical system whose specific dynamics are usually unknown. The hybrid nature of such a system is a result of complex external attributes, such as geographic location and time of day, each of which can be categorized as either a spatial or a temporal attribute. Therefore, there are two fundamental views from which MTS data can be analyzed, namely the spatial view and the temporal view. Moreover, from each of these two views, we can partition the set of MTS data samples into disjoint forecasting tasks according to their associated attribute values. Samples of the same task then manifest similar forthcoming patterns, which are easier to predict than in the original single-view setting. Building on this insight, we propose a novel multi-view multi-task (MVMT) learning framework for MTS forecasting. In most scenarios, MVMT information is not explicitly presented but deeply concealed in the MTS data, which hinders the model from capturing it naturally. To this end, we develop two kinds of basic operations, namely task-wise affine transformation and task-wise normalization. Applying these two operations with prior knowledge of the spatial and temporal views allows the model to adaptively extract MVMT information while predicting. Extensive experiments on three datasets illustrate that canonical architectures can be greatly enhanced by the MVMT learning framework in terms of both effectiveness and efficiency. In addition, we design rich case studies to reveal the properties of the representations produced at different phases of the prediction procedure.
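The two basic operations named above can be sketched as follows: each task's samples are standardized with that task's own statistics (task-wise normalization) and then rescaled by per-task parameters (task-wise affine transformation). This is a minimal NumPy sketch under assumed shapes; in the paper these would be learnable parameters inside a forecasting network, not plain arrays.

```python
import numpy as np

def task_wise_transform(x, task_ids, scale, shift, eps=1e-5):
    """Task-wise normalization followed by a task-wise affine map.

    x           : (n_samples, n_features) MTS samples
    task_ids    : (n_samples,) integer task index per sample
    scale/shift : (n_tasks, n_features) per-task parameters
                  (hypothetical shapes; learnable in practice)
    """
    out = np.empty_like(x, dtype=float)
    for t in np.unique(task_ids):
        idx = task_ids == t
        mu = x[idx].mean(axis=0)          # this task's own statistics
        sd = x[idx].std(axis=0)
        out[idx] = (x[idx] - mu) / (sd + eps) * scale[t] + shift[t]
    return out

# two samples belong to task 0, one to task 1
x = np.array([[0.0], [2.0], [5.0]])
out = task_wise_transform(x, np.array([0, 0, 1]),
                          scale=np.ones((2, 1)), shift=np.zeros((2, 1)))
```

The key design point is that statistics are never shared across tasks, so each task's samples are mapped into a comparable range before the shared backbone processes them.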
Optical Coherence Tomography Angiography (OCT-A) is a non-invasive imaging technique that has been increasingly used to image the retinal vasculature at capillary-level resolution. However, automated segmentation of retinal vessels in OCT-A has been under-studied due to various challenges, such as low capillary visibility and high vessel complexity, despite its significance in understanding many eye-related diseases. In addition, there is no publicly available OCT-A dataset with manually graded vessels for training and validation. To address these issues, we construct, for the first time in the field of retinal image analysis, a dedicated Retinal OCT-A SEgmentation dataset (ROSE), which consists of 229 OCT-A images with vessel annotations at either centerline or pixel level. This dataset has been released for public access to assist researchers in the community in undertaking research in related topics. Secondly, we propose a novel Split-based Coarse-to-Fine vessel segmentation network (SCF-Net), with the ability to detect thick and thin vessels separately. In the SCF-Net, a split-based coarse segmentation (SCS) module is first introduced to produce a preliminary confidence map of vessels, and a split-based refinement (SRN) module is then used to optimize the shape/contour of the retinal microvasculature. Thirdly, we perform a thorough evaluation of state-of-the-art vessel segmentation models and our SCF-Net on the proposed ROSE dataset. The experimental results demonstrate that our SCF-Net yields better vessel segmentation performance in OCT-A than both traditional methods and other deep learning methods.
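The coarse-to-fine idea can be illustrated with a toy post-processing sketch: a high threshold on the confidence map keeps confident (thick) vessels, and a refinement pass then grows the result into weaker responses that connect to it. This is only a crude morphological stand-in for the learned SRN module, with illustrative thresholds.

```python
import numpy as np

def _dilate(mask):
    """4-connected binary dilation by one pixel (plain NumPy)."""
    out = mask.copy()
    out[1:, :] |= mask[:-1, :]
    out[:-1, :] |= mask[1:, :]
    out[:, 1:] |= mask[:, :-1]
    out[:, :-1] |= mask[:, 1:]
    return out

def coarse_to_fine(prob, coarse_t=0.5, fine_t=0.3, iters=5):
    """Toy coarse-to-fine vessel segmentation.

    A coarse mask keeps confident responses; refinement then admits
    weaker responses connected to it. Thresholds are illustrative,
    not SCF-Net's learned behaviour.
    """
    fine = prob >= fine_t
    seg = prob >= coarse_t
    for _ in range(iters):
        new = seg | (_dilate(seg) & fine)   # grow into weak responses
        if np.array_equal(new, seg):
            break
        seg = new
    return seg

# confident pixel at (0,0); weak trail reachable at (0,1),(1,1);
# isolated weak pixel at (2,2) should stay rejected
prob = np.array([[0.9, 0.40, 0.1],
                 [0.1, 0.35, 0.1],
                 [0.1, 0.10, 0.35]])
seg = coarse_to_fine(prob)
```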
Corneal thickness (pachymetry) maps can be used to monitor restoration of corneal endothelial function, for example after Descemet membrane endothelial keratoplasty (DMEK). Automated delineation of the corneal interfaces in anterior segment optical coherence tomography (AS-OCT) can be challenging for corneas that are irregularly shaped due to pathology or as a consequence of surgery, leading to incorrect thickness measurements. In this research, deep learning is used to automatically delineate the corneal interfaces and measure corneal thickness with high accuracy in post-DMEK AS-OCT B-scans. Three different deep learning strategies were developed based on 960 B-scans from 50 patients. On an independent test set of 320 B-scans, corneal thickness could be measured with an error of 13.98 to 15.50 micrometers over the central 9 mm range, which is less than 3% of the average corneal thickness. The accurate thickness measurements were used to construct detailed pachymetry maps. Moreover, follow-up scans could be registered based on anatomical landmarks to obtain differential pachymetry maps. These maps may enable a more comprehensive understanding of the restoration of endothelial function after DMEK, where thickness often varies throughout different regions of the cornea, and subsequently contribute to a standardized postoperative regimen.
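Once the anterior and posterior interfaces have been delineated, the thickness measurement itself is simple geometry: per A-scan, the axial distance between the two interfaces scaled by the pixel spacing. A minimal sketch, assuming per-A-scan interface row indices and a hypothetical axial calibration value:

```python
import numpy as np

def pachymetry_from_interfaces(anterior, posterior, axial_um_per_px):
    """Corneal thickness per A-scan from delineated interfaces.

    anterior/posterior : (n_ascans,) row index of each interface
    axial_um_per_px    : axial pixel spacing in micrometers
                         (hypothetical calibration value)
    """
    return (posterior - anterior) * axial_um_per_px

# toy B-scan with three A-scans; 3.9 um/px is an assumed spacing
anterior = np.array([100, 102, 105])
posterior = np.array([240, 244, 250])
thickness = pachymetry_from_interfaces(anterior, posterior, 3.9)
```

Collecting these per-A-scan values across all B-scans of a volume yields the 2-D pachymetry map; subtracting a registered follow-up map gives the differential map described above.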
Eye movements, blinking and other motion during the acquisition of optical coherence tomography (OCT) can lead to artifacts when the data are processed into OCT angiography (OCTA) images. Affected scans emerge as high-intensity (white) or missing (black) regions, resulting in lost information. The aim of this research is to fill these gaps using a deep generative model for OCT-to-OCTA image translation relying on a single intact OCT scan. Therefore, a U-Net is trained to extract the angiographic information from OCT patches. At inference, a detection algorithm finds outlier OCTA scans based on their surroundings, which are then replaced by the trained network's output. We show that generative models can fill in the missing scans. The augmented volumes could then be used for 3-D segmentation or to increase the diagnostic value.
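The detection step can be sketched with a simple robust-statistics heuristic: a white-line or black-line artifact shifts a whole B-scan's mean intensity far from that of its neighbours, so outlier B-scans can be flagged from per-scan statistics. This global median/MAD version is only an illustrative stand-in for the paper's neighbourhood-based detector, and the threshold is an assumed value.

```python
import numpy as np

def find_artifact_scans(octa_volume, z=3.0):
    """Flag motion-corrupted OCTA B-scans by their mean intensity.

    octa_volume : (n_bscans, height, width) array
    z           : robust z-score threshold (illustrative choice)
    Returns indices of B-scans whose mean intensity is an outlier,
    which would then be replaced by the U-Net's generated OCTA.
    """
    means = octa_volume.mean(axis=(1, 2))        # per-B-scan mean
    dev = np.abs(means - np.median(means))
    mad = np.median(dev) + 1e-12                 # robust spread
    return np.flatnonzero(dev / mad > z)

# toy volume: ten B-scans, one saturated "white line" artifact
vol = np.full((10, 4, 4), 0.5)
vol[7] = 1.0
bad = find_artifact_scans(vol)
```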
The pyramid wavefront sensor (P-WFS) has replaced the Shack-Hartmann wavefront sensor (SH-WFS) as the sensor of choice for high-performance adaptive optics (AO) systems in astronomy because of its flexibility in pupil sampling, its dynamic range, and its improved sensitivity in closed-loop application. Usually, a P-WFS requires modulation and high-precision optics that lead to high complexity and cost of the sensor. These factors limit the competitiveness of the P-WFS with respect to other WFS devices for AO correction in vision science. Here, we present a cost-effective realization of AO correction with a non-modulated P-WFS and apply this technique to in vivo human retinal imaging using optical coherence tomography (OCT). To the best of our knowledge, P-WFS-based high-quality AO imaging was successfully performed for the first time in 5 healthy subjects and benchmarked against the performance of conventional SH-WFS-based AO. The smallest retinal cells, such as central foveal cone photoreceptors, are visualized, and we observed better quality in the images recorded with the P-WFS. The robustness and versatility of the sensor are demonstrated in a model eye under various conditions and in vivo by high-resolution imaging of other structures in the retina using standard and extended fields of view.