
Deep D-bar: Real-time Electrical Impedance Tomography Imaging with Deep Neural Networks

Added by Sarah Hamilton
Publication date: 2017
Language: English





The mathematical problem of Electrical Impedance Tomography (EIT) is a highly nonlinear, ill-posed inverse problem that requires carefully designed reconstruction procedures to ensure reliable image generation. D-bar methods are based on a rigorous mathematical analysis and provide robust direct reconstructions by low-pass filtering the associated nonlinear Fourier data. As with low-pass filtering of linear Fourier data, using only the low frequencies in the image recovery process results in blurred images that lack sharp features such as clear organ boundaries. Convolutional Neural Networks (CNNs) provide a powerful framework for post-processing such convolved direct reconstructions. In this study, we demonstrate that these CNN techniques lead to sharp and reliable reconstructions even for the highly nonlinear inverse problem of EIT. The network is trained on data sets of simulated examples and then applied to experimental data without the need for additional transfer training. Results for absolute EIT images are presented using experimental EIT data from the ACT4 and KIT4 EIT systems.
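The low-pass analogy and the post-processing step can be sketched in a few lines of Python. The following is a minimal illustration only, not the authors' implementation (the paper uses a U-Net-type architecture and operates on the nonlinear scattering data, not on the image's Fourier transform): truncating high frequencies blurs sharp boundaries, and a small residual CNN is the kind of network one would train to restore them.

```python
# Illustrative sketch only -- not the paper's implementation.
import numpy as np
import torch
import torch.nn as nn

def low_pass(image, cutoff):
    """Keep only Fourier frequencies |k| <= cutoff, mimicking the
    truncation of the nonlinear Fourier data in the D-bar method."""
    n = image.shape[0]
    k = np.fft.fftshift(np.fft.fftfreq(n)) * n
    kx, ky = np.meshgrid(k, k)
    F = np.fft.fftshift(np.fft.fft2(image))
    F[np.sqrt(kx**2 + ky**2) > cutoff] = 0.0
    return np.real(np.fft.ifft2(np.fft.ifftshift(F)))

class PostProcessCNN(nn.Module):
    """Small residual CNN mapping a blurred reconstruction to a sharpened
    image. A deliberately minimal stand-in for the paper's U-Net."""
    def __init__(self, channels=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 1, 3, padding=1),
        )
    def forward(self, x):
        # Learn the residual: output = blurred input + correction.
        return x + self.net(x)

# Example: a square phantom, blurred by low-pass filtering, fed to the
# (here untrained) CNN; in practice the network is trained on simulated
# reconstruction/ground-truth pairs.
phantom = np.zeros((64, 64)); phantom[24:40, 24:40] = 1.0
blurred = low_pass(phantom, cutoff=8)
sharp = PostProcessCNN()(torch.from_numpy(blurred).float()[None, None])
```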



Related research

Objective: To develop, and demonstrate the feasibility of, a novel image reconstruction method for absolute Electrical Impedance Tomography (a-EIT) that pairs deep learning techniques with real-time robust D-bar methods. Approach: A D-bar method is paired with a trained Convolutional Neural Network (CNN) as a post-processing step. Training data are simulated for the network without knowledge of the boundary shape, by using an associated nonphysical Beltrami equation rather than simulating the traditional current and voltage data specific to a given domain. This makes the training data boundary-shape independent. The method is tested on experimental data from two EIT systems (ACT4 and KIT4). Main Results: Post-processing the D-bar images with a CNN produces significant improvements in image quality, as measured by Structural SIMilarity indices (SSIMs) as well as relative $\ell_2$ and $\ell_1$ image errors. Significance: This work demonstrates that more general networks can be trained without being specific about boundary shape, a key challenge in EIT image reconstruction. The work is promising for future studies involving databases of anatomical atlases.
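The reported image-quality metrics are standard and straightforward to reproduce. A minimal sketch, assuming scikit-image is available; `recon` and `truth` are hypothetical conductivity images on the same grid:

```python
# Sketch of the reported metrics: SSIM and relative l1 / l2 errors.
import numpy as np
from skimage.metrics import structural_similarity as ssim

def relative_error(recon, truth, p):
    """Relative l_p error: ||recon - truth||_p / ||truth||_p."""
    d = np.abs(recon - truth).ravel()
    t = np.abs(truth).ravel()
    return (np.sum(d**p) ** (1 / p)) / (np.sum(t**p) ** (1 / p))

def image_quality(recon, truth):
    return {
        "ssim": ssim(truth, recon, data_range=truth.max() - truth.min()),
        "rel_l1": relative_error(recon, truth, p=1),
        "rel_l2": relative_error(recon, truth, p=2),
    }
```

Because the relative errors are normalized by the ground truth, the values are comparable across phantoms of different contrast.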
A method for including a priori information in the 2-D D-bar algorithm is presented. Two methods of assigning conductivity values to the prior are presented, each corresponding to a different application scenario. The method is tested on several numerical examples, with and without noise, and is demonstrated to be highly effective in improving the spatial resolution of the D-bar method.
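One way to picture the role of the prior: the measured data reliably determine only the low frequencies, and the prior supplies plausible high-frequency content. The sketch below illustrates that intuition on the image's Fourier transform; it is only an analogy, since the actual method embeds the prior in the nonlinear scattering data of the D-bar algorithm.

```python
# Illustrative analogy only: measured low frequencies + prior high frequencies.
import numpy as np

def merge_with_prior(recon_lowpass, prior, cutoff):
    """Keep the (reliable) low frequencies of the reconstruction and fill in
    high frequencies from a prior image defined on the same grid."""
    n = recon_lowpass.shape[0]
    k = np.fft.fftshift(np.fft.fftfreq(n)) * n
    kx, ky = np.meshgrid(k, k)
    mask = np.sqrt(kx**2 + ky**2) <= cutoff
    F = np.where(mask,
                 np.fft.fftshift(np.fft.fft2(recon_lowpass)),
                 np.fft.fftshift(np.fft.fft2(prior)))
    return np.real(np.fft.ifft2(np.fft.ifftshift(F)))
```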
In Electrical Impedance Tomography (EIT), the internal conductivity of a body is recovered via current and voltage measurements taken at its surface. The reconstruction task is a highly ill-posed nonlinear inverse problem that is very sensitive to noise and requires the use of regularized solution methods, of which the D-bar method is the only proven regularization strategy. The resulting EIT images have low spatial resolution due to the smoothing caused by the low-pass-filtered regularization. In many applications, such as medical imaging, it is known a priori that the target contains sharp features such as organ boundaries, as well as approximate ranges of realistic conductivity values. In this paper, we use this information in a new edge-preserving EIT algorithm, based on the original D-bar method coupled with a deblurring flow stopped at minimal data discrepancy. The method makes heavy use of a novel data-fidelity term based on the so-called CGO sinogram. This nonlinear data step provides superior robustness over traditional EIT data formats, such as current-to-voltage matrices or Dirichlet-to-Neumann operators, for commonly used current patterns.
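The "flow stopped at minimal data discrepancy" pattern is easy to sketch in isolation. Below, a deliberately simple sharpening flow (unsharp masking) stands in for the paper's deblurring flow, and `discrepancy` is a hypothetical user-supplied callable that would compare simulated data against measurements (in the paper, via the CGO sinogram):

```python
# Minimal sketch of "flow + stopping rule"; the paper's flow and its
# CGO-sinogram data-fidelity term are more involved.
import numpy as np
from scipy.ndimage import gaussian_filter

def deblur_until_discrepancy(image, discrepancy, tol, step=0.2, max_iter=200):
    """Run a simple sharpening flow (unsharp masking, a crude reverse heat
    step) and return the iterate with the smallest data discrepancy."""
    u = image.copy()
    best, best_d = u.copy(), discrepancy(u)
    for _ in range(max_iter):
        u = u + step * (u - gaussian_filter(u, sigma=1.0))  # sharpen
        d = discrepancy(u)
        if d < best_d:                   # track the minimal-discrepancy iterate
            best, best_d = u.copy(), d
        if best_d <= tol:                # stop once the data are matched
            break
    return best
```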
Florian Stelzer (2021)
The method recently introduced in arXiv:2011.10115 realizes a deep neural network with just a single nonlinear element and delayed feedback, and is applicable to the description of physically implemented neural networks. In this work, we present an infinite-dimensional generalization, which allows for a more rigorous mathematical analysis and greater flexibility in choosing the weight functions. More precisely, the weights are described by Lebesgue-integrable functions instead of step functions. We also provide a functional back-propagation algorithm, which enables gradient-descent training of the weights. In addition, with a slight modification, our concept realizes recurrent neural networks.
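The core idea, one nonlinear element time-multiplexed into a whole layer, survives discretization. A minimal sketch (the construction in arXiv:2011.10115 is the continuous-time delay-differential analogue of this loop; names here are illustrative):

```python
# Illustrative sketch of time-multiplexing: a single scalar nonlinearity,
# applied sequentially, emulates a dense layer of N neurons.
import numpy as np

def single_neuron_layer(x, W, b, f=np.tanh):
    """Emulate one dense layer with one nonlinear element in a loop:
    node n is computed in 'time slot' n from delayed, weighted inputs."""
    out = np.empty(W.shape[0])
    for n in range(W.shape[0]):      # one node per time slot
        a_n = W[n] @ x + b[n]        # weighted combination of delayed signals
        out[n] = f(a_n)              # the single nonlinear element
    return out

rng = np.random.default_rng(0)
x = rng.normal(size=4)
h = single_neuron_layer(x, rng.normal(size=(8, 4)), np.zeros(8))
```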
We study the expressivity of deep neural networks. Measuring a network's complexity by its number of connections or its number of neurons, we consider the class of functions for which the error of best approximation with networks of a given complexity decays at a certain rate as the complexity budget increases. Using results from classical approximation theory, we show that this class can be endowed with a (quasi-)norm that makes it a linear function space, called an approximation space. We establish that allowing the networks to have certain types of skip connections does not change the resulting approximation spaces. We also discuss the role of the network's nonlinearity (also known as the activation function) on the resulting spaces, as well as the role of depth. For the popular ReLU nonlinearity and its powers, we relate the newly constructed spaces to classical Besov spaces. The established embeddings highlight that some functions of very low Besov smoothness can nevertheless be well approximated by neural networks, provided these networks are sufficiently deep.
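The notion of an approximation rate can be made concrete with a classical example: a one-hidden-layer ReLU network can realize any piecewise-linear interpolant, and for a smooth target the best-approximation error decays at a fixed rate in the number of neurons. A minimal sketch, for illustration only:

```python
# Sketch: a one-hidden-layer ReLU network realizing piecewise-linear
# interpolation of f(x) = x^2 on [0,1]; the sup-norm error decays like
# N^(-2) in the number of neurons N (the "complexity budget").
import numpy as np

def relu_interpolant(f, N):
    """Return a width-N ReLU network matching f at N+1 uniform knots."""
    knots = np.linspace(0.0, 1.0, N + 1)
    vals = f(knots)
    slopes = np.diff(vals) / np.diff(knots)                  # per-interval slopes
    coeff = np.concatenate([[slopes[0]], np.diff(slopes)])   # ReLU coefficients
    def net(x):
        # f(0) + sum_k coeff[k] * ReLU(x - knots[k])
        return vals[0] + np.maximum(x[:, None] - knots[:-1], 0.0) @ coeff
    return net

f = lambda x: x**2
xs = np.linspace(0, 1, 10_001)
for N in (4, 8, 16, 32):
    err = np.max(np.abs(relu_interpolant(f, N)(xs) - f(xs)))
    print(N, err)   # error shrinks roughly 4x per doubling of N
```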
