We present PRETUS, a Plugin-based Real Time UltraSound software platform for live ultrasound image analysis and operator support. The software is lightweight; functionality is brought in via independent plug-ins that can be arranged in sequence. The software captures the real-time stream of ultrasound images from virtually any ultrasound machine, applies computational methods, and visualises the results on the fly. Plug-ins can run concurrently without blocking each other, and can be implemented in C++ or Python. A graphical user interface can be implemented for each plug-in and presented to the user in a compact way. The software is free and open source, and allows for rapid prototyping and testing of real-time ultrasound imaging methods in a manufacturer-agnostic fashion. The software is provided with input, output and processing plug-ins, as well as tutorials that illustrate how to develop new plug-ins for PRETUS.
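The sequential plug-in idea can be sketched roughly as follows. This is an illustrative toy only, assuming a hypothetical `Plugin` interface and `run_pipeline` helper; it does not reflect the actual PRETUS API, whose plug-ins also run concurrently rather than strictly in sequence.

```python
# Minimal sketch of a plug-in pipeline for live ultrasound frames.
# Class and function names are hypothetical, NOT the PRETUS API.
import numpy as np


class Plugin:
    """Hypothetical plug-in interface: consume a frame, return a (possibly modified) frame."""

    def process(self, frame: np.ndarray) -> np.ndarray:
        raise NotImplementedError


class DespecklePlugin(Plugin):
    """Illustrative processing step: simple mean filtering of the live frame."""

    def __init__(self, kernel: int = 3):
        self.kernel = kernel

    def process(self, frame: np.ndarray) -> np.ndarray:
        pad = self.kernel // 2
        padded = np.pad(frame.astype(float), pad, mode="edge")
        out = np.zeros(frame.shape, dtype=float)
        for dy in range(self.kernel):
            for dx in range(self.kernel):
                out += padded[dy:dy + frame.shape[0], dx:dx + frame.shape[1]]
        return out / self.kernel ** 2


def run_pipeline(plugins, frame):
    """Plug-ins arranged in sequence: each consumes the previous one's output."""
    for p in plugins:
        frame = p.process(frame)
    return frame


if __name__ == "__main__":
    fake_frame = np.random.rand(256, 256)          # stand-in for a grabbed US frame
    result = run_pipeline([DespecklePlugin(5)], fake_frame)
    print(result.shape)
```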
We present an adaptation of the pixel-reassignment technique from confocal fluorescence microscopy to coherent ultrasound imaging. The method, Ultrasound Pixel-Reassignment (UPR), provides a resolution and signal-to-noise ratio (SNR) improvement in ultrasound imaging by computationally reassigning off-focus signals acquired using traditional plane-wave compounding ultrasonography. We theoretically analyze the analogy between the optical and ultrasound implementations of pixel reassignment, and experimentally evaluate the imaging quality on tissue-mimicking acoustic phantoms. We demonstrate that UPR provides a $25\%$ resolution improvement and a $3\,$dB SNR improvement in in-vitro scans, without any change in hardware or acquisition scheme.
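To illustrate the underlying idea, the following toy script sketches the optical pixel-reassignment rule that UPR adapts: a signal acquired with an effective lateral offset is reassigned by half that offset before summation, rather than summed as-is. The exact UPR mapping for plane-wave data is derived in the paper and is not reproduced here; the arrays, offsets, and half-offset shifts below are purely illustrative assumptions.

```python
# Toy pixel-reassignment demo (optical-style half-offset rule, not the UPR derivation).
import numpy as np


def reassign_and_sum(images, offsets_px):
    """Shift each image by minus half its lateral offset (in pixels), then sum."""
    acc = np.zeros_like(images[0], dtype=float)
    for img, off in zip(images, offsets_px):
        shift = int(round(off / 2.0))
        acc += np.roll(img, -shift, axis=1)   # lateral axis assumed to be axis 1
    return acc


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    base = rng.random((128, 128))
    # Toy stack mimicking the confocal situation: the apparent feature position
    # shifts by half the detector offset, which the reassignment step undoes.
    offsets = [-4, -2, 0, 2, 4]
    stack = [np.roll(base, o // 2, axis=1) for o in offsets]
    reassigned = reassign_and_sum(stack, offsets)   # copies realigned before summing
    naive = sum(stack)                              # plain compounding of misaligned copies
    print(reassigned.shape, naive.shape)
```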
In this paper, we present a new method to generate an instantaneous volumetric image from a single x-ray projection. To fully extract the motion information hidden in projection images, we partitioned each projection image into small patches. We utilized a sparse learning method to automatically select patches that have a high correlation with the principal component analysis (PCA) coefficients of a lung motion model. A model that maps the patch intensities to the PCA coefficients is built along with the patch selection process. Based on this model, a measured projection can be used to predict the PCA coefficients, which are then used to generate a motion vector field and hence a volumetric image. We have also proposed an intensity baseline correction method based on the partitioned projection, in which the first and second moments of the pixel intensities in a patch of a simulated image are matched with those in the measured image via a linear transformation. The proposed method has been validated on simulated data and real phantom data. The algorithm is able to identify patches that contain relevant motion information, e.g. the diaphragm region. It is found that the intensity correction step is important for removing systematic error in the motion prediction. For the simulation case, the sparse learning model reduced the prediction error for the first PCA coefficient to 5%, compared with a 10% error when sparse learning is not used. The 95th-percentile error of the predicted motion vectors is reduced from 2.40 mm to 0.92 mm. In the phantom case, the predicted tumor motion trajectory is successfully reconstructed with a mean vector field error of 0.82 mm, compared with 1.66 mm without the sparse learning method. The robustness of the algorithm with respect to sparsity level, patch size, and the presence of the diaphragm, as well as its computation time, has also been studied.
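The patch-wise intensity baseline correction admits a compact sketch: the simulated patch is passed through a linear transform $aI + b$ chosen so that its first and second moments match those of the measured patch. The patch size, variable names, and synthetic values below are illustrative assumptions, not taken from the paper.

```python
# Sketch of moment matching between a simulated patch and a measured patch.
import numpy as np


def match_patch_moments(simulated_patch, measured_patch, eps=1e-8):
    """Linearly rescale the simulated patch to the measured patch's mean and std."""
    s_mean, s_std = simulated_patch.mean(), simulated_patch.std()
    m_mean, m_std = measured_patch.mean(), measured_patch.std()
    a = m_std / (s_std + eps)          # scale matches the second moment
    b = m_mean - a * s_mean            # offset matches the first moment
    return a * simulated_patch + b


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    sim = rng.normal(100.0, 5.0, size=(16, 16))    # toy simulated projection patch
    meas = rng.normal(140.0, 9.0, size=(16, 16))   # toy measured projection patch
    corrected = match_patch_moments(sim, meas)
    print(round(corrected.mean(), 2), round(meas.mean(), 2))   # means agree
    print(round(corrected.std(), 2), round(meas.std(), 2))     # stds agree
```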
Massive amounts of multimedia data (i.e., text, audio, video, graphics and animation) are generated every day. Conventionally, multimedia data are managed by platforms maintained by multimedia service providers, which are generally designed with a centralised architecture. However, such a centralised architecture may lead to a single point of failure and to disputes over royalties or other rights, and it is hard to ensure data integrity and to track fulfilment of the obligations listed in a copyright agreement. To tackle these issues, in this paper we present a blockchain-based platform architecture for multimedia data management. We adopt self-sovereign identity for identity management and design a multi-level capability-based mechanism for access control. We implement a proof-of-concept prototype using the proposed approach and evaluate it on a use case. The results show that the proposed approach is feasible and offers scalable performance.
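To give a flavour of multi-level capability-based access control, the toy below models a delegation chain in which a root capability over a data asset can issue narrower capabilities (fewer actions, shorter expiry), and an action is permitted only if every level in the chain allows it. The fields, DIDs, and methods are hypothetical illustrations, not the paper's on-chain schema.

```python
# Toy multi-level capability delegation chain (illustrative only).
import time
from dataclasses import dataclass


@dataclass
class Capability:
    holder: str                        # DID of the holder (self-sovereign identity)
    actions: frozenset                 # e.g. {"read"} or {"read", "distribute"}
    expires_at: float
    parent: "Capability | None" = None

    def delegate(self, holder, actions, ttl):
        """Issue a narrower capability: actions must be a subset of this one's."""
        assert actions <= self.actions, "cannot escalate privileges"
        return Capability(holder, frozenset(actions),
                          min(self.expires_at, time.time() + ttl), parent=self)

    def permits(self, action):
        """Walk up the delegation chain; every level must allow the action."""
        cap = self
        while cap is not None:
            if action not in cap.actions or time.time() > cap.expires_at:
                return False
            cap = cap.parent
        return True


if __name__ == "__main__":
    root = Capability("did:example:owner", frozenset({"read", "distribute"}),
                      time.time() + 3600)
    viewer = root.delegate("did:example:viewer", {"read"}, ttl=600)
    print(viewer.permits("read"), viewer.permits("distribute"))   # True False
```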
Ultrasound (US) imaging is widely employed for the diagnosis and staging of peripheral vascular diseases (PVD), mainly due to its high availability and the fact that it does not emit radiation. However, high inter-operator variability and a lack of repeatability in US image acquisition hinder the implementation of extensive screening programs. To address this challenge, we propose an end-to-end workflow for automatic robotic US screening of tubular structures using only real-time US imaging feedback. We first train a U-Net for real-time segmentation of the vascular structure from cross-sectional US images. We then represent the detected vascular structure as a 3D point cloud and use it to estimate the longitudinal axis of the target tubular structure and its mean radius by solving a constrained non-linear optimization problem. By iterating these steps, the US probe is automatically aligned normal to the target tubular tissue and adjusted online to center the tracked tissue, based on the spatial calibration. The real-time segmentation is evaluated both on a phantom and in-vivo on the brachial arteries of volunteers. In addition, the whole process is validated both in simulation and on physical phantoms. The mean absolute radius error and orientation error ($\pm$ SD) in simulation are $1.16\pm0.1~mm$ and $2.7\pm3.3^{\circ}$, respectively. On a gel phantom, these errors are $1.95\pm2.02~mm$ and $3.3\pm2.4^{\circ}$. This shows that the method is able to automatically screen tubular tissues with an optimal probe orientation (i.e. normal to the vessel) and, at the same time, to accurately estimate the mean radius, both in real time.
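The geometric estimation step can be sketched as a plain least-squares cylinder fit to the segmented point cloud: recover an axis point, an axis direction, and a radius that minimise the point-to-axis distance residuals. This is a simplified stand-in for the paper's constrained non-linear optimization; the parameterization, solver, and synthetic vessel below are assumptions made for illustration.

```python
# Sketch: fit a cylinder axis and mean radius to a 3D point cloud of vessel points.
import numpy as np
from scipy.optimize import least_squares


def cylinder_residuals(params, pts):
    """params = [p0 (3 coords), theta, phi (axis direction angles), radius]."""
    p0 = params[:3]
    theta, phi, r = params[3], params[4], params[5]
    d = np.array([np.sin(theta) * np.cos(phi),
                  np.sin(theta) * np.sin(phi),
                  np.cos(theta)])                    # unit axis direction
    v = pts - p0
    dist = np.linalg.norm(v - np.outer(v @ d, d), axis=1)   # distance to the axis
    return dist - r


def fit_cylinder(pts):
    x0 = np.concatenate([pts.mean(axis=0), [np.pi / 2, 0.0, 1.0]])  # crude initial guess
    return least_squares(cylinder_residuals, x0, args=(pts,)).x


if __name__ == "__main__":
    rng = np.random.default_rng(2)
    # Synthetic vessel: noisy points on a 2 mm radius cylinder along the x axis.
    t = rng.uniform(0, 40, 500)
    ang = rng.uniform(0, 2 * np.pi, 500)
    pts = np.stack([t, 2.0 * np.cos(ang), 2.0 * np.sin(ang)], axis=1)
    pts += rng.normal(0, 0.05, pts.shape)
    est = fit_cylinder(pts)
    print("estimated mean radius (mm):", round(est[5], 3))
```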
Businesses, particularly small and medium-sized enterprises, aiming to start up in Model-Based Design (MBD) face difficult choices among a wide range of methods, notations and tools before making the significant investments in planning, procurement and training necessary to deploy new approaches successfully. In the development of Cyber-Physical Systems (CPSs) this is exacerbated by the diversity of formalisms covering computational, physical and human processes. In this paper, we propose a cloud-enabled and open collaboration platform that allows businesses to offer models, tools and other assets, and permits others to access them on a pay-per-use basis, as a means of lowering the barriers to adoption of MBD technology and of promoting experimentation in a sandbox environment.