The use of Deep Learning in the medical field is hindered by the lack of interpretability. Case-based interpretability strategies can provide intuitive explanations for deep learning models' decisions, thus enhancing trust. However, the resulting explanations threaten patient privacy, motivating the development of privacy-preserving methods compatible with the specifics of medical data. In this work, we analyze existing privacy-preserving methods and their respective capacity to anonymize medical data while preserving disease-related semantic features. Among the compared state-of-the-art methods, we find that the PPRL-VGAN deep learning method best preserved the disease-related semantic features while guaranteeing a high level of privacy. Nevertheless, we emphasize the need to improve privacy-preserving methods for medical imaging, as we identified relevant drawbacks in all existing privacy-preserving approaches.
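To make the case-based idea above concrete, the following is a minimal sketch of how an explanation could be produced by retrieving the training cases nearest to a query in a learned feature space; the feature arrays, labels, choice of Euclidean distance, and k are illustrative assumptions, not details of any specific method surveyed here.

```python
# Minimal sketch of case-based interpretability: explain a prediction by
# retrieving the most similar training cases in a learned feature space.
# The feature extractor output, array names, and k are illustrative
# assumptions, not part of any specific method discussed above.
import numpy as np

def explain_by_cases(query_feat, train_feats, train_labels, k=3):
    """Return the indices, labels, and distances of the k training cases
    closest to the query in feature space (Euclidean distance)."""
    dists = np.linalg.norm(train_feats - query_feat, axis=1)
    nearest = np.argsort(dists)[:k]
    return nearest, train_labels[nearest], dists[nearest]

# Example with random features standing in for embeddings of medical images.
rng = np.random.default_rng(0)
train_feats = rng.normal(size=(100, 64))     # 100 training cases, 64-d embeddings
train_labels = rng.integers(0, 2, size=100)  # binary disease labels
query_feat = rng.normal(size=64)             # embedding of the case to explain

idx, labels, dists = explain_by_cases(query_feat, train_feats, train_labels)
print("Most similar cases:", idx, "labels:", labels)
```

In a real system the embeddings would come from the trained model and the retrieved cases would be actual patient images, which is precisely why such explanations can leak patient identity and motivate the privacy-preserving methods discussed above.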
With the rising use of Machine Learning (ML) and Deep Learning (DL) across various industries, the medical industry is not far behind. A very simple yet extremely important use case of ML in this industry is image classification. This is important […]
Deep neural networks (DNNs) have demonstrated unprecedented success in medical imaging applications. However, due to limited dataset availability and the strict legal and ethical requirements for patient privacy protection, the broad application […]
Unsupervised image-to-image translation methods such as CycleGAN learn to convert images from one domain to another using unpaired training datasets from different domains. Unfortunately, these approaches still require centrally collected unpaired […]
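As a rough illustration of the unpaired translation setup mentioned above, the sketch below shows only the cycle-consistency term that lets CycleGAN-style models train without paired examples; the tiny networks, tensor shapes, and the omission of the adversarial discriminators are simplifications assumed here for brevity.

```python
# Minimal sketch of the cycle-consistency idea behind unpaired image-to-image
# translation (CycleGAN-style): two generators map between domains X and Y,
# and each image should survive a round trip. Network sizes and shapes are
# illustrative; a real CycleGAN also trains adversarial discriminators.
import torch
import torch.nn as nn

def tiny_generator():
    # Toy stand-in for a ResNet-based generator.
    return nn.Sequential(
        nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
        nn.Conv2d(8, 1, 3, padding=1), nn.Tanh(),
    )

G = tiny_generator()  # maps domain X -> Y
F = tiny_generator()  # maps domain Y -> X
l1 = nn.L1Loss()

x = torch.randn(4, 1, 64, 64)  # unpaired batch from domain X
y = torch.randn(4, 1, 64, 64)  # unpaired batch from domain Y

# Cycle-consistency: F(G(x)) should reconstruct x, and G(F(y)) should reconstruct y.
cycle_loss = l1(F(G(x)), x) + l1(G(F(y)), y)
cycle_loss.backward()  # gradients flow into both generators
print(float(cycle_loss))
```

Note that both unpaired batches x and y must be available in one place for this loss to be computed, which is exactly the centralized data collection requirement the abstract identifies as problematic for privacy.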
Preserving privacy is a growing concern in our society, where sensors and cameras are ubiquitous. In this work, for the first time, we propose a trainable image acquisition method that removes the sensitive identity-revealing information in the optical […]
The training of medical image analysis systems using machine learning approaches follows a common script: collect and annotate a large dataset, train the classifier on the training set, and test it on a hold-out test set. This process bears no direct […]
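For reference, here is a minimal sketch of that common script, with synthetic features and a simple classifier standing in for an annotated medical imaging dataset and a deep model; the data, feature dimensions, and logistic-regression choice are placeholders, not part of the work summarized above.

```python
# Minimal sketch of the "common script": split an annotated dataset,
# train a classifier on the training portion, evaluate on a held-out test set.
# Synthetic data and logistic regression are placeholders for a real
# annotated medical imaging dataset and a deep model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 32))   # image features (placeholder)
y = (X[:, 0] > 0).astype(int)    # annotations (placeholder labels)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

clf = LogisticRegression().fit(X_train, y_train)  # train on the training set
print("hold-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```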