Background: Hypertrophic cardiomyopathy (HCM) is one of the common causes of sudden cardiac death (SCD) in young people, and the primary prevention of SCD is an implantable cardioverter defibrillator (ICD). Because of the incidence of appropriate ICD therapy and the complications associated with ICD implantation and discharge, patients with implanted ICDs are closely monitored, and interrogation reports are generated from clinical consultations. Methods: In this study, we compared the performance of structured device data and unstructured interrogation reports for extracting information on ICD therapy and heart rhythm. We sampled 687 reports with a gold standard generated through manual chart review. A rule-based natural language processing (NLP) system was developed using 480 reports, and the information in the corresponding device data was aggregated for the task. We compared the performance of the NLP system with information aggregated from structured device data using the remaining 207 reports. Results: The rule-based NLP system achieved F-measures of 0.92 and 0.98 for ICD therapy and heart rhythm, while the performance of aggregating device data was significantly lower, with F-measures of 0.78 and 0.45, respectively. Limitations of using only structured device data include no differentiation of real events from management events, data availability, and disparate vendor perspectives and data granularity, while using interrogation reports requires overcoming non-representative keywords/patterns and contextual errors. Conclusions: Extracting phenotyping information from data generated in the real world requires the incorporation of medical knowledge. It is essential to analyze, compare, and harmonize multiple data sources for real-world evidence generation.
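As a rough illustration of the kind of rule-based extraction such a system performs, the sketch below matches therapy and rhythm keywords in report text with a simple negation check; the patterns, labels, and negation window are illustrative assumptions, not the rules used in the study.

```python
import re

# Hypothetical keyword patterns for ICD therapy and heart rhythm mentions;
# these are illustrative assumptions, not the study's actual rule set.
THERAPY_PATTERNS = {
    "shock": re.compile(r"\b(shock|defibrillation)\b", re.IGNORECASE),
    "ATP": re.compile(r"\b(anti[- ]?tachycardia pacing|ATP)\b", re.IGNORECASE),
}
RHYTHM_PATTERNS = {
    "VT": re.compile(r"\bventricular tachycardia\b|\bVT\b", re.IGNORECASE),
    "VF": re.compile(r"\bventricular fibrillation\b|\bVF\b", re.IGNORECASE),
    "AF": re.compile(r"\batrial fibrillation\b|\bAF\b", re.IGNORECASE),
}
# Naive negation cue: a trigger word within the 40 characters preceding the match.
NEGATION = re.compile(r"\b(no|without|denies|not)\b[^.]{0,40}$", re.IGNORECASE)

def extract(report: str) -> dict:
    """Return therapy and rhythm mentions, skipping simple negated contexts."""
    found = {"therapy": set(), "rhythm": set()}
    for sentence in re.split(r"[.\n]", report):
        for label, patterns in (("therapy", THERAPY_PATTERNS), ("rhythm", RHYTHM_PATTERNS)):
            for name, pat in patterns.items():
                m = pat.search(sentence)
                if m and not NEGATION.search(sentence[: m.start()]):
                    found[label].add(name)
    return found

print(extract("Appropriate shock delivered for ventricular fibrillation. No atrial fibrillation."))
```

A production system would also need section detection, richer negation and context handling, and mapping of vendor-specific terminology, which this sketch deliberately omits.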
Ventricular fibrillation is a disorganized electrical excitation of the heart that results in inadequate blood flow to the body. It usually ends in death within seconds. The most common way to treat the symptoms of fibrillation is to implant a medical device, known as an Implantable Cardioverter Defibrillator (ICD), in the patient's body. Model-based verification can supply rigorous proofs of safety and efficacy. In this paper, we build a hybrid system model of the closed loop formed by the human heart and the ICD, and show it to be a STORMED system, a class of o-minimal hybrid systems that admit finite bisimulations. In general, it may not be possible to compute the bisimulation. We show that approximate reachability can yield a finite simulation for STORMED systems, which improves on the existing verification procedure. In the process, we show that certain compositions respect the STORMED property. Thus it is possible to model check important formal properties of ICDs in a closed loop with the heart, such as delayed therapy, missed therapy, or inappropriately administered therapy. The results of this paper are theoretical and motivate the creation of concrete model checking procedures for STORMED systems.
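As a loose illustration of the general idea of approximating reachability on a finite abstraction (not the paper's STORMED construction or its bisimulation argument), the toy sketch below over-approximates the reachable states of a hypothetical one-mode dynamics by snapping them to a grid; the dynamics, modes, and threshold are made up for the example.

```python
# Toy sketch of grid-based approximate reachability for a one-dimensional
# hybrid model; an illustrative assumption about the general technique
# (over-approximating reachable sets on a finite abstraction), not the
# paper's STORMED construction.
import numpy as np

def step(x: float, mode: str) -> float:
    """Hypothetical continuous dynamics per discrete mode (one Euler step)."""
    rate = {"sensing": 1.0, "charging": 3.0}[mode]
    return x + 0.1 * rate

def reachable(x0: float, mode: str, steps: int, grid: float = 0.05) -> set:
    """Over-approximate the reachable set by snapping states to a finite grid."""
    frontier = {round(x0 / grid)}
    seen = set(frontier)
    for _ in range(steps):
        frontier = {round(step(cell * grid, mode) / grid) for cell in frontier}
        new = frontier - seen
        if not new:          # fixed point on the finite abstraction
            break
        seen |= new
    return {cell * grid for cell in seen}

# Example safety-style query: does the (made-up) charging mode ever reach a threshold?
reach = reachable(x0=0.0, mode="charging", steps=50)
print(any(x >= 10.0 for x in reach))
```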
Recent works have demonstrated the added value of dynamic amino acid positron emission tomography (PET) for glioma grading and genotyping, biopsy targeting, and recurrence diagnosis. However, most of these studies are exclusively based on hand-crafted qualitative or semi-quantitative dynamic features extracted from the mean time activity curve (TAC) within predefined volumes. Voxelwise dynamic PET data analysis could instead provide a better insight into intra-tumour heterogeneity of gliomas. In this work, we investigate the ability of the widely used principal component analysis (PCA) method to extract meaningful quantitative dynamic features from high-dimensional motion-corrected dynamic [S-methyl-11C]methionine PET data in a first cohort of 20 glioma patients. By means of realistic numerical simulations, we demonstrate the robustness of our methodology to noise. In a second cohort of 13 glioma patients, we compare the resulting parametric maps to those provided by standard one- and two-tissue compartment pharmacokinetic (PK) models. We show that our PCA model outperforms PK models in the identification of intra-tumour uptake dynamics heterogeneity while being much less computationally expensive. Such parametric maps could be valuable to assess tumour aggressiveness locally, with applications in treatment planning as well as in the evaluation of tumour progression and response to treatment. This work also provides further encouraging results on the added value of dynamic over static analysis of [S-methyl-11C]methionine PET data in gliomas, as previously demonstrated for O-(2-[18F]fluoroethyl)-L-tyrosine.
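A minimal sketch of what voxelwise PCA on dynamic PET data can look like is given below, assuming motion-corrected TACs arranged as an (n_voxels, n_time_frames) matrix; the array shapes, random data, and number of components are illustrative assumptions rather than the paper's exact pipeline.

```python
# Minimal sketch of voxelwise PCA on dynamic PET time-activity curves (TACs).
# The synthetic data and the choice of 3 components are assumptions for
# illustration only.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
tacs = rng.random((5000, 30))          # stand-in for tumour-mask voxel TACs (voxels x frames)

pca = PCA(n_components=3)
scores = pca.fit_transform(tacs)       # (n_voxels, 3) dynamic feature scores
print(pca.explained_variance_ratio_)

# Each column of `scores` can be written back into the tumour mask to obtain
# a parametric map describing one mode of uptake dynamics.
```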
One of the biggest challenges in the High-Luminosity LHC (HL-LHC) era will be the significantly increased data size to be recorded and analyzed from the collisions at the ATLAS and CMS experiments. ServiceX is a software R&D project in the Data Organization, Management and Access area of IRIS-HEP that investigates new computational models for the HL-LHC era. ServiceX is an experiment-agnostic service to enable on-demand data delivery specifically tailored for nearly-interactive vectorized analyses. It is capable of retrieving data from grid sites, transforming it on the fly, and delivering user-selected data in a variety of different formats. New features will be presented that make the service ready for public use. An ongoing effort to integrate ServiceX with a popular statistical analysis framework in ATLAS will be described, with an emphasis on a practical implementation of ServiceX into the physics analysis pipeline.
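Purely to illustrate the shape of an on-demand, filtered data delivery request, a client interaction might look like the sketch below; the endpoint URL, payload fields, and response format are hypothetical assumptions made for this example and do not describe the actual ServiceX API or client library.

```python
# Hypothetical illustration of an on-demand columnar delivery request; the
# endpoint, payload fields, and response handling are assumptions for this
# sketch and do NOT reflect the real ServiceX interface.
import requests

payload = {
    "dataset": "example.dataset.identifier",               # placeholder dataset name
    "selection": "jets.pt, jets.eta where jets.pt > 30",   # illustrative column/filter spec
    "result_format": "parquet",
}
response = requests.post("https://servicex.example.org/transform", json=payload, timeout=30)
response.raise_for_status()
print(response.json())   # e.g., a request id to poll and URLs of delivered files
```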
The $DD\alpha$-classifier, a nonparametric, fast, and very robust procedure, is described and applied to fifty classification problems regarding a broad spectrum of real-world data. The procedure first transforms the data from their original property space into a depth space, which is a low-dimensional unit cube, and then separates them by a projective invariant procedure, called the $\alpha$-procedure. To each data point the transformation assigns its depth values with respect to the given classes. Several alternative depth notions (spatial depth, Mahalanobis depth, projection depth, and Tukey depth, the latter two being approximated by univariate projections) are used in the procedure and compared regarding their average error rates. With the Tukey depth, which fits the distributions' shape best and is most robust, 'outsiders', that is, data points having zero depth in all classes, need an additional treatment for classification. Evidence is also given about the dimension of the extended feature space needed for linear separation. The $DD\alpha$-procedure is available as an R package.
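The sketch below illustrates the depth-space (DD-plot) idea using a Mahalanobis depth and a naive maximum-depth rule; it is a simplified stand-in for, not an implementation of, the $DD\alpha$-procedure and its $\alpha$-procedure separator, and the synthetic two-class data are an assumption for the example.

```python
# Minimal sketch of the depth-space idea behind DD-plot classification,
# using Mahalanobis depth and a naive maximum-depth rule (not the alpha-procedure).
import numpy as np

def mahalanobis_depth(x: np.ndarray, sample: np.ndarray) -> np.ndarray:
    """Depth of points x w.r.t. a sample: 1 / (1 + squared Mahalanobis distance)."""
    mu = sample.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(sample, rowvar=False))
    diff = x - mu
    d2 = np.einsum("ij,jk,ik->i", diff, cov_inv, diff)
    return 1.0 / (1.0 + d2)

rng = np.random.default_rng(1)
class0 = rng.normal(0.0, 1.0, size=(200, 2))   # synthetic training sample, class 0
class1 = rng.normal(2.0, 1.0, size=(200, 2))   # synthetic training sample, class 1

x = np.array([[0.2, 0.1], [1.9, 2.2]])
# In depth space each point becomes (depth w.r.t. class 0, depth w.r.t. class 1);
# here a simple separator assigns the class with the larger depth.
depths = np.column_stack([mahalanobis_depth(x, class0), mahalanobis_depth(x, class1)])
print(depths.argmax(axis=1))   # expected: [0, 1]
```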
There has been a widely held view that visual representations (e.g., photographs and illustrations) do not depict negation, for example, the negation that can be expressed by the sentence "the train is not coming". This view is empirically challenged by analyzing real-world visual representations, namely comic (manga) illustrations. In an experiment using image captioning tasks, we gave people comic illustrations and asked them to explain what they could read from them. The collected data showed that some comic illustrations can depict negation without any aid of sequences (multiple panels) or conventional devices (special symbols). This type of comic illustration was subjected to further experiments in which images were classified into those containing negation and those not containing negation. While this image classification was easy for humans, it was difficult for data-driven machines, i.e., deep learning models (CNNs), to achieve the same high performance. Given the findings, we argue that some comic illustrations evoke background knowledge and thus can depict negation with purely visual elements.
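For context, a baseline CNN for the binary negation / no-negation image classification task could look like the sketch below; the architecture, input size, and two-class output head are illustrative assumptions, not the models evaluated in the study.

```python
# Minimal sketch of a CNN baseline for binary negation / no-negation image
# classification; all architectural choices here are illustrative assumptions.
import torch
import torch.nn as nn

class NegationCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 32 * 32, 64), nn.ReLU(),
            nn.Linear(64, 2),          # "contains negation" vs "no negation"
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = NegationCNN()
dummy = torch.randn(4, 1, 128, 128)    # batch of grayscale panels (assumed 128x128)
print(model(dummy).shape)              # torch.Size([4, 2])
```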