
A Fast Algorithm for Heart Disease Prediction using Bayesian Network Model

Added by Mistura Muibideen
Publication date: 2020
Language: English





Cardiovascular disease is the leading cause of death worldwide. Data mining can help retrieve valuable knowledge from available health-sector data; it can train a model to predict a patient's health status faster than clinical experimentation. Various machine learning algorithms such as Logistic Regression, K-Nearest Neighbor, Naive Bayes (NB), and Support Vector Machine have been applied to the Cleveland heart dataset, but modeling with Bayesian Networks (BN) has been limited. This research applied BN modeling to discover the relationships among 14 relevant attributes of the Cleveland heart data collected from the UCI repository. The aim is to examine how dependencies between attributes affect the performance of the classifier. The BN produces a reliable and transparent graphical representation of the attributes with the ability to predict new scenarios. The model achieved an accuracy of 85%, outperforming the NB classifier, which achieved an accuracy of 80%.
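As a point of comparison for the BN model, the Naive Bayes baseline that the abstract reports against can be sketched from scratch. This is a minimal, hypothetical illustration on toy discretized attributes, not the actual Cleveland data or the paper's implementation:

```python
from collections import defaultdict
import math

class DiscreteNB:
    """Minimal categorical Naive Bayes with Laplace smoothing.

    NB assumes attributes are conditionally independent given the class;
    the BN model in the abstract instead learns dependencies between them.
    """
    def fit(self, X, y):
        self.classes = sorted(set(y))
        self.priors = {c: y.count(c) / len(y) for c in self.classes}
        self.counts = defaultdict(lambda: defaultdict(int))  # (class, feature) -> value -> count
        self.totals = defaultdict(int)                       # (class, feature) -> total count
        self.values = defaultdict(set)                       # feature -> observed values
        for xs, c in zip(X, y):
            for i, v in enumerate(xs):
                self.counts[(c, i)][v] += 1
                self.totals[(c, i)] += 1
                self.values[i].add(v)
        return self

    def predict(self, xs):
        best, best_lp = None, -math.inf
        for c in self.classes:
            lp = math.log(self.priors[c])
            for i, v in enumerate(xs):
                k = len(self.values[i])  # Laplace smoothing over k observed values
                lp += math.log((self.counts[(c, i)][v] + 1) / (self.totals[(c, i)] + k))
            if lp > best_lp:
                best, best_lp = c, lp
        return best

# Toy records loosely mimicking discretized heart-disease attributes
# (hypothetical values, not the real Cleveland rows):
X = [("male", "high"), ("male", "high"), ("female", "low"), ("female", "low")]
y = ["disease", "disease", "healthy", "healthy"]
nb = DiscreteNB().fit(X, y)
print(nb.predict(("male", "high")))  # -> "disease"
```

On real data, the BN's advantage claimed in the abstract comes precisely from dropping the independence assumption this baseline makes.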




Related research

Cardiovascular disease, especially heart failure, is one of the major health hazards of our time and a leading cause of death worldwide. Advances in data mining techniques using machine learning (ML) models are paving the way for promising prediction approaches. Data mining is the process of converting the massive volumes of raw data created by healthcare institutions into meaningful information that can aid predictions and crucial decisions. The key aim of this study is to collect follow-up data from patients who have had heart failure, analyze those data, and utilize several ML models to predict the survival possibility of cardiovascular patients. Due to the class imbalance in the dataset, the Synthetic Minority Oversampling Technique (SMOTE) has been applied. Two unsupervised models (K-Means and Fuzzy C-Means clustering) and three supervised classifiers (Random Forest, XGBoost and Decision Tree) have been used in our study. After thorough investigation, our results demonstrate superior performance of the supervised ML algorithms over the unsupervised models. Moreover, we designed and propose a supervised stacked ensemble learning model that achieves an accuracy, precision, recall and F1 score of 99.98%. Our study shows that only certain attributes collected from the patients are imperative to successfully predict the possibility of survival after heart failure using supervised ML algorithms.
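The SMOTE step this study applies can be illustrated with a simplified from-scratch sketch: each synthetic sample is an interpolation between a minority-class point and one of its nearest minority-class neighbours. Real work would use the imbalanced-learn library's `SMOTE`; the minority class below is a hypothetical toy set:

```python
import numpy as np

def smote(X_min, n_new, k=3, rng=None):
    """Simplified SMOTE: synthesize n_new samples for the minority class
    by interpolating each picked sample toward one of its k nearest
    minority neighbours."""
    rng = np.random.default_rng(0) if rng is None else rng
    X_min = np.asarray(X_min, dtype=float)
    # pairwise distances within the minority class
    d = np.linalg.norm(X_min[:, None, :] - X_min[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)            # a point is not its own neighbour
    nn = np.argsort(d, axis=1)[:, :k]      # k nearest neighbours per sample
    out = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))                         # pick a minority sample
        j = nn[i, rng.integers(min(k, len(X_min) - 1))]      # pick one neighbour
        gap = rng.random()                                   # interpolation factor in [0, 1)
        out.append(X_min[i] + gap * (X_min[j] - X_min[i]))
    return np.array(out)

# Hypothetical minority-class points in 2D
minority = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
synth = smote(minority, n_new=6)
print(synth.shape)  # (6, 2)
```

Because each synthetic point lies on a segment between two real minority points, the oversampled class stays inside the region the minority data already occupies.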
Ischemic heart disease (IHD), particularly in its chronic stable form, is a subtle pathology due to its silent behavior before developing into unstable angina, myocardial infarction or sudden cardiac death. Machine learning techniques applied to parameters extracted from the heart rate variability (HRV) signal seem to be a valuable support in the early diagnosis of some cardiac diseases. However, so far, IHD patients have been identified using Artificial Neural Networks (ANNs) applied to a limited number of HRV parameters and only to very few subjects. In this study, we applied several linear and non-linear HRV parameters to ANNs in order to confirm these results on a large cohort of 965 subjects and to identify which features could discriminate IHD patients with high accuracy. Using principal component analysis and stepwise regression, we reduced the original 17 parameters to five, used as inputs for a series of ANNs. The highest accuracy of 82% was achieved using the meanRR, LFn, SD1, gender and age parameters and two hidden neurons.
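The PCA stage of such a feature-reduction pipeline can be sketched as an SVD-based projection onto the top components. The synthetic "subjects" and correlated "HRV parameters" below are illustrative stand-ins, not the study's data:

```python
import numpy as np

def pca_reduce(X, n_components):
    """Project samples onto their top principal components via SVD,
    a sketch of the dimensionality-reduction step used before an ANN."""
    X = np.asarray(X, dtype=float)
    Xc = X - X.mean(axis=0)            # centre each column
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T    # scores in the reduced space

rng = np.random.default_rng(1)
# 100 synthetic "subjects" x 17 correlated "HRV parameters":
# 17 observed columns generated from 5 latent factors plus small noise
base = rng.normal(size=(100, 5))
X = base @ rng.normal(size=(5, 17)) + 0.01 * rng.normal(size=(100, 17))
Z = pca_reduce(X, 5)
print(Z.shape)  # (100, 5)
```

Since the toy data has rank roughly 5, five components capture nearly all its variance, mirroring the 17-to-5 reduction the study reports.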
Cardiovascular diseases and the associated disorder of heart failure are among the major causes of death globally, making it a priority for doctors to detect and predict their onset and medical consequences. Artificial Intelligence (AI) allows doctors to discover clinical indicators and enhance their diagnoses and treatments. Specifically, explainable AI offers tools to improve clinical prediction models that suffer from poor interpretability of their results. This work presents an explainability analysis and evaluation of a prediction model for heart failure survival, using a dataset that comprises 299 patients who suffered heart failure. The model employs a data workflow pipeline able to select the best ensemble tree algorithm as well as the best feature selection technique. Moreover, different post-hoc techniques have been used for the explainability analysis of the model. The paper's main contribution is an explainability-driven approach to selecting the best prediction model for HF survival based on an accuracy-explainability balance. The most balanced explainable prediction model implements an Extra Trees classifier over 5 selected features (follow-up time, serum creatinine, ejection fraction, age and diabetes) out of 12, achieving a balanced accuracy of 85.1% with cross-validation and 79.5% on new unseen data. Follow-up time is the most influential feature, followed by serum creatinine and ejection fraction. The explainable prediction model for HF survival presented in this paper would support further adoption of clinical prediction models by providing doctors with intuitions to better understand the reasoning of usually black-box AI clinical solutions and to make more reasonable, data-driven decisions.
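One simple post-hoc technique in the spirit of this explainability analysis is permutation importance: shuffle one feature at a time and measure how much the model's score drops. A minimal model-agnostic sketch on toy data (the model and features here are hypothetical, not the paper's pipeline):

```python
import numpy as np

def permutation_importance(predict, X, y, metric, rng=None):
    """Model-agnostic permutation importance: the score drop caused by
    shuffling a feature estimates how much the model relies on it."""
    rng = np.random.default_rng(0) if rng is None else rng
    base = metric(y, predict(X))
    drops = []
    for j in range(X.shape[1]):
        Xp = X.copy()
        Xp[:, j] = rng.permutation(Xp[:, j])   # break feature j's link to y
        drops.append(base - metric(y, predict(Xp)))
    return np.array(drops)

# Toy model whose label depends only on feature 0
rng = np.random.default_rng(2)
X = rng.normal(size=(200, 3))
y = (X[:, 0] > 0).astype(int)
predict = lambda X: (X[:, 0] > 0).astype(int)
accuracy = lambda y, p: (y == p).mean()

imp = permutation_importance(predict, X, y, accuracy)
print(imp.argmax())  # -> 0: feature 0 dominates
```

Features the model ignores show a drop of exactly zero, which is what makes the ranking easy for clinicians to read.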
3D printing has been widely adopted for clinical decision making and interventional planning in congenital heart disease (CHD), where whole-heart and great-vessel segmentation is the most significant but time-consuming step in generating models for 3D printing. While various automatic whole-heart and great-vessel segmentation frameworks have been developed in the literature, they are ineffective when applied to medical images in CHD, which exhibit significant variations in heart structure and great-vessel connections. To address this challenge, we leverage the power of deep learning in processing regular structures and that of graph algorithms in dealing with large variations, and propose a framework that combines both for whole-heart and great-vessel segmentation in CHD. Specifically, we first use deep learning to segment the four chambers and myocardium, followed by the blood pool, where variations are usually small. We then extract the connection information and apply graph matching to determine the categories of all the vessels. Experimental results using 68 3D CT images covering 14 types of CHD show that our method can increase the Dice score by 11.9% on average compared with the state-of-the-art whole-heart and great-vessel segmentation method for normal anatomy. The segmentation results were also printed with 3D printers for validation.
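The Dice score by which segmentation quality is compared above is straightforward to compute for binary masks; a minimal sketch on tiny 2D masks (real use would apply it per class on 3D volumes):

```python
import numpy as np

def dice(pred, target, eps=1e-8):
    """Dice overlap between two binary masks:
    2 * |pred AND target| / (|pred| + |target|)."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    inter = np.logical_and(pred, target).sum()
    return 2.0 * inter / (pred.sum() + target.sum() + eps)

a = np.array([[1, 1, 0],
              [0, 1, 0]])
b = np.array([[1, 0, 0],
              [0, 1, 1]])
print(round(dice(a, b), 3))  # 2*2 / (3+3) -> 0.667
```

A perfect segmentation scores 1.0, so an 11.9% average increase on this metric is a substantial gain.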
Probabilistic approaches to tensor factorization aim to extract meaningful structure from incomplete data by postulating low-rank constraints. Recently, variational Bayesian (VB) inference techniques have been applied successfully to large-scale models. This paper presents full Bayesian inference via VB on both single and coupled tensor factorization models. Our method can be run even for very large models and is easily implemented. It exhibits better prediction performance than existing approaches based on maximum likelihood on several real-world datasets for the missing-link prediction problem.
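The maximum-likelihood baseline this paper improves on can be sketched as alternating least squares over observed entries only: fit low-rank factors to the known entries, then read off the missing ones. Note this does not implement the paper's variational Bayesian inference, and the toy matrix below is illustrative:

```python
import numpy as np

def als_factorize(M, mask, rank=1, n_iter=50, lam=0.01, rng=None):
    """Low-rank factorization M ~ A @ B.T fitted by alternating ridge
    regressions over the observed entries (mask == 1) only."""
    rng = np.random.default_rng(0) if rng is None else rng
    n, m = M.shape
    A = rng.normal(scale=0.1, size=(n, rank))
    B = rng.normal(scale=0.1, size=(m, rank))
    I = lam * np.eye(rank)
    for _ in range(n_iter):
        for i in range(n):                       # update row factors
            idx = mask[i].astype(bool)
            if idx.any():
                A[i] = np.linalg.solve(B[idx].T @ B[idx] + I, B[idx].T @ M[i, idx])
        for j in range(m):                       # update column factors
            idx = mask[:, j].astype(bool)
            if idx.any():
                B[j] = np.linalg.solve(A[idx].T @ A[idx] + I, A[idx].T @ M[idx, j])
    return A, B

# Rank-1 toy matrix with one entry held out ("missing link")
M = np.outer([1.0, 2.0, 3.0], [1.0, 2.0])    # true M[2, 1] = 6
mask = np.ones_like(M)
mask[2, 1] = 0                               # hide that entry from training
A, B = als_factorize(M, mask, rank=1)
print(round((A @ B.T)[2, 1], 1))             # prediction close to 6
```

The VB approach in the paper plays the same game but maintains distributions over the factors, which is what gives it better predictions on sparse real-world data.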
