
Discovering Maximal Generalized Decision Rules in Databases


Publication date: 2016
Language: Arabic





The volume of data generated nowadays is increasing at a phenomenal rate, and extracting useful knowledge from such data collections is an important and challenging problem. A promising technique is the rough set approach, a mathematical method for data analysis based on classifying objects into similarity classes that are indiscernible with respect to some features. This paper focuses on discovering maximal generalized decision rules in databases based on simple or multiple regression, generalization theory, and the decision matrix.
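The abstract's central notion, grouping objects into indiscernibility (similarity) classes and reading decision rules off the consistent classes, can be made concrete in a few lines of code. The sketch below is a minimal illustration on an invented decision table, not the paper's algorithm; all attribute names and values are hypothetical.

```python
from collections import defaultdict

# Toy decision table (invented for illustration): condition attributes
# "temp" and "headache", decision attribute "flu".
table = {
    1: {"temp": "high", "headache": "yes", "flu": "yes"},
    2: {"temp": "high", "headache": "yes", "flu": "yes"},
    3: {"temp": "low",  "headache": "no",  "flu": "no"},
    4: {"temp": "high", "headache": "no",  "flu": "no"},
    5: {"temp": "low",  "headache": "yes", "flu": "no"},
    6: {"temp": "high", "headache": "yes", "flu": "no"},  # conflicts with 1, 2
}
conditions = ("temp", "headache")

# Group objects that agree on every condition attribute: these are the
# indiscernibility (similarity) classes of rough set theory.
classes = defaultdict(set)
for oid, row in table.items():
    classes[tuple(row[a] for a in conditions)].add(oid)

# A class whose members all share one decision yields a certain rule;
# inconsistent classes (like {1, 2, 6}) yield none.
for values, ids in classes.items():
    decisions = {table[i]["flu"] for i in ids}
    if len(decisions) == 1:
        lhs = " AND ".join(f"{a}={v}" for a, v in zip(conditions, values))
        print(f"IF {lhs} THEN flu={decisions.pop()}")
```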

Related research

The purpose of this study is to help patients by employing database applications of existing telecommunication systems in medical services, particularly treatment, so that avoidable health disasters that strike a person without warning can be prevented. The study examines how modern technologies can be employed to monitor and process some vital signs of human beings, particularly people who suffer health problems associated with certain diseases, and to keep those problems under control in order to maintain the stability of the patient's health status. The vital signs considered are blood pressure, pulse, and blood glucose, since any change in the value of any of these signs, positive or negative, may cause the patient a sudden health problem.
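The monitoring idea reduces to checking each incoming reading against a normal range and raising an alert on any excursion. The sketch below shows that threshold check in its simplest form; the ranges are illustrative placeholders, not clinical guidance, and a real system would tune them per patient.

```python
# Hypothetical normal ranges; real clinical thresholds depend on the patient.
NORMAL_RANGES = {
    "systolic_bp": (90, 140),   # mmHg
    "pulse": (60, 100),         # beats per minute
    "glucose": (70, 140),       # mg/dL
}

def check_vitals(reading):
    """Return a list of alerts for any vital sign outside its normal range."""
    alerts = []
    for sign, value in reading.items():
        low, high = NORMAL_RANGES[sign]
        if not low <= value <= high:
            alerts.append(f"ALERT: {sign} = {value} outside [{low}, {high}]")
    return alerts

# Example: a reading that should trigger alerts for pressure and glucose.
print(check_vitals({"systolic_bp": 155, "pulse": 88, "glucose": 60}))
```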
We tackle the problem of self-training networks for NLU in a low-resource environment: few labeled data and plenty of unlabeled data. The effectiveness of self-training comes from increasing the amount of training data during training, yet it becomes less effective in low-resource settings because of the unreliable labels the teacher model predicts on unlabeled data. Rules of grammar, which describe the grammatical structure of data, have been used in NLU for better explainability. We propose to use rules of grammar in self-training as a more reliable pseudo-labeling mechanism, especially when there are few labeled data. We design an effective algorithm that constructs and expands rules of grammar without human involvement, and we integrate the constructed rules as a pseudo-labeling mechanism into self-training. There are two possible scenarios regarding the data distribution: it is either unknown or known prior to training. We empirically demonstrate that our approach substantially outperforms state-of-the-art methods on three benchmark datasets in both scenarios.
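The general shape of such a loop, a teacher model plus rules acting as a second, more reliable pseudo-labeler, can be sketched as follows. This is a generic reconstruction under assumed interfaces, not the authors' actual algorithm: `model` follows the scikit-learn fit/predict_proba convention, and `rules` is a list of hypothetical (matcher, label) pairs.

```python
def self_train(model, X, y, unlabeled, rules, rounds=5, threshold=0.9):
    """Self-training where rule matches provide extra pseudo-labels.

    Assumed interfaces: model.fit / model.predict_proba / model.classes_
    in the scikit-learn style; each rule is (matcher, label) with
    matcher(x) -> bool. All names here are illustrative.
    """
    X, y = list(X), list(y)
    pool = list(unlabeled)
    for _ in range(rounds):
        model.fit(X, y)
        still_unlabeled = []
        for x in pool:
            rule_label = next((lbl for match, lbl in rules if match(x)), None)
            if rule_label is not None:
                # A rule match gives a reliable pseudo-label.
                X.append(x)
                y.append(rule_label)
                continue
            probs = model.predict_proba([x])[0]
            if probs.max() >= threshold:
                # Fall back on confident teacher predictions.
                X.append(x)
                y.append(model.classes_[probs.argmax()])
            else:
                still_unlabeled.append(x)  # retry in the next round
        pool = still_unlabeled
    return model
```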
The term relational database system has become synonymous with database system, but the monopoly of the big companies working in this field, and the high cost of their systems, has become a burden for practitioners. Attention has therefore turned to an alternative technique: native XML database systems (NXDs), which are free or mostly open source. The growing reliance on XML files, particularly for transporting data between different applications, and the availability of collections of related files called for a system to manage and organize them, which is why NXDs appeared. The aim of this study is to compare the capabilities of RDBMS and NXDs against multiple criteria, apply both techniques in a practical application, run the relevant tests that reflect their use in the suggested application, present the results, and give suggestions for future work.
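To make the two models concrete, the snippet below queries the same toy records both relationally (SQLite standing in for a full RDBMS) and as an XML document navigated in the XPath style that NXDs build on. It is only an illustration of the contrast, not the study's actual test application.

```python
import sqlite3
import xml.etree.ElementTree as ET

# Relational side: a table of books queried with SQL.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE book (title TEXT, year INTEGER)")
db.executemany("INSERT INTO book VALUES (?, ?)",
               [("Databases", 2004), ("XML in Practice", 2012)])
print(db.execute("SELECT title FROM book WHERE year > 2010").fetchall())

# XML side: the same records as a document, traversed with the
# ElementTree XPath subset plus a Python filter on the attribute.
doc = ET.fromstring("""
<library>
  <book year="2004"><title>Databases</title></book>
  <book year="2012"><title>XML in Practice</title></book>
</library>""")
print([b.findtext("title") for b in doc.findall("book")
       if int(b.get("year")) > 2010])
```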
Relation extraction systems have made extensive use of features generated by linguistic analysis modules, and errors in these features lead to errors in relation detection and classification. In this work, we depart from these traditional approaches with their complicated feature engineering by introducing a convolutional neural network for relation extraction that automatically learns features from sentences and minimizes the dependence on external toolkits and resources. Our model takes advantage of multiple window sizes for its filters and of pre-trained word embeddings as an initializer in a non-static architecture to improve performance.
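The described architecture, convolutional filters with several window sizes over word embeddings, max-pooled and fed to a classifier, can be sketched in PyTorch as below. Dimensions, vocabulary size, and class count are placeholders, and the sketch omits extras such as the position features a full relation extraction model would typically add.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RelationCNN(nn.Module):
    """CNN sentence classifier with multiple filter window sizes."""

    def __init__(self, vocab_size=5000, emb_dim=100, n_classes=10,
                 windows=(3, 4, 5), n_filters=100, pretrained=None):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        if pretrained is not None:
            # Non-static setup: start from pre-trained vectors, keep trainable.
            self.emb.weight.data.copy_(pretrained)
        self.convs = nn.ModuleList(
            nn.Conv1d(emb_dim, n_filters, w) for w in windows)
        self.fc = nn.Linear(n_filters * len(windows), n_classes)

    def forward(self, token_ids):                # (batch, seq_len)
        x = self.emb(token_ids).transpose(1, 2)  # (batch, emb_dim, seq_len)
        # One max-pooled feature vector per window size, then concatenate.
        feats = [F.relu(conv(x)).max(dim=2).values for conv in self.convs]
        return self.fc(torch.cat(feats, dim=1))

# Smoke test on a random batch of two 40-token sentences.
logits = RelationCNN()(torch.randint(0, 5000, (2, 40)))
print(logits.shape)  # torch.Size([2, 10])
```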
In our research we offer a detailed study of one of the data mining functions on text data using object properties in databases, and we study the possibility of applying this function to Arabic texts. We use the procedural query language PL/SQL, which handles objects in Oracle databases. A data mining model has been built that classifies Arabic text documents: an SVM algorithm is used for indexing and preparing the texts, and a Naive Bayes algorithm classifies the data after transforming it into nested tables. We then evaluate the obtained results and draw conclusions.
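The abstract's pipeline, index the texts and then classify them, is built in the study on Oracle's in-database facilities via PL/SQL; the sketch below reproduces only the classification idea with scikit-learn for readability, on a tiny invented Arabic corpus. `LinearSVC` could be swapped in for the classifier to mirror the SVM step.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny invented Arabic corpus; labels are illustrative categories.
docs = [
    "مباراة كرة القدم اليوم",   # a football match today
    "أسعار النفط ترتفع",        # oil prices are rising
    "الفريق فاز بالبطولة",      # the team won the championship
    "البورصة تسجل خسائر",       # the stock market records losses
]
labels = ["sport", "economy", "sport", "economy"]

# TF-IDF indexing followed by Naive Bayes classification.
clf = make_pipeline(TfidfVectorizer(), MultinomialNB())
clf.fit(docs, labels)
print(clf.predict(["كرة القدم"]))  # expected: ['sport']
```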
