Predicting the execution time of queries is an important problem with applications in scheduling, service-level agreements and error detection. During query planning, a cost is associated with the chosen execution plan and is used to rank competing plans. It would be convenient to use that cost to predict execution time, but it has been claimed in the literature that this is not possible. In this paper, we thoroughly investigate this claim, considering both linear and non-linear models. We find that more complex models using only the optimizer cost achieve accuracy comparable to that reported in the literature. The most accurate method in the literature is nearest-neighbour regression, which does not produce a model. The published results used a large feature set to identify nearest neighbours; we show that the same level of accuracy can be achieved using only the cost to identify them. Using a smaller feature set reduces overhead in terms of both the storage space for the training data and the time needed to produce a prediction.
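As a minimal illustration of the cost-only nearest-neighbour approach, the sketch below fits a k-nearest-neighbour regressor on a single optimizer-cost feature; the training values and the choice of k are hypothetical and are not taken from the paper.

```python
# Minimal sketch: nearest-neighbour regression of execution time on the
# optimizer cost alone, assuming a training set of (cost, runtime) pairs
# collected from previously executed queries. All numbers are illustrative.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

# Hypothetical training data: plan cost units vs. observed runtime (seconds).
costs = np.array([[120.0], [450.0], [980.0], [1500.0], [5200.0]])
runtimes = np.array([0.8, 2.1, 4.7, 6.9, 23.5])

model = KNeighborsRegressor(n_neighbors=2)   # k chosen purely for illustration
model.fit(costs, runtimes)

# Predict the runtime of a new query from its estimated plan cost only.
print(model.predict([[1100.0]]))
```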
Emerging wireless technologies, such as 5G and beyond, are bringing new use cases to the forefront, one of the most prominent being machine-learning-empowered health care. Respiratory infections are among the notable modern medical concerns that impose an immense worldwide health burden. Since cough is an essential symptom of many respiratory infections, an automated system that screens for respiratory diseases based on raw cough data would have a multitude of beneficial research and medical applications. In the literature, machine learning has already been used successfully to detect cough events in controlled environments. In this paper, we present a low-complexity, automated recognition and diagnostic tool for screening respiratory infections that utilizes Convolutional Neural Networks (CNNs) to detect cough within environmental audio and to diagnose three potential illnesses (i.e., bronchitis, bronchiolitis and pertussis) based on their unique cough audio features. Both the proposed detection and diagnosis models achieve an accuracy of over 89% while remaining computationally efficient. Results show that the proposed system successfully detects and separates cough events from background noise. Moreover, the proposed single diagnosis model is capable of distinguishing between different illnesses without the need for separate models.
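The following is a minimal sketch of a CNN-based cough classifier of the kind described above, assuming fixed-size spectrogram inputs; the input shape, layer sizes and training setup are illustrative assumptions rather than the architecture reported in the paper.

```python
# Minimal sketch: a small CNN that classifies fixed-size log-mel spectrograms
# of cough audio into three illness classes (bronchitis, bronchiolitis,
# pertussis). Input shape and layer sizes are assumptions for illustration.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_cough_classifier(input_shape=(64, 64, 1), num_classes=3):
    return models.Sequential([
        layers.Conv2D(16, 3, activation="relu", input_shape=input_shape),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])

model = build_cough_classifier()
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```

A shallow network like this keeps inference cheap, which matches the stated goal of remaining computationally efficient on environmental audio streams.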
The advancement of artificial intelligence has cast new light on the development of optimization algorithms. This paper proposes to learn a two-phase (comprising a minimization phase and an escaping phase) global optimization algorithm for smooth non-convex functions. For the minimization phase, a model-driven deep learning method is developed to learn the update rule of the descent direction, which is formalized as a nonlinear combination of historical information, for convex functions. We prove that the resulting algorithm with the proposed adaptive direction guarantees convergence for convex functions. An empirical study shows that the learned algorithm significantly outperforms some well-known classical optimization algorithms, such as gradient descent, conjugate descent and BFGS, and performs well on ill-posed functions. The escaping phase, which moves away from a local optimum, is modeled as a Markov decision process with a fixed escaping policy. We further propose to learn an optimal escaping policy by reinforcement learning. The effectiveness of the escaping policies is verified by optimizing synthesized functions and by training a deep neural network for CIFAR image classification. The learned two-phase global optimization algorithm demonstrates promising global search capability on several benchmark functions and machine learning tasks.
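As a rough sketch of the minimization phase, the snippet below models the descent direction as a small neural network applied to a short history of gradients; the network structure, history length and step size are illustrative assumptions, not the paper's model-driven design.

```python
# Rough sketch: a learned minimization-phase step in which the descent
# direction is a nonlinear (here, MLP) combination of the last few gradients.
# History length, network width and learning rate are assumptions.
import torch
import torch.nn as nn

class LearnedDirection(nn.Module):
    def __init__(self, history=3, dim=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(history * dim, 32), nn.Tanh(), nn.Linear(32, dim))

    def forward(self, grad_history):
        # grad_history: tensor of shape (history, dim), most recent last.
        return self.net(grad_history.reshape(-1))

def minimization_step(x, grad_history, direction_net, lr=0.1):
    # One update: move the iterate along the learned adaptive direction.
    d = direction_net(grad_history)
    return x - lr * d
```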
The database systems course is offered as part of undergraduate computer science degree programs in many major universities. A key learning goal of learners taking such a course is to understand how SQL queries are processed in an RDBMS in practice. Since a query execution plan (QEP) describes the execution steps of a query, learners can acquire this understanding by perusing the QEPs generated by an RDBMS. Unfortunately, in practice, it is often daunting for a learner to comprehend these QEPs containing vendor-specific implementation details, hindering the learning process. In this paper, we present a novel, end-to-end, generic system called LANTERN that generates a natural language description of a QEP to facilitate understanding of the query execution steps. It takes as input an SQL query and its QEP, and generates a natural language description of the execution strategy deployed by the underlying RDBMS. Specifically, it deploys a declarative framework called POOL that enables subject matter experts to efficiently create and maintain natural language descriptions of the physical operators used in QEPs. A rule-based framework called RULE-LANTERN is proposed that exploits POOL to generate natural language descriptions of QEPs. Despite the high accuracy of RULE-LANTERN, our engagement with learners reveals that, consistent with existing psychology theories, perusing such rule-based descriptions leads to boredom due to repetitive statements across different QEPs. To address this issue, we present a novel deep learning-based language generation framework called NEURAL-LANTERN that infuses language variability into the generated descriptions by exploiting a set of paraphrasing tools and word embeddings. Our experimental study with real learners shows the effectiveness of LANTERN in facilitating comprehension of QEPs.
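A toy sketch of the rule-based idea behind RULE-LANTERN is given below: a pool of operator-to-description templates is applied to the nodes of a QEP. Operator names follow PostgreSQL conventions, and the template wording and plan representation are assumptions made for illustration, not the system's actual framework.

```python
# Toy sketch: a small pool of operator-to-description templates applied to a
# simplified QEP (a list of operator nodes). Wording and structure are
# illustrative assumptions only.
OPERATOR_POOL = {
    "Seq Scan":   "performs a sequential scan on table {relation}",
    "Index Scan": "looks up rows in table {relation} using index {index}",
    "Hash Join":  "joins the two intermediate results using a hash join",
    "Sort":       "sorts the intermediate result on {key}",
}

def describe_plan(plan_nodes):
    # plan_nodes: list of dicts such as {"op": "Seq Scan", "relation": "orders"}.
    steps = []
    for i, node in enumerate(plan_nodes, start=1):
        template = OPERATOR_POOL.get(node["op"], "applies the {op} operator")
        steps.append(f"Step {i}: the system {template.format(**node)}.")
    return " ".join(steps)

print(describe_plan([{"op": "Seq Scan", "relation": "orders"},
                     {"op": "Sort", "key": "o_totalprice"}]))
```

Because every plan with the same operators yields near-identical sentences, such templates explain the repetitiveness that motivates the neural paraphrasing in NEURAL-LANTERN.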
Incremental processing is widely adopted in many applications, ranging from incremental view maintenance and stream computing to the recently emerging progressive data warehousing and intermittent query processing. Despite the many algorithms developed on this topic, none of them can produce an incremental plan that always achieves the best performance, since the optimal plan is data dependent. In this paper, we develop a novel cost-based optimizer framework, called Tempura, for optimizing incremental data processing. We propose an incremental query planning model called TIP based on the concept of time-varying relations, which can formally model incremental processing in its most general form. We give a full specification of Tempura, which not only unifies various existing techniques for generating an optimal incremental plan, but also allows developers to add their own rewrite rules. We study how to explore the plan space and search for an optimal incremental plan. We conduct a thorough experimental evaluation of Tempura in various incremental processing scenarios to show its effectiveness and efficiency.
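To make the notion of a time-varying relation concrete, the sketch below represents a relation as a sequence of timestamped deltas and maintains a grouped count incrementally; the TIP model is far more general, and all names and structures here are illustrative assumptions.

```python
# Illustrative sketch: a time-varying relation as timestamped insert/delete
# deltas, with a per-group COUNT(*) maintained incrementally from new deltas
# instead of being recomputed from scratch. Names are assumptions only.
from collections import Counter

class TimeVaryingRelation:
    def __init__(self):
        self.deltas = []                      # list of (time, row, +1 or -1)

    def insert(self, t, row):
        self.deltas.append((t, row, +1))

    def delete(self, t, row):
        self.deltas.append((t, row, -1))

def apply_deltas(state, deltas, since, upto):
    # Fold only the deltas that arrived in (since, upto] into the existing
    # aggregate state.
    for t, row, sign in deltas:
        if since < t <= upto:
            state[row["group"]] += sign
    return state

tvr = TimeVaryingRelation()
tvr.insert(1, {"group": "a"})
tvr.insert(2, {"group": "b"})
counts = apply_deltas(Counter(), tvr.deltas, since=0, upto=2)
tvr.delete(3, {"group": "a"})
counts = apply_deltas(counts, tvr.deltas, since=2, upto=3)
print(counts)                                 # Counter({'b': 1, 'a': 0})
```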
Finding a good query plan is key to the optimization of query runtime. This holds in particular for cost-based federation engines, which make use of cardinality estimations to achieve this goal. A number of studies compare SPARQL federation engines across different performance metrics, including query runtime, result set completeness and correctness, number of sources selected and number of requests sent. Albeit informative, these metrics are generic and unable to quantify and evaluate the accuracy of the cardinality estimators of cost-based federation engines. To thoroughly evaluate cost-based federation engines, the effect of cardinality estimation errors on overall query runtime performance must be measured. In this paper, we address this challenge by presenting novel evaluation metrics targeted at a fine-grained benchmarking of cost-based federated SPARQL query engines. We evaluate five cost-based federated SPARQL query engines using existing as well as the novel evaluation metrics on LargeRDFBench queries. Our detailed analysis of the experimental results reveals novel insights that are useful for the development of future cost-based federated SPARQL query processing engines.
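As one concrete way to quantify cardinality estimation accuracy, the sketch below computes the widely used q-error (the larger of estimate/actual and actual/estimate) over per-pattern estimates; this is a generic illustration and not necessarily one of the novel metrics proposed in the paper.

```python
# Generic illustration: q-error of cardinality estimates against the actual
# result sizes observed during query execution. All numbers are hypothetical.
def q_error(estimated, actual, eps=1.0):
    est = max(estimated, eps)
    act = max(actual, eps)
    return max(est / act, act / est)

# Hypothetical per-triple-pattern estimates vs. actual cardinalities.
estimates = [1200, 45, 9000]
actuals   = [1000, 60, 300]
errors = [q_error(e, a) for e, a in zip(estimates, actuals)]
print(errors, max(errors))   # worst-case estimation error for the query
```

Relating such per-query estimation errors to the observed runtimes is one way to measure how much a poor cardinality estimator degrades plan selection in a federation engine.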