To cope with the intractability of answering Conjunctive Queries (CQs) and solving Constraint Satisfaction Problems (CSPs), several notions of hypergraph decompositions have been proposed -- giving rise to different notions of width, notably plain, generalized, and fractional hypertree width (hw, ghw, and fhw). Given the increasing interest in using such decomposition methods in practice, a publicly accessible repository of decomposition software, a large set of benchmarks, and a web-accessible workbench for inserting, analyzing, and retrieving hypergraphs are called for. We address this need by providing (i) concrete implementations of hypergraph decompositions (including new practical algorithms), (ii) a new, comprehensive benchmark of hypergraphs stemming from disparate CQ and CSP collections, and (iii) HyperBench, our new web interface for accessing the benchmark and the results of our analyses. In addition, we describe a number of actual experiments we carried out with this new infrastructure.
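To make the width notions above concrete, here is a minimal sketch (Python, not from the HyperBench code base) of the standard encoding of a CQ as a hypergraph, where variables are vertices and each atom contributes a hyperedge, together with a brute-force edge-cover check for a single decomposition bag. The query and helper names are illustrative.

```python
from itertools import combinations

# Hypergraph of the CQ  q(X,Y,Z,W) :- r(X,Y), s(Y,Z), t(Z,W), u(W,X):
# variables are vertices, each atom contributes a hyperedge.
edges = {
    "r": {"X", "Y"},
    "s": {"Y", "Z"},
    "t": {"Z", "W"},
    "u": {"W", "X"},
}

def cover_width(bag, edges):
    """Smallest number of hyperedges whose union covers a bag of
    variables (brute force; fine for tiny examples)."""
    names = list(edges)
    for k in range(1, len(names) + 1):
        for combo in combinations(names, k):
            if bag <= set().union(*(edges[e] for e in combo)):
                return k
    return None

# The single bag {X, Y, Z, W} is covered by the two edges r and t, so
# this cyclic 4-cycle query admits a one-node decomposition of width 2.
print(cover_width({"X", "Y", "Z", "W"}, edges))  # -> 2
```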
Cardinality estimation (CardEst) plays a significant role in generating high-quality query plans for a query optimizer in a DBMS. In the last decade, an increasing number of advanced CardEst methods (especially ML-based ones) have been proposed with outstanding estimation accuracy and inference latency. However, no study has systematically evaluated the quality of these methods or answered the fundamental question: to what extent can these methods improve the performance of the query optimizer in real-world settings, which is the ultimate goal of a CardEst method. In this paper, we comprehensively and systematically compare the effectiveness of CardEst methods in a real DBMS. We establish a new benchmark for CardEst, which contains a new complex real-world dataset STATS and a diverse query workload STATS-CEB. We integrate the most representative CardEst methods into the open-source database system PostgreSQL and comprehensively evaluate their true effectiveness in improving query plan quality, as well as other important aspects affecting their applicability, ranging from inference latency, model size, and training time to update efficiency and accuracy. We obtain a number of key findings for the CardEst methods under different data and query settings. Furthermore, we find that the widely used estimation accuracy metric (Q-Error) cannot distinguish the importance of different sub-plan queries during query optimization and thus cannot truly reflect the quality of the query plans generated by CardEst methods. Therefore, we propose a new metric, P-Error, to evaluate the performance of CardEst methods; it overcomes the limitations of Q-Error and reflects the overall end-to-end performance of CardEst methods. We have made all of the benchmark data and evaluation code publicly available at https://github.com/Nathaniel-Han/End-to-End-CardEst-Benchmark.
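For reference, the Q-Error metric discussed above has the standard definition max(est, true)/min(est, true), i.e., the multiplicative factor by which an estimate is off. The sketch below illustrates it and the limitation the abstract points out; the clamping convention and function name are illustrative, and P-Error itself is not reproduced here.

```python
def q_error(estimated: float, true: float) -> float:
    """Symmetric multiplicative error: 1.0 means a perfect estimate."""
    # Common convention: clamp to >= 1 to avoid division by zero.
    estimated, true = max(estimated, 1.0), max(true, 1.0)
    return max(estimated, true) / min(estimated, true)

# Two sub-plan estimates with identical Q-Error can matter very
# differently for the final plan choice -- the limitation that
# motivates the paper's P-Error metric.
print(q_error(100, 1_000))          # 10.0
print(q_error(100_000, 1_000_000))  # 10.0
```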
Data analysis requires translating higher-level questions and hypotheses into computable statistical models. We present a mixed-methods study aimed at identifying the steps, considerations, and challenges involved in operationalizing hypotheses into statistical models, a process we refer to as hypothesis formalization. In a formative content analysis of research papers, we find that researchers highlight decomposing a hypothesis into sub-hypotheses, selecting proxy variables, and formulating statistical models based on data collection design as key steps. In a lab study, we find that analysts fixated on implementation and shaped their analyses to fit familiar approaches, even if suboptimal. In an analysis of software tools, we find that tools provide inconsistent, low-level abstractions that may limit the statistical models analysts use to formalize hypotheses. Based on these observations, we characterize hypothesis formalization as a dual-search process that balances conceptual and statistical considerations, constrained by data and computation, and discuss implications for future tools.
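As a concrete illustration of what "operationalizing a hypothesis into a statistical model" can look like, the following sketch formalizes the toy hypothesis "mentoring improves exam scores, controlling for prior GPA" as a linear model. The data, variable names, and the use of statsmodels are illustrative assumptions, not from the paper.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Toy data: exam scores, a binary mentoring indicator (the proxy
# variable), and prior GPA (a covariate from the study design).
df = pd.DataFrame({
    "score":    [72, 88, 65, 91, 78, 84],
    "mentored": [0, 1, 0, 1, 0, 1],
    "gpa":      [2.9, 3.6, 2.5, 3.8, 3.1, 3.4],
})

# The statistical model formalizing the hypothesis: the sub-hypothesis
# of interest is a positive coefficient on `mentored`.
model = smf.ols("score ~ mentored + gpa", data=df).fit()
print(model.params)
```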
Chest X-rays are the most common diagnostic exams in emergency rooms and hospitals. Following the release of a large open-source chest X-ray dataset from NIH, there has been a surge of work on automatic interpretation of chest X-rays using deep learning approaches. However, the labels in that dataset are not sufficiently rich and descriptive for training classification tools. Further, it does not adequately cover the findings seen in chest X-rays taken in the anterior-posterior (AP) view, which also depict the placement of devices such as central vascular lines and tubes. In this paper, we present a new chest X-ray benchmark database with 73 rich sentence-level descriptors of findings seen in AP chest X-rays. We describe our method of obtaining these findings through a semi-automated ground-truth generation process based on crowdsourced clinician annotations. We also present results of building classifiers for these findings, showing that such higher-granularity labels can likewise be learned within the framework of deep learning classifiers.
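The abstract does not specify the classifier architecture; the following is a minimal sketch of the generic kind of multi-label setup it describes, with one sigmoid output per finding (73 here) trained with binary cross-entropy, since findings are not mutually exclusive. The ResNet backbone, input size, and batch size are assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_FINDINGS = 73  # sentence-level finding labels in the benchmark

# Image backbone with one logit per finding.
backbone = models.resnet18(weights=None)
backbone.fc = nn.Linear(backbone.fc.in_features, NUM_FINDINGS)
criterion = nn.BCEWithLogitsLoss()  # independent per-finding labels

x = torch.randn(4, 3, 224, 224)                     # batch of images
y = torch.randint(0, 2, (4, NUM_FINDINGS)).float()  # multi-hot labels
loss = criterion(backbone(x), y)
loss.backward()
```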
Electrocardiography plays an essential role in diagnosing and screening cardiovascular diseases in daily healthcare. Deep neural networks have shown the potential to improve the accuracy of arrhythmia detection based on electrocardiograms (ECGs). However, more ECG records with ground truth are needed to promote the development of deep learning techniques for automatic ECG analysis. Here we propose LabelECG, a web-based tool for viewing and annotating ECGs. Through unified data management, LabelECG can distribute large cohorts of ECGs to dozens of technicians and physicians, who can simultaneously make annotations through web browsers on PCs, tablets, and cell phones. Together with doctors from four hospitals in China, we applied LabelECG to support the annotation of about 15,000 12-lead resting ECG records in three months. These annotated ECGs successfully supported the First China ECG Intelligent Competition. LabelECG will be freely accessible on the Internet to support similar research, and will be upgraded in future work.
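For orientation, this is a minimal sketch of the data such a tool handles: a 12-lead resting ECG as a (leads x samples) array plus one annotation record of the kind a web client might submit. The sampling rate and all field names are illustrative assumptions, not LabelECG's actual schema.

```python
import numpy as np

LEADS = ["I", "II", "III", "aVR", "aVL", "aVF",
         "V1", "V2", "V3", "V4", "V5", "V6"]
FS = 500                               # sampling rate in Hz (assumed)
ecg = np.zeros((len(LEADS), 10 * FS))  # a 10-second 12-lead record

# One annotation as it might be stored after review.
annotation = {
    "record_id": "ecg_000001",
    "annotator": "physician_07",
    "labels": ["atrial fibrillation"],
    "reviewed": True,
}
```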